Hi all,
I'm running some tests with glusterfs and am having some difficulty
understanding the relationship between the afr and unify translators in
a server-side configuration. What I am trying to build is an HA cluster
with high performance. I've seen that this kind of configuration is well
supported in client-side mode, but my focus is on server-side mode.
I have the following architecture: a cluster of 4 nodes configured for
server-side AFR. I'm not using the unify translator. There is a
dedicated network for replication, and this is working pretty well. To
test this setup, a client machine accesses the nodes randomly via a
round-robin DNS address.
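For reference, each node's spec file looks roughly like this (just a
sketch: the hostnames node2-node4, the paths, and the volume names are
examples, not my exact config):

```
# node1's spec file (sketch)
volume brick
  type storage/posix
  option directory /data/export   # local backend store
end-volume

# one protocol/client volume per remote node
# (node2 shown; node3 and node4 are defined the same way)
volume node2
  type protocol/client
  option transport-type tcp/client
  option remote-host node2
  option remote-subvolume brick
end-volume

# server-side replication across the local brick and the remotes
volume afr
  type cluster/afr
  subvolumes brick node2 node3 node4
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  option auth.addr.afr.allow *
  subvolumes afr
end-volume
```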
This model is inspired by examples I've seen on the official wiki. In
those examples, the AFR volume on each node is "under" a unify volume.
When I start glusterfsd, the following message appears: "WARNING: You
have defined only one "subvolumes" for unify volume. It may not be the
desired config, review your volume spec file. If this is how you are
testing it, you may hit some performance penalty".
This message confirms what I was thinking: the unify volume seems to be
useless in this kind of configuration. Is there a (good) reason for
defining a unify volume?
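If I understand the wiki examples correctly, the layering they suggest
looks roughly like this (again just a sketch; the namespace volume and
the scheduler option are what unify requires, and the names are my own):

```
# replicated volume, as before
volume afr
  type cluster/afr
  subvolumes brick node2 node3 node4
end-volume

# namespace volume required by unify
volume ns
  type storage/posix
  option directory /data/namespace
end-volume

volume unify
  type cluster/unify
  option namespace ns
  option scheduler rr
  subvolumes afr        # a single subvolume -- hence the warning
end-volume
```

With only one subvolume there is nothing for unify to aggregate or
schedule across, which is exactly what the warning seems to say.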
I'm also wondering about something else. In AFR mode, does a write
operation have to be replicated on every node before it returns?
From the client's point of view:
1. I modify a file on the client (a blocking write syscall)
2. The modification is made on node1 (due to the round-robin DNS)
3. node1 replicates the modification to the other nodes
4. My write syscall returns with no error
An asynchronous approach would put step 3 after step 4 (at least that
is how I picture it; maybe it's a very bad idea).
Thanks in advance,
Antoine Nguyen.