Hi all,
I am doing another round of playing with gluster-3.2 to see whether it could
be usable for our scenarios. Our current test setup with drbd and gfs2 on two
nodes (for virtual machines running on the same nodes) has proven to be a) slow
and b) not extensible to three or four nodes.
During my gluster testing I am struggling with several things, and since these
are all separate topics, I will cover them in several mails/threads.
The first thing is: I want three nodes with bricks where the data is on (at
least) two nodes, so that one node can fail (or be shut down on purpose) without
problems. As I see it, gluster doesn't allow for a dynamic here-are-three-
bricks-give-me-two-replicas setup. The only solution seems to be to use two
bricks per node and create a distributed-replicate volume of (n1b1-n2b1,
n1b2-n3b1, n2b2-n3b2)? [That is node X, brick Y written as nXbY.]
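For reference, the create command for that layout would look roughly like this
(hostnames, volume name and brick paths are just placeholders; the order of the
bricks determines the replica pairs):

  gluster volume create testvol replica 2 \
      n1:/export/brick1 n2:/export/brick1 \
      n1:/export/brick2 n3:/export/brick1 \
      n2:/export/brick2 n3:/export/brick2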
I did test that and ran some dbench on it (I will discuss the performance in a
separate thread). The advantage of dbench is that it creates a number of files
to exercise distribution, replication and rebalancing.
So I first started with two bricks on two nodes, ran dbench for some tests and
then extended the volume. I added another two bricks from the same two nodes,
fixed the layout and migrated the data. All was well up to that point.
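In case it matters for reproducing this, the extension was roughly the
following sequence (names again just placeholders):

  gluster volume add-brick testvol n1:/export/brick2 n2:/export/brick2
  gluster volume rebalance testvol fix-layout start
  gluster volume rebalance testvol migrate-data start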
Then I replaced one of the bricks with a brick from the third node, which
worked okay too. Now I had (n1b1-n2b1, n2b2-n3b1). I then cleaned out the data
that was left on node1, brick2.
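The replacement was done with replace-brick, roughly like this (status was
polled until the migration had finished before committing):

  gluster volume replace-brick testvol n1:/export/brick2 n3:/export/brick1 start
  gluster volume replace-brick testvol n1:/export/brick2 n3:/export/brick1 status
  gluster volume replace-brick testvol n1:/export/brick2 n3:/export/brick1 commit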
I then proceeded to add the third pair of bricks as n1b2-n3b2. I used the
combined rebalance as noted in the docs. Gluster didn't complain, but afterwards
one directory was inaccessible and gave an I/O error. On the individual bricks
this directory was still there, but through the fuse mountpoint it wasn't
usable anymore.
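That last step was roughly (again with placeholder names, reusing the freed
brick path on node1):

  gluster volume add-brick testvol n1:/export/brick2 n3:/export/brick2
  gluster volume rebalance testvol start
  gluster volume rebalance testvol status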
After deleting the volume, cleaning the bricks and doing the same thing again, I
ran into a different problem: everything is accessible, but on some bricks the
files that should have a size of 1 MB are 0 bytes... All files do have the
correct size on two nodes in the corresponding bricks, though, so that might be
a rebalancing glitch... Or is it just a gap in my understanding of gluster, as
these files seem to have a mode of ---------T?
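If it helps: those 0-byte ---------T entries look like DHT link files to me. If
I understand the docs correctly, they should carry a
trusted.glusterfs.dht.linkto xattr pointing to the subvolume that holds the
real data, which can be checked directly on a brick (as root, path just an
example):

  getfattr -d -m . -e text /export/brick1/path/to/zero-byte-file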
Removing the dbench files then leaves me with two inaccessible files that I
can't remove through the mount. After unmounting, remounting and running dbench
again, I can remove the whole directory without problems.
Still, it looks as if doing too many migrations in one evening is rather
challenging for gluster?
Have fun,
Arnold
PS: Note that the volume was fuse-mounted on one of the nodes the whole time...