Darren Austin
2011-Jun-22 14:01 UTC
[Gluster-users] Unexpected behaviour during replication heal
Hi,

I've been evaluating GlusterFS (3.2.0) for a small replicated cluster set up on Amazon EC2, and I think I've found what might be a bug or some sort of unexpected behaviour during the self-heal process.

Here's the volume info[1]:

Volume Name: test-volume
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 1.2.3.4:/data
Brick2: 1.2.3.5:/data

I've not configured any special volume settings or modified any .vol files by hand, and the glusterd.vol file is the one installed from the source package - so it's a pretty bog-standard set-up I'm testing.

I've been simulating the complete hard failure of one of the servers in the cluster (IP 1.2.3.4) in order to test the replication recovery side of Gluster. From a client I'm copying a few (pre-made) large files (1GB+) of random data onto the mount, and part way through I use iptables on the server at IP 1.2.3.4 to simulate it falling off the planet (basically dropping ALL outgoing and incoming packets from all the clients/peers).

The clients seem to handle this fine - after a short pause in the copy, they continue to write the data to the second replica server, which dutifully stores the data. An md5sum of the files from the clients shows they are getting the complete file back from the (one) remaining server in the cluster - so all is good thus far :)

Now, when I pull the firewall down on the Gluster server I took down earlier (allowing clients and peers to communicate with it again), that server has only some of the files which were copied, and *part* of a file which it received before it got disconnected. The client logs show that a self-heal process has been triggered, but nothing seems to happen *at all* to bring the replicas back into sync. So I tested a few things in this situation to discover what the procedure might be to recover from this once we have a live system.

On the client, I go into the Gluster-mounted directory and do an 'ls -al'. This triggers a partial re-sync of the brick on the peer which was inaccessible for a while - the missing files are created in that brick as ZERO size; no data is transferred from the other replica into those files, and the partial file which that brick holds does not have any of the missing part copied into it.

The 'ls -al' on the client lists ALL the files that were copied into the cluster (as you'd expect), and the files have the correct size information except for one - the file which was being actively written when I downed the peer at IP 1.2.3.4. That file's size is listed as the partial size of the file held on the disconnected peer - it is not reporting the full size as held by the peer with the complete file. However, an md5sum of the file is correct - the whole file is being read back from the peer which has it, even though the size information is wrong. A stat, touch or any other access of that file does not cause it to be synced to the brick which only has the partial copy.

I now try the 'self-heal' trigger as documented on the website. A bit more success! All the zero-sized files on the peer at 1.2.3.4 now have data copied into them from the brick which has the full set of files. All the files are now in sync between the bricks except one - the partial file which was being written to at the time the peer went down. The peer at 1.2.3.4 still only has the partial file, the peer at 1.2.3.5 has the full file, and all the clients report the size as being the partial size held by the peer at 1.2.3.4 - yet they can md5sum the file and get the correct result.
No matter how much that file is accessed, it will not sync over to the other peer. So I tried a couple more things to see if I could trigger the sync...

From another client (NOT the one which performed the copy of files onto the cluster), I umount'ed and re-mount'ed the volume. Further stat's, md5sum's, etc. still do not trigger the sync.

However, if I umount and re-mount the volume on the client which actually performed the copy, then as soon as I do an ls in the directory containing that file, the sync begins. I don't even have to touch the file itself - a simple ls on the directory is all it takes to trigger it. The size of the file is then correctly reported to the client as well.

This isn't a split-brain situation, since the file on the peer at 1.2.3.4 is NOT being modified while it's out of the cluster - that peer just has one or two whole files from the client, plus a partial one cut off during transfer.

I'd be very grateful if someone could confirm whether this is expected behaviour of the cluster or not? To me, it seems unthinkable that a volume would have to be triggered to repair (with the find/stat commands), AND be umount'ed and re-mount'ed by the exact client which was writing the partial file at the time, in order to force that file to be sync'ed.

If this is a bug, it's a pretty impressive one in terms of the reliability of the cluster - what would happen if the peer which DOES have the full file goes down before the above procedure is complete? The first peer still only has the partial file, yet the clients will believe the whole file has been written to the volume - causing an inconsistent state and possible data corruption.

Thanks for reading such a long message - please let me know if you need any more info to help explain why it's doing this! :)

Cheers,
Darren.

[1] - Please, please can you make 'volume status' an alias for 'volume info', and 'peer info' an alias for 'peer status'?! I keep typing them the wrong way around! :)
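For anyone wanting to reproduce the set-up: it really was just the stock CLI, roughly as sketched below (the IPs are the placeholders from the volume info above, and the file names in the copy are arbitrary examples):

  # From one of the two servers (both running stock glusterd built from the 3.2.0 source package):
  gluster peer probe 1.2.3.5
  gluster volume create test-volume replica 2 transport tcp 1.2.3.4:/data 1.2.3.5:/data
  gluster volume start test-volume

  # On the client - native (FUSE) mount, then start the copy of the pre-made large files:
  mount -t glusterfs 1.2.3.4:/test-volume /mnt
  cp /tmp/bigfile-*.bin /mnt/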
Mohit Anchlia
2011-Jun-22 16:56 UTC
[Gluster-users] Unexpected behaviour during replication heal
Can you check your logs on the server and the client and see what got logged for that one file you were having issues with?

On Wed, Jun 22, 2011 at 7:01 AM, Darren Austin <darren-lists at widgit.com> wrote:
> [Darren's original message quoted in full - snipped]
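On a default install the places to grep would be something like the following - an assumption, since the exact file names depend on the mount point and the brick path (here /mnt and /data), and the file name to search for is just a placeholder:

  # On the client - the mount log is named after the mount point (/mnt -> mnt.log):
  grep 'name-of-the-partial-file' /var/log/glusterfs/mnt.log

  # On each server - the brick log is named after the brick path (/data -> data.log):
  grep 'name-of-the-partial-file' /var/log/glusterfs/bricks/data.log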
Anand Avati
2011-Jun-22 18:14 UTC
[Gluster-users] Unexpected behaviour during replication heal
It looks like the disconnection happened in the middle of a write transaction (after the lock phase, before the unlock phase), and the server's detection of the client disconnection (via TCP_KEEPALIVE) seems not to have happened before the client reconnected. The client, having witnessed the reconnection, has assumed that the locks have been relinquished by the server. The server, however, having noticed the reconnection before it noticed the breakage of the original connection, has not released the held locks.

This explains why self-heal happens only after the first client unmounts while connectivity is fine: the removal of the locks on the inode permits self-heal to proceed.

Tuning the server-side TCP keepalive to a smaller value should fix this problem. Can you please verify?

Thanks,
Avati

On Wed, Jun 22, 2011 at 7:31 PM, Darren Austin <darren-lists at widgit.com> wrote:
> [Darren's original message quoted in full - snipped]
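A sketch of the sort of tuning being suggested - the network.ping-timeout option is the documented per-volume setting; the socket keepalive option names in the comments are from memory and should be double-checked against the version in use:

  # Shorter "dead peer" detection at the Gluster level (default is on the order of 40 seconds):
  gluster volume set test-volume network.ping-timeout 10

  # The raw TCP keepalive knobs sit in the brick volfile's transport options; the
  # option names below are an assumption and need verifying before use:
  #   option transport.socket.keepalive-time 10
  #   option transport.socket.keepalive-interval 2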
Darren Austin
2011-Jun-28 09:49 UTC
[Gluster-users] Unexpected behaviour during replication heal
----- Original Message -----
> It looks like the disconnection happened in the middle of a write
> transaction (after the lock phase, before the unlock phase).

The server was deliberately disconnected after the write had begun, in order to test what would happen in that situation and to document a recovery procedure for it.

> And the server's detection of client disconnection (via TCP_KEEPALIVE)
> seems to have not happened before the client reconnected.

I've not configured any special keepalive settings for the server or clients - the configuration was an out-of-the-box glusterd.vol file, and a "volume create" sequence with standard params (no special settings or options applied).

The disconnected server was also in that state for approx 10 minutes - not seconds. I assume the "default" set-up is not to hold on to a locked file for over 10 minutes when in a disconnected state? Surely it shouldn't hold onto a lock *at all* once it's out of the cluster?

> The client, having witnessed the reconnection has assumed the locks have
> been relinquished by the server. The server, however, having noticed the
> same client reconnection before breakage of the original connection has
> not released the held locks.

But why is the server still holding the locks WAY past the time it should be? We're not talking seconds here, we're talking minutes of disconnection. And why, when it is reconnected, will it not sync that file back from the other servers that have a full copy of it?

> Tuning the server side tcp keepalive to a smaller value should fix
> this problem. Can you please verify?

Are you talking about the GlusterFS keepalive setting in the vol file, or changing the actual TCP keepalive settings for the *whole* server? Changing the server's TCP keepalive is not an option, since it has ramifications for other things - and it shouldn't be necessary to solve what is, really, a GlusterFS bug...

Cheers,
Darren.
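For context, the "whole server" change would mean touching the kernel's keepalive sysctls, which affect every TCP connection on the box - roughly these (values shown are the usual Linux defaults):

  sysctl net.ipv4.tcp_keepalive_time     # 7200 seconds before the first probe
  sysctl net.ipv4.tcp_keepalive_intvl    # interval between probes
  sysctl net.ipv4.tcp_keepalive_probes   # probes before the connection is dropped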
Darren Austin
2011-Jun-28 14:30 UTC
[Gluster-users] Unexpected behaviour during replication heal
> Can you check the server (brick) logs to check the order of detected
> disconnection and new/reconnection from the client?

Hi,

It seems this wasn't due to keepalives - the system time on the two servers was a few seconds out. After a pointer from someone off-list, I synced the time and ran ntpd (which I wasn't doing, as this was just a test system) and did some more tests.

The partial-file syndrome I noted before seems to have gone away - at least in terms of the file not syncing back to the previously disconnected server after it finds its way back into the cluster. Once the keepalive timeout is reached, the client sends all the data to the second server.

A quick question on that, actually - when all servers are online, are the clients supposed to send the data to both at the same time? I see from monitoring the traffic that the client duplicates the writes - one to each server? Also, when one of the servers disconnects, is it normal that the client "stalls" the write until the keepalive time expires and the online servers notice one has vanished?

Finally, during my testing I encountered a reproducible hard lock-up of the client... here's the situation:

  Server1 and Server2 are in the cluster, sharing 'data-volume' (which is /data on both servers).
  Client mounts server1:data-volume as /mnt.
  Client begins to write a large (1 or 2 GB) file to /mnt (I just used random data).
  Server1 goes down part way through the write (I simulated this by iptables -j DROP'ing everything from the relevant IPs).
  Client "stalls" writes until the keepalive timeout, and then continues to send data to Server2.
  Server1 comes back online shortly after the keepalive timeout - but BEFORE the Client has written all the data to Server2.
  Server1 and Server2 reconnect, and the writes on the Client completely hang.

The mounted directory on the client becomes completely inaccessible when the two servers reconnect. I had to kill -9 the dd process doing the write (along with the glusterfs process on the client) in order to release the mountpoint.

I've reproduced this issue several times now and the result is always the same: if the client is writing data to a server when one of the others comes back online after an outage, the client will hang.

I've attached logs for one of the times I tested this - I hope it helps in diagnosing the problem :)  Let me know if you need any more info. (A rough command-by-command version of the reproduction is sketched below, after the attachment list.)

-- 
Darren Austin - Systems Administrator, Widgit Software.
Tel: +44 (0)1926 333680.  Web: http://www.widgit.com/
26 Queen Street, Cubbington, Warwickshire, CV32 7NA.

-------------- next part --------------
Non-text attachments were scrubbed by the list software:
  mnt.log            (text/x-log, 13142 bytes)  <http://supercolony.gluster.org/pipermail/gluster-users/attachments/20110628/a3003e39/attachment.bin>
  server1.log        (text/x-log,  1770 bytes)  <http://supercolony.gluster.org/pipermail/gluster-users/attachments/20110628/a3003e39/attachment-0001.bin>
  server2.log        (text/x-log,   589 bytes)  <http://supercolony.gluster.org/pipermail/gluster-users/attachments/20110628/a3003e39/attachment-0002.bin>
  server1-brick.log  (text/x-log,  7551 bytes)  <http://supercolony.gluster.org/pipermail/gluster-users/attachments/20110628/a3003e39/attachment-0003.bin>
  server2-brick.log  (text/x-log,  2082 bytes)  <http://supercolony.gluster.org/pipermail/gluster-users/attachments/20110628/a3003e39/attachment-0004.bin>
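The reproduction, as a rough shell transcript (a sketch, not verbatim - the IPs and the output file name are placeholders, and the 2GB dd size is just what was used here):

  # On the client: mount the volume and start a large streaming write.
  mount -t glusterfs server1:/data-volume /mnt
  dd if=/dev/urandom of=/mnt/testfile bs=1M count=2048 &

  # On Server1, part way through the write: drop all traffic to/from the
  # other peer and the client to simulate a hard failure.
  iptables -I INPUT  -s <server2-ip> -j DROP
  iptables -I OUTPUT -d <server2-ip> -j DROP
  iptables -I INPUT  -s <client-ip>  -j DROP
  iptables -I OUTPUT -d <client-ip>  -j DROP

  # Wait for the client to resume writing to Server2 after the timeout,
  # then bring Server1 back *before* the dd has finished:
  iptables -F

  # At this point the dd - and any other access to /mnt - hangs hard.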
Marco Agostini
2011-Jun-28 20:23 UTC
[Gluster-users] Unexpected behaviour during replication heal
2011/6/28 Darren Austin <darren-lists at widgit.com>:
>
> Also, when one of the servers disconnects, is it normal that the client "stalls" the write until the keepalive time expires and the online servers notice one has vanished?
>

You can modify the network.ping-timeout parameter from 46 sec down to 5 or 10 seconds to reduce the client's "stall" time.

> Finally, during my testing I encountered a reproducible hard lock-up of the client... here's the situation:
>   Server1 and Server2 are in the cluster, sharing 'data-volume' (which is /data on both servers).
>   Client mounts server1:data-volume as /mnt.
>   Client begins to write a large (1 or 2 GB) file to /mnt (I just used random data).
>   Server1 goes down part way through the write (I simulated this by iptables -j DROP'ing everything from the relevant IPs).
>   Client "stalls" writes until the keepalive timeout, and then continues to send data to Server2.
>   Server1 comes back online shortly after the keepalive timeout - but BEFORE the Client has written all the data to Server2.
>   Server1 and Server2 reconnect, and the writes on the Client completely hang.
>

I have a similar problem with a file that I'm using as a virtual disk for KVM.

> The mounted directory on the client becomes completely inaccessible when the two servers reconnect.
>

Actually, that is normal at the moment :-|

> I had to kill -9 the dd process doing the write (along with the glusterfs process on the client) in order to release the mountpoint.
>

If you don't kill the process and instead wait until all the nodes are synchronised, the whole system should become responsive again.

To force a synchronisation of the whole volume, you can run this command on the client:

find <gluster-mount> -noleaf -print0 | xargs --null stat >/dev/null

... and wait.

http://www.gluster.com/community/documentation/index.php/Gluster_3.2:_Triggering_Self-Heal_on_Replicate

Craig Carl told me, three days ago:
------------------------------------------------------
 that happens because Gluster's self heal is a blocking operation. We are
 working on a non-blocking self heal; we are hoping to ship it in early
 September.
------------------------------------------------------

You can verify that directly from your client log, where you should see something like:

[2011-06-28 13:28:17.484646] I [client-lk.c:617:decrement_reopen_fd_count] 0-data-volume-client-0: last fd open'd/lock-self-heal'd - notifying CHILD-UP

Marco
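The ping-timeout is set per volume with the gluster CLI - something along these lines (the 10-second value is just an example; very low values can cause spurious disconnects):

  # Lower the ping timeout for the volume (value in seconds):
  gluster volume set data-volume network.ping-timeout 10

  # Confirm the option was applied (it shows up in the volume info output):
  gluster volume info data-volume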
Darren Austin
2011-Jun-29 10:00 UTC
[Gluster-users] Unexpected behaviour during replication heal
----- Original Message -----
> some information:
> - you have 2 GlusterFS as server (are "pingable" from any server and
>   from any client ?)

Both servers and the client can all ping each other during the hard lock.

> - 10.58.139.217 is a GlusterFS server ? ... if so, can you post the
>   "ps -ef | grep gluster" result ?

Server1:
root 22831     1  0 09:21 ?        00:00:00 /opt/sbin/glusterd -p /var/run/glusterd.pid
root 22881     1  0 09:21 ?        00:00:02 /opt/sbin/glusterfsd --xlator-option data-volume-server.listen-port=24009 -s localhost --volfile-id data-volume.10.234.158.226.data -p /etc/glusterd/vols/data-volume/run/10.234.158.226-data.pid -S /tmp/aebe39eb33894f925c6a24eefb187b17.socket --brick-name /data --brick-port 24009 -l /var/log/glusterfs/bricks/data.log
root 22886     1  0 09:21 ?        00:00:00 /opt/sbin/glusterfs -f /etc/glusterd/nfs/nfs-server.vol -p /etc/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log

Server2:
root  7053     1  0 09:21 ?        00:00:00 /opt/sbin/glusterd -p /var/run/glusterd.pid
root  7097     1  0 09:21 ?        00:00:08 /opt/sbin/glusterfsd --xlator-option data-volume-server.listen-port=24009 -s localhost --volfile-id data-volume.10.49.14.115.data -p /etc/glusterd/vols/data-volume/run/10.49.14.115-data.pid -S /tmp/b75962c846832291d46f8b2548fe37d8.socket --brick-name /data --brick-port 24009 -l /var/log/glusterfs/bricks/data.log
root  7102     1  0 09:21 ?        00:00:00 /opt/sbin/glusterfs -f /etc/glusterd/nfs/nfs-server.vol -p /etc/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log

Client:
root  8378     1  0 09:32 ?        00:00:05 /opt/sbin/glusterfs --log-level=INFO --volfile-id=data-volume --volfile-server=10.234.158.226 /mnt

> - all GlusterFS version (server and client) are the same ? (glusterfs -v)

Yes, all servers and clients are running 3.2.1, no patches or modifications.

HTH,
Darren.

-- 
Darren Austin - Systems Administrator, Widgit Software.
Tel: +44 (0)1926 333680.  Web: http://www.widgit.com/
26 Queen Street, Cubbington, Warwickshire, CV32 7NA.
Darren Austin
2011-Jun-29 10:38 UTC
[Gluster-users] Unexpected behaviour during replication heal
----- Original Message -----
> Darren,
>   Can you get us the process state dumps of the client when it is hung?
> (kill -USR1 <pid> of mount and gzip /tmp/glusterdump.<pid>). That will
> help us figuring out what exactly was happening.

Gluster dump logs attached. The -1 file was done just after the hard lock, the -2 about 10 minutes later.

Cheers,
Darren.

-- 
Darren Austin - Systems Administrator, Widgit Software.
Tel: +44 (0)1926 333680.  Web: http://www.widgit.com/
26 Queen Street, Cubbington, Warwickshire, CV32 7NA.

-------------- next part --------------
Non-text attachments were scrubbed by the list software:
  glusterdump.8378-1.gz  (application/x-gzip, 2812 bytes)  <http://supercolony.gluster.org/pipermail/gluster-users/attachments/20110629/540c9d05/attachment.gz>
  glusterdump.8378-2.gz  (application/x-gzip, 2887 bytes)  <http://supercolony.gluster.org/pipermail/gluster-users/attachments/20110629/540c9d05/attachment-0001.gz>
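For reference, the dumps were taken along the lines requested above - roughly as follows (the pgrep pattern is just one way of finding the PID of the mount process, 8378 in the earlier ps output):

  # On the client: find the glusterfs process backing the mount and ask it for a state dump.
  pid=$(pgrep -f 'glusterfs.*--volfile-id=data-volume')
  kill -USR1 "$pid"

  # The dump is written to /tmp/glusterdump.<pid>; compress it for the list:
  gzip /tmp/glusterdump."$pid"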