Hello All. I have run into a problem with replication. I have two servers (192.168.0.62 and 192.168.0.37) and I want to create one replicated volume. I reviewed the documentation and created the following:

---glusterfsd.vol---
volume posix
  type storage/posix
  option directory /var/share
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  option thread-count 16
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow *
  option auth.addr.brick-ns.allow *
  subvolumes brick
end-volume
-----------------------

and

---glusterfs.vol---
volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.0.62
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.0.37
  option remote-subvolume brick
end-volume

volume replicate
  type cluster/replicate
  subvolumes remote1 remote2
end-volume

volume writebehind
  type performance/write-behind
  option aggregate-size 128KB
  option window-size 1MB
  subvolumes replicate
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume
--------------

The documentation says that the replicate translator works like RAID1, but that isn't true! If both servers are up, everything works fine. But if the second server goes down (due to a lost network connection), I run into a problem. If I delete a file via the first server and the second server later comes back up, the deleted file is recreated on the first server! As you can see, this is not RAID1. Please help me. Can GlusterFS work as a complete RAID1 mirror or not?

With best wishes,
Victor
Hey there,

Basically, you're being affected by the self-healing features of the replication translator. From the GlusterFS FAQ:

Q: What about deletion self/auto healing?
A: With auto healing or self healing only file creation is healed. If a brick is missing because of a disk crash re-creation of files is ok but if it's a temporary network problem synchronizing deletion is mandatory.

In other words, when that downed node comes back up, the healing system sees that it has data that its partner lacks. Giving the benefit of the doubt that it's better to have an extra file lying around than to suffer the possibility that the file should not have been deleted, the healer errs on the side of caution and replicates the file.

That said, the replication translator is not really meant to implement a true RAID1 mirroring scheme, but rather to emulate the good bits of such a scheme. At least, that's my understanding thereof.

Ess

> If both servers up, everything works fine. But if second server down (due
> lost network connection), I run into the problem.
> If I deleted a file from first server and second server will be up, the
> deleted file will be recreated at the first server!
> As you see, this is not RAID1.
> Please help me. Can GlusertFS work as completely RAID1 or no?

-- 
SO not teh 1337
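For anyone trying to follow along, here is a sketch of the timeline that produces the "resurrected file" behaviour, plus the traditional way to force a full self-heal pass once the downed brick returns. This assumes the replicated volume from the volfiles above is mounted at /mnt/gluster on the client; the mount point is my assumption, not from the original post, and none of this will do anything useful outside a real GlusterFS deployment.

```shell
# Hypothetical reproduction, assuming the volume is mounted at /mnt/gluster.

# 1. Both bricks up: the file is written to both replicas.
touch /mnt/gluster/example.txt

# 2. 192.168.0.37 loses network connectivity.

# 3. Delete the file; only the brick on 192.168.0.62 sees the unlink,
#    because the other replica is unreachable.
rm /mnt/gluster/example.txt

# 4. 192.168.0.37 comes back. Its brick still holds example.txt, so the
#    next lookup on that path triggers self-heal, which copies the file
#    back to 192.168.0.62 -- the resurrection Victor observed.
ls -l /mnt/gluster/example.txt

# Classic idiom for old GlusterFS releases: walk the whole mount so that
# replicate performs a lookup (and thus a self-heal check) on every file
# after a brick rejoins.
find /mnt/gluster -type f -exec stat {} \; > /dev/null
```

The key point, as the FAQ quote says, is that only creation is healed: the surviving replica has no record that the file was deliberately deleted, so the copy on the returning brick wins.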