Hi guys,

This all applies to Gluster 3.3.

I love Gluster, but I'm having some difficulty understanding a few things.

1. Replication (with existing data):
Two servers in simple single-brick replication, i.e. one volume (testvol):
- server1:/data/ && server2:/data/
- server1 has a few million files in the /data dir
- server2 has no files in the /data dir

So after I created testvol and started the volume:

QUESTION (1): Do I need to mount the volume on each server like so? If
yes, why?
---> on server1: mount -t glusterfs 127.0.0.1:/testvol /mnt/gfstest
---> on server2: mount -t glusterfs 127.0.0.1:/testvol /mnt/gfstest

CLIENT:
Then I mount on the client:
mount -t glusterfs server-1-ip:/testvol /mnt/gfstest

Question (2): Why do I only see files from server2?

Question (3): Whenever I'm writing/updating/working with the files on
the SERVER, should I ALWAYS do it via the local mount /mnt/gfstest, and
never work with files directly in the brick /data?

Question (4.1): What's the best practice to sync the existing data?

Question (4.2): Is it safe to create a brick in a directory that
already has files in it?

Regards
Jacques
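P.S. For reference, this is roughly how I set the volume up (from
memory, so treat the exact commands as approximate; server1/server2 are
the hostnames as the peers see each other):

bash# gluster peer probe server2
bash# gluster volume create testvol replica 2 server1:/data server2:/data
bash# gluster volume start testvol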
On 6/4/12 4:05 AM, Jacques du Rand wrote:
> Hi guys,
> This all applies to Gluster 3.3.
>
> I love Gluster, but I'm having some difficulty understanding a few
> things.
>
> 1. Replication (with existing data):
> Two servers in simple single-brick replication, i.e. one volume (testvol):
> - server1:/data/ && server2:/data/
> - server1 has a few million files in the /data dir
> - server2 has no files in the /data dir
>
> So after I created testvol and started the volume:
> QUESTION (1): Do I need to mount the volume on each server like so?
> If yes, why?
> ---> on server1: mount -t glusterfs 127.0.0.1:/testvol /mnt/gfstest
> ---> on server2: mount -t glusterfs 127.0.0.1:/testvol /mnt/gfstest

Only if you want to access the files within the volume on the two
servers which have the bricks on them.

> CLIENT:
> Then I mount on the client:
> mount -t glusterfs server-1-ip:/testvol /mnt/gfstest
>
> Question (2): Why do I only see files from server2?

Probably hit and miss what you see, since your bricks are not consistent.

> Question (3): Whenever I'm writing/updating/working with the files on
> the SERVER, should I ALWAYS do it via the local mount /mnt/gfstest,
> and never work with files directly in the brick /data?

Correct - Gluster can't keep track of writes if you don't do them
through the glusterfs mount point.

> Question (4.1): What's the best practice to sync the existing data?

You will need to force a manual self-heal and see if that copies all
the data over to the other brick:

find /mnt/gfstest -noleaf -print0 | xargs --null stat > /dev/null

> Question (4.2): Is it safe to create a brick in a directory that
> already has files in it?

As long as you force a self-heal on it before you use it.
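On 3.3 specifically, the new self-heal daemon can do that walk for you,
so instead of the find/stat trick you should also be able to trigger a
full heal from the CLI (a sketch, using the volume name from above):

bash# gluster volume heal testvol full
bash# gluster volume heal testvol info    # list entries still pending heal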
On 06/04/2012 07:15 PM, Amar Tumballi wrote:
>> Do you know if I'll be able to convert a distribute to
>> distribute-replicate this way?
>>
>> 1) delete the distribute volume
>>
>> 2) create a distribute-replicate volume
>>
>> 3) run the self-heal, which hopefully results in the data being moved
>> to the other brick, *not* removed?
>
> With the 3.3.0 release, all three of these steps can be achieved in a
> single step; just do:
>
> bash# gluster volume add-brick <VOLNAME> replica N BRICK1 BRICK2 .. BRICKn
>
> where:
>
> VOLNAME is the distribute volume name,
>
> N is the target replica count (in this case 2, as one add-brick command
> can only increase the replica count by 1), and
>
> BRICK(1-n) are the gluster bricks to be added as pairs to the existing
> bricks, in order.
>
> Let the proactive self-heal daemon take care of syncing your data :-)

Hmm, but about the steps you describe (gluster volume add-brick
<VOLNAME> replica N ...), assuming I upgrade to 3.3:

1) my volume is "distribute" right now - will "gluster volume add-brick
<VOLNAME> replica N ..." work in that case?

2) I don't have any bricks to add; all of them are already in the volume

3) given 1) and 2) above - does that mean I have to delete the
distribute volume, create it as distribute+replicate over the existing
data, and hope it will mirror the data rather than remove it? (I'm
concerned that the xattrs will somehow confuse glusterfs)

-- 
Tomasz Chmielewski
http://www.ptraveler.com
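P.S. To make sure I'm reading the syntax right: for an existing
two-brick distribute volume, going to replica 2 would need two new
bricks, something like this ("myvol", server3 and server4 are
hypothetical names):

bash# gluster volume add-brick myvol replica 2 server3:/data server4:/data

where server3:/data becomes the mirror of the first existing brick and
server4:/data the mirror of the second. Is that correct?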