Dear all,

Is there an easy way to put a storage brick that is part of a dht volume
into some kind of read-only maintenance mode, while keeping the whole dht
volume in read/write state? Currently it almost works, but files are still
scheduled to go to the server in maintenance mode, and in that case you get
an error. It should be possible to write to another brick instead.

Sincerely,
--
Fred-Markus Stober
fred.stober at kit.edu
Karlsruhe Institute of Technology
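P.S. To make the setup concrete, the client volfile in question looks
roughly like this (host and volume names are illustrative, not our real
config):

  volume client1
    type protocol/client
    option transport-type tcp
    option remote-host server1.example.org
    option remote-subvolume brick
  end-volume

  # client2 and client3 are declared the same way for the other servers

  volume dht
    type cluster/distribute
    subvolumes client1 client2 client3
  end-volume

New files are hashed onto one of the subvolumes; I could not find a
per-brick switch that tells distribute "keep this one readable, but stop
placing new files on it".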
Fred,

Would you like to tell us more about the use case? Like, why would you want
to do this? If we take a brick out, it would not be possible to get it back
in (with the existing data).

Regards,
Tejas.
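P.S. One partial measure, depending on the glusterfs release you run, is to
make that one export read-only on the server side (untested sketch, path
illustrative):

  volume posix
    type storage/posix
    option directory /data/export
  end-volume

  volume export-ro
    # features/read-only on newer releases; older ones ship
    # features/filter with "option read-only on" instead
    type features/read-only
    subvolumes posix
  end-volume

That would stop modifications on the brick itself, but as you note,
distribute will still hash new file creates onto it, and those creates will
fail instead of spilling over to another subvolume.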
ok .. so if I understand correctly, you want to *replace* an existing
export with a new export on a new machine, while keeping all data online,
and keep the source export of the replacement in read-only mode so it can
be copied off. Are there other processes also doing reads off the read-only
export/sub-volume, besides the rsync?

Regards,
Tejas.

----- Original Message -----
From: "Fred Stober" <fred.stober at kit.edu>
To: "Tejas N. Bhise" <tejas at gluster.com>
Cc: gluster-users at gluster.org
Sent: Wednesday, April 14, 2010 5:50:33 PM
Subject: Re: [Gluster-users] Maintenance mode for bricks

On Wednesday 14 April 2010, Tejas N. Bhise wrote:
> Would you like to tell us more about the use case? Like, why would you
> want to do this? If we take a brick out, it would not be possible to
> get it back in (with the existing data).

Ok, here is our use case: We have a small test system running on 3 file
servers. cluster/distribute is used to give a flat view of the file
servers. Now we have the problem that one file server is going to be
replaced with a larger one. Therefore we want to put the old file server
into read-only mode to rsync the files to the new server. Unfortunately
this will take ~2 days. During this time it would be nice to keep the
glusterfs in read/write mode.

If I understand it correctly, I should be able to use "lookup-unhashed" to
reintegrate the new file server into the existing file system when we
switch off the old server.

Cheers,
Fred
--
Fred-Markus Stober
stober at cern.ch
Karlsruhe Institute of Technology
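P.S. As I understand it, "lookup-unhashed" sits on the client-side
distribute volume, roughly like this (subvolume names are placeholders):

  volume dht
    type cluster/distribute
    # also search subvolumes the hash does not point to during lookup,
    # so files rsync'd onto the new server are found after the switch
    option lookup-unhashed yes
    subvolumes client1 client2 new-client
  end-volume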
Ok. And how many clients mount these volume(s)? Asking so I can understand
how many would need to remount if the config is changed.

----- Original Message -----
From: "Fred Stober" <fred.stober at kit.edu>
To: "Tejas N. Bhise" <tejas at gluster.com>
Cc: gluster-users at gluster.org
Sent: Wednesday, April 14, 2010 7:00:56 PM
Subject: Re: [Gluster-users] Maintenance mode for bricks

Dear Tejas,

On Wednesday 14 April 2010, Tejas N. Bhise wrote:
> ok .. so if I understand correctly, you want to *replace* an existing
> export with a new export on a new machine, while keeping all data
> online, and keep the source export of the replacement in read-only
> mode so it can be copied off.

Exactly.

> Are there other processes also doing reads off the read-only
> export/sub-volume, besides the rsync?

Yes, there is some activity. The goal is to keep the whole process as
transparent as possible to the users who read and write to the flat space.
Since our users mostly have a write-once read-many usage pattern, it should
be possible to keep them happy.

Regards,
Fred
--
Fred-Markus Stober
stober at cern.ch
Karlsruhe Institute of Technology
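P.S. However you end up doing the copy, the transfer should preserve
extended attributes, since distribute keeps its directory layout in the
trusted.glusterfs.dht xattr on the backend directories. A sketch, assuming
rsync 3.x on both ends (needed for -X), run as root so the trusted.*
namespace can be written; paths are illustrative:

  # copy the old backend export to the new server, preserving
  # hardlinks, numeric ownership, and extended attributes
  rsync -aHX --numeric-ids /data/export/ newserver:/data/export/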