On 05/22/2017 03:11 AM, W Kern wrote:
>
> So I am experimenting with shards using a couple of VMs and decided to
> test his scenario (i.e. only one node available on a simple 2 node + 1
> arbiter replicated/sharded volume, using 3.10.1 on CentOS 7.3).
>
> I set up a VM testbed, verified everything including the sharding
> works, and then shut down nodes 2 and 3 (the arbiter).
>
> As expected I got a quorum error on the mount.
>
> So I tried
>
> gluster volume set VOL cluster.quorum-type none
>
> from the remaining 'working' node1, and it simply responds with
>
> "volume set: failed: Quorum not met. Volume operation not allowed"
>
> How do you FORCE gluster to ignore the quorum in such a situation?
>

You probably also have server quorum enabled
(cluster.server-quorum-type = server). Server quorum enforcement does
not allow modifying volume options, or other actions like peer
probing/detaching, if server quorum is not met.

Also, don't disable client quorum for arbiter volumes or you will end
up corrupting the files. For instance, if the arbiter brick was the
only one that is up and you disabled client quorum, then a writev from
the application will get a success, but nothing will ever get written
on disk, since the arbiter brick holds only metadata and no file data.

-Ravi

> I tried stopping the volume and even rebooting node1 and still get the
> error (and of course the volume won't start for the same reason).
>
> -WK
>
> On 5/18/2017 7:41 AM, Ravishankar N wrote:
>> On 05/18/2017 07:18 PM, lemonnierk at ulrar.net wrote:
>>> Hi,
>>>
>>> We are having huge hardware issues (oh joy...) with RAID cards.
>>> On a replica 3 volume, we have 2 nodes down. Can we somehow tell
>>> gluster that its quorum is 1, to get some amount of service back
>>> while we try to fix the other nodes or install new ones?
>>
>> If you know what you are getting into, then `gluster v set <volname>
>> cluster.quorum-type none` should give you the desired result, i.e.
>> allow write access to the volume.
>>
>>> Thanks
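To put the above into commands: the sequence below is only a rough
sketch of what would be run from the surviving node, not a tested
recipe. VOL stands in for the real volume name, and the re-enable
values at the end assume the options were 'server' and 'auto'
beforehand, as they appear to be in this thread.

# Check what is currently enforced
gluster volume get VOL cluster.quorum-type
gluster volume get VOL cluster.server-quorum-type

# Relax server quorum first; while it is unmet, other 'volume set'
# operations are rejected with "Quorum not met"
gluster volume set VOL cluster.server-quorum-type none

# Then relax client quorum so the mount becomes writable again
gluster volume set VOL cluster.quorum-type none

# Once the other nodes are back and healed, restore the protection
gluster volume set VOL cluster.server-quorum-type server
gluster volume set VOL cluster.quorum-type auto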
On 5/21/2017 7:00 PM, Ravishankar N wrote:
> On 05/22/2017 03:11 AM, W Kern wrote:
>>
>> gluster volume set VOL cluster.quorum-type none
>>
>> from the remaining 'working' node1, and it simply responds with
>>
>> "volume set: failed: Quorum not met. Volume operation not allowed"
>>
>> How do you FORCE gluster to ignore the quorum in such a situation?
>>
> You probably also have server quorum enabled
> (cluster.server-quorum-type = server). Server quorum enforcement does
> not allow modifying volume options, or other actions like peer
> probing/detaching, if server quorum is not met.
>

Great, that worked, i.e.

gluster volume set VOL cluster.server-quorum-type none

I did get an error of "Volume set: failed: Commit failed on localhost,
please check the log files for more details", but then the volume
immediately came back up, and I was able to mount the single remaining
node and access those files.

So you need to change both settings in my scenario.

> Also, don't disable client quorum for arbiter volumes or you will end
> up corrupting the files. For instance, if the arbiter brick was the
> only one that is up and you disabled client quorum, then a writev from
> the application will get a success, but nothing will ever get written
> on disk, since the arbiter brick holds only metadata and no file data.

Yes, I am learning the pros and cons of arbiters and understand their
limitations.

In this test case I had deliberately taken the arbiter OFFLINE, and I
was rehearsing a scenario where only one of the two real copies
survived and I needed that data. Hopefully that is an unlikely scenario
and we would have backups, but I've earned the grey specks in my hair,
and the Original Poster who started this thread ran into that exact
scenario.

On our older Gluster installs without sharding, the files are simply
sitting there on the disk if you need them. That is enormously
comforting and means you can be a little less paranoid compared to
other distributed storage schemes we use or have used.

But then I have my next question:

Is it possible to recreate a large file (such as a VM image) from the
raw shards, outside of the Gluster environment, if you only had the raw
brick or volume data?

From the docs, I see you can identify the shards by the GFID:

# getfattr -d -m . -e hex /path_to_file
# ls -lh /bricks/*/.shard | grep <GFID>

Is there a gluster tool/script that will recreate the file, or can you
just sort the shards properly and then simply cat/copy them back
together?

cat shardGFID.1 .. shardGFID.X > thefile

Thank you though; the sharding should be a big win.

-bill
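As far as I know there is no dedicated recovery tool for this in 3.10,
but in principle the pieces can be stitched back together by hand. The
outline below is only a sketch: it assumes the usual shard layout (the
file at its normal path on the brick holds the first
features.shard-block-size bytes, and the remaining pieces sit under
.shard as <GFID>.1, <GFID>.2, ...), that every shard is present and
full-sized (no sparse regions, which would throw the offsets off), and
placeholder paths and values (/bricks/brick1, /recovered, images/vm1.img,
<GFID>, <N>) that you would replace with your own.

# On a brick of the surviving copy; paths are examples only
BRICK=/bricks/brick1
FILE=images/vm1.img            # path of the file relative to the brick

# trusted.gfid ties the base file to its shards; the shard file names
# use the dashed UUID form of this hex value
getfattr -n trusted.gfid -e hex "$BRICK/$FILE"

# List the pieces in numeric order (plain ls sorts .10 before .2)
ls "$BRICK/.shard" | grep "<GFID>" | sort -t. -k2 -n

# Naive reassembly: the base file first, then the shards in order,
# where <N> is the highest shard number present
cp "$BRICK/$FILE" /recovered/vm1.img
for i in $(seq 1 <N>); do
    cat "$BRICK/.shard/<GFID>.$i" >> /recovered/vm1.img
done

A sensible sanity check afterwards is to compare the reassembled size
against what the mount reported, or against a checksum taken while the
volume was still healthy, since any missing or short shard silently
shifts everything after it.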