Joe Julian
2015-Aug-31 19:01 UTC
[Gluster-users] Why is it not possible to mount a replicated gluster volume with one Gluster server?
On 08/31/2015 10:41 AM, Vijay Bellur wrote:
> On Monday 31 August 2015 10:42 PM, Atin Mukherjee wrote:
>> > 2. Server2 dies. Server1 has to reboot.
>> >
>> > In this case the service stays down. It is impossible to remount the
>> > share without Server1. This is not acceptable for a high-availability
>> > system, and I believe it is also not intended, but a misconfiguration
>> > or bug.
>>
>> This is exactly what I gave as an example in the thread (please read
>> again). GlusterD is not supposed to start the brick processes if its
>> counterpart hasn't come up yet in a 2-node setup. It has been designed
>> this way to stop GlusterD from operating on a volume which could be
>> stale because the node was down while the cluster was operational
>> earlier.
>
> For two-node deployments, a third dummy node is recommended to ensure
> that quorum is maintained when one of the nodes is down.
>
> Regards,
> Vijay

Have the settings changed to enable server quorum by default?
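For readers following along, server-side quorum is controlled per volume with the `cluster.server-quorum-type` option; a sketch of checking and enabling it (the volume name `myvol` is a placeholder, and option behavior may vary between Gluster releases):

```shell
# Check whether server-side quorum is enabled for a volume
# ("myvol" is a placeholder for the real volume name).
gluster volume get myvol cluster.server-quorum-type

# Enable it: glusterd only keeps bricks running while a majority
# of the trusted pool's glusterd daemons are reachable.
gluster volume set myvol cluster.server-quorum-type server
```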
Merlin Morgenstern
2015-Aug-31 19:30 UTC
[Gluster-users] Why is it not possible to mount a replicated gluster volume with one Gluster server?
this all makes sense and sounds a bit like a Solr setup :-)

I have now added the third node as a peer:

    sudo gluster peer probe gs3

That indeed allows me to mount the share manually on node2 even if node1
is down.

BUT: It does not mount on reboot! It only mounts successfully if node1 is
up. I need to do a manual: sudo mount -a

Is there a particular reason for this, or is it a misconfiguration?

2015-08-31 21:01 GMT+02:00 Joe Julian <joe at julianfamily.org>:
> On 08/31/2015 10:41 AM, Vijay Bellur wrote:
>> For two-node deployments, a third dummy node is recommended to ensure
>> that quorum is maintained when one of the nodes is down.
>
> Have the settings changed to enable server quorum by default?
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
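One common way to make the boot-time mount independent of node1 is to name a backup volfile server in /etc/fstab via the `backupvolfile-server` mount option; a sketch (the hostnames gs1/gs2, the volume name `myvol`, and the mount point are assumptions, since the thread does not name them):

```
# /etc/fstab (sketch; gs1/gs2 and "myvol" are placeholders)
gs1:/myvol  /mnt/gluster  glusterfs  defaults,_netdev,backupvolfile-server=gs2  0 0
```

The backup server is only consulted to fetch the volume file at mount time; `_netdev` delays the mount until networking is up, but if the local glusterd is not yet running when fstab is processed, a later `mount -a` (or an automount setup) may still be needed.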
Atin Mukherjee
2015-Sep-01 04:03 UTC
[Gluster-users] Why is it not possible to mount a replicated gluster volume with one Gluster server?
On 09/01/2015 01:00 AM, Merlin Morgenstern wrote:
> this all makes sense and sounds a bit like a Solr setup :-)
>
> I have now added the third node as a peer:
>
>     sudo gluster peer probe gs3
>
> That indeed allows me to mount the share manually on node2 even if
> node1 is down.
>
> BUT: It does not mount on reboot! It only mounts successfully if node1
> is up. I need to do a manual: sudo mount -a

You would need to ensure that at least two of the nodes are up in the
cluster in this case.

> Is there a particular reason for this, or is it a misconfiguration?
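The "at least two nodes" requirement is just majority quorum: with n peers in the trusted pool, more than half must be up, i.e. floor(n/2) + 1. A quick sketch of the arithmetic:

```shell
#!/bin/sh
# Quorum is a strict majority of peers: floor(n/2) + 1.
for n in 2 3; do
  quorum=$(( n / 2 + 1 ))
  echo "peers=$n need=$quorum up"
done
```

With n=2 both nodes must be up (need=2 of 2), so one node rebooting stops the bricks; with the third dummy peer (n=3), two of three suffice, which is why the manual mount on node2 works while node1 is down.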