Atin Mukherjee
2015-Oct-30 11:48 UTC
[Gluster-users] How-to start gluster when only one node is up ?
-Atin
Sent from one plus one

On Oct 30, 2015 4:35 PM, "Remi Serrano" <rserrano at pros.com> wrote:
>
> Hello,
>
> I set up a gluster file cluster with 2 nodes. It works fine.
> But when I shut down both nodes and start up only one node, I cannot
> mount the share:
>
> [root at xxx ~]# mount -t glusterfs 10.32.0.11:/gv0 /glusterLocalShare
> Mount failed. Please check the log file for more details.
>
> The log says:
>
> [2015-10-30 10:33:26.147003] I [MSGID: 100030] [glusterfsd.c:2318:main]
> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.5
> (args: /usr/sbin/glusterfs --volfile-server=127.0.0.1 --volfile-id=/gv0
> /glusterLocalShare)
> [2015-10-30 10:33:26.171964] I [MSGID: 101190]
> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread
> with index 1
> [2015-10-30 10:33:26.185685] I [MSGID: 101190]
> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread
> with index 2
> [2015-10-30 10:33:26.186972] I [MSGID: 114020] [client.c:2118:notify]
> 0-gv0-client-0: parent translators are ready, attempting connect on
> transport
> [2015-10-30 10:33:26.191823] I [MSGID: 114020] [client.c:2118:notify]
> 0-gv0-client-1: parent translators are ready, attempting connect on
> transport
> [2015-10-30 10:33:26.192209] E [MSGID: 114058]
> [client-handshake.c:1524:client_query_portmap_cbk] 0-gv0-client-0: failed
> to get the port number for remote subvolume. Please run 'gluster volume
> status' on server to see if brick process is running.
> [2015-10-30 10:33:26.192339] I [MSGID: 114018]
> [client.c:2042:client_rpc_notify] 0-gv0-client-0: disconnected from
> gv0-client-0. Client process will keep trying to connect to glusterd
> until brick's port is available
>
> And when I check the volumes I get:
>
> [root at xxx ~]# gluster volume status
> Status of volume: gv0
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick 10.32.0.11:/glusterBrick1/gv0         N/A       N/A        N       N/A
> NFS Server on localhost                     N/A       N/A        N       N/A
> NFS Server on localhost                     N/A       N/A        N       N/A
>
> Task Status of Volume gv0
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
> If I start the second node, all is OK.
>
> Is this normal?

This behaviour is by design. In a multi-node cluster, when GlusterD comes
up it does not start the bricks until it receives the configuration from
one of its friends, to ensure that stale information is not referred to.
In your case, since the other node is down, the bricks are not started and
hence the mount fails.

As a workaround, we recommend adding a dummy node to the cluster to avoid
this issue.

> Regards,
>
> Rémi
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
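For illustration, a minimal sketch of that dummy-node workaround, assuming
a spare machine with glusterd installed and running but hosting no bricks
(the hostname dummy-node is hypothetical):

    # From one of the two existing nodes, add the spare machine as a
    # third peer. It holds no bricks; its only job is to be a "friend"
    # a lone restarted node can fetch the volume configuration from.
    gluster peer probe dummy-node

    # Confirm all peers show State: Peer in Cluster (Connected).
    gluster peer status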
Remi Serrano
2015-Oct-30 11:53 UTC
[Gluster-users] How-to start gluster when only one node is up ?
Thanks Atin

Regards,

Rémi

From: Atin Mukherjee [mailto:atin.mukherjee83 at gmail.com]
Sent: Friday, 30 October 2015 12:48
To: Remi Serrano <rserrano at pros.com>
Cc: gluster-users at gluster.org
Subject: Re: [Gluster-users] How-to start gluster when only one node is up ?

> This behaviour is by design. In a multi-node cluster, when GlusterD comes
> up it does not start the bricks until it receives the configuration from
> one of its friends, to ensure that stale information is not referred to.
> In your case, since the other node is down, the bricks are not started
> and hence the mount fails.
>
> As a workaround, we recommend adding a dummy node to the cluster to
> avoid this issue.
[snip]
Mauro Mozzarelli
2015-Oct-30 11:58 UTC
[Gluster-users] How-to start gluster when only one node is up ?
Hi,

Atin keeps giving the same answer: "it is by design".

I keep saying "the design is wrong and it should be changed to cater for
standby servers".

In the meantime this is the workaround I am using: when the single node
starts, I stop and start the volume, and then it becomes mountable. On
CentOS 6 and CentOS 7 this works with releases up to 3.7.4. Release 3.7.5
is broken, so I reverted back to 3.7.4.

In my experience glusterfs releases are a bit hit and miss. Often
something stops working with a newer release, then after a few more
releases it works again or there is a workaround ... Not quite the
stability one would want for commercial use, so at the moment I can risk
using it only for my home servers, hence the cluster with one node always
ON and the second as STANDBY.

MOUNT=/home
LABEL="GlusterFS:"

if grep -qs $MOUNT /proc/mounts; then
    # Already mounted; make sure the volume is started (no-op if it is).
    echo "$LABEL $MOUNT is mounted"
    gluster volume start gv_home 2>/dev/null
else
    echo "$LABEL $MOUNT is NOT mounted"
    echo "$LABEL Restarting gluster volume ..."
    # Stop and start the volume so glusterd spawns the brick processes
    # even though the peer node is down; 'yes' answers the stop prompt.
    yes | gluster volume stop gv_home > /dev/null
    gluster volume start gv_home
    mount -t glusterfs sirius-ib:/gv_home $MOUNT
    if grep -qs $MOUNT /proc/mounts; then
        echo "$LABEL $MOUNT is mounted"
    else
        echo "$LABEL failure to mount $MOUNT"
    fi
fi

I hope this helps.
Mauro

On Fri, October 30, 2015 11:48, Atin Mukherjee wrote:
> This behaviour is by design. In a multi-node cluster, when GlusterD
> comes up it does not start the bricks until it receives the
> configuration from one of its friends, to ensure that stale information
> is not referred to. In your case, since the other node is down, the
> bricks are not started and hence the mount fails.
> As a workaround, we recommend adding a dummy node to the cluster to
> avoid this issue.
[snip]

--
Mauro Mozzarelli
Phone: +44 7941 727378
eMail: mauro at ezplanet.net
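A sketch of one way to run a script like the above at boot, assuming
CentOS 7 (as mentioned above) and a hypothetical location
/usr/local/sbin/gluster-mount.sh for the script:

    # rc.local runs late in boot, typically after glusterd is up; on
    # CentOS 7 it must be executable for rc-local.service to run it.
    chmod +x /usr/local/sbin/gluster-mount.sh /etc/rc.d/rc.local
    echo '/usr/local/sbin/gluster-mount.sh' >> /etc/rc.d/rc.local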
Remi Serrano
2015-Oct-30 12:01 UTC
[Gluster-users] How-to start gluster when only one node is up ?
Thank you Mauro.

Rémi

-----Original Message-----
From: Mauro Mozzarelli [mailto:mauro at ezplanet.net]
Sent: Friday, 30 October 2015 12:58
To: Atin Mukherjee <atin.mukherjee83 at gmail.com>
Cc: Remi Serrano <rserrano at pros.com>; gluster-users at gluster.org
Subject: Re: [Gluster-users] How-to start gluster when only one node is up ?

> In the meantime this is the workaround I am using: when the single node
> starts, I stop and start the volume, and then it becomes mountable. On
> CentOS 6 and CentOS 7 this works with releases up to 3.7.4. Release
> 3.7.5 is broken, so I reverted back to 3.7.4.
[snip]
Atin Mukherjee
2015-Oct-30 12:14 UTC
[Gluster-users] How-to start gluster when only one node is up ?
-Atin
Sent from one plus one

On Oct 30, 2015 5:28 PM, "Mauro Mozzarelli" <mauro at ezplanet.net> wrote:
>
> Hi,
>
> Atin keeps giving the same answer: "it is by design"
>
> I keep saying "the design is wrong and it should be changed to cater for
> standby servers"

Every design has its own set of limitations, and I would call this a
limitation rather than calling the overall design wrong. I would again
stand by my point that correctness is always a priority in a distributed
system. This behavioural change was introduced in 3.5, and if it was not
covered in the release notes I apologize on behalf of the release
management. As communicated earlier, we will definitely resolve this
issue in GlusterD2.

> In the meantime this is the workaround I am using:
> When the single node starts I stop and start the volume, and then it
> becomes mountable. On CentOS 6 and CentOS 7 it works with releases up to
> 3.7.4. Release 3.7.5 is broken so I reverted back to 3.7.4.

This is where I am not convinced. An explicit volume start should start
the bricks; can you raise a BZ with all the relevant details?

[snip]
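A minimal way to test that point about an explicit start, assuming the
volume name gv_home from Mauro's script; note this uses the 'force'
variant, which asks glusterd to spawn any brick processes that are not
already running:

    # Explicitly (force-)start the volume, then check that the brick
    # row shows Online = Y in the status table.
    gluster volume start gv_home force
    gluster volume status gv_home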