Hi guys,

I have a 2-node replicated Gluster setup with the quorum count set to 1 brick. My understanding is that this means the volume will not go down when one brick is disconnected. This has proven false, however: when one brick is disconnected (I just pulled it off the network), the remaining brick goes down as well and I lose my mount points on the server. Can anyone shed some light on what's wrong?

My volume options are as follows:

Volume Name: gfsvolume
Type: Replicate
Volume ID: a29bd2fb-b1ef-4481-be10-c2f4faf4059b
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gfs1:/export/sda/brick
Brick2: gfs2:/export/sda/brick
Options Reconfigured:
cluster.quorum-count: 1
auth.allow: 172.*
cluster.quorum-type: fixed
performance.cache-size: 1914589184
performance.cache-refresh-timeout: 60
cluster.data-self-heal-algorithm: diff
performance.write-behind-window-size: 4MB
nfs.trusted-write: off
nfs.addr-namelookup: off
cluster.server-quorum-type: server
performance.cache-max-file-size: 2MB
network.frame-timeout: 90
network.ping-timeout: 30
performance.quick-read: off
cluster.server-quorum-ratio: 50%

Thank You Kindly,
Kaamesh
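The quorum options above mix two different mechanisms. cluster.quorum-type: fixed with cluster.quorum-count: 1 governs client-side (replication) quorum and would indeed allow writes to continue with a single brick. cluster.server-quorum-type: server, however, enables glusterd-level quorum on top of that: when glusterd decides the trusted pool has lost quorum, it kills its local brick processes. On a 2-node pool, a failed peer is indistinguishable from a network partition, and losing one peer leaves the survivor at or below the configured 50% ratio, so its brick is shut down as well, which matches the behaviour described. A minimal sketch of one possible workaround, assuming server-side quorum is indeed the culprit and using the volume name from the post (note this trades availability for a higher risk of split-brain on a 2-node setup):

  # Disable glusterd-level quorum so the surviving brick keeps serving:
  gluster volume set gfsvolume cluster.server-quorum-type none

  # Verify the option took effect:
  gluster volume info gfsvolume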
I added a third server to the cluster to serve as a tiebreaker. This worked. The third server does not actually contribute any bricks to any volumes.

--------------------------------------------------
Craig Yoshioka

> On Feb 8, 2015, at 2:50 AM, Kaamesh Kamalaaharan <kaamesh at novocraft.com> wrote:
>
> Hi guys. I have a 2 node replicated gluster setup with the quorum count set at 1 brick. [...]
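For reference, a minimal sketch of the tiebreaker approach described above, assuming a third host (here called gfs3, a hypothetical name) that is reachable from the existing peers. It only joins the trusted pool; no bricks are added to any volume:

  # Run on gfs1 or gfs2 to add the new host to the trusted pool:
  gluster peer probe gfs3

  # Confirm all peers show as connected:
  gluster peer status

With three peers, losing any one node still leaves two of three alive, a clear majority, so glusterd keeps the surviving bricks running.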