M. Vale
2011-Nov-03 12:28 UTC
[Gluster-users] glusterfs: after stopping glusterfs we can't start it
Hi, we are using Gluster in a replicated setup with the following configuration:

Volume Name: volume01
Type: Distributed-Replicate
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: gluster01:/mnt
Brick2: gluster02:/mnt
Brick3: gluster03:/mnt
Brick4: gluster04:/mnt
Brick5: gluster05:/mnt
Brick6: gluster06:/mnt
Brick7: gluster51:/mnt
Brick8: gluster52:/mnt
Options Reconfigured:
cluster.data-self-heal-algorithm: full
performance.io-thread-count: 64
diagnostics.brick-log-level: INFO

Then we ran:

gluster volume stop volume01

It took several minutes to complete. After that, running gluster volume info gives:

Volume Name: volume01
Type: Distributed-Replicate
Status: Stopped
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: gluster01:/mnt
Brick2: gluster02:/mnt
Brick3: gluster03:/mnt
Brick4: gluster04:/mnt
Brick5: gluster05:/mnt
Brick6: gluster06:/mnt
Brick7: gluster51:/mnt
Brick8: gluster52:/mnt
Options Reconfigured:
cluster.data-self-heal-algorithm: full
performance.io-thread-count: 64
diagnostics.brick-log-level: INFO

But now if I run gluster volume start volume01, it gives the following error:

operation failed

gluster volume reset gives the same thing:

gluster volume reset volume01
operation failed

And if I try to stop it again:

gluster volume stop volume01
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
operation failed

This occurs with Gluster 3.2 on CentOS 6.0.

Where do I start looking so I can start the volume again?

Thanks,
MV
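As a rough first check before digging deeper, it is worth confirming that glusterd itself is still healthy on every node; the commands below are a minimal sketch assuming a stock RPM install of GlusterFS 3.2 on CentOS 6 (service name and log path as shipped by that package):

    # run on each of the eight nodes
    service glusterd status
    tail -n 50 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log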
krish
2011-Nov-03 12:59 UTC
[Gluster-users] glusterfs: after stopping glusterfs we can't start it
Vale,

Were you running CLI commands from multiple machines simultaneously?

Could you attach the glusterd logs from all the machines in the cluster? Depending on your mode of installation, the log will be either

/usr/local/var/log/glusterfs/usr-local-etc-glusterfs-glusterd.vol.log

or

/var/log/glusterfs/etc-glusterfs-glusterd.vol.log

Thanks,
kp

On 11/03/2011 05:58 PM, M. Vale wrote:
> This occurs with Gluster 3.2 on CentOS 6.0.
>
> Where do I start looking so I can start the volume again?
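If it helps, here is a minimal sketch for gathering those logs from every peer in one pass (hostnames taken from the brick list above; passwordless ssh/scp access is assumed, and the second path is tried only when the first is missing):

    # collect the glusterd log from each peer into the current directory
    for h in gluster01 gluster02 gluster03 gluster04 gluster05 gluster06 gluster51 gluster52; do
        scp "$h:/var/log/glusterfs/etc-glusterfs-glusterd.vol.log" "glusterd-$h.log" || \
        scp "$h:/usr/local/var/log/glusterfs/usr-local-etc-glusterfs-glusterd.vol.log" "glusterd-$h.log"
    done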