hi Pranith,

> hi Pierre,
> Could you send volume info output of the volume where you are
> trying to do this operation and also point out which brick is giving
> problem.
>
> Pranith

Following is the volume info from xstoocky01, where I tried to start the volume:

[root@xstoocky01 glusterfs]# gluster volume info gvscratch

Volume Name: gvscratch
Type: Distributed-Stripe
Volume ID: 295a9f91-42f9-4c51-86fe-e8f18e1f5efc
Status: Stopped
Number of Bricks: 2 x 7 = 14
Transport-type: tcp
Bricks:
Brick1: xstoocky01:/mnt/gvscratch/brick2
Brick2: xstoocky02:/mnt/gvscratch/brick2
Brick3: xstoocky03:/mnt/gvscratch/brick2
Brick4: xstoocky04:/mnt/gvscratch/brick2
Brick5: xstoocky05:/mnt/gvscratch/brick2
Brick6: xstoocky06:/mnt/gvscratch/brick2
Brick7: xstoocky07:/mnt/gvscratch/brick2
Brick8: xstoocky08:/mnt/gvscratch/brick2
Brick9: xstoocky09:/mnt/gvscratch/brick2
Brick10: xstoocky10:/mnt/gvscratch/brick2
Brick11: xstoocky11:/mnt/gvscratch/brick2
Brick12: xstoocky12:/mnt/gvscratch/brick2
Brick13: xstoocky13:/mnt/gvscratch/brick2
Brick14: xstoocky14:/mnt/gvscratch/brick2
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
nfs.disable: off
cluster.quorum-count: 2
cluster.quorum-type: fixed
performance.cache-size: 5GB
cluster.min-free-disk: 5%
performance.io-thread-count: 16
features.grace-timeout: 20
server.allow-insecure: on
performance.flush-behind: on

The problem is on xstoocky02, where the Linux logical volume has been completely rebuilt with mkfs.

Many thanks,

Pierre Léonard
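For reference, a rough way to confirm which brick is blocking the start; this assumes default log locations, and the exact glusterd log file name can differ between versions and distributions:

[root@xstoocky01 ~]# gluster volume start gvscratch
# if the start is rejected, glusterd's log usually names the brick that failed
[root@xstoocky01 ~]# less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
# on the suspect node, check whether the brick path still carries the
# extended attributes glusterd expects (a freshly mkfs'd filesystem will not)
[root@xstoocky02 ~]# getfattr -d -m . -e hex /mnt/gvscratch/brick2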
On 01/29/2016 02:03 PM, Pierre Léonard wrote:
> hi Pranith,
>
>> hi Pierre,
>> Could you send volume info output of the volume where you are
>> trying to do this operation and also point out which brick is giving
>> problem.
>>
>> Pranith
>
> Following is the volume info from xstoocky01, where I tried to start
> the volume:
>
> [root@xstoocky01 glusterfs]# gluster volume info gvscratch
>
> Volume Name: gvscratch
> Type: Distributed-Stripe
> Volume ID: 295a9f91-42f9-4c51-86fe-e8f18e1f5efc
> Status: Stopped
> Number of Bricks: 2 x 7 = 14
> Transport-type: tcp
> Bricks:
> Brick1: xstoocky01:/mnt/gvscratch/brick2
> Brick2: xstoocky02:/mnt/gvscratch/brick2
> Brick3: xstoocky03:/mnt/gvscratch/brick2
> Brick4: xstoocky04:/mnt/gvscratch/brick2
> Brick5: xstoocky05:/mnt/gvscratch/brick2
> Brick6: xstoocky06:/mnt/gvscratch/brick2
> Brick7: xstoocky07:/mnt/gvscratch/brick2
> Brick8: xstoocky08:/mnt/gvscratch/brick2
> Brick9: xstoocky09:/mnt/gvscratch/brick2
> Brick10: xstoocky10:/mnt/gvscratch/brick2
> Brick11: xstoocky11:/mnt/gvscratch/brick2
> Brick12: xstoocky12:/mnt/gvscratch/brick2
> Brick13: xstoocky13:/mnt/gvscratch/brick2
> Brick14: xstoocky14:/mnt/gvscratch/brick2
> Options Reconfigured:
> diagnostics.count-fop-hits: on
> diagnostics.latency-measurement: on
> nfs.disable: off
> cluster.quorum-count: 2
> cluster.quorum-type: fixed
> performance.cache-size: 5GB
> cluster.min-free-disk: 5%
> performance.io-thread-count: 16
> features.grace-timeout: 20
> server.allow-insecure: on
> performance.flush-behind: on
>
> The problem is on xstoocky02, where the Linux logical volume has been
> completely rebuilt with mkfs.

How will you rebuild the lost data, if we did mkfs?

Pranith

> Many thanks,
>
> Pierre Léonard
Hi Pranith,

>> The problem is on xstoocky02, where the Linux logical volume has been
>> completely rebuilt with mkfs.
> How will you rebuild the lost data, if we did mkfs?
>
> Pranith

That is a scratch volume; the users know the data can be lost, and on that node the data are indeed gone. Now I just want to restart the volume. Do you mean that the only solution is to destroy the volume and rebuild it from scratch?

Pierre
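In case it helps, a rough, untested sketch of one approach that is sometimes used when a brick's filesystem has been wiped and its data is expendable: recreate the brick directory at the same path, stamp it with the volume's UUID so glusterd accepts it as a brick of this volume again, and then force-start. This assumes the new filesystem is already mounted at /mnt/gvscratch on xstoocky02; whether this is advisable for a Distributed-Stripe volume is something Pranith can confirm.

# on a healthy node, read the volume-id xattr that every brick must carry
getfattr -n trusted.glusterfs.volume-id -e hex /mnt/gvscratch/brick2

# on xstoocky02, recreate the brick directory and apply the same value
# (0x295a9f91... is the Volume ID from the info above, with the dashes removed)
mkdir -p /mnt/gvscratch/brick2
setfattr -n trusted.glusterfs.volume-id -v 0x295a9f9142f94c5186fee8f18e1f5efc /mnt/gvscratch/brick2

# then try a forced start from any node in the pool
gluster volume start gvscratch force

After such a start, the brick on xstoocky02 would simply come up empty; on a pure stripe set with no replication, the files whose stripes lived on that brick are lost either way.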