hi Pierre,
Could you send the volume info output of the volume where you are
trying to do this operation, and also point out which brick is giving the
problem?
Pranith
On 01/28/2016 08:55 PM, Pierre Léonard wrote:
> Hi all,
>
> I have a stripe 7 volume of 14 nodes. One of the bricks crashed and I
> replaced the failed disk. Now, on that node, the brick is entirely new.
>
> Then, when I want to start the volume, gluster answers:
>
> [root@xstoocky01 brick2]# gluster volume start gvscratch
> volume start: gvscratch: failed: Staging failed on xstoocky02. Error:
> Failed to get extended attribute trusted.glusterfs.volume-id for brick
> dir /mnt/gvscratch/brick2. Reason : No data available
>
>
> Which seems fairly self-explanatory. I can't remove the brick:
>
> gluster volume remove-brick gvscratch
> xstoocky02:/mnt/gvscratch/brick2 start
> volume remove-brick start: failed: Remove brick incorrect brick count
> of 1 for stripe 7
>
>
>
>
> So how can I re-introduce that brick into the volume?
>
>
> Many thanks,
>
> Pierre Léonard
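The "No data available" error above usually means the brick directory exists but no longer carries the trusted.glusterfs.volume-id extended attribute that glusterd stamps on every brick at volume creation; a freshly formatted disk will not have it. A quick way to confirm this (a sketch, assuming the brick paths shown above) is to compare the brick xattrs on a healthy node with those on the rebuilt one:

    # on a healthy node, e.g. xstoocky01
    getfattr -d -m . -e hex /mnt/gvscratch/brick2
    # expect a line like: trusted.glusterfs.volume-id=0x...

    # on the node with the replaced disk (xstoocky02)
    getfattr -d -m . -e hex /mnt/gvscratch/brick2
    # trusted.glusterfs.volume-id will be missing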
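The remove-brick failure is expected for this layout: on a Distributed-Stripe volume, bricks can only be removed in multiples of the stripe count (7 here), i.e. a whole stripe set at a time, so naming a single brick is rejected. Purely to illustrate the count rule (not a recommendation for this situation), a remove-brick on this volume would have to list an entire set, e.g.:

    gluster volume remove-brick gvscratch \
        xstoocky08:/mnt/gvscratch/brick2 xstoocky09:/mnt/gvscratch/brick2 \
        xstoocky10:/mnt/gvscratch/brick2 xstoocky11:/mnt/gvscratch/brick2 \
        xstoocky12:/mnt/gvscratch/brick2 xstoocky13:/mnt/gvscratch/brick2 \
        xstoocky14:/mnt/gvscratch/brick2 start

Since the goal here is to keep the brick, re-creating the missing volume-id attribute (see the sketch at the end of the thread) is the more direct path.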
hi Pranith,

> hi Pierre,
> Could you send the volume info output of the volume where you are
> trying to do this operation, and also point out which brick is giving the
> problem?
>
> Pranith

Following is the volume info from xstoocky01, where I tried to start the volume:

[root@xstoocky01 glusterfs]# gluster volume info gvscratch

Volume Name: gvscratch
Type: Distributed-Stripe
Volume ID: 295a9f91-42f9-4c51-86fe-e8f18e1f5efc
Status: Stopped
Number of Bricks: 2 x 7 = 14
Transport-type: tcp
Bricks:
Brick1: xstoocky01:/mnt/gvscratch/brick2
Brick2: xstoocky02:/mnt/gvscratch/brick2
Brick3: xstoocky03:/mnt/gvscratch/brick2
Brick4: xstoocky04:/mnt/gvscratch/brick2
Brick5: xstoocky05:/mnt/gvscratch/brick2
Brick6: xstoocky06:/mnt/gvscratch/brick2
Brick7: xstoocky07:/mnt/gvscratch/brick2
Brick8: xstoocky08:/mnt/gvscratch/brick2
Brick9: xstoocky09:/mnt/gvscratch/brick2
Brick10: xstoocky10:/mnt/gvscratch/brick2
Brick11: xstoocky11:/mnt/gvscratch/brick2
Brick12: xstoocky12:/mnt/gvscratch/brick2
Brick13: xstoocky13:/mnt/gvscratch/brick2
Brick14: xstoocky14:/mnt/gvscratch/brick2
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
nfs.disable: off
cluster.quorum-count: 2
cluster.quorum-type: fixed
performance.cache-size: 5GB
cluster.min-free-disk: 5%
performance.io-thread-count: 16
features.grace-timeout: 20
server.allow-insecure: on
performance.flush-behind: on

The problem is on xstoocky02, where the Linux logical volume has been completely rebuilt with mkfs.

Many thanks,
Pierre Léonard
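Since the filesystem under /mnt/gvscratch/brick2 on xstoocky02 was recreated, the new brick directory no longer carries the volume-id attribute that the staging check looks for. One possible way to re-introduce the brick (a sketch, not an official procedure; the hex value is simply the Volume ID above with the dashes stripped) is to re-stamp the attribute on the rebuilt directory and start the volume again:

    # on xstoocky02, as root, assuming /mnt/gvscratch/brick2 is the rebuilt brick directory
    setfattr -n trusted.glusterfs.volume-id \
             -v 0x295a9f9142f94c5186fee8f18e1f5efc /mnt/gvscratch/brick2

    # restart the management daemon on that node, then retry the start
    systemctl restart glusterd    # or: service glusterd restart
    gluster volume start gvscratch

Note that the re-stamped brick starts out empty, and on a pure stripe (non-replicated) volume the data that lived on the failed disk cannot be rebuilt from the remaining bricks.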