On 08/15/2013 10:05 PM, David Gibbons wrote:
> Hi There,
>
> I'm currently testing Gluster for possible production use. I haven't
> been able to find the answer to this question in the forum archives or in
> the public docs. It's possible that I don't know which keywords to
> search for.
>
> Here's the question (more details below): let's say that one of my
> bricks "fails" -- /not/ a whole node failure but a single brick
> failure within the node. How do I replace a single brick on a node and
> force a sync from one of the replicas?
>
> I have two nodes with 5 bricks each:
> gluster> volume info test-a
>
> Volume Name: test-a
> Type: Distributed-Replicate
> Volume ID: e8957773-dd36-44ae-b80a-01e22c78a8b4
> Status: Started
> Number of Bricks: 5 x 2 = 10
> Transport-type: tcp
> Bricks:
> Brick1: 10.250.4.63:/localmnt/g1lv2
> Brick2: 10.250.4.65:/localmnt/g2lv2
> Brick3: 10.250.4.63:/localmnt/g1lv3
> Brick4: 10.250.4.65:/localmnt/g2lv3
> Brick5: 10.250.4.63:/localmnt/g1lv4
> Brick6: 10.250.4.65:/localmnt/g2lv4
> Brick7: 10.250.4.63:/localmnt/g1lv5
> Brick8: 10.250.4.65:/localmnt/g2lv5
> Brick9: 10.250.4.63:/localmnt/g1lv1
> Brick10: 10.250.4.65:/localmnt/g2lv1
>
> I formatted 10.250.4.65:/localmnt/g2lv5 (to simulate a "failure").
> What is the next step? I have tried various combinations of removing
> and re-adding the brick, replacing the brick, etc. I read in a
> previous message to this list that replace-brick was for planned
> changes which makes sense, so that's probably not my next step.
You must first check whether the 'formatted' brick
10.250.4.65:/localmnt/g2lv5 is online, using the `gluster volume status`
command. If it is not, start the volume with `gluster volume start
<VOLNAME> force`. You can then use the `gluster volume heal` command,
which copies the data from the other replica brick onto your formatted
brick.
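
A minimal sketch of that sequence, using the test-a volume from your
output (exact command output and available options may differ slightly
between Gluster releases):

    # 1. Check that every brick is online (look at the 'Online' column)
    gluster volume status test-a

    # 2. If the reformatted brick shows offline, force-start the volume
    #    to bring its brick process back up
    gluster volume start test-a force

    # 3. Trigger a full self-heal so the surviving replica is copied
    #    onto the empty brick
    gluster volume heal test-a full

    # 4. Watch the heal progress
    gluster volume heal test-a info

`heal <VOLNAME> full` crawls the entire volume rather than only the
entries already queued for healing, which is what you want after wiping
a brick clean.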
Hope this helps.
-Ravi
>
> Cheers,
> Dave
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users