Hi,
There isn't a way to replace the failing tier brick through a single
command, as we don't have support for replace-brick, remove-brick or
add-brick on a tiered volume.
Once you bring the brick online (volume start force), the data on the
brick will be rebuilt by the self-heal daemon (this works because it's a
replicated tier).
But adding a brick will still not work.
If you instead use the force option, the detach will work as expected,
but it can cause data loss for data that is still only on the hot tier.
The "volume start force" will start a new brick process for the brick
that was down.
If it doesn't, we need to check the brick logs for the reason why it's
not starting.
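Roughly, the sequence would look like this (volume name taken from your
status output; the brick log path is just the default location and its
file name follows the brick path, so adjust if yours differs):

# gluster volume start labgreenbin force       (respawns the brick process that is down)
# gluster volume status labgreenbin            (labgfs51:/gfs/p1-tier/mount should now show Online: Y)
# gluster volume heal labgreenbin info         (watch the pending heal entries drain)

and, on labgfs51, if the brick still refuses to start:

# less /var/log/glusterfs/bricks/gfs-p1-tier-mount.log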
On Mon, Mar 5, 2018 at 3:07 PM, Curt Lestrup <curt at lestrup.se> wrote:
> Hi Hari,
>
> I tried and now understand the implications of 'detach force'.
> The brick failure was not caused by glusterfs, so it has nothing to do
> with the glusterfs version.
>
> In fact, my question is about how to replace a failing tier brick.
>
> The setup is replicated, so shouldn't there be a way to attach a new brick
> without terminating and rebuilding the tier?
> Or at least allow a 'force' to 'gluster volume tier labgreenbin detach
> start'?
>
> How should I understand the below? Is it not misleading, since it refers to
> the offline brick?
>> volume tier detach start: failed: Pre Validation failed on labgfs51. Found
>> stopped brick labgfs51:/gfs/p1-tier/mount. Use force option to remove the
>> offline brick
>
> Tried "gluster volume start labgreenbin force" but that did not
reinitialize a new brick.
>
> /C
>
> On 2018-03-05, 07:31, "Hari Gowtham" <hgowtham at redhat.com> wrote:
>
> Hi Curt,
>
> gluster volume tier labgreenbin detach force will convert the volume
> from a tiered volume to a normal volume by detaching all the hot
> bricks.
> The force command won't move the data to the cold bricks. If the hot
> brick had data, it will not be moved.
>
> Here you can copy the data on the hot brick to the mount point after
> detach.
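>
> For example, something like this (the client mount point is just an
> example path, and the rsync exclude keeps the brick's internal
> .glusterfs directory out of the copy):
>
> # mount -t glusterfs labgfs11:/labgreenbin /mnt/labgreenbin
> # rsync -a --exclude='.glusterfs' /gfs/p1-tier/mount/ /mnt/labgreenbin/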
>
> Or you can do a "gluster volume start labgreenbin force" to restart
> the brick that has gone down.
>
> Did it happen with 3.12.6 too?
> If yes, do you have any idea about how the brick went down?
>
>
> On Sun, Mar 4, 2018 at 8:08 PM, Curt Lestrup <curt at lestrup.se> wrote:
> > Hi,
> >
> >
> >
> > Have a glusterfs 3.10.10 (tried 3.12.6 as well) volume on Ubuntu 16.04
> > with a 3 ssd tier where one ssd is bad.
> >
> >
> >
> > Status of volume: labgreenbin
> >
> > Gluster process                            TCP Port  RDMA Port  Online  Pid
> > ------------------------------------------------------------------------------
> > Hot Bricks:
> > Brick labgfs81:/gfs/p1-tier/mount          49156     0          Y       4217
> > Brick labgfs51:/gfs/p1-tier/mount          N/A       N/A        N       N/A
> > Brick labgfs11:/gfs/p1-tier/mount          49152     0          Y       643
> > Cold Bricks:
> > Brick labgfs11:/gfs/p1/mount               49153     0          Y       312
> > Brick labgfs51:/gfs/p1/mount               49153     0          Y       295
> > Brick labgfs81:/gfs/p1/mount               49153     0          Y       307
> >
> >
> >
> > Cannot find a command to replace the ssd, so instead trying to detach
> > the tier, but:
> >
> >
> >
> > # gluster volume tier labgreenbin detach start
> >
> > volume tier detach start: failed: Pre Validation failed on labgfs51. Found
> > stopped brick labgfs51:/gfs/p1-tier/mount. Use force option to remove the
> > offline brick
> >
> > Tier command failed
> >
> >
> >
> > 'force' results in Usage:
> >
> > # gluster volume tier labgreenbin detach start force
> >
> > Usage:
> >
> > volume tier <VOLNAME> status
> >
> > volume tier <VOLNAME> start [force]
> >
> > volume tier <VOLNAME> stop
> >
> > volume tier <VOLNAME> attach [<replica COUNT>] <NEW-BRICK>... [force]
> >
> > volume tier <VOLNAME> detach <start|stop|status|commit|[force]>
> >
> >
> >
> > So trying to remove the brick:
> >
> > # gluster v remove-brick labgreenbin replica 2 labgfs51:/gfs/p1-tier/mount force
> >
> > Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
> >
> > volume remove-brick commit force: failed: Removing brick from a Tier volume
> > is not allowed
> >
> >
> >
> > Succeeded removing the tier with:
> >
> > # gluster volume tier labgreenbin detach force
> >
> >
> >
> > but what does that mean? Will the content of the tier get lost?
> >
> >
> >
> > How to solve this situation?
> >
> > /Curt
> >
> >
> >
> >
> > _______________________________________________
> > Gluster-users mailing list
> > Gluster-users at gluster.org
> > http://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>
> --
> Regards,
> Hari Gowtham.
>
>
--
Regards,
Hari Gowtham.