Hi Jose,
Thanks for providing the volume info. You have two replica subvolumes; data is
replicated within the bricks of each subvolume. The first one consists of
Node A's brick1 and Node B's brick1, and the second one of Node A's brick2 and
Node B's brick2.
You do not have the same data on all 4 bricks. The data is distributed
between these two subvolumes.
To drop to a single copy (replica 1), you can remove one brick from each
subvolume with the command:

gluster volume remove-brick scratch replica 1 gluster02ib:/gdata/brick1/scratch gluster02ib:/gdata/brick2/scratch force
This way you will keep one copy of the data from each of the two distribute
subvolumes.
Before doing this, make sure the "gluster volume heal scratch info" output
shows zero pending entries, so that the copies you retain have the correct
data.
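For example (just a sketch; the exact output format can differ between
versions), you can check and, if needed, trigger the heal like this:

# Every brick should report "Number of entries: 0"
gluster volume heal scratch info

# If entries are pending, trigger a heal and check again
gluster volume heal scratch
gluster volume heal scratch info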
After the remove-brick, erase the data from the backend of the removed bricks.
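For example, on gluster02ib something along these lines would clear the
removed bricks so they can be reused later (paths taken from your volume
info; double-check them, and run this only after the remove-brick has
completed):

# Wipe the data and the .glusterfs folder of each removed brick
rm -rf /gdata/brick1/scratch/* /gdata/brick1/scratch/.glusterfs
rm -rf /gdata/brick2/scratch/* /gdata/brick2/scratch/.glusterfs

# If you reuse the same directories as new bricks, also clear the old
# volume-id xattr on the brick roots
setfattr -x trusted.glusterfs.volume-id /gdata/brick1/scratch
setfattr -x trusted.glusterfs.volume-id /gdata/brick2/scratch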
Then you can expand the volume by following the steps at [1].
[1]
https://docs.gluster.org/en/latest/Administrator%20Guide/Managing%20Volumes/#expanding-volumes
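As a rough sketch of [1] with your brick paths (assuming the gluster02ib
bricks have been wiped as above), expanding to a 4-brick pure distribute
volume would look something like:

# Add the cleaned bricks back, without the replica keyword
gluster volume add-brick scratch gluster02ib:/gdata/brick1/scratch gluster02ib:/gdata/brick2/scratch

# Spread the existing data across all four bricks
gluster volume rebalance scratch start
gluster volume rebalance scratch status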
Regards,
Karthik
On Fri, Apr 6, 2018 at 11:39 PM, Jose Sanchez <josesanc at carc.unm.edu>
wrote:
> Hi Karthik
>
> This is our configuration: it is 2 x 2 = 4, all replicated, and each
> brick has 14 TB. We have 2 nodes, A and B, each one with brick 1 and brick 2.
>
> Node A's brick 1 (14 TB) is replicated with Node B's brick 1 (14 TB), and the
> same for the second pair, A's brick 2 (14 TB) and B's brick 2 (14 TB).
>
> Do you think we need to degrade the node first before removing it? I
> believe the same copy of data is on all 4 bricks; we would like to keep one
> of them and add the other bricks as extra space.
>
> Thanks for your help on this
>
> Jose
>
>
>
>
>
> [root at gluster01 ~]# gluster volume info scratch
>
>
> Volume Name: scratch
> Type: Distributed-Replicate
> Volume ID: 23f1e4b1-b8e0-46c3-874a-58b4728ea106
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 2 x 2 = 4
> Transport-type: tcp,rdma
> Bricks:
> Brick1: gluster01ib:/gdata/brick1/scratch
> Brick2: gluster02ib:/gdata/brick1/scratch
> Brick3: gluster01ib:/gdata/brick2/scratch
> Brick4: gluster02ib:/gdata/brick2/scratch
> Options Reconfigured:
> performance.readdir-ahead: on
> nfs.disable: on
>
> [root at gluster01 ~]# gluster volume status all
> Status of volume: scratch
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick gluster01ib:/gdata/brick1/scratch     49152     49153      Y       1743
> Brick gluster02ib:/gdata/brick1/scratch     49156     49157      Y       1732
> Brick gluster01ib:/gdata/brick2/scratch     49154     49155      Y       1738
> Brick gluster02ib:/gdata/brick2/scratch     49158     49159      Y       1733
> Self-heal Daemon on localhost               N/A       N/A        Y       1728
> Self-heal Daemon on gluster02ib             N/A       N/A        Y       1726
>
> Task Status of Volume scratch
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
> ---------------------------------
> Jose Sanchez
> Systems/Network Analyst 1
> Center of Advanced Research Computing
> 1601 Central Ave
> MSC 01 1190
> Albuquerque, NM 87131-0001
> carc.unm.edu
> 575.636.4232
>
> On Apr 6, 2018, at 3:49 AM, Karthik Subrahmanya <ksubrahm at redhat.com>
> wrote:
>
> Hi Jose,
>
> By switching to a pure distribute volume you will lose availability if
> something goes bad.
>
> I am guessing you have an n x 2 volume.
> If you want to preserve one copy of the data in all the distributes, you
> can do that by decreasing the replica count in the remove-brick operation.
> If you have any inconsistencies, heal them first using the "gluster volume
> heal <volname>" command and wait till the "gluster volume heal <volname>
> info" output becomes zero before removing the bricks, so that you will
> have the correct data.
> If you do not want to preserve the data then you can directly remove the
> bricks.
> Even after removing the bricks the data will be present in the backend of
> the removed bricks. You have to manually erase them (both data and
> .glusterfs folder).
> See [1] for more details on remove-brick.
>
> [1]. https://docs.gluster.org/en/latest/Administrator%20Guide/Managing%20Volumes/#shrinking-volumes
>
> HTH,
> Karthik
>
>
> On Thu, Apr 5, 2018 at 8:17 PM, Jose Sanchez <josesanc at carc.unm.edu>
> wrote:
>
>>
>> We have a Gluster setup with 2 nodes (distributed replication) and we
>> would like to switch it to distributed mode. I know the data is
>> duplicated between those nodes. What is the proper way of switching it to
>> distributed? We would like to double, or at least gain, storage space on
>> our Gluster storage node. What happens with the data? Do I need to erase
>> one of the nodes?
>>
>> Jose
>>
>>
>> ---------------------------------
>> Jose Sanchez
>> Systems/Network Analyst
>> Center of Advanced Research Computing
>> 1601 Central Ave
>> MSC 01 1190
>> Albuquerque, NM 87131-0001
>> carc.unm.edu
>> 575.636.4232
>>
>>
>
>
>