search for: gluster02ib

Displaying 14 results from an estimated 14 matches for "gluster02ib".

2018 Apr 25
2
Turn off replication
...scratch on port 49152
[2018-04-25 22:08:55.309659] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /gdata/brick1/scratch.rdma on port 49153
[2018-04-25 22:08:55.310231] E [MSGID: 106005] [glusterd-utils.c:4877:glusterd_brick_start] 0-management: Unable to start brick gluster02ib:/gdata/brick1/scratch
[2018-04-25 22:08:55.310275] E [MSGID: 106074] [glusterd-brick-ops.c:2493:glusterd_op_add_brick] 0-glusterd: Unable to add bricks
[2018-04-25 22:08:55.310304] E [MSGID: 106123] [glusterd-mgmt.c:294:gd_mgmt_v3_commit_fn] 0-management: Add-brick commit failed. [2018-04-25 22:08:...
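When glusterd logs "Unable to start brick" during an add-brick commit like this, a common first check is the brick's state and a force start. A sketch only, using the volume name and brick path from the log above:

    # on the node that owns the failing brick (gluster02ib here)
    gluster volume status scratch                  # see which brick processes are offline
    getfattr -d -m . -e hex /gdata/brick1/scratch  # look for a leftover trusted.glusterfs.volume-id xattr
    gluster volume start scratch force             # restart offline brick processes without touching the others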
2018 Apr 12
2
Turn off replication
...ation you have provided me, I would like to make sure that I'm running the right commands.
>
> 1. gluster volume heal scratch info
>    If the count is non-zero, trigger the heal and wait for the heal info count to become zero.
> 2. gluster volume remove-brick scratch replica 1 gluster02ib:/gdata/brick1/scratch gluster02ib:/gdata/brick2/scratch force
> 3. gluster volume add-brick scratch gluster02ib:/gdata/brick1/scratch gluster02ib:/gdata/brick2/scratch
>
> Based on the configuration I have, Brick 1 from Node A and B are tied together and Brick 2...
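Written out as they would actually be typed, those three steps look roughly like the sketch below; the mangled add-brick line is reconstructed on the assumption that no replica count is passed, which leaves the volume as plain distribute:

    gluster volume heal scratch info    # proceed only once the pending-entry count is zero
    gluster volume remove-brick scratch replica 1 gluster02ib:/gdata/brick1/scratch gluster02ib:/gdata/brick2/scratch force
    gluster volume add-brick scratch gluster02ib:/gdata/brick1/scratch gluster02ib:/gdata/brick2/scratch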
2018 Apr 25
0
Turn off replication
Hello Karthik, I'm having trouble adding the two bricks back online. Any help is appreciated, thanks. When I try the add-brick command, this is what I get:

[root at gluster01 ~]# gluster volume add-brick scratch gluster02ib:/gdata/brick2/scratch/
volume add-brick: failed: Pre Validation failed on gluster02ib. Brick: gluster02ib:/gdata/brick2/scratch not available. Brick may be containing or be contained by an existing brick

I have run the following commands and removed the .glusterfs hidden directories [root at glus...
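Removing .glusterfs alone is usually not enough here: pre-validation also trips over the extended attributes Gluster leaves on the brick root. A hedged sketch of the usual cleanup before reusing a brick directory (standard xattr names; run on the node that owns the brick):

    setfattr -x trusted.glusterfs.volume-id /gdata/brick2/scratch  # remove the old volume's id
    setfattr -x trusted.gfid /gdata/brick2/scratch                 # remove the root gfid marker
    rm -rf /gdata/brick2/scratch/.glusterfs                        # clear internal metadata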
2018 Apr 27
0
Turn off replication
...18-04-25 22:08:55.309659] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /gdata/brick1/scratch.rdma on port 49153
> [2018-04-25 22:08:55.310231] E [MSGID: 106005] [glusterd-utils.c:4877:glusterd_brick_start] 0-management: Unable to start brick gluster02ib:/gdata/brick1/scratch
> [2018-04-25 22:08:55.310275] E [MSGID: 106074] [glusterd-brick-ops.c:2493:glusterd_op_add_brick] 0-glusterd: Unable to add bricks
> [2018-04-25 22:08:55.310304] E [MSGID: 106123] [glusterd-mgmt.c:294:gd_mgmt_v3_commit_fn] 0-management: Add-brick commit &...
2018 Apr 30
2
Turn off replication
...9659] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /gdata/brick1/scratch.rdma on port 49153
>> [2018-04-25 22:08:55.310231] E [MSGID: 106005] [glusterd-utils.c:4877:glusterd_brick_start] 0-management: Unable to start brick gluster02ib:/gdata/brick1/scratch
>> [2018-04-25 22:08:55.310275] E [MSGID: 106074] [glusterd-brick-ops.c:2493:glusterd_op_add_brick] 0-glusterd: Unable to add bricks
>> [2018-04-25 22:08:55.310304] E [MSGID: 106123] [glusterd-mgmt.c:294:gd_mgmt_v3_commit_fn] 0-management...
2018 May 02
0
Turn off replication
...18-04-25 22:08:55.309659] I [MSGID: 106143] [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick /gdata/brick1/scratch.rdma on port 49153
> [2018-04-25 22:08:55.310231] E [MSGID: 106005] [glusterd-utils.c:4877:glusterd_brick_start] 0-management: Unable to start brick gluster02ib:/gdata/brick1/scratch
> [2018-04-25 22:08:55.310275] E [MSGID: 106074] [glusterd-brick-ops.c:2493:glusterd_op_add_brick] 0-glusterd: Unable to add bricks
> [2018-04-25 22:08:55.310304] E [MSGID: 106123] [glusterd-mgmt.c:294:gd_mgmt_v3_commit_fn] 0-management: Add-brick commit &...
2018 Jan 10
2
Creating cluster replica on 2 nodes 2 bricks each.
...ster01 ~]# gluster volume status
Status of volume: scratch
Gluster process                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster01ib:/gdata/brick1/scratch    49152     49153      Y       3140
Brick gluster02ib:/gdata/brick1/scratch    49153     49154      Y       2634
Self-heal Daemon on localhost              N/A       N/A        Y       3132
Self-heal Daemon on gluster02ib            N/A       N/A        Y       2626

Task Status of Volume scratch
------------------------------------------------...
2018 Jan 11
0
Creating cluster replica on 2 nodes 2 bricks each.
...> Status of volume: scratch
> Gluster process                            TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick gluster01ib:/gdata/brick1/scratch    49152     49153      Y       3140
> Brick gluster02ib:/gdata/brick1/scratch    49153     49154      Y       2634
> Self-heal Daemon on localhost              N/A       N/A        Y       3132
> Self-heal Daemon on gluster02ib            N/A       N/A        Y       2626
>
> Task Status of Volume scratch
> --------------------------...
2018 Jan 11
3
Creating cluster replica on 2 nodes 2 bricks each.
Hi Nithya, thanks for helping me with this. I understand now, but I have a few questions. When I had it set up in replica (just 2 nodes with 2 bricks) and tried to add them, it failed.

> [root at gluster01 ~]# gluster volume add-brick scratch replica 2 gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch
> volume add-brick: failed: /gdata/brick2/scratch is already part of a volume

And after that, I ran status and info on it, and in the status I get just the two bricks:

> Brick gluster01ib:/gdata/brick1/scratch    49152    49153    Y    3140
> Brick gluste...
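For reference, once the second pair of brick directories is genuinely clean, growing the 1x2 replica into a 2x2 distribute-replicate would look something like the sketch below (not a command the thread confirms was run):

    gluster volume add-brick scratch replica 2 gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch
    gluster volume rebalance scratch start   # spread existing files across both replica pairs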
2018 Jan 15
1
Creating cluster replica on 2 nodes 2 bricks each.
...helping me with this, I understand now, but I have a few questions.
>
> When I had it set up in replica (just 2 nodes with 2 bricks) and tried to add them, it failed.
>
>> [root at gluster01 ~]# gluster volume add-brick scratch replica 2 gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch
>> volume add-brick: failed: /gdata/brick2/scratch is already part of a volume
>>

Did you try the add-brick operation several times with the same bricks? If yes, that could be the cause, as Gluster sets xattrs on the brick root directory...
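A quick way to check for those leftover xattrs, as a sketch; a trusted.glusterfs.volume-id line in the hex dump is the tell-tale that the directory is still marked as a brick:

    getfattr -d -m . -e hex /gdata/brick2/scratch
    # output containing trusted.glusterfs.volume-id=0x... means a previous volume still claims this directory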
2018 Jan 10
0
Creating cluster replica on 2 nodes 2 bricks each.
Hi, please let us know what commands you ran so far and the output of the *gluster volume info* command. Thanks, Nithya

On 9 January 2018 at 23:06, Jose Sanchez <josesanc at carc.unm.edu> wrote:
> Hello
>
> We are trying to set up Gluster for our project/scratch storage HPC machine using replicated mode with 2 nodes, 2 bricks each (14tb each).
>
> Our goal is to be
2018 Jan 12
0
Creating cluster replica on 2 nodes 2 bricks each.
....org> Hi Nithya, thanks for helping me with this. I understand now, but I have a few questions. When I had it set up in replica (just 2 nodes with 2 bricks) and tried to add them, it failed.

> [root at gluster01 ~]# gluster volume add-brick scratch replica 2 gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch
> volume add-brick: failed: /gdata/brick2/scratch is already part of a volume

Did you try the add-brick operation several times with the same bricks? If yes, that could be the cause, as Gluster sets xattrs on the brick root directory.

And after that, I ran the status...
2018 Apr 07
0
Turn off replication
...& Node B's brick1, and the second one consisting of Node A's brick2 and Node B's brick2. You don't have the same data on all 4 bricks; data is distributed between these two subvolumes. To remove the replica you can use the command:

gluster volume remove-brick scratch replica 1 gluster02ib:/gdata/brick1/scratch gluster02ib:/gdata/brick2/scratch force

So you will have one copy of the data present from both of the distributes. Before doing this, make sure the "gluster volume heal scratch info" count is zero, so the copies you retain have the correct data. After the remove-brick erase t...
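The "heal info must be zero" precondition can be scripted ahead of the remove-brick, along these lines (a sketch; the grep pattern assumes the usual "Number of entries:" lines in the heal info output):

    # wait until no entries are pending heal on any brick
    while gluster volume heal scratch info | grep -q 'Number of entries: [1-9]'; do
        sleep 10
    done
    gluster volume remove-brick scratch replica 1 gluster02ib:/gdata/brick1/scratch gluster02ib:/gdata/brick2/scratch force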
2018 Jan 09
2
Creating cluster replica on 2 nodes 2 bricks each.
Hello. We are trying to set up Gluster for our project/scratch storage HPC machine using replicated mode with 2 nodes, 2 bricks each (14tb each). Our goal is to have a replicated system between nodes 1 and 2 (A bricks) and then add an additional 2 bricks (B bricks) from the 2 nodes, so we can have a total of 28tb in replicated mode.

Node 1 [ (Brick A) (Brick B) ]
Node 2 [ (Brick A) (Brick
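With that layout, brick order on the create line is what fixes the replica pairs: consecutive bricks form one replica set. A sketch of a matching create command (hostnames and brick paths assumed from the other messages in these results; tcp,rdma matches the RDMA ports visible in the status output above):

    gluster volume create scratch replica 2 transport tcp,rdma \
        gluster01ib:/gdata/brick1/scratch gluster02ib:/gdata/brick1/scratch \
        gluster01ib:/gdata/brick2/scratch gluster02ib:/gdata/brick2/scratch
    gluster volume start scratch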