search for: __glusterd_handle_add_brick

Displaying 11 results from an estimated 11 matches for "__glusterd_handle_add_brick".

2018 Jan 11
3
Creating cluster replica on 2 nodes 2 bricks each.
...erd_handle_status_volume] 0-management: Received status volume req for volume scratch
[2018-01-10 15:00:29.516071] I [MSGID: 106488] [glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management: Received get vol req
[2018-01-10 15:01:09.872082] I [MSGID: 106482] [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management: Received add brick req
[2018-01-10 15:01:09.872128] I [MSGID: 106578] [glusterd-brick-ops.c:499:__glusterd_handle_add_brick] 0-management: replica-count is 2
[2018-01-10 15:01:09.876763] E [MSGID: 106451] [glusterd-utils.c:6207:glusterd_is_path_in_use] 0-management: /gdata/brick2/scr...
2018 Jan 12
0
Creating cluster replica on 2 nodes 2 bricks each.
...erd_handle_status_volume] 0-management: Received status volume req for volume scratch
[2018-01-10 15:00:29.516071] I [MSGID: 106488] [glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management: Received get vol req
[2018-01-10 15:01:09.872082] I [MSGID: 106482] [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management: Received add brick req
[2018-01-10 15:01:09.872128] I [MSGID: 106578] [glusterd-brick-ops.c:499:__glusterd_handle_add_brick] 0-management: replica-count is 2
[2018-01-10 15:01:09.876763] E [MSGID: 106451] [glusterd-utils.c:6207:glusterd_is_path_in_use] 0-management: /gdata/brick2/scr...
2018 Jan 15
1
Creating cluster replica on 2 nodes 2 bricks each.
...nagement: Received status volume req for volume scratch
[2018-01-10 15:00:29.516071] I [MSGID: 106488] [glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management: Received get vol req
[2018-01-10 15:01:09.872082] I [MSGID: 106482] [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management: Received add brick req
[2018-01-10 15:01:09.872128] I [MSGID: 106578] [glusterd-brick-ops.c:499:__glusterd_handle_add_brick] 0-management: replica-count is 2
[2018-01-10 15:01:09.876763] E [MSGID: 106451] [glusterd-utils.c:6207:glusterd_is_path_in_use] 0...
2018 Jan 11
0
Creating cluster replica on 2 nodes 2 bricks each.
Hi Jose, Gluster is working as expected. The Distribute-replicated type just means that there are now 2 replica sets and files will be distributed across them. A volume of type Replicate (1xn where n is the number of bricks in the replica set) indicates there is no distribution (all files on the volume will be present on all the bricks in the volume). A volume of type Distributed-Replicate
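The Replicate vs Distributed-Replicate distinction Nithya describes can be sketched with the gluster CLI. This is an illustrative fragment only: the hostnames gluster01/gluster02 are assumptions based on the prompts elsewhere in these threads, and exact syntax may differ between Gluster releases.

```shell
# Pure Replicate (1 x 2): one replica set, every file present on both bricks.
gluster volume create scratch replica 2 \
    gluster01:/gdata/brick1/scratch gluster02:/gdata/brick1/scratch

# Adding a second brick pair turns the volume into Distributed-Replicate
# (2 x 2): files are distributed across the two replica sets, but each
# file still has exactly 2 copies.
gluster volume add-brick scratch \
    gluster01:/gdata/brick2/scratch gluster02:/gdata/brick2/scratch
```

So the type change Jose observed is the expected result of adding a second brick pair, not a misconfiguration.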
2018 Jan 10
2
Creating cluster replica on 2 nodes 2 bricks each.
Hi Nithya

This is what I have so far: I have peered both cluster nodes together as a replica, from nodes 1A and 1B. Now, when I try to add it, I get the error that it is already part of a volume. When I run gluster volume info, I see that it has switched to distributed-replica.

Thanks
Jose

[root at gluster01 ~]# gluster volume status
Status of volume: scratch
Gluster process
2018 Apr 25
2
Turn off replication
Looking at the logs, it seems that it is trying to add using the same port that was assigned for gluster01ib. Any ideas?

Jose

[2018-04-25 22:08:55.169302] I [MSGID: 106482] [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management: Received add brick req
[2018-04-25 22:08:55.186037] I [run.c:191:runner_log] (-->/usr/lib64/glusterfs/3.8.15/xlator/mgmt/glusterd.so(+0x33045) [0x7f5464b9b045] -->/usr/lib64/glusterfs/3.8.15/xlator/mgmt/glusterd.so(+0xcbd85) [0x7f5464c33d85] -->/lib64/libglusterfs.so.0(runne...
2018 Apr 27
0
Turn off replication
...hez <josesanc at carc.unm.edu> wrote:
Looking at the logs, it seems that it is trying to add using the same port was assigned for gluster01ib:

Any Ideas??

Jose

[2018-04-25 22:08:55.169302] I [MSGID: 106482] [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management: Received add brick req
[2018-04-25 22:08:55.186037] I [run.c:191:runner_log] (-->/usr/lib64/glusterfs/3.8.15/xlator/mgmt/glusterd.so(+0x33045) [0x7f5464b9b045] -->/usr/lib64/glusterfs/3.8.15/xlator/mgmt/glusterd.so(+0xcbd85) [0x7f5464c33d85] -->...
2018 Apr 30
2
Turn off replication
...ooking at the logs, it seems that it is trying to add using the same port was assigned for gluster01ib:

Any Ideas??

Jose

[2018-04-25 22:08:55.169302] I [MSGID: 106482] [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management: Received add brick req
[2018-04-25 22:08:55.186037] I [run.c:191:runner_log] (-->/usr/lib64/glusterfs/3.8.15/xlator/mgmt/glusterd.so(+0x33045) [0x7f5464b9b045] -->/usr/lib64/glusterfs/3.8.15/xlator/mgmt/glusterd.so(+0xcbd85) ...
2018 May 02
0
Turn off replication
...lt;josesanc at carc.unm.edu> wrote:
Looking at the logs, it seems that it is trying to add using the same port was assigned for gluster01ib:

Any Ideas??

Jose

[2018-04-25 22:08:55.169302] I [MSGID: 106482] [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management: Received add brick req
[2018-04-25 22:08:55.186037] I [run.c:191:runner_log] (-->/usr/lib64/glusterfs/3.8.15/xlator/mgmt/glusterd.so(+0x33045) [0x7f5464b9b045] -->/usr/lib64/glusterfs/3.8.15/xlator/mgmt/glusterd.so(+0xcbd85) [0x7f5464c33d85] -->...
2018 Apr 25
0
Turn off replication
...validate_fn] 0-management: ADD-brick prevalidation failed.
[2018-04-25 21:04:56.198716] E [MSGID: 106122] [glusterd-mgmt-handler.c:337:glusterd_handle_pre_validate_fn] 0-management: Pre Validation failed on operation Add brick
[2018-04-25 21:07:11.084205] I [MSGID: 106482] [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management: Received add brick req
[2018-04-25 21:07:11.087682] E [MSGID: 106452] [glusterd-utils.c:6064:glusterd_new_brick_validate] 0-management: Brick: gluster02ib:/gdata/brick2/scratch not available. Brick may be containing or be contained by an existing brick
[2018-04-25 21:07:11.087716] W...
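The "Brick may be containing or be contained by an existing brick" error above is typically raised when the directory still carries metadata from a previous volume membership. A hedged cleanup sketch follows, using the brick path from the log; run it only on the node that owns the brick, and only if that brick is deliberately being reused and any leftover data on it is expendable.

```shell
# Remove the volume-id and gfid extended attributes that mark the
# directory as belonging to a (former) Gluster volume.
setfattr -x trusted.glusterfs.volume-id /gdata/brick2/scratch
setfattr -x trusted.gfid /gdata/brick2/scratch

# Remove the internal bookkeeping directory left by the old volume.
rm -rf /gdata/brick2/scratch/.glusterfs
```

After this, add-brick should no longer see the path as in use; an alternative that avoids the cleanup entirely is to use a fresh directory for the new brick.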
2018 Apr 12
2
Turn off replication
On Wed, Apr 11, 2018 at 7:38 PM, Jose Sanchez <josesanc at carc.unm.edu> wrote:
Hi Karthik

Looking at the information you have provided me, I would like to make sure that I'm running the right commands.

1. gluster volume heal scratch info
If the count is non zero, trigger the heal and wait for heal info count to become zero.
2. gluster volume
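The check-then-heal sequence enumerated above can be written out as a short CLI sketch. The volume name scratch comes from the thread; the polling loop is illustrative, not a verified transcript.

```shell
# Step 1: list entries that still need healing; a non-zero count
# means the replicas are not yet in sync.
gluster volume heal scratch info

# Step 2: trigger a heal, then re-run the info command until every
# brick reports "Number of entries: 0".
gluster volume heal scratch
gluster volume heal scratch info
```

Only once the pending-heal count reaches zero on all bricks is it safe to proceed with topology changes such as removing or replacing bricks.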