search for: 106578

Displaying 14 results from an estimated 14 matches for "106578".

2018 Apr 25
3
Problem adding replicated bricks on FreeBSD
...nd then try to run gluster volume add-brick poc replica 2 s2:/gluster/1/poc it will always fail (sometimes after a pause, sometimes not.) The only error I'm seeing on the server hosting the new brick, aside from the generic "Unable to add bricks" message, is like so: I [MSGID: 106578] [glusterd-brick-ops.c:1352:glusterd_op_perform_add_bricks] 0-management: replica-count is set 2 I [MSGID: 106578] [glusterd-brick-ops.c:1362:glusterd_op_perform_add_bricks] 0-management: type is set 2, need to change it E [MSGID: 106054] [glusterd-utils.c:12974:glusterd_handle_replicate_...
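A rough reproduction sketch of what the excerpt above describes: a volume whose only brick sits on one host, converted to replica 2 by adding a brick from a second host. The create/start steps and the host name s1 are assumptions for illustration; only the add-brick command and the s2 brick path appear in the message.

    # Assumed setup: a single-brick volume "poc" on a host s1 (not shown in the excerpt).
    gluster volume create poc s1:/gluster/1/poc
    gluster volume start poc
    # The step reported to fail on FreeBSD, converting the volume to replica 2,
    # which also produces the "type is set 2, need to change it" log line:
    gluster volume add-brick poc replica 2 s2:/gluster/1/poc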
2018 Jan 29
2
Replacing a third data node with an arbiter one
...4/libglusterfs.so.0(runner_log+0x105) [0x7fcd4f48d0b5] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/add-brick/pre/S28Quota-enable-root-xattr-heal.sh --volname=thedude --version=1 --volume-op=add-brick --gd-workdir=/var/lib/glusterd [2018-01-29 15:15:52.999816] I [MSGID: 106578] [glusterd-brick-ops.c:1354:glusterd_op_perform_add_bricks] 0-management: replica-count is set 3 [2018-01-29 15:15:52.999849] I [MSGID: 106578] [glusterd-brick-ops.c:1359:glusterd_op_perform_add_bricks] 0-management: arbiter-count is set 1 [2018-01-29 15:15:52.999862] I [MSG...
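The replica-count/arbiter-count messages above are what glusterd logs while an arbiter brick is added. A minimal sketch of the kind of command that produces them, assuming the volume name from the thread and that the third data brick has already been removed (leaving two data bricks per set); the arbiter host and brick path are placeholders, not taken from the message:

    # Adding an arbiter to a two-data-brick replica set (replica 3 arbiter 1) logs
    # "replica-count is set 3" and "arbiter-count is set 1" as in the excerpt above.
    gluster volume add-brick thedude replica 3 arbiter 1 arbiter-host:/export/arbiter/thedude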
2018 Apr 26
0
Problem adding replicated bricks on FreeBSD
...> gluster volume add-brick poc replica 2 s2:/gluster/1/poc > it will always fail (sometimes after a pause, sometimes not.) The only > error I'm seeing on the server hosting the new brick, aside from the > generic "Unable to add bricks" message, is like so: > I [MSGID: 106578] > [glusterd-brick-ops.c:1352:glusterd_op_perform_add_bricks] 0-management: > replica-count is set 2 > I [MSGID: 106578] > [glusterd-brick-ops.c:1362:glusterd_op_perform_add_bricks] 0-management: > type is set 2, need to change it > E [MSGID: 106054] > [glusterd-utils.c:12974:g...
2018 Jan 11
3
Creating cluster replica on 2 nodes 2 bricks each.
...9.516071] I [MSGID: 106488] [glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management: Received get vol req [2018-01-10 15:01:09.872082] I [MSGID: 106482] [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management: Received add brick req [2018-01-10 15:01:09.872128] I [MSGID: 106578] [glusterd-brick-ops.c:499:__glusterd_handle_add_brick] 0-management: replica-count is 2 [2018-01-10 15:01:09.876763] E [MSGID: 106451] [glusterd-utils.c:6207:glusterd_is_path_in_use] 0-management: /gdata/brick2/scratch is already part of a volume [File exists] [2018-01-10 15:01:09.876807] W [MSGID...
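The "already part of a volume" error above comes from glusterd finding GlusterFS metadata on the brick path. A hedged sketch of the usual checks on the brick host; the path comes from the log, and the commands assume a Linux host with the attr tools installed:

    # Check whether the rejected directory still carries gluster extended attributes.
    getfattr -d -m . -e hex /gdata/brick2/scratch   # look for trusted.glusterfs.volume-id
    # Check whether the brick is already listed in the volume definition.
    gluster volume info scratch
    # A leftover brick directory can be reused only after its gluster xattrs and the
    # hidden .glusterfs directory are removed, or by pointing add-brick at a fresh path.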
2018 Jan 12
0
Creating cluster replica on 2 nodes 2 bricks each.
...9.516071] I [MSGID: 106488] [glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management: Received get vol req [2018-01-10 15:01:09.872082] I [MSGID: 106482] [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management: Received add brick req [2018-01-10 15:01:09.872128] I [MSGID: 106578] [glusterd-brick-ops.c:499:__glusterd_handle_add_brick] 0-management: replica-count is 2 [2018-01-10 15:01:09.876763] E [MSGID: 106451] [glusterd-utils.c:6207:glusterd_is_path_in_use] 0-management: /gdata/brick2/scratch is already part of a volume [File exists] [2018-01-10 15:01:09.876807] W [MSGID...
2018 Jan 29
0
Replacing a third data node with an arbiter one
...+0x105) > [0x7fcd4f48d0b5] ) 0-management: Ran script: > /var/lib/glusterd/hooks/1/add-brick/pre/S28Quota-enable-root-xattr-heal.sh > --volname=thedude --version=1 --volume-op=add-brick > --gd-workdir=/var/lib/glusterd > [2018-01-29 15:15:52.999816] I [MSGID: 106578] > [glusterd-brick-ops.c:1354:glusterd_op_perform_add_bricks] > 0-management: replica-count is set 3 > [2018-01-29 15:15:52.999849] I [MSGID: 106578] > [glusterd-brick-ops.c:1359:glusterd_op_perform_add_bricks] > 0-management: arbiter-count is set 1 >...
2018 Jan 26
0
Replacing a third data node with an arbiter one
On 01/24/2018 07:20 PM, Hoggins! wrote: > Hello, > > The subject says it all. I have a replica 3 cluster : > > gluster> volume info thedude > > Volume Name: thedude > Type: Replicate > Volume ID: bc68dfd3-94e2-4126-b04d-77b51ec6f27e > Status: Started > Snapshot Count: 0 > Number of Bricks: 1 x 3 = 3 >
2018 Jan 15
1
Creating cluster replica on 2 nodes 2 bricks each.
...t; [glusterd-handler.c:1537:__glusterd_handle_cli_get_volume] 0-management: > Received get vol req > [2018-01-10 15:01:09.872082] I [MSGID: 106482] > [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management: > Received add brick req > [2018-01-10 15:01:09.872128] I [MSGID: 106578] > [glusterd-brick-ops.c:499:__glusterd_handle_add_brick] 0-management: > replica-count is 2 > [2018-01-10 15:01:09.876763] E [MSGID: 106451] > [glusterd-utils.c:6207:glusterd_is_path_in_use] 0-management: > /gdata/brick2/scratch is already part of a volume [File exists] > [2018-0...
2018 Apr 26
0
FreeBSD problem adding/removing replicated bricks
...and then try to run: gluster volume add-brick poc replica 2 s2:/gluster/1/poc it will always fail (sometimes after a pause, sometimes not.) The only error I'm seeing on the server hosting the new brick, aside from the generic "Unable to add bricks" message, is like so: I [MSGID: 106578] [glusterd-brick-ops.c:1352:glusterd_op_perform_add_bricks] 0-management: replica-count is set 2 I [MSGID: 106578] [glusterd-brick-ops.c:1362:glusterd_op_perform_add_bricks] 0-management: type is set 2, need to change it E [MSGID: 106054] [glusterd-utils.c:12974:glusterd_handle_replicate_bric...
2018 Jan 24
4
Replacing a third data node with an arbiter one
Hello, The subject says it all. I have a replica 3 cluster: gluster> volume info thedude Volume Name: thedude Type: Replicate Volume ID: bc68dfd3-94e2-4126-b04d-77b51ec6f27e Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: ngluster-1.network.hoggins.fr:/export/brick/thedude Brick2:
2018 Jan 11
0
Creating cluster replica on 2 nodes 2 bricks each.
Hi Jose, Gluster is working as expected. The Distribute-replicated type just means that there are now 2 replica sets and files will be distributed across them. A volume of type Replicate (1xn where n is the number of bricks in the replica set) indicates there is no distribution (all files on the volume will be present on all the bricks in the volume). A volume of type Distributed-Replicate
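A short sketch of the distinction described above, using host and brick names modeled on this thread (gluster02 and the brick1 path are assumptions, not taken from the messages):

    # 1 x 2 Replicate: one replica set, every file present on both bricks.
    gluster volume create scratch replica 2 gluster01:/gdata/brick1/scratch gluster02:/gdata/brick1/scratch
    # Adding a second pair of bricks makes it 2 x 2 Distributed-Replicate:
    # files are distributed across the two replica sets.
    gluster volume add-brick scratch replica 2 gluster01:/gdata/brick2/scratch gluster02:/gdata/brick2/scratch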
2018 Jan 10
2
Creating cluster replica on 2 nodes 2 bricks each.
Hi Nithya This is what i have so far, I have peer both cluster nodes together as replica, from node 1A and 1B , now when i tried to add it , i get the error that it is already part of a volume. when i run the cluster volume info , i see that has switch to distributed-replica. Thanks Jose [root at gluster01 ~]# gluster volume status Status of volume: scratch Gluster process
2012 Feb 08
0
[LLVMdev] Clarifying FMA-related TargetOptions
...lone seems like it should not be enabled by default. This does not surprise me; however, care is required here. First, there has been a previous thread on this recently, and I specifically recommend that you read Stephen Canon's remarks: http://permalink.gmane.org/gmane.comp.compilers.llvm.cvs/106578 In my experience, users of numerical codes expect that the compiler will use FMA instructions where it can, unless specifically asked to avoid doing so by the user. Even though this can sometimes produce a different result (*almost* always a better one), the performance gain is too large to be ign...
2012 Feb 08
6
[LLVMdev] Clarifying FMA-related TargetOptions
Hello everyone, I'd like to propose the attached patch to form FMA intrinsics aggressively, but in order to do so I need some clarification on the intended semantics for the various FP precision-related TargetOptions. I've summarized the three relevant ones below: UnsafeFPMath - Defaults to off, enables "less precise" results than permitted by IEEE754. Comments specifically
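As a present-day illustration of the trade-off discussed in this thread: clang surfaces FMA formation (contracting a*b + c into a fused multiply-add) through the -ffp-contract flag. The sketch below uses current clang options rather than the 2012 TargetOptions interface under discussion, and the source file name is an example:

    # Compare generated assembly with FP contraction disabled vs. aggressive.
    clang -O2 -ffp-contract=off  -S fma_example.c -o fma_off.s
    clang -O2 -ffp-contract=fast -S fma_example.c -o fma_fast.s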