search for: 97b7

Displaying 7 results from an estimated 7 matches for "97b7".

2018 Feb 06
5
strange hostname issue on volume create command with famous Peer in Cluster state error message
...7:05.769160] I [MSGID: 106477] [glusterd.c:190:glusterd_uuid_generate_save] 0-management: generated UUID: 476b754c-24cd-4816-a630-99c1b696a9e6
[2018-02-06 13:47:05.806715] I [MSGID: 106511] [glusterd-rpc-ops.c:261:__glusterd_probe_cbk] 0-management: Received probe resp from uuid: 1c041dbb-bad3-4158-97b7-fe47cddadada, host: sec.ostechnix.lan
[2018-02-06 13:47:05.806764] I [MSGID: 106511] [glusterd-rpc-ops.c:421:__glusterd_probe_cbk] 0-glusterd: Received resp to probe req
[2018-02-06 13:47:05.816670] I [MSGID: 106493] [glusterd-rpc-ops.c:485:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from u...
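[Editor's note: the excerpted log lines record a normal peer-probe handshake: glusterd generates its own UUID, then receives a probe response carrying the remote peer's UUID and hostname. As a hedged sketch (the hostname is taken from the excerpt above; the commands are standard gluster CLI), the workflow these lines correspond to is:

# gluster peer probe sec.ostechnix.lan
# gluster peer status

After a successful probe, peer status should list the remote host in "Peer in Cluster (Connected)" state; the error in the thread title commonly appears when a brick in a volume create command names a host that glusterd cannot match to a probed peer.]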
2018 Feb 06
0
strange hostname issue on volume create command with famous Peer in Cluster state error message
...6477] [glusterd.c:190:glusterd_uuid_generate_save] 0-management: generated UUID: 476b754c-24cd-4816-a630-99c1b696a9e6
[2018-02-06 13:47:05.806715] I [MSGID: 106511] [glusterd-rpc-ops.c:261:__glusterd_probe_cbk] 0-management: Received probe resp from uuid: 1c041dbb-bad3-4158-97b7-fe47cddadada, host: sec.ostechnix.lan
[2018-02-06 13:47:05.806764] I [MSGID: 106511] [glusterd-rpc-ops.c:421:__glusterd_probe_cbk] 0-glusterd: Received resp to probe req
[2018-02-06 13:47:05.816670] I [MSGID: 106493] [glusterd-rpc-ops.c:485:__glusterd_friend_add_cbk] 0...
2018 Feb 06
1
strange hostname issue on volume create command with famous Peer in Cluster state error message
...769160] I [MSGID: 106477] [glusterd.c:190:glusterd_uuid_generate_save] 0-management: generated UUID: 476b754c-24cd-4816-a630-99c1b696a9e6
[2018-02-06 13:47:05.806715] I [MSGID: 106511] [glusterd-rpc-ops.c:261:__glusterd_probe_cbk] 0-management: Received probe resp from uuid: 1c041dbb-bad3-4158-97b7-fe47cddadada, host: sec.ostechnix.lan
[2018-02-06 13:47:05.806764] I [MSGID: 106511] [glusterd-rpc-ops.c:421:__glusterd_probe_cbk] 0-glusterd: Received resp to probe req
[2018-02-06 13:47:05.816670] I [MSGID: 106493] [glusterd-rpc-ops.c:485:__glusterd_friend_add_cbk] 0-glusterd: Received...
2018 Feb 06
0
strange hostname issue on volume create command with famous Peer in Cluster state error message
...D: 106477] [glusterd.c:190:glusterd_uuid_generate_save] 0-management: generated UUID: 476b754c-24cd-4816-a630-99c1b696a9e6
[2018-02-06 13:47:05.806715] I [MSGID: 106511] [glusterd-rpc-ops.c:261:__glusterd_probe_cbk] 0-management: Received probe resp from uuid: 1c041dbb-bad3-4158-97b7-fe47cddadada, host: sec.ostechnix.lan
[2018-02-06 13:47:05.806764] I [MSGID: 106511] [glusterd-rpc-ops.c:421:__glusterd_probe_cbk] 0-glusterd: Received resp to probe req
[2018-02-06 13:47:05.816670] I [MSGID: 106493] [glusterd-rpc-ops.c:485:__glusterd_friend_add_cbk] 0-glus...
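[Editor's note: when hostname confusion like this comes up, one place to check is glusterd's on-disk peer records. A hedged sketch follows (the UUID is the one from the excerpts; the directory layout is standard for glusterd, though exact key names can vary by version):

# cat /var/lib/glusterd/peers/1c041dbb-bad3-4158-97b7-fe47cddadada
uuid=1c041dbb-bad3-4158-97b7-fe47cddadada
state=3
hostname1=sec.ostechnix.lan

A mismatch between the hostname recorded here and the name used in the volume create command is a common trigger for the "Peer in Cluster" complaint.]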
2018 Feb 09
0
self-heal trouble after changing arbiter brick
...b519-40e2-8dc0-a26f8faa5628>
<gfid:fa4185b0-e5ab-4fdc-9dca-cb6ba33dcc8d>
<gfid:8b2cf4bf-8c2a-465e-8f28-3e9a7f517268>
<gfid:13925c48-fda4-40bd-bfcb-d7ced99b82b2>
<gfid:292e3a0e-7114-4c97-b688-e94503047b58>
<gfid:a52d1173-e034-4b57-9170-a7c91cbe2904>
<gfid:5c830c7b-97b7-425b-9ab2-761ef2f41e88>
<gfid:420c76a8-1598-4136-9c77-88c8d59d24e7>
<gfid:ea6dbca2-f7e3-4015-ae34-04e8bf31fd4f>
... And so forth. Out of 80k+ lines, fewer than 200 are not gfid entries (and yes, the number of gfids is well beyond 64999):
# grep -c gfid heal-info.fpack
80578
#...
2018 Feb 09
1
self-heal trouble after changing arbiter brick
...b519-40e2-8dc0-a26f8faa5628>
<gfid:fa4185b0-e5ab-4fdc-9dca-cb6ba33dcc8d>
<gfid:8b2cf4bf-8c2a-465e-8f28-3e9a7f517268>
<gfid:13925c48-fda4-40bd-bfcb-d7ced99b82b2>
<gfid:292e3a0e-7114-4c97-b688-e94503047b58>
<gfid:a52d1173-e034-4b57-9170-a7c91cbe2904>
<gfid:5c830c7b-97b7-425b-9ab2-761ef2f41e88>
<gfid:420c76a8-1598-4136-9c77-88c8d59d24e7>
<gfid:ea6dbca2-f7e3-4015-ae34-04e8bf31fd4f>
... And so forth. Out of 80k+ lines, fewer than 200 are not gfid entries (and yes, the number of gfids is well beyond 64999):
# grep -c gfid heal-info.fpack
80578
#...
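[Editor's note: each <gfid:...> entry names a file only by its GlusterFS file ID. On a brick, every regular file has a hard link under .glusterfs/, bucketed by the first two pairs of hex digits of the gfid, so the real path can be recovered through the shared inode. A hedged sketch (the gfid is one from the excerpt; the brick path /data/glusterfs is borrowed from the volume info in the next result):

# ls -l /data/glusterfs/.glusterfs/5c/83/5c830c7b-97b7-425b-9ab2-761ef2f41e88
# find /data/glusterfs -samefile /data/glusterfs/.glusterfs/5c/83/5c830c7b-97b7-425b-9ab2-761ef2f41e88

The second command prints both the .glusterfs link and the file's real path on the brick.]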
2018 Feb 08
5
self-heal trouble after changing arbiter brick
Hi folks, I'm having trouble moving an arbiter brick to another server because of I/O load issues. My setup is as follows:
# gluster volume info
Volume Name: myvol
Type: Distributed-Replicate
Volume ID: 43ba517a-ac09-461e-99da-a197759a7dc8
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: gv0:/data/glusterfs
Brick2: gv1:/data/glusterfs
Brick3:
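[Editor's note: in a 3 x (2 + 1) layout, the third brick of each replica set is the arbiter, so moving one to another server is a replace-brick operation. A hedged sketch (the volume name is from the excerpt; the old and new arbiter brick paths are hypothetical, since the brick listing above is truncated):

# gluster volume replace-brick myvol arb0:/data/glusterfs arb-new:/data/glusterfs commit force
# gluster volume heal myvol info

After the replace, the self-heal daemon repopulates the new arbiter's metadata, which is presumably the I/O load the poster mentions.]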