search for: gfs4

Displaying 3 results from an estimated 3 matches for "gfs4".

2017 Sep 29
1
Gluster geo replication volume is faulty
...gfs2:/gfs/arbiter/gv0 (arbiter)
Brick7: gfs1:/gfs/brick2/gv0
Brick8: gfs2:/gfs/brick2/gv0
Brick9: gfs3:/gfs/arbiter/gv0 (arbiter)
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on

[root@gfs4 ~]# gluster volume info
Volume Name: gfsvol_rep
Type: Distributed-Replicate
Volume ID: 42bfa062-ad0d-4242-a813-63389be1c404
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: gfs5:/gfs/brick1/gv0
Brick2: gfs6:/gfs/brick1/gv0
Brick3: gfs4:/gfs/ar...
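A quick way to see which brick pairs of a session like this are faulty (not shown in the excerpt) is the geo-replication status command. The master volume name is cut off above, so "gfsvol" below is only an assumed placeholder; gfsvol_rep and gfs4 are taken from the quoted output:

    # gluster volume geo-replication gfsvol gfs4::gfsvol_rep status detail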
2017 Oct 06
0
Gluster geo replication volume is faulty
...s/brick2/gv0
> Brick8: gfs2:/gfs/brick2/gv0
> Brick9: gfs3:/gfs/arbiter/gv0 (arbiter)
> Options Reconfigured:
> nfs.disable: on
> transport.address-family: inet
> geo-replication.indexing: on
> geo-replication.ignore-pid-check: on
> changelog.changelog: on
>
> [root@gfs4 ~]# gluster volume info
> Volume Name: gfsvol_rep
> Type: Distributed-Replicate
> Volume ID: 42bfa062-ad0d-4242-a813-63389be1c404
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 3 x (2 + 1) = 9
> Transport-type: tcp
> Bricks:
> Brick1: gfs5:/gfs/brick1/gv0
>...
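When status only reports Faulty, the underlying error is normally in the gsyncd logs on the master side. The directory below is the stock location; the per-session file names vary between releases, so the wildcard is only illustrative:

    # tail -n 50 /var/log/glusterfs/geo-replication/*/*.log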
2014 Apr 28
2
volume start causes glusterd to core dump in 3.5.0
...d dumps core. The tail of the log after the crash:
+------------------------------------------------------------------------------+
[2014-04-28 21:49:18.102981] I [glusterd-rpc-ops.c:356:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 84c0bb8e-bf48-4386-ada1-3e4db68b980f, host: fed-gfs4, port: 0
[2014-04-28 21:49:18.138936] I [glusterd-handler.c:2212:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 84c0bb8e-bf48-4386-ada1-3e4db68b980f
[2014-04-28 21:49:18.138982] I [glusterd-handler.c:2257:__glusterd_handle_friend_update] 0-: Received uuid: c7a11029-1...
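A usual next step for a crash like this (not part of the quoted thread itself) is to pull a backtrace from the core file with gdb. /usr/sbin/glusterd is the typical binary location and /path/to/core is a placeholder for wherever the core was written on the affected node:

    # gdb /usr/sbin/glusterd /path/to/core
    (gdb) bt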