Displaying 20 results from an estimated 53 matches for "afr_notify".
2017 Oct 19
3
gluster tiering errors
...E [MSGID: 109089]
[dht-helper.c:517:dht_check_and_open_fd_on_subvol_task]
0-<vol>-tier-dht: Failed to open the fd (0x7f02bf201130, flags=00) on file
34d76e11-412f-4bc6-9a3e-b1f89658f13b @ <vol>-hot-dht [Invalid argument]
[2017-10-18 17:13:59.541591] E [MSGID: 108006]
[afr-common.c:4808:afr_notify] 0-<vol>-replicate-0: All subvolumes are
down. Going offline until atleast one of them comes back up.
[2017-10-18 17:13:59.541748] E [MSGID: 108006]
[afr-common.c:4808:afr_notify] 0-<vol>-replicate-1: All subvolumes are
down. Going offline until atleast one of them comes back up.
[2017-...
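When afr_notify logs "All subvolumes are down", a reasonable first step is to confirm from one of the servers whether the bricks in the affected replica sets are actually running and reachable. A minimal check with the standard gluster CLI, using <vol> as a placeholder for the redacted volume name:

    gluster volume status <vol>
    gluster peer status

If the bricks report as online there, the problem is more likely the client's connections to them than the brick processes themselves.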
2017 Oct 22
0
gluster tiering errors
...dht-helper.c:517:dht_check_and_open_fd_on_subvol_task] 0-<vol>-tier-dht:
> Failed to open the fd (0x7f02bf201130, flags=00) on file
> 34d76e11-412f-4bc6-9a3e-b1f89658f13b @ <vol>-hot-dht [Invalid argument]
> [2017-10-18 17:13:59.541591] E [MSGID: 108006]
> [afr-common.c:4808:afr_notify] 0-<vol>-replicate-0: All subvolumes are
> down. Going offline until atleast one of them comes back up.
> [2017-10-18 17:13:59.541748] E [MSGID: 108006]
> [afr-common.c:4808:afr_notify] 0-<vol>-replicate-1: All subvolumes are
> down. Going offline until atleast one of them c...
2017 Oct 22
1
gluster tiering errors
...:dht_check_and_open_fd_on_subvol_task]
>> 0-<vol>-tier-dht: Failed to open the fd (0x7f02bf201130, flags=00) on file
>> 34d76e11-412f-4bc6-9a3e-b1f89658f13b @ <vol>-hot-dht [Invalid argument]
>> [2017-10-18 17:13:59.541591] E [MSGID: 108006]
>> [afr-common.c:4808:afr_notify] 0-<vol>-replicate-0: All subvolumes are
>> down. Going offline until atleast one of them comes back up.
>> [2017-10-18 17:13:59.541748] E [MSGID: 108006]
>> [afr-common.c:4808:afr_notify] 0-<vol>-replicate-1: All subvolumes are
>> down. Going offline until atlea...
2017 Oct 24
2
gluster tiering errors
...:dht_check_and_open_fd_on_subvol_task]
>> 0-<vol>-tier-dht: Failed to open the fd (0x7f02bf201130, flags=00) on file
>> 34d76e11-412f-4bc6-9a3e-b1f89658f13b @ <vol>-hot-dht [Invalid argument]
>> [2017-10-18 17:13:59.541591] E [MSGID: 108006]
>> [afr-common.c:4808:afr_notify] 0-<vol>-replicate-0: All subvolumes are
>> down. Going offline until atleast one of them comes back up.
>> [2017-10-18 17:13:59.541748] E [MSGID: 108006]
>> [afr-common.c:4808:afr_notify] 0-<vol>-replicate-1: All subvolumes are
>> down. Going offline until atlea...
2017 Oct 27
0
gluster tiering errors
...pen_fd_on_subvol_task]
>>> 0-<vol>-tier-dht: Failed to open the fd (0x7f02bf201130, flags=00) on file
>>> 34d76e11-412f-4bc6-9a3e-b1f89658f13b @ <vol>-hot-dht [Invalid argument]
>>> [2017-10-18 17:13:59.541591] E [MSGID: 108006]
>>> [afr-common.c:4808:afr_notify] 0-<vol>-replicate-0: All subvolumes are
>>> down. Going offline until atleast one of them comes back up.
>>> [2017-10-18 17:13:59.541748] E [MSGID: 108006]
>>> [afr-common.c:4808:afr_notify] 0-<vol>-replicate-1: All subvolumes are
>>> down. Going off...
2018 Apr 09
2
Gluster cluster on two networks
...9 11:42:29.628191] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 2-urd-gds-volume-client-2: disconnected from urd-gds-volume-client-2. Client process will keep trying to connect to glusterd until brick's port is available
[2018-04-09 11:42:29.628272] W [MSGID: 108001] [afr-common.c:5370:afr_notify] 2-urd-gds-volume-replicate-0: Client-quorum is not met
[2018-04-09 11:42:29.628299] I [MSGID: 114021] [client.c:2369:notify] 2-urd-gds-volume-client-4: current graph is no longer active, destroying rpc_client
[2018-04-09 11:42:29.628349] I [MSGID: 114021] [client.c:2369:notify] 2-urd-gds-volume-cl...
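The "Client-quorum is not met" warning typically means this client no longer has connections to enough bricks of replicate-0 for writes to be allowed safely. The quorum settings in effect can be read back with the volume-get interface; a minimal sketch, using the volume name from the log prefix:

    gluster volume get urd-gds-volume cluster.quorum-type
    gluster volume get urd-gds-volume cluster.quorum-count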
2018 Apr 10
0
Gluster cluster on two networks
...ID: 114018] [client.c:2285:client_rpc_notify]
> 2-urd-gds-volume-client-2: disconnected from urd-gds-volume-client-2.
> Client process will keep trying to connect to glusterd until brick's port is available
> [2018-04-09 11:42:29.628272] W [MSGID: 108001]
> [afr-common.c:5370:afr_notify] 2-urd-gds-volume-replicate-0: Client-quorum
> is not met
> [2018-04-09 11:42:29.628299] I [MSGID: 114021] [client.c:2369:notify]
> 2-urd-gds-volume-client-4: current graph is no longer active, destroying
> rpc_client
> [2018-04-09 11:42:29.628349] I [MSGID: 114021] [client.c:2369:no...
2012 Jun 22
1
Fedora 17 GlusterFS 3.3.0 problmes
...nt-0: Connected to 10.59.0.11:24009, attached to remote volume
'/export'.
[2012-06-21 19:24:39.644512] I [client-handshake.c:1445:client_setvolume_cbk]
0-share-client-0: Server and Client lk-version numbers are not same, reopening
the fds
[2012-06-21 19:24:39.644579] I [afr-common.c:3627:afr_notify]
0-share-replicate-0: Subvolume 'share-client-0' came back up; going online.
[2012-06-21 19:24:39.644621] I
[client-handshake.c:453:client_set_lk_version_cbk] 0-share-client-0: Server lk
version = 1
[2012-06-21 19:24:39.646923] I
[client-handshake.c:1636:select_server_supported_programs...
2018 Apr 10
1
Gluster cluster on two networks
...5:client_rpc_notify]
> > 2-urd-gds-volume-client-2: disconnected from urd-gds-volume-client-2.
> > Client process will keep trying to connect to glusterd until brick's port is available
> > [2018-04-09 11:42:29.628272] W [MSGID: 108001]
> > [afr-common.c:5370:afr_notify] 2-urd-gds-volume-replicate-0: Client-quorum
> > is not met
> > [2018-04-09 11:42:29.628299] I [MSGID: 114021] [client.c:2369:notify]
> > 2-urd-gds-volume-client-4: current graph is no longer active, destroying
> > rpc_client
> > [2018-04-09 11:42:29.628349] I [MSGID:...
2018 Apr 10
0
Gluster cluster on two networks
...09 11:42:29.628191] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 2-urd-gds-volume-client-2: disconnected from urd-gds-volume-client-2. Client process will keep trying to connect to glusterd until brick's port is available
[2018-04-09 11:42:29.628272] W [MSGID: 108001] [afr-common.c:5370:afr_notify] 2-urd-gds-volume-replicate-0: Client-quorum is not met
[2018-04-09 11:42:29.628299] I [MSGID: 114021] [client.c:2369:notify] 2-urd-gds-volume-client-4: current graph is no longer active, destroying rpc_client
[2018-04-09 11:42:29.628349] I [MSGID: 114021] [client.c:2369:notify] 2-urd-gds-volume-cl...
2011 Aug 24
1
Input/output error
...lient-1: Using Program GlusterFS-3.1.0, Num (1298437), Version
(310)
[2011-08-24 11:35:16.322126] I [client-handshake.c:913:client_setvolume_cbk]
0-syncdata-client-1: Connected to 172.23.0.2:24009, attached to remote
volume '/home/syncdata'.
[2011-08-24 11:35:16.322191] I [afr-common.c:2611:afr_notify]
0-syncdata-replicate-0: Subvolume 'syncdata-client-1' came back up; going
online.
[2011-08-24 11:35:16.323281] I
[client-handshake.c:1082:select_server_supported_programs]
0-syncdata-client-0: Using Program GlusterFS-3.1.0, Num (1298437), Version
(310)
[2011-08-24 11:35:16.324274] I [clien...
2017 Jun 28
2
setting gfid on .trashcan/... failed - total outage
...vice: RPC program not available (req 1298437 330) for
10.0.1.203:65533
[2017-06-23 16:35:18.872421] E
[rpcsvc.c:565:rpcsvc_check_and_reply_error] 0-rpcsvc: rpc actor failed
to complete successfully
gl-master-04 : glustershd.log
[2017-06-23 16:35:42.536840] E [MSGID: 108006]
[afr-common.c:4323:afr_notify] 0-mvol1-replicate-1: All subvolumes are
down. Going offline until atleast one of them comes back up.
[2017-06-23 16:35:51.702413] E [socket.c:2292:socket_connect_finish]
0-mvol1-client-3: connection to 10.0.1.156:49152 failed (Connection refused)
gl-master-03, brick1-movl1.log :
[2017-06-23...
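"Connection refused" on 10.0.1.156:49152 usually means nothing is listening on that port (or a firewall is rejecting the connection) rather than a routing problem. On the brick host it is worth confirming that the brick process is up and which port it is listening on; a minimal sketch, with mvol1 taken from the log prefix and 49152 from the error:

    gluster volume status mvol1
    ss -ltn | grep 49152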
2012 Jan 04
0
FUSE init failed
...-client-3: Using Program GlusterFS 3.2.5, Num (1298437),
Version (310)
[2012-01-04 20:06:45.234313] I
[client-handshake.c:913:client_setvolume_cbk] 0-test-volume-client-3:
Connected to 10.141.0.4:24010, attached to remote volume '/local'.
[2012-01-04 20:06:45.234338] I [afr-common.c:3141:afr_notify]
0-test-volume-replicate-1: Subvolume 'test-volume-client-3' came back up;
going online.
[2012-01-04 20:06:45.260113] I
[client-handshake.c:1090:select_server_supported_programs]
0-test-volume-client-2: Using Program GlusterFS 3.2.5, Num (1298437),
Version (310)
[2012-01-04 20:06:45.26...
2012 Feb 22
2
"mismatching layouts" errors after expanding volume
Dear All-
There are a lot of errors of the following type in my client and NFS logs following a recent volume expansion.
[2012-02-16 22:59:42.504907] I
[dht-layout.c:682:dht_layout_dir_mismatch] 0-atmos-dht: subvol:
atmos-replicate-0; inode layout - 0 - 0; disk layout - 920350134 - 1227133511
[2012-02-16 22:59:42.534399] I [dht-common.c:524:dht_revalidate_cbk]
0-atmos-dht: mismatching
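These dht_layout_dir_mismatch messages usually appear when directory layouts have not yet been recalculated across the expanded brick set. Assuming that is what happened here, a fix-layout rebalance is the usual next step; a minimal sketch, with the volume name atmos taken from the log prefix:

    gluster volume rebalance atmos fix-layout start
    gluster volume rebalance atmos status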
2017 Jun 29
0
setting gfid on .trashcan/... failed - total outage
...330) for
> 10.0.1.203:65533
> [2017-06-23 16:35:18.872421] E
> [rpcsvc.c:565:rpcsvc_check_and_reply_error] 0-rpcsvc: rpc actor failed
> to complete successfully
>
> gl-master-04 : glustershd.log
>
> [2017-06-23 16:35:42.536840] E [MSGID: 108006]
> [afr-common.c:4323:afr_notify] 0-mvol1-replicate-1: All subvolumes are
> down. Going offline until atleast one of them comes back up.
> [2017-06-23 16:35:51.702413] E [socket.c:2292:socket_connect_finish]
> 0-mvol1-client-3: connection to 10.0.1.156:49152 failed (Connection refused)
>
>
>
> gl-master-0...
2017 Jun 29
1
setting gfid on .trashcan/... failed - total outage
...5533
>> [2017-06-23 16:35:18.872421] E
>> [rpcsvc.c:565:rpcsvc_check_and_reply_error] 0-rpcsvc: rpc actor failed
>> to complete successfully
>>
>> gl-master-04 : glustershd.log
>>
>> [2017-06-23 16:35:42.536840] E [MSGID: 108006]
>> [afr-common.c:4323:afr_notify] 0-mvol1-replicate-1: All subvolumes are
>> down. Going offline until atleast one of them comes back up.
>> [2017-06-23 16:35:51.702413] E [socket.c:2292:socket_connect_finish]
>> 0-mvol1-client-3: connection to 10.0.1.156:49152 failed (Connection refused)
>>
>>
>&g...
2013 Dec 03
3
Self Heal Issue GlusterFS 3.3.1
...ed to 172.16.95.153:24009, attached to remote volume '/mnt/cloudbrick'.
[2013-12-03 05:42:32.790884] I [client-handshake.c:1423:client_setvolume_cbk] 0-glustervol-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2013-12-03 05:42:32.791003] I [afr-common.c:3685:afr_notify] 0-glustervol-replicate-0: Subvolume 'glustervol-client-1' came back up; going online.
[2013-12-03 05:42:32.791161] I [client-handshake.c:453:client_set_lk_version_cbk] 0-glustervol-client-1: Server lk version = 1
[2013-12-03 05:42:32.795103] E [afr-self-heal-data.c:1321:afr_sh_data_open_cb...
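Once the brick is back ("came back up; going online"), the afr_sh_data_open errors come from self-heal trying to catch up on that brick. Pending and in-progress heals can be checked from any server; a minimal sketch, with glustervol taken from the log prefix:

    gluster volume heal glustervol info
    gluster volume heal glustervol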
2018 Jan 23
2
Understanding client logs
...me '/interbullfs/interbull'.
[2017-11-09 10:10:39.777663] I [MSGID: 114047] [client-handshake.c:1227:client_setvolume_cbk] 0-interbull-interbull-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2017-11-09 10:10:39.777724] I [MSGID: 108005] [afr-common.c:4756:afr_notify] 0-interbull-interbull-replicate-0: Subvolume 'interbull-interbull-client-0' came back up; going online.
[2017-11-09 10:10:39.777954] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 0-interbull-interbull-client-0: Server lk version = 1
[2017-11-09 10:10:39.779909] I [MS...
2011 Jun 06
2
Gluster 3.2.0 and ucarp not working
...p is 3.200 and the machines' real IPs are 3.233 and 3.5. In the gluster log I can see:
[2011-06-06 02:33:54.230082] I
[client-handshake.c:913:client_setvolume_cbk] 0-atlas-client-1:
Connected to 192.168.3.233:24009, attached to remote volume '/atlas'.
[2011-06-06 02:33:54.230116] I [afr-common.c:2514:afr_notify]
0-atlas-replicate-0: Subvolume 'atlas-client-1' came back up; going
online.
[2011-06-06 02:33:54.237541] I [fuse-bridge.c:3316:fuse_graph_setup]
0-fuse: switched to graph 0
[2011-06-06 02:33:54.237801] I [fuse-bridge.c:2897:fuse_init]
0-glusterfs-fuse: FUSE inited with protocol versions: g...
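If ucarp is only there to give clients a single address to mount from, one alternative (availability depends on the glusterfs version) is the FUSE mount helper's backup volfile server option, so the volfile can be fetched from either real address. A hedged sketch, with /mnt/atlas as a hypothetical mount point:

    mount -t glusterfs -o backupvolfile-server=192.168.3.5 192.168.3.233:/atlas /mnt/atlas

Note this only affects volfile retrieval at mount time; once mounted, failover between the replicas is handled by AFR itself.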
2011 Feb 09
0
Removing two bricks
...uster volume remove-brick brick linguest4:/data/exp linguest5:/data/exp
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
Remove Brick successful
As soon as the above is run and completed, I get this log entry every 3 seconds.
[2011-02-09 10:28:43.957955] E [afr-common.c:2602:afr_notify] brick-replicate-1: All subvolumes are down. Going offline until atleast one of them comes back up.
That log will continue until I log onto all the other machines and unmount and remount the gluster mount point.
Also, now that the two bricks have been removed, there is data missing even though the documentatio...
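The missing data is expected with a plain remove-brick on older releases: the bricks are dropped from the volume without their contents being migrated. Later GlusterFS releases split this into a start/status/commit workflow that migrates data off the bricks first; a hedged sketch using the names from the excerpt (exact syntax depends on the version):

    gluster volume remove-brick brick linguest4:/data/exp linguest5:/data/exp start
    gluster volume remove-brick brick linguest4:/data/exp linguest5:/data/exp status
    gluster volume remove-brick brick linguest4:/data/exp linguest5:/data/exp commit

Remounting the clients, as the poster describes, presumably stops the repeated afr_notify errors because the fresh mount fetches the new volume graph without the removed replica set.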