Hi,
here are my slave node logs from around the time the sync stopped:
[2020-03-08 03:33:01.489559] I [glusterfsd-mgmt.c:2282:mgmt_getspec_cbk]
0-glusterfs: No change in volfile,continuing
[2020-03-08 03:33:01.489298] I [MSGID: 100011]
[glusterfsd.c:1679:reincarnate] 0-glusterfsd: Fetching the volume file from
server...
[2020-03-08 09:49:37.991177] I [fuse-bridge.c:6083:fuse_thread_proc]
0-fuse: initiating unmount of /tmp/gsyncd-aux-mount-l3PR6o
[2020-03-08 09:49:37.993978] W [glusterfsd.c:1596:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7e65) [0x7f2f9f70ce65]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x55cc67c20625]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x55cc67c2048b] ) 0-:
received signum (15), shutting down
[2020-03-08 09:49:37.994012] I [fuse-bridge.c:6871:fini] 0-fuse: Unmounting
'/tmp/gsyncd-aux-mount-l3PR6o'.
[2020-03-08 09:49:37.994022] I [fuse-bridge.c:6876:fini] 0-fuse: Closing
fuse connection to '/tmp/gsyncd-aux-mount-l3PR6o'.
[2020-03-08 09:49:50.302806] I [MSGID: 100030] [glusterfsd.c:2867:main]
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 7.3
(args: /usr/sbin/glusterfs --aux-gfid-mount --acl --log-level=INFO
--log-file=/var/log/glusterfs/geo-replication-slaves/media-storage_slave-node_dr-media/mnt-master-node-srv-media-storage.log
--volfile-server=localhost --volfile-id=dr-media --client-pid=-1
/tmp/gsyncd-aux-mount-1AQBe4)
[2020-03-08 09:49:50.311167] I [glusterfsd.c:2594:daemonize] 0-glusterfs:
Pid of current running process is 55522
[2020-03-08 09:49:50.352351] I [MSGID: 101190]
[event-epoll.c:682:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 0
[2020-03-08 09:49:50.352416] I [MSGID: 101190]
[event-epoll.c:682:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 1
[2020-03-08 09:49:50.373248] I [MSGID: 114020] [client.c:2436:notify]
0-dr-media-client-0: parent translators are ready, attempting connect on
transport
Final graph:
+------------------------------------------------------------------------------+
1: volume dr-media-client-0
2: type protocol/client
3: option ping-timeout 42
4: option remote-host slave-node
5: option remote-subvolume /data/dr-media
6: option transport-type socket
7: option transport.address-family inet
8: option username 4aafadfa-6ccb-4c2f-920c-1f37ed9eef34
9: option password a8c0f88b-2621-4038-8f65-98068ea71bb0
10: option transport.socket.ssl-enabled off
11: option transport.tcp-user-timeout 0
12: option transport.socket.keepalive-time 20
13: option transport.socket.keepalive-interval 2
14: option transport.socket.keepalive-count 9
15: option send-gids true
16: end-volume
17:
18: volume dr-media-dht
19: type cluster/distribute
20: option lock-migration off
21: option force-migration off
22: subvolumes dr-media-client-0
23: end-volume
24:
25: volume dr-media-write-behind
26: type performance/write-behind
27: option cache-size 8MB
28: option aggregate-size 1MB
29: subvolumes dr-media-dht
30: end-volume
31:
32: volume dr-media-read-ahead
33: type performance/read-ahead
34: subvolumes dr-media-write-behind
35: end-volume
36:
37: volume dr-media-readdir-ahead
38: type performance/readdir-ahead
39: option parallel-readdir off
40: option rda-request-size 131072
41: option rda-cache-limit 10MB
42: subvolumes dr-media-read-ahead
43: end-volume
44:
45: volume dr-media-io-cache
46: type performance/io-cache
47: option cache-size 256MB
48: subvolumes dr-media-readdir-ahead
49: end-volume
50:
51: volume dr-media-open-behind
52: type performance/open-behind
53: subvolumes dr-media-io-cache
54: end-volume
55:
56: volume dr-media-quick-read
57: type performance/quick-read
58: option cache-size 256MB
59: subvolumes dr-media-open-behind
60: end-volume
61:
62: volume dr-media-md-cache
63: type performance/md-cache
64: option cache-posix-acl true
65: subvolumes dr-media-quick-read
66: end-volume
67:
68: volume dr-media-io-threads
69: type performance/io-threads
70: subvolumes dr-media-md-cache
71: end-volume
72:
73: volume dr-media
74: type debug/io-stats
75: option log-level INFO
76: option threads 16
77: option latency-measurement off
78: option count-fop-hits off
79: option global-threading off
80: subvolumes dr-media-io-threads
81: end-volume
82:
83: volume posix-acl-autoload
84: type system/posix-acl
85: subvolumes dr-media
86: end-volume
87:
88: volume gfid-access-autoload
89: type features/gfid-access
90: subvolumes posix-acl-autoload
91: end-volume
92:
93: volume meta-autoload
94: type meta
95: subvolumes gfid-access-autoload
96: end-volume
97:
+------------------------------------------------------------------------------+
[2020-03-08 09:49:50.388102] I [rpc-clnt.c:1963:rpc_clnt_reconfig]
0-dr-media-client-0: changing port to 49152 (from 0)
[2020-03-08 09:49:50.388132] I [socket.c:865:__socket_shutdown]
0-dr-media-client-0: intentional socket shutdown(12)
[2020-03-08 09:49:50.401512] I [MSGID: 114057]
[client-handshake.c:1375:select_server_supported_programs]
0-dr-media-client-0: Using Program GlusterFS 4.x v1, Num (1298437), Version
(400)
[2020-03-08 09:49:50.401765] W [dict.c:999:str_to_data]
(-->/usr/lib64/glusterfs/7.3/xlator/protocol/client.so(+0x381d4)
[0x7f7f63daa1d4] -->/lib64/libglusterfs.so.0(dict_set_str+0x16)
[0x7f7f76b3a2f6] -->/lib64/libglusterfs.so.0(str_to_data+0x71)
[0x7f7f76b36c11] ) 0-dict: value is NULL [Invalid argument]
[2020-03-08 09:49:50.401783] I [MSGID: 114006]
[client-handshake.c:1236:client_setvolume] 0-dr-media-client-0: failed to
set process-name in handshake msg
[2020-03-08 09:49:50.404115] I [MSGID: 114046]
[client-handshake.c:1105:client_setvolume_cbk] 0-dr-media-client-0:
Connected to dr-media-client-0, attached to remote volume
'/data/dr-media'.
[2020-03-08 09:49:50.405761] I [fuse-bridge.c:5166:fuse_init]
0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel
7.22
[2020-03-08 09:49:50.405780] I [fuse-bridge.c:5777:fuse_graph_sync] 0-fuse:
switched to graph 0
[2020-03-08 11:49:00.933168] E [fuse-bridge.c:227:check_and_dump_fuse_W]
(--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7f7f76b438ea]
(-->
/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7f7f6def1221] (-->
/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7f7f6def2998] (-->
/lib64/libpthread.so.0(+0x7e65)[0x7f7f75984e65] (-->
/lib64/libc.so.6(clone+0x6d)[0x7f7f7524a88d] ))))) 0-glusterfs-fuse:
writing to fuse device failed: No such file or directory
[2020-03-08 11:53:29.822876] E [fuse-bridge.c:227:check_and_dump_fuse_W]
(--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7f7f76b438ea]
(-->
/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7f7f6def1221] (-->
/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7f7f6def2998] (-->
/lib64/libpthread.so.0(+0x7e65)[0x7f7f75984e65] (-->
/lib64/libc.so.6(clone+0x6d)[0x7f7f7524a88d] ))))) 0-glusterfs-fuse:
writing to fuse device failed: No such file or directory
[2020-03-08 12:00:46.656170] E [fuse-bridge.c:227:check_and_dump_fuse_W]
(--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7f7f76b438ea]
(-->
/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7f7f6def1221] (-->
/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7f7f6def2998] (-->
/lib64/libpthread.so.0(+0x7e65)[0x7f7f75984e65] (-->
/lib64/libc.so.6(clone+0x6d)[0x7f7f7524a88d] ))))) 0-glusterfs-fuse:
writing to fuse device failed: No such file or directory
And here are the master node logs:
[2020-03-08 09:49:38.115108] E [fuse-bridge.c:227:check_and_dump_fuse_W]
(--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7fb5ab2aa8ea]
(-->
/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7fb5a265c221] (-->
/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8b3a)[0x7fb5a265cb3a] (-->
/lib64/libpthread.so.0(+0x7e25)[0x7fb5aa0ebe25] (-->
/lib64/libc.so.6(clone+0x6d)[0x7fb5a99b4bad] ))))) 0-glusterfs-fuse:
writing to fuse device failed: No such file or directory
[2020-03-08 09:49:38.932935] I [fuse-bridge.c:6083:fuse_thread_proc]
0-fuse: initiating unmount of /tmp/gsyncd-aux-mount-Xy8taN
[2020-03-08 09:49:38.947634] W [glusterfsd.c:1596:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7e25) [0x7fb5aa0ebe25]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x557ccdf55625]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x557ccdf5548b] ) 0-:
received signum (15), shutting down
[2020-03-08 09:49:38.947684] I [fuse-bridge.c:6871:fini] 0-fuse: Unmounting
'/tmp/gsyncd-aux-mount-Xy8taN'.
[2020-03-08 09:49:38.947704] I [fuse-bridge.c:6876:fini] 0-fuse: Closing
fuse connection to '/tmp/gsyncd-aux-mount-Xy8taN'.
[2020-03-08 09:49:51.545529] I [MSGID: 100030] [glusterfsd.c:2867:main]
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 7.3
(args: /usr/sbin/glusterfs --aux-gfid-mount --acl --log-level=INFO
--log-file=/var/log/glusterfs/geo-replication/media-storage_slave-node_dr-media/mnt-srv-media-storage.log
--volfile-server=localhost --volfile-id=media-storage --client-pid=-1
/tmp/gsyncd-aux-mount-XT9WfC)
[2020-03-08 09:49:51.559518] I [glusterfsd.c:2594:daemonize] 0-glusterfs:
Pid of current running process is 18484
[2020-03-08 09:49:51.645473] I [MSGID: 101190]
[event-epoll.c:682:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 0
[2020-03-08 09:49:51.645624] I [MSGID: 101190]
[event-epoll.c:682:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 1
[2020-03-08 09:49:51.701470] I [MSGID: 114020] [client.c:2436:notify]
0-media-storage-client-0: parent translators are ready, attempting connect
on transport
[2020-03-08 09:49:51.702908] I [rpc-clnt.c:1963:rpc_clnt_reconfig]
0-media-storage-client-0: changing port to 49152 (from 0)
[2020-03-08 09:49:51.702961] I [socket.c:865:__socket_shutdown]
0-media-storage-client-0: intentional socket shutdown(12)
[2020-03-08 09:49:51.703807] I [MSGID: 114057]
[client-handshake.c:1375:select_server_supported_programs]
0-media-storage-client-0: Using Program GlusterFS 4.x v1, Num (1298437),
Version (400)
[2020-03-08 09:49:51.704147] W [dict.c:999:str_to_data]
(-->/usr/lib64/glusterfs/7.3/xlator/protocol/client.so(+0x381d4)
[0x7fc5c01031d4] -->/lib64/libglusterfs.so.0(dict_set_str+0x16)
[0x7fc5ce8d82f6] -->/lib64/libglusterfs.so.0(str_to_data+0x71)
[0x7fc5ce8d4c11] ) 0-dict: value is NULL [Invalid argument]
[2020-03-08 09:49:51.704178] I [MSGID: 114006]
[client-handshake.c:1236:client_setvolume] 0-media-storage-client-0: failed
to set process-name in handshake msg
[2020-03-08 09:49:51.705982] I [MSGID: 114046]
[client-handshake.c:1105:client_setvolume_cbk] 0-media-storage-client-0:
Connected to media-storage-client-0, attached to remote volume
'/srv/media-storage'.
[2020-03-08 09:49:51.707627] I [fuse-bridge.c:5166:fuse_init]
0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel
7.22
[2020-03-08 09:49:51.707658] I [fuse-bridge.c:5777:fuse_graph_sync] 0-fuse:
switched to graph 0
[2020-03-08 09:56:26.875082] E [fuse-bridge.c:4188:fuse_xattr_cbk]
0-glusterfs-fuse: extended attribute not supported by the backend storage
[2020-03-08 10:12:35.190809] E [fuse-bridge.c:4188:fuse_xattr_cbk]
0-glusterfs-fuse: extended attribute not supported by the backend storage
[2020-03-08 10:25:06.240795] E [fuse-bridge.c:4188:fuse_xattr_cbk]
0-glusterfs-fuse: extended attribute not supported by the backend storage
[2020-03-08 10:40:33.946794] E [fuse-bridge.c:4188:fuse_xattr_cbk]
0-glusterfs-fuse: extended attribute not supported by the backend storage
[2020-03-08 10:43:50.459247] E [fuse-bridge.c:227:check_and_dump_fuse_W]
(--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7fc5ce8e18ea]
(-->
/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7fc5c5c93221] (-->
/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7fc5c5c94998] (-->
/lib64/libpthread.so.0(+0x7e25)[0x7fc5cd722e25] (-->
/lib64/libc.so.6(clone+0x6d)[0x7fc5ccfebbad] ))))) 0-glusterfs-fuse:
writing to fuse device failed: No such file or directory
[2020-03-08 10:55:27.034947] E [fuse-bridge.c:4188:fuse_xattr_cbk]
0-glusterfs-fuse: extended attribute not supported by the backend storage
[2020-03-08 11:05:53.483207] E [fuse-bridge.c:4188:fuse_xattr_cbk]
0-glusterfs-fuse: extended attribute not supported by the backend storage
[2020-03-08 11:26:01.492270] E [fuse-bridge.c:4188:fuse_xattr_cbk]
0-glusterfs-fuse: extended attribute not supported by the backend storage
[2020-03-08 11:32:56.618737] E [fuse-bridge.c:227:check_and_dump_fuse_W]
(--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7fc5ce8e18ea]
(-->
/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7fc5c5c93221] (-->
/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7fc5c5c94998] (-->
/lib64/libpthread.so.0(+0x7e25)[0x7fc5cd722e25] (-->
/lib64/libc.so.6(clone+0x6d)[0x7fc5ccfebbad] ))))) 0-glusterfs-fuse:
writing to fuse device failed: No such file or directory
[2020-03-08 11:37:50.475099] E [fuse-bridge.c:227:check_and_dump_fuse_W]
(--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7fc5ce8e18ea]
(-->
/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7fc5c5c93221] (-->
/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7fc5c5c94998] (-->
/lib64/libpthread.so.0(+0x7e25)[0x7fc5cd722e25] (-->
/lib64/libc.so.6(clone+0x6d)[0x7fc5ccfebbad] ))))) 0-glusterfs-fuse:
writing to fuse device failed: No such file or directory
[2020-03-08 11:40:54.362173] E [fuse-bridge.c:4188:fuse_xattr_cbk]
0-glusterfs-fuse: extended attribute not supported by the backend storage
[2020-03-08 11:42:35.859423] E [fuse-bridge.c:227:check_and_dump_fuse_W]
(--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7fc5ce8e18ea]
(-->
/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7fc5c5c93221] (-->
/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7fc5c5c94998] (-->
/lib64/libpthread.so.0(+0x7e25)[0x7fc5cd722e25] (-->
/lib64/libc.so.6(clone+0x6d)[0x7fc5ccfebbad] ))))) 0-glusterfs-fuse:
writing to fuse device failed: No such file or directory
[2020-03-08 11:44:24.906383] E [fuse-bridge.c:227:check_and_dump_fuse_W]
(--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7fc5ce8e18ea]
(-->
/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7fc5c5c93221] (-->
/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7fc5c5c94998] (-->
/lib64/libpthread.so.0(+0x7e25)[0x7fc5cd722e25] (-->
/lib64/libc.so.6(clone+0x6d)[0x7fc5ccfebbad] ))))) 0-glusterfs-fuse:
writing to fuse device failed: No such file or directory
[2020-03-08 11:47:45.474723] E [fuse-bridge.c:227:check_and_dump_fuse_W]
(--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7fc5ce8e18ea]
(-->
/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7fc5c5c93221] (-->
/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7fc5c5c94998] (-->
/lib64/libpthread.so.0(+0x7e25)[0x7fc5cd722e25] (-->
/lib64/libc.so.6(clone+0x6d)[0x7fc5ccfebbad] ))))) 0-glusterfs-fuse:
writing to fuse device failed: No such file or directory
[2020-03-08 11:50:58.127202] E [fuse-bridge.c:227:check_and_dump_fuse_W]
(--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7fc5ce8e18ea]
(-->
/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7fc5c5c93221] (-->
/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7fc5c5c94998] (-->
/lib64/libpthread.so.0(+0x7e25)[0x7fc5cd722e25] (-->
/lib64/libc.so.6(clone+0x6d)[0x7fc5ccfebbad] ))))) 0-glusterfs-fuse:
writing to fuse device failed: No such file or directory
[2020-03-08 11:52:55.616968] E [fuse-bridge.c:227:check_and_dump_fuse_W]
(--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7fc5ce8e18ea]
(-->
/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7fc5c5c93221] (-->
/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7fc5c5c94998] (-->
/lib64/libpthread.so.0(+0x7e25)[0x7fc5cd722e25] (-->
/lib64/libc.so.6(clone+0x6d)[0x7fc5ccfebbad] ))))) 0-glusterfs-fuse:
writing to fuse device failed: No such file or directory
[2020-03-08 11:56:24.039211] E [fuse-bridge.c:4188:fuse_xattr_cbk]
0-glusterfs-fuse: extended attribute not supported by the backend storage
[2020-03-08 11:57:56.031648] E [fuse-bridge.c:227:check_and_dump_fuse_W]
(--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7fc5ce8e18ea]
(-->
/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7fc5c5c93221] (-->
/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7fc5c5c94998] (-->
/lib64/libpthread.so.0(+0x7e25)[0x7fc5cd722e25] (-->
/lib64/libc.so.6(clone+0x6d)[0x7fc5ccfebbad] ))))) 0-glusterfs-fuse:
writing to fuse device failed: No such file or directory
[2020-03-08 12:06:19.686974] E [fuse-bridge.c:227:check_and_dump_fuse_W]
(--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7fc5ce8e18ea]
(-->
/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7fc5c5c93221] (-->
/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7fc5c5c94998] (-->
/lib64/libpthread.so.0(+0x7e25)[0x7fc5cd722e25] (-->
/lib64/libc.so.6(clone+0x6d)[0x7fc5ccfebbad] ))))) 0-glusterfs-fuse:
writing to fuse device failed: No such file or directory
[2020-03-08 12:12:41.888889] E [fuse-bridge.c:4188:fuse_xattr_cbk]
0-glusterfs-fuse: extended attribute not supported by the backend storage
I have seen nothing but these "No such file or directory" errors in the logs for five
days. I have switched the log level to DEBUG and will send those logs as well.
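
For reference, this is roughly how I raised the geo-replication log level (a sketch of
what I ran, using the session names from this thread; the exact config option names may
differ slightly between gluster versions, so please treat them as my assumption):

# gsyncd worker logs and the aux gluster mount logs for the session
gluster volume geo-replication media-storage slave-node::dr-media config log-level DEBUG
gluster volume geo-replication media-storage slave-node::dr-media config gluster-log-level DEBUG

# list the current session configuration to verify the change
gluster volume geo-replication media-storage slave-node::dr-media config

As noted further down in the thread, DEBUG output grows quickly, so I will drop back to
INFO once I have captured enough.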
Strahil Nikolov <hunter86_bg at yahoo.com> wrote on Thu, 12 Mar 2020 at 07:55:
> On March 11, 2020 10:17:05 PM GMT+02:00, "Etem Bayoğlu" <
> etembayoglu at gmail.com> wrote:
> >Hi Strahil,
> >
> >Thank you for your response. when I tail logs on both master and slave
> >I
> >get this:
> >
> >on slave, from
> >/var/log/glusterfs/geo-replication-slaves/<geo-session>/mnt-XXX.log file:
> >
> >[2020-03-11 19:53:32.721509] E
> >[fuse-bridge.c:227:check_and_dump_fuse_W]
> >(-->
/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7f78e10488ea]
> >(-->
> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7f78d83f6221]
> >(-->
> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7f78d83f7998]
> >(-->
> >/lib64/libpthread.so.0(+0x7e65)[0x7f78dfe89e65] (-->
> >/lib64/libc.so.6(clone+0x6d)[0x7f78df74f88d] ))))) 0-glusterfs-fuse:
> >writing to fuse device failed: No such file or directory
> >[2020-03-11 19:53:32.723758] E
> >[fuse-bridge.c:227:check_and_dump_fuse_W]
> >(-->
/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7f78e10488ea]
> >(-->
> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7f78d83f6221]
> >(-->
> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7f78d83f7998]
> >(-->
> >/lib64/libpthread.so.0(+0x7e65)[0x7f78dfe89e65] (-->
> >/lib64/libc.so.6(clone+0x6d)[0x7f78df74f88d] ))))) 0-glusterfs-fuse:
> >writing to fuse device failed: No such file or directory
> >
> >on master,
> >from /var/log/glusterfs/geo-replication/<geo-session>/mnt-XXX.log file:
> >
> >[2020-03-11 19:40:55.872002] E [fuse-bridge.c:4188:fuse_xattr_cbk]
> >0-glusterfs-fuse: extended attribute not supported by the backend
> >storage
> >[2020-03-11 19:40:58.389748] E
> >[fuse-bridge.c:227:check_and_dump_fuse_W]
> >(-->
/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7f1f4b9108ea]
> >(-->
> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7f1f42cc2221]
> >(-->
> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7f1f42cc3998]
> >(-->
> >/lib64/libpthread.so.0(+0x7e25)[0x7f1f4a751e25] (-->
> >/lib64/libc.so.6(clone+0x6d)[0x7f1f4a01abad] ))))) 0-glusterfs-fuse:
> >writing to fuse device failed: No such file or directory
> >[2020-03-11 19:41:08.214591] E
> >[fuse-bridge.c:227:check_and_dump_fuse_W]
> >(-->
/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7f1f4b9108ea]
> >(-->
> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7f1f42cc2221]
> >(-->
> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7f1f42cc3998]
> >(-->
> >/lib64/libpthread.so.0(+0x7e25)[0x7f1f4a751e25] (-->
> >/lib64/libc.so.6(clone+0x6d)[0x7f1f4a01abad] ))))) 0-glusterfs-fuse:
> >writing to fuse device failed: No such file or directory
> >[2020-03-11 19:53:59.275469] E
> >[fuse-bridge.c:227:check_and_dump_fuse_W]
> >(-->
/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7f1f4b9108ea]
> >(-->
> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7f1f42cc2221]
> >(-->
> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7f1f42cc3998]
> >(-->
> >/lib64/libpthread.so.0(+0x7e25)[0x7f1f4a751e25] (-->
> >/lib64/libc.so.6(clone+0x6d)[0x7f1f4a01abad] ))))) 0-glusterfs-fuse:
> >writing to fuse device failed: No such file or directory
> >
> >####################gsyncd.log outputs:######################
> >
> >from slave:
> >[2020-03-11 08:55:16.384085] I [repce(slave
> >master-node/srv/media-storage):96:service_loop] RepceServer:
> >terminating on
> >reaching EOF.
> >[2020-03-11 08:57:55.87364] I [resource(slave
> >master-node/srv/media-storage):1105:connect] GLUSTER: Mounting gluster
> >volume locally...
> >[2020-03-11 08:57:56.171372] I [resource(slave
> >master-node/srv/media-storage):1128:connect] GLUSTER: Mounted gluster
> >volume duration=1.0837
> >[2020-03-11 08:57:56.173346] I [resource(slave
> >master-node/srv/media-storage):1155:service_loop] GLUSTER: slave
> >listening
> >
> >from master:
> >[2020-03-11 20:08:55.145453] I [master(worker
> >/srv/media-storage):1991:syncjob] Syncer: Sync Time Taken
> >duration=134.9987num_files=4661 job=2 return_code=0
> >[2020-03-11 20:08:55.285871] I [master(worker
> >/srv/media-storage):1421:process] _GMaster: Entry Time Taken MKD=83
> >MKN=8109 LIN=0 SYM=0 REN=0 RMD=0 CRE=0 duration=17.0358 UNL=0
> >[2020-03-11 20:08:55.286082] I [master(worker
> >/srv/media-storage):1431:process] _GMaster: Data/Metadata Time Taken
> >SETA=83 SETX=0 meta_duration=0.9334 data_duration=135.2497 DATA=8109
> >XATT=0
> >[2020-03-11 20:08:55.286410] I [master(worker
> >/srv/media-storage):1441:process] _GMaster: Batch Completed
> >changelog_end=1583917610 entry_stime=None changelog_start=1583917610
> >stime=None duration=153.5185 num_changelogs=1 mode=xsync
> >[2020-03-11 20:08:55.315442] I [master(worker
> >/srv/media-storage):1681:crawl] _GMaster: processing xsync changelog
> >path=/var/lib/misc/gluster/gsyncd/media-storage_daredevil01.zingat.com_dr-media/srv-media-storage/xsync/XSYNC-CHANGELOG.1583917613
> >
> >
> >Thank you..
> >
> >Strahil Nikolov <hunter86_bg at yahoo.com> wrote on Wed, 11 Mar 2020 at 12:28:
> >
> >> On March 11, 2020 10:09:27 AM GMT+02:00, "Etem Bayoğlu" <
> >> etembayoglu at gmail.com> wrote:
> >> >Hello community,
> >> >
> >> >I've set up a glusterfs geo-replication node for disaster
recovery.
> >I
> >> >manage about 10TB media data on a gluster volume and I want to
sync
> >all
> >> >data to remote location over WAN. So, I created a slave node
volume
> >on
> >> >disaster recovery center on remote location and I've
started geo-rep
> >> >session. It has been transferred data fine up to about 800GB,
but
> >> >syncing
> >> >has stopped for three days despite gluster geo-rep status
active and
> >> >hybrid
> >> >crawl. There is no sending data. I've recreated session
and
> >restarted
> >> >but
> >> >still the same.
> >> >
> >> >#gluster volume geo-replication status
> >> >
> >> >MASTER NODE    MASTER VOL     MASTER BRICK        SLAVE USER    SLAVE                         SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
> >> >---------------------------------------------------------------------------------------------------------------------------------------------------
> >> >master-node    media-storage  /srv/media-storage  root          ssh://slave-node::dr-media    slave-node    Active    Hybrid Crawl    N/A
> >> >
> >> >Any ideas, please? Thank you.
> >>
> >> Hi Etem,
> >>
> >> Have you checked the log on both source and destination. Maybe
they
> >can
> >> hint you what the issue is.
> >>
> >> Best Regards,
> >> Strahil Nikolov
> >>
>
> Hi Etem,
>
> Nothing obvious....
> I don't like this one:
>
> [2020-03-11 19:53:32.721509] E
> >[fuse-bridge.c:227:check_and_dump_fuse_W]
> >(-->
/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7f78e10488ea]
> >(-->
> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7f78d83f6221]
> >(-->
> >/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7f78d83f7998]
> >(-->
> >/lib64/libpthread.so.0(+0x7e65)[0x7f78dfe89e65] (-->
> >/lib64/libc.so.6(clone+0x6d)[0x7f78df74f88d] ))))) 0-glusterfs-fuse:
> >writing to fuse device failed: No such file or directory
>
> Can you check the health of the slave volume (splitbrains, brick
> status,etc) ?
>
> Maybe you can check the logs and find when exactly the master stopped
> replicating and then checking the logs of the slave at that exact time .
>
> Also, you can increase the log level on the slave and then recreate the
> georep.
> For details, check:
>
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/administration_guide/configuring_the_log_level
>
> P.S.: Trace/debug can fill up your /var/log, so enable them for a short
> period of time.
>
> Best Regards,
> Strahil Nikolov
>
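
Following up on the suggestions above about checking the slave volume health and
recreating the session, this is roughly what I am running (a sketch using the volume
and session names from this thread; dr-media is a single-brick distribute volume, so
the split-brain check is likely not applicable and is listed only for completeness):

# on the slave (disaster recovery) cluster
gluster volume status dr-media
gluster volume info dr-media
gluster volume heal dr-media info split-brain    # only meaningful for replicated volumes

# on the master, to tear down and recreate the geo-rep session
gluster volume geo-replication media-storage slave-node::dr-media stop
gluster volume geo-replication media-storage slave-node::dr-media delete
gluster volume geo-replication media-storage slave-node::dr-media create push-pem force
gluster volume geo-replication media-storage slave-node::dr-media start

I am using force on create only because the session and a non-empty slave volume
already exist; that may not be needed in a clean setup.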