Aravinda
2015-Nov-18 05:17 UTC
[Gluster-users] Geo-replication started but not replicating
Looks like an I/O error on the slave while doing keep_alive. We can get more useful info about it from the slave log files.

On the slave nodes, look for errors in /var/log/glusterfs/geo-replication-slaves/*.log and /var/log/glusterfs/geo-replication-slaves/*.gluster.log

regards
Aravinda

On 11/17/2015 10:02 PM, Deepak Ravi wrote:
> I also noted that the second master, gfs2, alternates between Passive/Faulty.
> Not sure if this matters, but I have changed the /etc/hosts file to map
> 127.0.0.1 to gfs1 and so on, because otherwise my nodes would not be in
> "Peer in Cluster" state.
>
> Gluster version: 3.7.6-1
> OS: RHEL 7
>
> [root at gfs1 ~]# cat /var/log/glusterfs/geo-replication/gvol/ssh%3A%2F%2Froot%4054.172.172.245%3Agluster%3A%2F%2F127.0.0.1%3Axvol.log
> [2015-11-17 10:30:30.244277] I [monitor(monitor):362:distribute] <top>: slave bricks: [{'host': 'xfs1', 'dir': '/data/brick/xvol'}, {'host': 'xfs2', 'dir': '/data/brick/xvol'}]
> [2015-11-17 10:30:30.245239] I [monitor(monitor):383:distribute] <top>: worker specs: [('/data/brick/gvol', 'ssh://root at xfs2:gluster://localhost:xvol', 1)]
> [2015-11-17 10:30:30.433696] I [monitor(monitor):221:monitor] Monitor: ------------------------------------------------------------
> [2015-11-17 10:30:30.433882] I [monitor(monitor):222:monitor] Monitor: starting gsyncd worker
> [2015-11-17 10:30:30.561599] I [gsyncd(/data/brick/gvol):650:main_i] <top>: syncing: gluster://localhost:gvol -> ssh://root at xfs2:gluster://localhost:xvol
> [2015-11-17 10:30:30.573781] I [changelogagent(agent):75:__init__] ChangelogAgent: Agent listining...
> [2015-11-17 10:30:34.26421] I [master(/data/brick/gvol):83:gmaster_builder] <top>: setting up xsync change detection mode
> [2015-11-17 10:30:34.26695] I [master(/data/brick/gvol):404:__init__] _GMaster: using 'rsync' as the sync engine
> [2015-11-17 10:30:34.27324] I [master(/data/brick/gvol):83:gmaster_builder] <top>: setting up changelog change detection mode
> [2015-11-17 10:30:34.27477] I [master(/data/brick/gvol):404:__init__] _GMaster: using 'rsync' as the sync engine
> [2015-11-17 10:30:34.27873] I [master(/data/brick/gvol):83:gmaster_builder] <top>: setting up changeloghistory change detection mode
> [2015-11-17 10:30:34.28048] I [master(/data/brick/gvol):404:__init__] _GMaster: using 'rsync' as the sync engine
> [2015-11-17 10:30:36.40117] I [master(/data/brick/gvol):1229:register] _GMaster: xsync temp directory: /var/lib/misc/glusterfsd/gvol/ssh%3A%2F%2Froot%4054.172.172.245%3Agluster%3A%2F%2F127.0.0.1%3Axvol/0c4166e49b1b516d061ed475806364b9/xsync
> [2015-11-17 10:30:36.40409] I [resource(/data/brick/gvol):1432:service_loop] GLUSTER: Register time: 1447774236
> [2015-11-17 10:30:36.65299] I [master(/data/brick/gvol):530:crawlwrap] _GMaster: primary master with volume id f77a024e-a865-493e-9ce2-d7dbe99ee6d5 ...
> [2015-11-17 10:30:36.67856] I [master(/data/brick/gvol):539:crawlwrap] _GMaster: crawl interval: 1 seconds
> [2015-11-17 10:31:36.185137] I [master(/data/brick/gvol):552:crawlwrap] _GMaster: 0 crawls, 0 turns
> [2015-11-17 10:32:36.315582] I [master(/data/brick/gvol):552:crawlwrap] _GMaster: 0 crawls, 0 turns
> [2015-11-17 10:33:36.438072] I [master(/data/brick/gvol):552:crawlwrap] _GMaster: 0 crawls, 0 turns
>
> [root at gfs2 ~]# cat /var/log/glusterfs/geo-replication/gvol/ssh%3A%2F%2Froot%4054.172.172.245%3Agluster%3A%2F%2F127.0.0.1%3Axvol.log | less
> [2015-11-17 10:30:30.498424] I [monitor(monitor):362:distribute] <top>: slave bricks: [{'host': 'xfs1', 'dir': '/data/brick/xvol'}, {'host': 'xfs2', 'dir': '/data/brick/xvol'}]
> [2015-11-17 10:30:30.499473] I [monitor(monitor):383:distribute] <top>: worker specs: [('/data/brick/gvol', 'ssh://root at xfs1:gluster://localhost:xvol', 1)]
> [2015-11-17 10:30:30.679028] I [monitor(monitor):221:monitor] Monitor: ------------------------------------------------------------
> [2015-11-17 10:30:30.679259] I [monitor(monitor):222:monitor] Monitor: starting gsyncd worker
> [2015-11-17 10:30:30.807980] I [gsyncd(/data/brick/gvol):650:main_i] <top>: syncing: gluster://localhost:gvol -> ssh://root at xfs1:gluster://localhost:xvol
> [2015-11-17 10:30:30.820440] I [changelogagent(agent):75:__init__] ChangelogAgent: Agent listining...
> [2015-11-17 10:30:34.358032] I [master(/data/brick/gvol):83:gmaster_builder] <top>: setting up xsync change detection mode
> [2015-11-17 10:30:34.358304] I [master(/data/brick/gvol):404:__init__] _GMaster: using 'rsync' as the sync engine
> [2015-11-17 10:30:34.359335] I [master(/data/brick/gvol):83:gmaster_builder] <top>: setting up changelog change detection mode
> [2015-11-17 10:30:34.359496] I [master(/data/brick/gvol):404:__init__] _GMaster: using 'rsync' as the sync engine
> [2015-11-17 10:30:34.359890] I [master(/data/brick/gvol):83:gmaster_builder] <top>: setting up changeloghistory change detection mode
> [2015-11-17 10:30:34.360044] I [master(/data/brick/gvol):404:__init__] _GMaster: using 'rsync' as the sync engine
> [2015-11-17 10:30:36.371203] I [master(/data/brick/gvol):1229:register] _GMaster: xsync temp directory: /var/lib/misc/glusterfsd/gvol/ssh%3A%2F%2Froot%4054.172.172.245%3Agluster%3A%2F%2F127.0.0.1%3Axvol/0c4166e49b1b516d061ed475806364b9/xsync
> [2015-11-17 10:30:36.371514] I [resource(/data/brick/gvol):1432:service_loop] GLUSTER: Register time: 1447774236
> [2015-11-17 10:30:36.383291] I [master(/data/brick/gvol):530:crawlwrap] _GMaster: primary master with volume id f77a024e-a865-493e-9ce2-d7dbe99ee6d5 ...
> [2015-11-17 10:30:36.386276] I [master(/data/brick/gvol):539:crawlwrap] _GMaster: crawl interval: 1 seconds
> [2015-11-17 10:30:46.558255] E [repce(/data/brick/gvol):207:__call__] RepceClient: call 29036:140624661567232:1447774246.47 (keep_alive) failed on peer with OSError
> [2015-11-17 10:30:46.558463] E [syncdutils(/data/brick/gvol):276:log_raise_exception] <top>: FAIL:
> Traceback (most recent call last):
>   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 306, in twrap
>     tf(*aa)
>   File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 438, in keep_alive
>     cls.slave.server.keep_alive(vi)
>   File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 226, in __call__
>     return self.ins(self.meth, *a)
>   File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 208, in __call__
>     raise res
> OSError: [Errno 5] Input/output error
>
> -----------
>
> [root at gfs1 ~]# ps aux | grep gsyncd
> root 15837 0.0 1.0 368584 11148 ? Ssl 11:08 0:00 /usr/bin/python /usr/libexec/glusterfs/python/syncdaemon/gsyncd.py --path=/data/brick/gvol --monitor -c /var/lib/glusterd/geo-replication/gvol_xfs1_xvol/gsyncd.conf --iprefix=/var :gvol --glusterd-uuid=c6e8cdef-bc46-4684-9c75-fc348fefb95e xfs1::xvol
> root 15867 0.0 1.7 884044 18064 ? Ssl 11:08 0:00 python /usr/libexec/glusterfs/python/syncdaemon/gsyncd.py --path=/data/brick/gvol -c /var/lib/glusterd/geo-replication/gvol_xfs1_xvol/gsyncd.conf --iprefix=/var :gvol --glusterd-uuid=c6e8cdef-bc46-4684-9c75-fc348fefb95e xfs1::xvol -N -p --slave-id ff6d57c8-cfb5-40b3-843f-bcd79cdd6164 --local-path /data/brick/gvol --agent --rpc-fd 7,10,9,8
> root 15868 0.0 1.7 847644 17292 ? Sl 11:08 0:00 python /usr/libexec/glusterfs/python/syncdaemon/gsyncd.py --path=/data/brick/gvol -c /var/lib/glusterd/geo-replication/gvol_xfs1_xvol/gsyncd.conf --iprefix=/var :gvol --glusterd-uuid=c6e8cdef-bc46-4684-9c75-fc348fefb95e xfs1::xvol -N -p --slave-id ff6d57c8-cfb5-40b3-843f-bcd79cdd6164 --feedback-fd 12 --local-path /data/brick/gvol --local-id .%2Fdata%2Fbrick%2Fgvol --rpc-fd 9,8,7,10 --subvol-num 1 --resource-remote ssh://root at xfs2:gluster://localhost:xvol
> root 15879 0.0 0.4 80384 4244 ? S 11:08 0:00 ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-5bwc6n/21cd0d364db39da791c9bc6dcf62c55b.sock root at xfs2 /nonexistent/gsyncd --session-owner f77a024e-a865-493e-9ce2-d7dbe99ee6d5 -N --listen --timeout 120 gluster://localhost:xvol
> root 15887 0.1 3.9 630404 40476 ? Ssl 11:08 0:02 /usr/sbin/glusterfs --aux-gfid-mount --acl --log-file=/var/log/glusterfs/geo-replication/gvol/ssh%3A%2F%2Froot%4054.172.172.245%3Agluster%3A%2F%2F127.0.0.1%3Axvol.%2Fdata%2Fbrick%2Fgvol.gluster.log --volfile-server=localhost --volfile-id=gvol --client-pid=-1 /tmp/gsyncd-aux-mount-IOxY7_
> root 16540 0.0 0.0 112640 956 pts/0 R+ 11:26 0:00 grep --color=auto gsyncd
>
> --------------
>
> [root at gfs2 ec2-user]# ps aux | grep gsyncd
> root 3099 0.0 1.3 368488 13568 ? Ssl 11:08 0:00 /usr/bin/python /usr/libexec/glusterfs/python/syncdaemon/gsyncd.py --path=/data/brick/gvol --monitor -c /var/lib/glusterd/geo-replication/gvol_xfs1_xvol/gsyncd.conf --iprefix=/var :gvol --glusterd-uuid=449f6672-fdcd-480b-870d-51e1ed92236c xfs1::xvol
> root 6618 1.0 1.9 883944 19872 ? Ssl 11:27 0:00 python /usr/libexec/glusterfs/python/syncdaemon/gsyncd.py --path=/data/brick/gvol -c /var/lib/glusterd/geo-replication/gvol_xfs1_xvol/gsyncd.conf --iprefix=/var :gvol --glusterd-uuid=449f6672-fdcd-480b-870d-51e1ed92236c xfs1::xvol -N -p --slave-id ff6d57c8-cfb5-40b3-843f-bcd79cdd6164 --local-path /data/brick/gvol --agent --rpc-fd 8,11,10,9
> root 6619 1.1 1.4 847548 15004 ? Sl 11:27 0:00 python /usr/libexec/glusterfs/python/syncdaemon/gsyncd.py --path=/data/brick/gvol -c /var/lib/glusterd/geo-replication/gvol_xfs1_xvol/gsyncd.conf --iprefix=/var :gvol --glusterd-uuid=449f6672-fdcd-480b-870d-51e1ed92236c xfs1::xvol -N -p --slave-id ff6d57c8-cfb5-40b3-843f-bcd79cdd6164 --feedback-fd 13 --local-path /data/brick/gvol --local-id .%2Fdata%2Fbrick%2Fgvol --rpc-fd 10,9,8,11 --subvol-num 1 --resource-remote ssh://root at xfs1:gluster://localhost:xvol
> root 6631 0.3 0.4 80384 4240 ? S 11:27 0:00 ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-WIfjHQ/25f2a0dc75697352a40d6471e241edf7.sock root at xfs1 /usr/libexec/glusterfs/gsyncd --session-owner f77a024e-a865-493e-9ce2-d7dbe99ee6d5 -N --listen --timeout 120 gluster://localhost:xvol
> root 6638 1.0 3.2 630408 33416 ? Ssl 11:27 0:00 /usr/sbin/glusterfs --aux-gfid-mount --acl --log-file=/var/log/glusterfs/geo-replication/gvol/ssh%3A%2F%2Froot%4054.172.172.245%3Agluster%3A%2F%2F127.0.0.1%3Axvol.%2Fdata%2Fbrick%2Fgvol.gluster.log --volfile-server=localhost --volfile-id=gvol --client-pid=-1 /tmp/gsyncd-aux-mount-o44DsN
> root 6692 0.0 0.0 112640 960 pts/0 R+ 11:28 0:00 grep --color=auto gsyncd
>
> ---------------------
>
> [root at xfs1 ~]# ps aux | grep gsyncd
> root 2753 0.5 1.2 585232 12576 ? Ssl 11:28 0:00 /usr/bin/python /usr/libexec/glusterfs/python/syncdaemon/gsyncd.py --session-owner f77a024e-a865-493e-9ce2-d7dbe99ee6d5 -N --listen --timeout 120 gluster://localhost:xvol -c /var/lib/glusterd/geo-replication/gsyncd_template.conf
> root 2773 0.3 3.4 630412 34728 ? Ssl 11:28 0:00 /usr/sbin/glusterfs --aux-gfid-mount --acl --log-file=/var/log/glusterfs/geo-replication-slaves/f77a024e-a865-493e-9ce2-d7dbe99ee6d5:gluster%3A%2F%2F127.0.0.1%3Axvol.gluster.log --volfile-server=localhost --volfile-id=xvol --client-pid=-1 /tmp/gsyncd-aux-mount-une5yr
> root 2793 0.0 0.0 112640 956 pts/0 R+ 11:28 0:00 grep --color=auto gsyncd
> [root at xfs1 ~]#
>
> -----------------------
>
> [root at xfs2 ec2-user]# ps aux | grep gsyncd
> root 28921 0.0 1.2 585236 12668 ? Ssl 11:08 0:00 /usr/bin/python /usr/libexec/glusterfs/python/syncdaemon/gsyncd.py --session-owner f77a024e-a865-493e-9ce2-d7dbe99ee6d5 -N --listen --timeout 120 gluster://localhost:xvol -c /var/lib/glusterd/geo-replication/gsyncd_template.conf
> root 28941 0.2 3.7 630412 38280 ? Ssl 11:08 0:02 /usr/sbin/glusterfs --aux-gfid-mount --acl --log-file=/var/log/glusterfs/geo-replication-slaves/f77a024e-a865-493e-9ce2-d7dbe99ee6d5:gluster%3A%2F%2F127.0.0.1%3Axvol.gluster.log --volfile-server=localhost --volfile-id=xvol --client-pid=-1 /tmp/gsyncd-aux-mount-cZvAEH
> root 29029 0.0 0.0 112640 956 pts/0 R+ 11:29 0:00 grep --color=auto gsyncd
> [root at xfs2 ec2-user]#
>
> On Tue, Nov 17, 2015 at 12:39 AM, Aravinda <avishwan at redhat.com> wrote:
>> One status row should show Active and the other should show Passive.
>> Please provide logs from the gfs1 and gfs2 nodes (/var/log/glusterfs/geo-replication/gvol/*.log)
>>
>> Also please let us know,
>> 1. Gluster version and OS
>> 2. output of `ps aux | grep gsyncd` from Master nodes and Slave nodes
>>
>> regards
>> Aravinda
>>
>> On 11/17/2015 02:09 AM, Deepak Ravi wrote:
>>
>> Hi all
>>
>> I'm working on a geo-replication setup that I'm having issues with.
>>
>> Situation:
>>
>> - In the east region of AWS, I created a replicated volume between 2 nodes; let's call this volume gvol
>> - In the west region of AWS, I created another replicated volume between 2 nodes; let's call this volume xvol
>> - Geo-replication was created and started successfully:
>>
>> [root at gfs1 mnt]# gluster volume geo-replication gvol xfs1::xvol status
>>
>> MASTER NODE    MASTER VOL    MASTER BRICK        SLAVE USER    SLAVE         SLAVE NODE    STATUS     CRAWL STATUS    LAST_SYNCED
>> -------------------------------------------------------------------------------------------------------------------------------
>> gfs1           gvol          /data/brick/gvol    root          xfs1::xvol    N/A           Passive    N/A             N/A
>> gfs2           gvol          /data/brick/gvol    root          xfs1::xvol    N/A           Passive    N/A             N/A
>>
>> The data on the master nodes (gfs1 and gfs2) was not being replicated to xfs1 at all. I tried restarting the services and it still didn't help. Looking at the log files didn't help me much because I didn't know what I should be looking for.
>>
>> Can someone point me in the right direction?
>>
>> Thanks
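To pull just those errors out quickly, something along these lines on each slave node should do (a sketch: it assumes the default log locations, and the real filenames embed the session name):

    grep -E '\] [EW] \[' /var/log/glusterfs/geo-replication-slaves/*.log \
        /var/log/glusterfs/geo-replication-slaves/*.gluster.log | tail -n 50

Gluster log lines carry a one-letter severity right after the timestamp, so matching E (error) and W (warning) entries is usually enough to find where things first went wrong.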
Deepak Ravi
2015-Nov-18 06:29 UTC
[Gluster-users] Geo-replication started but not replicating
Alright, here you go.

Slaves xfs1 and xfs2:

[root at xfs1 ~]# cat /var/log/glusterfs/geo-replication-slaves/f77a024e-a865-493e-9ce2-d7dbe99ee6d5\:gluster%3A%2F%2F127.0.0.1%3Axvol.gluster.log | less
[2015-11-17 15:30:32.082984] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2015-11-17 15:30:32.083124] I [rpc-clnt.c:1847:rpc_clnt_reconfig] 0-xvol-client-0: changing port to 49152 (from 0)
[2015-11-17 15:30:32.085969] E [socket.c:3021:socket_connect] 0-xvol-client-0: connection attempt on 127.0.0.1:24007 failed, (Invalid argument)
[2015-11-17 15:30:32.086037] W [socket.c:588:__socket_rwv] 0-xvol-client-0: writev on 54.172.172.245:49152 failed (Broken pipe)
[2015-11-17 15:30:32.086301] E [rpc-clnt.c:362:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7fc03801ea82] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fc037de9a3e] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fc037de9b4e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7a)[0x7fc037deb4da] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x88)[0x7fc037debd08] ))))) 0-xvol-client-0: forced unwinding frame type(GF-DUMP) op(DUMP(1)) called at 2015-11-17 15:30:32.086051 (xid=0x3)
[2015-11-17 15:30:32.086316] W [MSGID: 114032] [client-handshake.c:1623:client_dump_version_cbk] 0-xvol-client-0: received RPC status error [Transport endpoint is not connected]
[2015-11-17 15:30:32.086331] I [MSGID: 114018] [client.c:2042:client_rpc_notify] 0-xvol-client-0: disconnected from xvol-client-0. Client process will keep trying to connect to glusterd until brick's port is available
[2015-11-17 15:30:32.086670] I [rpc-clnt.c:1847:rpc_clnt_reconfig] 0-xvol-client-1: changing port to 49152 (from 0)
[2015-11-17 15:30:32.090173] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-xvol-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2015-11-17 15:30:32.090911] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-xvol-client-1: Connected to xvol-client-1, attached to remote volume '/data/brick/xvol'.
[2015-11-17 15:30:32.090924] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-xvol-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2015-11-17 15:30:32.090972] I [MSGID: 108005] [afr-common.c:3841:afr_notify] 0-xvol-replicate-0: Subvolume 'xvol-client-1' came back up; going online.
[2015-11-17 15:30:32.094306] I [fuse-bridge.c:5137:fuse_graph_setup] 0-fuse: switched to graph 0
[2015-11-17 15:30:32.094349] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-xvol-client-1: Server lk version = 1
[2015-11-17 15:30:32.094463] I [fuse-bridge.c:4030:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22 kernel 7.22
[2015-11-17 15:30:32.098769] W [MSGID: 108027] [afr-common.c:2100:afr_discover_done] 0-xvol-replicate-0: no read subvols for /
[2015-11-17 15:30:36.075211] I [rpc-clnt.c:1847:rpc_clnt_reconfig] 0-xvol-client-0: changing port to 49152 (from 0)
[2015-11-17 15:30:36.077673] E [socket.c:3021:socket_connect] 0-xvol-client-0: connection attempt on 127.0.0.1:24007 failed, (Invalid argument)
[2015-11-17 15:30:36.077716] W [socket.c:588:__socket_rwv] 0-xvol-client-0: writev on 54.172.172.245:49152 failed (Broken pipe)
[2015-11-17 15:30:36.077963] E [rpc-clnt.c:362:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7fc03801ea82] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fc037de9a3e] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fc037de9b4e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7a)[0x7fc037deb4da] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x88)[0x7fc037debd08] ))))) 0-xvol-client-0: forced unwinding frame type(GF-DUMP) op(DUMP(1)) called at 2015-11-17 15:30:36.077724 (xid=0x6)
[2015-11-17 15:30:32.099560] W [MSGID: 108027] [afr-common.c:2100:afr_discover_done] 0-xvol-replicate-0: no read subvols for /
[2015-11-17 15:30:36.077980] W [MSGID: 114032] [client-handshake.c:1623:client_dump_version_cbk] 0-xvol-client-0: received RPC status error [Transport endpoint is not connected]
[2015-11-17 15:30:36.078003] I [MSGID: 114018] [client.c:2042:client_rpc_notify] 0-xvol-client-0: disconnected from xvol-client-0. Client process will keep trying to connect to glusterd until brick's port is available
[2015-11-17 15:30:40.080814] I [rpc-clnt.c:1847:rpc_clnt_reconfig] 0-xvol-client-0: changing port to 49152 (from 0)
[2015-11-17 15:30:40.083175] E [socket.c:3021:socket_connect] 0-xvol-client-0: connection attempt on 127.0.0.1:24007 failed, (Invalid argument)
[2015-11-17 15:30:40.083215] W [socket.c:588:__socket_rwv] 0-xvol-client-0: writev on 54.172.172.245:49152 failed (Broken pipe)

-----------------------------------------------------------------------------------------

[root at xfs1 ~]# cat /var/log/glusterfs/geo-replication-slaves/f77a024e-a865-493e-9ce2-d7dbe99ee6d5\:gluster%3A%2F%2F127.0.0.1%3Axvol.log | less
[2015-11-17 10:30:32.41258] I [gsyncd(slave):650:main_i] <top>: syncing: gluster://localhost:xvol
[2015-11-17 10:30:33.92473] I [resource(slave):844:service_loop] GLUSTER: slave listening
[2015-11-17 10:30:46.525188] E [repce(slave):117:worker] <top>: call failed:
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 113, in worker
    res = getattr(self.obj, rmeth)(*in_data[2:])
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 803, in keep_alive
    Xattr.lsetxattr('.', key, val)
  File "/usr/libexec/glusterfs/python/syncdaemon/libcxattr.py", line 78, in lsetxattr
    cls.raise_oserr()
  File "/usr/libexec/glusterfs/python/syncdaemon/libcxattr.py", line 37, in raise_oserr
    raise OSError(errn, os.strerror(errn))
OSError: [Errno 5] Input/output error

-----------------------------------------------------------------------------------------
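The traceback above is the keep_alive failure itself: on the slave, gsyncd tries to set an extended attribute on the root of its aux mount of xvol (the Xattr.lsetxattr('.', key, val) call in resource.py) and gets EIO back. Together with the "no read subvols for /" warnings and the failed connects to 127.0.0.1:24007 in the .gluster.log above, that points at the slave mount being unhealthy rather than at geo-replication itself. A quick way to test outside gsyncd, as a sketch (the mount point and xattr name below are made up for the test):

    [root at xfs1 ~]# gluster volume status xvol    # are both bricks online?
    [root at xfs1 ~]# getent hosts xfs1 xfs2        # do the brick hostnames still resolve sensibly after the /etc/hosts edits?
    [root at xfs1 ~]# mkdir -p /mnt/xvol-test
    [root at xfs1 ~]# mount -t glusterfs localhost:/xvol /mnt/xvol-test
    [root at xfs1 ~]# setfattr -n trusted.georep-test -v 1 /mnt/xvol-test

If that setfattr also fails with "Input/output error", fixing the slave volume/mount (a brick the client cannot reach, so AFR has no readable copy of /) should fix keep_alive as well.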
[root at xfs2 ~]# cat /var/log/glusterfs/geo-replication-slaves/f77a024e-a865-493e-9ce2-d7dbe99ee6d5\:gluster%3A%2F%2F127.0.0.1%3Axvol.gluster.log | less
[2015-11-17 15:30:31.764788] I [rpc-clnt.c:1847:rpc_clnt_reconfig] 0-xvol-client-0: changing port to 49152 (from 0)
[2015-11-17 15:30:31.767367] I [rpc-clnt.c:1847:rpc_clnt_reconfig] 0-xvol-client-1: changing port to 49152 (from 0)
[2015-11-17 15:30:31.769692] E [socket.c:3021:socket_connect] 0-xvol-client-1: connection attempt on 127.0.0.1:24007 failed, (Invalid argument)
[2015-11-17 15:30:31.769761] W [socket.c:588:__socket_rwv] 0-xvol-client-1: writev on 54.172.132.47:49152 failed (Broken pipe)
[2015-11-17 15:30:31.770043] E [rpc-clnt.c:362:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7f976f608a82] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7f976f3d3a3e] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f976f3d3b4e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7a)[0x7f976f3d54da] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x88)[0x7f976f3d5d08] ))))) 0-xvol-client-1: forced unwinding frame type(GF-DUMP) op(DUMP(1)) called at 2015-11-17 15:30:31.769770 (xid=0x3)
[2015-11-17 15:30:31.770058] W [MSGID: 114032] [client-handshake.c:1623:client_dump_version_cbk] 0-xvol-client-1: received RPC status error [Transport endpoint is not connected]
[2015-11-17 15:30:31.770073] I [MSGID: 114018] [client.c:2042:client_rpc_notify] 0-xvol-client-1: disconnected from xvol-client-1. Client process will keep trying to connect to glusterd until brick's port is available
[2015-11-17 15:30:31.770392] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-xvol-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2015-11-17 15:30:31.771143] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-xvol-client-0: Connected to xvol-client-0, attached to remote volume '/data/brick/xvol'.
[2015-11-17 15:30:31.771160] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-xvol-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2015-11-17 15:30:31.771207] I [MSGID: 108005] [afr-common.c:3841:afr_notify] 0-xvol-replicate-0: Subvolume 'xvol-client-0' came back up; going online.
[2015-11-17 15:30:31.774360] I [fuse-bridge.c:5137:fuse_graph_setup] 0-fuse: switched to graph 0
[2015-11-17 15:30:31.774412] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-xvol-client-0: Server lk version = 1
[2015-11-17 15:30:31.774547] I [fuse-bridge.c:4030:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22 kernel 7.22
[2015-11-17 15:30:35.755595] I [rpc-clnt.c:1847:rpc_clnt_reconfig] 0-xvol-client-1: changing port to 49152 (from 0)
[2015-11-17 15:30:35.757773] E [socket.c:3021:socket_connect] 0-xvol-client-1: connection attempt on 127.0.0.1:24007 failed, (Invalid argument)
[2015-11-17 15:30:35.757807] W [socket.c:588:__socket_rwv] 0-xvol-client-1: writev on 54.172.132.47:49152 failed (Broken pipe)

-------------------------------------------------------------------------------------------------

[root at xfs2 ~]# cat /var/log/glusterfs/geo-replication-slaves/f77a024e-a865-493e-9ce2-d7dbe99ee6d5\:gluster%3A%2F%2F127.0.0.1%3Axvol.log
[2015-11-17 10:30:31.723133] I [gsyncd(slave):650:main_i] <top>: syncing: gluster://localhost:xvol
[2015-11-17 10:30:32.776576] I [resource(slave):844:service_loop] GLUSTER: slave listening
[2015-11-17 11:07:15.296569] I [repce(slave):92:service_loop] RepceServer: terminating on reaching EOF.
[2015-11-17 11:07:15.297107] I [syncdutils(slave):220:finalize] <top>: exiting.
[2015-11-17 11:08:18.86194] I [gsyncd(slave):650:main_i] <top>: syncing: gluster://localhost:xvol
[2015-11-17 11:08:19.135596] I [resource(slave):844:service_loop] GLUSTER: slave listening
[root at xfs2 ~]#

Thanks
Deepak

On Wed, Nov 18, 2015 at 12:17 AM, Aravinda <avishwan at redhat.com> wrote:
> Looks like an I/O error on the slave while doing keep_alive. We can get more
> useful info about it from the slave log files.
>
> On the slave nodes, look for errors in
> /var/log/glusterfs/geo-replication-slaves/*.log and
> /var/log/glusterfs/geo-replication-slaves/*.gluster.log
>
> regards
> Aravinda
--
~Deepak