search for: syncdutils

Displaying 20 results from an estimated 27 matches for "syncdutils".

2012 Mar 20
1
issues with geo-replication
...ker [2012-03-20 19:29:10.168212] I [gsyncd:289:main_i] <top>: syncing: gluster://localhost:myvol -> ssh://root@remoteip:/data/path [2012-03-20 19:29:10.222372] D [repce:130:push] RepceClient: call 23154:47903647023584:1332271750.22 __repce_version__() ... [2012-03-20 19:29:10.504734] E [syncdutils:133:exception] <top>: FAIL: Traceback (most recent call last): File "/opt/glusterfs/3.2.5/local/libexec//glusterfs/python/syncdaemon/syncdutils.py", line 154, in twrap tf(*aa) File "/opt/glusterfs/3.2.5/local/libexec//glusterfs/python/syncdaemon/repce.py", line 117...
2014 Jun 27
1
geo-replication status faulty
...I [monitor(monitor):130:monitor] Monitor: starting gsyncd worker [2014-06-26 17:09:10.137434] I [gsyncd(/data/glusterfs/vol0/brick1/brick):532:main_i] <top>: syncing: gluster://localhost:gluster_vol0 -> ssh://root@node003:gluster://localhost:gluster_vol1 [2014-06-26 17:09:10.258044] E [syncdutils(/data/glusterfs/vol0/brick0/brick):223:log_raise_exception] <top>: connection to peer is broken [2014-06-26 17:09:10.259278] W [syncdutils(/data/glusterfs/vol0/brick0/brick):227:log_raise_exception] <top>: !!!!!!!!!!!!! [2014-06-26 17:09:10.260755] W [syncdutils(/data/glusterfs/vol0/bri...
2018 Jul 13
2
Upgrade to 4.1.1 geo-replication does not work
...master node: [2018-07-11 18:42:48.941760] I [changelogagent(/urd-gds/gluster):73:__init__] ChangelogAgent: Agent listining... [2018-07-11 18:42:48.947567] I [resource(/urd-gds/gluster):1780:connect_remote] SSH: Initializing SSH connection between master and slave... [2018-07-11 18:42:49.363514] E [syncdutils(/urd-gds/gluster):304:log_raise_exception] <top>: connection to peer is broken [2018-07-11 18:42:49.364279] E [resource(/urd-gds/gluster):210:errlog] Popen: command returned error cmd=ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.p...
2018 Mar 06
1
geo replication
...c/glusterfsd/testtomcat/ssh%3A%2F%2Froot%40172.16.81.101%3Agluster%3A%2F%2F127.0.0.1%3Atesttomcat/b6a7905143e15d9b079b804c0a8ebf42 [2018-03-06 08:32:51.873176] I [resource(/gfs/testtomcat/mount):1653:service_loop] GLUSTER: Register time time=1520325171 [2018-03-06 08:32:51.926801] E [syncdutils(/gfs/testtomcat/mount):299:log_raise_exception] <top>: master volinfo unavailable [2018-03-06 08:32:51.936203] I [syncdutils(/gfs/testtomcat/mount):271:finalize] <top>: exiting. [2018-03-06 08:32:51.938469] I [repce(/gfs/testtomcat/mount):92:service_loop] RepceServer: terminating on rea...
2017 Sep 29
1
Gluster geo replication volume is faulty
...cls . -e ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-fdyDHm/78cf8b204207154de59d7ac32eee737f.sock --compress geo-rep-user@gfs6:/proc/17554/cwd error=12 [2017-09-29 15:53:29.797259] I [syncdutils(/gfs/brick2/gv0):271:finalize] <top>: exiting. [2017-09-29 15:53:29.799386] I [repce(/gfs/brick2/gv0):92:service_loop] RepceServer: terminating on reaching EOF. [2017-09-29 15:53:29.799570] I [syncdutils(/gfs/brick2/gv0):271:finalize] <top>: exiting. [2017-09-29 15:53:30.105407] I [moni...
2017 Oct 06
0
Gluster geo replication volume is faulty
...ion=no -oStrictHostKeyChecking=no > -i /var/lib/glusterd/geo-replication/secret.pem -p 22 > -oControlMaster=auto -S > /tmp/gsyncd-aux-ssh-fdyDHm/78cf8b204207154de59d7ac32eee737f.sock > --compress geo-rep-user@gfs6:/proc/17554/cwd error=12 > [2017-09-29 15:53:29.797259] I > [syncdutils(/gfs/brick2/gv0):271:finalize] <top>: exiting. > [2017-09-29 15:53:29.799386] I > [repce(/gfs/brick2/gv0):92:service_loop] RepceServer: terminating on > reaching EOF. > [2017-09-29 15:53:29.799570] I > [syncdutils(/gfs/brick2/gv0):271:finalize] <top>: exiting. > [20...
2017 Aug 17
0
Extended attributes not supported by the backend storage
...he exact same point every time I try. In the master, I'm getting the following error messages: [2017-08-16 12:57:45.205311] E [repce(/mnt/storage/lapbacks):207:__call__] RepceClient: call 17769:140586894673664:1502888257.97 (entry_ops) failed on peer with OSError [2017-08-16 12:57:45.205593] E [syncdutils(/mnt/storage/lapbacks):312:log_raise_exception] <top>: FAIL: Traceback (most recent call last): File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 204, in main main_i() File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py"...
2017 Aug 16
0
Geo replication faulty-extended attribute not supported by the backend storage
...e exact same point every time I try. In the master, I'm getting the following error messages: [2017-08-16 12:57:45.205311] E [repce(/mnt/storage/lapbacks):207:__call__] RepceClient: call 17769:140586894673664:1502888257.97 (entry_ops) failed on peer with OSError [2017-08-16 12:57:45.205593] E [syncdutils(/mnt/storage/lapbacks):312:log_raise_exception] <top>: FAIL: Traceback (most recent call last): File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 204, in main main_i() File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py"...
2011 May 03
3
Issue with geo-replication and nfs auth
...s auth (since initial release). Geo-replication --------------- System : Debian 6.0 amd64 Glusterfs: 3.2.0 MASTER (volume) => SLAVE (directory) For some volumes it works, but for others I can't enable geo-replication and get this error with a faulty status: [2011-05-03 09:57:40.315774] E [syncdutils:131:log_raise_exception] <top>: FAIL: Traceback (most recent call last): File "/usr/lib/glusterfs/glusterfs/python/syncdaemon/syncdutils.py", line 152, in twrap tf(*aa) File "/usr/lib/glusterfs/glusterfs/python/syncdaemon/repce.py", line 118, in listen rid, exc...
2013 Mar 07
4
[Gluster-devel] glusterfs-3.4.0alpha2 released
RPM: http://bits.gluster.org/pub/gluster/glusterfs/3.4.0alpha2/ SRC: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.4.0alpha2.tar.gz This release is made off jenkins-release-19 -- Gluster Build System
2024 Jan 27
1
Geo-replication status is getting Faulty after few seconds
...:crawl] _GMaster: starting history crawl [{turns=1}, {stime=(1705935991, 0)}, {etime=1706125889}, {entry_stime=(1705935991, 0)}] [2024-01-24 19:51:30.251965] I [master(worker /opt/tier1data2019/brick):1605:crawl] _GMaster: slave's time [{stime=(1705935991, 0)}] [2024-01-24 19:51:30.376715] E [syncdutils(worker /opt/tier1data2019/brick):346:log_raise_exception] <top>: Gluster Mount process exited [{error=ENOTCONN}] [2024-01-24 19:51:30.991856] I [monitor(monitor):228:monitor] Monitor: worker died in startup phase [{brick=/opt/tier1data2019/brick}] [2024-01-24 19:51:30.993608] I [gsyncdstatu...
2018 Jan 19
2
geo-replication command rsync returned with 3
...swordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-o2j6UA/db73a3bfe7357366aff777392fc60a7e.sock --compress root@gl-slave-01-int:/proc/398/cwd" returned with 3 [2018-01-19 14:23:27.158600] I [syncdutils(/brick1/mvol1):220:finalize] <top>: exiting. [2018-01-19 14:23:27.162561] I [repce(agent):92:service_loop] RepceServer: terminating on reaching EOF. [2018-01-19 14:23:27.163053] I [syncdutils(agent):220:finalize] <top>: exiting. [2018-01-19 14:23:28.61029] I [monitor(monitor):344:mon...
2024 Jan 24
1
Geo-replication status is getting Faulty after few seconds
...76:crawl] _GMaster: starting history crawl [{turns=1}, {stime=(1705935991, 0)}, {etime=1706125889}, {entry_stime=(1705935991, 0)}] [2024-01-24 19:51:30.251965] I [master(worker /opt/tier1data2019/brick):1605:crawl] _GMaster: slave's time [{stime=(1705935991, 0)}] [2024-01-24 19:51:30.376715] E [syncdutils(worker /opt/tier1data2019/brick):346:log_raise_exception] <top>: Gluster Mount process exited [{error=ENOTCONN}] [2024-01-24 19:51:30.991856] I [monitor(monitor):228:monitor] Monitor: worker died in startup phase [{brick=/opt/tier1data2019/brick}] [2024-01-24 19:51:30.993608] I [gsyncdstatus(...
2024 Jan 22
1
Geo-replication status is getting Faulty after few seconds
...ues where geo-replication suddenly stopped and became stuck in a loop of Initializing..., Active.. Faulty on master1, while master2 remained in passive mode. Upon checking the gsyncd.log on the master1 node, we observed the following error (please refer to the attached logs for more details): E [syncdutils(worker /opt/tier1data2019/brick):346:log_raise_exception] <top>: Gluster Mount process exited [{error=ENOTCONN}] # gluster volume geo-replication tier1data status MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE...
2012 Jan 03
1
geo-replication loops
Hi, I was thinking about a common (I hope!) use case of Glusterfs geo-replication. Imagine 3 different facilities, each having their own glusterfs deployment: * central-office * remote-office1 * remote-office2 Every client mounts their local glusterfs deployment and writes files (e.g.: user A deposits a PDF document on remote-office2), and it gets replicated to the central-office glusterfs volume as soon
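In the topology described above, each remote office would run its own geo-replication session toward the central office; GlusterFS geo-replication is one-way (master to slave), so each remote-office volume acts as a master with the central-office volume as its slave. A minimal sketch using the standard gluster CLI, with hypothetical names office2-vol (master volume on remote-office2) and central-vol (slave volume on central-office), and assuming the geo-replication SSH key setup is already in place:

    # on a remote-office2 node: create, start and check a session toward central-office
    # (volume and host names are illustrative, not taken from the thread)
    gluster volume geo-replication office2-vol central-office::central-vol create push-pem
    gluster volume geo-replication office2-vol central-office::central-vol start
    gluster volume geo-replication office2-vol central-office::central-vol status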
2018 Jan 19
0
geo-replication command rsync returned with 3
...oStrictHostKeyChecking=no -i >/var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto > >-S /tmp/gsyncd-aux-ssh-o2j6UA/db73a3bfe7357366aff777392fc60a7e.sock >--compress root@gl-slave-01-int:/proc/398/cwd" returned with 3 > >[2018-01-19 14:23:27.158600] I [syncdutils(/brick1/mvol1):220:finalize] > ><top>: exiting. >[2018-01-19 14:23:27.162561] I [repce(agent):92:service_loop] >RepceServer: terminating on reaching EOF. >[2018-01-19 14:23:27.163053] I [syncdutils(agent):220:finalize] <top>: >exiting. >[2018-01-19 14:23:28.61029]...
2018 Jan 22
1
geo-replication initial setup with existing data
2018 Jan 24
4
geo-replication command rsync returned with 3
...-e ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-MZwEp2/cbad1c5f88978ecd713bdb1478fbabbe.sock --compress root@gl-node5-int:/proc/2013/cwd error=3 [2018-01-24 15:50:35.628978] I [syncdutils(/brick1/mvol1):271:finalize] <top>: exiting. after this upgrade one server fails: Start-Date: 2018-01-18 04:33:52 Commandline: /usr/bin/unattended-upgrade Upgrade: libdns-export162:amd64 (1:9.10.3.dfsg.P4-8ubuntu1.8, 1:9.10.3.dfsg.P4-8ubuntu1.10), libisccfg140:amd64 (1:9.10.3.dfsg.P4-8...
2018 Mar 12
2
trashcan on dist. repl. volume with geo-replication
...-4d22-9094-50ac8f8756e7', 'gid': 0, 'mode': 16877, 'entry': '.gfid/5531bd64-ac50-462b-943e-c0bf1c52f52c/Oracle_VM_VirtualBox_Extension', 'op': 'MKDIR'}, 2, {'gfid_mismatch': False, 'dst': False}) [2018-03-12 13:37:14.835911] E [syncdutils(/brick1/mvol1):299:log_raise_exception] <top>: The above directory failed to sync. Please fix it to proceed further. Both gfids of the directories as shown in the log: brick1/mvol1/.trashcan/test1/b1 0x5531bd64ac50462b943ec0bf1c52f52c brick1/mvol1/.trashcan/test1/b1/Oracle_VM_Virtual...
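As a side note on those gfid values: on a brick, the gfid of a file or directory is stored in the trusted.gfid extended attribute, so it can be read directly with getfattr. A minimal sketch, assuming the brick root is /brick1/mvol1 as in the quoted log and the command is run as root on the brick host (getfattr is the usual tool for this, not something prescribed in the thread):

    # print the gfid straight from the brick's extended attributes;
    # the hex output matches the 0x... form shown in the geo-replication log
    getfattr -n trusted.gfid -e hex /brick1/mvol1/.trashcan/test1/b1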