search for: gsyncd

Displaying 20 results from an estimated 40 matches for "gsyncd".

2011 Jul 25
1
Problem with Gluster Geo Replication, status faulty
...-07-25 19:01:55.217725] I [monitor(monitor):19:set_state] Monitor: new state: starting... [2011-07-25 19:01:55.235734] I [monitor(monitor):42:monitor] Monitor: ------------------------------------------------------------ [2011-07-25 19:01:55.235909] I [monitor(monitor):43:monitor] Monitor: starting gsyncd worker [2011-07-25 19:01:55.295624] I [gsyncd:286:main_i] <top>: syncing: gluster://localhost:flvol -> ssh://root at ec2-67-202-22-159.compute-1.amazonaws.com:file:///mnt/slave [2011-07-25 19:01:55.300410] D [repce:131:push] RepceClient: call 10976:139842552960768:1311620515.3 __repce_vers...
2012 Mar 20
1
issues with geo-replication
...-03-20 19:29:10.115926] I [monitor(monitor):18:set_state] Monitor: new state: starting... [2012-03-20 19:29:10.118187] I [monitor(monitor):59:monitor] Monitor: ------------------------------------------------------------ [2012-03-20 19:29:10.118295] I [monitor(monitor):60:monitor] Monitor: starting gsyncd worker [2012-03-20 19:29:10.168212] I [gsyncd:289:main_i] <top>: syncing: gluster://localhost:myvol -> ssh://root at remoteip:/data/path [2012-03-20 19:29:10.222372] D [repce:130:push] RepceClient: call 23154:47903647023584:1332271750.22 __repce_version__() ... [2012-03-20 19:29:10.504734]...
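A general first check for a Faulty geo-replication status (a sketch with placeholder names, not quoted from the thread above): confirm that gsyncd's SSH leg works non-interactively with the key glusterd generated, then re-read the status.
    ssh -i /var/lib/glusterd/geo-replication/secret.pem root@<slave-host>    # must log in without a password prompt
    gluster volume geo-replication <mastervol> <slave-url> status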
2014 Jun 27
1
geo-replication status faulty
Venky Shankar, can you follow up on these questions? I too have this issue and cannot resolve the reference to '/nonexistent/gsyncd'. As Steve mentions, the nonexistent reference in the logs looks like the culprit, especially since the ssh command being run is printed on an earlier line with the incorrect remote path. I have followed the configuration steps as documented in the guide, but still hit this issue....
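The excerpt does not include the resolution, but a commonly cited workaround for the '/nonexistent/gsyncd' reference (placeholder names below; the exact option spelling and gsyncd path can vary between releases, so verify against your installation) is to point the session at the slave-side gsyncd explicitly:
    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> config remote-gsyncd /usr/libexec/glusterfs/gsyncd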
2018 Jul 13
2
Upgrade to 4.1.1 geo-replication does not work
...t;khiremat at redhat.com>: Hi Marcus, Is the gluster geo-rep version is same on both master and slave? Thanks, Kotresh HR On Fri, Jul 13, 2018 at 1:26 AM, Marcus Peders?n <marcus.pedersen at slu.se<mailto:marcus.pedersen at slu.se>> wrote: Hi Kotresh, i have replaced both files (gsyncdconfig.py<https://review.gluster.org/#/c/20207/1/geo-replication/syncdaemon/gsyncdconfig.py> and repce.py<https://review.gluster.org/#/c/20207/1/geo-replication/syncdaemon/repce.py>) in all nodes both master and slave. I rebooted all servers but geo-replication status is still Stopped....
2024 Feb 05
1
Graceful shutdown doesn't stop all Gluster processes
...lustershd.log -S /var/run/gluster/1ac2284f75671ffa.socket --xlator-option *replicate*.node-uuid=c1591bde-df1c-41b4-8cc3-5eaa02c5b89d --process-name glustershd --client-pid=-6 root 3730742 0.0 0.0 288264 14388 ? Ssl 18:44 0:00 /usr/bin/python3 /usr/libexec/glusterfs/python/syncdaemon/gsyncd.py --path=/opt/tier1data2019/brick --monitor -c /var/lib/glusterd/geo-replication/tier1data_drtier1data_drtier1data/gsyncd.conf --iprefix=/var :tier1data --glusterd-uuid=c1591bde-df1c-41b4-8cc3-5eaa02c5b89d drtier1data::drtier1data root 3730763 2.4 0.0 2097216 35904 ? Sl 18:44 0:09...
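For reference, and not part of the message above: the glusterfs-server package ships a helper script for stopping the remaining daemons (gsyncd, glustershd, brick processes); the path below is where most distributions install it, so verify it locally, and stop any running geo-replication session first.
    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> stop
    /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh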
2017 Sep 29
1
Gluster geo replication volume is faulty
...errlog] Popen: command returned error cmd=rsync -aR0 --inplace --files-from=- --super --stats --numeric-ids --no-implied-dirs --existing --xattrs --acls . -e ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-fdyDHm/78cf8b204207154de59d7ac32eee737f.sock --compress geo-rep-user at gfs6:/proc/17554/cwd error=12 [2017-09-29 15:53:29.797259] I [syncdutils(/gfs/brick2/gv0):271:finalize] <top>: exiting. [2017-09-29 15:53:29.799386] I [repce(/gfs/brick2/gv0):92:service_loop] RepceServer: terminat...
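rsync exit status 12 ("error in rsync protocol data stream") usually means the remote end of the pipe died, so a hedged first step (placeholder names; config option names can differ between releases) is to confirm rsync exists and is compatible on both sides and to raise the worker log level:
    rsync --version    # run on both master and slave nodes
    gluster volume geo-replication <mastervol> <slaveuser>@<slavehost>::<slavevol> config log-level DEBUG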
2011 Jun 28
2
Issue with Gluster Quota
2017 Oct 06
0
Gluster geo replication volume is faulty
...ed error cmd=rsync -aR0 --inplace --files-from=- > --super --stats --numeric-ids --no-implied-dirs --existing --xattrs > --acls . -e ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no > -i /var/lib/glusterd/geo-replication/secret.pem -p 22 > -oControlMaster=auto -S > /tmp/gsyncd-aux-ssh-fdyDHm/78cf8b204207154de59d7ac32eee737f.sock > --compress geo-rep-user at gfs6:/proc/17554/cwd error=12 > [2017-09-29 15:53:29.797259] I > [syncdutils(/gfs/brick2/gv0):271:finalize] <top>: exiting. > [2017-09-29 15:53:29.799386] I > [repce(/gfs/brick2/gv0):92:service_...
2012 Apr 27
1
geo-replication and rsync
Hi, can someone tell me the difference between geo-replication and plain rsync? At what frequency are files replicated with geo-replication?
2011 May 03
3
Issue with geo-replication and nfs auth
Hi, I have some issues with geo-replication (since 3.2.0) and nfs auth (since the initial release). Geo-replication --------------- System: Debian 6.0 amd64 Glusterfs: 3.2.0 MASTER (volume) => SLAVE (directory) For some volumes it works, but for others I can't enable geo-replication and get this error with a faulty status: [2011-05-03 09:57:40.315774] E
2012 Jan 03
1
geo-replication loops
Hi, I was thinking about a common (I hope!) use case of Glusterfs geo-replication. Imagine 3 different facilities, each having their own glusterfs deployment: * central-office * remote-office1 * remote-office2 Every client mounts their local glusterfs deployment and writes files (e.g. user A deposits a PDF document on remote-office2), and it gets replicated to the central-office glusterfs volume as soon
2024 Jan 27
1
Geo-replication status is getting Faulty after a few seconds
...1data::drtier1data create push-pem force gluster volume geo-replication tier1data drtier1data::drtier1data stop gluster volume geo-replication tier1data drtier1data::drtier1data start Now I am able to start the geo-replication, but I am getting the same error. [2024-01-24 19:51:24.80892] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status Change [{status=Initializing...}] [2024-01-24 19:51:24.81020] I [monitor(monitor):160:monitor] Monitor: starting gsyncd worker [{brick=/opt/tier1data2019/brick}, {slave_node=drtier1data}] [2024-01-24 19:51:24.158021] I [resource(wo...
2018 Jan 19
2
geo-replication command rsync returned with 3
...me some hints how to proceed... ? any help is appreciated. best regards Dietmar [2018-01-19 14:23:20.141123] I [monitor(monitor):267:monitor] Monitor: ------------------------------------------------------------ [2018-01-19 14:23:20.141457] I [monitor(monitor):268:monitor] Monitor: starting gsyncd worker [2018-01-19 14:23:20.227952] I [gsyncd(/brick1/mvol1):733:main_i] <top>: syncing: gluster://localhost:mvol1 -> ssh://root at gl-slave-01-int:gluster://localhost:svol1 [2018-01-19 14:23:20.235563] I [changelogagent(agent):73:__init__] ChangelogAgent: Agent listining... [2018-01-19...
2011 Mar 31
1
Error rpmbuild Glusterfs 3.1.3
...try to build rpms out of the glusterfs 3.1.3 tgz on my SLES servers (SLES10.1 & SLES11.1). All is running fine, I guess, until it tries to build the rpms. Then I always run into this error: RPM build errors: File not found: /var/tmp/glusterfs-3.1.3-1-root/opt/glusterfs/3.1.3/local/libexec/gsyncd File not found by glob: /var/tmp/glusterfs-3.1.3-1-root/opt/glusterfs/3.1.3/local/libexec/python/syncdaemon/* Are there missing dependencies or something? Thx
2024 Jan 22
1
Geo-replication status is getting Faulty after a few seconds
...aster1 and master2), where master1 was active and master2 was in passive mode. However, today, we started experiencing issues where geo-replication suddenly stopped and became stuck in a loop of Initializing..., Active.. Faulty on master1, while master2 remained in passive mode. Upon checking the gsyncd.log on the master1 node, we observed the following error (please refer to the attached logs for more details): E [syncdutils(worker /opt/tier1data2019/brick):346:log_raise_exception] <top>: Gluster Mount process exited [{error=ENOTCONN}] # gluster volume geo-replication tier1data status M...
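A hedged next step for the ENOTCONN loop described above (volume names taken from the excerpt; the commands are not quoted from the thread): the error suggests the worker's auxiliary mount of the master volume lost its connection, so check the per-worker detail view and the bricks themselves.
    gluster volume geo-replication tier1data drtier1data::drtier1data status detail
    gluster volume status tier1data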
2011 Sep 28
1
Custom rpms failing
I have managed to build i386 rpms for CentOS, based on the 3.2.3 SRPM, but they don't work: # rpm -Uhv glusterfs-core-3.2.3-1.i386.rpm glusterfs-fuse-3.2.3-1.i386.rpm glusterfs-rdma-3.2.3-1.i386.rpm Preparing... ########################################### [100%] 1:glusterfs-core ########################################### [ 33%] glusterd: error while loading shared
2018 Mar 06
1
geo replication
...9152 0 Y 294 Created & started the session with: gluster volume geo-replication testtomcat stogfstest11::testtomcat create no-verify gluster volume geo-replication testtomcat stogfstest11::testtomcat start getting the following logs: master: [2018-03-06 08:32:46.767544] I [gsyncdstatus(monitor):242:set_worker_status] GeorepStatus: Worker Status Change status=Initializing... [2018-03-06 08:32:46.872857] I [monitor(monitor):280:monitor] Monitor: starting gsyncd worker brick=/gfs/testtomcat/mount slave_node=ssh://root at stogfstest11:glust...
2018 Jan 22
1
geo-replication initial setup with existing data
2024 Jan 24
1
Geo-replication status is getting Faulty after a few seconds
...drtier1data::drtier1data create push-pem force gluster volume geo-replication tier1data drtier1data::drtier1data stop gluster volume geo-replication tier1data drtier1data::drtier1data start Now I am able to start the geo-replication, but I am getting the same error. [2024-01-24 19:51:24.80892] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status Change [{status=Initializing...}] [2024-01-24 19:51:24.81020] I [monitor(monitor):160:monitor] Monitor: starting gsyncd worker [{brick=/opt/tier1data2019/brick}, {slave_node=drtier1data}] [2024-01-24 19:51:24.158021] I [resource(work...
2024 Jan 27
1
Geo-replication status is getting Faulty after a few seconds
...1data::drtier1data create push-pem force gluster volume geo-replication tier1data drtier1data::drtier1data stop gluster volume geo-replication tier1data drtier1data::drtier1data start Now I am able to start the geo-replication, but I am getting the same error. [2024-01-24 19:51:24.80892] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status Change [{status=Initializing...}] [2024-01-24 19:51:24.81020] I [monitor(monitor):160:monitor] Monitor: starting gsyncd worker [{brick=/opt/tier1data2019/brick}, {slave_node=drtier1data}] [2024-01-24 19:51:24.158021] I [resource(wo...