search for: main_i

Displaying 14 results from an estimated 14 matches for "main_i".

2012 Mar 20
1
issues with geo-replication
...Monitor: new state: starting...
[2012-03-20 19:29:10.118187] I [monitor(monitor):59:monitor] Monitor: ------------------------------------------------------------
[2012-03-20 19:29:10.118295] I [monitor(monitor):60:monitor] Monitor: starting gsyncd worker
[2012-03-20 19:29:10.168212] I [gsyncd:289:main_i] <top>: syncing: gluster://localhost:myvol -> ssh://root@remoteip:/data/path
[2012-03-20 19:29:10.222372] D [repce:130:push] RepceClient: call 23154:47903647023584:1332271750.22 __repce_version__() ...
[2012-03-20 19:29:10.504734] E [syncdutils:133:exception] <top>: FAIL: Tracebac...
2012 Jan 03
1
geo-replication loops
Hi, I was thinking about a common (I hope!) use case of Glusterfs geo-replication. Imagine three different facilities, each with its own glusterfs deployment:
* central-office
* remote-office1
* remote-office2
Every client mounts its local glusterfs deployment and writes files (e.g., user A deposits a PDF document on remote-office2), and it gets replicated to the central-office glusterfs volume as soon
2018 Jan 22
1
geo-replication initial setup with existing data
2017 Aug 17
0
Extended attributes not supported by the backend storage
...88257.97 (entry_ops) failed on peer with OSError
[2017-08-16 12:57:45.205593] E [syncdutils(/mnt/storage/lapbacks):312:log_raise_exception] <top>: FAIL: Traceback (most recent call last):
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 204, in main
    main_i()
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 782, in main_i
    local.service_loop(*[r for r in [remote] if r])
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py", line 1656, in service_loop
    g1.crawlwrap(oneshot=Tru...
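The traceback above lays out gsyncd's entry-point layering: main() wraps main_i(), which hands the built resources to service_loop(), which drives the crawl. A minimal Python sketch of that structure (a stand-in, not gsyncd's actual code; the resource and the error are placeholders):

    import logging
    import sys
    import traceback

    def service_loop(*resources):
        # stand-in for resource.py's sync loop; fail the way a backend
        # without extended-attribute support does
        raise OSError("extended attributes not supported by the backend storage")

    def main_i():
        # inner main: build local/remote resources, then enter the sync loop
        remote = None  # placeholder; the real code constructs resource objects
        service_loop(*[r for r in [remote] if r])

    def main():
        # outer wrapper: anything escaping main_i() is logged as a FAIL traceback
        try:
            main_i()
        except Exception:
            logging.error("FAIL: Traceback (most recent call last):\n%s",
                          traceback.format_exc())
            sys.exit(1)

    if __name__ == "__main__":
        main()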
2017 Aug 16
0
Geo replication faulty-extended attribute not supported by the backend storage
...88257.97 (entry_ops) failed on peer with OSError
[2017-08-16 12:57:45.205593] E [syncdutils(/mnt/storage/lapbacks):312:log_raise_exception] <top>: FAIL: Traceback (most recent call last):
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 204, in main
    main_i()
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 782, in main_i
    local.service_loop(*[r for r in [remote] if r])
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py", line 1656, in service_loop
    g1.crawlwrap(oneshot=Tru...
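Since both threads report the same OSError, a quick standalone probe of the brick's backing filesystem can confirm whether it accepts extended attributes at all. A sketch, assuming Linux, Python 3, and a hypothetical brick path (this is not part of gsyncd):

    import os
    import tempfile

    BRICK_DIR = "/mnt/storage"  # hypothetical; point at the brick's backing filesystem

    with tempfile.NamedTemporaryFile(dir=BRICK_DIR) as probe:
        try:
            # set and read back a throwaway user-namespace xattr
            os.setxattr(probe.name, "user.georep.probe", b"1")
            print("xattrs supported:", os.getxattr(probe.name, "user.georep.probe"))
        except OSError as err:
            # ENOTSUP here matches the failure geo-replication reports
            print("xattrs NOT supported:", err)

Note that GlusterFS itself relies on trusted.* attributes, which additionally require root, so a passing user.* probe is necessary but not sufficient.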
2011 Jun 28
2
Issue with Gluster Quota
2014 Jun 27
1
geo-replication status faulty
...4-06-26 17:09:08.794359] I [monitor(monitor):129:monitor] Monitor: ------------------------------------------------------------
[2014-06-26 17:09:08.795387] I [monitor(monitor):130:monitor] Monitor: starting gsyncd worker
[2014-06-26 17:09:09.358588] I [gsyncd(/data/glusterfs/vol0/brick0/brick):532:main_i] <top>: syncing: gluster://localhost:gluster_vol0 -> ssh://root@node003:gluster://localhost:gluster_vol1
[2014-06-26 17:09:09.537219] I [monitor(monitor):129:monitor] Monitor: ------------------------------------------------------------
[2014-06-26 17:09:09.540030] I [monitor(monitor):1...
2017 Sep 29
1
Gluster geo replication volume is faulty
...7-09-29 15:53:32.736538] I [gsyncdstatus(monitor):242:set_worker_status] GeorepStatus: Worker Status Change status=Faulty
[2017-09-29 15:53:33.35219] I [resource(/gfs/brick1/gv0):1507:connect] GLUSTER: Mounted gluster volume duration=1.0954
[2017-09-29 15:53:33.35403] I [gsyncd(/gfs/brick1/gv0):799:main_i] <top>: Closing feedback fd, waking up the monitor
[2017-09-29 15:53:35.50920] I [master(/gfs/brick1/gv0):1515:register] _GMaster: Working dir path=/var/lib/misc/glusterfsd/gfsvol/ssh%3A%2F%2Fgeo-rep-user%4010.1.1.104%3Agluster%3A%2F%2F127.0.0.1%3Agfsvol_rep/f0393acbf9a1583960edbbd2f1dfb6b4 [...
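The long percent-encoded component in that working-dir path is just the slave URL with every reserved character quoted. A small sketch reproducing the encoding with the standard library (an illustration, not gsyncd's actual helper):

    from urllib.parse import quote, unquote

    # slave URL as it appears in the session above
    slave_url = "ssh://geo-rep-user@10.1.1.104:gluster://127.0.0.1:gfsvol_rep"

    # safe="" forces ':' -> %3A, '/' -> %2F and '@' -> %40, matching the log
    encoded = quote(slave_url, safe="")
    print(encoded)
    # ssh%3A%2F%2Fgeo-rep-user%4010.1.1.104%3Agluster%3A%2F%2F127.0.0.1%3Agfsvol_rep

    # decoding recovers the original URL
    assert unquote(encoded) == slave_url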
2017 Oct 06
0
Gluster geo replication volume is faulty
...> [gsyncdstatus(monitor):242:set_worker_status] GeorepStatus: Worker > Status Changestatus=Faulty > [2017-09-29 15:53:33.35219] I [resource(/gfs/brick1/gv0):1507:connect] > GLUSTER: Mounted gluster volumeduration=1.0954 > [2017-09-29 15:53:33.35403] I [gsyncd(/gfs/brick1/gv0):799:main_i] > <top>: Closing feedback fd, waking up the monitor > [2017-09-29 15:53:35.50920] I [master(/gfs/brick1/gv0):1515:register] > _GMaster: Working > dirpath=/var/lib/misc/glusterfsd/gfsvol/ssh%3A%2F%2Fgeo-rep-user%4010.1.1.104%3Agluster%3A%2F%2F127.0.0.1%3Agfsvol_rep/f0393acbf9a1...
2018 Mar 06
1
geo replication
...I [resource(/gfs/testtomcat/mount):1493:connect] GLUSTER: Mounting gluster volume locally...
[2018-03-06 08:32:49.739631] I [resource(/gfs/testtomcat/mount):1506:connect] GLUSTER: Mounted gluster volume duration=1.2232
[2018-03-06 08:32:49.739870] I [gsyncd(/gfs/testtomcat/mount):799:main_i] <top>: Closing feedback fd, waking up the monitor
[2018-03-06 08:32:51.872872] I [master(/gfs/testtomcat/mount):1518:register] _GMaster: Working dir path=/var/lib/misc/glusterfsd/testtomcat/ssh%3A%2F%2Froot%40172.16.81.101%3Agluster%3A%2F%2F127.0.0.1%3Atesttomcat/b6a7905143e15...
2011 May 03
3
Issue with geo-replication and nfs auth
Hi, I have some issues with geo-replication (since 3.2.0) and NFS auth (since the initial release).
Geo-replication
---------------
System: Debian 6.0 amd64
Glusterfs: 3.2.0
MASTER (volume) => SLAVE (directory)
For some volumes it works, but for others I can't enable geo-replication and get this error with a faulty status:
2011-05-03 09:57:40.315774] E
2018 Jan 19
2
geo-replication command rsync returned with 3
...ds Dietmar
[2018-01-19 14:23:20.141123] I [monitor(monitor):267:monitor] Monitor: ------------------------------------------------------------
[2018-01-19 14:23:20.141457] I [monitor(monitor):268:monitor] Monitor: starting gsyncd worker
[2018-01-19 14:23:20.227952] I [gsyncd(/brick1/mvol1):733:main_i] <top>: syncing: gluster://localhost:mvol1 -> ssh://root@gl-slave-01-int:gluster://localhost:svol1
[2018-01-19 14:23:20.235563] I [changelogagent(agent):73:__init__] ChangelogAgent: Agent listining...
[2018-01-19 14:23:23.55553] I [master(/brick1/mvol1):83:gmaster_builder] <top>...
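rsync exit code 3 means "errors selecting input/output files, dirs" (per the rsync man page), typically an unreadable or missing source or destination path. Re-running the transfer outside geo-replication and checking the status can isolate it; a simplified sketch with paths taken from the log above (geo-replication's real rsync invocation uses additional options):

    import subprocess

    # hypothetical standalone reproduction; source brick and slave host
    # are taken from the log lines above
    cmd = ["rsync", "-avz", "/brick1/mvol1/", "root@gl-slave-01-int:/tmp/georep-probe/"]
    result = subprocess.run(cmd, capture_output=True, text=True)

    print("rsync exit code:", result.returncode)
    if result.returncode == 3:
        # "errors selecting input/output files, dirs" -- check path permissions
        print(result.stderr)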
2018 Jan 19
0
geo-replication command rsync returned with 3
...018-01-19 14:23:20.141123] I [monitor(monitor):267:monitor] Monitor: ------------------------------------------------------------
[2018-01-19 14:23:20.141457] I [monitor(monitor):268:monitor] Monitor: starting gsyncd worker
[2018-01-19 14:23:20.227952] I [gsyncd(/brick1/mvol1):733:main_i] <top>: syncing: gluster://localhost:mvol1 -> ssh://root@gl-slave-01-int:gluster://localhost:svol1
[2018-01-19 14:23:20.235563] I [changelogagent(agent):73:__init__] ChangelogAgent: Agent listining...
[2018-01-19 14:23:23.55553] I [master(/brick1/mvol1):83...
2011 Jul 25
1
Problem with Gluster Geo Replication, status faulty
...Monitor: new state: starting...
[2011-07-25 19:01:55.235734] I [monitor(monitor):42:monitor] Monitor: ------------------------------------------------------------
[2011-07-25 19:01:55.235909] I [monitor(monitor):43:monitor] Monitor: starting gsyncd worker
[2011-07-25 19:01:55.295624] I [gsyncd:286:main_i] <top>: syncing: gluster://localhost:flvol -> ssh://root@ec2-67-202-22-159.compute-1.amazonaws.com:file:///mnt/slave
[2011-07-25 19:01:55.300410] D [repce:131:push] RepceClient: call 10976:139842552960768:1311620515.3 __repce_version__() ...
[2011-07-25 19:01:55.883799] E [syncdutils:13...