Displaying 16 results from an estimated 16 matches for "service_loop".
2001 May 02
1
Problems getting Diablo2 to run
...e everything
works happily.
When I run the game nothing really seems to be happening. With tracing
enabled I can see that after some initialization the game just spins around
in a loop, resulting in two select() calls being repeated over and over
again, as seen below:
--- snip ---
trace:timer:SERVICE_Loop Wait returned: 0
trace:timer:SERVICE_Loop Waiting for 3 objects
085f17a0: select( flags=6, cookie=0x41166d88, sec=0, usec=0,
handles={100,32,88} )
085f17a0: select() = PENDING
085494d8: *wakeup* signaled=0 cookie=0x40c16d88
trace:timer:SERVICE_Loop Wait returned: 0
trace:timer:SERVICE_Loop Waiting...
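The trace above is just the usual shape of a select()-based service loop:
wait on a set of objects, wake up when one is signaled, service it, and go
back to waiting. Purely as an illustration of that pattern (this is not
Wine's SERVICE_Loop code, and the pipes below stand in for the handles in
the trace), a minimal Python sketch:

# Illustrative sketch only: the "wait on several objects, wake up,
# service one, go back to waiting" pattern seen in the trace above.
import os
import select

r1, w1 = os.pipe()            # stand-ins for the handles being waited on
r2, w2 = os.pipe()
handles = [r1, r2]

os.write(w1, b"x")            # pretend one object gets signaled

for _ in range(3):            # a real service loop would run forever
    readable, _, _ = select.select(handles, [], [], 0.1)
    if not readable:
        continue              # nothing ready yet (select() = PENDING)
    for fd in readable:
        os.read(fd, 1)        # *wakeup*: service the signaled object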
2018 Mar 06
1
geo replication
...[master(/gfs/testtomcat/mount):1518:register] _GMaster: Working dir path=/var/lib/misc/glusterfsd/testtomcat/ssh%3A%2F%2Froot%40172.16.81.101%3Agluster%3A%2F%2F127.0.0.1%3Atesttomcat/b6a7905143e15d9b079b804c0a8ebf42
[2018-03-06 08:32:51.873176] I [resource(/gfs/testtomcat/mount):1653:service_loop] GLUSTER: Register time time=1520325171
[2018-03-06 08:32:51.926801] E [syncdutils(/gfs/testtomcat/mount):299:log_raise_exception] <top>: master volinfo unavailable
[2018-03-06 08:32:51.936203] I [syncdutils(/gfs/testtomcat/mount):271:finalize] <top>: exiting.
[2018-03-06...
2017 Sep 29
1
Gluster geo replication volume is faulty
...-oControlMaster=auto -S
/tmp/gsyncd-aux-ssh-fdyDHm/78cf8b204207154de59d7ac32eee737f.sock --compress
geo-rep-user at gfs6:/proc/17554/cwd error=12
[2017-09-29 15:53:29.797259] I [syncdutils(/gfs/brick2/gv0):271:finalize]
<top>: exiting.
[2017-09-29 15:53:29.799386] I [repce(/gfs/brick2/gv0):92:service_loop]
RepceServer: terminating on reaching EOF.
[2017-09-29 15:53:29.799570] I [syncdutils(/gfs/brick2/gv0):271:finalize]
<top>: exiting.
[2017-09-29 15:53:30.105407] I [monitor(monitor):280:monitor] Monitor:
starting gsyncd worker brick=/gfs/brick1/gv0
slave_node=ssh://geo-rep-user at gfs6:gluste...
2017 Oct 06
0
Gluster geo replication volume is faulty
...p/gsyncd-aux-ssh-fdyDHm/78cf8b204207154de59d7ac32eee737f.sock
> --compress geo-rep-user at gfs6:/proc/17554/cwd error=12
> [2017-09-29 15:53:29.797259] I
> [syncdutils(/gfs/brick2/gv0):271:finalize] <top>: exiting.
> [2017-09-29 15:53:29.799386] I
> [repce(/gfs/brick2/gv0):92:service_loop] RepceServer: terminating on
> reaching EOF.
> [2017-09-29 15:53:29.799570] I
> [syncdutils(/gfs/brick2/gv0):271:finalize] <top>: exiting.
> [2017-09-29 15:53:30.105407] I [monitor(monitor):280:monitor] Monitor:
> starting gsyncd
> worker brick=/gfs/brick1/gv0 slave_node=s...
2017 Aug 17
0
Extended attributes not supported by the backend storage
...raise_exception] <top>: FAIL:
Traceback (most recent call last):
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 204, in main
main_i()
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 782, in main_i
local.service_loop(*[r for r in [remote] if r])
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py", line 1656, in service_loop
g1.crawlwrap(oneshot=True, register_time=register_time)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 600, in...
2017 Aug 16
0
Geo replication faulty-extended attribute not supported by the backend storage
...raise_exception] <top>: FAIL:
Traceback (most recent call last):
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 204, in main
main_i()
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 782, in main_i
local.service_loop(*[r for r in [remote] if r])
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py", line 1656, in service_loop
g1.crawlwrap(oneshot=True, register_time=register_time)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 600, in...
2018 Jan 22
1
geo-replication initial setup with existing data
2001 Jun 27
1
err:ntdll:RtlpWaitForCriticalSection...
...turned 2
trace:seh:EXC_CallHandler calling handler at 0x4005b2e0 code=c0000005
flags=10
trace:seh:EXC_CallHandler handler returned 2
<snip much more of that, until...>
trace:seh:EXC_CallHandler calling handler at 0x4005b2e0 code=c0000005
flags=10
080c6aa0: *wakeup* object=0
trace:timer:SERVICE_Loop Wait returned: 0
trace:timer:SERVICE_Loop Waiting for 6 objects
080c6aa0: select( flags=6, sec=0, usec=0, handles={72,84,64,52,44,32} )
080c6aa0: select() = PENDING
trace:seh:EXC_CallHandler handler returned 2
trace:seh:EXC_CallHandler calling handler at 0x4005b2e0 code=c0000005
flags=10
<snip...
2018 Jul 13
2
Upgrade to 4.1.1 geo-replication does not work
...tatus', 'config-check', 'config-get', 'config-set', 'config-reset', 'voluuidget', 'delete')
[2018-07-11 18:42:49.365919] I [syncdutils(/urd-gds/gluster):271:finalize] <top>: exiting.
[2018-07-11 18:42:49.369316] I [repce(/urd-gds/gluster):92:service_loop] RepceServer: terminating on reaching EOF.
[2018-07-11 18:42:49.369921] I [syncdutils(/urd-gds/gluster):271:finalize] <top>: exiting.
[2018-07-11 18:42:49.369694] I [monitor(monitor):353:monitor] Monitor: worker died before establishing connection brick=/urd-gds/gluster
[2018-07-11 18:4...
2012 Jan 03
1
geo-replication loops
Hi,
I was thinking about a common (I hope!) use case of Glusterfs geo-replication.
Imagine 3 different facilities, each having their own glusterfs deployment:
* central-office
* remote-office1
* remote-office2
Every client mounts its local glusterfs deployment and writes files
(e.g.: user A deposits a PDF document on remote-office2), and it gets
replicated to the central-office glusterfs volume as soon
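In a layout like this, each remote office acts as a geo-replication master
pushing into the central-office volume. As a rough sketch only (the volume
and host names below are made up, and the exact CLI options vary between
Gluster releases), the session for remote-office2 could be created and
started along these lines:

# Hypothetical names throughout: "office2vol" is the volume served by
# remote-office2, "central-office" the central cluster's host and
# "centralvol" its volume.  Run from the remote-office2 cluster.
import subprocess

slave = "central-office::centralvol"

# create the session (push-pem distributes the ssh keys to the slave side)
subprocess.run(["gluster", "volume", "geo-replication",
                "office2vol", slave, "create", "push-pem"], check=True)

# start syncing changes towards the central-office volume
subprocess.run(["gluster", "volume", "geo-replication",
                "office2vol", slave, "start"], check=True)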
2018 Jan 19
2
geo-replication command rsync returned with 3
...-01-19 14:23:25.123959] I [master(/brick1/mvol1):1249:register]
_GMaster: xsync temp directory:
/var/lib/misc/glusterfsd/mvol1/ssh%3A%2F%2Froot%4082.199.131.135%3Agluster%3A%2F%2F127.0.0.1%3Asvol1/0a6056eb995956f1dc84f32256dae472/xsync
[2018-01-19 14:23:25.124351] I
[resource(/brick1/mvol1):1528:service_loop] GLUSTER: Register time:
1516371805
[2018-01-19 14:23:25.127505] I [master(/brick1/mvol1):510:crawlwrap]
_GMaster: primary master with volume id
2f5de6e4-66de-40a7-9f24-4762aad3ca96 ...
[2018-01-19 14:23:25.130393] I [master(/brick1/mvol1):519:crawlwrap]
_GMaster: crawl interval: 1 seconds
[201...
2018 Jan 19
0
geo-replication command rsync returned with 3
...123959] I [master(/brick1/mvol1):1249:register]
>_GMaster: xsync temp directory:
>/var/lib/misc/glusterfsd/mvol1/ssh%3A%2F%2Froot%4082.199.131.135%3Agluster%3A%2F%2F127.0.0.1%3Asvol1/0a6056eb995956f1dc84f32256dae472/xsync
>[2018-01-19 14:23:25.124351] I
>[resource(/brick1/mvol1):1528:service_loop] GLUSTER: Register time:
>1516371805
>[2018-01-19 14:23:25.127505] I [master(/brick1/mvol1):510:crawlwrap]
>_GMaster: primary master with volume id
>2f5de6e4-66de-40a7-9f24-4762aad3ca96 ...
>[2018-01-19 14:23:25.130393] I [master(/brick1/mvol1):519:crawlwrap]
>_GMaster: crawl...
2024 Jan 24
1
Geo-replication status is getting Faulty after few seconds
...back to monitor
[2024-01-24 19:51:29.139131] I [master(worker /opt/tier1data2019/brick):1662:register] _GMaster: Working dir [{path=/var/lib/misc/gluster/gsyncd/tier1data_drtier1data_drtier1data/opt-tier1data2019-brick}]
[2024-01-24 19:51:29.139531] I [resource(worker /opt/tier1data2019/brick):1292:service_loop] GLUSTER: Register time [{time=1706125889}]
[2024-01-24 19:51:29.173877] I [gsyncdstatus(worker /opt/tier1data2019/brick):281:set_active] GeorepStatus: Worker Status Change [{status=Active}]
[2024-01-24 19:51:29.174407] I [gsyncdstatus(worker /opt/tier1data2019/brick):253:set_worker_crawl_status] G...
2024 Jan 27
1
Geo-replication status is getting Faulty after few seconds
...ck to monitor
[2024-01-24 19:51:29.139131] I [master(worker /opt/tier1data2019/brick):1662:register] _GMaster: Working dir [{path=/var/lib/misc/gluster/gsyncd/tier1data_drtier1data_drtier1data/opt-tier1data2019-brick}]
[2024-01-24 19:51:29.139531] I [resource(worker /opt/tier1data2019/brick):1292:service_loop] GLUSTER: Register time [{time=1706125889}]
[2024-01-24 19:51:29.173877] I [gsyncdstatus(worker /opt/tier1data2019/brick):281:set_active] GeorepStatus: Worker Status Change [{status=Active}]
[2024-01-24 19:51:29.174407] I [gsyncdstatus(worker /opt/tier1data2019/brick):253:set_worker_crawl_status]...
2024 Jan 27
1
Geo-replication status is getting Faulty after few seconds
...ck to monitor
[2024-01-24 19:51:29.139131] I [master(worker /opt/tier1data2019/brick):1662:register] _GMaster: Working dir [{path=/var/lib/misc/gluster/gsyncd/tier1data_drtier1data_drtier1data/opt-tier1data2019-brick}]
[2024-01-24 19:51:29.139531] I [resource(worker /opt/tier1data2019/brick):1292:service_loop] GLUSTER: Register time [{time=1706125889}]
[2024-01-24 19:51:29.173877] I [gsyncdstatus(worker /opt/tier1data2019/brick):281:set_active] GeorepStatus: Worker Status Change [{status=Active}]
[2024-01-24 19:51:29.174407] I [gsyncdstatus(worker /opt/tier1data2019/brick):253:set_worker_crawl_status]...
2024 Jan 22
1
Geo-replication status is getting Faulty after few seconds
Hi There,
We have a Gluster setup with three master nodes in replicated mode and one slave node with geo-replication.
# gluster volume info
Volume Name: tier1data
Type: Replicate
Volume ID: 93c45c14-f700-4d50-962b-7653be471e27
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: master1:/opt/tier1data2019/brick
Brick2: master2:/opt/tier1data2019/brick
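When a session flips to Faulty within seconds like this, a usual first step
is to pull the per-brick worker status before digging into the gsyncd worker
logs. A minimal sketch, where "slavehost::slavevol" is a placeholder for the
real slave spec of this session (it is not visible in the excerpt above):

# Placeholder slave spec; substitute the session's real slave host::volume.
import subprocess

subprocess.run(["gluster", "volume", "geo-replication",
                "tier1data", "slavehost::slavevol",
                "status", "detail"], check=True)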