Displaying 18 results from an estimated 18 matches for "log_raise_exception".
2014 Jun 27
1
geo-replication status faulty
...g gsyncd worker
[2014-06-26 17:09:10.137434] I [gsyncd(/data/glusterfs/vol0/brick1/brick):532:main_i] <top>: syncing: gluster://localhost:gluster_vol0 -> ssh://root@node003:gluster://localhost:gluster_vol1
[2014-06-26 17:09:10.258044] E [syncdutils(/data/glusterfs/vol0/brick0/brick):223:log_raise_exception] <top>: connection to peer is broken
[2014-06-26 17:09:10.259278] W [syncdutils(/data/glusterfs/vol0/brick0/brick):227:log_raise_exception] <top>: !!!!!!!!!!!!!
[2014-06-26 17:09:10.260755] W [syncdutils(/data/glusterfs/vol0/brick0/brick):228:log_raise_exception] <top>: !!! gettin...
2018 Jul 13
2
Upgrade to 4.1.1 geo-replication does not work
...8.941760] I [changelogagent(/urd-gds/gluster):73:__init__] ChangelogAgent: Agent listining...
[2018-07-11 18:42:48.947567] I [resource(/urd-gds/gluster):1780:connect_remote] SSH: Initializing SSH connection between master and slave...
[2018-07-11 18:42:49.363514] E [syncdutils(/urd-gds/gluster):304:log_raise_exception] <top>: connection to peer is broken
[2018-07-11 18:42:49.364279] E [resource(/urd-gds/gluster):210:errlog] Popen: command returned error cmd=ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S /tmp/gsyn...
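A quick way to narrow down a Popen failure like the one above is to run the SSH leg by hand, outside gsyncd. The sketch below copies the options and key path from the log line; the slave hostname (slave-node) is a placeholder, and the truncated gsyncd arguments are omitted:

# manually test the transport gsyncd uses (slave-node is a placeholder hostname)
ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no \
    -i /var/lib/glusterd/geo-replication/secret.pem -p 22 root@slave-node echo ok

If this prompts for a password or exits non-zero, the session will keep turning Faulty regardless of the gluster-side settings.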
2024 Jan 24
1
Geo-replication status is getting Faulty after few seconds
...ns=1}, {stime=(1705935991, 0)}, {etime=1706125889}, {entry_stime=(1705935991, 0)}]
[2024-01-24 19:51:30.251965] I [master(worker /opt/tier1data2019/brick):1605:crawl] _GMaster: slave's time [{stime=(1705935991, 0)}]
[2024-01-24 19:51:30.376715] E [syncdutils(worker /opt/tier1data2019/brick):346:log_raise_exception] <top>: Gluster Mount process exited [{error=ENOTCONN}]
[2024-01-24 19:51:30.991856] I [monitor(monitor):228:monitor] Monitor: worker died in startup phase [{brick=/opt/tier1data2019/brick}]
[2024-01-24 19:51:30.993608] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Stat...
2024 Jan 27
1
Geo-replication status is getting Faulty after few seconds
...=1}, {stime=(1705935991, 0)}, {etime=1706125889}, {entry_stime=(1705935991, 0)}]
[2024-01-24 19:51:30.251965] I [master(worker /opt/tier1data2019/brick):1605:crawl] _GMaster: slave's time [{stime=(1705935991, 0)}]
[2024-01-24 19:51:30.376715] E [syncdutils(worker /opt/tier1data2019/brick):346:log_raise_exception] <top>: Gluster Mount process exited [{error=ENOTCONN}]
[2024-01-24 19:51:30.991856] I [monitor(monitor):228:monitor] Monitor: worker died in startup phase [{brick=/opt/tier1data2019/brick}]
[2024-01-24 19:51:30.993608] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker St...
2024 Jan 22
1
Geo-replication status is getting Faulty after few seconds
...ecame stuck in a loop of Initializing..., Active, Faulty on master1, while master2 remained in passive mode.
Upon checking the gsyncd.log on the master1 node, we observed the following error (please refer to the attached logs for more details):
E [syncdutils(worker /opt/tier1data2019/brick):346:log_raise_exception] <top>: Gluster Mount process exited [{error=ENOTCONN}]
# gluster volume geo-replication tier1data status
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SY...
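When a session cycles between Initializing, Active and Faulty like this, a common first step is to bounce the session and watch the status again. This is only a sketch; slave-host::slave-vol is a placeholder for the slave that the truncated status output would show:

# restart the tier1data session and re-check it (slave-host::slave-vol is a placeholder)
gluster volume geo-replication tier1data slave-host::slave-vol stop
gluster volume geo-replication tier1data slave-host::slave-vol start
gluster volume geo-replication tier1data slave-host::slave-vol status detail

If it goes Faulty again immediately, the ENOTCONN error above suggests the auxiliary gluster mount itself is dying, so the volume and brick logs are the next place to look.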
2012 Jan 03
1
geo-replication loops
Hi,
I was thinking about a common (I hope!) use case of Glusterfs geo-replication.
Imagine 3 different facilities, each with its own glusterfs deployment:
* central-office
* remote-office1
* remote-office2
Every client mounts its local glusterfs deployment and writes files
(e.g., user A deposits a PDF document on remote-office2), and it gets
replicated to the central-office glusterfs volume as soon
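For reference, the one-way sessions this layout needs would look roughly like the sketch below, run once per remote office. Volume and host names are hypothetical, and it assumes a release with push-pem support plus passwordless SSH from the office node to central-office:

# on remote-office1: replicate its local volume office1-vol to central-vol on central-office
gluster system:: execute gsec_create
gluster volume geo-replication office1-vol central-office::central-vol create push-pem
gluster volume geo-replication office1-vol central-office::central-vol start

Each session is strictly one-way (master to slave).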
2024 Jan 27
1
Geo-replication status is getting Faulty after few seconds
...=1}, {stime=(1705935991, 0)}, {etime=1706125889}, {entry_stime=(1705935991, 0)}]
[2024-01-24 19:51:30.251965] I [master(worker /opt/tier1data2019/brick):1605:crawl] _GMaster: slave's time [{stime=(1705935991, 0)}]
[2024-01-24 19:51:30.376715] E [syncdutils(worker /opt/tier1data2019/brick):346:log_raise_exception] <top>: Gluster Mount process exited [{error=ENOTCONN}]
[2024-01-24 19:51:30.991856] I [monitor(monitor):228:monitor] Monitor: worker died in startup phase [{brick=/opt/tier1data2019/brick}]
[2024-01-24 19:51:30.993608] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker St...
2011 May 03
3
Issue with geo-replication and nfs auth
...nitial release).
Geo-replication
---------------
System : Debian 6.0 amd64
Glusterfs: 3.2.0
MASTER (volume) => SLAVE (directory)
For some volumes it works, but for others I can't enable geo-replication and get this error with a faulty status:
[2011-05-03 09:57:40.315774] E [syncdutils:131:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
  File "/usr/lib/glusterfs/glusterfs/python/syncdaemon/syncdutils.py", line 152, in twrap
    tf(*aa)
  File "/usr/lib/glusterfs/glusterfs/python/syncdaemon/repce.py", line 118, in listen
    rid, exc, res = recv(self.inf)...
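For context, on 3.2.x with a plain directory as the slave, a session is driven per volume roughly like this (placeholder names; vol1 is the master volume and the slave directory is reached over SSH):

# start and check one geo-replication session (placeholder volume, host and path)
gluster volume geo-replication vol1 ssh://root@slave-host:/data/georep/vol1 start
gluster volume geo-replication vol1 ssh://root@slave-host:/data/georep/vol1 status

The per-session log under /var/log/glusterfs/geo-replication/ normally contains the full traceback that the excerpt above truncates.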
2018 Mar 06
1
geo replication
...ot%40172.16.81.101%3Agluster%3A%2F%2F127.0.0.1%3Atesttomcat/b6a7905143e15d9b079b804c0a8ebf42
[2018-03-06 08:32:51.873176] I [resource(/gfs/testtomcat/mount):1653:service_loop] GLUSTER: Register time time=1520325171
[2018-03-06 08:32:51.926801] E [syncdutils(/gfs/testtomcat/mount):299:log_raise_exception] <top>: master volinfo unavailable
[2018-03-06 08:32:51.936203] I [syncdutils(/gfs/testtomcat/mount):271:finalize] <top>: exiting.
[2018-03-06 08:32:51.938469] I [repce(/gfs/testtomcat/mount):92:service_loop] RepceServer: terminating on reaching EOF.
[2018-03-06 08:32:51.938776] I [sync...
2018 Jan 22
1
geo-replication initial setup with existing data
2018 Mar 12
2
trashcan on dist. repl. volume with geo-replication
...'gid': 0, 'mode': 16877, 'entry': '.gfid/5531bd64-ac50-462b-943e-c0bf1c52f52c/Oracle_VM_VirtualBox_Extension', 'op': 'MKDIR'}, 2, {'gfid_mismatch': False, 'dst': False})
[2018-03-12 13:37:14.835911] E [syncdutils(/brick1/mvol1):299:log_raise_exception] <top>: The above directory failed to sync. Please fix it to proceed further.
The gfids of both directories, as shown in the log:
brick1/mvol1/.trashcan/test1/b1 0x5531bd64ac50462b943ec0bf1c52f52c
brick1/mvol1/.trashcan/test1/b1/Oracle_VM_VirtualBox_Extension
0xc38f75e3194a4d22909450...
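The hex strings here are the trusted.gfid extended attributes of the two directories, so they can be read and compared directly on the bricks. A minimal check, assuming the brick root is /brick1/mvol1 as in the syncdutils line above (run it on the slave's brick as well to spot a mismatch):

# read the gfid xattr of the offending directories straight from the brick
getfattr -n trusted.gfid -e hex /brick1/mvol1/.trashcan/test1/b1
getfattr -n trusted.gfid -e hex /brick1/mvol1/.trashcan/test1/b1/Oracle_VM_VirtualBox_Extension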
2017 Aug 17
0
Extended attributes not supported by the backend storage
...In the master, I'm getting the following error messages:
[2017-08-16 12:57:45.205311] E [repce(/mnt/storage/lapbacks):207:__call__] RepceClient: call 17769:140586894673664:1502888257.97 (entry_ops) failed on peer with OSError
[2017-08-16 12:57:45.205593] E [syncdutils(/mnt/storage/lapbacks):312:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 204, in main
    main_i()
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 782, in main_i
    local.service_loop(*[...
2017 Aug 16
0
Geo replication faulty-extended attribute not supported by the backend storage
...n the master, I'm getting the following error messages:
[2017-08-16 12:57:45.205311] E [repce(/mnt/storage/lapbacks):207:__call__] RepceClient: call 17769:140586894673664:1502888257.97 (entry_ops) failed on peer with OSError
[2017-08-16 12:57:45.205593] E [syncdutils(/mnt/storage/lapbacks):312:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 204, in main
    main_i()
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 782, in main_i
    local.service_loop(*[...
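Since the message blames extended attribute support on the backend storage, a quick sanity check is to set and read a throwaway xattr directly on the slave's brick filesystem. A sketch with a hypothetical brick path; it needs the attr tools and, on some filesystems, the user_xattr mount option:

# confirm the backend filesystem accepts extended attributes (hypothetical brick path)
touch /bricks/slavevol/.xattr-test
setfattr -n user.georep-test -v ok /bricks/slavevol/.xattr-test
getfattr -n user.georep-test /bricks/slavevol/.xattr-test
rm -f /bricks/slavevol/.xattr-test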
2018 Mar 13
0
trashcan on dist. repl. volume with geo-replication
> ...'gid': 0, 'mode': 16877, 'entry': '.gfid/5531bd64-ac50-462b-943e-c0bf1c52f52c/Oracle_VM_VirtualBox_Extension', 'op': 'MKDIR'}, 2, {'gfid_mismatch': False, 'dst': False})
> [2018-03-12 13:37:14.835911] E [syncdutils(/brick1/mvol1):299:log_raise_exception] <top>: The above directory failed to sync. Please fix it to proceed further.
>
>
> both gfid's of the directories as shown in the log :
> brick1/mvol1/.trashcan/test1/b1 0x5531bd64ac50462b943ec0bf1c52f52c
> brick1/mvol1/.trashcan/test1/b1/Oracle_VM_VirtualBox_Extensio...
2018 Mar 13
1
trashcan on dist. repl. volume with geo-replication
> ...: 16877, 'entry': '.gfid/5531bd64-ac50-462b-943e-c0bf1c52f52c/Oracle_VM_VirtualBox_Extension', 'op': 'MKDIR'}, 2, {'gfid_mismatch': False, 'dst': False})
> [2018-03-12 13:37:14.835911] E [syncdutils(/brick1/mvol1):299:log_raise_exception] <top>: The above directory failed to sync. Please fix it to proceed further.
>
>
> both gfid's of the directories as shown in the log :
> brick1/mvol1/.trashcan/test1/b1 0x5531bd64ac50462b943ec0bf1c52f52c
> brick1/mvol1/.trashcan/test1/b1/Oracle_VM_Vir...
2013 Mar 07
4
[Gluster-devel] glusterfs-3.4.0alpha2 released
RPM: http://bits.gluster.org/pub/gluster/glusterfs/3.4.0alpha2/
SRC: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.4.0alpha2.tar.gz
This release is made off jenkins-release-19
-- Gluster Build System
2011 Jun 28
2
Issue with Gluster Quota
2011 Jul 25
1
Problem with Gluster Geo Replication, status faulty
...<top>: syncing: gluster://localhost:flvol -> ssh://root@ec2-67-202-22-159.compute-1.amazonaws.com:file:///mnt/slave
[2011-07-25 19:01:55.300410] D [repce:131:push] RepceClient: call 10976:139842552960768:1311620515.3 __repce_version__() ...
[2011-07-25 19:01:55.883799] E [syncdutils:131:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
  File "/usr/lib/glusterfs/glusterfs/python/syncdaemon/syncdutils.py", line 152, in twrap
    tf(*aa)
  File "/usr/lib/glusterfs/glusterfs/python/syncdaemon/repce.py", line 118, in listen
    rid, exc, res = recv(self.inf)...