Displaying 11 results from an estimated 11 matches for "3agluster".
2018 Mar 12 · 2 replies · trashcan on dist. repl. volume with geo-replication
...ansport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
root at gl-node1:/myvol-1/test1# gluster volume geo-replication mvol1
gl-node5-int::mvol1 config
special_sync_mode: partial
gluster_log_file:
/var/log/glusterfs/geo-replication/mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1.gluster.log
ssh_command: ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no
-i /var/lib/glusterd/geo-replication/secret.pem
change_detector: changelog
use_meta_volume: true
session_owner: a1c74931-568c-4f40-8573-dd344553e557
state_file:
/var/lib/glusterd/geo-repl...
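The "%3A" runs that dominate these paths (and the "3agluster" search term itself) are ordinary percent-encoding: geo-replication derives its log-file and working-directory names from the session URL, with ":" encoded as %3A, "/" as %2F and "@" as %40. A quick sketch decoding the log-file name above with the Python standard library (variable names are mine):

```python
from urllib.parse import unquote

# Percent-encoded session name as it appears in the gluster_log_file
# path above; %3A = ':', %2F = '/', %40 = '@'.
encoded = "ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1"
decoded = unquote(encoded)
print(decoded)  # ssh://root@192.168.178.65:gluster://127.0.0.1:mvol1
```

So "%3Agluster" decodes to ":gluster", which is why searching for "3agluster" matches every geo-replication session path in the archive.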
2018 Mar 13 · 0 replies · trashcan on dist. repl. volume with geo-replication
...disable: on
> performance.client-io-threads: off
>
> root at gl-node1:/myvol-1/test1# gluster volume geo-replication mvol1
> gl-node5-int::mvol1 config
> special_sync_mode: partial
> gluster_log_file:
> /var/log/glusterfs/geo-replication/mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1.gluster.log
> ssh_command: ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
> /var/lib/glusterd/geo-replication/secret.pem
> change_detector: changelog
> use_meta_volume: true
> session_owner: a1c74931-568c-4f40-8573-dd344553e557
> stat...
2018 Jan 19 · 2 replies · geo-replication command rsync returned with 3
...tection mode
[2018-01-19 14:23:23.58454] I [master(/brick1/mvol1):367:__init__]
_GMaster: using 'rsync' as the sync engine
[2018-01-19 14:23:25.123959] I [master(/brick1/mvol1):1249:register]
_GMaster: xsync temp directory:
/var/lib/misc/glusterfsd/mvol1/ssh%3A%2F%2Froot%4082.199.131.135%3Agluster%3A%2F%2F127.0.0.1%3Asvol1/0a6056eb995956f1dc84f32256dae472/xsync
[2018-01-19 14:23:25.124351] I
[resource(/brick1/mvol1):1528:service_loop] GLUSTER: Register time:
1516371805
[2018-01-19 14:23:25.127505] I [master(/brick1/mvol1):510:crawlwrap]
_GMaster: primary master with volume id
2f5de6e4-66...
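For the thread title above: rsync's exit status 3 is documented in rsync(1) as "Errors selecting input/output files, dirs", which in a geo-replication setup usually points at a path that is missing or unreadable on one side of the session. A minimal lookup sketch; the code-to-meaning pairs are taken from the man page, while the table and helper name are my own:

```python
# Common rsync exit statuses, per the EXIT VALUES section of rsync(1).
RSYNC_EXIT_CODES = {
    0: "Success",
    1: "Syntax or usage error",
    3: "Errors selecting input/output files, dirs",
    12: "Error in rsync protocol data stream",
    23: "Partial transfer due to error",
    24: "Partial transfer due to vanished source files",
    30: "Timeout in data send/receive",
}

def explain_rsync_exit(code: int) -> str:
    """Map an rsync exit status to its documented meaning."""
    return RSYNC_EXIT_CODES.get(code, f"Unknown exit status {code}")

print(explain_rsync_exit(3))  # Errors selecting input/output files, dirs
```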
2018 Mar 13 · 1 reply · trashcan on dist. repl. volume with geo-replication
...rformance.client-io-threads: off
>
> root at gl-node1:/myvol-1/test1# gluster volume geo-replication mvol1
> gl-node5-int::mvol1 config
> special_sync_mode: partial
> gluster_log_file:
> /var/log/glusterfs/geo-replication/mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1.gluster.log
> ssh_command: ssh -oPasswordAuthentication=no
> -oStrictHostKeyChecking=no -i
> /var/lib/glusterd/geo-replication/secret.pem
> change_detector: changelog
> use_meta_volume: true
> session_owner: a1c74931-568c-4f40-8573...
2018 Jan 19 · 0 replies · geo-replication command rsync returned with 3
...18-01-19 14:23:23.58454] I [master(/brick1/mvol1):367:__init__]
>_GMaster: using 'rsync' as the sync engine
>[2018-01-19 14:23:25.123959] I [master(/brick1/mvol1):1249:register]
>_GMaster: xsync temp directory:
>/var/lib/misc/glusterfsd/mvol1/ssh%3A%2F%2Froot%4082.199.131.135%3Agluster%3A%2F%2F127.0.0.1%3Asvol1/0a6056eb995956f1dc84f32256dae472/xsync
>[2018-01-19 14:23:25.124351] I
>[resource(/brick1/mvol1):1528:service_loop] GLUSTER: Register time:
>1516371805
>[2018-01-19 14:23:25.127505] I [master(/brick1/mvol1):510:crawlwrap]
>_GMaster: primary master with vo...
2017 Nov 25 · 1 reply · How to read geo replication timestamps from logs
Folks, need help interpreting this message from my geo rep logs for my
volume mojo.
ssh%3A%2F%2Froot%40173.173.241.2%3Agluster%3A%2F%2F127.0.0.1%3Amojo-remote.log:[2017-11-22 00:59:40.610574] I
[master(/bricks/lsi/mojo):1125:crawl] _GMaster: slave's time: (1511312352, 0)
The epoch of 1511312352 is Wednesday, November 22, 2017 12:59:12 AM GMT.
The clocks are using the same ntp stratum and seem right on the money for
ma...
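The "slave's time" tuple logged above is seconds since the Unix epoch plus a sub-second component. Converting the first field with the Python standard library gives the GMT wall-clock time directly:

```python
from datetime import datetime, timezone

# slave's time tuple from the log line above:
# (seconds since epoch, sub-second component)
stime = (1511312352, 0)
utc = datetime.fromtimestamp(stime[0], tz=timezone.utc)
print(utc.isoformat())  # 2017-11-22T00:59:12+00:00
```

1511312352 lands at 2017-11-22 00:59:12 UTC, i.e. about 28 seconds behind the log entry's own timestamp.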
2017 Sep 29 · 1 reply · Gluster geo replication volume is faulty
.../gv0
[2017-09-29 15:53:30.252793] I
[gsyncdstatus(monitor):242:set_worker_status] GeorepStatus: Worker Status
Change status=Faulty
[2017-09-29 15:53:30.742058] I [master(/gfs/arbiter/gv0):1515:register]
_GMaster: Working dir
path=/var/lib/misc/glusterfsd/gfsvol/ssh%3A%2F%2Fgeo-rep-user%4010.1.1.104%3Agluster%3A%2F%2F127.0.0.1%3Agfsvol_rep/40efd54bad1d5828a1221dd560de376f
[2017-09-29 15:53:30.742360] I
[resource(/gfs/arbiter/gv0):1654:service_loop] GLUSTER: Register time
time=1506700410
[2017-09-29 15:53:30.754738] I
[gsyncdstatus(/gfs/arbiter/gv0):275:set_active] GeorepStatus: Worker Status
Change stat...
2017 Oct 06 · 0 replies · Gluster geo replication volume is faulty
...52793] I
> [gsyncdstatus(monitor):242:set_worker_status] GeorepStatus: Worker
> Status Change status=Faulty
> [2017-09-29 15:53:30.742058] I
> [master(/gfs/arbiter/gv0):1515:register] _GMaster: Working
> dir path=/var/lib/misc/glusterfsd/gfsvol/ssh%3A%2F%2Fgeo-rep-user%4010.1.1.104%3Agluster%3A%2F%2F127.0.0.1%3Agfsvol_rep/40efd54bad1d5828a1221dd560de376f
> [2017-09-29 15:53:30.742360] I
> [resource(/gfs/arbiter/gv0):1654:service_loop] GLUSTER: Register
> time time=1506700410
> [2017-09-29 15:53:30.754738] I
> [gsyncdstatus(/gfs/arbiter/gv0):275:set_active] GeorepStatus...
2018 Mar 06 · 1 reply · geo replication
...49.739870] I [gsyncd(/gfs/testtomcat/mount):799:main_i] <top>: Closing feedback fd, waking up the monitor
[2018-03-06 08:32:51.872872] I [master(/gfs/testtomcat/mount):1518:register] _GMaster: Working dir path=/var/lib/misc/glusterfsd/testtomcat/ssh%3A%2F%2Froot%40172.16.81.101%3Agluster%3A%2F%2F127.0.0.1%3Atesttomcat/b6a7905143e15d9b079b804c0a8ebf42
[2018-03-06 08:32:51.873176] I [resource(/gfs/testtomcat/mount):1653:service_loop] GLUSTER: Register time time=1520325171
[2018-03-06 08:32:51.926801] E [syncdutils(/gfs/testtomcat/mount):299:log_raise_exception] <top...
2017 Jul 28 · 0 replies · /var/lib/misc/glusterfsd growing and using up space on OS disk
...my OS disk I just discovered that there is a /var/lib/misc/glusterfsd directory which seems to save data related to geo-replication.
In particular there is a hidden sub-directory called ".processed" as you can see here:
/var/lib/misc/glusterfsd/woelkli-pro/ssh%3A%2F%2Froot%40192.168.0.10%3Agluster%3A%2F%2F127.0.0.1%3Amyvolume-geo/6d844f56e12ecd14d2e36242f045e38c/.processed
which contains one archive file per month, example:
-rw-r--r-- 1 root root 152494080 Apr 30 23:34 archive_201704.tar
-rw-r--r-- 1 root root 43284480 May 31 23:35 archive_201705.tar
...
These tar files seem to save the CHAN...
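To see how much of the OS disk those monthly archives are consuming, a small sketch like this can total the archive_*.tar files in a .processed directory (the function name is mine; the real per-session path looks like the one quoted above):

```python
from pathlib import Path

def archive_bytes(processed_dir: str) -> int:
    """Total size in bytes of the monthly archive_*.tar files
    in a geo-replication .processed directory."""
    return sum(p.stat().st_size
               for p in Path(processed_dir).glob("archive_*.tar"))

# Example (placeholder path; substitute your own session directory):
# print(archive_bytes(
#     "/var/lib/misc/glusterfsd/woelkli-pro/<session>/.processed"))
```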
2011 Jun 28 · 2 replies · Issue with Gluster Quota
An HTML attachment was scrubbed...
URL: <http://supercolony.gluster.org/pipermail/gluster-users/attachments/20110628/64de4f5c/attachment.html>