Displaying 3 results from an estimated 3 matches for "remotevol".
2017 Jun 30
1
How to deal with FAILURES count in geo rep
Hello,
I have a replica 2 volume with a remote slave node for geo-replication (GlusterFS 3.8.11 on Debian 8) and saw for the first time a non-zero number in the FAILURES column when running:
gluster volume geo-replication myvolume remotehost::remotevol status detail
Right now the number under the FAILURES column is 32, and I have a few questions about how to deal with that:
- first, what does 32 mean? Is it the number of files which failed to be geo-replicated onto the slave node?
- how can I find out which files failed to replicate?
- how can I m...
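The status output can also be inspected programmatically. A minimal sketch in Python that flags workers reporting a non-zero FAILURES count; the column layout used here is an assumption for illustration, since the real `status detail` output varies between Gluster versions:

```python
# Hypothetical sketch: scan `gluster volume geo-replication ... status detail`
# output lines and report workers whose FAILURES column is non-zero.
# The column index is an assumption -- check it against your version's output.

def failed_workers(status_lines, failures_col):
    """Return (master_node, failures) pairs where FAILURES > 0."""
    results = []
    for line in status_lines:
        fields = line.split()
        if len(fields) <= failures_col:
            continue
        try:
            failures = int(fields[failures_col])
        except ValueError:
            continue  # header or separator line, not a worker row
        if failures > 0:
            results.append((fields[0], failures))
    return results

# Illustrative sample, not real output:
sample = [
    "MASTER NODE  STATUS   ...  FAILURES",
    "node1        Active   ...  32",
    "node2        Passive  ...  0",
]
print(failed_workers(sample, 3))  # → [('node1', 32)]
```

This only narrows down *which worker* accumulated failures; the per-file details usually have to come from the gsyncd logs on the master side.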
2017 Jul 28
0
How to deal with FAILURES count in geo rep
...Gluster Users <gluster-users@gluster.org>
> Hello,
> I have a replica 2 volume with a remote slave node for geo-replication (GlusterFS 3.8.11 on Debian 8) and saw for the first time a non-zero number in the FAILURES column when running:
> gluster volume geo-replication myvolume remotehost::remotevol status detail
> Right now the number under the FAILURES column is 32, and I have a few questions about how to deal with that:
> - first, what does 32 mean? Is it the number of files which failed to be geo-replicated onto the slave node?
> - how can I find out which files failed to replicate...
2012 Mar 20
1
issues with geo-replication
...y. This is the command I am using:
gluster volume geo-replication myvol ssh://root@remoteip:/data/path start
I am able to perform a geo-replication from a local volume to a remote
volume with no problem using the following command:
gluster volume geo-replication myvol ssh://root@remoteip::remotevol start
The steps I am using to implement this:
1: Create key pair for geo-replication in
/etc/glusterd/geo-replication/secret.pem and secret.pem.pub
2: Add pub key to ~root/.ssh/authorized_keys on target systems
3: Verify key works (using geo-replication's ssh syntax):
[root@myboxen ~]# ssh...
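The steps above can be sketched as a shell sequence. Hostnames, volume names, and the key directory are illustrative (older Gluster releases used /etc/glusterd/, later ones /var/lib/glusterd/), so treat this as a sketch rather than exact commands:

```shell
# Sketch of the setup steps above; names/paths are examples, not
# version-exact commands.

# 1: Create the geo-replication key pair on the master
#    (produces secret.pem and secret.pem.pub):
ssh-keygen -f /etc/glusterd/geo-replication/secret.pem -N ''

# 2: Add the public key to root's authorized_keys on the slave:
ssh-copy-id -i /etc/glusterd/geo-replication/secret.pem.pub root@remoteip

# 3: Verify the key works with geo-replication's own ssh identity:
ssh -i /etc/glusterd/geo-replication/secret.pem root@remoteip hostname

# 4: Start geo-replication to the remote slave volume:
gluster volume geo-replication myvol ssh://root@remoteip::remotevol start
```

Note the `::` in step 4: a double colon targets a remote Gluster *volume*, while a single colon (as in the failing `ssh://root@remoteip:/data/path` attempt above) targets a plain directory path, which behaves differently.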