Displaying 20 results from an estimated 30000 matches similar to: "How to get amount of in flight geo-replication data?"
2018 Jan 22
1
geo-replication initial setup with existing data
2018 Jan 28
0
Geo-Replication Rsync Command, bug?
2018 Feb 07
1
geo-replication command rsync returned with 3
Hi,

Kotresh's workaround works for me. But before I tried it, I created some strace logs for Florian.
Setup: 2 VMs (192.168.222.120 master, 192.168.222.121 slave), both with a volume named vol, running Ubuntu 16.04.3, glusterfs 3.13.2, rsync 3.1.1.

Best regards,
Tino

root@master:~# cat /usr/bin/rsync
#!/bin/bash
# Wrapper that traces each rsync invocation; the real rsync binary is
# assumed to have been moved to /usr/bin/rsynco beforehand, so the
# wrapper does not call itself recursively.
strace -o /tmp/rsync.trace -ff /usr/bin/rsynco "$@"
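With -ff, strace writes one trace file per traced process, so the traces end up as /tmp/rsync.trace.<pid>.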
One of the traces
2018 Jan 29
0
geo-replication command rsync returned with 3
Hi all,
by downgrading
ii  libc6:amd64  2.23-0ubuntu10
to
ii  libc6:amd64  2.23-0ubuntu3
the problem was solved, at least in our gfs test environment running gfs
3.13.2 and 3.7.20, and in our production environment with 3.7.18.
Possibly it helps someone...
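For anyone wanting to reproduce the downgrade, a minimal sketch on Ubuntu 16.04 (assuming the older build is still available from your mirror):

# Downgrade glibc to the earlier build and keep apt from upgrading it again
apt-get install --allow-downgrades libc6=2.23-0ubuntu3
apt-mark hold libc6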
best regards
Dietmar
On 25.01.2018 at 14:06, Dietmar Putz wrote:
>
> Hi Kotresh,
>
> thanks for your response...
>
> i
2018 Jan 25
2
geo-replication command rsync returned with 3
Hi Kotresh,
thanks for your response...
I have made further tests based on Ubuntu 16.04.3 (latest upgrades) and
gfs 3.12.5 with the following rsync versions:
1. ii  rsync  3.1.1-3ubuntu1
2. ii  rsync  3.1.1-3ubuntu1.2
3. ii  rsync  3.1.2-2ubuntu0.1
In each test all nodes had the same rsync version installed. all
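A quick way to confirm which rsync build each node actually runs (a sketch):

# Print package name and installed version from dpkg's status list
dpkg -l rsync | awk '/^ii/ {print $2, $3}'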
2015 Aug 31
0
Cross-compiling tinc 1.1 for Windows
2013 Aug 27
0
Which Tag Editor with Linux can be used for Opus encoded files
2018 Feb 06
0
geo-replication command rsync returned with 3
Hi,
As a quick workaround to get geo-replication working, please configure the
following option:
gluster vol geo-replication <mastervol> <slavehost>::<slavevol> config access_mount true
With this option geo-replication will not do the lazy umount, and as a
result all the master and slave volume mounts maintained by
geo-replication can be accessed by others; they are also visible in the
df output.
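For example, with the volume and host names used elsewhere in this thread (placeholders, not the exact commands from the original mail):

# Enable the workaround for the session
gluster volume geo-replication vol 192.168.222.121::vol config access_mount true
# List the session configuration to verify the change
gluster volume geo-replication vol 192.168.222.121::vol config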
2017 Aug 07
0
How to delete geo-replication session?
Hi,
I would really like to get rid of this geo-replication session, as I am stuck with it right now. For example, I can't even stop my volume, as it complains about that geo-replication...
Can someone let me know how I can delete it?
Thanks
> -------- Original Message --------
> Subject: How to delete geo-replication session?
> Local Time: August 1, 2017 12:15 PM
> UTC Time: August
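For reference, the usual sequence for removing a session (a sketch, not from the original thread; the session normally has to be stopped first):

# Stop the session; force may be needed if the slave is unreachable
gluster volume geo-replication <mastervol> <slavehost>::<slavevol> stop force
# Remove the session definition
gluster volume geo-replication <mastervol> <slavehost>::<slavevol> delete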
2012 Mar 22
0
HDD- and NIC-stats for dom0
2017 Sep 21
0
Arbiter and geo-replication
Hi all!
Today I have a small gluster replication setup on 2 machines.
I plan to scale this, but I need some feedback on whether the way I plan
things is going in the right direction.
First of all, I have understood the need for an arbiter.
When I scale this, say I have just 2 replicas and 1 arbiter: when I
add another two machines, can I still use the same physical machine as
the arbiter?
Or when I add
2017 Sep 22
0
Arbiter and geo-replication
On 09/22/2017 02:25 AM, Kotresh Hiremath Ravishankar wrote:
> The volume layout of the geo-replication slave volume can be different 
> from the master volume.
> It's not mandatory that if the master volume is of arbiter type, the 
> slave also needs to be arbiter.
> But if it's decided to use an arbiter both at the master and the slave, 
> then the expansion rules are
> applicable
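To illustrate that point, a master/slave pair with different layouts might be created like this (a sketch only; host and brick names are made up):

# Master: replica 3 with one arbiter brick per replica set
gluster volume create mastervol replica 3 arbiter 1 \
    m1:/bricks/b1 m2:/bricks/b1 arb:/bricks/b1
# Slave: plain replica 2, no arbiter
gluster volume create slavevol replica 2 \
    s1:/bricks/b1 s2:/bricks/b1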
2018 Feb 07
0
add geo-replication "passive" node after node replacement
Hi,
When S3 is added to the master volume from a new node, the following
commands should be run to generate and distribute the ssh keys:
1. Generate ssh keys from the new node
       # gluster system:: execute gsec_create
2. Push those ssh keys of the new node to the slave
       # gluster vol geo-rep <mastervol> <slavehost>::<slavevol> create push-pem force
3. Stop and start geo-rep (see the sketch below)
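A sketch of step 3 (volume and host names are placeholders):

gluster volume geo-replication <mastervol> <slavehost>::<slavevol> stop
gluster volume geo-replication <mastervol> <slavehost>::<slavevol> start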
But note that
2012 Nov 15
0
Why does geo-replication stop when a replica member goes down
Hi,
We are testing glusterfs. We have a setup like this:
Site A: 4 nodes, 2 bricks per node, 1 volume, distributed, replicated,
replica count 2
Site B: 2 nodes, 2 bricks per node, 1 volume, distributed
geo-replication setup: master: site A, node 1. slave: site B, node 1, ssh
replica sets on Site A:
node 1, brick 1 + node 3, brick 1
node 2, brick 1 + node 4, brick 1
node 2, brick 2 + node 3, brick
2017 Aug 08
0
How to delete geo-replication session?
When I run "gluster volume geo-replication status" I see my geo-replication session correctly, including the volume name under the "VOL" column. I see my two nodes (node1 and node2) but not arbiternode, as I added it later, after setting up geo-replication. For more details have a quick look at my previous post here:
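For reference, the per-session form of that query gives more columns (a sketch; names are placeholders):

gluster volume geo-replication <mastervol> <slavehost>::<slavevol> status detail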
2011 Jul 26
1
Error during geo-replication : Unable to get <uuid>.xtime attr
Hi,
I got a problem during geo-replication:
The master Gluster server log has the following error every second:
[2011-07-26 04:20:50.618532] W [libxlator.c:128:cluster_markerxtime_cbk]
0-flvol-dht: Unable to get <uuid>.xtime attr
While the slave log has the error every few seconds:
[2011-07-26 04:25:08.77133] E
[stat-prefetch.c:695:sp_remove_caches_from_all_fds_opened]
2023 Mar 21
1
can't set up geo-replication: can't fetch slave details
Hi,
is this a rare problem?
Cheers,
Kingsley.
On Tue, 2023-03-14 at 19:31 +0000, Kingsley Tart wrote:
> Hi,
> 
> Using Gluster 9.2 on Debian 11, I'm trying to set up geo-replication.
> I am following this guide:
> 
> 
> https://docs.gluster.org/en/main/Administrator-Guide/Geo-Replication/#password-less-ssh
> 
> I have a volume called "ansible" which is only a
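For the password-less SSH step in that guide, the usual sequence is (a sketch; the root user and slave host name are assumptions):

# On the master node: create a key and copy it to the slave
ssh-keygen
ssh-copy-id root@slavehost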
2012 Mar 14
2
Memory statistics for DomU's
2018 Jan 18
1
Deploying geo-replication to local peer
Hi Kotresh,

Thanks for the response!

After running more tests with this specific geo-replication configuration, I realized that
the file extended attributes trusted.gfid and trusted.gfid2path.*** are synced as well during geo-replication.
I'm concerned about the trusted.gfid attribute, because its value has to be unique within a glusterfs cluster.
But this is not the case in my tests. File on
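One way to compare that attribute on the master and slave bricks is getfattr (a sketch; the brick path is a placeholder, and it must be run against the brick directory, not a client mount):

getfattr -n trusted.gfid -e hex /bricks/b1/path/to/file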
2023 Nov 03
0
Gluster Geo replication
Hi,
You simply need to open port 22 on the geo-replication slave side. This will allow the master node to establish an SSH connection with the slave server and transfer data securely over SSH.
Thanks,
Anant
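A sketch of opening that port on the slave, assuming a firewalld-based distribution (adjust to your firewall):

firewall-cmd --permanent --add-service=ssh
firewall-cmd --reload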
________________________________
From: Gluster-users <gluster-users-bounces at gluster.org> on behalf of dev devops <dev.devops12 at gmail.com>
Sent: 31 October 2023 3:10 AM