similar to: active-active georeplication?

Displaying 20 results from an estimated 800 matches similar to: "active-active georeplication?"

2017 Oct 24
0
active-active georeplication?
Hi, No, gluster doesn't support active-active geo-replication. It's not planned for the near future. We will let you know when it's planned. Thanks, Kotresh HR On Tue, Oct 24, 2017 at 11:19 AM, atris adam <atris.adam at gmail.com> wrote: > hi everybody, > > Has glusterfs released a feature named active-active geo-replication? If > yes, in which version was it released?
2017 Oct 24
2
active-active georeplication?
Halo replication [1] could be of interest here. This functionality is available since 3.11 and the current plan is to have it fully supported in a 4.x release. Note that Halo replication is built on existing synchronous replication in Gluster and differs from the current geo-replication implementation. Kotresh's response is spot on for the current geo-replication implementation. Regards,
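A minimal sketch of how Halo replication might be enabled on an existing replicated volume, assuming a Gluster release of 3.11 or later; the volume name and latency value are placeholders and the option names should be verified against your release:

    # Enable Halo on an existing replicated volume (hypothetical volume name).
    gluster volume set test-halo cluster.halo-enabled yes
    # Treat only bricks within 10 ms of the client as part of the local (halo) set.
    gluster volume set test-halo cluster.halo-max-latency 10
    # Keep at least 2 replicas in the active set.
    gluster volume set test-halo cluster.halo-min-replicas 2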
2017 Oct 24
0
active-active georeplication?
Thanks for the reply, that was very interesting to me. How can I keep up with news about new glusterfs features? On Tue, Oct 24, 2017 at 5:54 PM, Vijay Bellur <vbellur at redhat.com> wrote: > > Halo replication [1] could be of interest here. This functionality is > available since 3.11 and the current plan is to have it fully supported in > a 4.x release. > > Note that Halo
2017 Sep 17
2
georeplication sync deamon
hi all, I want to know some more detail about glusterfs geo-replication, specifically about the sync daemon. If 'file A' was mirrored in the slave volume and a change happens to 'file A', how does the sync daemon act? 1. transfer the whole 'file A' to the slave 2. transfer only the changes of 'file A' to the slave thx a lot
2017 Jun 23
2
seeding my georeplication
I have a ~600tb distributed gluster volume that I want to start using geo-replication on. The current volume is on 6 100tb bricks on 2 servers. My plan is: 1) copy each of the bricks to new arrays on the servers locally 2) move the new arrays to the new servers 3) create the volume on the new servers using the arrays 4) fix the layout on the new volume 5) start geo-replication (which should be
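A hedged sketch of the commands such a plan might use once the copied bricks are in place on the new servers; the hostnames, brick paths, and volume names are placeholders:

    # Create the new volume from the pre-seeded bricks (placeholder names and paths).
    gluster volume create newvol new1:/data/brick1 new2:/data/brick2
    gluster volume start newvol
    # Step 4: fix the layout so the copied data is indexed under the new brick layout.
    gluster volume rebalance newvol fix-layout start
    # Step 5: set up and start geo-replication from the existing master volume.
    gluster volume geo-replication mastervol new1::newvol create push-pem
    gluster volume geo-replication mastervol new1::newvol start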
2017 Dec 21
1
seeding my georeplication
Thanks for your response (6 months ago!) but I have only just got around to following up on this. Unfortunately, I had already copied and shipped the data to the second datacenter before copying the GFIDs so I already stumbled before the first hurdle! I have been using the scripts in the extras/geo-rep provided for an earlier version upgrade. With a bit of tinkering, these have given me a file
2018 Feb 08
2
georeplication over ssh.
That makes for an interesting problem. I cannot open port 24007 to allow RPC access. On 02/07/2018 11:29 PM, Kotresh Hiremath Ravishankar wrote: > Hi Alvin, > > Yes, geo-replication sync happens via SSH. The server port 24007 is > glusterd's. > glusterd will be listening on this port and all volume management > communication > happens via RPC. > > Thanks, >
2018 Feb 08
0
georeplication over ssh.
Ccing the glusterd team for information On Thu, Feb 8, 2018 at 10:02 AM, Alvin Starr <alvin at netvel.net> wrote: > That makes for an interesting problem. > > I cannot open port 24007 to allow RPC access. > > On 02/07/2018 11:29 PM, Kotresh Hiremath Ravishankar wrote: > > Hi Alvin, > > Yes, geo-replication sync happens via SSH. The server port 24007 is of >
2018 Feb 07
2
georeplication over ssh.
I am running gluster 3.8.9 and trying to set up a geo-replicated volume over ssh. It looks like the volume create command is trying to directly access the server over port 24007. The docs imply that all communications are over ssh. What am I missing? -- Alvin Starr || land: (905)513-7688 Netvel Inc. || Cell: (416)806-0133 alvin at netvel.net
2018 Feb 08
0
georeplication over ssh.
Hi Alvin, Yes, geo-replication sync happens via SSH. The server port 24007 belongs to glusterd. glusterd will be listening on this port and all volume management communication happens via RPC. Thanks, Kotresh HR On Wed, Feb 7, 2018 at 8:29 PM, Alvin Starr <alvin at netvel.net> wrote: > I am running gluster 3.8.9 and trying to set up a geo-replicated volume > over ssh, > > It looks
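For reference, a hedged sketch of the session setup being discussed: the data sync runs over SSH, but the create step still talks to glusterd on the slave over TCP 24007. Volume and host names below are placeholders:

    # Create the session; this step contacts glusterd on the slave (port 24007).
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    # Once created, the actual file sync runs over SSH (port 22 by default).
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status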
2018 Feb 04
2
halo not work as desired!!!
I have 2 data centers in two different regions; each DC has 3 servers. I have created a glusterfs volume with 4 replicas. This is the glusterfs volume info output:
Volume Name: test-halo
Type: Replicate
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: 10.0.0.1:/mnt/test1
Brick2: 10.0.0.3:/mnt/test2
Brick3: 10.0.0.5:/mnt/test3
Brick4: 10.0.0.6:/mnt/test4
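A hedged example of the Halo tuning options one might check for a volume like this; the values are illustrative and option availability depends on the release:

    # Confirm Halo is enabled and inspect/adjust the latency threshold.
    gluster volume get test-halo cluster.halo-enabled
    gluster volume set test-halo cluster.halo-max-latency 10
    # With replica 4 spread across two regions, cap the active set to the local pair.
    gluster volume set test-halo cluster.halo-max-replicas 2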
2018 Feb 05
0
halo not work as desired!!!
I have mounted the halo glusterfs volume in debug mode, and the output is as follows: . . .
[2018-02-05 11:42:48.282473] D [rpc-clnt-ping.c:211:rpc_clnt_ping_cbk] 0-test-halo-client-1: Ping latency is 0ms
[2018-02-05 11:42:48.282502] D [MSGID: 0] [afr-common.c:5025:afr_get_halo_latency] 0-test-halo-replicate-0: Using halo latency 10
[2018-02-05 11:42:48.282525] D [MSGID: 0]
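A hedged sketch of how such a debug-level client log might be obtained; the server, volume, and mount point are placeholders, and the FUSE mount helper accepts a log-level option:

    # Mount the volume with client-side DEBUG logging (placeholder host/volume/path).
    mount -t glusterfs -o log-level=DEBUG 10.0.0.1:/test-halo /mnt/test-halo
    # The AFR halo messages then appear in the client log; by convention the log file
    # name is derived from the mount path.
    tail -f /var/log/glusterfs/mnt-test-halo.log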
2013 Jul 08
1
Possible to preload data on a georeplication target? First sync taking forever...
I have about 4 TB of data in a Gluster mirror configuration on top of ZFS, mostly consisting of 20KB files. I've added a georeplication target and the sync started ok. The target is using an SSH destination. It ran pretty quick for a while but it's taken over 2 weeks to sync just under 1 TB of data to the target server and it appears to be getting slower. The two servers are connected
2012 Jun 29
2
compile glusterfs for debian squeeze
Hello, I'm compiling glusterfs for a debian squeeze system. When I run make, I see these parameters:
GlusterFS configure summary
===========================
FUSE client: yes
Infiniband verbs: yes
epoll IO multiplex: yes
argp-standalone: no
fusermount: no
readline: no
georeplication: yes
I would like to create a package that can be used both as a client and a server. I'm not interested
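A hedged sketch of the from-source build flow that produces a configure summary like the one above; exact dependencies and configure flags vary by release and are not shown:

    # Build GlusterFS from a source tree (standard autotools flow).
    ./autogen.sh
    ./configure
    make
    # Optional: build Debian packages instead, if the tree ships packaging files.
    # dpkg-buildpackage -us -uc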
2017 Oct 24
2
create volume in two different Data Centers
Hi I have two data centers, each of them with 3 servers. These two data centers can see each other over the internet. I want to create a distributed glusterfs volume with these 6 servers, but I have only one valid ip in each data center. Is it possible to create a glusterfs volume? Can anyone guide me? thx a lot
2018 Jul 13
2
Upgrade to 4.1.1 geo-replication does not work
Hi Kotresh, Yes, all nodes have the same version 4.1.1, both master and slave. All glusterd are crashing on the master side. Will send logs tonight. Thanks, Marcus ################ Marcus Pedersén System administrator Interbull Centre ################ Sent from my phone ################ On 13 July 2018 at 11:28, Kotresh Hiremath Ravishankar <khiremat at redhat.com> wrote: Hi Marcus, Is the
2018 Jan 18
1
Deploying geo-replication to local peer
Hi Kotresh, Thanks for the response! After running more tests with this specific geo-replication configuration I realized that the file extended attributes trusted.gfid and trusted.gfid2path.*** are synced as well during geo-replication. I'm concerned about the trusted.gfid attribute because its value has to be unique within a glusterfs cluster. But this is not the case in my tests. File on
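A hedged example of how the GFID attributes mentioned above can be inspected directly on a brick; the paths are placeholders and the commands need root:

    # Show the trusted.gfid xattr of a file on the brick (placeholder path).
    getfattr -n trusted.gfid -e hex /bricks/brick1/path/to/file
    # Dump all trusted.* attributes, including any trusted.gfid2path.* entries.
    getfattr -d -m 'trusted\.' -e hex /bricks/brick1/path/to/file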
2018 Feb 07
0
add geo-replication "passive" node after node replacement
Hi, When S3 is added to the master volume from the new node, the following commands should be run to generate and distribute the ssh keys:
1. Generate ssh keys from the new node
   #gluster system:: execute gsec_create
2. Push those ssh keys of the new node to the slave
   #gluster vol geo-rep <mastervol> <slavehost>::<slavevol> create push-pem force
3. Stop and start geo-rep
But note that
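Put together, a hedged sketch of that sequence with the stop/start step spelled out; <mastervol>, <slavehost>, and <slavevol> are placeholders as in the original:

    # 1. On the new node, generate the geo-replication ssh keys.
    gluster system:: execute gsec_create
    # 2. Push the keys to the slave by re-running create with force.
    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> create push-pem force
    # 3. Restart the session so the new node is picked up.
    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> stop
    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> start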
2018 Feb 07
2
add geo-replication "passive" node after node replacement
Hi all, I had a replica 2 gluster 3.12 volume between S1 and S2 (1 brick per node) geo-replicated to S5, where both S1 and S2 were visible in the geo-replication status, S2 "active" and S1 "passive". I had to replace S1 with S3, so I did an "add-brick replica 3 S3" and then a "remove-brick replica 2 S1". Now I have again a replica 2 gluster between S3 and S2
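A hedged sketch of the brick replacement described above; the volume name and brick paths are placeholders:

    # Add S3 as a third replica, then drop S1 back down to replica 2.
    gluster volume add-brick myvol replica 3 S3:/data/brick
    gluster volume remove-brick myvol replica 2 S1:/data/brick force
    # Afterwards, redistribute the geo-replication ssh keys so S3 can participate
    # (see the gsec_create / "create push-pem force" steps in the reply above).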
2018 Jan 16
2
Deploying geo-replication to local peer
Hi, I'm looking for a glusterfs feature that can be used to transform data between volumes of different types provisioned on the same nodes. It could be, for example, a transformation from a disperse to a distributed volume. A possible option is to invoke geo-replication between the volumes. It seems to work properly. But I'm concerned about a requirement from the Administration Guide for Red Hat Gluster
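A hedged sketch of using geo-replication to migrate data between two volumes hosted by the same trusted pool, as described above; the volume names and node1 are placeholders, and force is shown on the assumption that it is needed when master and slave share nodes:

    # Source volume "oldvol" (e.g. disperse), target volume "newvol" (e.g. distributed),
    # both in the same pool; node1 is any node serving the target volume.
    gluster volume geo-replication oldvol node1::newvol create push-pem force
    gluster volume geo-replication oldvol node1::newvol start
    gluster volume geo-replication oldvol node1::newvol status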