similar to: New Style Replication in Version 4

Displaying 20 results from an estimated 7000 matches similar to: "New Style Replication in Version 4"

2018 Apr 30
0
New style replication in version 4?
Good morning all. I am landing here again following a spell on the list when I worked at XMA in the UK. Hello again. I have a use case of having a remote office, which should be able to have a common storage area with a main office. I recently worked on GPFS with AFM to achieve this at another company (not really relevant to this list). I at first thought Geo Replication would be ideal fo
2018 Jul 13
2
Upgrade to 4.1.1 geo-replication does not work
Hi Kotresh, Yes, all nodes have the same version 4.1.1, both master and slave. All glusterd are crashing on the master side. Will send logs tonight. Thanks, Marcus ################ Marcus Pedersén Systemadministrator Interbull Centre ################ Sent from my phone ################ On 13 July 2018 at 11:28, Kotresh Hiremath Ravishankar <khiremat at redhat.com> wrote: Hi Marcus, Is the
2018 Feb 08
2
georeplication over ssh.
That makes for an interesting problem. I cannot open port 24007 to allow RPC access. On 02/07/2018 11:29 PM, Kotresh Hiremath Ravishankar wrote: > Hi Alvin, > > Yes, geo-replication sync happens via SSH. The server port 24007 is > glusterd's. > glusterd will be listening on this port and all volume management > communication > happens via RPC. > > Thanks, >
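As a quick way to tell the two channels apart, a minimal check from a master node might look like this (the slave host name "slave1" is illustrative):

    # data channel: geo-replication file sync runs over ssh (port 22)
    ssh root@slave1 'gluster --version'
    # management channel: glusterd RPC, which is what needs port 24007
    nc -zv slave1 24007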
2018 Feb 07
2
add geo-replication "passive" node after node replacement
Hi all, I had a replica 2 gluster 3.12 between S1 and S2 (1 brick per node) geo-replicated to S5, where both S1 and S2 were visible in the geo-replication status, S2 "active" and S1 "passive". I had to replace S1 with S3, so I did an "add-brick replica 3 S3" and then "remove-brick replica 2 S1". Now I have again a replica 2 gluster between S3 and S2
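For reference, the brick replacement described above would roughly correspond to the following (the volume name "myvol" and brick paths are illustrative):

    gluster volume add-brick myvol replica 3 S3:/bricks/b1
    gluster volume remove-brick myvol replica 2 S1:/bricks/b1 force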
2017 Oct 24
2
active-active georeplication?
Halo replication [1] could be of interest here. This functionality is available since 3.11 and the current plan is to have it fully supported in a 4.x release. Note that Halo replication is built on existing synchronous replication in Gluster and differs from the current geo-replication implementation. Kotresh's response is spot on for the current geo-replication implementation. Regards,
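For anyone who wants to experiment, Halo is switched on through volume options on an existing replica volume; a minimal sketch, assuming a volume named "myvol" (the option names come from the Halo feature, the latency value is illustrative; check "gluster volume set help" on your version):

    gluster volume set myvol cluster.halo-enabled yes
    gluster volume set myvol cluster.halo-max-latency 10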
2018 Mar 02
1
geo-replication
Hi again, I have been testing and reading up on other solutions and just wanted to check if my ideas are ok. I have been looking at dispersed volumes and wonder if there are any problems running a replicated-distributed cluster on the master node and a dispersed-distributed cluster on the slave side of a geo-replication. Second thought, running dispersed on both sides, is that a problem (Master:
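As a concrete sketch of the setup being asked about (host and volume names are invented): a dispersed volume on the slave side, with an ordinary geo-rep session pointed at it:

    gluster volume create slavevol disperse 3 redundancy 1 s1:/bricks/b1 s2:/bricks/b1 s3:/bricks/b1
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem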
2018 Feb 07
0
add geo-replication "passive" node after node replacement
Hi, When S3 is added to the master volume from the new node, the following commands should be run to generate and distribute ssh keys:
1. Generate ssh keys from the new node: #gluster system:: execute gsec_create
2. Push those ssh keys of the new node to the slave: #gluster vol geo-rep <mastervol> <slavehost>::<slavevol> create push-pem force
3. Stop and start geo-rep
But note that
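Put together, the sequence would look roughly like this, using the same placeholders <mastervol>, <slavehost> and <slavevol> as above:

    # on the new node, regenerate the geo-rep ssh keys
    gluster system:: execute gsec_create
    # redistribute them to the slave by recreating the session
    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> create push-pem force
    # restart the session so the new node is picked up
    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> stop
    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> start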
2018 Jan 18
1
Deploying geo-replication to local peer
Hi Kotresh, Thanks for the response! After taking more tests with this specific geo-replication configuration I realized that the file extended attributes trusted.gfid and trusted.gfid2path.*** are synced as well during geo-replication. I'm concerned about the attribute trusted.gfid because the value of the attribute has to be unique within a glusterfs cluster. But this is not the case in my tests. File on
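For anyone reproducing this, the attributes in question can be inspected directly on the bricks with getfattr; a minimal check (the brick path is illustrative, run as root):

    getfattr -d -m . -e hex /bricks/b1/path/to/file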
2017 Oct 24
2
active-active georeplication?
hi everybody, Have glusterfs released a feature named active-active georeplication? If yes, in which version was it released? If no, is it planned to have this feature?
2018 Jan 16
2
Deploying geo-replication to local peer
Hi, I'm looking for a glusterfs feature that can be used to transform data between volumes of different types provisioned on the same nodes. It could be, for example, transformation from a disperse to a distributed volume. The possible option is to invoke geo-replication between the volumes. It seems it works properly. But I'm concerned about a requirement from the Administration Guide for Red Hat Gluster
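A sketch of the kind of intra-cluster session being tested here (the volume names "dispvol" and "distvol" are invented, and whether a slave volume on the same trusted pool can be addressed via localhost is exactly the open question in this thread):

    gluster volume geo-replication dispvol localhost::distvol create push-pem force
    gluster volume geo-replication dispvol localhost::distvol start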
2018 Feb 07
2
georeplication over ssh.
I am running gluster 3.8.9 and trying to set up a geo-replicated volume over ssh. It looks like the volume create command is trying to directly access the server over port 24007. The docs imply that all communications are over ssh. What am I missing? -- Alvin Starr || land: (905)513-7688 Netvel Inc. || Cell: (416)806-0133 alvin at netvel.net
2018 Feb 08
0
georeplication over ssh.
Ccing glusterd team for information On Thu, Feb 8, 2018 at 10:02 AM, Alvin Starr <alvin at netvel.net> wrote: > That makes for an interesting problem. > > I cannot open port 24007 to allow RPC access. > > On 02/07/2018 11:29 PM, Kotresh Hiremath Ravishankar wrote: > > Hi Alvin, > > Yes, geo-replication sync happens via SSH. The server port 24007 is of >
2018 Jan 17
0
Deploying geo-replication to local peer
Hi Viktor, Answers inline On Wed, Jan 17, 2018 at 3:46 AM, Viktor Nosov <vnosov at stonefly.com> wrote: > Hi, > > I'm looking for glusterfs feature that can be used to transform data > between > volumes of different types provisioned on the same nodes. > It could be, for example, transformation from disperse to distributed > volume. > The possible option is to
2010 Aug 16
1
lm prediction strange error
Dear all, I have an error in the simple prediction function for lm(). Maybe someone experienced the same?
xma <- matrix(data = 0, nrow = 100, ncol = 2)
xma[, 1] <- rnorm(100)
xma[, 2] <- rchisq(100, df = 3)
m1 <- lm(xma[, 1] ~ xma[, 2])
predict(m1, as.data.frame(seq(-13, 13, 0.5)))
Thanks a lot, Trafim
2017 Oct 24
0
active-active georeplication?
Thanks for the reply, that was very interesting to me. How can I keep up with news about new glusterfs features? On Tue, Oct 24, 2017 at 5:54 PM, Vijay Bellur <vbellur at redhat.com> wrote: > > Halo replication [1] could be of interest here. This functionality is > available since 3.11 and the current plan is to have it fully supported in > a 4.x release. > > Note that Halo
2018 Jan 25
2
geo-replication command rsync returned with 3
Hi Kotresh, thanks for your response... I have made further tests based on ubuntu 16.04.3 (latest upgrades) and gfs 3.12.5 with the following rsync versions: 1. ii rsync 3.1.1-3ubuntu1 2. ii rsync 3.1.1-3ubuntu1.2 3. ii rsync 3.1.2-2ubuntu0.1 in each test all nodes had the same rsync version installed. all
2017 Oct 24
0
active-active georeplication?
Hi, No, gluster doesn't support active-active geo-replication. It's not planned in the near future. We will let you know when it's planned. Thanks, Kotresh HR On Tue, Oct 24, 2017 at 11:19 AM, atris adam <atris.adam at gmail.com> wrote: > hi everybody, > > Have glusterfs released a feature named active-active georeplication? If > yes, in which version it is released?
2018 Mar 12
2
trashcan on dist. repl. volume with geo-replication
Hello, in regard to https://bugzilla.redhat.com/show_bug.cgi?id=1434066 I have been faced with another issue when using the trashcan feature on a dist. repl. volume running a geo-replication. (gfs 3.12.6 on ubuntu 16.04.4) e.g. removing an entire directory with subfolders: tron at gl-node1:/myvol-1/test1/b1$ rm -rf * afterwards listing files in the trashcan: tron at gl-node1:/myvol-1/test1$
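For context, the trashcan feature is enabled per volume; a minimal sketch using the volume name from the paths above (the size limit is illustrative):

    gluster volume set myvol-1 features.trash on
    gluster volume set myvol-1 features.trash-max-filesize 2GB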
2018 Feb 05
2
geo-replication command rsync returned with 3
On 02/05/2018 01:33 PM, Florian Weimer wrote: > Do you have strace output going further back, at least to the preceding > getcwd call? It would be interesting to see which path the kernel > reports, and if it starts with "(unreachable)". I got the strace output now, but it is very difficult to read (chdir in a multi-threaded process?). My current inclination is to blame
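To capture just the relevant calls in one trace, something along these lines might help (the pid and output path are placeholders):

    # follow all threads, log only directory-related syscalls
    strace -f -o /tmp/rsync.trace -e trace=chdir,fchdir,getcwd -p <pid-of-rsync>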
2018 Feb 21
2
Geo replication snapshot error
Hi all, I use gluster 3.12 on centos 7. I am writing a snapshot program for my geo-replicated cluster. Now when I started to run tests with my application I have found a very strange behavior regarding geo-replication in gluster. I have setup my geo-replication according to the docs: http://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/ Both master and slave clusters are
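The flow such a snapshot program would follow is roughly the following sketch (volume and snapshot names are placeholders): pause the geo-rep session, take the snapshot on the master, then resume:

    gluster volume geo-replication mastervol slavehost::slavevol pause
    gluster snapshot create snap1 mastervol
    gluster volume geo-replication mastervol slavehost::slavevol resume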