similar to: Geo replication snapshot error

Displaying 20 results from an estimated 6000 matches similar to: "Geo replication snapshot error"

2018 Feb 21
0
Geo replication snapshot error
Hi, Thanks for reporting the issue. This seems to be a bug. Could you please raise a bug at https://bugzilla.redhat.com/ under community/glusterfs? We will take a look at it and fix it. Thanks, Kotresh HR On Wed, Feb 21, 2018 at 2:01 PM, Marcus Pedersén <marcus.pedersen at slu.se> wrote: > Hi all, > I use gluster 3.12 on centos 7. > I am writing a snapshot program for my
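For readers skimming this thread: the snapshot CLI in question looks roughly like this; the snapshot and volume names are placeholders, and the poster's snapshot program itself is not shown in the excerpt.

    # Create a named snapshot of a volume, skipping the timestamp suffix
    gluster snapshot create <snapname> <volname> no-timestamp
    # List existing snapshots and delete one
    gluster snapshot list <volname>
    gluster snapshot delete <snapname>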
2018 Feb 06
4
geo-replication
Hi all, I am planning my new gluster system and tested things out in a bunch of virtual machines. I need a bit of help to understand how geo-replication behaves. I have a master gluster cluster replica 2 (in production I will use an arbiter and replicated/distributed) and the geo cluster is distributed with 2 machines. (in production I will have the geo cluster distributed) Everything is up
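For orientation, a geo-replication session between a master volume and a slave volume is typically created and started like this; all names below are placeholders, not the poster's setup.

    # One-time: generate a common pem pub file on the master cluster
    gluster system:: execute gsec_create
    # Create the session, pushing ssh keys to the slave, then start it
    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> create push-pem
    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> start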
2018 Feb 06
0
geo-replication
Hi again, I made some more tests and the behavior I get is that if any of the slaves are down the geo-replication stops working. Is this the way distributed volumes work: if one server goes down, does the entire system stop working? Do the servers that are still online not continue to work? Sorry for asking stupid questions. Best regards Marcus On Tue, Feb 06, 2018 at 12:09:40PM +0100, Marcus Pedersén
2018 Feb 07
0
geo-replication
We are happy to help you out. Please find the answers inline. On Tue, Feb 6, 2018 at 4:39 PM, Marcus Pedersén <marcus.pedersen at slu.se> wrote: > Hi all, > > I am planning my new gluster system and tested things out in > a bunch of virtual machines. > I need a bit of help to understand how geo-replication behaves. > > I have a master gluster cluster replica 2 > (in
2018 Feb 07
1
geo-replication
Thank you for your help! Just to make things clear to me (and get a better understanding of gluster): So, if I make the slave cluster just distributed and node 1 goes down, data (say file.txt) that belongs to node 1 will not be synced. When node 1 comes back up, does the master not realize that file.txt has not been synced, and make sure that it is synced once it has contact with node 1 again? So
2018 Mar 02
1
geo-replication
Hi again, I have been testing and reading up on other solutions and just wanted to check if my ideas are ok. I have been looking at dispersed volumes and wonder if there are any problems running a replicated-distributed cluster on the master side and a dispersed-distributed cluster on the slave side of a geo-replication. Second thought: running dispersed on both sides, is that a problem (Master:
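For reference, a dispersed-distributed volume of the kind being considered is created roughly as below; the six hosts and the 2+1 (data+redundancy) layout are assumptions for illustration.

    # 2 subvolumes x (2 data + 1 redundancy) = 6 bricks, distributed-dispersed
    gluster volume create <geovol> disperse 3 redundancy 1 \
        host1:/bricks/b1 host2:/bricks/b1 host3:/bricks/b1 \
        host4:/bricks/b1 host5:/bricks/b1 host6:/bricks/b1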
2018 Jul 13
2
Upgrade to 4.1.1 geo-replication does not work
Hi Kotresh, Yes, all nodes have the same version 4.1.1 both master and slave. All glusterd are crashing on the master side. Will send logs tonight. Thanks, Marcus ################ Marcus Pedersén Systemadministrator Interbull Centre ################ Sent from my phone ################ On 13 July 2018 at 11:28, Kotresh Hiremath Ravishankar <khiremat at redhat.com> wrote: Hi Marcus, Is the
2018 Mar 02
0
geo-replication
Hi Kotresh, I am expecting my hardware to show up next week. My plan is to run gluster version 3.12 on centos 7. Has the issue been fixed in version 3.12? Thanks a lot for your help! /Marcus On Fri, Mar 02, 2018 at 05:12:13PM +0530, Kotresh Hiremath Ravishankar wrote: > Hi Marcus, > > There are no issues with geo-rep and disperse volumes. It works with > disperse volume > being
2018 Feb 07
2
add geo-replication "passive" node after node replacement
Hi all, I had a replica 2 gluster 3.12 volume between S1 and S2 (1 brick per node) geo-replicated to S5, where both S1 and S2 were visible in the geo-replication status, S2 "active" and S1 "passive". I had to replace S1 with S3, so I did an "add-brick replica 3 S3" and then "remove-brick replica 2 S1". Now I again have a replica 2 gluster volume between S3 and S2
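Written out in full, the replace sequence described above looks roughly like this; the volume name and brick paths are placeholders, since the post only gives the short forms.

    # Add S3 as a third replica, then drop S1 to return to replica 2
    gluster volume add-brick <volname> replica 3 S3:/<brickpath>
    gluster volume remove-brick <volname> replica 2 S1:/<brickpath> force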
2018 Feb 07
0
add geo-replication "passive" node after node replacement
Hi, When S3 is added to the master volume from the new node, the following commands should be run to generate and distribute ssh keys:
1. Generate ssh keys from the new node: #gluster system:: execute gsec_create
2. Push those ssh keys of the new node to the slave: #gluster vol geo-rep <mastervol> <slavehost>::<slavevol> create push-pem force
3. Stop and start geo-rep
But note that
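Step 3 is only named in the excerpt; as a sketch, the stop/start cycle it refers to would look like this, with the same placeholder session names as above.

    # Restart the session so the new node's keys take effect
    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> stop
    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> start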
2024 Jan 22
1
Geo-replication status is getting Faulty after few seconds
Hi There, We have a Gluster setup with three master nodes in replicated mode and one slave node with geo-replication.
# gluster volume info
Volume Name: tier1data
Type: Replicate
Volume ID: 93c45c14-f700-4d50-962b-7653be471e27
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: master1:/opt/tier1data2019/brick
Brick2: master2:/opt/tier1data2019/brick
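A first diagnostic step for a Faulty session is the status query; a sketch using the session names that appear in the follow-up message below (tier1data to drtier1data::drtier1data).

    # Show per-brick worker state (Active/Passive/Faulty) and crawl status
    gluster volume geo-replication tier1data drtier1data::drtier1data status detail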
2024 Jan 24
1
Geo-replication status is getting Faulty after few seconds
Hi All, I have run the following commands on master3, and that has added master3 to geo-replication.
gluster system:: execute gsec_create
gluster volume geo-replication tier1data drtier1data::drtier1data create push-pem force
gluster volume geo-replication tier1data drtier1data::drtier1data stop
gluster volume geo-replication tier1data drtier1data::drtier1data start
Now I am able to start the
2024 Jan 27
1
Geo-replication status is getting Faulty after few seconds
Hi Anant, I would first start by checking whether you can ssh from all masters to the slave node. If you haven't set up a dedicated user for the session, then gluster is using root. Best Regards, Strahil Nikolov On Friday, 26 January 2024 at 18:07:59 GMT+2, Anant Saraswat <anant.saraswat at techblue.co.uk> wrote: Hi All, I have run the following commands on master3,
2024 Jan 27
1
Geo-replication status is getting Faulty after few seconds
Don't forget to test with the georep key. I think it was /var/lib/glusterd/geo-replication/secret.pem Best Regards, Strahil Nikolov On Saturday, 27 January 2024 at 07:24:07 GMT+2, Strahil Nikolov <hunter86_bg at yahoo.com> wrote: Hi Anant, I would first start by checking whether you can ssh from all masters to the slave node. If you haven't set up a dedicated user for the
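Both checks Strahil suggests can be run in one go; the slave hostname is a placeholder, and the key path is the one quoted above.

    # Plain ssh reachability from each master to the slave
    ssh root@<slavehost> hostname
    # The same test with the key the geo-replication worker actually uses
    ssh -i /var/lib/glusterd/geo-replication/secret.pem root@<slavehost> hostname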
2018 Mar 13
1
trashcan on dist. repl. volume with geo-replication
Hi Kotresh, thanks for your response... answers inside... best regards Dietmar On 13.03.2018 at 06:38, Kotresh Hiremath Ravishankar wrote: > Hi Dietmar, > > I am trying to understand the problem and have a few questions. > > 1. Is trashcan enabled only on master volume? no, trashcan is also enabled on slave. settings are the same as on master but trashcan on slave is complete
2018 Mar 13
0
trashcan on dist. repl. volume with geo-replication
Hi Dietmar, I am trying to understand the problem and have a few questions. 1. Is trashcan enabled only on the master volume? 2. Is the 'rm -rf' done on the master volume synced to the slave? 3. If trashcan is disabled, does the issue go away? The geo-rep error just says that it failed to create the directory "Oracle_VM_VirtualBox_Extension" on the slave. Usually this would be because of gfid
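For reference, the trashcan referred to here is the features.trash volume option; a minimal sketch for inspecting and toggling it, with the volume name as a placeholder.

    # Show the current trashcan setting on a volume
    gluster volume get <volname> features.trash
    # Disable the trashcan to test whether the geo-rep errors stop (question 3)
    gluster volume set <volname> features.trash off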
2017 Sep 29
1
Gluster geo replication volume is faulty
I am trying to set up geo replication between two gluster volumes. I have set up two replica 2 arbiter 1 volumes with 9 bricks.
[root at gfs1 ~]# gluster volume info
Volume Name: gfsvol
Type: Distributed-Replicate
Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: gfs2:/gfs/brick1/gv0
Brick2:
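A volume with the '3 x (2 + 1) = 9' layout shown above would be created along these lines; hostnames and brick paths are illustrative, not the poster's exact ones.

    # Distributed-replicate: 3 subvolumes of 2 data bricks plus 1 arbiter each
    gluster volume create gfsvol replica 3 arbiter 1 \
        host1:/bricks/b1 host2:/bricks/b1 host3:/bricks/b1 \
        host1:/bricks/b2 host2:/bricks/b2 host3:/bricks/b2 \
        host1:/bricks/b3 host2:/bricks/b3 host3:/bricks/b3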
2018 Mar 12
2
trashcan on dist. repl. volume with geo-replication
Hello, in regard to https://bugzilla.redhat.com/show_bug.cgi?id=1434066 I have been faced with another issue when using the trashcan feature on a dist. repl. volume running a geo-replication. (gfs 3.12.6 on ubuntu 16.04.4) e.g. removing an entire directory with subfolders: tron at gl-node1:/myvol-1/test1/b1$ rm -rf * afterwards, listing files in the trashcan: tron at gl-node1:/myvol-1/test1$
2018 Apr 19
0
Problems with geo-replication for non root user
I was trying to follow the steps for this as mentioned at https://access.redhat.com/solutions/2485621/. One difference is that in my case, the servers are running Ubuntu Server 16.04 LTS and are using gluster 3.12.8 from https://www.gluster.org/. Passwordless SSH seems to be working for a non-root user from the master node, node1.test.com.
# ssh -24q geouser at node2.test.com hostname
node2
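For non-root geo-replication the slave side also needs a mountbroker. A rough sketch of that setup, assuming the gluster-mountbroker helper shipped with this gluster version and illustrative user, group, and volume names (geouser, geogroup, <slavevol>):

    # On the slave: configure the mountbroker root and allow the unprivileged user
    gluster-mountbroker setup /var/mountbroker-root geogroup
    gluster-mountbroker add <slavevol> geouser
    # Restart glusterd on the slave so the mountbroker options take effect
    systemctl restart glusterd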
2018 Jan 19
2
geo-replication command rsync returned with 3
Dear All, we are running a dist. repl. volume on 4 nodes including geo-replication to another location. The geo-replication was running fine for months; since 18th Jan the geo-replication is faulty. The geo-rep log on the master shows the following error in a loop, while the logs on the slave just show 'I' (info) lines... Somewhat suspicious are the frequent 'shutting down connection'
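rsync exit code 3 means "errors selecting input/output files, dirs". With rsync >= 3.1, a commonly suggested mitigation is to pass --ignore-missing-args to the rsync that geo-replication spawns, so files deleted on the master mid-transfer are skipped instead of failing the worker; a sketch, with session names as placeholders.

    # Add an extra rsync option to the geo-replication session
    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> config rsync-options "--ignore-missing-args"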