similar to: add geo-replication "passive" node after node replacement

Displaying 20 results from an estimated 2000 matches similar to: "add geo-replication "passive" node after node replacement"

2018 Feb 07
0
add geo-replication "passive" node after node replacement
Hi, When S3 is added to the master volume from a new node, the following commands should be run to generate and distribute ssh keys: 1. Generate ssh keys from the new node #gluster system:: execute gsec_create 2. Push those ssh keys of the new node to the slave #gluster vol geo-rep <mastervol> <slavehost>::<slavevol> create push-pem force 3. Stop and start geo-rep But note that
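A minimal sketch of the full sequence described above, with <mastervol>, <slavehost> and <slavevol> as placeholders for your own names:
  On the cluster that got the replacement node, regenerate and collect the ssh keys:
  # gluster system:: execute gsec_create
  Redistribute them to the slave; force is needed because the session already exists:
  # gluster volume geo-replication <mastervol> <slavehost>::<slavevol> create push-pem force
  Restart the session so the new node's worker is picked up:
  # gluster volume geo-replication <mastervol> <slavehost>::<slavevol> stop
  # gluster volume geo-replication <mastervol> <slavehost>::<slavevol> start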
2017 Nov 22
2
error "Not able to add to index" in brick logs
in my /var/log/gluster/bricks/mybrick-path.log I get thousands of these errors: ------ [2017-11-22 21:06:23.768354] E [MSGID: 138003] [index.c:624:index_link_to_base] 0-sharedvol-index: /home/sharedvol/.glusterfs/indices/xattrop/0b852dad-b332-4bfe-a38b-976729ee46a2: Not able to add to index [Troppi collegamenti] (Italian for "Too many links") The message "E [MSGID: 138003] [index.c:624:index_link_to_base]
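"Troppi collegamenti" is the locale's strerror for EMLINK, i.e. the hard-link limit of the brick filesystem was hit while linking the entry to the base index file. A hedged way to confirm this on the affected brick (the directory is the one from the log and only an example; the base file is the xattrop-* file in that directory):
  # stat -c 'links=%h  %n' /home/sharedvol/.glusterfs/indices/xattrop/xattrop-*
If the link count is at the filesystem's limit (around 65000 on ext4, far higher on XFS), that explains the error.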
2018 Feb 21
2
Geo replication snapshot error
Hi all, I use gluster 3.12 on CentOS 7. I am writing a snapshot program for my geo-replicated cluster. Now that I have started to run tests with my application, I have found very strange behavior regarding geo-replication in gluster. I have set up my geo-replication according to the docs: http://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/ Both master and slave clusters are
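For context, taking a snapshot of a geo-replicated volume normally requires pausing the session first; a minimal sketch of the usual sequence, with mastervol, slavehost and slavevol as placeholder names (the slave snapshot is taken on the slave cluster):
  # gluster volume geo-replication mastervol slavehost::slavevol pause
  # gluster snapshot create master-snap1 mastervol
  # gluster snapshot create slave-snap1 slavevol
  # gluster volume geo-replication mastervol slavehost::slavevol resume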
2017 Nov 22
0
error "Not able to add to index" in brick logs
Yes, indeed, it is probably what's going on. What filesystem are you using and what are the mount options? -------- Original Message -------- From: lists at bago.org Sent: November 22, 2017 4:26 PM To: gluster-users at gluster.org Subject: [Gluster-users] error "Not able to add to index" in brick logs in my /var/log/gluster/bricks/mybrick-path.log I get thousands of these errors: ------
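A quick way to answer the question asked here, run on the brick server (the path is the one from the brick log and only an example):
  # df -Th /home/sharedvol
  # findmnt -T /home/sharedvol -o SOURCE,FSTYPE,OPTIONS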
2018 Feb 21
0
Geo replication snapshot error
Hi, Thanks for reporting the issue. This seems to be a bug. Could you please raise a bug at https://bugzilla.redhat.com/ under community/glusterfs? We will take a look at it and fix it. Thanks, Kotresh HR On Wed, Feb 21, 2018 at 2:01 PM, Marcus Pedersén <marcus.pedersen at slu.se> wrote: > Hi all, > I use gluster 3.12 on centos 7. > I am writing a snapshot program for my
2018 Feb 06
4
geo-replication
Hi all, I am planning my new gluster system and tested things out in a bunch of virtual machines. I need a bit of help to understand how geo-replication behaves. I have a master gluster cluster replica 2 (in production I will use an arbiter and replicated/distributed) and the geo cluster is distributed with 2 machines. (in production I will have the geo cluster distributed) Everything is up
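A minimal sketch of how such a session is created once the slave volume exists, assuming mastervol, geonode1 and geovol are placeholder names and passwordless root ssh from one master node to geonode1 has already been set up with gsec_create:
  # gluster volume geo-replication mastervol geonode1::geovol create push-pem
  # gluster volume geo-replication mastervol geonode1::geovol start
  # gluster volume geo-replication mastervol geonode1::geovol status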
2018 Feb 05
2
geo-replication command rsync returned with 3
On 02/05/2018 01:33 PM, Florian Weimer wrote: > Do you have strace output going further back, at least to the preceding > getcwd call? It would be interesting to see which path the kernel > reports, and if it starts with "(unreachable)". I got the strace output now, but it is very difficult to read (chdir in a multi-threaded process?). My current inclination is to blame
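If you want to capture the same kind of trace yourself, a hedged sketch (the pid is hypothetical; attach to the gsyncd worker or the rsync process it spawns):
  # strace -f -tt -e trace=chdir,getcwd,execve -o /tmp/gsyncd.strace -p <worker-pid>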
2018 Mar 02
1
geo-replication
Hi again, I have been testing and reading up on other solutions and just wanted to check if my ideas are ok. I have been looking at dispersed volumes and wonder if there are any problems running a replicated-distributed cluster on the master side and a dispersed-distributed cluster on the slave side of a geo-replication. Second thought: running dispersed on both sides, is that a problem (Master:
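For reference, a dispersed-distributed slave volume like the one described can be created in one command; a sketch assuming six hypothetical slave nodes geo1..geo6 with bricks at /bricks/b1 (this gives a 2 x (2+1) layout):
  # gluster volume create geovol disperse 3 redundancy 1 geo{1..6}:/bricks/b1
  # gluster volume start geovol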
2018 Feb 07
0
geo-replication
We are happy to help you out. Please find the answers inline. On Tue, Feb 6, 2018 at 4:39 PM, Marcus Pedersén <marcus.pedersen at slu.se> wrote: > Hi all, > > I am planning my new gluster system and tested things out in > a bunch of virtual machines. > I need a bit of help to understand how geo-replication behaves. > > I have a master gluster cluster replica 2 > (in
2018 Feb 07
1
geo-replication
Thank you for your help! Just to make things clear to me (and get a better understanding of gluster): So, if I make the slave cluster just distributed and node 1 goes down, data (say file.txt) that belongs to node 1 will not be synced. When node 1 comes back up, does the master not realize that file.txt has not been synced and make sure that it is synced once it has contact with node 1 again? So
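One way to watch what actually happens when a slave node goes down and comes back is the per-brick status; the CRAWL STATUS and LAST_SYNCED columns show whether changelogs are still being replayed. Placeholder names below:
  # gluster volume geo-replication mastervol geonode1::geovol status detail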
2018 Feb 06
0
geo-replication command rsync returned with 3
Hi, As a quick workaround for geo-replication to work, please configure the following option: gluster vol geo-replication <mastervol> <slavehost>::<slavevol> config access_mount true The above option will not do the lazy umount and, as a result, all the master and slave volume mounts maintained by geo-replication can be accessed by others. They are also visible in df output.
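The workaround as a concrete, hedged example with placeholder names; after the option is set, the auxiliary mounts kept by the geo-rep workers show up in df:
  # gluster volume geo-replication <mastervol> <slavehost>::<slavevol> config access_mount true
  # df -h | grep glusterfs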
2018 Feb 06
0
geo-replication
Hi again, I made some more tests and the behavior I get is that if any of the slaves are down the geo-replication stops working. Is this the way distributed volumes work: if one server goes down, the entire system stops working? The servers that are online do not continue to work? Sorry for asking stupid questions. Best regards Marcus On Tue, Feb 06, 2018 at 12:09:40PM +0100, Marcus Pedersén
2018 Mar 13
0
trashcan on dist. repl. volume with geo-replication
Hi Dietmar, I am trying to understand the problem and have a few questions. 1. Is trashcan enabled only on the master volume? 2. Does the 'rm -rf' done on the master volume get synced to the slave? 3. If trashcan is disabled, does the issue go away? The geo-rep error just says that it failed to create the directory "Oracle_VM_VirtualBox_Extension" on the slave. Usually this would be because of gfid
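For question 3, trashcan is a per-volume option that can be queried and toggled; a sketch with placeholder volume names (the slave command is run on the slave cluster):
  # gluster volume get mastervol features.trash
  # gluster volume set mastervol features.trash off
  # gluster volume set slavevol features.trash off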
2018 Mar 12
2
trashcan on dist. repl. volume with geo-replication
Hello, in regard to https://bugzilla.redhat.com/show_bug.cgi?id=1434066 I have been faced with another issue when using the trashcan feature on a dist. repl. volume running a geo-replication. (gfs 3.12.6 on ubuntu 16.04.4) e.g. removing an entire directory with subfolders: tron at gl-node1:/myvol-1/test1/b1$ rm -rf * afterwards, listing files in the trashcan: tron at gl-node1:/myvol-1/test1$
2018 Mar 13
1
trashcan on dist. repl. volume with geo-replication
Hi Kotresh, thanks for your response... answers inside... best regards Dietmar On 13.03.2018 at 06:38, Kotresh Hiremath Ravishankar wrote: > Hi Dietmar, > > I am trying to understand the problem and have a few questions. > > 1. Is trashcan enabled only on master volume? no, trashcan is also enabled on the slave. Settings are the same as on the master but the trashcan on the slave is complete
2018 Jan 18
1
Deploying geo-replication to local peer
Hi Kotresh, Thanks for your response! After running more tests with this specific geo-replication configuration I realized that the file extended attributes trusted.gfid and trusted.gfid2path.*** are synced as well during geo-replication. I'm concerned about the attribute trusted.gfid because the value of the attribute has to be unique within a glusterfs cluster. But this is not the case in my tests. File on
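The gfid can be inspected directly on the brick backend to compare the master and slave copies of a file; run as root on a brick server (the path is only an example):
  # getfattr -d -m . -e hex /bricks/b1/path/to/file
The trusted.gfid and trusted.gfid2path.* lines in the output are the attributes discussed above.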
2018 Jan 16
2
Deploying geo-replication to local peer
Hi, I'm looking for a glusterfs feature that can be used to transform data between volumes of different types provisioned on the same nodes. It could be, for example, transformation from a disperse to a distributed volume. The possible option is to invoke geo-replication between the volumes. It seems it works properly. But I'm concerned about a requirement from the Administration Guide for Red Hat Gluster
2018 Jan 17
0
Deploying geo-replication to local peer
Hi Viktor, Answers inline On Wed, Jan 17, 2018 at 3:46 AM, Viktor Nosov <vnosov at stonefly.com> wrote: > Hi, > > I'm looking for glusterfs feature that can be used to transform data > between > volumes of different types provisioned on the same nodes. > It could be, for example, transformation from disperse to distributed > volume. > The possible option is to
2018 May 23
0
cluster brick logs filling after upgrade from 3.6 to 3.12
Recently we updated a Gluster replicated setup from 3.6 to 3.12 (stepping through 3.8 first before going to 3.12). Afterwards I noticed the brick logs were filling at an alarming rate on the server we have the NFS service running from: $ sudo tail -20 /var/log/glusterfs/bricks/export-gluster-shared.log [2018-05-23 06:22:12.405240] I [MSGID: 139001] [posix-acl.c:269:posix_acl_log_permit_denied]
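Not a fix for the root cause, but while investigating, the flood of INFO-level ACL messages can be reduced by raising the brick log level; a hedged sketch with a placeholder volume name:
  # gluster volume set <volname> diagnostics.brick-log-level WARNING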
2018 Jan 25
0
geo-replication command rsync returned with 3
It is clear that rsync is failing. Are the rsync versions on all master and slave nodes the same? I have seen that cause problems sometimes. -Kotresh HR On Wed, Jan 24, 2018 at 10:29 PM, Dietmar Putz <dietmar.putz at 3qsdn.com> wrote: > Hi all, > I have made some tests on the latest Ubuntu 16.04.3 server image. Upgrades > were disabled... > the configuration was always the
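A quick, hedged way to check what is being asked here, assuming the hostnames are placeholders and passwordless ssh between the nodes:
  # for h in master1 master2 slave1 slave2; do echo -n "$h: "; ssh "$h" 'rsync --version | head -1'; done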