
Displaying 20 results from an estimated 3000 matches similar to: "Deploying geo-replication to local peer"

2018 Jan 17
0
Deploying geo-replication to local peer
Hi Viktor, answers inline. On Wed, Jan 17, 2018 at 3:46 AM, Viktor Nosov <vnosov at stonefly.com> wrote: > Hi, > I'm looking for a glusterfs feature that can be used to transform data > between volumes of different types provisioned on the same nodes. > It could be, for example, a transformation from a disperse to a distributed > volume. > The possible option is to
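A rough sketch of what such a local geo-replication session could look like, assuming passwordless ssh is already set up; the volume names mastervol and localslave and the use of peer node1 as the slave host are hypothetical:
  # gluster system:: execute gsec_create
  # gluster volume geo-replication mastervol node1::localslave create push-pem force
  # gluster volume geo-replication mastervol node1::localslave start
  # gluster volume geo-replication mastervol node1::localslave status
The force on create may be needed because master and slave volumes share the same nodes; that is an assumption, not something stated in the thread.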
2018 Jan 18
1
Deploying geo-replication to local peer
Hi Kotresh, Thanks for response! After taking more tests with this specific geo-replication configuration I realized that file extended attributes trusted.gfid and trusted.gfid2path.*** are synced as well during geo replication. I?m concern about attribute trusted.gfid because value of the attribute has to be unique for glusterfs cluster. But this is not a case in my tests. File on
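The attributes in question can be inspected directly on the brick backend with getfattr; the brick path below is hypothetical:
  # getfattr -n trusted.gfid -e hex /data/brick1/mastervol/path/to/file
  # getfattr -d -m . -e hex /data/brick1/mastervol/path/to/file
Running this against the brick path on each server shows exactly what is stored on disk, which is the simplest way to compare gfid values across master and slave.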
2017 Dec 08
2
Testing sharding on tiered volume
Hi, I'm looking to use sharding on a tiered volume. This is a very attractive feature that could benefit a tiered volume by letting it handle larger files without hitting the "out of (hot) space" problem. I decided to set up a test configuration on GlusterFS 3.12.3 where the tiered volume has 2TB cold and 1GB hot segments. The shard size is set to 16MB. For testing, 100GB files are used. It seems writes
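For reference, the shard configuration being tested here corresponds to two ordinary volume options; the volume name tiervol is hypothetical:
  # gluster volume set tiervol features.shard on
  # gluster volume set tiervol features.shard-block-size 16MB
With sharding enabled, everything past the first 16MB of a file is stored as separate shard files under the hidden .shard directory on the bricks, which is what lets a large file spread across bricks smaller than the file itself.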
2017 Dec 18
0
Testing sharding on tiered volume
----- Original Message ----- > From: "Viktor Nosov" <vnosov at stonefly.com> > To: gluster-users at gluster.org > Cc: vnosov at stonefly.com > Sent: Friday, December 8, 2017 5:45:25 PM > Subject: [Gluster-users] Testing sharding on tiered volume > > Hi, > > I'm looking to use sharding on tiered volume. This is very attractive > feature that could
2018 Mar 12
2
trashcan on dist. repl. volume with geo-replication
Hello, in regard to https://bugzilla.redhat.com/show_bug.cgi?id=1434066 I have been faced with another issue when using the trashcan feature on a dist. repl. volume running a geo-replication (gfs 3.12.6 on ubuntu 16.04.4), e.g. when removing an entire directory with subfolders: tron at gl-node1:/myvol-1/test1/b1$ rm -rf * and afterwards listing the files in the trashcan: tron at gl-node1:/myvol-1/test1$
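For context, the trashcan feature referred to here is a per-volume option, and deleted files land under the .trashcan directory at the root of the mounted volume. Assuming the volume behind /myvol-1 is also named myvol-1 (inferred from the mount path, so treat it as a placeholder), enabling and inspecting it looks like:
  # gluster volume set myvol-1 features.trash on
  # ls /myvol-1/.trashcan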
2018 Mar 13
0
trashcan on dist. repl. volume with geo-replication
Hi Dietmar, I am trying to understand the problem and have a few questions. 1. Is trashcan enabled only on the master volume? 2. Does the 'rm -rf' done on the master volume get synced to the slave? 3. If trashcan is disabled, does the issue go away? The geo-rep error just says that it failed to create the directory "Oracle_VM_VirtualBox_Extension" on the slave. Usually this would be because of gfid
2018 Jan 22
1
geo-replication initial setup with existing data
2018 Mar 13
1
trashcan on dist. repl. volume with geo-replication
Hi Kotresh, thanks for your response... answers inside... best regards Dietmar. On 13.03.2018 at 06:38, Kotresh Hiremath Ravishankar wrote: > Hi Dietmar, > I am trying to understand the problem and have a few questions. > 1. Is trashcan enabled only on the master volume? No, trashcan is also enabled on the slave. The settings are the same as on the master, but the trashcan on the slave is complete
2017 Oct 03
1
how to verify bitrot signed file manually?
My volume is a distributed disperse volume with 8+2 EC. file1 and file2 are different files lying in the same brick. I am able to read the file from the mount point without any issue because, thanks to EC, it reads the rest of the available blocks from other nodes. My question is: the "file1" sha256 value matches the bitrot signature value, but it is still marked as bad by the scrubber daemon. Why is that? On Fri, Sep
2017 Sep 25
2
how to verify bitrot signed file manually?
Resending mail. On Fri, Sep 22, 2017 at 5:30 PM, Amudhan P <amudhan83 at gmail.com> wrote: > OK, from the bitrot code I figured out that gluster uses the sha256 hashing algo. > Now coming to the problem: during a scrub run in my cluster some of my files > were marked as bad on a few sets of nodes. > I just wanted to confirm the bad files, so I have used the "sha256sum" tool in >
2017 Sep 29
1
how to verify bitrot signed file manually?
Hi Amudhan, sorry for the late response as I was busy with other things. You are right, bitrot uses sha256 for the checksum. If file-1 and file-2 are marked bad, the I/O should error out with EIO. If that is not happening, we need to look further into it. But what are the file contents of file-1 and file-2 on the replica bricks? Are they matching? Thanks and Regards, Kotresh HR On Mon, Sep 25,
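A manual check along the lines discussed in this thread compares the on-brick file content with the signature stored by the bitrot daemon; the brick path is hypothetical, and note that the trusted.bit-rot.signature value carries a short header before the hash, so only its trailing bytes correspond to the sha256 digest:
  # sha256sum /data/brick1/myvol/path/to/file-1
  # getfattr -n trusted.bit-rot.signature -e hex /data/brick1/myvol/path/to/file-1
  # getfattr -n trusted.bit-rot.bad-file -e hex /data/brick1/myvol/path/to/file-1
The last attribute is only present on files that the scrubber has marked as bad.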
2017 Jun 23
2
seeding my georeplication
I have a ~600tb distributed gluster volume that I want to start using geo-replication on. The current volume is on 6 100tb bricks on 2 servers. My plan is: 1) copy each of the bricks to new arrays on the servers locally 2) move the new arrays to the new servers 3) create the volume on the new servers using the arrays 4) fix the layout on the new volume (see the rebalance sketch below) 5) start geo-replication (which should be
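Step 4 of this plan maps onto a fix-layout rebalance on the newly created volume; the volume name newvol is a placeholder:
  # gluster volume rebalance newvol fix-layout start
  # gluster volume rebalance newvol status
A fix-layout only rewrites the directory layouts so new files hash correctly across the bricks; it does not move existing data, which in this plan is already in place on the copied arrays.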
2018 Feb 07
2
add geo-replication "passive" node after node replacement
Hi all, I had a replica 2 gluster 3.12 volume between S1 and S2 (1 brick per node) geo-replicated to S5, where both S1 and S2 were visible in the geo-replication status, with S2 "active" and S1 "passive". I had to replace S1 with S3, so I did an "add-brick replica 3 S3" and then a "remove-brick replica 2 S1". Now I again have a replica 2 gluster volume between S3 and S2
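Spelled out with hypothetical volume and brick names, the replacement described above would be something like:
  # gluster volume add-brick myvol replica 3 S3:/bricks/brick1
  # gluster volume remove-brick myvol replica 2 S1:/bricks/brick1 force
The force on remove-brick is needed when the replica count is being reduced; the names here are placeholders, not taken from the message.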
2018 Feb 07
0
add geo-replication "passive" node after node replacement
Hi, when S3 is added to the master volume from the new node, the following commands should be run to generate and distribute the ssh keys: 1. Generate ssh keys from the new node: #gluster system:: execute gsec_create 2. Push those ssh keys of the new node to the slave: #gluster vol geo-rep <mastervol> <slavehost>::<slavevol> create push-pem force 3. Stop and start geo-rep (see the sketch below). But note that
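Step 3, with the same placeholders as above, would be:
  # gluster volume geo-replication <mastervol> <slavehost>::<slavevol> stop
  # gluster volume geo-replication <mastervol> <slavehost>::<slavevol> start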
2018 Mar 02
1
geo-replication
Hi again, I have been testing and reading up on other solutions and just wanted to check whether my ideas are OK. I have been looking at dispersed volumes and wonder if there are any problems running a replicated-distributed cluster on the master side and a dispersed-distributed cluster on the slave side of a geo-replication. Second thought: is running dispersed on both sides a problem (Master:
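For illustration, a dispersed-distributed slave volume of the kind being considered could be created along these lines; the host and brick names are hypothetical, and six bricks give two 2+1 disperse subvolumes distributed across the hosts:
  # gluster volume create slavevol disperse 3 redundancy 1 s1:/bricks/b1 s2:/bricks/b1 s3:/bricks/b1 s1:/bricks/b2 s2:/bricks/b2 s3:/bricks/b2
Geo-replication syncs files through a mount of the slave volume, so the slave's internal layout does not have to match the master's; whether that works well for a specific workload is exactly the question being asked here.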
2017 Dec 21
1
seeding my georeplication
Thanks for your response (6 months ago!) but I have only just got around to following up on this. Unfortunately, I had already copied and shipped the data to the second datacenter before copying the GFIDs, so I already stumbled before the first hurdle! I have been using the scripts in extras/geo-rep provided for an earlier version upgrade. With a bit of tinkering, these have given me a file
2018 Feb 05
2
geo-replication command rsync returned with 3
On 02/05/2018 01:33 PM, Florian Weimer wrote: > Do you have strace output going further back, at least to the preceding > getcwd call? It would be interesting to see which path the kernel > reports, and if it starts with "(unreachable)". I got the strace output now, but it is very difficult to read (chdir in a multi-threaded process?). My current inclination is to blame
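One way to make such a trace easier to follow is to let strace write one output file per process and thread; the PID is a placeholder:
  # strace -ff -tt -e trace=chdir,getcwd -o /tmp/rsync-trace -p <PID>
With -ff each thread ends up in its own /tmp/rsync-trace.<tid> file, which avoids the interleaving that makes chdir in a multi-threaded process hard to read. This is a general strace suggestion, not something taken from the thread.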
2018 Feb 06
0
geo-replication command rsync returned with 3
Hi, as a quick workaround to get geo-replication working, please configure the following option: gluster vol geo-replication <mastervol> <slavehost>::<slavevol> config access_mount true With this option set, geo-replication does not do the lazy umount and, as a result, all the master and slave volume mounts maintained by geo-replication can be accessed by others. They are also visible in the df output.
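Once the option is set, the auxiliary mounts kept by geo-replication stay mounted and can be checked directly; the session names are the same placeholders used above:
  # gluster volume geo-replication <mastervol> <slavehost>::<slavevol> config access_mount true
  # df -hT | grep fuse.glusterfs
The exact mount paths of the auxiliary mounts vary by version, so filtering df by filesystem type is just a convenient way to spot them.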
2018 Jan 25
0
geo-replication command rsync returned with 3
It is clear that rsync is failing. Are the rsync versions on all master and slave nodes the same? I have seen that cause problems sometimes. -Kotresh HR On Wed, Jan 24, 2018 at 10:29 PM, Dietmar Putz <dietmar.putz at 3qsdn.com> wrote: > Hi all, > I have made some tests on the latest Ubuntu 16.04.3 server image. Upgrades > were disabled... > the configuration was always the
2018 Jan 25
2
geo-replication command rsync returned with 3
Hi Kotresh, thanks for your response... I have made further tests based on ubuntu 16.04.3 (latest upgrades) and gfs 3.12.5 with the following rsync versions: 1. ii rsync 3.1.1-3ubuntu1 2. ii rsync 3.1.1-3ubuntu1.2 3. ii rsync 3.1.2-2ubuntu0.1 In each test all nodes had the same rsync version installed. All
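A quick way to confirm the installed rsync version on every node before a test run, assuming passwordless ssh between the nodes and placeholder host names:
  # for h in node1 node2 node3 node4; do echo -n "$h: "; ssh $h 'rsync --version | head -n1'; done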