search for: kotresh

Displaying 20 results from an estimated 52 matches for "kotresh".

2017 Oct 24
2
active-active georeplication?
...replication [1] could be of interest here. This functionality is available since 3.11 and the current plan is to have it fully supported in a 4.x release. Note that Halo replication is built on existing synchronous replication in Gluster and differs from the current geo-replication implementation. Kotresh's response is spot on for the current geo-replication implementation. Regards, Vijay [1] https://github.com/gluster/glusterfs/issues/199 On Tue, Oct 24, 2017 at 5:13 AM, Kotresh Hiremath Ravishankar < khiremat at redhat.com> wrote: > Hi, > > No, gluster doesn't support ac...
2018 Feb 08
2
georeplication over ssh.
That makes for an interesting problem. I cannot open port 24007 to allow RPC access. On 02/07/2018 11:29 PM, Kotresh Hiremath Ravishankar wrote: > Hi Alvin, > > Yes, geo-replication sync happens via SSH. The server port 24007 is > that of glusterd. > glusterd will be listening on this port and all volume management > communication > happens via RPC. > > Thanks, > Kotresh HR > > O...
2018 Feb 08
0
georeplication over ssh.
CCing the glusterd team for information On Thu, Feb 8, 2018 at 10:02 AM, Alvin Starr <alvin at netvel.net> wrote: > That makes for an interesting problem. > > I cannot open port 24007 to allow RPC access. > > On 02/07/2018 11:29 PM, Kotresh Hiremath Ravishankar wrote: > > Hi Alvin, > > Yes, geo-replication sync happens via SSH. The server port 24007 is that of > glusterd. > glusterd will be listening on this port and all volume management > communication > happens via RPC. > > Thanks, > Kotresh HR > > ...
2017 Oct 24
0
active-active georeplication?
...nterest here. This functionality is > available since 3.11 and the current plan is to have it fully supported in > a 4.x release. > > Note that Halo replication is built on existing synchronous replication in > Gluster and differs from the current geo-replication implementation. > Kotresh's response is spot on for the current geo-replication > implementation. > > Regards, > Vijay > > [1] https://github.com/gluster/glusterfs/issues/199 > > On Tue, Oct 24, 2017 at 5:13 AM, Kotresh Hiremath Ravishankar < > khiremat at redhat.com> wrote: > > > ...
2018 Mar 02
0
geo-replication
Hi Kotresh, I am expecting my hardware to show up next week. My plan is to run gluster version 3.12 on CentOS 7. Has the issue been fixed in version 3.12? Thanks a lot for your help! /Marcus On Fri, Mar 02, 2018 at 05:12:13PM +0530, Kotresh Hiremath Ravishankar wrote: > Hi Marcus, > > There are...
2017 Oct 24
2
active-active georeplication?
Hi everybody, Has glusterfs released a feature named active-active georeplication? If yes, in which version was it released? If no, is there a plan to add this feature?
2017 Oct 24
0
active-active georeplication?
Hi, No, gluster doesn't support active-active geo-replication. It's not planned in the near future. We will let you know when it's planned. Thanks, Kotresh HR On Tue, Oct 24, 2017 at 11:19 AM, atris adam <atris.adam at gmail.com> wrote: > Hi everybody, > > Has glusterfs released a feature named active-active georeplication? If > yes, in which version was it released? If no, is there a plan to add this > feature? > > _______...
2018 Mar 02
1
geo-replication
...de and a dispersed-distributed cluster on the slave side of a geo-replication. Second thought: running dispersed on both sides, is that a problem (master: dispersed-distributed, slave: dispersed-distributed)? Many thanks in advance! Best regards Marcus On Thu, Feb 08, 2018 at 02:57:48PM +0530, Kotresh Hiremath Ravishankar wrote: > Answers inline > > On Thu, Feb 8, 2018 at 1:26 PM, Marcus Pedersén <marcus.pedersen at slu.se> > wrote: > > > Thank you, Kotresh > > > > I talked to your storage colleagues at Open Source Summit in Prague last > > year. >...
2018 Jul 13
2
Upgrade to 4.1.1 geo-replication does not work
Hi Kotresh, Yes, all nodes have the same version, 4.1.1, on both master and slave. All glusterd processes are crashing on the master side. Will send logs tonight. Thanks, Marcus ################ Marcus Pedersén System administrator Interbull Centre ################ Sent from my phone ################ On 13 July 2018 11:2...
2018 Jan 18
1
Deploying geo-replication to local peer
Hi Kotresh, Thanks for the response! After running more tests with this specific geo-replication configuration, I realized that the file extended attributes trusted.gfid and trusted.gfid2path.*** are synced as well during geo-replication. I'm concerned about the attribute trusted.gfid because the value of the attrib...
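For anyone who wants to check the same thing: the gfid can be read either through the mount point or directly on a brick. A minimal sketch, assuming a FUSE mount at /mnt/vol and a brick path at /data/brick (both paths are placeholders):
# read the gfid through the mount point via the virtual xattr
getfattr -n glusterfs.gfid.string /mnt/vol/path/to/file
# read the on-disk gfid as stored on the brick
getfattr -n trusted.gfid -e hex /data/brick/path/to/file
Comparing the two values on master and slave shows whether geo-replication preserved the gfid for a given file.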
2018 Feb 08
0
georeplication over ssh.
Hi Alvin, Yes, geo-replication sync happens via SSH. The server port 24007 is that of glusterd. glusterd will be listening on this port and all volume management communication happens via RPC. Thanks, Kotresh HR On Wed, Feb 7, 2018 at 8:29 PM, Alvin Starr <alvin at netvel.net> wrote: > I am running gluster 3.8.9 and trying to set up a geo-replicated volume > over ssh. > > It looks like the volume create command is trying to directly access the > server over port 24007. > > Th...
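In other words, the ongoing data sync runs over SSH, while the one-time session setup talks to glusterd on 24007. A rough sketch of checking and setting this up, assuming placeholder names mastervol, slavevol and slavehost:
# confirm glusterd is listening on its management port on the slave
ss -tlnp | grep 24007
# set up the session; push-pem distributes the SSH keys used for syncing
gluster volume geo-replication mastervol slavehost::slavevol create push-pem
gluster volume geo-replication mastervol slavehost::slavevol start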
2018 Feb 07
2
georeplication over ssh.
I am running gluster 3.8.9 and trying to setup a geo-replicated volume over ssh, It looks like the volume create command is trying to directly access the server over port 24007. The docs imply that all communications are over ssh. What am I missing? -- Alvin Starr || land: (905)513-7688 Netvel Inc. || Cell: (416)806-0133 alvin at netvel.net
2018 Feb 07
0
add geo-replication "passive" node after node replacement
...ys of new node to slave #gluster vol geo-rep <mastervol> <slavehost>::<slavevol> create push-pem force 3. Stop and start geo-rep But note that while removing a brick and adding a brick, you should make sure the data from the brick being removed is synced to the slave. Thanks, Kotresh HR On Wed, Feb 7, 2018 at 4:21 PM, Stefano Bagnara <lists at bago.org> wrote: > Hi all, > > i had a replica 2 gluster 3.12 between S1 and S2 (1 brick per node) > geo-replicated to S5 where both S1 and S2 were visible in the > geo-replication status and S2 "active"...
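Spelled out as commands, that sequence might look roughly like the sketch below; the volume and host names are placeholders and this assumes the geo-replication session already exists:
# regenerate the common secret pem so the new node's keys are included
gluster system:: execute gsec_create
# recreate the session with force so the refreshed keys are pushed to the slave
gluster volume geo-replication mastervol slavehost::slavevol create push-pem force
gluster volume geo-replication mastervol slavehost::slavevol stop
gluster volume geo-replication mastervol slavehost::slavevol start
gluster volume geo-replication mastervol slavehost::slavevol status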
2018 Feb 07
2
add geo-replication "passive" node after node replacement
Hi all, I had a replica 2 gluster 3.12 between S1 and S2 (1 brick per node) geo-replicated to S5, where both S1 and S2 were visible in the geo-replication status and S2 was "active" while S1 was "passive". I had to replace S1 with S3, so I did an "add-brick replica 3 S3" and then "remove-brick replica 2 S1". Now I again have a replica 2 gluster between S3 and S2
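For reference, a brick replacement like the one described is usually done with add-brick/remove-brick along these lines; a sketch only, with the volume name and brick paths as placeholders:
gluster volume add-brick myvol replica 3 S3:/data/brick
# wait for self-heal to finish copying data onto S3 before dropping S1
gluster volume heal myvol info
gluster volume remove-brick myvol replica 2 S1:/data/brick force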
2018 Jan 25
2
geo-replication command rsync returned with 3
Hi Kotresh, thanks for your response... I have made further tests based on Ubuntu 16.04.3 (latest upgrades) and gfs 3.12.5 with the following rsync versions: 1. ii rsync 3.1.1-3ubuntu1 2. ii rsync 3.1.1-3ubuntu1.2 3. ii rsync ...
2017 Oct 03
1
how to verify bitrot signed file manually?
...ead the file from the mount point without any issue because, thanks to EC, it reads the rest of the available blocks on other nodes. My question is: "file1"'s sha256 value matches the bitrot signature value, but it is still marked as bad by the scrubber daemon. Why is that? On Fri, Sep 29, 2017 at 12:52 PM, Kotresh Hiremath Ravishankar < khiremat at redhat.com> wrote: > Hi Amudhan, > > Sorry for the late response as I was busy with other things. You are right > bitrot uses sha256 for checksum. > If file-1, file-2 are marked bad, the I/O should be errored out with EIO. > If that is not...
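One way to do this comparison manually is to hash the file on the brick and read the bit-rot signature xattr next to it; a sketch, assuming a brick path of /data/brick (the signature xattr is only visible on the brick, not through the mount, and its value embeds the sha256 hash after a short header):
sha256sum /data/brick/path/to/file1
getfattr -n trusted.bit-rot.signature -e hex /data/brick/path/to/file1
# scrub status gives the volume-wide view of files flagged as bad
gluster volume bitrot myvol scrub status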
2018 Jan 17
0
Deploying geo-replication to local peer
...with this issue? > > Thanks for any information! > > Viktor Nosov > > > _______________________________________________ > Gluster-users mailing list > Gluster-users at gluster.org > http://lists.gluster.org/mailman/listinfo/gluster-users > -- Thanks and Regards, Kotresh H R
2018 Feb 06
0
geo-replication command rsync returned with 3
...azy umount and as a result, all the master and slave volume mounts maintained by geo-replication can be accessed by others. They are also visible in df output. There might be cases where the mount points do not get cleaned up when a worker goes faulty and comes back. These need manual cleaning. Thanks, Kotresh HR On Tue, Feb 6, 2018 at 12:37 AM, Florian Weimer <fweimer at redhat.com> wrote: > On 02/05/2018 01:33 PM, Florian Weimer wrote: > > Do you have strace output going further back, at least to the preceding >> getcwd call? It would be interesting to see which path the kernel...
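A hedged example of that manual cleanup: list the leftover geo-replication aux mounts and lazily unmount them. The mount-point name shown below is only illustrative of how these aux mounts typically appear; always check the df output before unmounting anything:
# look for geo-replication (gsyncd) aux mounts left behind
df -h | grep -i gsyncd
umount -l /tmp/gsyncd-aux-mount-XXXXXX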
2018 Jan 25
0
geo-replication command rsync returned with 3
It is clear that rsync is failing. Are the rsync versions on all master and slave nodes the same? I have seen that cause problems sometimes. -Kotresh HR On Wed, Jan 24, 2018 at 10:29 PM, Dietmar Putz <dietmar.putz at 3qsdn.com> wrote: > Hi all, > I have made some tests on the latest Ubuntu 16.04.3 server image. Upgrades > were disabled... > the configuration was always the same...a distributed replicated volume on > 4 VM...
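A quick way to confirm that is to check the rsync version on every master and slave node in one pass; a trivial sketch with placeholder host names:
# run on each node, or loop over them via ssh
for h in master1 master2 slave1; do ssh "$h" 'rsync --version | head -1'; done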
2018 Jan 16
2
Deploying geo-replication to local peer
Hi, I'm looking for a glusterfs feature that can be used to transform data between volumes of different types provisioned on the same nodes. It could be, for example, a transformation from a disperse to a distributed volume. One possible option is to invoke geo-replication between volumes. It seems it works properly. But I'm concerned about a requirement from the Administration Guide for Red Hat Gluster
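A sketch of that approach under stated assumptions: two volumes, srcvol and dstvol, live in the same trusted pool and the slave volume is addressed via one of the shared nodes (here written as localhost, a placeholder); force is typically required when master and slave share nodes:
gluster volume geo-replication srcvol localhost::dstvol create push-pem force
gluster volume geo-replication srcvol localhost::dstvol start
gluster volume geo-replication srcvol localhost::dstvol status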