similar to: Glusterfs geo-replication security

Displaying 20 results from an estimated 20000 matches similar to: "Glusterfs geo-replication security"

2023 Mar 21
1
can't set up geo-replication: can't fetch slave details
Hi, is this a rare problem? Cheers, Kingsley. On Tue, 2023-03-14 at 19:31 +0000, Kingsley Tart wrote: > Hi, > > using Gluster 9.2 on debian 11 I'm trying to set up geo replication. > I am following this guide: > > https://docs.gluster.org/en/main/Administrator-Guide/Geo-Replication/#password-less-ssh > > I have a volume called "ansible" which is only a
2023 Mar 14
1
can't set up geo-replication: can't fetch slave details
Hi, using Gluster 9.2 on debian 11 I'm trying to set up geo replication. I am following this guide: https://docs.gluster.org/en/main/Administrator-Guide/Geo-Replication/#password-less-ssh I have a volume called "ansible" which is only a small volume and seemed like an ideal test case. Firstly, for a bit of feedback (this isn't my issue as I worked around it) I had this
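For reference, the password-less SSH step from that guide can be sketched roughly as follows. This is a hedged sketch, not the poster's exact commands: the hostname secondary.example.com and the secondary volume name ansible-remote are placeholders I introduced; only the primary volume name "ansible" comes from the message.

```shell
# On the primary node from which geo-rep will be created
# (secondary.example.com and ansible-remote are placeholder names):
ssh-keygen -t rsa -N "" -f /var/lib/glusterd/geo-replication/secret.pem
ssh-copy-id -i /var/lib/glusterd/geo-replication/secret.pem.pub root@secondary.example.com

# Create the session, distributing the pem key to the secondary nodes,
# then start it:
gluster volume geo-replication ansible root@secondary.example.com::ansible-remote \
    create push-pem
gluster volume geo-replication ansible root@secondary.example.com::ansible-remote start
```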
2014 Jun 27
1
geo-replication status faulty
Venky Shankar, can you follow up on these questions? I too have this issue and cannot resolve the reference to '/nonexistent/gsyncd'. As Steve mentions, the nonexistent reference in the logs looks like the culprit especially seeing that the ssh command trying to be run is printed on an earlier line with the incorrect remote path. I have followed the configuration steps as documented in
2014 Sep 30
1
geo-replication 3.5.2 not working on Ubuntu 12.0.4 - transport.address-family not specified
Hi, I am testing geo-replication 3.5.2 by following the instruction from https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_distributed_geo_rep.md All commands are executed successfully without returning any error, but no replication is done from master to the slave. Enclosed please find the logs when starting the geo-replication volume. At the end of the log,
2018 Jan 18
1
Deploying geo-replication to local peer
Hi Kotresh, Thanks for the response! After running more tests with this specific geo-replication configuration I realized that the file extended attributes trusted.gfid and trusted.gfid2path.*** are synced as well during geo-replication. I'm concerned about the attribute trusted.gfid because the value of that attribute has to be unique within a glusterfs cluster. But this is not the case in my tests. File on
2012 Jan 03
1
geo-replication loops
Hi, I was thinking about a common (I hope!) use case of Glusterfs geo-replication. Imagine 3 different facilities, each having their own glusterfs deployment: * central-office * remote-office1 * remote-office2 Every client mounts their local glusterfs deployment and writes files (i.e.: user A deposits a PDF document on remote-office2), and it gets replicated to the central-office glusterfs volume as soon
2013 Mar 20
2
Geo-replication broken in 3.4 alpha2?
Dear all, I'm running GlusterFS 3.4 alpha2 together with oVirt 3.2. This is solely a test system and it doesn't have much data or anything important in it. Currently it has only 2 VMs running and disk usage is around 15 GB. I have been trying to set up geo-replication for disaster recovery testing. For geo-replication I did the following: All machines are running CentOS 6.4 and using
2018 Jan 22
1
geo-replication initial setup with existing data
2023 Nov 03
0
Gluster Geo replication
Hi, You simply need to enable port 22 on the geo-replication slave side. This will allow the master node to establish an SSH connection with the slave server and transfer data securely over SSH. Thanks, Anant ________________________________ From: Gluster-users <gluster-users-bounces at gluster.org> on behalf of dev devops <dev.devops12 at gmail.com> Sent: 31 October 2023 3:10 AM
2023 Nov 03
1
Gluster Geo replication
While creating the Geo-replication session, Gluster mounts the secondary volume to see the available size. To mount the secondary volume on the primary, ports 24007 and 49152-49664 of the secondary volume need to be accessible from the primary (only on the node from which the geo-rep create command is executed). This needs to be changed to use SSH (bug). Alternatively, use the georep setup tool
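As a rough sketch of opening those ports on the secondary side — assuming firewalld, which the original message does not mention, and using the brick port range quoted above, which varies per installation:

```shell
# On each secondary node (firewalld assumed; adjust the brick port
# range to match your deployment):
firewall-cmd --permanent --add-port=24007/tcp
firewall-cmd --permanent --add-port=49152-49664/tcp
firewall-cmd --reload
```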
2017 Aug 16
0
Geo replication faulty-extended attribute not supported by the backend storage
Hi, I have a Glusterfs (v3.11.2-1) geo replication master-slave setup between two sites. The idea is to provide an off-site backup for my storage. When I start the session, I get the following message: [2017-08-15 20:07:41.110635] E [fuse-bridge.c:3484:fuse_xattr_cbk] 0-glusterfs-fuse: extended attribute not supported by the backend storage Then it starts syncing the data but it stops at the
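One quick way to check whether a brick's backend filesystem supports the extended attributes gluster needs is to set and read back a user xattr on a scratch file. The brick path below is a placeholder, and this assumes the attr tools (setfattr/getfattr) are installed:

```shell
# On the slave brick filesystem: set a test xattr and read it back.
# If either command fails, the backend likely lacks xattr support.
touch /bricks/brick1/xattr-test
setfattr -n user.geo-rep-test -v ok /bricks/brick1/xattr-test
getfattr --only-values -n user.geo-rep-test /bricks/brick1/xattr-test
rm /bricks/brick1/xattr-test
```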
2018 Mar 13
0
trashcan on dist. repl. volume with geo-replication
Hi Dietmar, I am trying to understand the problem and have a few questions. 1. Is the trashcan enabled only on the master volume? 2. Is the 'rm -rf' done on the master volume synced to the slave? 3. If the trashcan is disabled, does the issue go away? The geo-rep error just says that it failed to create the directory "Oracle_VM_VirtualBox_Extension" on the slave. Usually this would be because of gfid
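To check for that, the gfid of the entry can be compared on the backend brick paths of master and slave. The brick path below is a placeholder; only the directory name comes from the thread:

```shell
# Run against the backend brick path (not the fuse mount) on both
# master and slave; the two trusted.gfid values must match:
getfattr -n trusted.gfid -e hex /bricks/brick1/Oracle_VM_VirtualBox_Extension
```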
2011 May 12
1
geo-replication issue
2018 Jan 19
0
geo-replication command rsync returned with 3
Fwiw, rsync error 3 is: "Errors selecting input/output files, dirs" On January 19, 2018 7:36:18 AM PST, Dietmar Putz <dietmar.putz at 3qsdn.com> wrote: >Dear All, > >we are running a dist. repl. volume on 4 nodes including >geo-replication >to another location. >the geo-replication was running fine for months. >since 18th jan. the geo-replication is faulty.
2011 Jul 26
1
Error during geo-replication : Unable to get <uuid>.xtime attr
Hi, I got a problem during geo-replication: The master Gluster server log has the following error every second: [2011-07-26 04:20:50.618532] W [libxlator.c:128:cluster_markerxtime_cbk] 0-flvol-dht: Unable to get <uuid>.xtime attr While the slave log has the error every a few seconds: [2011-07-26 04:25:08.77133] E [stat-prefetch.c:695:sp_remove_caches_from_all_fds_opened]
2023 Oct 31
2
Gluster Geo replication
Hi All, What are the ports needed to be opened for Gluster Geo replication? We have a very closed setup. I could gather the info below; do all of these ports need to be open on master and slave for inter-communication, or would just 22 work, since it's using rsync over ssh for the actual data push? * Port 22 (TCP): Used by SSH for secure data communication in Geo-replication. * Port 24007
2018 Jan 17
0
Deploying geo-replication to local peer
Hi Viktor, Answers inline On Wed, Jan 17, 2018 at 3:46 AM, Viktor Nosov <vnosov at stonefly.com> wrote: > Hi, > > I'm looking for glusterfs feature that can be used to transform data > between > volumes of different types provisioned on the same nodes. > It could be, for example, transformation from disperse to distributed > volume. > The possible option is to
2012 Mar 20
1
issues with geo-replication
Hi all. I'm looking to see if anyone can tell me this is already working for them or if they wouldn't mind performing a quick test. I'm trying to set up a geo-replication instance on 3.2.5 from a local volume to a remote directory. This is the command I am using: gluster volume geo-replication myvol ssh://root at remoteip:/data/path start I am able to perform a geo-replication
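For anyone finding this thread later: the one-shot ssh:// form quoted above is the old 3.2.x syntax. On later releases the secondary must itself be a gluster volume, and the session is created before being started. A hedged sketch, reusing the poster's names except remotevol, which is a placeholder:

```shell
# 3.2.x form quoted above (secondary could be a plain directory):
gluster volume geo-replication myvol ssh://root@remoteip:/data/path start

# Later releases: secondary must be a gluster volume; create, then start:
gluster volume geo-replication myvol root@remoteip::remotevol create push-pem
gluster volume geo-replication myvol root@remoteip::remotevol start
```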
2018 Apr 23
0
Geo-replication faulty
Hi all, I set up my gluster cluster with geo-replication a couple of weeks ago and everything worked fine! Today I discovered that one of the master nodes' geo-replication status is faulty. On the master side: Distributed-replicated 2 x (2 + 1) = 6 On the slave side: Replicated 1 x (2 + 1) = 3 After checking the logs I see that the master node has the following error: OSError: Permission denied Looking at
2018 Mar 12
2
trashcan on dist. repl. volume with geo-replication
Hello, in regard to https://bugzilla.redhat.com/show_bug.cgi?id=1434066 I have run into another issue when using the trashcan feature on a dist. repl. volume running geo-replication. (gfs 3.12.6 on ubuntu 16.04.4) e.g. removing an entire directory with subfolders: tron at gl-node1:/myvol-1/test1/b1$ rm -rf * afterwards listing the files in the trashcan: tron at gl-node1:/myvol-1/test1$