Displaying 20 results from an estimated 1000 matches similar to: "Geo-Replication Rsync Command, bug?"
2024 Feb 05
1
Graceful shutdown doesn't stop all Gluster processes
Hello Everyone,
I am using GlusterFS 9.4, and whenever we use the systemctl command to stop the Gluster server, it leaves many Gluster processes running. So, I just want to check how to shut down the Gluster server in a graceful manner.
Is there any specific sequence or trick I need to follow? Currently, I am using the following command:
[root@master2 ~]# systemctl stop glusterd.service
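For reference, stopping glusterd alone leaves brick and self-heal daemons running by design; a hedged sketch of a fuller shutdown, assuming your package ships the usual helper script (the script path may differ per distro):
systemctl stop glusterd.service
/usr/share/glusterfs/scripts/stop-all-gluster-processes.sh
# last resort, only if nothing on this node still needs gluster mounts:
# pkill glusterfs; pkill glusterfsd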
2017 Oct 06
0
Gluster geo replication volume is faulty
On 09/29/2017 09:30 PM, rick sanchez wrote:
> I am trying to set up geo-replication between two gluster volumes
>
> I have set up two replica 2 arbiter 1 volumes with 9 bricks
>
> [root@gfs1 ~]# gluster volume info
> Volume Name: gfsvol
> Type: Distributed-Replicate
> Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306
> Status: Started
> Snapshot Count: 0
> Number
2017 Sep 29
1
Gluster geo replication volume is faulty
I am trying to set up geo-replication between two gluster volumes
I have set up two replica 2 arbiter 1 volumes with 9 bricks
[root@gfs1 ~]# gluster volume info
Volume Name: gfsvol
Type: Distributed-Replicate
Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: gfs2:/gfs/brick1/gv0
Brick2:
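For context, once both master and slave volumes exist, a session is typically created and started as in this hedged sketch (slave host gfs4 and slave volume gfsvol_rep are placeholders, not taken from this thread):
gluster system:: execute gsec_create
gluster volume geo-replication gfsvol gfs4::gfsvol_rep create push-pem
gluster volume geo-replication gfsvol gfs4::gfsvol_rep start
gluster volume geo-replication gfsvol gfs4::gfsvol_rep status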
2018 Jan 19
0
geo-replication command rsync returned with 3
Fwiw, rsync error 3 is:
"Errors selecting input/output files, dirs"
On January 19, 2018 7:36:18 AM PST, Dietmar Putz <dietmar.putz at 3qsdn.com> wrote:
>Dear All,
>
>we are running a dist. repl. volume on 4 nodes including
>geo-replication
>to another location.
>the geo-replication was running fine for months.
>since 18th jan. the geo-replication is faulty.
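A hedged way to narrow this down is to pull the exact rsync invocation out of the gsyncd log and re-run it by hand; the volume name mastervol and the log path below are examples and vary between releases:
grep rsync /var/log/glusterfs/geo-replication/mastervol/*.log | tail -n 5
# exit code 3 means rsync could not select its input/output files or dirs,
# so re-running the logged command with -vv usually names the offending path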
2018 Jan 19
2
geo-replication command rsync returned with 3
Dear All,
we are running a dist. repl. volume on 4 nodes including geo-replication
to another location.
the geo-replication was running fine for months.
since 18th jan. the geo-replication is faulty. the geo-rep log on the
master shows the following error in a loop, while the logs on the slave just
show 'I' (informational) messages...
somewhat suspicious are the frequent 'shutting down connection'
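If your version exposes the rsync-options setting, a hedged way to get more detail from rsync itself is to raise its verbosity for the session (volume and slave names are placeholders):
gluster volume geo-replication mastervol slavehost::slavevol config rsync-options "-vv"
# the extra rsync output then lands in the gsyncd log on the next sync attempt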
2014 Jun 27
1
geo-replication status faulty
Venky Shankar, can you follow up on these questions? I too have this issue and cannot resolve the reference to '/nonexistent/gsyncd'.
As Steve mentions, the nonexistent reference in the logs looks like the culprit, especially since the ssh command being run is printed on an earlier line with the incorrect remote path.
I have followed the configuration steps as documented in
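For what it's worth, the remedy usually reported for the /nonexistent/gsyncd symptom is pointing the session at the slave's real gsyncd path; a hedged sketch with placeholder volume and host names (the binary path depends on the distro):
gluster volume geo-replication mastervol slavehost::slavevol config remote-gsyncd /usr/libexec/glusterfs/gsyncd
gluster volume geo-replication mastervol slavehost::slavevol stop
gluster volume geo-replication mastervol slavehost::slavevol start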
2018 Jan 25
0
geo-replication command rsync returned with 3
It is clear that rsync is failing. Are the rsync versions on all master
and slave nodes the same?
I have seen that cause problems sometimes.
-Kotresh HR
On Wed, Jan 24, 2018 at 10:29 PM, Dietmar Putz <dietmar.putz at 3qsdn.com>
wrote:
> Hi all,
> i have made some tests on the latest Ubuntu 16.04.3 server image. Upgrades
> were disabled...
> the configuration was always the
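A quick hedged sketch of checking that the rsync versions match across nodes (hostnames are placeholders):
for h in master1 master2 slave1 slave2; do
    printf '%s: ' "$h"; ssh "$h" 'rsync --version | head -n 1'
done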
2018 Jul 13
2
Upgrade to 4.1.1 geo-replication does not work
Hi Kotresh,
Yes, all nodes have the same version 4.1.1 both master and slave.
All glusterd are crashing on the master side.
Will send logs tonight.
Thanks,
Marcus
################
Marcus Pedersén
Systemadministrator
Interbull Centre
################
Sent from my phone
################
On 13 July 2018 at 11:28, Kotresh Hiremath Ravishankar <khiremat at redhat.com> wrote:
Hi Marcus,
Is the
2020 Oct 05
0
Getting error code 12 when using rsync with ssh in RHEL 8
Hi SMEs,
I have been working on a geo replication solution and it uses rsync
internally. Recently, when I was running the utility in RHEL 8, I started
seeing the following message:
Popen: command returned error cmd=rsync -aR0 --inplace --files-from=-
--super --stats --numeric-ids --no-implied-dirs --existing --xattrs --acls
--ignore-missing-args . -e ssh -oPasswordAuthentication=no
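rsync exit code 12 is "error in rsync protocol data stream", which in this context usually points at the ssh hop or at a missing/incompatible rsync on the remote side. A hedged manual check (user and host are placeholders):
ssh -oPasswordAuthentication=no geouser@slavehost 'rsync --version | head -n 1'
rsync -a --dry-run -e 'ssh -oPasswordAuthentication=no' /etc/hostname geouser@slavehost:/tmp/
echo $?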
2018 Jan 24
4
geo-replication command rsync returned with 3
Hi all,
i have made some tests on the latest Ubuntu 16.04.3 server image.
Upgrades were disabled...
the configuration was always the same... a distributed replicated volume
on 4 VMs with geo-replication to a dist. repl. volume on 4 VMs.
i started with 3.7.20, upgraded to 3.8.15, then 3.10.9, then 3.12.5. After
each upgrade i tested the geo-replication, which worked well each time.
then i
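For anyone repeating such tests, a hedged sketch of a quick per-upgrade check (volume and slave names are placeholders):
gluster volume geo-replication mastervol slavehost::slavevol status detail
# setting a checkpoint and watching it complete confirms changes really flow
gluster volume geo-replication mastervol slavehost::slavevol config checkpoint now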
2012 Mar 20
1
issues with geo-replication
Hi all. I'm looking to see if anyone can tell me this is already
working for them or if they wouldn't mind performing a quick test.
I'm trying to set up a geo-replication instance on 3.2.5 from a local
volume to a remote directory. This is the command I am using:
gluster volume geo-replication myvol ssh://root@remoteip:/data/path start
I am able to perform a geo-replication
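When a session started this way misbehaves, a hedged first step is to check its state and its log; the log directory below is an assumption and has moved around between releases:
gluster volume geo-replication myvol ssh://root@remoteip:/data/path status
less /var/log/glusterfs/geo-replication/myvol/*.log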
2018 Jan 29
0
geo-replication command rsync returned with 3
Hi all,
by downgrading
ii  libc6:amd64  2.23-0ubuntu10
to
ii  libc6:amd64  2.23-0ubuntu3
the problem was solved, at least in the gfs test environment running gfs
3.13.2 and 3.7.20 and in our production environment with 3.7.18.
possibly it helps someone...
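For anyone reproducing this, a hedged sketch of pinning the downgraded package so unattended upgrades do not pull it forward again (version string as quoted above; downgrading libc6 affects the whole system, so test first):
apt-get install libc6=2.23-0ubuntu3
apt-mark hold libc6
# revert later with: apt-mark unhold libc6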
best regards
Dietmar
On 25.01.2018 at 14:06, Dietmar Putz wrote:
>
> Hi Kotresh,
>
> thanks for your response...
>
> i
2018 Jan 25
2
geo-replication command rsync returned with 3
Hi Kotresh,
thanks for your response...
i have made further tests based on ubuntu 16.04.3 (latest upgrades) and
gfs 3.12.5 with the following rsync versions:
1. ii  rsync  3.1.1-3ubuntu1
2. ii  rsync  3.1.1-3ubuntu1.2
3. ii  rsync  3.1.2-2ubuntu0.1
in each test all nodes had the same rsync version installed. all
2018 Jan 22
1
geo-replication initial setup with existing data
2018 Feb 07
1
geo-replication command rsync returned with 3
Hi,
Kotresh's workaround works for me. But before I tried it, I created some strace logs for Florian.
setup: 2 VMs (192.168.222.120 master, 192.168.222.121 slave), both with a volume named vol, on Ubuntu 16.04.3, glusterfs 3.13.2, rsync 3.1.1.
Best regards,
Tino
root@master:~# cat /usr/bin/rsync
#!/bin/bash
strace -o /tmp/rsync.trace -ff /usr/bin/rsynco "$@"
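For context, a hedged sketch of how such a wrapper is typically dropped in place: the real binary is first renamed to /usr/bin/rsynco, which is what the script above calls:
mv /usr/bin/rsync /usr/bin/rsynco
printf '#!/bin/bash\nstrace -o /tmp/rsync.trace -ff /usr/bin/rsynco "$@"\n' > /usr/bin/rsync
chmod +x /usr/bin/rsync
# move the original back once the traces are collected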
One of the traces
2024 Sep 01
1
geo-rep will not initialize
FYI, I will be traveling for the next week, and may not see email much
until then.
Your questions...
On 8/31/24 04:59, Strahil Nikolov wrote:
> One silly question: Did you try adding some files on the source volume
> after the georep was created ?
Yes. I wondered that, too, whether geo-rep would not start simply
because there was nothing to do. But yes, there are a few files created
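A hedged sketch of the kind of check being discussed here, creating fresh data on the master mount and watching the session react (mount point, volume and slave names are placeholders):
touch /mnt/mastervol/georep-probe-$(date +%s)
gluster volume geo-replication mastervol slavehost::slavevol status detail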
2018 Mar 13
0
trashcan on dist. repl. volume with geo-replication
Hi Dietmar,
I am trying to understand the problem and have a few questions.
1. Is trashcan enabled only on master volume?
2. Does the 'rm -rf' done on the master volume get synced to the slave?
3. If trashcan is disabled, the issue goes away?
The geo-rep error just says that it failed to create the directory
"Oracle_VM_VirtualBox_Extension" on the slave.
Usually this would be because of gfid
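Entry-creation failures like this are commonly tracked down by comparing the parent directory's gfid on a master brick and a slave brick; a hedged sketch with example brick paths:
getfattr -n trusted.gfid -e hex /bricks/master-brick1/test1/b1
getfattr -n trusted.gfid -e hex /bricks/slave-brick1/test1/b1
# differing values indicate a gfid mismatch that gsyncd cannot fix on its own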
2017 Aug 16
0
Geo replication faulty-extended attribute not supported by the backend storage
Hi,
I have a Glusterfs (v3.11.2-1) geo replication master-slave setup between two sites. The idea is to provide an off-site backup for my storage. When I start the session, I get the following message:
[2017-08-15 20:07:41.110635] E [fuse-bridge.c:3484:fuse_xattr_cbk] 0-glusterfs-fuse: extended attribute not supported by the backend storage
Then it starts syncing the data but it stops at the
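One common cause of that error is a brick filesystem that rejects extended attributes; a hedged spot check on each brick (paths are examples):
setfattr -n user.georep.test -v 1 /gfs/brick1/gv0 && getfattr -n user.georep.test /gfs/brick1/gv0
setfattr -x user.georep.test /gfs/brick1/gv0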
2018 Mar 12
2
trashcan on dist. repl. volume with geo-replication
Hello,
in regard to
https://bugzilla.redhat.com/show_bug.cgi?id=1434066
i have run into another issue when using the trashcan feature on a
dist. repl. volume running geo-replication (gfs 3.12.6 on ubuntu 16.04.4).
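For readers unfamiliar with the feature, trashcan is a per-volume option; a hedged sketch of how it is typically switched on (the size limit is an arbitrary example, the volume name follows the mount path shown below):
gluster volume set myvol-1 features.trash on
gluster volume set myvol-1 features.trash-max-filesize 200MB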
the issue shows up e.g. when removing an entire directory with subfolders:
tron@gl-node1:/myvol-1/test1/b1$ rm -rf *
afterwards listing files in the trashcan :
tron@gl-node1:/myvol-1/test1$
2018 Mar 13
1
trashcan on dist. repl. volume with geo-replication
Hi Kotresh,
thanks for your response...
answers inside...
best regards
Dietmar
On 13.03.2018 at 06:38, Kotresh Hiremath Ravishankar wrote:
> Hi Dietmar,
>
> I am trying to understand the problem and have a few questions.
>
> 1. Is trashcan enabled only on master volume?
no, trashcan is also enabled on slave. settings are the same as on
master but trashcan on slave is complete
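If the slave-side trashcan turns out to be involved, a hedged way to test question 3 above is to disable the feature on the slave volume only and retry the removal (the slave volume name is a placeholder):
gluster volume set myvol-1-slave features.trash off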