Displaying 20 results from an estimated 200 matches similar to: "geo-replication command rsync returned with 3"
2018 Jan 19
0
geo-replication command rsync returned with 3
Fwiw, rsync error 3 is:
"Errors selecting input/output files, dirs"
On January 19, 2018 7:36:18 AM PST, Dietmar Putz <dietmar.putz at 3qsdn.com> wrote:
>Dear All,
>
>we are running a dist. repl. volume on 4 nodes including
>geo-replication to another location.
>the geo-replication was running fine for months.
>since 18th Jan the geo-replication has been faulty.
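As a rough pointer (volume and slave names below are placeholders, not taken from this report), the failing session and the rsync errors it logs can usually be inspected with:
gluster volume geo-replication <mastervol> <slavehost>::<slavevol> status detail
grep -i rsync /var/log/glusterfs/geo-replication/<mastervol>/*.log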
2018 Jan 24
4
geo-replication command rsync returned with 3
Hi all,
i have made some tests on the latest Ubuntu 16.04.3 server image.
Upgrades were disabled...
the configuration was always the same... a distributed replicated volume
on 4 VMs with geo-replication to a dist. repl. volume on 4 VMs.
i started with 3.7.20, upgraded to 3.8.15, to 3.10.9, to 3.12.5. After
each upgrade i tested the geo-replication, which worked well every time.
then i
2018 Jan 25
2
geo-replication command rsync returned with 3
Hi Kotresh,
thanks for your response...
i have made further tests based on ubuntu 16.04.3 (latest upgrades) and
gfs 3.12.5 with the following rsync versions:
1. ii  rsync  3.1.1-3ubuntu1
2. ii  rsync  3.1.1-3ubuntu1.2
3. ii  rsync  3.1.2-2ubuntu0.1
in each test all nodes had the same rsync version installed. all
2018 Jan 25
0
geo-replication command rsync returned with 3
It is clear that rsync is failing. Are the rsync versions on all master
and slave nodes the same?
I have seen that cause problems sometimes.
-Kotresh HR
On Wed, Jan 24, 2018 at 10:29 PM, Dietmar Putz <dietmar.putz at 3qsdn.com>
wrote:
> Hi all,
> i have made some tests on the latest Ubuntu 16.04.3 server image. Upgrades
> were disabled...
> the configuration was always the
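For illustration only (hostnames are placeholders), the installed rsync version on every master and slave node can be compared with a quick loop:
for h in master1 master2 slave1 slave2; do ssh $h 'hostname; rsync --version | head -1'; done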
2018 Jan 29
0
geo-replication command rsync returned with 3
Hi all,
by downgrading
ii  libc6:amd64  2.23-0ubuntu10
to
ii  libc6:amd64  2.23-0ubuntu3
the problem was solved, at least in the gfs test environment running gfs
3.13.2 and 3.7.20, and on our production environment with 3.7.18.
possibly it helps someone...
best regards
Dietmar
On 25.01.2018 at 14:06, Dietmar Putz wrote:
>
> Hi Kotresh,
>
> thanks for your response...
>
> i
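For reference, and assuming the older package is still available from the archive or a local mirror, the downgrade and hold on Ubuntu 16.04 would look roughly like:
apt-get install --allow-downgrades libc6=2.23-0ubuntu3
apt-mark hold libc6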
2018 Mar 12
2
trashcan on dist. repl. volume with geo-replication
Hello,
in regard to
https://bugzilla.redhat.com/show_bug.cgi?id=1434066
i have been faced with another issue when using the trashcan feature on a
dist. repl. volume running geo-replication (gfs 3.12.6 on ubuntu 16.04.4),
e.g. when removing an entire directory with subfolders:
tron at gl-node1:/myvol-1/test1/b1$ rm -rf *
afterwards listing files in the trashcan :
tron at gl-node1:/myvol-1/test1$
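For context, trashcan is controlled per volume; a minimal sketch of the relevant options, assuming the volume is named myvol-1 as the mount point suggests, and with an illustrative size limit:
gluster volume set myvol-1 features.trash on
gluster volume set myvol-1 features.trash-max-filesize 200MB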
2018 Mar 13
0
trashcan on dist. repl. volume with geo-replication
Hi Dietmar,
I am trying to understand the problem and have a few questions.
1. Is trashcan enabled only on the master volume?
2. Does the 'rm -rf' done on the master volume get synced to the slave?
3. If trashcan is disabled, does the issue go away?
The geo-rep error just says that it failed to create the directory
"Oracle_VM_VirtualBox_Extension" on the slave.
Usually this would be because of gfid
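A gfid mismatch of the kind hinted at here can be checked directly on the bricks; the brick paths below are placeholders:
getfattr -n trusted.gfid -e hex /<master-brick-path>/Oracle_VM_VirtualBox_Extension
getfattr -n trusted.gfid -e hex /<slave-brick-path>/Oracle_VM_VirtualBox_Extension
If both exist, the hex values should match.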
2018 Mar 13
1
trashcan on dist. repl. volume with geo-replication
Hi Kotresh,
thanks for your response...
answers inside...
best regards
Dietmar
On 13.03.2018 at 06:38, Kotresh Hiremath Ravishankar wrote:
> Hi Dietmar,
>
> I am trying to understand the problem and have a few questions.
>
> 1. Is trashcan enabled only on the master volume?
no, trashcan is also enabled on slave. settings are the same as on
master but trashcan on slave is complete
2017 Jun 28
2
setting gfid on .trashcan/... failed - total outage
Hello,
recently we had two times a partial gluster outage followed by a total
outage of all four nodes. Looking into the gluster mailing list i found
a very similar case in
http://lists.gluster.org/pipermail/gluster-users/2016-June/027124.html
but i'm not sure if this issue is fixed...
even though this outage happened on glusterfs 3.7.18, which gets no more updates
since ~.20, i would kindly ask
2017 Jun 29
0
setting gfid on .trashcan/... failed - total outage
On Wed, 2017-06-28 at 14:42 +0200, Dietmar Putz wrote:
> Hello,
>
> recently we had two times a partial gluster outage followed by a total
> outage of all four nodes. Looking into the gluster mailing list i found
> a very similar case in
> http://lists.gluster.org/pipermail/gluster-users/2016-June/027124.html
If you are talking about a crash happening on bricks, were you
2017 Jun 29
1
setting gfid on .trashcan/... failed - total outage
Hello Anoop,
thank you for your reply....
answers inside...
best regards
Dietmar
On 29.06.2017 10:48, Anoop C S wrote:
> On Wed, 2017-06-28 at 14:42 +0200, Dietmar Putz wrote:
>> Hello,
>>
>> recently we had two times a partial gluster outage followed by a total
>> outage of all four nodes. Looking into the gluster mailing list i found
>> a very similar case
2017 Nov 25
1
How to read geo replication timestamps from logs
Folks, need help interpreting this message from my geo rep logs for my
volume mojo.
ssh%3A%2F%2Froot%40173.173.241.2%3Agluster%3A%2F%2F127.0.0.1%3Amojo-remote.log:[2017-11-22 00:59:40.610574] I [master(/bricks/lsi/mojo):1125:crawl] _GMaster: slave's time: (1511312352, 0)
The epoch of 1511312352 is Wednesday, November 22, 2017 12:59:12 AM GMT.
The clocks are using the same ntp stratum and
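For the record, the epoch in the tuple can be converted on the command line:
date -u -d @1511312352
which should print the slave's crawl time in UTC.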
2018 Mar 06
1
geo replication
Hi,
Having problems with geo-replication on glusterfs 3.12.6 / Ubuntu 16.04.
I can see a "master volinfo unavailable" in the master logfile.
Any ideas?
Master:
Status of volume: testtomcat
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick gfstest07:/gfs/testtomcat/mount 49153 0
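As a first step (slave host and volume are placeholders), the session's configuration and status can be dumped for inspection with:
gluster volume geo-replication testtomcat <slavehost>::<slavevol> config
gluster volume geo-replication testtomcat <slavehost>::<slavevol> status detail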
2017 Sep 29
1
Gluster geo replication volume is faulty
I am trying to set up geo-replication between two gluster volumes
I have set up two replica 2 arbiter 1 volumes with 9 bricks
[root at gfs1 ~]# gluster volume info
Volume Name: gfsvol
Type: Distributed-Replicate
Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: gfs2:/gfs/brick1/gv0
Brick2:
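For context, the usual sequence for bringing such a session up is roughly the following; the slave host and slave volume names are placeholders, not taken from the post:
gluster system:: execute gsec_create
gluster volume geo-replication gfsvol <slavehost>::<slavevol> create push-pem
gluster volume geo-replication gfsvol <slavehost>::<slavevol> start
gluster volume geo-replication gfsvol <slavehost>::<slavevol> status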
2017 Oct 06
0
Gluster geo replication volume is faulty
On 09/29/2017 09:30 PM, rick sanchez wrote:
> I am trying to set up geo-replication between two gluster volumes
>
> I have set up two replica 2 arbiter 1 volumes with 9 bricks
>
> [root at gfs1 ~]# gluster volume info
> Volume Name: gfsvol
> Type: Distributed-Replicate
> Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306
> Status: Started
> Snapshot Count: 0
> Number
2017 Jul 28
0
/var/lib/misc/glusterfsd growing and using up space on OS disk
Hello,
Today while freeing up some space on my OS disk I just discovered that there is a /var/lib/misc/glusterfsd directory which seems to save data related to geo-replication.
In particular there is a hidden sub-directory called ".processed" as you can see here:
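To get a rough idea of how much space it consumes (the layout underneath varies per geo-replication session):
du -sh /var/lib/misc/glusterfsd
find /var/lib/misc/glusterfsd -type d -name .processed -exec du -sh {} +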
2011 Jun 28
2
Issue with Gluster Quota
2011 Sep 04
1
mrtg 2.16.2 ipv6 on centos 6
Hi,
i'm running CentOS 6.0 on my server and installed mrtg from the
rpm package mrtg-2.16.2.
I also installed the dependent packages perl-IO-Socket-INET6 and
perl-Socket6.
mrtg works fine with IPv4 addresses. When i specify a Target by
IPv6 address (or a hostname resolving to a v6 address), mrtg fails.
Here i have a small sample config for v4 which is working:
LogDir: /tmp
ThreshDir: /tmp
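As a sketch only (address, community and counter name are placeholders), an IPv6 target in MRTG generally needs the global EnableIPv6 option and the numeric address written in square brackets:
EnableIPv6: yes
Target[v6test]: 2:public@[2001:db8::1]
MaxBytes[v6test]: 1250000
Title[v6test]: v6 test target
PageTop[v6test]: <h1>v6 test target</h1>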
2018 Feb 05
0
geo-replication command rsync returned with 3
(resending, sorry for duplicates)
On 01/24/2018 05:59 PM, Dietmar Putz wrote:
> strace rsync :
>
> 30743 23:34:47 newfstatat(3, "6737", {st_mode=S_IFDIR|0755,
> st_size=4096, ...}, AT_SYMLINK_NOFOLLOW) = 0
> 30743 23:34:47 newfstatat(3, "6741", {st_mode=S_IFDIR|0755,
> st_size=4096, ...}, AT_SYMLINK_NOFOLLOW) = 0
> 30743 23:34:47 getdents(3, /* 0
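For comparison, a similar trace of the gsyncd-spawned rsync can be captured by attaching strace to the running process (the pid is a placeholder):
strace -f -t -o /tmp/rsync.strace -p <rsync-pid>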
2017 Nov 10
3
Some strange errors in logs
Hi,
Please also cat "/var/lib/samba/private/named.conf".
And check if the correct bind9_dlz module is enabled.
dpkg -l | grep bind9
Jessie should be 9.9,
Stretch should be 9.10.
If this server was upgraded then you need to manually adjust the file above.
It looks to me like bind9_dlz is enabled in smb.conf but not loaded.
cat /var/log/daemon.log | grep dlz
You should see things like:
samba_dlz:
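For comparison, the named.conf generated by a Samba AD provision typically contains a dlz block along these lines; the exact module path and version suffix differ per distribution and BIND release, so treat this purely as an illustration:
dlz "AD DNS Zone" {
    database "dlopen /usr/lib/x86_64-linux-gnu/samba/bind9/dlz_bind9_9.so";
};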