Displaying 12 results from an estimated 12 matches for "3qsdn".
2018 Jan 29
0
geo-replication command rsync returned with 3
...ar:
>> It is clear that rsync is failing. Are the rsync versions on all
>> master and slave nodes the same?
>> I have seen that cause problems sometimes.
>>
>> -Kotresh HR
>>
>> On Wed, Jan 24, 2018 at 10:29 PM, Dietmar Putz
>> <dietmar.putz at 3qsdn.com <mailto:dietmar.putz at 3qsdn.com>> wrote:
>>
>> Hi all,
>>
>> I have run some tests on the latest Ubuntu 16.04.3 server image.
>> Upgrades were disabled...
>> The configuration was always the same...a distributed replicated
>>...
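A minimal sketch (not from the thread) of how such a version check could be done; gl-node2..gl-node4 and gl-slave1 are placeholder hostnames:

  # Compare the local rsync version with those on the other nodes.
  rsync --version | head -1
  for h in gl-node2 gl-node3 gl-node4 gl-slave1; do
      ssh "$h" 'rsync --version | head -1'
  done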
2018 Jan 25
2
geo-replication command rsync returned with 3
...:14, Kotresh Hiremath Ravishankar wrote:
> It is clear that rsync is failing. Are the rsync versions on all
> master and slave nodes the same?
> I have seen that cause problems sometimes.
>
> -Kotresh HR
>
> On Wed, Jan 24, 2018 at 10:29 PM, Dietmar Putz <dietmar.putz at 3qsdn.com
> <mailto:dietmar.putz at 3qsdn.com>> wrote:
>
> Hi all,
>
> I have run some tests on the latest Ubuntu 16.04.3 server image.
> Upgrades were disabled...
> The configuration was always the same...a distributed replicated
> volume on 4 VM...
2018 Jan 25
0
geo-replication command rsync returned with 3
It is clear that rsync is failing. Are the rsync versions on all master
and slave nodes the same?
I have seen that cause problems sometimes.
-Kotresh HR
On Wed, Jan 24, 2018 at 10:29 PM, Dietmar Putz <dietmar.putz at 3qsdn.com>
wrote:
> Hi all,
> I have run some tests on the latest Ubuntu 16.04.3 server image. Upgrades
> were disabled...
> The configuration was always the same...a distributed replicated volume on
> 4 VMs with geo-replication to a dist. repl. volume on 4 VMs.
> I start...
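For context, a rough sketch of how such a setup is created with the standard gluster CLI; all volume and host names below are made up:

  # 4-node distributed-replicated master volume (2x2).
  gluster volume create mastervol replica 2 \
      gl-node1:/brick/mastervol gl-node2:/brick/mastervol \
      gl-node3:/brick/mastervol gl-node4:/brick/mastervol
  gluster volume start mastervol
  # Geo-replication session to a matching volume on the slave side.
  gluster volume geo-replication mastervol gl-slave1::slavevol create push-pem
  gluster volume geo-replication mastervol gl-slave1::slavevol start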
2018 Jan 24
4
geo-replication command rsync returned with 3
...e(2, "\n", 1)        = 1
30743 23:34:47 exit_group(3)            = ?
30743 23:34:47 +++ exited with 3 +++
On 19.01.2018 at 17:27, Joe Julian wrote:
> Ubuntu 16.04
--
Dietmar Putz
3Q GmbH
Kurfürstendamm 102
D-10711 Berlin
Mobile: +49 171 / 90 160 39
Mail: dietmar.putz at 3qsdn.com
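A sketch of how a trace like the one above could be captured, assuming an rsync worker spawned by geo-replication is still running; the output path is arbitrary:

  # Attach to the first matching rsync process and log timestamped
  # syscalls, following forks, until it exits.
  pid=$(pgrep -f rsync | head -1)
  strace -f -tt -o /tmp/rsync.strace -p "$pid"
  # The trace then ends with the exit status, as seen above:
  #   exit_group(3)            = ?
  #   +++ exited with 3 +++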
2018 Mar 13
0
trashcan on dist. repl. volume with geo-replication
...wing same errors and the same
failure to sync the same directory. If
so, does the parent 'test1/b1' exist on the slave?
And doing ls on the trashcan should not affect geo-rep. Is there an easy
reproducer for this?
Thanks,
Kotresh HR
On Mon, Mar 12, 2018 at 10:13 PM, Dietmar Putz <dietmar.putz at 3qsdn.com>
wrote:
> Hello,
>
> in regard to
> https://bugzilla.redhat.com/show_bug.cgi?id=1434066
> I have run into another issue when using the trashcan feature on a
> dist. repl. volume running geo-replication (gfs 3.12.6 on Ubuntu 16.04.4),
> e.g. when removing an entire...
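For reference, the trashcan feature discussed here is a per-volume option; a sketch with a placeholder volume name:

  # Enable the trashcan; deleted files are then kept under
  # <mountpoint>/.trashcan instead of being removed immediately.
  gluster volume set myvol-1 features.trash on
  # Optionally cap the size of files that get moved to the trashcan.
  gluster volume set myvol-1 features.trash-max-filesize 2GB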
2018 Mar 13
1
trashcan on dist. repl. volume with geo-replication
...as one without activation of the trashcan feature on the slave...with
the same / similar problems.
I will come back with a more comprehensive and reproducible description
of that issue...
>
>
> Thanks,
> Kotresh HR
>
> On Mon, Mar 12, 2018 at 10:13 PM, Dietmar Putz <dietmar.putz at 3qsdn.com
> <mailto:dietmar.putz at 3qsdn.com>> wrote:
>
> Hello,
>
> in regard to
> https://bugzilla.redhat.com/show_bug.cgi?id=1434066
> <https://bugzilla.redhat.com/show_bug.cgi?id=1434066>
> I have run into another issue when using the...
2018 Mar 12
2
trashcan on dist. repl. volume with geo-replication
Hello,
in regard to
https://bugzilla.redhat.com/show_bug.cgi?id=1434066
I have run into another issue when using the trashcan feature on a
dist. repl. volume running geo-replication (gfs 3.12.6 on Ubuntu 16.04.4),
e.g. when removing an entire directory with subfolders:
tron@gl-node1:/myvol-1/test1/b1$ rm -rf *
Afterwards, listing the files in the trashcan:
tron@gl-node1:/myvol-1/test1$
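The excerpt is cut off above; a sketch of what such a listing might look like (paths and timestamp suffix are illustrative):

  tron@gl-node1:/myvol-1/test1$ ls -R /myvol-1/.trashcan/test1/b1
  # Deleted files reappear with a timestamp appended to their names,
  # e.g. file1_2018-03-12_163312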
2018 Jan 19
0
geo-replication command rsync returned with 3
Fwiw, rsync error 3 is:
"Errors selecting input/output files, dirs"
On January 19, 2018 7:36:18 AM PST, Dietmar Putz <dietmar.putz at 3qsdn.com> wrote:
>Dear All,
>
>We are running a dist. repl. volume on 4 nodes including
>geo-replication
>to another location.
>The geo-replication was running fine for months.
>Since 18th Jan. the geo-replication has been faulty. The geo-rep log on the
>master shows the following err...
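The full list of rsync exit codes is documented in the rsync man page; one way to view just that section:

  # Print the EXIT VALUES section of the rsync man page.
  man rsync | sed -n '/^EXIT VALUES/,/^ENVIRONMENT/p'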
2018 Jan 19
2
geo-replication command rsync returned with 3
Dear All,
We are running a dist. repl. volume on 4 nodes including geo-replication
to another location.
The geo-replication was running fine for months.
Since 18th Jan. the geo-replication has been faulty. The geo-rep log on the
master shows the following error in a loop, while the logs on the slave just
show 'I' (informational) messages...
Somewhat suspicious are the frequent 'shutting down connection'
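When a session turns faulty like this, the status output and the master-side logs are the usual starting points; a sketch with placeholder volume and host names (the log location is the typical one):

  # Per-node session state (Active/Passive/Faulty) and sync progress.
  gluster volume geo-replication mastervol gl-slave1::slavevol status detail
  # Follow the geo-rep worker logs on a master node.
  tail -f /var/log/glusterfs/geo-replication/mastervol/*.log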
2017 Jun 28
2
setting gfid on .trashcan/... failed - total outage
...Jun 23 16:35
/var/crash/_usr_sbin_glusterfsd.0.crash
-----------------------------------------------------
--
Dietmar Putz
3Q GmbH
Wetzlarer Str. 86
D-14482 Potsdam
Fax: +49 (0)331 / 2797 866 - 1
Phone: +49 (0)331 / 2797 866 - 8
Mobile: +49 171 / 90 160 39
Mail: dietmar.putz at 3qsdn.com
2017 Jun 29
0
setting gfid on .trashcan/... failed - total outage
On Wed, 2017-06-28 at 14:42 +0200, Dietmar Putz wrote:
> Hello,
>
> recently we twice had a partial gluster outage followed by a total
> outage of all four nodes. Looking into the gluster mailing list I found
> a very similar case in
> http://lists.gluster.org/pipermail/gluster-users/2016-June/027124.html
If you are talking about a crash happening on bricks, were you
2017 Jun 29
1
setting gfid on .trashcan/... failed - total outage
...isten-port=49152
ProcCwd: /
ProcEnviron:
LANGUAGE=en_GB:en
[ 14:48:52 ] - root@gl-master-03 ~ $
>
--
Dietmar Putz
3Q GmbH
Wetzlarer Str. 86
D-14482 Potsdam
Fax: +49 (0)331 / 2797 866 - 1
Phone: +49 (0)331 / 2797 866 - 8
Mobile: +49 171 / 90 160 39
Mail: dietmar.putz at 3qsdn.com
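The ProcCwd/ProcEnviron fields above come from an apport crash report such as the /var/crash file mentioned earlier; on Ubuntu it can be unpacked for inspection, e.g.:

  # Unpack the crash report into a directory for inspection.
  apport-unpack /var/crash/_usr_sbin_glusterfsd.0.crash /tmp/glusterfsd-crash
  ls /tmp/glusterfsd-crash    # CoreDump, ProcStatus, ProcEnviron, ...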