Displaying 20 results from an estimated 500 matches similar to: "How to read geo replication timestamps from logs"
2018 Mar 12 - 2 replies - trashcan on dist. repl. volume with geo-replication
Hello,
in regard to
https://bugzilla.redhat.com/show_bug.cgi?id=1434066
I have run into another issue when using the trashcan feature on a
dist. repl. volume running geo-replication (gfs 3.12.6 on ubuntu 16.04.4),
e.g. when removing an entire directory with subfolders:
tron at gl-node1:/myvol-1/test1/b1$ rm -rf *
afterwards, listing files in the trashcan:
tron at gl-node1:/myvol-1/test1$
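For context, the trashcan feature is a per-volume option and deleted files land under a .trashcan directory at the volume root. A minimal sketch of the relevant settings, assuming the volume name matches the mount path shown in the snippet (the size limit is illustrative):

gluster volume set myvol-1 features.trash on
gluster volume set myvol-1 features.trash-max-filesize 1GB
ls /myvol-1/.trashcan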
2018 Jan 19 - 2 replies - geo-replication command rsync returned with 3
Dear All,
we are running a dist. repl. volume on 4 nodes including geo-replication
to another location.
The geo-replication had been running fine for months.
Since 18th Jan. the geo-replication is faulty. The geo-rep log on the
master shows the following error in a loop, while the logs on the slave just
show 'I'(nformational) messages...
Somewhat suspicious are the frequent 'shutting down connection'
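A quick way to see which node of a session is faulty and in what state (a generic sketch; the volume and slave names are placeholders, not taken from this thread):

gluster volume geo-replication <MASTERVOL> <SLAVEHOST>::<SLAVEVOL> status detail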
2018 Mar 13 - 0 replies - trashcan on dist. repl. volume with geo-replication
Hi Dietmar,
I am trying to understand the problem and have a few questions.
1. Is trashcan enabled only on the master volume?
2. Does the 'rm -rf' done on the master volume get synced to the slave?
3. If trashcan is disabled, does the issue go away?
The geo-rep error just says that it failed to create the directory
"Oracle_VM_VirtualBox_Extension" on the slave.
Usually this would be because of gfid
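For reference, gfids on master and slave can be compared via the virtual xattr exposed on a glusterfs mount; the mount paths below are purely illustrative:

getfattr -n glusterfs.gfid.string /mnt/master-vol/test1/b1/Oracle_VM_VirtualBox_Extension
getfattr -n glusterfs.gfid.string /mnt/slave-vol/test1/b1/Oracle_VM_VirtualBox_Extension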
2018 Mar 13 - 1 reply - trashcan on dist. repl. volume with geo-replication
Hi Kotresh,
thanks for your response...
answers inline...
best regards
Dietmar
On 13.03.2018 at 06:38, Kotresh Hiremath Ravishankar wrote:
> Hi Dietmar,
>
> I am trying to understand the problem and have a few questions.
>
> 1. Is trashcan enabled only on the master volume?
no, trashcan is also enabled on the slave. Settings are the same as on the
master, but the trashcan on the slave is complete
2018 Jan 19 - 0 replies - geo-replication command rsync returned with 3
Fwiw, rsync error 3 is:
"Errors selecting input/output files, dirs"
On January 19, 2018 7:36:18 AM PST, Dietmar Putz <dietmar.putz at 3qsdn.com> wrote:
>Dear All,
>
>we are running a dist. repl. volume on 4 nodes including
>geo-replication
>to another location.
>the geo-replication was running fine for months.
>since 18th jan. the geo-replication is faulty.
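If needed, the exact rsync options that geo-replication uses for the session can be inspected through the session config; a generic sketch with placeholder names:

gluster volume geo-replication <MASTERVOL> <SLAVEHOST>::<SLAVEVOL> config
gluster volume geo-replication <MASTERVOL> <SLAVEHOST>::<SLAVEVOL> config rsync-options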
2018 Mar 06 - 1 reply - geo replication
Hi,
we have problems with geo-replication on glusterfs 3.12.6 / Ubuntu 16.04.
I can see a 'master volinfo unavailable' in the master logfile.
Any ideas?
Master:
Status of volume: testtomcat
Gluster process                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfstest07:/gfs/testtomcat/mount      49153     0
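The 'master volinfo unavailable' message suggests gsyncd could not fetch the master volume's info; a first sanity check, using the volume name from the snippet:

gluster volume info testtomcat
gluster volume status testtomcat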
2017 Sep 29 - 1 reply - Gluster geo replication volume is faulty
I am trying to set up geo-replication between two gluster volumes
I have set up two replica 2 arbiter 1 volumes with 9 bricks
[root at gfs1 ~]# gluster volume info
Volume Name: gfsvol
Type: Distributed-Replicate
Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: gfs2:/gfs/brick1/gv0
Brick2:
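For reference, the usual sequence for creating and starting such a session (the slave host and volume names are placeholders, as they do not appear in the snippet):

gluster system:: execute gsec_create
gluster volume geo-replication gfsvol <SLAVEHOST>::<SLAVEVOL> create push-pem
gluster volume geo-replication gfsvol <SLAVEHOST>::<SLAVEVOL> start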
2017 Oct 06 - 0 replies - Gluster geo replication volume is faulty
On 09/29/2017 09:30 PM, rick sanchez wrote:
> I am trying to get up geo replication between two gluster volumes
>
> I have set up two replica 2 arbiter 1 volumes with 9 bricks
>
> [root at gfs1 ~]# gluster volume info
> Volume Name: gfsvol
> Type: Distributed-Replicate
> Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306
> Status: Started
> Snapshot Count: 0
> Number
2018 Jan 24 - 4 replies - geo-replication command rsync returned with 3
Hi all,
I have made some tests on the latest Ubuntu 16.04.3 server image.
Upgrades were disabled...
The configuration was always the same: a distributed replicated volume
on 4 VMs with geo-replication to a dist. repl. volume on 4 VMs.
I started with 3.7.20, upgraded to 3.8.15, to 3.10.9, to 3.12.5. After
each upgrade I tested the geo-replication, which worked well every time.
then i
2018 Jan 25 - 0 replies - geo-replication command rsync returned with 3
It is clear that rsync is failing. Are the rsync versions on all master
and slave nodes the same?
I have seen that cause problems sometimes.
-Kotresh HR
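A quick way to compare the installed rsync versions across nodes (assuming password-less ssh between them; hostnames are placeholders):

for h in master1 master2 master3 master4 slave1; do echo "$h:"; ssh "$h" 'rsync --version | head -1'; done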
On Wed, Jan 24, 2018 at 10:29 PM, Dietmar Putz <dietmar.putz at 3qsdn.com>
wrote:
> Hi all,
> i have made some tests on the latest Ubuntu 16.04.3 server image. Upgrades
> were disabled...
> the configuration was always the
2024 Jan 24 - 1 reply - Geo-replication status is getting Faulty after few seconds
Hi All,
I have run the following commands on master3, and that has added master3 to geo-replication.
gluster system:: execute gsec_create
gluster volume geo-replication tier1data drtier1data::drtier1data create push-pem force
gluster volume geo-replication tier1data drtier1data::drtier1data stop
gluster volume geo-replication tier1data drtier1data::drtier1data start
Now I am able to start the
2024 Jan 27 - 1 reply - Geo-replication status is getting Faulty after few seconds
Hi Anant,
I would first start by checking whether you can ssh from all masters to the slave node. If you haven't set up a dedicated user for the session, then gluster is using root.
Best Regards,
Strahil Nikolov
On Friday, 26 January 2024 at 18:07:59 GMT+2, Anant Saraswat <anant.saraswat at techblue.co.uk> wrote:
Hi All,
I have run the following commands on master3,
2024 Jan 22 - 1 reply - Geo-replication status is getting Faulty after few seconds
Hi There,
We have a Gluster setup with three master nodes in replicated mode and one slave node with geo-replication.
# gluster volume info
Volume Name: tier1data
Type: Replicate
Volume ID: 93c45c14-f700-4d50-962b-7653be471e27
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: master1:/opt/tier1data2019/brick
Brick2: master2:/opt/tier1data2019/brick
2018 Jan 25 - 2 replies - geo-replication command rsync returned with 3
Hi Kotresh,
thanks for your response...
I have made further tests based on ubuntu 16.04.3 (latest upgrades) and
gfs 3.12.5 with the following rsync versions:
1. ii  rsync  3.1.1-3ubuntu1
2. ii  rsync  3.1.1-3ubuntu1.2
3. ii  rsync  3.1.2-2ubuntu0.1
in each test all nodes had the same rsync version installed. all
2024 Jan 27 - 1 reply - Geo-replication status is getting Faulty after few seconds
Don't forget to test with the georep key. I think it was /var/lib/glusterd/geo-replication/secret.pem
Best Regards,
Strahil Nikolov
On Saturday, 27 January 2024 at 07:24:07 GMT+2, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
Hi Anant,
I would first start by checking whether you can ssh from all masters to the slave node. If you haven't set up a dedicated user for the
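A minimal way to run that test, using the key path mentioned above (the slave hostname is a placeholder; note that the key may be restricted to the gsyncd command on the slave, so a successful connection may just invoke gsyncd rather than give a shell):

ssh -i /var/lib/glusterd/geo-replication/secret.pem root@<SLAVEHOST>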
2018 Jan 29 - 0 replies - geo-replication command rsync returned with 3
Hi all,
by downgrading
ii  libc6:amd64  2.23-0ubuntu10
to
ii  libc6:amd64  2.23-0ubuntu3
the problem was solved, at least in the gfs test environment running gfs
3.13.2 and 3.7.20 and in our production environment with 3.7.18.
Possibly it helps someone...
best regards
Dietmar
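For anyone wanting to reproduce the downgrade on Ubuntu 16.04, a hedged sketch (the pinned version must still be available in the configured apt sources):

apt-get install libc6=2.23-0ubuntu3
apt-mark hold libc6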
On 25.01.2018 at 14:06, Dietmar Putz wrote:
>
> Hi Kotresh,
>
> thanks for your response...
>
> i
2011 May 03 - 3 replies - Issue with geo-replication and nfs auth
hi,
I have some issues with geo-replication (since 3.2.0) and nfs auth (since the initial release).
Geo-replication
---------------
System: Debian 6.0 amd64
Glusterfs: 3.2.0
MASTER (volume) => SLAVE (directory)
For some volumes it works, but for others I can't enable geo-replication and get this error with a faulty status:
2011-05-03 09:57:40.315774] E
2017 Jul 28 - 0 replies - /var/lib/misc/glusterfsd growing and using up space on OS disk
Hello,
Today while freeing up some space on my OS disk I just discovered that there is a /var/lib/misc/glusterfsd directory which seems to save data related to geo-replication.
In particular there is a hidden sub-directory called ".processed" as you can see here:
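To see what is consuming the space there, a simple check (generic GNU du usage, nothing thread-specific assumed):

du -h --max-depth=2 /var/lib/misc/glusterfsd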
2011 Jun 28 - 2 replies - Issue with Gluster Quota
2008 Mar 09 - 2 replies - [LLVMdev] linker error (llvm-config, eclipse)
Hi,
I'm playing around with LLVM and the Kaleidoscope tutorial and first of
all I have to say I'm really impressed. LLVM rocks!
Unfortunately I've now run into a linker error while trying to optimize
the IR or turn it into bitcode, and likely due to my very limited
experience with C++ I just can't figure out how to resolve it.
The linker complains that the llvm::WriteBitcodeToFile
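Unresolved LLVM symbols such as llvm::WriteBitcodeToFile usually mean the corresponding component is missing from the llvm-config link line; a hedged sketch of a typical fix (the source file name is illustrative):

g++ toy.cpp `llvm-config --cxxflags --ldflags --libs core bitwriter` -o toy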