Displaying 20 results from an estimated 700 matches similar to: "Upgrade to 4.1.1 geo-replication does not work"
2023 Feb 20
1
Gluster 11.0 upgrade
I made a recursive diff on the upgraded arbiter.
/var/lib/glusterd/vols/gds-common is the upgraded arbiter.
/home/marcus/gds-common is a copy from one of the other nodes still on gluster 10.
diff -r /var/lib/glusterd/vols/gds-common/bricks/urd-gds-030:-urd-gds-gds-common /home/marcus/gds-common/bricks/urd-gds-030:-urd-gds-gds-common
5c5
< listen-port=60419
---
> listen-port=0
11c11
<
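A minimal sketch of scripting that comparison across the whole vol directory, assuming urd-gds-031 is one of the nodes still on gluster 10 and filtering out listen-port, which legitimately changes whenever a brick is (re)started:
# Pull the vol definition from a gluster 10 node and diff it against the upgraded arbiter.
scp -r urd-gds-031:/var/lib/glusterd/vols/gds-common /tmp/gds-common-v10
diff -r /var/lib/glusterd/vols/gds-common /tmp/gds-common-v10 | grep -v listen-port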
2023 Feb 20
2
Gluster 11.0 upgrade
Hi again Xavi,
I did some more testing on my virt machines
with the same setup:
Number of Bricks: 1 x (2 + 1) = 3
If I do it the same way and upgrade the arbiter first,
I get the same behavior: the bricks do not start
and the other nodes do not "see" the upgraded node.
If I upgrade one of the other nodes (non-arbiter) and restart
glusterd on both the arbiter and the other node, the arbiter
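For reference, a rough per-node sequence matching the test above (package manager and package name are assumptions for a CentOS-style node; the volume name is taken from the related gds-common thread):
# One node at a time, non-arbiter first per the observation above.
systemctl stop glusterd
killall glusterfsd glusterfs 2>/dev/null || true   # the upstream guide also stops brick/shd processes
dnf upgrade -y glusterfs-server                    # assumed package name and manager
systemctl start glusterd
gluster peer status                                # the other nodes should show as Connected
gluster volume status gds-common                   # the bricks should come back online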
2023 Feb 21
2
Gluster 11.0 upgrade
Hi Xavi,
Copying the same info file worked well and the gluster 11 arbiter
is now up and running, and all the nodes are communicating
the way they should.
Just another note on something I discovered on my virt machines.
All three nodes have been upgraded to 11.0 and are working.
If I run:
gluster volume get all cluster.op-version
I get:
Option Value
------
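For context, the usual follow-up once every node runs 11.0 is to raise the cluster op-version explicitly; a hedged sketch (110000 is assumed to be the op-version matching Gluster 11, so confirm the exact value against the release notes or max-op-version):
gluster volume get all cluster.op-version       # currently active op-version
gluster volume get all cluster.max-op-version   # highest value this cluster supports
gluster volume set all cluster.op-version 110000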
2023 Oct 25
1
Replace faulty host
Hi all,
I have a problem with one of our gluster clusters.
This is the setup:
Volume Name: gds-common
Type: Distributed-Replicate
Volume ID: 42c9fa00-2d57-4a58-b5ae-c98c349cfcb6
Status: Started
Snapshot Count: 26
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: urd-gds-031:/urd-gds/gds-common
Brick2: urd-gds-032:/urd-gds/gds-common
Brick3: urd-gds-030:/urd-gds/gds-common
2023 Oct 27
1
Replace faulty host
Hi Markus,
It looks quite well documented, but please use https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/sect-replacing_hosts as 3.5 is the latest version for RHGS.
If the OS disks are failing, I would have tried moving the data disks to the new machine and transferring the gluster files in /etc and /var/lib to the new node.
Any reason to reuse
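If only the brick host is being swapped rather than moving the disks as suggested above, a sketch of the usual brick-replacement route (urd-gds-033 is a hypothetical new host and urd-gds-030 merely stands in for the faulty one; the brick directory must already exist on the new host):
gluster peer probe urd-gds-033
gluster volume replace-brick gds-common urd-gds-030:/urd-gds/gds-common urd-gds-033:/urd-gds/gds-common commit force
gluster volume heal gds-common full            # let self-heal repopulate the new brick
gluster volume heal gds-common info summary    # watch progress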
2018 Apr 10
0
Gluster cluster on two networks
Marcus,
Can you share server-side gluster peer probe and client-side mount
command-lines.
On Tue, Apr 10, 2018 at 12:36 AM, Marcus Pedersén <marcus.pedersen at slu.se>
wrote:
> Hi all!
>
> I have setup a replicated/distributed gluster cluster 2 x (2 + 1).
>
> Centos 7 and gluster version 3.12.6 on server.
>
> All machines have two network interfaces and connected to
2018 Apr 10
1
Gluster cluster on two networks
Yes,
In first server (urd-gds-001):
gluster peer probe urd-gds-000
gluster peer probe urd-gds-002
gluster peer probe urd-gds-003
gluster peer probe urd-gds-004
gluster pool list (from urd-gds-001):
UUID Hostname State
bdbe4622-25f9-4ef1-aad1-639ca52fc7e0 urd-gds-002 Connected
2a48a3b9-efa0-4fb7-837f-c800f04bf99f urd-gds-003 Connected
ad893466-ad09-47f4-8bb4-4cea84085e5b urd-gds-004
2018 Apr 09
2
Gluster cluster on two networks
Hi all!
I have set up a replicated/distributed gluster cluster 2 x (2 + 1).
Centos 7 and gluster version 3.12.6 on server.
All machines have two network interfaces and are connected to two different networks,
10.10.0.0/16 (with hostnames in /etc/hosts, gluster version 3.12.6)
192.168.67.0/24 (with ldap, gluster version 3.13.1)
Gluster cluster was created on the 10.10.0.0/16 net, gluster peer
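For the client-side mount command asked about earlier in this thread, a minimal sketch; the volume name is hypothetical and the urd-gds-* names must resolve to the 192.168.67.0/24 addresses on the client if traffic is to use that network:
mount -t glusterfs -o backup-volfile-servers=urd-gds-001:urd-gds-002 urd-gds-000:/gdsvol /mnt/gdsvol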
2017 Sep 29
1
Gluster geo replication volume is faulty
I am trying to get geo-replication up between two gluster volumes
I have set up two replica 2 arbiter 1 volumes with 9 bricks
[root at gfs1 ~]# gluster volume info
Volume Name: gfsvol
Type: Distributed-Replicate
Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: gfs2:/gfs/brick1/gv0
Brick2:
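The usual sequence for bringing up such a session, sketched with the gfsvol name from the excerpt and a hypothetical slave host and volume (gfs-slave, gfsvol_rep); it assumes the slave volume already exists and passwordless root ssh to the slave host works:
gluster system:: execute gsec_create                                  # generate the geo-rep ssh keys
gluster volume geo-replication gfsvol gfs-slave::gfsvol_rep create push-pem
gluster volume geo-replication gfsvol gfs-slave::gfsvol_rep start
gluster volume geo-replication gfsvol gfs-slave::gfsvol_rep status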
2024 Jan 19
1
Heal failure
Hi all,
I have a really strange problem with my cluster.
Running gluster 10.4, replicated with an arbiter:
Number of Bricks: 1 x (2 + 1) = 3
All my files in the system seem fine and I have not
found any broken files,
even though heal-count reports 40000 files that need healing.
Heal fails for all the files over and over again.
If I use heal info I just get a long list of gfids
and trying
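A short sketch of the commands typically used to dig into that state; VOLNAME, the brick path, and the gfid are placeholders since the excerpt does not name them:
gluster volume heal VOLNAME info summary       # per-brick counts of entries pending heal
gluster volume heal VOLNAME info split-brain   # check whether the listed gfids are split-brain victims
gluster volume heal VOLNAME full               # trigger a full self-heal crawl
# A gfid from "heal info" can be mapped back to a path, since regular files are
# hard-linked under .glusterfs/<aa>/<bb>/<gfid> on each brick:
find /path/to/brick -samefile /path/to/brick/.glusterfs/8a/5a/8a5a8b42-1111-2222-3333-444455556666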
2018 Apr 10
0
Gluster cluster on two networks
Hi all!
I have set up a replicated/distributed gluster cluster 2 x (2 + 1).
Centos 7 and gluster version 3.12.6 on server.
All machines have two network interfaces and are connected to two different networks,
10.10.0.0/16 (with hostnames in /etc/hosts, gluster version 3.12.6)
192.168.67.0/24 (with ldap, gluster version 3.13.1)
Gluster cluster was created on the 10.10.0.0/16 net, gluster peer probe
2018 Mar 06
1
geo replication
Hi,
I have problems with geo-replication on glusterfs 3.12.6 / Ubuntu 16.04.
I can see a "master volinfo unavailable" in the master logfile.
Any ideas?
Master:
Status of volume: testtomcat
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick gfstest07:/gfs/testtomcat/mount 49153 0
2018 Jan 19
2
geo-replication command rsync returned with 3
Dear All,
We are running a dist. repl. volume on 4 nodes, including geo-replication
to another location.
The geo-replication was running fine for months.
Since 18th Jan. the geo-replication has been faulty. The geo-rep log on the
master shows the following error in a loop, while the logs on the slave just
show informational ('I') messages...
Somewhat suspicious are the frequent 'shutting down connection'
2011 May 03
3
Issue with geo-replication and nfs auth
hi,
I have some issues with geo-replication (since 3.2.0) and nfs auth (since the initial release).
Geo-replication
---------------
System : Debian 6.0 amd64
Glusterfs: 3.2.0
MASTER (volume) => SLAVE (directory)
For some volumes it works, but for others I can't enable geo-replication and get this error with a faulty status:
2011-05-03 09:57:40.315774] E
2012 Jan 03
1
geo-replication loops
Hi,
I was thinking about a common (I hope!) use case of Glusterfs geo-replication.
Imagine 3 different facilities, each having their own glusterfs deployment:
* central-office
* remote-office1
* remote-office2
Every client mounts their local glusterfs deployment and writes files
(e.g. user A deposits a PDF document on remote-office2), and it gets
replicated to the central-office glusterfs volume as soon
2024 Jan 27
1
Geo-replication status is getting Faulty after few seconds
Don't forget to test with the georep key. I think it was /var/lib/glusterd/geo-replication/secret.pem
Best Regards,
Strahil Nikolov
On Saturday, 27 January 2024 at 07:24:07 GMT+2, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
Hi Anant,
I would first start by checking if you can do ssh from all masters to the slave node. If you haven't set up a dedicated user for the
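A minimal way to test exactly that, assuming root is the session user (adjust if a dedicated geo-rep user was set up); SLAVEHOST is a placeholder:
# From each master node; it should connect without a password prompt
# (the key may be restricted to gsyncd on the slave, so don't expect a shell).
ssh -i /var/lib/glusterd/geo-replication/secret.pem root@SLAVEHOST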
2018 Jan 19
0
geo-replication command rsync returned with 3
Fwiw, rsync error 3 is:
"Errors selecting input/output files, dirs"
On January 19, 2018 7:36:18 AM PST, Dietmar Putz <dietmar.putz at 3qsdn.com> wrote:
>Dear All,
>
>we are running a dist. repl. volume on 4 nodes including
>geo-replication
>to another location.
>the geo-replication was running fine for months.
>since 18th jan. the geo-replication is faulty.
2014 Jun 27
1
geo-replication status faulty
Venky Shankar, can you follow up on these questions? I too have this issue and cannot resolve the reference to '/nonexistent/gsyncd'.
As Steve mentions, the nonexistent reference in the logs looks like the culprit, especially since the ssh command being run is printed on an earlier line with the incorrect remote path.
I have followed the configuration steps as documented in
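The workaround usually suggested for that symptom is to point the session at the real gsyncd path on the slave via the session config; a sketch with placeholder volume/host names and an assumed install path (verify the path on the slave first):
gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL config remote-gsyncd                                  # show the current value
gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL config remote-gsyncd /usr/libexec/glusterfs/gsyncd   # assumed path
gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL stop
gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL start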
2024 Jan 22
1
Geo-replication status is getting Faulty after few seconds
Hi There,
We have a Gluster setup with three master nodes in replicated mode and one slave node with geo-replication.
# gluster volume info
Volume Name: tier1data
Type: Replicate
Volume ID: 93c45c14-f700-4d50-962b-7653be471e27
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: master1:/opt/tier1data2019/brick
Brick2: master2:/opt/tier1data2019/brick
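When a session flips to Faulty like this, the usual first steps are the detailed status plus the gsyncd log on the reporting master; a sketch using the tier1data name from the excerpt, a placeholder slave, and a log path whose exact directory name varies by version:
gluster volume geo-replication tier1data SLAVEHOST::SLAVEVOL status detail
tail -n 100 /var/log/glusterfs/geo-replication/tier1data_SLAVEHOST_SLAVEVOL/gsyncd.log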
2017 Oct 06
0
Gluster geo replication volume is faulty
On 09/29/2017 09:30 PM, rick sanchez wrote:
> I am trying to get up geo replication between two gluster volumes
>
> I have set up two replica 2 arbiter 1 volumes with 9 bricks
>
> [root at gfs1 ~]# gluster volume info
> Volume Name: gfsvol
> Type: Distributed-Replicate
> Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306
> Status: Started
> Snapshot Count: 0
> Number