similar to: Problem with Gluster Geo Replication, status faulty

Displaying 20 results from an estimated 300 matches similar to: "Problem with Gluster Geo Replication, status faulty"

2011 Jun 28
2
Issue with Gluster Quota
2011 May 03
3
Issue with geo-replication and nfs auth
Hi, I have some issues with geo-replication (since 3.2.0) and nfs auth (since the initial release). Geo-replication --------------- System: Debian 6.0 amd64 Glusterfs: 3.2.0 MASTER (volume) => SLAVE (directory) For some volumes it works, but for others I can't enable geo-replication and get this error with a faulty status: [2011-05-03 09:57:40.315774] E
2018 Jul 13
2
Upgrade to 4.1.1 geo-replication does not work
Hi Kotresh, Yes, all nodes have the same version, 4.1.1, on both master and slave. All glusterd are crashing on the master side. Will send logs tonight. Thanks, Marcus ################ Marcus Pedersén Systemadministrator Interbull Centre ################ Sent from my phone ################ On 13 July 2018 at 11:28, Kotresh Hiremath Ravishankar <khiremat at redhat.com> wrote: Hi Marcus, Is the
2024 Feb 05
1
Graceful shutdown doesn't stop all Gluster processes
Hello Everyone, I am using GlusterFS 9.4, and whenever we use the systemctl command to stop the Gluster server, it leaves many Gluster processes running. So, I just want to check how to shut down the Gluster server in a graceful manner. Is there any specific sequence or trick I need to follow? Currently, I am using the following command: [root at master2 ~]# systemctl stop glusterd.service
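For reference, systemctl stop glusterd only stops the management daemon, not the brick or self-heal processes. A minimal sketch of a fuller shutdown, assuming the helper script shipped with recent GlusterFS packages is installed at its usual path:
# stop the management daemon
systemctl stop glusterd.service
# then stop the remaining brick, self-heal and geo-rep worker processes with the bundled script
/usr/share/glusterfs/scripts/stop-all-gluster-processes.sh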
2012 Apr 27
1
geo-replication and rsync
Hi, can someone tell me the difference between geo-replication and plain rsync? At which frequency are files replicated with geo-replication?
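For reference, a geo-replication session syncs continuously rather than on a fixed schedule, and its tunables can be inspected with the config subcommand; a minimal sketch, with illustrative volume and slave names:
# list the settings of an existing geo-replication session
gluster volume geo-replication mastervol slavehost::slavevol config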
2012 Jan 03
1
geo-replication loops
Hi, I was thinking about a common (I hope!) use case of Glusterfs geo-replication. Imagine 3 different facilities, each having its own glusterfs deployment: * central-office * remote-office1 * remote-office2 Every client mounts its local glusterfs deployment and writes files (e.g. user A deposits a PDF document on remote-office2), and it gets replicated to the central-office glusterfs volume as soon
2017 Sep 29
1
Gluster geo replication volume is faulty
I am trying to set up geo-replication between two gluster volumes. I have set up two replica 2 arbiter 1 volumes with 9 bricks. [root at gfs1 ~]# gluster volume info Volume Name: gfsvol Type: Distributed-Replicate Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306 Status: Started Snapshot Count: 0 Number of Bricks: 3 x (2 + 1) = 9 Transport-type: tcp Bricks: Brick1: gfs2:/gfs/brick1/gv0 Brick2:
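When a session turns faulty, a common first step is to check the per-worker session status and the geo-replication logs on the master. A minimal sketch, taking the volume name gfsvol from the output above; the slave host and slave volume names are illustrative, and log paths vary slightly between releases:
# show the state of every worker in the session
gluster volume geo-replication gfsvol slavehost::slavevol status detail
# the master-side worker logs usually explain why a worker went faulty
less /var/log/glusterfs/geo-replication/gfsvol*/*.log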
2011 Jul 26
1
Error during geo-replication : Unable to get <uuid>.xtime attr
Hi, I got a problem during geo-replication: the master Gluster server log has the following error every second: [2011-07-26 04:20:50.618532] W [libxlator.c:128:cluster_markerxtime_cbk] 0-flvol-dht: Unable to get <uuid>.xtime attr While the slave log has the error every few seconds: [2011-07-26 04:25:08.77133] E [stat-prefetch.c:695:sp_remove_caches_from_all_fds_opened]
2011 Sep 16
2
Can't replace dead peer/brick
I have a simple setup: gluster> volume info Volume Name: myvolume Type: Distributed-Replicate Status: Started Number of Bricks: 3 x 2 = 6 Transport-type: tcp Bricks: Brick1: 10.2.218.188:/srv Brick2: 10.116.245.136:/srv Brick3: 10.206.38.103:/srv Brick4: 10.114.41.53:/srv Brick5: 10.68.73.41:/srv Brick6: 10.204.129.91:/srv I *killed* Brick #4 (kill -9 and then shut down instance). My
2024 Jan 27
1
Geo-replication status is getting Faulty after few seconds
Don't forget to test with the georep key. I think it was /var/lib/glusterd/geo-replication/secret.pem Best Regards, Strahil Nikolov On Saturday, 27 January 2024 at 07:24:07 GMT+2, Strahil Nikolov <hunter86_bg at yahoo.com> wrote: Hi Anant, I would first start by checking if you can do ssh from all masters to the slave node. If you haven't set up a dedicated user for the
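A minimal sketch of that check, using the key path mentioned above; the slave host name "slavenode" is illustrative:
# run from every master node; it should authenticate without prompting for a password
ssh -i /var/lib/glusterd/geo-replication/secret.pem root@slavenode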
2012 Jan 05
1
Can't stop or delete volume
Hi, I can't stop or delete a replica volume: # gluster volume info Volume Name: sync1 Type: Replicate Status: Started Number of Bricks: 2 Transport-type: tcp Bricks: Brick1: thinkpad:/gluster/export Brick2: quad:/raid/gluster/export # gluster volume stop sync1 Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y Volume sync1 does not exist # gluster volume
2011 Sep 28
1
Custom rpms failing
I have managed to build i386 rpms for CentOS, based on the 3.2.3 SRPM, but they don't work: # rpm -Uhv glusterfs-core-3.2.3-1.i386.rpm glusterfs-fuse-3.2.3-1.i386.rpm glusterfs-rdma-3.2.3-1.i386.rpm Preparing... ########################################### [100%] 1:glusterfs-core ########################################### [ 33%] glusterd: error while loading shared
2011 Jun 06
2
Gluster 3.2.0 and ucarp not working
Hello everybody. I have a problem setting up gluster failover functionality. Based on the manual I set up ucarp, which is working well (tested with ping/ssh etc.). But when I use the virtual address for the gluster volume mount and turn off one of the nodes, the machine/gluster will freeze until the node is back online. My virtual IP is 3.200 and the machines' real IPs are 3.233 and 3.5. In the gluster log I can see: [2011-06-06
2011 Mar 31
1
Error rpmbuild Glusterfs 3.1.3
Hi, I have a lot of trouble when I try to build RPMs from the glusterfs 3.1.3 tgz on my SLES servers (SLES 10.1 & SLES 11.1). Everything runs fine, I guess, until it tries to build the RPMs. Then I always run into this error: RPM build errors: File not found: /var/tmp/glusterfs-3.1.3-1-root/opt/glusterfs/3.1.3/local/libexec/gsyncd File not found by glob:
2018 Jan 19
2
geo-replication command rsync returned with 3
Dear All, we are running a dist. repl. volume on 4 nodes, including geo-replication to another location. The geo-replication was running fine for months; since 18th Jan. the geo-replication is faulty. The geo-rep log on the master shows the following error in a loop, while the logs on the slave just show 'I'nformational messages... Somewhat suspicious are the frequent 'shutting down connection'
2011 Aug 12
2
Replace brick of a dead node
Hi! Seeking pardon from the experts, but I have a basic usage question that I could not find a straightforward answer to. I have a two-node cluster, with two bricks replicated, one on each node. Let's say one of the nodes dies and is unreachable. I want to be able to spin up a new node and replace the dead node's brick with a location on the new node. The command 'gluster volume
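A minimal sketch of the usual approach, assuming illustrative names (VOLNAME, the dead node's brick path, and a replacement host "newnode"); exact syntax varies a little between gluster releases:
# make the replacement host part of the trusted pool
gluster peer probe newnode
# point the volume at the new brick in place of the dead one
gluster volume replace-brick VOLNAME deadnode:/export/brick newnode:/export/brick commit force
# let self-heal copy the data onto the new brick
gluster volume heal VOLNAME full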
2024 Jan 22
1
Geo-replication status is getting Faulty after few seconds
Hi There, We have a Gluster setup with three master nodes in replicated mode and one slave node with geo-replication. # gluster volume info Volume Name: tier1data Type: Replicate Volume ID: 93c45c14-f700-4d50-962b-7653be471e27 Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: master1:/opt/tier1data2019/brick Brick2: master2:/opt/tier1data2019/brick
2018 Mar 06
1
geo replication
Hi, I have problems with geo-replication on glusterfs 3.12.6 / Ubuntu 16.04. I can see a 'master volinfo unavailable' in the master logfile. Any ideas? Master: Status of volume: testtomcat Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick gfstest07:/gfs/testtomcat/mount 49153 0
2011 May 05
1
CIFS Documentation
Hello, It would be a good idea to update the documentation about CIFS: http://gluster.com/community/documentation/index.php/Gluster_3.2:_Exporting_Gluster_Volumes_Through_Samba The simple truth is that there is no CIFS support in gluster itself, so it should not be in the docs. As I found out, this was also suggested earlier: http://www.mail-archive.com/gluster-users at
2012 Oct 20
1
Gluster download link redirect to redhat
Dear Team, Please note that many download links on gluster.org redirect to redhat.com. Please refer to the links below and correct the download links. http://gluster.org/community/documentation/index.php/Gluster_3.2:_Downloading_and_Installing_the_Gluster_Virtual_Storage_Appliance_for_KVM Click on the link and try to download the Gluster virtual storage appliance for KVM, but it