similar to: Issue with Gluster Quota

Displaying 20 results from an estimated 700 matches similar to: "Issue with Gluster Quota"

2013 Mar 20
2
Geo-replication broken in 3.4 alpha2?
Dear all, I'm running GlusterFS 3.4 alpha2 together with oVirt 3.2. This is solely a test system and it doesn't have much data or anything important on it. Currently it has only 2 VMs running and disk usage is around 15 GB. I have been trying to set up geo-replication for disaster recovery testing. For geo-replication I did the following: all machines are running CentOS 6.4 and using
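The thread does not show the actual commands, but a minimal sketch of the pre-3.5 style setup looks roughly like this; mastervol, slavevol and slavehost are placeholders, not names from the post:

# ssh-keygen -t rsa                                   (on the master node)
# ssh-copy-id root@slavehost                          (passwordless SSH from master to slave)
# gluster volume geo-replication mastervol root@slavehost::slavevol start
# gluster volume geo-replication mastervol root@slavehost::slavevol status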
2011 May 03
3
Issue with geo-replication and nfs auth
Hi, I have some issues with geo-replication (since 3.2.0) and NFS auth (since the initial release). Geo-replication --------------- System: Debian 6.0 amd64 Glusterfs: 3.2.0 MASTER (volume) => SLAVE (directory) For some volumes it works, but for others I can't enable geo-replication and get this error with a faulty status: 2011-05-03 09:57:40.315774] E
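When a 3.2.x session goes faulty, the usual first steps are to check the status and raise the session's log level; volume, host and directory below are placeholders, and the log path is the usual default, so adjust if your build differs:

# gluster volume geo-replication somevol ssh://root@slavehost:/backup/dir status
# gluster volume geo-replication somevol ssh://root@slavehost:/backup/dir config log-level DEBUG
# less /var/log/glusterfs/geo-replication/somevol/*.log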
2018 Jan 22
1
geo-replication initial setup with existing data
2018 Jul 13
2
Upgrade to 4.1.1 geo-replication does not work
Hi Kotresh, Yes, all nodes have the same version 4.1.1, both master and slave. All glusterd are crashing on the master side. Will send logs tonight. Thanks, Marcus ################ Marcus Pedersén Systemadministrator Interbull Centre ################ Sent from my phone ################ On 13 July 2018 at 11:28, Kotresh Hiremath Ravishankar <khiremat at redhat.com> wrote: Hi Marcus, Is the
2012 Mar 20
1
issues with geo-replication
Hi all. I'm looking to see if anyone can tell me this is already working for them or if they wouldn't mind performing a quick test. I'm trying to set up a geo-replication instance on 3.2.5 from a local volume to a remote directory. This is the command I am using: gluster volume geo-replication myvol ssh://root at remoteip:/data/path start I am able to perform a geo-replication
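For a directory slave like that, the remote end generally needs rsync and the glusterfs geo-replication bits installed as well, so it is worth verifying both before digging into a faulty state. A rough check, reusing the names from the post where given:

# ssh root@remoteip 'test -d /data/path && echo dir-ok'
# ssh root@remoteip 'rsync --version | head -1; glusterfs --version | head -1'
# gluster volume geo-replication myvol ssh://root@remoteip:/data/path status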
2017 Sep 29
1
Gluster geo replication volume is faulty
I am trying to set up geo-replication between two Gluster volumes. I have set up two replica 2 arbiter 1 volumes with 9 bricks. [root at gfs1 ~]# gluster volume info Volume Name: gfsvol Type: Distributed-Replicate Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306 Status: Started Snapshot Count: 0 Number of Bricks: 3 x (2 + 1) = 9 Transport-type: tcp Bricks: Brick1: gfs2:/gfs/brick1/gv0 Brick2:
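A 3 x (2 + 1) layout like the one above is created with nine bricks passed in groups of three, the third brick of each group being the arbiter. A sketch with placeholder hosts and brick paths (not the exact bricks from the thread):

# gluster volume create gfsvol replica 3 arbiter 1 \
    gfs1:/gfs/brick1/gv0 gfs2:/gfs/brick1/gv0 gfs3:/gfs/arbiter1/gv0 \
    gfs1:/gfs/brick2/gv0 gfs2:/gfs/brick2/gv0 gfs3:/gfs/arbiter2/gv0 \
    gfs1:/gfs/brick3/gv0 gfs2:/gfs/brick3/gv0 gfs3:/gfs/arbiter3/gv0
# gluster volume start gfsvol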
2011 Jul 25
1
Problem with Gluster Geo Replication, status faulty
Hi, I've set up Gluster geo-replication according to the manual: # sudo gluster volume geo-replication flvol ssh://root at ec2-67-202-22-159.compute-1.amazonaws.com:file:///mnt/slave config log-level DEBUG # sudo gluster volume geo-replication flvol ssh://root at ec2-67-202-22-159.compute-1.amazonaws.com:file:///mnt/slave start # sudo gluster volume geo-replication flvol ssh://root at
2018 Mar 06
1
geo replication
Hi, I have problems with geo-replication on glusterfs 3.12.6 / Ubuntu 16.04. I can see a 'master volinfo unavailable' in the master logfile. Any ideas? Master: Status of volume: testtomcat Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick gfstest07:/gfs/testtomcat/mount 49153 0
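'Master volinfo unavailable' suggests the geo-rep worker could not fetch the master volume's information from glusterd, so comparing the plain volume info with the detailed session status and the glusterd log on the master is a reasonable first step. The slave names below are placeholders:

# gluster volume info testtomcat
# gluster volume geo-replication testtomcat slavehost::slavevol status detail
# tail -n 100 /var/log/glusterfs/glusterd.log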
2011 Sep 28
1
Custom rpms failing
I have managed to build i386 rpms for CentOS, based on the 3.2.3 SRPM, but they don't work: # rpm -Uhv glusterfs-core-3.2.3-1.i386.rpm glusterfs-fuse-3.2.3-1.i386.rpm glusterfs-rdma-3.2.3-1.i386.rpm Preparing... ########################################### [100%] 1:glusterfs-core ########################################### [ 33%] glusterd: error while loading shared
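The truncated "error while loading shared ..." is the dynamic linker complaining about a library the rebuilt binaries need but cannot resolve. A quick way to identify which one, assuming the stock RPM install paths:

# ldd /usr/sbin/glusterd | grep 'not found'
# rpm -qp --requires glusterfs-core-3.2.3-1.i386.rpm
# ldconfig -p | grep -i glusterfs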
2018 Jan 19
2
geo-replication command rsync returned with 3
Dear All, we are running a distributed-replicated volume on 4 nodes, including geo-replication to another location. The geo-replication was running fine for months; since 18 Jan it has been faulty. The geo-rep log on the master shows the following error in a loop, while the logs on the slave just show 'I' (informational) messages... Somewhat suspicious are the frequent 'shutting down connection'
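Rsync exit code 3 is "errors selecting input/output files, dirs" (see the follow-up further down this list), so running a transfer by hand over the same SSH identity geo-rep normally uses is a quick way to see which side trips up. Host and test path here are placeholders; the key path is the usual default:

# rsync --version                                     (compare the versions on master and slave)
# rsync -avz -e "ssh -i /var/lib/glusterd/geo-replication/secret.pem" /tmp/georep-rsync-test root@slavehost:/tmp/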
2017 Aug 17
0
Extended attributes not supported by the backend storage
Hi, I have a Glusterfs (v3.11.2-1) geo replication master-slave setup between two sites. The idea is to provide an off-site backup for my storage. When I start the session, I get the following message: [2017-08-15 20:07:41.110635] E [fuse-bridge.c:3484:fuse_xattr_cbk] 0-glusterfs-fuse: extended attribute not supported by the backend storage Then it starts syncing the data but it stops at the
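That fuse_xattr_cbk error typically points at a brick filesystem on one of the two sites rejecting extended attributes (for example an ext mount without user_xattr). A quick test run directly on a brick, assuming the attr tools are installed and /bricks/brick1 is a placeholder path:

# touch /bricks/brick1/xattr-test
# setfattr -n user.georep.test -v ok /bricks/brick1/xattr-test
# getfattr -n user.georep.test /bricks/brick1/xattr-test
# getfattr -d -m . -e hex /bricks/brick1/xattr-test      (as root, should also list the trusted.* gluster attributes)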
2017 Aug 16
0
Geo replication faulty-extended attribute not supported by the backend storage
Hi, I have a Glusterfs (v3.11.2-1) geo replication master-slave setup between two sites. The idea is to provide an off-site backup for my storage. When I start the session, I get the following message: [2017-08-15 20:07:41.110635] E [fuse-bridge.c:3484:fuse_xattr_cbk] 0-glusterfs-fuse: extended attribute not supported by the backend storage Then it starts syncing the data but it stops at the
2024 Feb 05
1
Graceful shutdown doesn't stop all Gluster processes
Hello Everyone, I am using GlusterFS 9.4, and whenever we use the systemctl command to stop the Gluster server, it leaves many Gluster processes running. So, I just want to check how to shut down the Gluster server in a graceful manner. Is there any specific sequence or trick I need to follow? Currently, I am using the following command: [root at master2 ~]# systemctl stop glusterd.service
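Stopping glusterd only stops the management daemon; brick (glusterfsd) and client-side (glusterfs) processes are left running on purpose so a glusterd restart does not interrupt I/O. To take a node down completely, a sequence along these lines is common; the helper script path is what recent packages ship, adjust if yours differs:

# systemctl stop glusterd
# /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh     (if your packages install it)
# pgrep -af gluster                                              (verify nothing is left running)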
2012 Apr 27
1
geo-replication and rsync
Hi, can someone tell me the difference between geo-replication and plain rsync? At what frequency are files replicated with geo-replication?
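In short: geo-replication is a continuously running, incremental sync driven by the master (it detects changed files and ships them, using rsync underneath, shortly after they change), while plain rsync is a full-tree crawl you have to schedule yourself. As a rough illustration with placeholder names, the two approaches look like:

# gluster volume geo-replication datavol backuphost::backupvol start    (one-time start, then syncs continuously)

versus a cron job such as:

*/30 * * * * rsync -az --delete /mnt/datavol/ backuphost:/backup/datavol/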
2012 Jan 03
1
geo-replication loops
Hi, I was thinking about a common (I hope!) use case of Glusterfs geo-replication. Imagine 3 different facilities, each having their own glusterfs deployment: * central-office * remote-office1 * remote-office2 Every client mounts its local glusterfs deployment and writes files (i.e.: user A deposits a PDF document on remote-office2), and it gets replicated to the central-office glusterfs volume as soon
2013 Oct 23
3
Samba vfs_glusterfs Quota Support?
Hi All, I'm setting up a gluster cluster that will be accessed via smb. I was hoping that the quotas. I've configured a quota on the path itself: # gluster volume quota gfsv0 list path limit_set size ---------------------------------------------------------------------------------- /shares/testsharedave 10GB 8.0KB And I've
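For the directory quota to show up in what Samba reports to clients, the usual combination is the vfs_glusterfs module plus the quota-deem-statfs volume option, so the volume answers statfs with the quota limit rather than the full brick size. A sketch, reusing the volume and share names from the post; the smb.conf details are generic vfs_glusterfs usage, not taken from the thread:

# gluster volume set gfsv0 features.quota-deem-statfs on

and in smb.conf:

[testsharedave]
    path = /shares/testsharedave
    vfs objects = glusterfs
    glusterfs:volume = gfsv0
    glusterfs:volfile_server = localhost
    read only = no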
2017 Oct 06
0
Gluster geo replication volume is faulty
On 09/29/2017 09:30 PM, rick sanchez wrote: > I am trying to get up geo replication between two gluster volumes > > I have set up two replica 2 arbiter 1 volumes with 9 bricks > > [root at gfs1 ~]# gluster volume info > Volume Name: gfsvol > Type: Distributed-Replicate > Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306 > Status: Started > Snapshot Count: 0 > Number
2024 Jan 27
1
Geo-replication status is getting Faulty after few seconds
Don't forget to test with the georep key. I think it was /var/lib/glusterd/geo-replication/secret.pem Best Regards, Strahil Nikolov On Saturday, 27 January 2024 at 07:24:07 GMT+2, Strahil Nikolov <hunter86_bg at yahoo.com> wrote: Hi Anant, I would first start by checking whether you can do ssh from all masters to the slave node. If you haven't set up a dedicated user for the
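The check suggested above is a one-liner per master node; if a dedicated non-root geo-rep user is in use, substitute it for root (slavehost and geoaccount are placeholders):

# ssh -i /var/lib/glusterd/geo-replication/secret.pem root@slavehost hostname
# ssh -i /var/lib/glusterd/geo-replication/secret.pem geoaccount@slavehost hostname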
2018 Jan 19
0
geo-replication command rsync returned with 3
Fwiw, rsync error 3 is: "Errors selecting input/output files, dirs" On January 19, 2018 7:36:18 AM PST, Dietmar Putz <dietmar.putz at 3qsdn.com> wrote: >Dear All, > >we are running a dist. repl. volume on 4 nodes including >geo-replication >to another location. >the geo-replication was running fine for months. >since 18th jan. the geo-replication is faulty.
2011 Mar 31
1
Error rpmbuild Glusterfs 3.1.3
Hi, I have a lot of trouble when I try to build RPMs out of the glusterfs 3.1.3 tgz on my SLES servers (SLES 10.1 & SLES 11.1). Everything runs fine, I guess, until it tries to build the RPMs; then I always run into this error: RPM build errors: File not found: /var/tmp/glusterfs-3.1.3-1-root/opt/glusterfs/3.1.3/local/libexec/gsyncd File not found by glob:
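The missing file is the geo-replication helper, so the spec's file list expects geo-replication to have been built; a likely culprit on older releases is configure quietly skipping those pieces (they depend on Python) while the spec still lists them. A rough way to confirm what actually got built, using the paths from the error message:

# rpmbuild -ta glusterfs-3.1.3.tar.gz 2>&1 | tee build.log
# grep -i 'geo\|gsyncd\|python' build.log | head
# find /var/tmp/glusterfs-3.1.3-1-root -name 'gsyncd*'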