Displaying 20 results from an estimated 300 matches similar to: "Geo replication faulty-extended attribute not supported by the backend storage"
2017 Aug 17
0
Extended attributes not supported by the backend storage
Hi,
I have a GlusterFS (v3.11.2-1) geo-replication master-slave setup between two sites. The idea is to provide an off-site backup of my storage. When I start the session, I get the following message:
[2017-08-15 20:07:41.110635] E [fuse-bridge.c:3484:fuse_xattr_cbk] 0-glusterfs-fuse: extended attribute not supported by the backend storage
Then it starts syncing the data but it stops at the
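A quick way to confirm whether the brick's backend filesystem really supports extended attributes is to set and read one by hand; the brick path below is hypothetical:
setfattr -n user.xattr-test -v ok /data/brick1 && getfattr -n user.xattr-test /data/brick1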
2018 Jan 22
1
geo-replication initial setup with existing data
2017 Oct 06
0
Gluster geo replication volume is faulty
On 09/29/2017 09:30 PM, rick sanchez wrote:
> I am trying to set up geo-replication between two gluster volumes
>
> I have set up two replica 2 arbiter 1 volumes with 9 bricks
>
> [root@gfs1 ~]# gluster volume info
> Volume Name: gfsvol
> Type: Distributed-Replicate
> Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306
> Status: Started
> Snapshot Count: 0
> Number
2012 Mar 20
1
issues with geo-replication
Hi all. I'm looking to see if anyone can tell me whether this is already
working for them, or if they wouldn't mind performing a quick test.
I'm trying to set up a geo-replication instance on 3.2.5 from a local
volume to a remote directory. This is the command I am using:
gluster volume geo-replication myvol ssh://root@remoteip:/data/path start
I am able to perform a geo-replication
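A reasonable next step, reusing the same volume name and slave URL as in the command above, would be to query the session state:
gluster volume geo-replication myvol ssh://root@remoteip:/data/path status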
2017 Sep 29
1
Gluster geo replication volume is faulty
I am trying to set up geo-replication between two gluster volumes
I have set up two replica 2 arbiter 1 volumes with 9 bricks
[root@gfs1 ~]# gluster volume info
Volume Name: gfsvol
Type: Distributed-Replicate
Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: gfs2:/gfs/brick1/gv0
Brick2:
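For reference, a volume of this shape (3 x (2 + 1) = 9) would typically be created along these lines; the hostnames and brick paths here are illustrative, not taken from the thread:
gluster volume create gfsvol replica 3 arbiter 1 \
  host1:/bricks/b1 host2:/bricks/b1 host3:/bricks/b1 \
  host1:/bricks/b2 host2:/bricks/b2 host3:/bricks/b2 \
  host1:/bricks/b3 host2:/bricks/b3 host3:/bricks/b3
Every third brick in each triplet becomes the arbiter, which holds only metadata.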
2018 Jul 13
2
Upgrade to 4.1.1 geo-replication does not work
Hi Kotresh,
Yes, all nodes have the same version, 4.1.1, on both master and slave.
All glusterd processes are crashing on the master side.
Will send logs tonight.
Thanks,
Marcus
################
Marcus Pedersén
Systemadministrator
Interbull Centre
################
Sent from my phone
################
On 13 July 2018 at 11:28, Kotresh Hiremath Ravishankar <khiremat at redhat.com> wrote:
Hi Marcus,
Is the
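While gathering those logs, a hedged first check on each master node is the installed version and any backtrace glusterd wrote when it crashed (the log path is the usual default, not confirmed in the thread):
gluster --version
grep -A 20 'signal received' /var/log/glusterfs/glusterd.log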
2018 Jan 19
2
geo-replication command rsync returned with 3
Dear All,
We are running a distributed-replicated volume on 4 nodes, including
geo-replication to another location.
The geo-replication was running fine for months.
Since 18 January the geo-replication has been faulty. The geo-rep log on the
master shows the following error in a loop, while the logs on the slave just
show 'I' (info) messages...
Somewhat suspicious are the frequent 'shutting down connection'
2018 Jan 19
0
geo-replication command rsync returned with 3
Fwiw, rsync error 3 is:
"Errors selecting input/output files, dirs"
On January 19, 2018 7:36:18 AM PST, Dietmar Putz <dietmar.putz at 3qsdn.com> wrote:
>Dear All,
>
>we are running a dist. repl. volume on 4 nodes including
>geo-replication
>to another location.
>the geo-replication was running fine for months.
>since 18th jan. the geo-replication is faulty.
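To see exactly what gsyncd handed to rsync, grepping the geo-replication master log is a reasonable first step; the session directory name is a placeholder:
grep rsync /var/log/glusterfs/geo-replication/<mastervol>/*.log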
2011 Jul 25
1
Problem with Gluster Geo Replication, status faulty
Hi,
I've set up Gluster geo-replication according to the manual,
# sudo gluster volume geo-replication flvol
ssh://root@ec2-67-202-22-159.compute-1.amazonaws.com:file:///mnt/slave
config log-level DEBUG
# sudo gluster volume geo-replication flvol
ssh://root@ec2-67-202-22-159.compute-1.amazonaws.com:file:///mnt/slave start
# sudo gluster volume geo-replication flvol
ssh://root@
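When a session set up this way stays faulty, the usual next check is its status, reusing the same volume and slave URL:
sudo gluster volume geo-replication flvol ssh://root@ec2-67-202-22-159.compute-1.amazonaws.com:file:///mnt/slave status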
2018 Mar 06
1
geo replication
Hi,
I have problems with geo-replication on GlusterFS 3.12.6 / Ubuntu 16.04.
I can see a 'master volinfo unavailable' message in the master logfile.
Any ideas?
Master:
Status of volume: testtomcat
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick gfstest07:/gfs/testtomcat/mount 49153 0
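Since the excerpt is cut off, a hedged way to chase 'master volinfo unavailable' is to confirm the master volume is still known and then query the session; the slave host and volume below are placeholders:
gluster volume info testtomcat
gluster volume geo-replication testtomcat <slavehost>::<slavevol> status detail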
2018 Apr 23
0
Geo-replication faulty
Hi all,
I set up my gluster cluster with geo-replication a couple of weeks ago
and everything worked fine!
Today I discovered that one of the master nodes' geo-replication
status is faulty.
On master side: Distributed-replicated 2 x (2 + 1) = 6
On slave side: Replicated 1 x (2 + 1) = 3
After checking logs I see that the master node has the following error:
OSError: Permission denied
Looking at
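For an OSError: Permission denied during geo-rep, the slave-side gsyncd log usually names the path being denied; the path below is the common default, assumed here:
tail -n 50 /var/log/glusterfs/geo-replication-slaves/*/*.log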
2011 May 03
3
Issue with geo-replication and nfs auth
hi,
I have some issues with geo-replication (since 3.2.0) and NFS auth (since the initial release).
Geo-replication
---------------
System : Debian 6.0 amd64
Glusterfs: 3.2.0
MASTER (volume) => SLAVE (directory)
For some volumes it works, but for others I can't enable geo-replication and get this error with a faulty status:
[2011-05-03 09:57:40.315774] E
2001 May 02
1
Problems getting Diablo2 to run
Hello everyone,
I'm trying to run Diablo2 version 1.06 on SuSE Linux 6.4 (kernel 2.4.2) with
WINE 20010418. Both the contents of the windoze installation and the game
have been copied over the network from another machine where everything
works happily.
When I run the game nothing really seems to be happening. With tracing
enabled I can see that after some initialization the game just spins
2024 Jan 27
1
Geo-replication status is getting Faulty after few seconds
Don't forget to test with the georep key. I think it was /var/lib/glusterd/geo-replication/secret.pem
Best Regards,
Strahil Nikolov
On Saturday, 27 January 2024 at 07:24:07 GMT+2, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
Hi Anant,
I would first start by checking whether you can SSH from all masters to the slave node. If you haven't set up a dedicated user for the
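A minimal way to run the check both replies suggest, combining the dedicated key with a plain SSH attempt (the slave hostname is hypothetical):
ssh -i /var/lib/glusterd/geo-replication/secret.pem root@slavenode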
2012 Jan 03
1
geo-replication loops
Hi,
I was thinking about a common (I hope!) use case of GlusterFS geo-replication.
Imagine 3 different facilities, each having their own GlusterFS deployment:
* central-office
* remote-office1
* remote-office2
Every client mounts their local GlusterFS deployment and writes files
(e.g., user A deposits a PDF document on remote-office2), and they get
replicated to the central-office GlusterFS volume as soon
2024 Jan 24
1
Geo-replication status is getting Faulty after few seconds
Hi All,
I have run the following commands on master3, and that has added master3 to geo-replication.
gluster system:: execute gsec_create
gluster volume geo-replication tier1data drtier1data::drtier1data create push-pem force
gluster volume geo-replication tier1data drtier1data::drtier1data stop
gluster volume geo-replication tier1data drtier1data::drtier1data start
Now I am able to start the
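Assuming the session started cleanly, the natural verification step reuses the same master volume and slave as in the commands above:
gluster volume geo-replication tier1data drtier1data::drtier1data status detail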
2024 Jan 27
1
Geo-replication status is getting Faulty after few seconds
Hi Anant,
I would first start by checking whether you can SSH from all masters to the slave node. If you haven't set up a dedicated user for the session, then gluster is using root.
Best Regards,
Strahil Nikolov
On Friday, 26 January 2024 at 18:07:59 GMT+2, Anant Saraswat <anant.saraswat at techblue.co.uk> wrote:
Hi All,
I have run the following commands on master3,
2001 Jun 27
1
err:ntdll:RtlpWaitForCriticalSection...
Hi there,
I'm relatively new to this Linux stuff and especially to Wine.
I'm trying to bring up a piece of software that uses FOXW2600.ESL as its
runtime library (as far as I understand it).
All I can get out of Wine is a blank, black (managed) screen and a
bunch of messages in the console at startup.
At the end, Wine claims to be successful.
I'm running
SuSE 7.2
codeweavers-wine-20010305
2011 Mar 31
1
Error rpmbuild Glusterfs 3.1.3
Hi,
I have a lot of trouble when I try to build RPMs out of the glusterfs
3.1.3 tgz on my SLES servers (SLES 10.1 & SLES 11.1).
Everything runs fine, I guess, until it tries to build the RPMs.
Then I always run into this error:
RPM build errors:
File not found:
/var/tmp/glusterfs-3.1.3-1-root/opt/glusterfs/3.1.3/local/libexec/gsyncd
File not found by glob:
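For context, building RPMs straight from the tarball is usually done with rpmbuild's -ta mode; a minimal sketch, with the tarball name assumed from the subject:
rpmbuild -ta glusterfs-3.1.3.tgz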
2024 Feb 05
1
Graceful shutdown doesn't stop all Gluster processes
Hello Everyone,
I am using GlusterFS 9.4, and whenever I use the systemctl command to stop the Gluster server, it leaves many Gluster processes running. So I just want to check how to shut down the Gluster server gracefully.
Is there any specific sequence or trick I need to follow? Currently, I am using the following command:
[root@master2 ~]# systemctl stop glusterd.service
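One commonly suggested approach, hedged here because it depends on your packaging, is to stop glusterd and then explicitly stop the processes it leaves behind:
systemctl stop glusterd
pkill glusterfsd   # brick processes
pkill glusterfs    # self-heal, geo-rep workers, and other client processes
Some builds also ship a helper that does this in the right order: /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh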