Displaying 20 results from an estimated 100 matches similar to: "can't set up geo-replication: can't fetch slave details"
2023 Mar 21
1
can't set up geo-replication: can't fetch slave details
Hi,
is this a rare problem?
Cheers,
Kingsley.
On Tue, 2023-03-14 at 19:31 +0000, Kingsley Tart wrote:
> Hi,
>
> using Gluster 9.2 on Debian 11 I'm trying to set up geo-replication.
> I am following this guide:
>
>
https://docs.gluster.org/en/main/Administrator-Guide/Geo-Replication/#password-less-ssh
>
> I have a volume called "ansible" which is only a
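For context, the steps in that guide reduce to roughly the following; a minimal sketch assuming a root-based session, where the master volume name "ansible" comes from the post but the slave host and slave volume names are placeholders:
# ssh-keygen
# ssh-copy-id root@<SLAVEHOST>
# gluster system:: execute gsec_create
# gluster volume geo-replication ansible <SLAVEHOST>::<SLAVEVOL> create push-pem
# gluster volume geo-replication ansible <SLAVEHOST>::<SLAVEVOL> start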
2024 Sep 01
1
geo-rep will not initialize
FYI, I will be traveling for the next week, and may not see email much
until then.
Your questions...
On 8/31/24 04:59, Strahil Nikolov wrote:
> One silly question: Did you try adding some files on the source volume
> after the georep was created?
Yes. I wondered that, too, whether geo-rep would not start simply
because there was nothing to do. But yes, there are a few files created
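For reference, the state of a session can be inspected with something like this (the volume and slave names are placeholders, not taken from the thread):
# gluster volume geo-replication <MASTERVOL> <SLAVEHOST>::<SLAVEVOL> status detail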
2017 Aug 18
2
gverify.sh purpose
Hi,
When creating a geo-replication session, is gverify.sh used or run, respectively? Or is gverify.sh just an ad-hoc command to test manually whether creating a geo-replication session would succeed?
Best,
M.
2017 Aug 21
0
gverify.sh purpose
On Saturday 19 August 2017 02:05 AM, mabi wrote:
> Hi,
>
> When creating a geo-replication session, is gverify.sh used or run,
> respectively?
Yes, it is executed as part of geo-replication session creation.
> Or is gverify.sh just an ad-hoc command to test manually whether creating a
> geo-replication session would succeed?
>
No need to run it separately.
~
Saravana
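In other words, the checks performed by gverify.sh run automatically during "create". A small sketch with placeholder names; the documented "no-verify" option skips those checks:
# gluster volume geo-replication <MASTERVOL> <SLAVEHOST>::<SLAVEVOL> create push-pem
# gluster volume geo-replication <MASTERVOL> <SLAVEHOST>::<SLAVEVOL> create no-verify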
2024 Aug 30
1
geo-rep will not initialize
On 8/30/24 04:17, Strahil Nikolov wrote:
> Have you done the following setup on the receiving gluster volume:
Yes. For completeness' sake:
grep geoacct /etc/passwd /etc/group
/etc/passwd:geoacct:x:5273:5273:gluster geo-replication:/var/lib/glusterd/geoacct:/bin/bash
/etc/group:geoacct:x:5273:
gluster-mountbroker status
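For reference, the receiving-side mountbroker setup being asked about is typically done roughly like this (a sketch: the geoacct user comes from the output above, while the mountbroker root path and the slave volume name are assumptions based on the documentation):
# gluster-mountbroker setup /var/mountbroker-root geoacct
# gluster-mountbroker add <SLAVEVOL> geoacct
# systemctl restart glusterd
# gluster-mountbroker status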
2010 Feb 02
1
OS X Clients Can't Create Sub-Directories
I'm running samba on a local linux server, with a bunch of shares. Over the last several years, this has worked perfectly in our heterogenous network of OS X and Windows. All my windows clients still work perfectly - my users can mount the samba shares and create, rename, move etc files and folders.
However, recently (starting yesterday) my OS X clients are unable to rename any sub
2020 Oct 27
4
Unable to get dummy interfaces to persist across reboots in CentOS 8
Have you tried to use NetworkManager?
After all, anything network-related should be done by it.
[root@system ~]# nmcli connection add con-name dummy0 ifname dummy0 type dummy
Connection 'dummy0' (9fdd74fa-c143-4991-9bac-0e542704ac89) successfully added.
[root@system ~]# reboot
Shared connection to glustera closed.
[root@system ~]# uptime
03:23:44 up 0 min, 1 user, load
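If the dummy also needs an address that survives reboots, NetworkManager can persist that as well; a small sketch (the address is just an example from the documentation range, not from the thread):
[root@system ~]# nmcli connection modify dummy0 ipv4.method manual ipv4.addresses 192.0.2.10/32
[root@system ~]# nmcli connection up dummy0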
2017 Nov 13
2
snapshot mount fails in 3.12
Hi,
quick question about snapshot mounting: Were there changes in 3.12 that
were not mentioned in the release notes for snapshot mounting?
I recently upgraded from 3.10 to 3.12 on CentOS (using
centos-release-gluster312). The upgrade worked flawlessly. The volume
works fine too. But mounting a snapshot fails with those two error messages:
[2017-11-13 08:46:02.300719] E
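For context, the usual snapshot mount procedure looks roughly like this (a sketch with placeholder names; the post does not show the exact command that was used):
# gluster snapshot activate <SNAPNAME>
# mount -t glusterfs <HOST>:/snaps/<SNAPNAME>/<VOLNAME> /mnt/snap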
2020 Oct 28
0
Unable to get dummy interfaces to persist across reboots in CentOS 8
No. NetworkManager has always been disabled on our builds, since at least
the CentOS 5 days. The network stack has always been manageable without
relying on NetworkManager. Is that now an absolute requirement? It never
has been before.
On Mon, Oct 26, 2020 at 6:26 PM Strahil Nikolov via CentOS
<centos@centos.org> wrote:
>
> Have you tried to use NetworkManager ?
> After
2017 Jul 30
1
Lose gnfs connection during test
Hi all
I use Distributed-Replicate(12 x 2 = 24) hot tier plus
Distributed-Replicate(36 x (6 + 2) = 288) cold tier with gluster3.8.4
for a performance test. When I set client/server.event-threads to small
values such as 2, it works OK. But if I set client/server.event-threads to large
values such as 32, the connections always become unavailable during
the test, with the following error messages in stree
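For reference, those thread counts are ordinary per-volume options (the volume name below is a placeholder; the values are the ones mentioned above):
# gluster volume set <VOLNAME> client.event-threads 2
# gluster volume set <VOLNAME> server.event-threads 2
## - raising both to 32 is what triggers the disconnects described above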
2017 Nov 13
0
snapshot mount fails in 3.12
Hi Richard,
Thanks for posting this.
This issue is caused by a regression in version 3.12.0 [1], and
is already fixed in version 3.12.3 [2] (3.12.3 is tagged now with a couple of
further subdirectory-mount-related fixes).
[1] - https://bugzilla.redhat.com/show_bug.cgi?id=1501235
[2] - https://review.gluster.org/#/c/18506/
If you don't want to change the versions, then please remove
2020 Oct 28
1
Unable to get dummy interfaces to persist across reboots in CentOS 8
Requirement is a very strong word, but you should consider using it, and here is a short demo why:
- By default, RHEL uses NetworkManager to configure and manage network connections, and the /usr/sbin/ifup and /usr/sbin/ifdown scripts use NetworkManager to process ifcfg files in the /etc/sysconfig/network-scripts/ directory.
[root@system ~]# ls -l /usr/sbin/ifup
lrwxrwxrwx. 1 root root 22 21
2011 Oct 25
1
problems with gluster 3.2.4
Hi, we have 4 test machines (gluster01 to gluster04).
I've created a replicated volume with the 4 machines.
Then on the client machine I've executed:
mount -t glusterfs gluster01:/volume01 /mnt/gluster
And everything works ok.
The main problem occurs on every client machine when I do:
umount /mnt/gluster
and then
mount -t glusterfs gluster01:/volume01 /mnt/gluster
The client
2020 Oct 27
0
Unable to get dummy interfaces to persist across reboots in CentOS 8
Anyone have any ideas? It's rather annoying that I can't get these to
persist across reboots without using some kind of helper script.
On Fri, Oct 16, 2020 at 6:37 AM Frank Even
<lists+centos.org@elitists.org> wrote:
>
> Hello all, hoping someone can help me out here.
>
> I cannot get dummy interfaces on a new Cent8 build to persist across reboots.
>
> On Cent7
2020 Oct 16
3
Unable to get dummy interfaces to persist across reboots in CentOS 8
Hello all, hoping someone can help me out here.
I cannot get dummy interfaces on a new Cent8 build to persist across reboots.
On Cent7 - this is the process I use:
Create Dummies:
# cat /etc/modules-load.d/dummy.conf
dummy
# cat /etc/modprobe.d/dummyopts.conf
options dummy numdummies=4
# ip link add dummy0 type dummy
## - repeating with ascending dummyN adapters for as many as needed
# service
2019 Aug 28
4
Permission Issue
Hi again,
regarding my post "plenty of vacuuuming process" a "gluster volume heal"
seems to improve the situation.
But I still have a strange problem:
Sometimes a user doesn't have permissions to a restricted folder when he
connects to a share or logs in at a Windows client. At other times all
permissions are granted. If the user creates a file, the user and group
is
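For reference, the heal mentioned above is typically triggered and checked like this (volume name is a placeholder):
# gluster volume heal <VOLNAME>
# gluster volume heal <VOLNAME> info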
2018 May 08
1
mount failing client to gluster cluster.
Hi,
On a debian 9 client,
========
root@kvm01:/var/lib/libvirt# dpkg -l glusterfs-client
8><---
ii glusterfs-client 3.8.8-1 amd64
clustered file-system (client package)
root@kvm01:/var/lib/libvirt#
=======
I am trying to do a mount to a CentOS 7 gluster setup,
=======
[root@glustep1 libvirt]# rpm -q glusterfs
glusterfs-4.0.2-1.el7.x86_64
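A hedged sketch of how such a mount is usually retried with extra client-side logging to see why it fails (volume name, mount point and log path are assumptions, not from the post); given a 3.8.8 client talking to a 4.0.2 server, a client/server version gap is one likely culprit:
root@kvm01:/var/lib/libvirt# mount -t glusterfs -o log-level=DEBUG,log-file=/tmp/gluster-mount.log glustep1:/<VOLNAME> /mnt/gluster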
2017 Dec 29
1
cannot mount with glusterfs-fuse after NFS-Ganesha enabled
Hi,
I've created a 2-node GlusterFS test setup (Gluster 3.8).
Without enabling NFS-Ganesha, when I try to mount from a client using the
glusterfs option, everything works.
However, after enabling NFS-Ganesha, when I try to mount from a client
using the glusterfs option (fuse), it fails with the following output (when
using the log-file option):
[2017-12-28 08:15:30.109110] I [MSGID: 100030]
2013 Mar 14
1
glusterfs 3.3 self-heal daemon crash and can't be started
Dear glusterfs experts,
Recently we have encountered a self-heal daemon crash issue after
rebalancing a volume.
Crash stack bellow:
+------------------------------------------------------------------------------+
pending frames:
patchset: git://git.gluster.com/glusterfs.git
signal received: 11
time of crash: 2013-03-14 16:33:50
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread
2017 Sep 08
1
pausing scrub crashed scrub daemon on nodes
Hi,
I am using glusterfs 3.10.1 with 30 nodes each with 36 bricks and 10 nodes
each with 16 bricks in a single cluster.
By default I have paused the scrub process so that it can be run manually. For the
first time, I was trying to run scrub-on-demand and it was running fine,
but after some time, I decided to pause the scrub process due to high CPU usage
and users reporting that folder listings were taking time.
But scrub
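For reference, the scrub operations described above map onto these commands (volume name is a placeholder):
# gluster volume bitrot <VOLNAME> scrub ondemand
# gluster volume bitrot <VOLNAME> scrub pause
# gluster volume bitrot <VOLNAME> scrub resume
# gluster volume bitrot <VOLNAME> scrub status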