Displaying 20 results from an estimated 500 matches similar to: "geo-rep will not initialize"
2024 Sep 01
1
geo-rep will not initialize
FYI, I will be traveling for the next week, and may not see email much
until then.
Your questions...
On 8/31/24 04:59, Strahil Nikolov wrote:
> One silly question: Did you try adding some files on the source volume
> after the georep was created ?
Yes. I wondered that, too, whether geo-rep would not start simply
because there was nothing to do. But yes, there are a few files created
2024 Aug 15
1
geo-rep will not initialize
I am trying to test a trivial configuration of 2 hosts, each of which
has a simple 1-brick volume that I wish to geo-replicate from one to the other.
When I first experimented with this a couple years ago, it worked, but
that effort ended prematurely and I never finished the real setup. I am
coming back to it now for other purposes.
I'm on Fedora 39 with gluster 11.1. I'm using this guide:
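For orientation, a minimal sketch of the kind of two-host test described above, assuming hypothetical hosts host1/host2 and volumes srcvol/dstvol (these names, brick paths, and the use of root SSH are illustrative, not taken from the original post):
# on host1 (master): create and start a 1-brick source volume
gluster volume create srcvol host1:/data/bricks/srcvol force
gluster volume start srcvol
# on host2 (slave): create and start a 1-brick destination volume
gluster volume create dstvol host2:/data/bricks/dstvol force
gluster volume start dstvol
# on host1, with password-less root SSH to host2 already in place:
gluster system:: execute gsec_create
gluster volume geo-replication srcvol host2::dstvol create push-pem
gluster volume geo-replication srcvol host2::dstvol start
gluster volume geo-replication srcvol host2::dstvol status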
2024 Aug 18
1
geo-rep will not initialize
Hi Karl,
I don't see anything mentioning shared storage in the docs, and I assume it's now automatic, but can you check 'gluster volume get all cluster.enable-shared-storage'?
I would give RH's documentation a try; despite being old, it has some steps (like the shared volume) that might be needed:
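A quick sketch of checking and, if needed, enabling the shared-storage volume mentioned above (whether it is actually required here is exactly what the thread is discussing):
# check whether the shared storage volume is enabled
gluster volume get all cluster.enable-shared-storage
# enable it if it is off (creates and mounts gluster_shared_storage)
gluster volume set all cluster.enable-shared-storage enable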
2024 Aug 19
1
geo-rep will not initialize
On 8/18/24 16:41, Strahil Nikolov wrote:
> I don't see anything mentioning shared storage in the docs, and I
> assume it's now automatic, but can you check 'gluster volume get all
> cluster.enable-shared-storage'?
> I would give RH's documentation a try; despite being old, it has
> some steps (like the shared volume) that might be needed
I appreciate the
2024 Aug 22
1
geo-rep will not initialize
Hi,
Yeah, shared storage is needed only for more than 2 nodes, to sync the geo-rep status.
If I have some time, I can try to reproduce it if you could provide the gluster version, operating system and volume options.
Best Regards,
Strahil Nikolov
On Mon, Aug 19, 2024 at 4:45, Karl Kleinpaste <karl at kleinpaste.org> wrote: On 8/18/24 16:41, Strahil Nikolov wrote:
I don't see
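A short sketch of commands that would collect the details requested above (gluster version, operating system, and volume options); the volume name is a placeholder:
gluster --version
cat /etc/os-release
gluster volume info <mastervol>
gluster volume get <mastervol> all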
2017 Aug 08
1
How to delete geo-replication session?
Sorry I missed your previous mail.
Please perform the following steps once a new node is added
- Run gsec create command again
gluster system:: execute gsec_create
- Run Geo-rep create command with force and run start force
gluster volume geo-replication <mastervol> <slavehost>::<slavevol>
create push-pem force
gluster volume geo-replication <mastervol>
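The snippet above is cut off; a sketch of the full sequence it describes (gsec_create, then create with force, then start with force), using the same placeholders:
# regenerate and collect the geo-rep ssh keys after the new node is added
gluster system:: execute gsec_create
# recreate the session so the new node's keys are pushed to the slave
gluster volume geo-replication <mastervol> <slavehost>::<slavevol> create push-pem force
# restart the session
gluster volume geo-replication <mastervol> <slavehost>::<slavevol> start force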
2018 Feb 07
0
add geo-replication "passive" node after node replacement
Hi,
When S3 is added to the master volume from the new node, the following commands
should be run to generate and distribute SSH keys
1. Generate ssh keys from new node
#gluster system:: execute gsec_create
2. Push those ssh keys of new node to slave
#gluster vol geo-rep <mastervol> <slavehost>::<slavevol> create
push-pem force
3. Stop and start geo-rep
But note that
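A sketch of steps 1-3 above as a single command sequence (placeholders as in the post; the stop/start in step 3 is spelled out here as an assumption of the usual commands):
# 1. generate ssh keys from the new node
gluster system:: execute gsec_create
# 2. push the new node's keys to the slave
gluster volume geo-replication <mastervol> <slavehost>::<slavevol> create push-pem force
# 3. stop and start geo-rep
gluster volume geo-replication <mastervol> <slavehost>::<slavevol> stop
gluster volume geo-replication <mastervol> <slavehost>::<slavevol> start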
2017 Aug 18
2
gverify.sh purpose
Hi,
When creating a geo-replication session, is gverify.sh used or run, respectively? Or is gverify.sh just an ad-hoc command to test manually whether creating a geo-replication session would succeed?
Best,
M.
2018 Feb 07
2
add geo-replication "passive" node after node replacement
Hi all,
I had a replica 2 gluster 3.12 volume between S1 and S2 (1 brick per node),
geo-replicated to S5, where both S1 and S2 were visible in the
geo-replication status, with S2 "active" and S1 "passive".
I had to replace S1 with S3, so I did an
"add-brick replica 3 S3"
and then
"remove-brick replica 2 S1".
Now I have again a replica 2 gluster between S3 and S2
2017 Aug 08
0
How to delete geo-replication session?
When I run "gluster volume geo-replication status" I see my geo-replication session correctly, including the volume name under the "VOL" column. I see my two nodes (node1 and node2) but not arbiternode, as I added it later, after setting up geo-replication. For more details have a quick look at my previous post here:
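In line with the advice elsewhere in this thread, a sketch of how a node added later (such as the arbiter) is normally brought into the session; volume and slave names are placeholders:
# show all sessions, without naming a volume
gluster volume geo-replication status
# after adding the node, redistribute keys and recreate the session
gluster system:: execute gsec_create
gluster volume geo-replication <mastervol> <slavehost>::<slavevol> create push-pem force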
2017 Aug 21
0
gverify.sh purpose
On Saturday 19 August 2017 02:05 AM, mabi wrote:
> Hi,
>
> When creating a geo-replication session, is gverify.sh used or run,
> respectively?
Yes, it is executed as part of geo-replication session creation.
> Or is gverify.sh just an ad-hoc command to test manually whether creating
> a geo-replication session would succeed?
>
No need to run separately
~
Saravana
2023 Mar 14
1
can't set up geo-replication: can't fetch slave details
Hi,
Using Gluster 9.2 on Debian 11, I'm trying to set up geo-replication. I
am following this guide:
https://docs.gluster.org/en/main/Administrator-Guide/Geo-Replication/#password-less-ssh
I have a volume called "ansible" which is only a small volume and
seemed like an ideal test case.
Firstly, for a bit of feedback (this isn't my issue as I worked around
it) I had this
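A sketch of the password-less SSH prerequisite that the linked guide covers, assuming root-to-root SSH from the master node to the slave node (the slave host is a placeholder):
# on the master node, as root: create a key if one does not exist yet
ssh-keygen -t rsa
# copy the public key to the slave node
ssh-copy-id root@<slavehost>
# verify that login works without a password before running geo-rep create
ssh root@<slavehost> uname -a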
2018 Feb 06
0
geo-replication command rsync returned with 3
Hi,
As a quick workaround to get geo-replication working, please configure the
following option.
gluster vol geo-replication <mastervol> <slavehost>::<slavevol> config
access_mount true
The above option will not do the lazy umount, and as a result, all the
master and slave volume mounts
maintained by geo-replication can be accessed by others. They are also visible
in df output.
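A sketch of applying and verifying the workaround described above (the config command with no option name lists the current session configuration):
# apply the workaround
gluster volume geo-replication <mastervol> <slavehost>::<slavevol> config access_mount true
# confirm the setting took effect
gluster volume geo-replication <mastervol> <slavehost>::<slavevol> config | grep access_mount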
2017 Aug 08
2
How to delete geo-replication session?
Do you see any session listed when the geo-replication status command is
run (without any volume name)?
gluster volume geo-replication status
"Volume stop force" should work even if a geo-replication session exists.
From the error it looks like node "arbiternode.domain.tld" in the master
cluster is down or not reachable.
regards
Aravinda VK
On 08/07/2017 10:01 PM, mabi wrote:
> Hi,
>
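A sketch of the two checks suggested above: listing all sessions without naming a volume, then forcing the volume stop (the volume name is a placeholder):
# list all geo-replication sessions on the cluster
gluster volume geo-replication status
# stop the master volume even if a session still exists
gluster volume stop <mastervol> force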
2018 Feb 05
2
geo-replication command rsync returned with 3
On 02/05/2018 01:33 PM, Florian Weimer wrote:
> Do you have strace output going further back, at least to the preceding
> getcwd call? It would be interesting to see which path the kernel
> reports, and if it starts with "(unreachable)".
I got the strace output now, but it is very difficult to read (chdir in a
multi-threaded process?).
My current inclination is to blame
2018 Apr 23
0
Geo-replication faulty
Hi all,
I setup my gluster cluster with geo-replication a couple of weeks ago
and everything worked fine!
Today I discovered that one of the master nodes' geo-replication
status is faulty.
On the master side: Distributed-replicated 2 x (2 + 1) = 6
On the slave side: Replicated 1 x (2 + 1) = 3
After checking logs I see that the master node has the following error:
OSError: Permission denied
Looking at
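When a session goes faulty like this, the usual places to look are the per-node status detail and the gsyncd logs; a sketch, assuming the standard log locations on master and slave:
# per-brick status, including which workers are faulty
gluster volume geo-replication <mastervol> <slavehost>::<slavevol> status detail
# gsyncd logs on the master side
ls /var/log/glusterfs/geo-replication/
# gsyncd logs on the slave side
ls /var/log/glusterfs/geo-replication-slaves/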
2006 Nov 02
1
Using perl-Net-SSH-Perl with pubkey authentication under CGI.
Guys, I wonder if anyone can give me any pointers here. I hope it's
CentOS-related enough not to be too off-topic; if it is, then
apologies.
I'm attempting to setup a CGI which can connect to a remote system and
execute a command.
On the 'client', I've given the Apache user 'apache' a shell
and generated a key pair. I've configured Keychain [
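A sketch of the key setup the poster describes, done from the shell for the 'apache' user; the home directory path and remote host are assumptions, not from the post:
# create a key pair owned by the apache user
install -d -o apache -g apache -m 700 /var/www/.ssh
sudo -u apache ssh-keygen -t rsa -f /var/www/.ssh/id_rsa -N ""
# install the public key on the remote system
ssh-copy-id -i /var/www/.ssh/id_rsa.pub user@remotehost
# test non-interactive execution as the apache user
sudo -u apache ssh -i /var/www/.ssh/id_rsa user@remotehost 'uptime'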
2024 Aug 22
1
geo-rep will not initialize
On 8/22/24 14:08, Strahil Nikolov wrote:
> I can try to reproduce it if you could provide the gluster version,
> operating system and volume options.
Most kind.
Fedora 39. Packages:
$ grep gluster /var/log/rpmpkgs
gluster-block-0.5-11.fc39.x86_64.rpm
glusterfs-11.1-1.fc39.x86_64.rpm
glusterfs-cli-11.1-1.fc39.x86_64.rpm
glusterfs-client-xlators-11.1-1.fc39.x86_64.rpm
2018 Feb 07
1
geo-replication command rsync returned with 3
Hi,
Kotresh's workaround works for me. But before I tried it, I created some strace logs for Florian.
setup: 2 VMs (192.168.222.120 master, 192.168.222.121 slave), both with a volume named vol, running Ubuntu 16.04.3, glusterfs 3.13.2, rsync 3.1.1.
Best regards,
Tino
root@master:~# cat /usr/bin/rsync
#!/bin/bash
strace -o /tmp/rsync.trace -ff /usr/bin/rsynco "$@"
One of the traces
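For context, the wrapper above appears to assume the real rsync binary was first renamed to /usr/bin/rsynco; a slightly fuller sketch of that approach (the rename and the use of exec are assumptions, not shown in the snippet):
# keep the original binary under a different name
mv /usr/bin/rsync /usr/bin/rsynco
# replace /usr/bin/rsync with a wrapper that traces every invocation
cat > /usr/bin/rsync <<'EOF'
#!/bin/bash
exec strace -o /tmp/rsync.trace -ff /usr/bin/rsynco "$@"
EOF
chmod +x /usr/bin/rsync
Using exec keeps the traced rsync's exit status visible to the caller (gsyncd), so the workaround does not change the error reporting.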
2018 Jul 13
2
Upgrade to 4.1.1 geo-replication does not work
Hi Kotresh,
Yes, all nodes have the same version 4.1.1 both master and slave.
All glusterd are crashing on the master side.
Will send logs tonight.
Thanks,
Marcus
################
Marcus Pedersén
Systemadministrator
Interbull Centre
################
Sent from my phone
################
On 13 July 2018 at 11:28, Kotresh Hiremath Ravishankar <khiremat at redhat.com> wrote:
Hi Marcus,
Is the