Displaying 20 results from an estimated 9000 matches similar to: "Geo replication procedure for DR"
2023 Jun 07
1
Geo replication procedure for DR
It's just a setting on the target volume:
gluster volume set <VOL> read-only OFF
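For reference, a rough sketch of the switchover; the volume and host names below are placeholders, not anything from this thread:

  # if the primary site is still reachable, stop the geo-rep session first
  gluster volume geo-replication PRIMARY_VOL drhost::SECONDARY_VOL stop
  # then make the DR copy writable
  gluster volume set SECONDARY_VOL read-only off

If the primary site is already down, only the second command is possible on the DR side.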
Best Regards,
Strahil Nikolov
On Mon, Jun 5, 2023 at 22:30, mabi<mabi at protonmail.ch> wrote: Hello,
I was reading the geo replication documentation here:
https://docs.gluster.org/en/main/Administrator-Guide/Geo-Replication/
and I was wondering how it works when in case of disaster recovery
2023 Jun 07
1
Geo replication procedure for DR
Dear Strahil,
Thank you for the detailed command. So to switch all traffic to the DR site in case of a disaster, one should first disable the read-only setting on the secondary volume on the slave site.
What happens afterwards, when the master site is back online? What's the procedure there? I had the following question in my previous mail in this regard:
"And once the primary
2023 Jun 07
1
How to find out data alignment for LVM thin volume brick
Have you checked this page: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/brick_configuration ?
The alignment depends on the HW raid stripe unit size.
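As a sketch only (the numbers below are an example, not your setup, and the device name is a placeholder): the guide's rule of thumb is dataalignment = stripe unit size x number of data disks, so a 12-disk RAID 6 (10 data disks) with a 128 KiB stripe unit would use:

  # 128K stripe unit x 10 data disks = 1280K
  pvcreate --dataalignment 1280K /dev/sdb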
Best Regards,
Strahil Nikolov
On Tue, Jun 6, 2023 at 2:35, mabi<mabi at protonmail.ch> wrote: Hello,
I am preparing a brick as LVM thin volume for a test slave node using this
2024 Aug 18
1
Geo Replication sync intervals
Hi Gilberto,
I doubt you can change that stuff. Officially it's async replication and it might take some time to replicate.
What do you want to improve?
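If you just want to see what is tunable for the session, the config listing is a good start (names below are placeholders):

  gluster volume geo-replication PRIMARY_VOL secondaryhost::SECONDARY_VOL config

As far as I know there is no fixed "sync interval" knob: geo-replication is changelog-driven and continuously asynchronous, so changes are picked up as the changelogs are processed.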
Best Regards,
Strahil Nikolov
On Friday, 16 August 2024 at 20:31:25 GMT+3, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:
Hi there.
I have two sites with gluster geo replication, and all work pretty
2023 Jun 07
1
How to find out data alignment for LVM thin volume brick
Dear Strahil,
Thank you very much for pointing me to the RedHat documentation. I wasn't aware of it and it is much more detailed. I will have to read it carefully.
Now as I have a single disk (no RAID) based on that documentation I understand that I should use a data alignment value of 256kB.
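In that case the pvcreate call would look something like this (the device name is just a placeholder):

  # single disk / JBOD: the guide suggests a 256K data alignment
  pvcreate --dataalignment 256K /dev/sdb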
Best regards,
Mabi
------- Original Message -------
On Wednesday, June 7th, 2023 at 6:56 AM,
2023 Mar 24
1
How to configure?
Can you check your volume file contents? Maybe it really can't find (or access) a specific volfile?
Best Regards,
Strahil Nikolov
On Fri, Mar 24, 2023 at 8:07, Diego Zuccato<diego.zuccato at unibo.it> wrote: In glfsheal-Connection.log I see many lines like:
[2023-03-13 23:04:40.241481 +0000] E [MSGID: 104021]
[glfs-mgmt.c:586:glfs_mgmt_getspec_cbk] 0-gfapi: failed to get the
2023 Mar 21
1
How to configure?
I have no clue. Have you checked the logs for errors? You might find something useful.
Best Regards,
Strahil Nikolov
On Tue, Mar 21, 2023 at 9:56, Diego Zuccato<diego.zuccato at unibo.it> wrote: Killed glfsheal, after a day there were 218 processes, then they got
killed by OOM during the weekend. Now there are no processes active.
Trying to run "heal info" reports
2023 Mar 24
1
How to configure?
In glfsheal-Connection.log I see many lines like:
[2023-03-13 23:04:40.241481 +0000] E [MSGID: 104021]
[glfs-mgmt.c:586:glfs_mgmt_getspec_cbk] 0-gfapi: failed to get the
volume file [{from server}, {errno=2}, {error=File o directory non
esistente}]
And *lots* of gfid-mismatch errors in glustershd.log .
Couldn't find anything that would prevent heal from starting. :(
Diego
Il 21/03/2023
2023 Mar 24
1
How to configure?
There are 285 files in /var/lib/glusterd/vols/cluster_data ... including
many files with names related to quorum bricks already moved to a
different path (like cluster_data.client.clustor02.srv-quorum-00-d.vol
that should already have been replaced by
cluster_data.clustor02.srv-bricks-00-q.vol -- and both vol files exist).
Is there something I should check inside the volfiles?
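One way to narrow it down, as a sketch using the names from this thread (adjust paths as needed):

  cd /var/lib/glusterd/vols/cluster_data
  # which volfiles still reference the old quorum bricks?
  grep -l quorum *.vol
  # compare against what glusterd currently advertises as bricks
  gluster volume info cluster_data | grep -i brick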
Diego
Il
2023 Mar 21
1
can't set up geo-replication: can't fetch slave details
Hi,
is this a rare problem?
Cheers,
Kingsley.
On Tue, 2023-03-14 at 19:31 +0000, Kingsley Tart wrote:
> Hi,
>
> using Gluster 9.2 on debian 11 I'm trying to set up geo replication.
> I am following this guide:
>
>
https://docs.gluster.org/en/main/Administrator-Guide/Geo-Replication/#password-less-ssh
>
> I have a volume called "ansible" which is only a
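For reference, that guide's setup boils down to something like the following (the secondary host and secondary volume names here are placeholders, not from this thread):

  # on the primary, after password-less ssh to the secondary works:
  gluster system:: execute gsec_create
  gluster volume geo-replication ansible secondaryhost::ansible_dr create push-pem
  gluster volume geo-replication ansible secondaryhost::ansible_dr status

"Unable to fetch slave details" type errors usually surface at the create step, so its output and the geo-replication logs under /var/log/glusterfs/geo-replication/ are the first place to look.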
2023 Jun 05
1
How to find out data alignment for LVM thin volume brick
Hello,
I am preparing a brick as LVM thin volume for a test slave node using this documentation:
https://docs.gluster.org/en/main/Administrator-Guide/formatting-and-mounting-bricks/
but I am confused regarding the right "--dataalignment" option to be used for pvcreate. The documentation mentions the following under point 1:
"Create a physical volume(PV) by using the pvcreate
2023 Mar 21
1
How to configure?
Killed glfsheal, after a day there were 218 processes, then they got
killed by OOM during the weekend. Now there are no processes active.
Trying to run "heal info" reports lots of files quite quickly but does
not spawn any glfsheal process. And neither does restarting glusterd.
Is there some way to selectively run glfsheal to fix one brick at a time?
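As far as I know there is no per-brick switch for glfsheal, but on reasonably recent versions you can at least run a single query by hand (the same invocation the CLI spawns, as seen later in this thread) instead of letting many pile up:

  /usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
  # or the lighter-weight summary through the CLI
  gluster volume heal cluster_data info summary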
Diego
Il 21/03/2023 01:21,
2023 Mar 21
1
How to configure?
Theoretically it might help. If possible, try to resolve any pending heals.
Best Regards,
Strahil Nikolov
On Thu, Mar 16, 2023 at 15:29, Diego Zuccato<diego.zuccato at unibo.it> wrote: In Debian stopping glusterd does not stop brick processes: to stop
everything (and free the memory) I have to
systemctl stop glusterd
  killall glusterfs{,d}
  killall glfsheal
  systemctl start
2024 Aug 16
1
Geo Replication sync intervals
Hi there.
I have two sites with gluster geo replication, and all work pretty well.
But I want to check about the sync intervals and if there is some way to
change it.
Thanks for any tips.
---
Gilbert
2023 Apr 23
1
How to configure?
After a lot of tests and unsuccessful searching, I decided to start from
scratch: I'm going to ditch the old volume and create a new one.
I have 3 servers with 30 12TB disks each. Since I'm going to start a new
volume, could it be better to group disks in 10 3-disk (or 6 5-disk)
RAID-0 volumes to reduce the number of bricks? Redundancy would be given
by replica 2 (still undecided
2023 Mar 16
1
How to configure?
Can you restart glusterd service (first check that it was not modified to kill the bricks)?
Best Regards,
Strahil Nikolov
On Thu, Mar 16, 2023 at 8:26, Diego Zuccato <diego.zuccato at unibo.it> wrote: OOM is just a matter of time.
Today mem use is up to 177G/187 and:
# ps aux|grep glfsheal|wc -l
551
(well, one is actually the grep process, so "only" 550 glfsheal
2024 Jan 27
1
Geo-replication status is getting Faulty after few seconds
Hi Anant,
I would first check whether you can ssh from all masters to the slave node. If you haven't set up a dedicated user for the session, then gluster is using root.
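For a root-based session the check would look something like this (host and volume names are placeholders):

  # from each master node, with the key glusterd uses for the session
  ssh -i /var/lib/glusterd/geo-replication/secret.pem root@secondaryhost
  # then re-check the session state
  gluster volume geo-replication MASTER_VOL secondaryhost::SLAVE_VOL status detail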
Best Regards,
Strahil Nikolov
On Friday, 26 January 2024 at 18:07:59 GMT+2, Anant Saraswat <anant.saraswat at techblue.co.uk> wrote:
Hi All,
I have run the following commands on master3,
2024 Aug 22
1
geo-rep will not initialize
On 8/22/24 14:08, Strahil Nikolov wrote:
> I can try to reproduce it if you could provide the gluster version,
> operating system and volume options.
Most kind.
Fedora 39. Packages:
$ grep gluster /var/log/rpmpkgs
gluster-block-0.5-11.fc39.x86_64.rpm
glusterfs-11.1-1.fc39.x86_64.rpm
glusterfs-cli-11.1-1.fc39.x86_64.rpm
glusterfs-client-xlators-11.1-1.fc39.x86_64.rpm
2023 Mar 16
1
How to configure?
OOM is just a matter of time.
Today mem use is up to 177G/187 and:
# ps aux|grep glfsheal|wc -l
551
(well, one is actually the grep process, so "only" 550 glfsheal processes.)
I'll take the last 5:
root 3266352 0.5 0.0 600292 93044 ? Sl 06:55 0:07
/usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
root 3267220 0.7 0.0 600292 91964 ?
2023 Mar 15
1
How to configure?
If you don't experience any OOM, you can focus on the heals.
284 glfsheal processes seems odd.
Can you check the ppid for 2-3 randomly picked ones? ps -o ppid= <pid>
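For example, something along these lines (just a sketch):

  # ppid of three randomly chosen glfsheal processes
  for p in $(pgrep glfsheal | shuf -n 3); do ps -o ppid= -p "$p"; done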
Best Regards,
Strahil Nikolov
On Wed, Mar 15, 2023 at 9:54, Diego Zuccato<diego.zuccato at unibo.it> wrote: I enabled it yesterday and that greatly reduced memory pressure.
Current volume info:
-8<--
Volume