similar to: GFS performance under heavy traffic

Displaying 20 results from an estimated 3000 matches similar to: "GFS performance under heavy traffic"

2019 Dec 24
1
GFS performance under heavy traffic
Hi David, On Dec 24, 2019 02:47, David Cunningham <dcunningham at voisonics.com> wrote: > > Hello, > > In testing we found that giving the GFS client access to all 3 nodes actually made no difference to performance. Perhaps that's because the 3rd node that wasn't accessible from the client before was the arbiter node? That makes sense, as no data is being generated towards
2019 Dec 28
1
GFS performance under heavy traffic
Hi David, It seems that I have misread your quorum options, so just ignore that part of my previous e-mail. Best Regards, Strahil Nikolov On Dec 27, 2019 15:38, Strahil <hunter86_bg at yahoo.com> wrote: > > Hi David, > > Gluster supports live rolling upgrade, so there is no need to redeploy at all - but the migration notes should be checked as some features must be disabled first.
2019 Dec 27
0
GFS performance under heavy traffic
Hi David, Gluster supports live rolling upgrades, so there is no need to redeploy at all - but the migration notes should be checked, as some features must be disabled first. Also, the gluster clients should be remounted in order to bump the gluster op-version. What kind of workload do you have? I'm asking because there are predefined (and recommended) settings located at /var/lib/glusterd/groups. You
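For reference, checking and bumping the op-version after such an upgrade and applying one of those predefined groups looks roughly like this (the version number and volume name below are placeholders, not values from the thread):

  # Current cluster op-version and the highest one the installed binaries support
  gluster volume get all cluster.op-version
  gluster volume get all cluster.max-op-version
  # Bump the cluster op-version (number shown is only an example)
  gluster volume set all cluster.op-version 70200
  # Apply a predefined option group from /var/lib/glusterd/groups, e.g. the 'virt' profile
  gluster volume set myvol group virt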
2023 Feb 14
1
File\Directory not healing
I guess you didn't receive my last e-mail. Use getfattr and identify whether the gfids mismatch. If yes, move away the mismatched one. In order for a dir to heal, you have to fix all the files inside it before it can be healed. Best Regards, Strahil Nikolov On Tuesday, 14 February 2023 at 14:04:31 GMT+2, David Dolan <daithidolan at gmail.com> wrote: I've touched the directory one
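A minimal sketch of the gfid check described here, assuming the brick paths are /data/brick{1,2,3} and the affected file is dir/file.txt (both hypothetical):

  # Run on each brick server and compare the trusted.gfid value across bricks
  getfattr -d -m . -e hex /data/brick1/dir/file.txt
  getfattr -d -m . -e hex /data/brick2/dir/file.txt
  getfattr -d -m . -e hex /data/brick3/dir/file.txt
  # If one gfid differs, move that copy (and its .glusterfs hard link) out of the brick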
2023 Feb 14
1
File\Directory not healing
I've touched the directory one level above the directory with the I/O issue, as the one above that is the one showing as dirty. It hasn't healed. Should the self-heal daemon automatically kick in here? Is there anything else I can do? Thanks, David On Tue, 14 Feb 2023 at 07:03, Strahil Nikolov <hunter86_bg at yahoo.com> wrote: > You can always mount it locally on any of the
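For completeness, the usual way to nudge the self-heal daemon and watch progress (VOLNAME is a placeholder for the volume in question):

  gluster volume heal VOLNAME                 # trigger an index heal
  gluster volume heal VOLNAME full            # optionally, a full heal
  gluster volume heal VOLNAME info            # entries still pending heal
  gluster volume heal VOLNAME info summary    # per-brick counts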
2024 Feb 18
1
Graceful shutdown doesn't stop all Gluster processes
Well, you prepare the host for shutdown, right? So why don't you set up systemd to start the container and shut it down before the bricks? Best Regards, Strahil Nikolov On Friday, 16 February 2024 at 18:48:36 GMT+2, Anant Saraswat <anant.saraswat at techblue.co.uk> wrote: Hi Strahil, Yes, we mount the fuse to the physical host and then use bind mount to
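One way to express that ordering, assuming the container runs as a systemd unit called my-container.service and the fuse mount lives at /srv/gluster (both names hypothetical), is a drop-in like this; systemd stops units in reverse start order, so the container goes down before the mount and glusterd:

  mkdir -p /etc/systemd/system/my-container.service.d
  cat > /etc/systemd/system/my-container.service.d/order.conf <<'EOF'
  [Unit]
  # start after glusterd and the mount; therefore stop before them on shutdown
  After=glusterd.service srv-gluster.mount
  Requires=srv-gluster.mount
  EOF
  systemctl daemon-reload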
2024 Feb 16
1
Graceful shutdown doesn't stop all Gluster processes
Hi Strahil, Yes, we mount the fuse client on the physical host and then use a bind mount to provide access to the container. The same physical host also runs the gluster server. Therefore, when we stop gluster using 'stop-all-gluster-processes.sh' on the physical host, it kills the fuse mount and impacts containers accessing this volume via the bind mount. Thanks, Anant
2018 Apr 08
1
Wiki update
Hello Community, my name is Strahil Nikolov (hunter86_bg) and I would like to update the following wiki page. In the section "Create the New Initramfs or Initrd" there should be an additional line for CentOS 7: mount --bind /run /mnt/sysimage/run The 'run' directory is needed especially if you need to start the multipathd.service before recreating the initramfs ('/' is on
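The surrounding steps from the rescue environment would look roughly like this; the kernel version is only an example and should match the installed system, not the rescue kernel:

  # Bind the runtime directories into the installed system before chrooting
  mount --bind /run  /mnt/sysimage/run
  mount --bind /dev  /mnt/sysimage/dev
  mount --bind /proc /mnt/sysimage/proc
  mount --bind /sys  /mnt/sysimage/sys
  chroot /mnt/sysimage
  # (start multipathd first if '/' is on a multipath device, as the post notes)
  KVER=3.10.0-1160.el7.x86_64        # example; use the installed kernel version
  dracut -f /boot/initramfs-$KVER.img $KVER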
2024 Nov 08
1
Add an arbiter when have multiple bricks at same server.
After forcing the add-brick:
gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3 force
volume add-brick: success
pve01:~# gluster volume info
Volume Name: VMS
Type: Distributed-Replicate
Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1:
2024 Feb 16
1
Graceful shutdown doesn't stop all Gluster processes
Hi Anant, Do you use the fuse client in the container? Wouldn't it be more reasonable to mount the fuse on the host and then use a bind mount to provide access to the container? Best Regards, Strahil Nikolov On Fri, Feb 16, 2024 at 15:02, Anant Saraswat <anant.saraswat at techblue.co.uk> wrote: Okay, I understand. Yes, it would be beneficial to include an option for skipping the client
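A rough sketch of that layout; the volume name, mount point and container path are all placeholders:

  # Mount the volume once on the physical host with the fuse client
  mount -t glusterfs localhost:/VOLNAME /mnt/glusterfs
  # Expose it inside the container's filesystem tree with a bind mount
  # instead of running a fuse client in the container itself
  mount --bind /mnt/glusterfs /var/lib/containers/app/data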
2024 Feb 26
1
Graceful shutdown doesn't stop all Gluster processes
Hi Strahil, In our setup, the Gluster brick comes from iSCSI SAN storage and is then used as a brick on the Gluster server. To extend the brick, we stop the Gluster server, extend the logical volume (LV) on the SAN server, resize it on the host, mount the brick with the extended size, and finally start the Gluster server. Please let me know if this process can be optimized; I will be happy to
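A sketch of the host-side steps after the LV has been grown on the SAN; the device, mount point and filesystem (XFS) are assumptions, not details from the thread:

  # 1. Quiesce the brick: stop gluster processes on this node
  systemctl stop glusterd
  pkill glusterfsd                    # brick processes (stop-all-gluster-processes.sh covers this)
  # 2. Rescan the iSCSI sessions so the host sees the larger LUN
  iscsiadm -m session --rescan
  # 3. Remount and grow the brick filesystem to the new size
  mount /dev/sdb /bricks/brick1       # hypothetical device and mount point
  xfs_growfs /bricks/brick1
  # 4. Start gluster again
  systemctl start glusterd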
2024 Nov 08
1
Add an arbiter when have multiple bricks at same server.
What's the volume structure right now? Best Regards, Strahil Nikolov On Wed, Nov 6, 2024 at 18:24, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote: So I went ahead and did the force (is with you!) gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3 volume add-brick: failed: Multiple bricks of a replicate volume are present
2023 Jun 07
1
Geo replication procedure for DR
Dear Strahil, Thank you for the detailed command. So to switch all traffic to the DR site in case of disaster, one should first disable the read-only setting on the secondary volume on the slave site. What happens afterwards, when the master site is back online? What's the procedure there? I had the following question in my previous mail in this regard: "And once the primary
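The read-only toggle mentioned here is an ordinary volume option on the secondary volume; something along these lines, with placeholder volume and host names:

  # On the DR (slave) side, before redirecting clients there
  gluster volume set slavevol features.read-only off
  # From the primary side, check geo-replication state once it is reachable again
  gluster volume geo-replication mastervol slavehost::slavevol status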
2023 Jun 07
1
How to find out data alignment for LVM thin volume brick
Dear Strahil, Thank you very much for pointing me to the Red Hat documentation. I wasn't aware of it and it is much more detailed. I will have to read it carefully. Now, as I have a single disk (no RAID), based on that documentation I understand that I should use a data alignment value of 256kB. Best regards, Mabi ------- Original Message ------- On Wednesday, June 7th, 2023 at 6:56 AM,
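The alignment value is passed when creating the PV and the thin pool; a sketch with the 256K figure discussed above, where the device, VG name and sizes are hypothetical:

  # PV with the data alignment recommended for a single disk (JBOD)
  pvcreate --dataalignment 256K /dev/sdb
  vgcreate vg_bricks /dev/sdb
  # Thin pool and thin LV for the brick (chunk size shown only as an example)
  lvcreate -L 1T --chunksize 256K --poolmetadatasize 16G --zero n -T vg_bricks/brickpool
  lvcreate -V 1T -T vg_bricks/brickpool -n brick1
  # XFS with 512-byte inodes, per the same Red Hat guidance
  mkfs.xfs -i size=512 /dev/vg_bricks/brick1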
2024 Nov 06
1
Add an arbiter when have multiple bricks at same server.
Ok. I have a 3rd host with Debian 12 and Gluster v11 installed. The name of the host is arbiter! I have already added this host to the pool:
arbiter:~# gluster pool list
UUID                                    Hostname                State
0cbbfc27-3876-400a-ac1d-2d73e72a4bfd    gluster1.home.local     Connected
99ed1f1e-7169-4da8-b630-a712a5b71ccd    gluster2                Connected
2024 Feb 16
2
Graceful shutdown doesn't stop all Gluster processes
Okay, I understand. Yes, it would be beneficial to include an option for skipping the client processes. This way, we could use the 'stop-all-gluster-processes.sh' script with that option to stop the gluster server processes while retaining the fuse mounts. From: Aravinda <aravinda at kadalu.tech> Sent: 16 February 2024 12:36 PM To: Anant Saraswat
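Until such an option exists, one manual approximation that leaves the fuse mounts alone is to stop only the server-side daemons by hand; this is a sketch, not what the script does:

  systemctl stop glusterd    # management daemon
  pkill glusterfsd           # brick processes
  # note: self-heal and quotad run as 'glusterfs' processes just like the fuse clients,
  # so they would have to be picked out by their volfile-id rather than a blanket pkill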
2024 Nov 06
1
Add an arbiter when have multiple bricks at same server.
So I went ahead and did the force (is with you!) gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3 volume add-brick: failed: Multiple bricks of a replicate volume are present on the same server. This setup is not optimal. Bricks should be on different nodes to have the best fault tolerant configuration. Use 'force' at the end of the command
2024 Nov 06
1
Add an arbiter when have multiple bricks at same server.
Right now you have 3 "sets" of replica 2 on 2 hosts. In your case you don't need so much space for the arbiters (10-15GB with 95 maxpct is enough for each "set"), but you do need a 3rd system; otherwise, when the node that holds both the data brick and the arbiter brick fails (2-node scenario), that "set" will be unavailable. If you do have a 3rd host, I think the command would be: gluster
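The command being referred to appears in full elsewhere in the thread (one arbiter brick per replica "set", all on the new arbiter host):

  gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3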
2024 Nov 06
1
Add an arbiter when have multiple bricks at same server.
But if I change replica 2 arbiter 1 to replica 3 arbiter 1: gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3 I get this error: volume add-brick: failed: Multiple bricks of a replicate volume are present on the same server. This setup is not optimal. Bricks should be on different nodes to have the best fault tolerant configuration. Use
2024 Oct 21
1
How much disk can fail after a catastrophic failure occur?
Ok! I got it about how many disks I can lose and so on. But regarding the arbiter issue, I always set these parameters on the gluster volume in order to avoid split-brain, and I might add that it works pretty well for me. I already have a Proxmox VE cluster with 2 nodes and about 50 VMs, running different Linux distros - and Windows as well - with cPanel and other stuff, in production. Anyway, here the
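The poster's exact option list is cut off above; for reference, the options usually meant when people talk about avoiding split-brain on a replica volume are along these lines (VOLNAME is a placeholder):

  gluster volume set VOLNAME cluster.quorum-type auto            # client-side quorum
  gluster volume set VOLNAME cluster.server-quorum-type server   # server-side quorum
  gluster volume set all cluster.server-quorum-ratio 51%         # cluster-wide ratio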