Displaying 20 results from an estimated 10000 matches similar to: "GFS performance under heavy traffic"
2019 Dec 24
1
GFS performance under heavy traffic
Hi David,
On Dec 24, 2019 02:47, David Cunningham <dcunningham at voisonics.com> wrote:
>
> Hello,
>
> In testing we found that actually the GFS client having access to all 3 nodes made no difference to performance. Perhaps that's because the 3rd node that wasn't accessible from the client before was the arbiter node?
It makes sense, as no data is being generated towards
2019 Dec 27
0
GFS performance under heavy traffic
Hi David,
Gluster supports live rolling upgrade, so there is no need to redeploy at all - but the migration notes should be checked as some features must be disabled first.
Also, the gluster client should remount in order to bump the gluster op-version.
What kind of workload do you have?
I'm asking because there are predefined (and recommended) settings located in /var/lib/glusterd/groups .
You
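A minimal sketch of the steps implied above, assuming all nodes are already upgraded; the volume name 'myvol' and the 'virt' group are placeholders:
# Check the op-version the cluster currently runs at, and the highest it supports
gluster volume get all cluster.op-version
gluster volume get all cluster.max-op-version
# Once every node is upgraded, bump the cluster op-version to that supported maximum
gluster volume set all cluster.op-version <max-op-version-from-above>
# Apply one of the predefined option groups shipped in /var/lib/glusterd/groups,
# e.g. the 'virt' profile for VM-image workloads ('myvol' is a placeholder)
gluster volume set myvol group virt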
2019 Dec 20
1
GFS performance under heavy traffic
Hi David,
Also consider using the mount option to specify backup servers via 'backupvolfile-server=server2:server3' (you can define more, but I don't think replica volumes greater than 3 are useful, except maybe in some special cases).
That way, when the primary is lost, your client can reach a backup one without disruption.
P.S.: Client may 'hang' - if the primary server got
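A minimal sketch of such a mount, assuming a volume named gv0 on hosts server1/server2/server3 (all names are placeholders):
# server1 serves the volfile; server2 and server3 are tried if it is unreachable
mount -t glusterfs -o backupvolfile-server=server2:server3 server1:/gv0 /mnt/gv0
# Equivalent /etc/fstab entry
server1:/gv0  /mnt/gv0  glusterfs  defaults,_netdev,backupvolfile-server=server2:server3  0 0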
2024 Nov 29
1
Disk size and virtual size drive me crazy!
No! I didn't! I wasn't aware of this option.
I will try.
Thanks
On Fri, 29 Nov 2024 at 16:43, Strahil Nikolov <hunter86_bg at yahoo.com>
wrote:
> Have you figured it out ?
>
> Have you tried setting storage.reserve to 0 ?
>
> Best Regards,
> Strahil Nikolov
>
> On Thu, Nov 21, 2024 at 0:39, Gilberto Ferreira
> <gilberto.nunes32 at
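A minimal sketch of the storage.reserve suggestion quoted above; the volume name VMS is taken from the related threads in this listing and is otherwise a placeholder:
# Show the current reserve (Gluster keeps a percentage of each brick reserved by default)
gluster volume get VMS storage.reserve
# Disable the reserve, as suggested above
gluster volume set VMS storage.reserve 0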
2024 Oct 21
1
How much disk can fail after a catastrophic failure occur?
Ok! I got it about how many disks I can lose and so on.
But regarding the arbiter issue, I always set these parameters on the gluster
volume in order to avoid split-brain, and I might add that they work pretty well
for me.
I already have a Proxmox VE cluster with 2 nodes and about 50 VMs, running
different Linux distros - and Windows as well - with cPanel and other stuff,
in production.
Anyway here the
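The exact parameters are cut off in the excerpt above; a hedged sketch of the quorum options commonly set on replica volumes to reduce split-brain risk (volume name VMS assumed):
# Client-side quorum: writes are only allowed while a majority of the replica set is reachable
gluster volume set VMS cluster.quorum-type auto
# Server-side quorum: bricks are taken down when glusterd loses quorum with its peers
gluster volume set VMS cluster.server-quorum-type server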
2024 Dec 02
1
Disk size and virtual size drive me crazy!
qemu-img info 100/vm-100-disk-0.qcow2
image: 100/vm-100-disk-0.qcow2
file format: qcow2
virtual size: 120 GiB (128849018880 bytes)
disk size: 916 GiB
cluster_size: 65536
Format specific information:
compat: 1.1
compression type: zlib
lazy refcounts: false
refcount bits: 16
corrupt: false
extended l2: false
Child node '/file':
filename: 100/vm-100-disk-0.qcow2
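In the output above, 'virtual size' is what the guest sees, while 'disk size' is the space qemu-img found allocated on the backing filesystem; a minimal sketch for cross-checking that allocation directly (path taken from the output above):
# Apparent (sparse) size versus blocks actually allocated for the image file
du -h --apparent-size 100/vm-100-disk-0.qcow2
du -h 100/vm-100-disk-0.qcow2
# stat shows both in one line: logical size and 512-byte blocks in use
stat -c 'size=%s bytes  blocks=%b' 100/vm-100-disk-0.qcow2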
2024 Nov 29
1
Disk size and virtual size drive me crazy!
Have you figured it out ?
Have you tried setting storage.reserve to 0 ?
Best Regards,
Strahil Nikolov
On Thu, Nov 21, 2024 at 0:39, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:
11.1
---
Gilberto Nunes Ferreira
+55 (47) 99676-7530
Proxmox VE
VinChin Backup & Restore
On Wed, 20 Nov 2024, 19:28, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
2024 Nov 20
1
Disk size and virtual size drive me crazy!
11.1
---
Gilberto Nunes Ferreira
+55 (47) 99676-7530
Proxmox VE
VinChin Backup & Restore
On Wed, 20 Nov 2024, 19:28, Strahil Nikolov <hunter86_bg at yahoo.com>
wrote:
> What's your gluster version ?
>
> Best Regards,
> Strahil Nikolov
>
> On Monday, 11 November 2024 at 20:57:50 GMT+2, Gilberto
> Ferreira <gilberto.nunes32 at
2024 Nov 20
1
Disk size and virtual size drive me crazy!
What's your gluster version ?
Best Regards,
Strahil Nikolov
On Monday, 11 November 2024 at 20:57:50 GMT+2, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:
Hi there.
I can't understand why I am having these different values:
proxmox01:/vms/images# df
Filesystem      Size  Used Avail Use% Mounted on
udev            252G     0  252G   0% /dev
tmpfs
2024 Oct 20
1
How much disk can fail after a catastrophic failure occur?
If it's replica 2, you can lose up to 1 replica per distribution group. For example, if you have a volume TEST with such a setup:
server1:/brick1
server2:/brick1
server1:/brick2
server2:/brick2
You can lose any brick of the replica "/brick1" and any brick in the replica "/brick2". So if you lose server1:/brick1 and server2:/brick2 -> no data loss will be experienced.
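A minimal sketch of how such a 2 x 2 distributed-replicate volume would be created with the names from the example above (gluster warns that replica 2 is prone to split-brain and asks for confirmation):
# Two replica pairs (distribution groups): /brick1 on both servers, /brick2 on both servers
gluster volume create TEST replica 2 \
    server1:/brick1 server2:/brick1 \
    server1:/brick2 server2:/brick2
gluster volume start TEST
# 'Number of Bricks' should read 2 x 2 = 4
gluster volume info TEST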
2023 Jul 05
1
remove_me files building up
Hi Strahil,
This is the output from the commands:
root at uk3-prod-gfs-arb-01:~# du -h -x -d 1 /data/glusterfs/gv1/brick1/brick
2.2G /data/glusterfs/gv1/brick1/brick/.glusterfs
24M /data/glusterfs/gv1/brick1/brick/scalelite-recordings
16K /data/glusterfs/gv1/brick1/brick/mytute
18M /data/glusterfs/gv1/brick1/brick/.shard
0
2023 Feb 14
1
File\Directory not healing
I guess you didn't receive my last e-mail.
Use getfattr and identify whether the gfids mismatch. If yes, move away the mismatched one.
In order for a dir to heal, you have to fix all the files inside it first.
Best Regards,
Strahil Nikolov
On Tuesday, 14 February 2023 at 14:04:31 GMT+2, David Dolan <daithidolan at gmail.com> wrote:
I've touched the directory one
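A hedged sketch of the getfattr check suggested above, run against the same path on every brick; the brick path is a placeholder:
# The trusted.gfid value must be identical on all bricks of the replica
getfattr -d -m . -e hex /data/glusterfs/gv1/brick1/brick/path/to/entry
# A brick whose trusted.gfid differs holds the mismatched copy mentioned above;
# that copy (and its gfid link under .glusterfs/) is what gets moved out of the brick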
2023 Jul 04
1
remove_me files building up
Thanks for the clarification.
That behaviour is quite weird, as arbiter bricks should hold only metadata.
What does the following show on host uk3-prod-gfs-arb-01:
du -h -x -d 1 /data/glusterfs/gv1/brick1/brick
du -h -x -d 1 /data/glusterfs/gv1/brick3/brick
du -h -x -d 1 /data/glusterfs/gv1/brick2/brick
If indeed the shards are taking space - that is a really strange situation. From which version
2023 Feb 14
1
File\Directory not healing
I've touched the directory one level above the directory with the I/O issue,
as the one above that is the one showing as dirty.
It hasn't healed. Should the self heal daemon automatically kick in here?
Is there anything else I can do?
Thanks
David
On Tue, 14 Feb 2023 at 07:03, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
> You can always mount it locally on any of the
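In reply to the self-heal question above, a minimal sketch of the usual checks and of kicking off a heal manually; the volume name gv1 is assumed:
# Entries still pending heal, and any reported split-brain
gluster volume heal gv1 info
gluster volume heal gv1 info split-brain
# Trigger an index heal; 'full' walks the entire volume instead
gluster volume heal gv1
gluster volume heal gv1 full
# The self-heal daemon shows up as 'Self-heal Daemon' in the volume status output
gluster volume status gv1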
2023 Jul 04
1
remove_me files building up
Hi Strahil,
We're using gluster to act as a share for an application to temporarily process and store files, before they're archived off overnight.
The issue we're seeing isn't with the inodes running out of space, but the actual disk space on the arb server running low.
This is the df -h output for the bricks on the arb server:
/dev/sdd1 15G 12G 3.3G 79%
2023 Mar 24
1
How to configure?
There are 285 files in /var/lib/glusterd/vols/cluster_data ... including
many files with names related to quorum bricks already moved to a
different path (like cluster_data.client.clustor02.srv-quorum-00-d.vol
that should already have been replaced by
cluster_data.clustor02.srv-bricks-00-q.vol -- and both vol files exist).
Is there something I should check inside the volfiles?
Diego
Il
2024 Nov 08
1
Add an arbiter when have multiple bricks at same server.
After force the add-brick
gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1
arbiter:/arbiter2 arbiter:/arbiter3 force
volume add-brick: success
pve01:~# gluster volume info
Volume Name: VMS
Type: Distributed-Replicate
Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1:
2018 Mar 19
3
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
Hi,
On 03/19/2018 03:42 PM, TomK wrote:
> On 3/19/2018 5:42 AM, Ondrej Valousek wrote:
> Removing NFS or NFS Ganesha from the equation, not very impressed on my
> own setup either. For the writes it's doing, that's a lot of CPU usage
> in top. Seems bottlenecked via a single execution core somewhere trying
> to facilitate reads / writes to the other bricks.
>
>
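The quoted thread stops before any tuning is discussed; as a hedged pointer, the predefined groups mentioned elsewhere in this listing include profiles often tried for small-file workloads (the volume name is a placeholder, and the group names assume a reasonably recent Gluster release):
# Metadata caching options commonly enabled for small-file workloads
gluster volume set myvol group metadata-cache
# Client-side negative-lookup cache, useful when many lookups miss
gluster volume set myvol group nl-cache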
2024 Nov 08
1
Add an arbiter when have multiple bricks at same server.
What's the volume structure right now?
Best Regards,
Strahil Nikolov
On Wed, Nov 6, 2024 at 18:24, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote: So I went ahead and did the force (is with you!)
gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3
volume add-brick: failed: Multiple bricks of a replicate volume are present
2023 Jul 04
1
remove_me files building up
Hi Liam,
I saw that your XFS uses 'imaxpct=25', which for an arbiter brick is a little bit low.
If you have free space on the bricks, increase the maxpct to a bigger value, like:
xfs_growfs -m 80 /path/to/brick
That will set 80% of the filesystem for inodes, which you can verify with df -i /brick/path (compare before and after). This way you won't run out of inodes in the future.
Of course, always
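A minimal sketch of the before/after check described above; the brick path is a placeholder:
# Inode limits and usage before the change
xfs_info /path/to/brick | grep imaxpct
df -i /path/to/brick
# Allow up to 80% of the filesystem to be used for inodes
xfs_growfs -m 80 /path/to/brick
# Compare the inode totals afterwards
df -i /path/to/brick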