2023 Mar 26
1
hardware issues and new server advice
...id1 (made out of 10TB HDDs). File access to the
backends is pretty slow, even under low system load (load reaches >100
on the servers on high-traffic days); even a simple 'ls' on a
directory with ~1000 sub-directories takes a couple of seconds.
Some images:
https://abload.de/img/gls-diskutilfti5d.png
https://abload.de/img/gls-io6cfgp.png
https://abload.de/img/gls-throughput3oicf.png
As you mentioned: is a RAID10 better than x*RAID1? Is anything misconfigured?
Thx a lot & best regards,
Hubert
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-...
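On the RAID10 vs. x*RAID1 question above: with Linux software RAID, the single-array variant is created roughly as follows. A minimal sketch; the device names and array name are assumptions, not taken from the thread:

    # hypothetical 4-disk example; /dev/sd[b-e] and /dev/md0 are assumed names
    mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=4 \
          /dev/sdb /dev/sdc /dev/sdd /dev/sde
    # watch the initial sync and verify the layout
    cat /proc/mdstat
    mdadm --detail /dev/md0

Compared with several independent RAID1 pairs, one RAID10 stripes I/O across all spindles, which tends to help exactly the kind of concurrent small-file load described above.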
2023 Mar 30
2
Performance: lots of small files, HDD, NVMe etc.
...(searches of
non-existing/deleted objects) has highest penalty." <--- that happens
very often...
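For the negative-lookup penalty quoted above, newer Gluster releases (3.11+) ship a negative-lookup cache. A hedged sketch; 'myvol' is a placeholder volume name and the timeout is the commonly documented default:

    # nl-cache depends on upcall-based cache invalidation
    gluster volume set myvol features.cache-invalidation on
    gluster volume set myvol performance.nl-cache on
    gluster volume set myvol performance.nl-cache-timeout 600

This caches the fact that an object does not exist, so repeated lookups of missing/deleted files stop hitting the bricks.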
- server load on high-traffic days: >100 (mostly iowait)
- bad: server reboots (re-reading filesystem info etc.)
- really bad: a software RAID rebuild/resync
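All three pain points can at least be observed, and the resync case throttled. A minimal sketch; the sysctl value is an example, not a recommendation from the thread:

    iostat -x 5          # per-disk utilization and %iowait (sysstat package)
    cat /proc/mdstat     # progress of a running md rebuild/resync
    # cap md resync bandwidth (KB/s per device) so production I/O survives
    sysctl -w dev.raid.speed_limit_max=50000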
Some images:
https://abload.de/img/gls-diskutilfti5d.png
https://abload.de/img/gls-io6cfgp.png
https://abload.de/img/gls-throughput3oicf.png
Our conclusion: the hardware is too slow and the disks are too big. For a
future setup we need to improve performance (or switch to a
different solution). A HW RAID controller might be an option, but SAS
disks a...
2023 Mar 24
2
hardware issues and new server advice
Actually,
a pure NVMe-based volume would be a waste of money. Gluster excels when you have more servers and clients consuming that data.
I would choose LVM cache (NVMes) + HW RAID10 of SAS 15K disks to cope with the load. At least, if you decide to go with more disks for the RAIDs, use several (not the built-in) controllers.
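A classic lvmcache setup along those lines would look roughly like this. A sketch only; the VG/LV names, the NVMe device and all sizes are assumptions:

    # assumes the HW RAID10 already backs an existing brick LV vg_bricks/brick1
    pvcreate /dev/nvme0n1
    vgextend vg_bricks /dev/nvme0n1
    # carve cache data + metadata LVs out of the NVMe
    lvcreate -L 400G -n cachedata vg_bricks /dev/nvme0n1
    lvcreate -L 4G   -n cachemeta vg_bricks /dev/nvme0n1
    # combine them into a cache pool and attach it to the brick LV
    lvconvert --type cache-pool --poolmetadata vg_bricks/cachemeta vg_bricks/cachedata
    lvconvert --type cache --cachepool vg_bricks/cachedata vg_bricks/brick1

Hot blocks then land on NVMe while the bulk of the data stays on the SAS RAID10.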
@Martin,
in order to get a more reliable setup, you will have to