Displaying 20 results from an estimated 700 matches similar to: "Performance: lots of small files, hdd, nvme etc."
2023 Mar 26
1
hardware issues and new server advice
Hi,
sorry if I hijack this, but maybe it's helpful for other gluster users...
> a pure NVMe-based volume will be a waste of money. Gluster excels when you have more servers and clients to consume that data.
> I would choose LVM cache (NVMes) + HW RAID10 of SAS 15K disks to cope with the load. At least if you decide to go with more disks for the raids, use several (not the built-in ones) controllers.
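A minimal sketch of the LVM-cache layer suggested above, assuming a volume group (hypothetically "vg_bricks") that already holds the slow HW-RAID10 PV with a brick LV ("brick1") plus an NVMe PV; device names and sizes are placeholders only:
# cache data + metadata LVs on the NVMe (sizes are examples)
lvcreate -L 500G -n brick1_cache vg_bricks /dev/nvme0n1
lvcreate -L 2G -n brick1_cmeta vg_bricks /dev/nvme0n1
# turn them into a cache pool and attach it to the slow brick LV
lvconvert --type cache-pool --poolmetadata vg_bricks/brick1_cmeta vg_bricks/brick1_cache
lvconvert --type cache --cachepool vg_bricks/brick1_cache vg_bricks/brick1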
2024 Jan 18
1
Upgrade 10.4 -> 11.1 making problems
Good morning,
heal still not running. Pending heals now sum up to 60K per brick.
Heal was starting instantly e.g. after server reboot with version
10.4, but doesn't with version 11. What could be wrong?
I only see these errors on one of the "good" servers in glustershd.log:
[2024-01-18 06:08:57.328480 +0000] W [MSGID: 114031]
[client-rpc-fops_v2.c:2561:client4_0_lookup_cbk]
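To see how far the self-heal daemon actually gets, the pending counters can be watched with the standard heal commands, e.g. (a sketch; volume name taken from later in this thread):
gluster volume heal workdata info summary
gluster volume heal workdata statistics heal-count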
2024 Jan 17
2
Upgrade 10.4 -> 11.1 making problems
ok, finally managed to get all servers, volumes etc. running, but it took
a couple of restarts, cksum checks etc.
One problem: a volume doesn't heal automatically or doesn't heal at all.
gluster volume status
Status of volume: workdata
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
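A possible way to kick healing off manually when it does not start on its own (a sketch using standard gluster commands; the 'full' variant crawls all bricks and can take long):
gluster volume heal workdata
gluster volume heal workdata full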
2024 Jan 18
1
Upgrade 10.4 -> 11.1 making problems
Were you able to solve the problem? Can it be treated like a "normal"
split brain? 'gluster peer status' and 'gluster volume status' are ok,
so it kinda looks like "pseudo"...
hubert
On Thu, 18 Jan 2024 at 08:28, Diego Zuccato
<diego.zuccato at unibo.it> wrote:
>
> That's the same kind of errors I keep seeing on my 2 clusters,
>
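A quick hedged check for the question above is whether gluster itself reports any entries as split brain (volume name from this thread):
gluster volume heal workdata info split-brain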
2024 Jan 25
1
Upgrade 10.4 -> 11.1 making problems
Good morning,
hope I got it right... using:
https://access.redhat.com/documentation/de-de/red_hat_gluster_storage/3.1/html/administration_guide/ch27s02
mount -t glusterfs -o aux-gfid-mount glusterpub1:/workdata /mnt/workdata
gfid 1:
getfattr -n trusted.glusterfs.pathinfo -e text
/mnt/workdata/.gfid/faf59566-10f5-4ddd-8b0c-a87bc6a334fb
getfattr: Removing leading '/' from absolute path
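The same lookup can be run for both gfids mentioned in this thread in one loop, e.g. (a sketch, assuming the aux-gfid mount from above):
for g in faf59566-10f5-4ddd-8b0c-a87bc6a334fb 60465723-5dc0-4ebe-aced-9f2c12e52642; do
    getfattr -n trusted.glusterfs.pathinfo -e text /mnt/workdata/.gfid/$g
done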
2024 Jan 18
1
Upgrade 10.4 -> 11.1 making problems
Since glusterd does not consider it a split brain, you can't solve it
with standard split brain tools.
I've found no way to resolve it except by manually handling one file at
a time: completely unmanageable with thousands of files and having to
juggle between actual path on brick and metadata files!
Previously I "fixed" it by:
1) moving all the data from the volume to a temp
2024 Jan 27
1
Upgrade 10.4 -> 11.1 making problems
You don't need to mount it.
Like this:
# getfattr -d -e hex -m. /path/to/brick/.glusterfs/00/46/00462be8-3e61-4931-8bda-dae1645c639e
# file: 00/46/00462be8-3e61-4931-8bda-dae1645c639e
trusted.gfid=0x00462be83e6149318bdadae1645c639e
trusted.gfid2path.05fcbdafdeea18ab=0x30326333373930632d386637622d346436652d393464362d3936393132313930643131312f66696c656c6f636b696e672e7079
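As an aside, the trusted.gfid2path value is just hex-encoded '<parent-gfid>/<basename>', so it can be decoded directly (assuming xxd is installed):
echo 30326333373930632d386637622d346436652d393464362d3936393132313930643131312f66696c656c6f636b696e672e7079 | xxd -r -p; echo
# -> 02c3790c-8f7b-4d6e-94d6-96912190d111/filelocking.py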
2024 Jan 18
2
Upgrade 10.4 -> 11.1 making problems
That's the same kind of errors I keep seeing on my 2 clusters,
regenerated some months ago. Seems a pseudo-split-brain that should be
impossible on a replica 3 cluster but keeps happening.
Sadly going to ditch Gluster ASAP.
Diego
On 18/01/2024 07:11, Hu Bert wrote:
> Good morning,
> heal still not running. Pending heals now sum up to 60K per brick.
> Heal was starting
2024 Jan 19
1
Upgrade 10.4 -> 11.1 making problems
Hi Strahil,
hm, don't get me wrong, it may sound a bit stupid, but... where do I
set the log level? Using Debian...
https://access.redhat.com/documentation/de-de/red_hat_gluster_storage/3/html/administration_guide/configuring_the_log_level
ls /etc/glusterfs/
eventsconfig.json glusterfs-georep-logrotate
gluster-rsyslog-5.8.conf group-db-workload group-gluster-block
group-nl-cache
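A hedged pointer: the per-volume log levels are not set in files under /etc/glusterfs but as volume options via the gluster CLI (see the reply further down); the currently effective values can be listed with something like:
gluster volume get workdata all | grep -i log-level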
2024 Jan 18
2
Upgrade 10.4 -> 11.1 making problems
Are you able to set the logs to debug level? It might provide a clue about what is going on.
Best Regards, Strahil Nikolov
On Thu, Jan 18, 2024 at 13:08, Diego Zuccato <diego.zuccato at unibo.it> wrote: That's the same kind of errors I keep seeing on my 2 clusters,
regenerated some months ago. Seems a pseudo-split-brain that should be
impossible on a replica 3 cluster but keeps
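A sketch of what debug level could look like here, using the diagnostics options that come up later in this thread (revert afterwards, DEBUG is very verbose):
gluster volume set workdata diagnostics.client-log-level DEBUG
gluster volume set workdata diagnostics.brick-log-level DEBUG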
2024 Jan 19
1
Upgrade 10.4 -> 11.1 making problems
gluster volume set testvol diagnostics.brick-log-level WARNING
gluster volume set testvol diagnostics.brick-sys-log-level WARNING
gluster volume set testvol diagnostics.client-log-level ERROR
gluster --log-level=ERROR volume status
---
Gilberto Nunes Ferreira
On Fri, 19 Jan 2024 at 05:49, Hu Bert <revirii at googlemail.com>
wrote:
> Hi Strahil,
> hm, don't get me
2024 Jan 24
1
Upgrade 10.4 -> 11.1 making problems
Hi,
Can you find and check the files with gfids:
60465723-5dc0-4ebe-aced-9f2c12e52642
faf59566-10f5-4ddd-8b0c-a87bc6a334fb
Use 'getfattr -d -e hex -m. ' command from https://docs.gluster.org/en/main/Troubleshooting/resolving-splitbrain/#analysis-of-the-output .
Best Regards, Strahil Nikolov
On Sat, Jan 20, 2024 at 9:44, Hu Bert <revirii at googlemail.com> wrote: Good morning,
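As an alternative to the aux-gfid mount shown in the Jan 25 message above, the gfids can also be checked directly on each brick; the brick path below is a placeholder, and the two .glusterfs subdirectories are simply the first two and next two characters of the gfid (matching the example elsewhere in this thread):
GFID=60465723-5dc0-4ebe-aced-9f2c12e52642
getfattr -d -e hex -m. /path/to/brick/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID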
2024 Jan 20
1
Upgrade 10.4 -> 11.1 making problems
Good morning,
thanks Gilberto, I did the first three (set to WARNING), but the last one
doesn't work. Anyway, after setting these three, some new messages
appear:
[2024-01-20 07:23:58.561106 +0000] W [MSGID: 114061]
[client-common.c:796:client_pre_lk_v2] 0-workdata-client-11: remote_fd
is -1. EBADFD [{gfid=faf59566-10f5-4ddd-8b0c-a87bc6a334fb},
{errno=77}, {error=File descriptor in bad state}]
2023 Mar 24
2
hardware issues and new server advice
Actually,
a pure NVMe-based volume will be a waste of money. Gluster excels when you have more servers and clients to consume that data.
I would choose LVM cache (NVMes) + HW RAID10 of SAS 15K disks to cope with the load. At least if you decide to go with more disks for the raids, use several (not the built-in ones) controllers.
@Martin,
in order to get a more reliable setup, you will have to
2010 Sep 28
3
xsyon game, black textures? black flickering
Hi,
Over the last day I tried getting the game xsyon to run. I was somewhat successful, but there is one last problem I cannot fix by myself.
Some regions/textures are black and the grass in the game is flickering.
Since I cannot describe it better, I made some screenshots so you can see how it looks. (The flickering is not visible, as it only shows up while moving.)
[Image:
2019 Jun 25
1
[Bug 110988] New: [NV49] Graphical issues on KDE desktop with GeForce 7950 GX2
https://bugs.freedesktop.org/show_bug.cgi?id=110988
Bug ID: 110988
Summary: [NV49] Graphical issues on KDE desktop with GeForce
7950 GX2
Product: Mesa
Version: unspecified
Hardware: Other
OS: All
Status: NEW
Severity: normal
Priority: medium
Component:
2011 Sep 04
2
AICc function with gls
Hi
I get the following error when I try to get the AICc for a gls regression
using qpcR:
> AICc(gls1)
Loading required package: nlme
Error in n/(n - p - 1) : 'n' is missing
My gls is like this:
> gls1
Generalized least squares fit by REML
Model: thercarnmax ~ therherbmax
Data: NULL
Log-restricted-likelihood: 2.328125
Coefficients:
(Intercept) therherbmax
1.6441405
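A hedged note: the error suggests AICc() cannot recover the sample size n from the gls fit. As a fallback, the small-sample correction can be computed by hand from the plain AIC: AICc = AIC + 2*k*(k+1)/(n - k - 1), where n is the number of observations and k the number of estimated parameters.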
2011 Oct 05
2
Age of Empires 2 GOLD flickers
Hi,
i've a Problem with my Age of Empires 2 Gold. The Monitor flickers when i start a game or hover a button in menu. Half-Life is working well. glxgears is also working well.
Ubuntu 11.10
Wine Version: 1.3.28
Video Card: GeForce 9400 GT
Game Version: 1.0c with no-CD crack
[Image: http://www.abload.de/img/bildschirmfotoam2011-19pvm.png ]
2023 Mar 30
1
Performance: lots of small files, hdd, nvme etc.
Well, you have *way* more files than we do... :)
On 30/03/2023 11:26, Hu Bert wrote:
> Just an observation: is there a performance difference between a sw
> raid10 (10 disks -> one brick) or 5x raid1 (each raid1 a brick)
Err... RAID10 is not 10 disks unless you stripe 5 mirrors of 2 disks.
> with
> the same disks (10TB hdd)? The heal processes on the 5xraid1-scenario
>
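For reference, a hedged mdadm sketch of the two layouts being compared (device names are hypothetical):
# one brick: a single RAID10 across all ten disks
mdadm --create /dev/md10 --level=10 --raid-devices=10 /dev/sd[b-k]
# five bricks: five separate RAID1 pairs (repeat per pair)
mdadm --create /dev/md11 --level=1 --raid-devices=2 /dev/sdb /dev/sdc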
2007 Mar 06
1
blocks 256k chunks on RAID 1
Hi, I have a RAID 1 (using mdadm) on CentOS Linux and in /proc/mdstat I
see this:
md7 : active raid1 sda2[0] sdb2[1]
26627648 blocks [2/2] [UU] [-->> it's OK]
md1 : active raid1 sdb3[1] sda3[0]
4192896 blocks [2/2] [UU] [-->> it's OK]
md2 : active raid1 sda5[0] sdb5[1]
4192832 blocks [2/2] [UU] [-->> it's OK]
md3 : active raid1 sdb6[1] sda6[0]
4192832 blocks [2/2]
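A hedged follow-up: /proc/mdstat only shows the block counts; the full parameters of an array (level, chunk size if any, state) are easier to read from mdadm itself:
mdadm --detail /dev/md7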