Displaying 20 results from an estimated 80000 matches similar to: "No samba-vfs-glusterfs package"
2019 Dec 20
1
GFS performance under heavy traffic
Hi David,
Also consider using the mount option to specify backup servers via 'backupvolfile-server=server2:server3' (you can define more, but I don't think replica volumes greater than 3 are useful, maybe in some special cases).
That way, when the primary is lost, the client can reach a backup server without disruption.
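For illustration, a fuse mount along these lines would give the client that fallback (server and volume names are hypothetical):
# server2 and server3 are only used to fetch the volume file if server1 is unreachable
mount -t glusterfs -o backupvolfile-server=server2:server3 server1:/myvol /mnt/myvol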
P.S.: Client may 'hang' - if the primary server got
2020 Jul 01
3
Samba-4.10.4 strange behaviour
Hi Felix,
thanks for the share.
Sadly it doesn't work and I don't know how to start debugging this one.
I tried your config (had to switch from domain member to standalone) but it's the same:
[global]
        netbios name = yourName
        workgroup = yourWorkgroup
        realm = YourRealm
        log file = /var/log/samba/log.%m
        max log size = 50
        security = ads
2019 Nov 09
0
Sudden, dramatic performance drops with Glusterfs
There are options that can help a little bit with the ls/find.
Still, many devs will need to know your settings, so the volume's info is very important.
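That information comes from a command like this (volume name hypothetical):
# lists bricks, volume type and all non-default options
gluster volume info gv0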
Try the 'noatime,nodiratime' mount options (if ZFS supports them).
Also, as this is a new cluster you can try to setup XFS and verify if the issue is the same.
Red Hat provides an XFS options calculator, but it requires some kind of subscription
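As a rough sketch of the suggested mount options for an XFS brick (device and paths are hypothetical):
# /etc/fstab entry for a brick filesystem without access-time updates
/dev/sdb1  /data/glusterfs/brick1  xfs  defaults,noatime,nodiratime  0 0
# the rough ZFS equivalent is a dataset property: zfs set atime=off <pool>/<dataset>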
2020 Jul 01
0
Samba-4.10.4 strange behaviour
Dear Strahil,
please find my current settings below:
[global]
        netbios name = yourName
        workgroup = yourWorkgroup
        realm = YourRealm
        log file = /var/log/samba/log.%m
        max log size = 50
        security = ads
        clustering = yes
        max protocol = SMB3
        kernel share modes = no
        kernel change notify = no
        kernel
2024 Sep 21
1
GlusterFS Replica over ZFS
I assume you will be using the volumes for VM workloads. There is a 'virt' group of settings optimized for virtualization (located at /var/lib/glusterd/groups/virt), which is also used by oVirt. It guarantees that VMs can live-migrate without breaking.
Best Regards,
Strahil Nikolov
On Fri, Sep 20, 2024 at 19:00, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote: Hi there.
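Applying that group is a single command (volume name hypothetical):
# sets every option listed in /var/lib/glusterd/groups/virt on the volume
gluster volume set vmstore group virt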
2024 Feb 18
1
Graceful shutdown doesn't stop all Gluster processes
Well,
you prepare the host for shutdown, right? So why don't you set up systemd to start the container and shut it down before the bricks?
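A minimal sketch of that ordering with a systemd-managed container (unit name, container name and runtime are assumptions, not from this thread):
# /etc/systemd/system/my-app-container.service (fragment)
[Unit]
Requires=glusterd.service
# After= also reverses the order at shutdown: the container is stopped before glusterd
After=glusterd.service

[Service]
ExecStart=/usr/bin/podman start -a my-app
ExecStop=/usr/bin/podman stop my-app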
Best Regards,
Strahil Nikolov
On Friday, 16 February 2024 at 18:48:36 GMT+2, Anant Saraswat <anant.saraswat at techblue.co.uk> wrote:
Hi Strahil,
Yes, we mount the fuse to the physical host and then use bind mount to
2024 Feb 16
1
Graceful shutdown doesn't stop all Gluster processes
Hi Strahil,
Yes, we mount the fuse to the physical host and then use bind mount to provide access to the container.
The same physical host also runs the gluster server. Therefore, when we stop gluster using 'stop-all-gluster-processes.sh' on the physical host, it kills the fuse mount and impacts containers accessing this volume via bind.
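For context, that layout is roughly (volume name and paths are hypothetical):
# on the physical host: fuse-mount the volume, then bind-mount it into the container's path
mount -t glusterfs localhost:/gv1 /mnt/gv1
mount --bind /mnt/gv1 /srv/containers/app/data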
Thanks,
Anant
________________________________
2018 Apr 08
1
Wiki update
Hello Community,
my name is Strahil Nikolov (hunter86_bg) and I would like to update the
following wiki page .
In section "Create the New Initramfs or Initrd" there should be an
additional line for CentOS7:
mount --bind /run /mnt/sysimage/run
The 'run' directory is needed especially if you need to start the
multipathd.service before recreating the initramfs ('/' is on
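For completeness, a rough sequence from the rescue environment might look like this (a sketch, not the wiki's exact wording; the kernel version is resolved inside the chroot):
mount --bind /run /mnt/sysimage/run
chroot /mnt/sysimage
dracut -f /boot/initramfs-$(uname -r).img $(uname -r)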
2024 Feb 16
1
Graceful shutdown doesn't stop all Gluster processes
Hi Anant,
Do you use the fuse client in the container? Wouldn't it be more reasonable to mount the fuse and then use a bind mount to provide access to the container?
Best Regards,
Strahil Nikolov
On Fri, Feb 16, 2024 at 15:02, Anant Saraswat <anant.saraswat at techblue.co.uk> wrote: Okay, I understand. Yes, it would be beneficial to include an option for skipping the client
2023 May 15
1
Error in gluster v11
Hi there, anyone in the Gluster Devel list.
Any fix about this issue?
May 14 07:05:39 srv01 vms[9404]: [2023-05-14 10:05:39.618424 +0000] C
[gf-io-uring.c:612:gf_io_uring_cq_process_some] (-->/lib/x86_64
-linux-gnu/libglusterfs.so.0(+0x849ae) [0x7fb4ebace9ae]
-->/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x8a2e5) [0x7fb4ebad42e5]
-->/lib
/x86_64-linux-gnu/libglusterfs.so.0(+0x8a1a5)
2023 Jul 05
1
remove_me files building up
Hi Strahil,
This is the output from the commands:
root at uk3-prod-gfs-arb-01:~# du -h -x -d 1 /data/glusterfs/gv1/brick1/brick
2.2G /data/glusterfs/gv1/brick1/brick/.glusterfs
24M /data/glusterfs/gv1/brick1/brick/scalelite-recordings
16K /data/glusterfs/gv1/brick1/brick/mytute
18M /data/glusterfs/gv1/brick1/brick/.shard
0
2023 May 16
1
[Gluster-devel] Error in gluster v11
The referenced GitHub issue now has a potential patch that could fix the
problem, though it will need to be verified. Could you try to apply the
patch and check if the problem persists?
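If it helps, a hedged sketch of trying such a patch against a source checkout (the URL is a placeholder for the patch referenced in the GitHub issue):
cd glusterfs                                  # source checkout of the affected version
curl -L -o io-uring-fix.patch <patch-url>     # URL from the GitHub issue
git apply io-uring-fix.patch
# rebuild, reinstall, then re-run the workload that triggered the crash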
On Mon, May 15, 2023 at 2:10 AM Gilberto Ferreira <
gilberto.nunes32 at gmail.com> wrote:
> Hi there, anyone in the Gluster Devel list.
>
> Any fix about this issue?
>
> May 14 07:05:39
2023 May 16
1
[Gluster-devel] Error in gluster v11
Hi Xavi
That depends. Is it safe? I have this env in production, you know?
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram
On Tue, May 16, 2023 at 07:45, Xavi Hernandez <jahernan at redhat.com>
wrote:
escreveu:
> The referenced GitHub issue now has a potential patch that could fix the
> problem, though it will need to be verified. Could you try to apply the
2019 Dec 24
1
GFS performance under heavy traffic
Hi David,
On Dec 24, 2019 02:47, David Cunningham <dcunningham at voisonics.com> wrote:
>
> Hello,
>
> In testing we found that actually the GFS client having access to all 3 nodes made no difference to performance. Perhaps that's because the 3rd node that wasn't accessible from the client before was the arbiter node?
It makes sense, as no data is being generated towards
2023 Jul 04
1
remove_me files building up
Thanks for the clarification.
That behaviour is quite weird, as arbiter bricks should hold only metadata.
What does the following show on host uk3-prod-gfs-arb-01:
du -h -x -d 1 /data/glusterfs/gv1/brick1/brick
du -h -x -d 1 /data/glusterfs/gv1/brick3/brick
du -h -x -d 1 /data/glusterfs/gv1/brick2/brick
If indeed the shards are taking space - that is a really strange situation. From which version
2024 Feb 16
2
Graceful shutdown doesn't stop all Gluster processes
Okay, I understand. Yes, it would be beneficial to include an option for skipping the client processes. This way, we could utilize the 'stop-all-gluster-processes.sh' script with that option to stop the gluster server process while retaining the fuse mounts.
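Something along these lines, where the flag is purely hypothetical (it does not exist in the script today):
# proposed usage - '--skip-clients' only illustrates the requested option, it is not a real flag
./stop-all-gluster-processes.sh --skip-clients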
________________________________
From: Aravinda <aravinda at kadalu.tech>
Sent: 16 February 2024 12:36 PM
To: Anant Saraswat
2023 May 16
1
[Gluster-devel] Error in gluster v11
Hi Gilberto,
On Tue, May 16, 2023 at 12:56 PM Gilberto Ferreira <
gilberto.nunes32 at gmail.com> wrote:
> Hi Xavi
> That depends. Is it safe? I have this env in production, you know?
>
It should be safe, but I wouldn't test it in production. Can't you try it
in a test environment first?
Xavi
>
> ---
> Gilberto Nunes Ferreira
> (47) 99676-7530 -
2019 Dec 27
0
GFS performance under heavy traffic
Hi David,
Gluster supports live rolling upgrade, so there is no need to redeploy at all - but the migration notes should be checked as some features must be disabled first.
Also, the gluster client should remount in order to bump the gluster op-version.
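For reference, the op-version side of that looks roughly like this (the target number is a placeholder matching the new release):
gluster volume get all cluster.op-version       # current cluster-wide op-version
gluster volume get all cluster.max-op-version   # highest op-version the installed binaries support
gluster volume set all cluster.op-version <N>   # raise it once every node runs the new version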
What kind of workload do you have?
I'm asking as there are predefined (and recommended) settings located at /var/lib/glusterd/groups.
You
2024 Jan 27
1
Upgrade 10.4 -> 11.1 making problems
You don't need to mount it.
Like this:
# getfattr -d -e hex -m. /path/to/brick/.glusterfs/00/46/00462be8-3e61-4931-8bda-dae1645c639e
# file: 00/46/00462be8-3e61-4931-8bda-dae1645c639e
trusted.gfid=0x00462be83e6149318bdadae1645c639e
trusted.gfid2path.05fcbdafdeea18ab=0x30326333373930632d386637622d346436652d393464362d3936393132313930643131312f66696c656c6f636b696e672e7079
2023 Nov 27
1
Announcing Gluster release 11.1
I am getting this errors:
Err:10
https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/bookworm/amd64/apt
bookworm/main amd64 glusterfs-server amd64 11.1-1
Error reading from server - read (5: Input/output error) [IP: 8.43.85.185
443]
Fetched 35.9 kB in 36s (1,006 B/s)
E: Failed to fetch