search for: gvol0

Displaying 8 results from an estimated 8 matches for "gvol0".

2023 Feb 23
1
Big problems after update to 9.6
...root 2052 1 0 2022 ? 00:00:00 /usr/bin/python3 /usr/sbin/glustereventsd --pid-file /var/run/glustereventsd.pid
root 2062 1 3 2022 ? 10-11:57:16 /usr/sbin/glusterfs --fuse-mountopts=noatime --process-name fuse --volfile-server=br --volfile-server=sg --volfile-id=/gvol0 --fuse-mountopts=noatime /mnt/glusterfs
root 2379 2052 0 2022 ? 00:00:00 /usr/bin/python3 /usr/sbin/glustereventsd --pid-file /var/run/glustereventsd.pid
root 5884 1 5 2022 ? 18-16:08:53 /usr/sbin/glusterfsd -s br --volfile-id gvol0.br.nodirectwritedata-gluster-gvol...
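The listing shows the three usual Gluster daemons: glustereventsd (events), glusterfs (the FUSE client mount), and glusterfsd (one per brick). A generic way to produce the same view on any node (a sketch, nothing version-specific):

# Show all running Gluster daemons; the [g] keeps grep itself out of the output
ps -ef | grep '[g]luster'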
2023 Feb 24
1
Big problems after update to 9.6
...am <dcunningham at voisonics.com> wrote: We've tried to remove "sg" from the cluster so we can re-install the GlusterFS node on it, but the following command run on "br" also gives a timeout error:
gluster volume remove-brick gvol0 replica 1 sg:/nodirectwritedata/gluster/gvol0 force
How can we tell "br" to just remove "sg" without trying to contact it? On Fri, 24 Feb 2023 at 10:31, David Cunningham <dcunningham at voisonics.com> wrote: Hello, We have a c...
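When the peer being removed is permanently unreachable, the usual follow-up to the remove-brick above is a forced detach, which drops the peer from the trusted pool without contacting it (a sketch reusing the host and brick names from this thread):

# Reduce the replica count and drop the dead peer's brick
gluster volume remove-brick gvol0 replica 1 sg:/nodirectwritedata/gluster/gvol0 force
# Then remove "sg" from the pool; force skips the handshake with the dead node
gluster peer detach sg force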
2017 Jul 26
0
Web python framework under GlusterFS
...But even with the new settings below the application is still very slow.
sudo gluster volume set gvol0 features.cache-invalidation on
sudo gluster volume set gvol0 features.cache-invalidation-timeout 600
sudo gluster volume set gvol0 performance.stat-prefetch on
sudo gluster volume set gvol0 performance.cache-samba-metadata on
sudo gluster volume set gvol0 performance.cache-invalidation on
sudo glus...
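These options are the standard metadata-cache tuning set. Recent GlusterFS releases also ship them as a predefined group profile, so they can usually be applied in one command (a sketch; assumes a version that includes the metadata-cache group file):

# Applies cache-invalidation, stat-prefetch, md-cache-timeout, etc. together
sudo gluster volume set gvol0 group metadata-cache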
2018 May 27
0
gluster volume status under FreeBSD 11.1
...I guess this has nothing to do with GlusterFS per se, but any tips would be greatly appreciated. The issue is that executing 'gluster volume status' under FreeBSD 11.1 returns no data, although functionally everything is working fine:

# gluster volume status
Status of volume: gvol0
Gluster process                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick srv-smtp-01:/mnt/da1/brick1/gvol0    N/A       N/A        N       N/A
Brick srv-smtp-02:/mnt/da1/brick2/gvol0    N/A       N/A        N...
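When volume status returns N/A across the board, two cross-checks that do not depend on the brick RPC path are volume info and a direct look at the brick processes (a generic sketch; the BSD-style ps invocation is an assumption for FreeBSD):

# Served from the local glusterd configuration, so it works even when status queries fail
gluster volume info gvol0
# Confirm the brick daemons are actually running
ps aux | grep '[g]lusterfsd'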
2019 Dec 20
1
GFS performance under heavy traffic
...nics.com> wrote:
>>
>>
>> Hi Strahil,
>>
>> The chart attached to my original email is taken from the GFS server.
>>
>> I'm not sure what you mean by accessing all bricks simultaneously. We've mounted it from the client like this:
>> gfs1:/gvol0 /mnt/glusterfs/ glusterfs defaults,direct-io-mode=disable,_netdev,backupvolfile-server=gfs2,fetch-attempts=10 0 0
>>
>> Should we do something different to access all bricks simultaneously?
>>
>> Thanks for your help!
>>
>>
>> On Fri, 20 Dec 2019 at 11:47,...
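The fstab line above names a single backupvolfile-server. The mount helper also accepts a colon-separated list via backup-volfile-servers, which is the more general form (a sketch reusing gfs1/gfs2 and the mount point from the thread):

# /etc/fstab: fetch the volfile from gfs2 (or further servers) if gfs1 is down
gfs1:/gvol0 /mnt/glusterfs glusterfs defaults,_netdev,backup-volfile-servers=gfs2 0 0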
2019 Dec 28
1
GFS performance under heavy traffic
...> The chart attached to my original email is taken from the GFS server.
>>> >>>>>
>>> >>>>> I'm not sure what you mean by accessing all bricks simultaneously. We've mounted it from the client like this:
>>> >>>>> gfs1:/gvol0 /mnt/glusterfs/ glusterfs defaults,direct-io-mode=disable,_netdev,backupvolfile-server=gfs2,fetch-attempts=10 0 0
>>> >>>>>
>>> >>>>> Should we do something different to access all bricks simultaneously?
>>> >>>>>
>>>...
2019 Dec 24
1
GFS performance under heavy traffic
...>>>>>
>>>>> The chart attached to my original email is taken from the GFS server.
>>>>>
>>>>> I'm not sure what you mean by accessing all bricks simultaneously. We've mounted it from the client like this:
>>>>> gfs1:/gvol0 /mnt/glusterfs/ glusterfs defaults,direct-io-mode=disable,_netdev,backupvolfile-server=gfs2,fetch-attempts=10 0 0
>>>>>
>>>>> Should we do something different to access all bricks simultaneously?
>>>>>
>>>>> Thanks for your help!
>>&...
2019 Dec 27
0
GFS performance under heavy traffic
...t;>>> The chart attached to my original email is taken from the GFS server. >> >>>>> >> >>>>> I'm not sure what you mean by accessing all bricks simultaneously. We've mounted it from the client like this: >> >>>>> gfs1:/gvol0 /mnt/glusterfs/ glusterfs defaults,direct-io-mode=disable,_netdev,backupvolfile-server=gfs2,fetch-attempts=10 0 0 >> >>>>> >> >>>>> Should we do something different to access all bricks simultaneously? >> >>>>> >> >>>>...