similar to: glusterFS

Displaying 20 results from an estimated 10000 matches similar to: "glusterFS"

2010 Feb 17
3
GlusterFs - Any new progress reports?
GlusterFS always strikes me as being "the solution" (one day...). It's had a lot of growing pains, but a few people on the list have already had success with it. Given that some time has gone by since I last asked: has anyone got more recent experience with it, and how has it worked out, with particular emphasis on Dovecot maildir storage? How has version 3 worked out for
2023 Sep 20
0
GlusterFS, move files, Samba ACL...
[ Received no feedback, so I am resending it... ] A little strange thing, but I'm banging my head against the wall... I needed to 'enlarge' my main filesystem (XFS-backed), which contains my main Samba share and a brick for a GFS share; I set up a new volume (for the VM), formatted it XFS, and moved all the files, taking care to umount and stop GFS first (so I was syncing the brick, not the GFS filesystem)
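A minimal sketch of the copy described here, assuming hypothetical brick paths /srv/brick-old and /srv/brick-new and a hypothetical volume name gv0; rsync's -A and -X are the long options --acls and --xattrs, and preserving extended attributes matters because GlusterFS keeps its own trusted.* xattrs on brick files:
  # Make sure the brick is not in use before copying (hypothetical volume name)
  gluster volume stop gv0
  # -a archive, -H hard links, -A ACLs, -X extended attributes
  rsync -aHAX --numeric-ids /srv/brick-old/ /srv/brick-new/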
2011 Jun 06
2
uninterruptible processes writing to glusterfs share
Hi! Sometimes on some client servers we have hanging uninterruptible processes ("ps aux" shows state "D"), and on one of them CPU I/O wait grows to 100% within a few minutes. You are not able to kill such processes - even "kill -9" doesn't work - and when you attach to such a process via "strace", you won't see anything and you cannot detach again. There
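As a quick reference for chasing this symptom, the D-state processes and the kernel code they are blocked in can be listed with standard tools (nothing here is specific to GlusterFS):
  # List processes in uninterruptible sleep (state D) and their wait channel
  ps -eo state,pid,user,wchan:32,cmd | awk '$1 == "D"'
  # Kernel stack of one hung process (replace 1234 with a real PID; needs root)
  cat /proc/1234/stack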
2023 Aug 28
1
GlusterFS, move files, Samba ACL...
A little strange thing, but I'm banging my head against the wall... I needed to 'enlarge' my main filesystem (XFS-backed), which contains my main Samba share and a brick for a GFS share; I set up a new volume (for the VM), formatted it XFS, and moved all the files, taking care to umount and stop GFS first (so I was syncing the brick, not the GFS filesystem), using the --acls and -attrs rsync options. All
2023 Aug 28
1
GlusterFS, move files, Samba ACL...
On Mon, 28 Aug 2023 15:34:13 +0200 Marco Gaiarin via samba <samba at lists.samba.org> wrote: > > A little strange thing, but I'm banging my head against the wall... > > > I needed to 'enlarge' my main filesystem (XFS-backed), which > contains my main Samba share and a brick for a GFS share; I set up a > new volume (for the VM), formatted it XFS, and moved all
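One way to check that Samba ACLs survived a move like the one quoted above is to compare a sample file on the old and new filesystems; the paths are only illustrative, and the security.NTACL xattr applies when the share uses the acl_xattr VFS module:
  # POSIX ACLs before and after (illustrative paths)
  getfacl /srv/old/share/somefile
  getfacl /srv/new/share/somefile
  # NT ACLs as stored by vfs_acl_xattr (run as root)
  getfattr -n security.NTACL /srv/old/share/somefile
  getfattr -n security.NTACL /srv/new/share/somefile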
2010 Nov 11
1
NFS Mounted GlusterFS, secondary groups not working
Howdy, I have a GlusterFS 3.1 volume being mounted on a client using NFS. From the client I created a directory under the mount point and set the permissions to root:groupa 750. My user account is a member of groupa on the client, yet I am unable to list the contents of the directory:
$ ls -l /gfs/dir1
ls: /gfs/dir1/: Permission denied
$ ls -ld /gfs/dir1
rwxr-x--- 9 root groupa 73728 Nov 9
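A hedged sketch of the usual first checks for this symptom: confirm the client session really carries the supplementary group, and on GlusterFS releases newer than the 3.1 used here, consider server-side group resolution; the volume name gv0 is an assumption, and the option name should be verified against your version:
  # On the client, as the affected user: is groupa actually listed?
  id -Gn
  # Newer Gluster releases can resolve secondary groups on the server side
  # (option name varies by version; see `gluster volume set help`)
  gluster volume set gv0 server.manage-gids on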
2017 Sep 06
0
GlusterFS as virtual machine storage
Hi all, I promised to do some testing and I finally found some time and infrastructure. So I have 3 servers with Gluster 3.10.5 on CentOS 7. I created a replicated volume with arbiter (2+1) and a VM on KVM (via OpenStack) with its disk accessible through gfapi. The volume group is set to virt (gluster volume set gv_openstack_1 virt). The VM runs current (all packages updated) Ubuntu Xenial. I set up
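For context, a volume like the one described (replica 2 plus arbiter, tuned for VM images) is typically created along these lines; the host names and brick paths are made up, and the usual spelling of the profile command includes the keyword 'group':
  # Create a 2+1 arbiter volume (hypothetical hosts and brick paths)
  gluster volume create gv_openstack_1 replica 3 arbiter 1 \
      node1:/bricks/gv1/brick node2:/bricks/gv1/brick node3:/bricks/gv1/arbiter
  # Apply the virt profile (sharding, eager locking, quorum defaults, ...)
  gluster volume set gv_openstack_1 group virt
  gluster volume start gv_openstack_1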
2017 Sep 06
0
GlusterFS as virtual machine storage
Mh, I never had to do that and I never had that problem. Is that an arbiter-specific thing? With replica 3 it just works. On Wed, Sep 06, 2017 at 03:59:14PM -0400, Alastair Neil wrote: > you need to set > > cluster.server-quorum-ratio 51% > > On 6 September 2017 at 10:12, Pavel Szalbot <pavel.szalbot at gmail.com> wrote: > > > Hi all, > > >
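The quorum options under discussion are set roughly as follows; cluster.server-quorum-ratio is a pool-wide setting (hence 'all'), while client-side quorum is per volume (gv0 is a placeholder name):
  # Server-side quorum for the whole trusted pool
  gluster volume set all cluster.server-quorum-ratio 51%
  gluster volume set gv0 cluster.server-quorum-type server
  # Client-side quorum; 'auto' is the usual choice for replica 3 / arbiter
  gluster volume set gv0 cluster.quorum-type auto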
2011 Sep 12
0
cannot access /mnt/glusterfs: Stale NFS file handle
I've mounted my glusterfs share as I always do: mount -t glusterfs `hostname`:/bhl-volume /mnt/glusterfs and I can see it in df:
# df -h | tail -n1
clustr-01:/bhl-volume   90T   51T   39T  57% /mnt/glusterfs
but I can't change into it, or access any of the files in it:
# ls -al /mnt/glusterfs
ls: cannot access /mnt/glusterfs: Stale NFS file handle
Any idea what could be causing
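When a FUSE mount gets into this state, the usual first step is simply to remount it; a lazy unmount helps if the mount point is held busy. This reuses the mount line quoted above, and the log path shown is only the conventional default:
  # Lazily unmount the dead mount point, then mount it again
  umount -l /mnt/glusterfs
  mount -t glusterfs `hostname`:/bhl-volume /mnt/glusterfs
  # If it recurs, check the client log (named after the mount point by default)
  less /var/log/glusterfs/mnt-glusterfs.log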
2023 Jun 30
1
remove_me files building up
Hi, We're running a cluster with two data nodes and one arbiter, and have sharding enabled. We had an issue a while back where one of the servers crashed; we got the server back up and running, ensured that all healing entries cleared, and also increased the server spec (CPU/memory), as this seemed to be the likely cause. Since then, however, we've seen some strange behaviour,
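A hedged sketch of the checks that go with this kind of report: confirm nothing is still pending heal, and look at the per-brick shard housekeeping directory where deleted-shard markers (the 'remove_me' entries) accumulate; the volume name gv1 and brick path are taken from later messages in the thread:
  # Confirm healing really is clean
  gluster volume heal gv1 info summary
  # On each brick, deleted-shard markers live under .shard/.remove_me
  ls /data/glusterfs/gv1/brick1/brick/.shard/.remove_me | wc -l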
2023 Jul 04
1
remove_me files building up
Hi, Thanks for your response; please find the xfs_info for each brick on the arbiter below:
root at uk3-prod-gfs-arb-01:~# xfs_info /data/glusterfs/gv1/brick1
meta-data=/dev/sdc1    isize=512    agcount=31, agsize=131007 blks
         =             sectsz=512   attr=2, projid32bit=1
         =             crc=1        finobt=1, sparse=1, rmapbt=0
         =
2013 Dec 10
1
Error after crash of Virtual Machine during migration
Greetings, Legend: storage-gfs-3-prd - the first gluster; storage-1-saas - the new gluster to which "the first gluster" had to be migrated; storage-gfs-4-prd - the second gluster (which had to be migrated later). I started the replace-brick command: 'gluster volume replace-brick sa_bookshelf storage-gfs-3-prd:/ydp/shared storage-1-saas:/ydp/shared start' During that Virtual
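For reference, the data-migrating replace-brick workflow used here looked roughly like this on old releases; current GlusterFS only supports 'commit force' followed by self-heal. Volume and host names are the ones quoted above:
  # Old-style replace-brick with data migration (removed in later releases)
  gluster volume replace-brick sa_bookshelf \
      storage-gfs-3-prd:/ydp/shared storage-1-saas:/ydp/shared start
  gluster volume replace-brick sa_bookshelf \
      storage-gfs-3-prd:/ydp/shared storage-1-saas:/ydp/shared status
  gluster volume replace-brick sa_bookshelf \
      storage-gfs-3-prd:/ydp/shared storage-1-saas:/ydp/shared commit
  # Current releases: 'commit force' only, then let self-heal repopulate the brick
  gluster volume replace-brick sa_bookshelf \
      storage-gfs-3-prd:/ydp/shared storage-1-saas:/ydp/shared commit force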
2017 Sep 08
0
GlusterFS as virtual machine storage
Seems to be so, but if we look back at the described setup and procedure - what is the reason for IOPS to stop/fail? Rebooting a node is somewhat similar to updating Gluster, replacing cabling, etc. IMO this should not always end up with the arbiter blaming the other node, and even though I did not investigate this issue deeply, I do not believe the blame is the reason for IOPS to drop. On Sep 7, 2017
2010 Feb 16
1
Migrate from an NFS storage to GlusterFS
Hi - I already have an NFS server in production which shares web data for a 4-node Apache cluster. I'd like to switch to GlusterFS. Do I have to copy the files from the NFS storage to a GlusterFS one, or would it work if I just installed GlusterFS on that server, configuring a GlusterFS volume on top of the existing storage directory (assuming, of course, the NFS server is shut down and not used
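A minimal sketch of the 'reuse the existing directory' idea being asked about, with made-up host and path names; note this is only a sketch - GlusterFS expects to manage brick contents itself, so the generally recommended path is an empty brick plus a copy through the mount point:
  # Hypothetical: turn the existing data directory into a single brick
  gluster volume create webdata nfsserver:/srv/webdata force
  gluster volume start webdata
  # Clients then mount the volume instead of the NFS export
  mount -t glusterfs nfsserver:/webdata /var/www/data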
2023 Jul 03
1
remove_me files building up
Hi, you mentioned that the arbiter bricks run out of inodes. Are you using XFS? Can you provide the xfs_info of each brick? Best Regards, Strahil Nikolov On Sat, Jul 1, 2023 at 19:41, Liam Smith <liam.smith at ek.co> wrote: Hi, We're running a cluster with two data nodes and one arbiter, and have sharding enabled. We had an issue a while back where one of the servers
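The information being asked for can be collected on the arbiter in one pass; the brick mount points below follow the ones that appear elsewhere in the thread:
  # Filesystem geometry and inode usage for each arbiter brick
  for b in /data/glusterfs/gv1/brick1 /data/glusterfs/gv1/brick2 /data/glusterfs/gv1/brick3; do
      xfs_info "$b"
      df -i "$b"
  done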
2023 Jul 05
1
remove_me files building up
Hi Strahil, This is the output from the commands:
root at uk3-prod-gfs-arb-01:~# du -h -x -d 1 /data/glusterfs/gv1/brick1/brick
2.2G    /data/glusterfs/gv1/brick1/brick/.glusterfs
24M     /data/glusterfs/gv1/brick1/brick/scalelite-recordings
16K     /data/glusterfs/gv1/brick1/brick/mytute
18M     /data/glusterfs/gv1/brick1/brick/.shard
0
2023 Jul 04
1
remove_me files building up
Hi Strahil, We're using gluster to act as a share for an application to temporarily process and store files, before they're then archived off overnight. The issue we're seeing isn't the bricks running out of inodes, but the actual disk space on the arb server running low. This is the df -h output for the bricks on the arb server:
/dev/sdd1        15G   12G  3.3G  79%
2023 Jul 04
1
remove_me files building up
Hi Liam, I saw that your XFS uses 'imaxpct=25', which for an arbiter brick is a little bit low. If you have free space on the bricks, increase the maxpct to a bigger value, like: xfs_growfs -m 80 /path/to/brick That will set 80% of the filesystem for inodes, which you can verify with df -i /brick/path (compare before and after). This way you won't run out of inodes in the future. Of course, always
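A worked version of that suggestion, using one of the brick mount points from earlier in the thread as a placeholder; -m raises imaxpct, the ceiling on the share of the filesystem that XFS may use for inodes:
  df -i /data/glusterfs/gv1/brick1            # note Inodes/IFree before
  xfs_growfs -m 80 /data/glusterfs/gv1/brick1
  df -i /data/glusterfs/gv1/brick1            # the inode count should have grown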
2009 May 11
1
Problem of afr in glusterfs 2.0.0rc1
Hello: I have hit this problem twice when copying files into the GFS space. I have five clients and two servers; when I copy files into /data (which is the GFS space) on client A, the problem appears: in the same path, A can see all the files, but B, C or D can't see them all - it looks like some files are missing - but when I mount again, the files appear
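On a release as old as 2.0.0rc1, AFR self-heal was triggered by lookups, so the documented workaround for missing replicas was a recursive stat of the tree from a client mount; a sketch with /data standing in for the mount point, as in the message:
  # Walk the whole mount so every file gets looked up (and self-healed)
  find /data -noleaf -print0 | xargs --null stat > /dev/null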
2023 Jul 04
1
remove_me files building up
Thanks for the clarification. That behaviour is quite weird, as arbiter bricks should hold only metadata. What does the following show on host uk3-prod-gfs-arb-01:
du -h -x -d 1 /data/glusterfs/gv1/brick1/brick
du -h -x -d 1 /data/glusterfs/gv1/brick3/brick
du -h -x -d 1 /data/glusterfs/gv1/brick2/brick
If indeed the shards are taking space - that is a really strange situation. From which version