Displaying 20 results from an estimated 10000 matches similar to: "glusterFS"
2010 Feb 17
3
GlusterFs - Any new progress reports?
GlusterFs always strikes me as being "the solution" (one day...). It's
had a lot of growing pains, but a few on the list have already had
success using it.
Given that some time has gone by since I last asked - has anyone got any more
recent experience with it, and how has it worked out, with particular
emphasis on Dovecot maildir storage? How has version 3 worked out for
2011 Jun 06
2
uninterruptible processes writing to glusterfs share
hi!
sometimes we get hanging uninterruptible processes on some client servers
("ps aux" shows state "D"), and on one of them the CPU I/O wait grows to
100% within a few minutes.
You are not able to kill such processes - even "kill -9" doesn't work -
and when you attach "strace" to such a process you won't see anything,
and you cannot detach it again.
there
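A minimal diagnostic sketch for this kind of hang, assuming a Linux client with the FUSE mount; <PID> below is a placeholder:
# List processes stuck in uninterruptible sleep (state "D") and the kernel
# function they are waiting in
ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /^D/'
# For one stuck process, the kernel stack usually shows where it is blocked
cat /proc/<PID>/stack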
2023 Aug 28
1
GlusterFS, move files, Samba ACL...
A little strange thing, but I'm hitting my head against the wall...
I needed to 'enlarge' my main filesystem (XFS backed-up), which contains my
main Samba share and a brick for a GFS share; I set up a new volume (for
the VM), formatted it XFS, and moved all the files, taking care to umount and stop GFS
(so, syncing the brick, not the GFS filesystem) and using the --acls and --xattrs
rsync options.
All
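A minimal sketch of the copy described above, with hypothetical brick paths; -A/--acls and -X/--xattrs preserve POSIX ACLs and extended attributes:
# Hypothetical source and destination brick paths
rsync -aAX --numeric-ids /srv/old-brick/ /srv/new-brick/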
2023 Sep 20
0
GlusterFS, move files, Samba ACL...
[ Received no feedback, I resend it... ]
A little strange thing, but I'm hitting my head against the wall...
I needed to 'enlarge' my main filesystem (XFS backed-up), which contains my
main Samba share and a brick for a GFS share; I set up a new volume (for
the VM), formatted it XFS, and moved all the files, taking care to umount and stop GFS
(so, syncing the brick, not the GFS filesystem)
2010 Nov 11
1
NFS Mounted GlusterFS, secondary groups not working
Howdy,
I have a GlusterFS 3.1 volume being mounted on a client using NFS. From the client I created a directory under the mount point and set the permissions to root:groupa 750
My user account is a member of groupa on the client, yet I am unable to list the contents of the directory:
$ ls -l /gfs/dir1
ls: /gfs/dir1/: Permission denied
$ ls -ld /gfs/dir1
drwxr-x--- 9 root groupa 73728 Nov 9
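A quick sketch of checks that narrow this down, assuming the same user and group names as above:
# On the client: which supplementary groups does the kernel report for me?
id
# On both client and server: does groupa resolve to the same members/GID?
getent group groupa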
2023 Aug 28
1
GlusterFS, move files, Samba ACL...
On Mon, 28 Aug 2023 15:34:13 +0200
Marco Gaiarin via samba <samba at lists.samba.org> wrote:
>
> A little strange thing, but I'm hitting my head against the wall...
>
>
> I needed to 'enlarge' my main filesystem (XFS backed-up), which
> contains my main Samba share and a brick for a GFS share; I set up a
> new volume (for the VM), formatted it XFS, and moved all
2017 Sep 06
0
GlusterFS as virtual machine storage
Hi all,
I promised to do some testing and I finally found some time and
infrastructure.
So I have 3 servers with Gluster 3.10.5 on CentOS 7. I created a
replicated volume with arbiter (2+1) and a VM on KVM (via OpenStack)
with its disk accessible through gfapi. The volume group is set to virt
(gluster volume set gv_openstack_1 virt). The VM runs a current (all
packages updated) Ubuntu Xenial.
I set up
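A sketch of the setup described above, assuming hypothetical hostnames and brick paths (the volume name is taken from the post); "group virt" applies Gluster's predefined virt option group:
# Hypothetical hosts gfs1/gfs2/gfs3; gfs3 holds the arbiter brick
gluster volume create gv_openstack_1 replica 3 arbiter 1 \
    gfs1:/bricks/gv1 gfs2:/bricks/gv1 gfs3:/bricks/gv1-arbiter
gluster volume set gv_openstack_1 group virt
gluster volume start gv_openstack_1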
2017 Sep 06
0
GlusterFS as virtual machine storage
Mh, I never had to do that and I never had that problem. Is that an
arbiter-specific thing? With replica 3 it just works.
On Wed, Sep 06, 2017 at 03:59:14PM -0400, Alastair Neil wrote:
> you need to set
>
> cluster.server-quorum-ratio 51%
>
> On 6 September 2017 at 10:12, Pavel Szalbot <pavel.szalbot at gmail.com> wrote:
>
> > Hi all,
> >
>
2017 Sep 08
0
GlusterFS as virtual machine storage
Seems to be so, but if we look back at the described setup and procedure -
what is the reason for IOPS to stop/fail? Rebooting a node is somewhat
similar to updating gluster, replacing cabling, etc. IMO this should not
always end up with the arbiter blaming the other node, and even though I did
not investigate this issue deeply, I do not believe the blame is the reason
for IOPS to drop.
On Sep 7, 2017
2009 May 11
1
Problem of afr in glusterfs 2.0.0rc1
Hello:
I have hit this problem twice when copying some files into the GFS space.
I have five clients and two servers; when I copy files into /data, which is the GFS space, on client A, the problem appears.
In the same path, host A can see all the files, but B, C or D couldn't see all the files - it looks like some files are missing - but when I mount again the files appear
2017 Sep 07
0
GlusterFS as virtual machine storage
Hi Neil, docs mention two live nodes of replica 3 blaming each other and
refusing to do IO.
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/#1-replica-3-volume
On Sep 7, 2017 17:52, "Alastair Neil" <ajneil.tech at gmail.com> wrote:
> *shrug* I don't use arbiter for vm work loads just straight replica 3.
2017 Sep 06
2
GlusterFS as virtual machine storage
you need to set
cluster.server-quorum-ratio 51%
On 6 September 2017 at 10:12, Pavel Szalbot <pavel.szalbot at gmail.com> wrote:
> Hi all,
>
> I promised to do some testing and I finally found some time and
> infrastructure.
>
> So I have 3 servers with Gluster 3.10.5 on CentOS 7. I created a
> replicated volume with arbiter (2+1) and a VM on KVM (via
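For reference, a sketch of how the option mentioned above is applied; cluster.server-quorum-ratio is cluster-wide (set on "all"), and server quorum is typically enabled per volume (volume name taken from the quoted post):
gluster volume set all cluster.server-quorum-ratio 51%
gluster volume set gv_openstack_1 cluster.server-quorum-type server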
2017 Sep 07
2
GlusterFS as virtual machine storage
True, but working your way into that problem with replica 3 is a lot harder
than with just replica 2 + arbiter.
On 7 September 2017 at 14:06, Pavel Szalbot <pavel.szalbot at gmail.com> wrote:
> Hi Neil, docs mention two live nodes of replica 3 blaming each other and
> refusing to do IO.
>
> https://gluster.readthedocs.io/en/latest/Administrator%
>
2017 Sep 07
3
GlusterFS as virtual machine storage
*shrug* I don't use arbiter for VM workloads, just straight replica 3.
There are some gotchas with using an arbiter for VM workloads. If
quorum-type is auto and a brick that is not the arbiter drops out, then if
the up brick is dirty as far as the arbiter is concerned (i.e. the only good
copy is on the down brick), you will get ENOTCONN and your VMs will halt on
IO.
On 6 September 2017 at 16:06,
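A small sketch of how to inspect the client-quorum settings this refers to, assuming a hypothetical volume name:
gluster volume get myvol cluster.quorum-type
gluster volume get myvol cluster.quorum-count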
2010 Feb 16
1
Migrate from an NFS storage to GlusterFS
Hi -
I already have an NFS server in production which shares Web data for a
4-node Apache cluster. I'd like to switch to GlusterFS.
Do I have to copy the files from the NFS storage to a GlusterFS one, or
might it work if I just install GlusterFS on that server, configuring a
GFS volume on the existing storage directory (assuming, of course, the
NFS server is shut down and not used
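A sketch of the second approach, with hypothetical hostname, paths and volume name; whether Gluster will cleanly adopt a directory that already contains data has caveats, so treat this only as an outline:
# Single-brick volume created on the existing data directory
gluster volume create webdata nfs-server:/var/www/shared force
gluster volume start webdata
# Clients then mount the Gluster volume instead of the old NFS export
mkdir -p /mnt/webdata
mount -t glusterfs nfs-server:/webdata /mnt/webdata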
2011 Sep 12
0
cannot access /mnt/glusterfs: Stale NFS file handle
I've mounted my glusterfs share as I always do:
mount -t glusterfs `hostname`:/bhl-volume /mnt/glusterfs
and I can see it in df:
# df -h | tail -n1
clustr-01:/bhl-volume 90T 51T 39T 57% /mnt/glusterfs
but I can't change into it, or access any of the files in it:
# ls -al /mnt/glusterfs
ls: cannot access /mnt/glusterfs: Stale NFS file handle
Any idea what could be causing
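One possible recovery sketch, using the hostname and volume from the df output above; a lazy unmount followed by a fresh mount usually clears a stale handle on the client side, though it does not explain the root cause:
umount -l /mnt/glusterfs
mount -t glusterfs clustr-01:/bhl-volume /mnt/glusterfs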
2023 Jun 30
1
remove_me files building up
Hi,
We're running a cluster with two data nodes and one arbiter, and have sharding enabled.
We had an issue a while back where one of the servers crashed; we got the server back up and running, ensured that all healing entries cleared, and also increased the server spec (CPU/Mem), as this seemed to be the potential cause.
Since then, however, we've seen some strange behaviour,
2023 Jul 04
1
remove_me files building up
Hi,
Thanks for your response, please find the xfs_info for each brick on the arbiter below:
root at uk3-prod-gfs-arb-01:~# xfs_info /data/glusterfs/gv1/brick1
meta-data=/dev/sdc1              isize=512    agcount=31, agsize=131007 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =
2023 Jul 03
1
remove_me files building up
Hi,
you mentioned that the arbiter bricks run out of inodes. Are you using XFS? Can you provide the xfs_info of each brick?
Best Regards, Strahil Nikolov
On Sat, Jul 1, 2023 at 19:41, Liam Smith <liam.smith at ek.co> wrote: Hi,
We're running a cluster with two data nodes and one arbiter, and have sharding enabled.
We had an issue a while back where one of the servers
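A quick sketch for checking the inode situation on an arbiter brick (brick path taken from the xfs_info reply above):
# Inode usage as seen by the filesystem
df -i /data/glusterfs/gv1/brick1
# XFS inode geometry (isize, agcount, imaxpct, etc.)
xfs_info /data/glusterfs/gv1/brick1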
2013 Dec 10
1
Error after crash of Virtual Machine during migration
Greetings,
Legend:
storage-gfs-3-prd - the first gluster.
storage-1-saas - the new gluster to which "the first gluster" had to be
migrated.
storage-gfs-4-prd - the second gluster (which had to be migrated later).
I've started the replace-brick command:
'gluster volume replace-brick sa_bookshelf storage-gfs-3-prd:/ydp/shared
storage-1-saas:/ydp/shared start'
During that Virtual
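With the older GlusterFS 3.x CLI used here, the migration started above could then be monitored and finalised; a sketch, assuming that era's replace-brick workflow:
gluster volume replace-brick sa_bookshelf storage-gfs-3-prd:/ydp/shared \
    storage-1-saas:/ydp/shared status
gluster volume replace-brick sa_bookshelf storage-gfs-3-prd:/ydp/shared \
    storage-1-saas:/ydp/shared commit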