
Displaying 20 results from an estimated 9000 matches similar to: "mount volumes ON the arbiter?"

2017 Aug 25
2
GlusterFS as virtual machine storage
On 8/25/2017 12:56 AM, Gionatan Danti wrote: > > >> WK wrote: >> 2 node plus Arbiter. You NEED the arbiter or a third node. Do NOT try 2 >> node with a VM > > This is true even if I manage locking at application level (via > virlock or sanlock)? We ran Rep2 for years on 3.4. It does work if you are really, really careful. But in a crash on one side, you might
2018 May 07
0
arbiter node on client?
On Sun, May 06, 2018 at 11:15:32AM +0000, Gandalf Corvotempesta wrote: > is it possible to add an arbiter node on the client? I've been running in that configuration for a couple of months now with no problems. I have 6 data + 3 arbiter bricks hosting VM disk images and all three of my arbiter bricks are on one of the kvm hosts. > Can I use multiple arbiters for the same volume? For example,
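A minimal sketch of the kind of layout described in that reply, with the arbiter bricks of a 3 x (2 + 1) distributed-replicated volume all placed on one of the kvm hosts; every hostname, volume name and brick path below is a hypothetical placeholder:

  # 6 data bricks + 3 arbiter bricks; the arbiters all live on kvm1
  gluster volume create vmstore replica 3 arbiter 1 \
      gfs1:/bricks/b1 gfs2:/bricks/b1 kvm1:/bricks/arb1 \
      gfs1:/bricks/b2 gfs2:/bricks/b2 kvm1:/bricks/arb2 \
      gfs1:/bricks/b3 gfs2:/bricks/b3 kvm1:/bricks/arb3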
2017 Aug 25
0
GlusterFS as virtual machine storage
On 25-08-2017 21:48, WK wrote: > On 8/25/2017 12:56 AM, Gionatan Danti wrote: > > We ran Rep2 for years on 3.4. It does work if you are really, really > careful. But in a crash on one side, you might have lost some bits > that were on the fly. The VM would then try to heal. > Without sharding, big VMs take a while because the WHOLE VM file has > to be copied over.
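For context, sharding is the volume-level option usually enabled for VM image storage so that only the changed shards need to heal rather than the whole file; the volume name and shard size below are illustrative assumptions, not values from the thread:

  # enable sharding before creating VM images on the volume;
  # files that already exist are not re-sharded retroactively
  gluster volume set vmstore features.shard on
  gluster volume set vmstore features.shard-block-size 64MB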
2017 Aug 24
2
GlusterFS as virtual machine storage
That really isn't an arbiter issue, or for that matter a Gluster issue. We have seen that with vanilla NAS servers that had some issue or another. Arbiter simply makes it less likely to be an issue than replica 2, but in turn arbiter is less 'safe' than replica 3. However, in regards to Gluster and RO behaviour: the default timeout for most OS versions is 30 seconds and the Gluster
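The two timeouts being compared are presumably the guest OS disk (SCSI) timeout and Gluster's network.ping-timeout, whose default is 42 seconds. A rough sketch of how one might inspect them; the device name and volume name are placeholders:

  # inside a Linux guest: per-device SCSI timeout, in seconds
  cat /sys/block/sda/device/timeout
  # on the Gluster side: the 42-second timeout referred to above
  gluster volume get vmstore network.ping-timeout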
2017 Dec 20
4
Syntax for creating arbiter volumes in gluster 4.0
Hi, The existing syntax in the gluster CLI for creating arbiter volumes is `gluster volume create <volname> replica 3 arbiter 1 <list of bricks>`. It means (or at least is intended to mean) that out of the 3 bricks, 1 brick is the arbiter. There has been some feedback while implementing arbiter support in glusterd2 for glusterfs-4.0 that we should change this to `replica 2 arbiter
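For concreteness, the existing syntax under discussion looks like this in practice; the volume name and brick paths are made up for illustration:

  # of every 3 bricks listed, the 3rd becomes the arbiter
  gluster volume create testvol replica 3 arbiter 1 \
      host1:/bricks/b1 host2:/bricks/b1 host3:/bricks/arb1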
2023 Jun 30
1
remove_me files building up
Hi, We're running a cluster with two data nodes and one arbiter, and have sharding enabled. We had an issue a while back where one of the servers crashed; we got the server back up and running and ensured that all healing entries cleared, and also increased the server spec (CPU/Mem) as this seemed to be the potential cause. Since then, however, we've seen some strange behaviour,
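For context, pending heal entries of the kind mentioned here are normally checked with the volume heal commands; the volume name gv1 is taken from the brick paths quoted later in this thread:

  # list entries still pending heal on each brick
  gluster volume heal gv1 info
  # per-brick counts only
  gluster volume heal gv1 info summary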
2023 Jul 04
1
remove_me files building up
Hi Strahil, We're using gluster to act as a share for an application to temporarily process and store files, before they're then archived off overnight. The issue we're seeing isn't with the bricks running out of inodes, but with the actual disk space on the arb server running low. This is the df -h output for the bricks on the arb server: /dev/sdd1 15G 12G 3.3G 79%
2023 Jul 04
1
remove_me files building up
Hi, Thanks for your response, please find the xfs_info for each brick on the arbiter below:
root at uk3-prod-gfs-arb-01:~# xfs_info /data/glusterfs/gv1/brick1
meta-data=/dev/sdc1        isize=512    agcount=31, agsize=131007 blks
         =                 sectsz=512   attr=2, projid32bit=1
         =                 crc=1        finobt=1, sparse=1, rmapbt=0
         =
2023 Jul 03
1
remove_me files building up
Hi, you mentioned that the arbiter bricks run out of inodes. Are you using XFS? Can you provide the xfs_info of each brick? Best Regards, Strahil Nikolov. On Sat, Jul 1, 2023 at 19:41, Liam Smith <liam.smith at ek.co> wrote: Hi, We're running a cluster with two data nodes and one arbiter, and have sharding enabled. We had an issue a while back where one of the servers
2023 Jul 04
1
remove_me files building up
Thanks for the clarification. That behaviour is quite weird as arbiter bricks should hold only metadata. What does the following show on host uk3-prod-gfs-arb-01:
du -h -x -d 1 /data/glusterfs/gv1/brick1/brick
du -h -x -d 1 /data/glusterfs/gv1/brick3/brick
du -h -x -d 1 /data/glusterfs/gv1/brick2/brick
If indeed the shards are taking space - that is a really strange situation. From which version
2023 Jul 04
1
remove_me files building up
Hi Liam, I saw that your XFS uses 'imaxpct=25', which for an arbiter brick is a little bit low. If you have free space on the bricks, increase the maxpct to a bigger value, like: xfs_growfs -m 80 /path/to/brick
That will set 80% of the filesystem for inodes, which you can verify with df -i /brick/path (compare before and after). This way you won't run out of inodes in the future. Of course, always
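Spelled out step by step, that suggestion amounts to something like the following; the brick mount point is a placeholder and the commands would be repeated for each arbiter brick:

  # inode usage before the change
  df -i /data/glusterfs/gv1/brick1
  # raise the maximum share of the filesystem that may hold inodes to 80%
  xfs_growfs -m 80 /data/glusterfs/gv1/brick1
  # verify: the inode total reported by df -i should have grown
  df -i /data/glusterfs/gv1/brick1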
2023 Jul 05
1
remove_me files building up
Hi Strahil, This is the output from the commands:
root at uk3-prod-gfs-arb-01:~# du -h -x -d 1 /data/glusterfs/gv1/brick1/brick
2.2G    /data/glusterfs/gv1/brick1/brick/.glusterfs
24M     /data/glusterfs/gv1/brick1/brick/scalelite-recordings
16K     /data/glusterfs/gv1/brick1/brick/mytute
18M     /data/glusterfs/gv1/brick1/brick/.shard
0
2018 May 06
3
arbiter node on client?
Is it possible to add an arbiter node on the client? Let's assume a gluster storage made with 2 storage servers. This is prone to split-brain. An arbiter node can be added, but can I put the arbiter on one of the clients? Can I use multiple arbiters for the same volume? For example, one arbiter on each client.
2017 Jun 29
1
issue with trash feature and arbiter volumes
Gluster 3.10.2. I have a replica 3 (2+1) volume and I have just seen both data bricks go down (the arbiter stayed up). I had to disable the trash feature to get the bricks to start. I had a quick look on bugzilla but did not see anything that looked similar. I just wanted to check that I was not hitting some known issue and/or doing something stupid, before I open a bug. This is from the brick log:
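For reference, the trash feature is toggled per volume; a sketch with a placeholder volume name:

  # check whether the trash translator is enabled
  gluster volume get myvol features.trash
  # disable it, which is what was needed here to get the bricks to start
  gluster volume set myvol features.trash off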
2017 Sep 09
0
GlusterFS as virtual machine storage
Hi, On Sat, Sep 9, 2017 at 2:35 AM, WK <wkmail at bneit.com> wrote: > Pavel. > > Is there a difference between native client (fuse) and libgfapi in regards > to the crashing/read-only behaviour? I switched to FUSE now and the VM crashed (read-only remount) immediately after one node started rebooting. I tried to mount.glusterfs the same volume on a different server (not a VM), running
2017 Sep 09
3
GlusterFS as virtual machine storage
Pavel. Is there a difference between native client (fuse) and libgfapi in regards to the crashing/read-only behaviour? We use Rep2 + Arb and can shut down a node cleanly, without issue on our VMs. We do it all the time for upgrades and maintenance. However, we are still on the native client as we haven't had time to work on libgfapi yet. Maybe that is more tolerant. We have Linux VMs mostly
2018 Feb 27
0
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 1:40 PM, Dave Sherohman <dave at sherohman.org> wrote: > On Tue, Feb 27, 2018 at 12:00:29PM +0530, Karthik Subrahmanya wrote: > > I will try to explain how you can end up in split-brain even with cluster > > wide quorum: > > Yep, the explanation made sense. I hadn't considered the possibility of > alternating outages. Thanks! > >
2017 Aug 24
1
GlusterFS as virtual machine storage
On 8/23/2017 10:44 PM, Pavel Szalbot wrote: > Hi, > > On Thu, Aug 24, 2017 at 2:13 AM, WK <wkmail at bneit.com> wrote: >> The default timeout for most OS versions is 30 seconds and the Gluster >> timeout is 42, so yes you can trigger an RO event. > I get read-only mount within approximately 2 seconds after failed IO. Hmm, we don't see that, even on busy VMs. We
2018 Feb 27
2
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 04:59:36PM +0530, Karthik Subrahmanya wrote: > > > Since arbiter bricks need not be of the same size as the data bricks, if you > > > can configure three more arbiter bricks > > > based on the guidelines in the doc [1], you can do it live and you will > > > have the distribution count also unchanged. > > > > I can probably find
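A hedged sketch of the kind of live change being discussed: converting a three-subvolume replica 2 volume to arbiter by adding one (smaller) arbiter brick per replica set. The hostname, volume name and paths are hypothetical:

  # one new arbiter brick per existing replica pair, listed in subvolume order
  gluster volume add-brick myvol replica 3 arbiter 1 \
      arb1:/bricks/arb-sub0 arb1:/bricks/arb-sub1 arb1:/bricks/arb-sub2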
2018 Feb 27
0
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 5:35 PM, Dave Sherohman <dave at sherohman.org> wrote: > On Tue, Feb 27, 2018 at 04:59:36PM +0530, Karthik Subrahmanya wrote: > > > > Since arbiter bricks need not be of the same size as the data bricks, if you > > > > can configure three more arbiter bricks > > > > based on the guidelines in the doc [1], you can do it live and