Displaying 20 results from an estimated 10000 matches similar to: "Hosted VM Pause when one node of gluster goes down"
2018 Apr 24
0
Hosted VM Pause when one node of gluster goes down
Hi Russell,
Since I also ran into this when setting up gluster, the solution is to
tweak network.ping-timeout to a lower value (default is 42 seconds). If a
node goes down and starts timing out, the whole cluster will attempt to
block access, including reads, for network.ping-timeout seconds and only
let them through afterwards.
I set mine to 5 (seconds) because 42 is nowhere near an acceptable wait
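For reference, lowering that timeout is a single volume-set call; a minimal sketch, with a hypothetical volume name:

# check the current value (defaults to 42)
gluster volume get myvol network.ping-timeout
# lower it to 5 seconds
gluster volume set myvol network.ping-timeout 5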
2017 Jun 29
0
Arbiter node as VM
As long as the VM isn't hosted on one of the two Gluster nodes, that's
perfectly fine. One of my smaller clusters uses the same setup.
As for your other questions, as long as it supports Unix file permissions,
Gluster doesn't care what filesystem you use. Mix & match as you wish. Just
try to keep matching Gluster versions across your nodes.
On 29 June 2017 at 16:10, mabi
2017 Jun 29
2
Arbiter node as VM
Hello,
I have a replica 2 GlusterFS 3.8.11 cluster on 2 Debian 8 physical servers using ZFS as the filesystem. Now, in order to avoid a split-brain situation, I would like to add a third node as arbiter.
Regarding the arbiter node I have a few questions:
- can the arbiter node be a virtual machine? (I am planning to use Xen as hypervisor)
- can I use ext4 as file system on my arbiter? or does it need
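For context, adding an arbiter to an existing replica 2 volume is normally done with add-brick on reasonably recent Gluster releases; a sketch, with hypothetical volume name, hostname and brick path:

# turn a replica 2 volume into replica 3 arbiter 1 by adding one arbiter brick
gluster volume add-brick myvol replica 3 arbiter 1 arbiter1:/data/arbiter-brick/myvol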
2017 Jun 01
1
Restore a node in a replicating Gluster setup after data loss
Hi
We have a Replica 2 + Arbiter Gluster setup with 3 nodes Server1,
Server2 and Server3 where Server3 is the Arbiter node. There are several
Gluster volumes on top of that setup. They all look a bit like this:
gluster volume info gv-tier1-vm-01
[...]
Number of Bricks: 1 x (2 + 1) = 3
[...]
Bricks:
Brick1: Server1:/var/data/lv-vm-01
Brick2: Server2:/var/data/lv-vm-01
Brick3:
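If the failed node survives but its brick data is gone, one documented approach on newer releases is reset-brick plus a full heal; a sketch reusing the names from this excerpt, details hypothetical:

gluster volume reset-brick gv-tier1-vm-01 Server2:/var/data/lv-vm-01 start
# re-create the empty brick filesystem/directory on Server2, then:
gluster volume reset-brick gv-tier1-vm-01 Server2:/var/data/lv-vm-01 Server2:/var/data/lv-vm-01 commit force
gluster volume heal gv-tier1-vm-01 full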
2018 Apr 15
1
unexpected warning message when attempting to build a 4-node gluster setup.
Hi,
I am on centos 7.4 with gluster 4.
I am trying to create a distributed and replicated volume on the 4 nodes.
I am getting this unexpected warning:
[root at glustep1 brick1]# gluster volume create gv0 replica 2
glusterp1:/bricks/brick1/gv0 glusterp2:/bricks/brick1/gv0
glusterp3:/bricks/brick1/gv0 glusterp4:/bricks/brick1/gv0
8><----
Replica 2 volumes are prone to split-brain. Use
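One way to heed that warning with four nodes is to add a small arbiter brick per replica pair, giving a 2 x (2 + 1) layout; a sketch, with the arbiter brick paths being hypothetical:

gluster volume create gv0 replica 3 arbiter 1 \
  glusterp1:/bricks/brick1/gv0 glusterp2:/bricks/brick1/gv0 glusterp3:/bricks/arb1/gv0 \
  glusterp3:/bricks/brick1/gv0 glusterp4:/bricks/brick1/gv0 glusterp1:/bricks/arb2/gv0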
2018 Jan 16
0
Problem with Gluster 3.12.4, VM and sharding
Please share the volume info output and the logs under /var/log/glusterfs/
from all your nodes so we can investigate the issue.
-Krutika
On Tue, Jan 16, 2018 at 1:30 PM, Ing. Luca Lazzeroni - Trend Servizi Srl <
luca at gvnet.it> wrote:
> Hi to everyone.
>
> I've got a strange problem with a gluster setup: 3 nodes with Centos 7.4,
> Gluster 3.12.4 from Centos/Gluster
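Gathering what is being asked for here amounts to roughly the following on each node (volume name hypothetical):

gluster volume info myvol > volume-info.txt
tar czf glusterfs-logs-$(hostname).tar.gz /var/log/glusterfs/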
2018 Jan 16
2
Problem with Gluster 3.12.4, VM and sharding
Hi to everyone.
I've got a strange problem with a gluster setup: 3 nodes with Centos
7.4, Gluster 3.12.4 from Centos/Gluster repositories, QEMU-KVM version
2.9.0 (compiled from RHEL sources).
I'm running volumes in replica 3 arbiter 1 mode (but I've got a volume
in "pure" replica 3 mode too). I've applied the "virt" group settings to
my volumes since they
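For reference, the "virt" group settings mentioned here are applied with the group keyword, which loads the option set shipped in /var/lib/glusterd/groups/virt; volume name hypothetical:

gluster volume set myvol group virt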
2018 Mar 26
0
rhev/gluster questions , possibly restoring from a failed node
In my lab, one of my RAID cards started acting up and took one of my three
gluster nodes offline (two nodes with data and an arbiter node). I'm hoping
it's simply the backplane, but during the time spent troubleshooting and waiting
for parts, the hypervisors were fenced. Since the firewall was replaced and
now several VMs are not starting correctly, fsck, scandisk and xfs_repair
on the
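When checking XFS bricks after an event like this, a read-only pass first is the safer order of operations; a sketch, with a hypothetical brick device:

umount /bricks/brick1
xfs_repair -n /dev/vg0/brick1   # dry run, makes no changes
xfs_repair /dev/vg0/brick1      # only if the dry run looks sane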
2017 Jul 11
0
Upgrading Gluster revision (3.8.12 to 3.8.13) caused underlying VM fs corruption
Well, it was probably caused by running replica 2 and doing an online
upgrade. However, I added a brick, turned the volume into replica 3 with
an arbiter, and hit a very strange issue that I will mail to this list in a
moment...
Thanks.
-ps
On Tue, Jul 11, 2017 at 1:55 PM, Pranith Kumar Karampuri
<pkarampu at redhat.com> wrote:
>
>
> On Tue, Jul 11, 2017 at 5:12 PM, Diego Remolina <dijuremo at
2023 Oct 24
0
Gluster heal script?
Hello all.
Is there a script to help heal files that remain in heal info even
after a pass with heal full?
I recently (~august) restarted from scratch our Gluster cluster in
"replica 3 arbiter 1" but I already found some files that are not
healing and inaccessible (socket not connected) from the fuse mount.
volume info:
-8<--
Volume Name: cluster_data
Type:
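A heal script usually starts from the same commands already mentioned; a minimal sketch for this volume:

gluster volume heal cluster_data info                # entries still pending heal
gluster volume heal cluster_data info split-brain    # entries in split-brain, if any
gluster volume heal cluster_data full                # re-trigger a full heal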
2018 Jan 16
1
Problem with Gluster 3.12.4, VM and sharding
Also to help isolate the component, could you answer these:
1. on a different volume with shard not enabled, do you see this issue?
2. on a plain 3-way replicated volume (no arbiter), do you see this issue?
On Tue, Jan 16, 2018 at 4:03 PM, Krutika Dhananjay <kdhananj at redhat.com>
wrote:
> Please share the volume-info output and the logs under /var/log/glusterfs/
> from all your
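Question 1 above can be checked per volume with volume get; a sketch, volume name hypothetical:

gluster volume get myvol features.shard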
2018 May 07
0
arbiter node on client?
On Sun, May 06, 2018 at 11:15:32AM +0000, Gandalf Corvotempesta wrote:
> Is it possible to add an arbiter node on the client?
I've been running in that configuration for a couple months now with no
problems. I have 6 data + 3 arbiter bricks hosting VM disk images and
all three of my arbiter bricks are on one of the kvm hosts.
> Can I use multiple arbiters for the same volume? For example,
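A layout like the one described (6 data bricks plus 3 arbiter bricks on one kvm host) could be created roughly as follows; all hostnames and paths are hypothetical:

gluster volume create vmdisks replica 3 arbiter 1 \
  gfs1:/bricks/b1 gfs2:/bricks/b1 kvm1:/bricks/arb1 \
  gfs1:/bricks/b2 gfs2:/bricks/b2 kvm1:/bricks/arb2 \
  gfs1:/bricks/b3 gfs2:/bricks/b3 kvm1:/bricks/arb3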
2018 Jan 29
0
Replacing a third data node with an arbiter one
On 01/29/2018 08:56 PM, Hoggins! wrote:
> Thank you, for that, however I have a problem.
>
On 26/01/2018 at 02:35, Ravishankar N wrote:
>> Yes, you would need to reduce it to replica 2 and then convert it to
>> arbiter.
>> 1. Ensure there are no pending heals, i.e. heal info shows zero entries.
>> 2. gluster volume remove-brick thedude replica 2
>>
2018 Feb 19
0
Upgrade from 3.8.15 to 3.12.5
I believe the peer rejected issue is something we recently identified and
fixed through https://bugzilla.redhat.com/show_bug.cgi?id=1544637;
the fix is available in 3.12.6. I'd request you to upgrade to the latest
version in the 3.12 series.
On Mon, Feb 19, 2018 at 12:27 PM, <rwecker at ssd.org> wrote:
> Hi,
>
> I have a 3 node cluster (Found1, Found2, Found3) which I wanted
2018 May 06
3
arbiter node on client?
Is it possible to add an arbiter node on the client?
Let's assume a gluster storage setup made with 2 storage servers. This is prone to
split-brain.
An arbiter node can be added, but can I put the arbiter on one of the
clients?
Can I use multiple arbiters for the same volume? For example, one arbiter on
each client.
2017 Dec 11
0
How large the Arbiter node?
Hi,
there is a good suggestion here: http://docs.gluster.org/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/#arbiter-bricks-sizing
Since the arbiter brick does not store file data, its disk usage will be considerably less than the other bricks of the replica. The sizing of
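As a worked example of that guidance, using the commonly quoted estimate of about 4 KB of arbiter metadata per file: a replica expected to hold 1,000,000 files needs only about 1,000,000 x 4 KB = 4 GB on the arbiter brick, so inode count matters more than raw capacity.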
2017 Dec 20
4
Syntax for creating arbiter volumes in gluster 4.0
Hi,
The existing syntax in the gluster CLI for creating arbiter volumes is
`gluster volume create <volname> replica 3 arbiter 1 <list of bricks>` .
It means (or at least intended to mean) that out of the 3 bricks, 1
brick is the arbiter.
There has been some feedback while implementing arbiter support in
glusterd2 for glusterfs-4.0 that we should change this to `replica 2
arbiter
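For concreteness, the existing form expands to something like this (volume name and hostnames hypothetical):

gluster volume create testvol replica 3 arbiter 1 host1:/bricks/b1 host2:/bricks/b1 host3:/bricks/b1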
2018 Jan 29
2
Replacing a third data node with an arbiter one
Thank you, for that, however I have a problem.
On 26/01/2018 at 02:35, Ravishankar N wrote:
> Yes, you would need to reduce it to replica 2 and then convert it to
> arbiter.
> 1. Ensure there are no pending heals, i.e. heal info shows zero entries.
> 2. gluster volume remove-brick thedude replica 2
> ngluster-3.network.hoggins.fr:/export/brick/thedude force
> 3. gluster volume
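For context, a conversion like the one quoted typically finishes by re-adding a fresh brick as the arbiter; a sketch under that assumption, with a hypothetical new brick path:

gluster volume add-brick thedude replica 3 arbiter 1 ngluster-3.network.hoggins.fr:/export/brick/thedude-arbiter
gluster volume heal thedude info   # confirm heals complete afterwards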
2018 Jan 16
1
[Possible SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
I've just done all the steps to reproduce the problem.
The VM volume was created via "qemu-img create -f qcow2
Test-vda2.qcow2 20G" on the gluster volume mounted via FUSE. I've also
tried creating the volume with preallocated metadata, which pushes the
problem a bit further away (in time). The volume is a replica 3 arbiter 1
volume hosted on XFS bricks.
Here are the
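The preallocation variant mentioned above corresponds to qemu-img's preallocation option; a sketch:

qemu-img create -f qcow2 Test-vda2.qcow2 20G                             # plain, as in the excerpt
qemu-img create -f qcow2 -o preallocation=metadata Test-vda2.qcow2 20G   # metadata preallocated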
2017 Oct 06
0
Gluster geo replication volume is faulty
On 09/29/2017 09:30 PM, rick sanchez wrote:
> I am trying to set up geo-replication between two gluster volumes
>
> I have set up two replica 2 arbiter 1 volumes with 9 bricks
>
> [root at gfs1 ~]# gluster volume info
> Volume Name: gfsvol
> Type: Distributed-Replicate
> Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306
> Status: Started
> Snapshot Count: 0
> Number
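For reference, a geo-replication session between two volumes is normally created and started along these lines; the slave host and volume name are hypothetical, and passwordless SSH to the slave is assumed:

gluster volume geo-replication gfsvol geohost::gfsvol_slave create push-pem
gluster volume geo-replication gfsvol geohost::gfsvol_slave start
gluster volume geo-replication gfsvol geohost::gfsvol_slave status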