
Displaying 20 results from an estimated 6000 matches similar to: "GlusterFS as virtual machine storage"

2017 Aug 23
4
GlusterFS as virtual machine storage
On 23-08-2017 18:14 Pavel Szalbot wrote: > Hi, after many VM crashes during upgrades of Gluster, losing network > connectivity on one node etc., I would advise running replica 2 with > arbiter. Hi Pavel, this is bad news :( So, in your case at least, Gluster was not stable? Something as simple as an update would let it crash? > I once even managed to break this setup (with
2017 Aug 30
4
GlusterFS as virtual machine storage
Ciao Gionatan, I run Gluster 3.10.x (Replica 3 arbiter or 2 + 1 arbiter) to provide storage for oVirt 4.x and I have had no major issues so far. I have done online upgrades a couple of times and been through power losses, maintenance, etc. with no issues. Overall, it is very resilient. An important thing to keep in mind is your network: I run the Gluster nodes on a redundant network using bonding mode 1 and I have
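Bonding mode 1 (active-backup) as described above could be set up roughly like this on a RHEL/CentOS host with NetworkManager; the interface names (eno1, eno2) and the address are placeholders, not taken from the original post:

```shell
# Create an active-backup (mode 1) bond for the Gluster storage network.
nmcli connection add type bond con-name bond0 ifname bond0 \
    bond.options "mode=active-backup,miimon=100"
# Enslave the two physical NICs (placeholder interface names).
nmcli connection add type ethernet con-name bond0-port1 ifname eno1 master bond0
nmcli connection add type ethernet con-name bond0-port2 ifname eno2 master bond0
# Static address for the storage network (placeholder address).
nmcli connection modify bond0 ipv4.method manual ipv4.addresses 10.0.0.11/24
nmcli connection up bond0
```

Either link can then fail without interrupting Gluster traffic, at the cost of using only one link's bandwidth at a time.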
2017 Aug 26
2
GlusterFS as virtual machine storage
On 26-08-2017 01:13 WK wrote: > Big +1 on what Kevin just said. Just avoiding the problem is the > best strategy. Ok, never run Gluster with anything less than a replica 2 + arbiter ;) > However, for the record, and if you really, really want to get deep > into the weeds on the subject, then the Gluster people have docs on > Split-Brain recovery. > >
2017 Aug 23
0
GlusterFS as virtual machine storage
Hi, after many VM crashes during upgrades of Gluster, losing network connectivity on one node etc. I would advise running replica 2 with arbiter. I once even managed to break this setup (with arbiter) due to network partitioning - one data node never healed and I had to restore from backups (it was easier and kind of non-production). Be extremely careful and plan for failure. -ps On Mon, Aug
2017 Aug 30
3
GlusterFS as virtual machine storage
Solved as of 3.7.12. The only bug left is when adding new bricks to create a new replica set; not sure where we are now on that bug, but that's not a common operation (well, at least for me). On Wed, Aug 30, 2017 at 05:07:44PM +0200, Ivan Rossi wrote: > There has been a bug associated with sharding that led to VM corruption that > has been around for a long time (difficult to reproduce, I
2017 Aug 30
0
GlusterFS as virtual machine storage
There has been a bug associated with sharding that led to VM corruption; it has been around for a long time (difficult to reproduce, I understood). I have not seen reports on that for some time after the last fix, so hopefully VM hosting is now stable. 2017-08-30 3:57 GMT+02:00 Everton Brogliatto <brogliatto at gmail.com>: > Ciao Gionatan, > > I run Gluster 3.10.x (Replica 3 arbiter
2017 Aug 26
0
GlusterFS as virtual machine storage
On 26-08-2017 07:38 Gionatan Danti wrote: > I'll surely give a look at the documentation. I have the "bad" habit > of not putting into production anything I know how to repair/cope > with. > > Thanks. Mmmm, this should read as: "I have the "bad" habit of not putting into production anything I do NOT know how to repair/cope with" Really :D
2017 Aug 23
3
GlusterFS as virtual machine storage
Hi, I believe it is not that simple. Even a replica 2 + arbiter volume with the default network.ping-timeout will cause the underlying VM to remount its filesystem as read-only (a device error will occur) unless you tune mount options in the VM's fstab. -ps On Wed, Aug 23, 2017 at 6:59 PM, <lemonnierk at ulrar.net> wrote: > What he is saying is that, on a two node volume, upgrading a node will
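The two knobs mentioned above could be tuned roughly as follows; the volume name gv0 is a placeholder, and the fstab line is only one possible policy choice, not the poster's exact configuration:

```shell
# Lower the ping timeout so a dead brick is detected before guest I/O
# stalls long enough to trigger a device error (the default is 42 s).
gluster volume set gv0 network.ping-timeout 10

# Inside the guest, an /etc/fstab entry for an ext4 root could replace
# the remount-read-only default error policy, e.g.:
#   /dev/vda1  /  ext4  defaults,errors=panic  0  1
# so the VM panics and reboots (and recovers) instead of staying
# read-only until someone intervenes.
```

Whether `errors=panic` or `errors=continue` is preferable depends on whether an automatic reboot is acceptable for the workload.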
2017 Sep 03
3
GlusterFS as virtual machine storage
On 30-08-2017 17:07 Ivan Rossi wrote: > There has been a bug associated with sharding that led to VM corruption > that has been around for a long time (difficult to reproduce, I > understood). I have not seen reports on that for some time after the > last fix, so hopefully now VM hosting is stable. Mmmm... this is precisely the kind of bug that scares me... data corruption :| Any
2017 Aug 23
0
GlusterFS as virtual machine storage
Really? I can't see why. But I've never used arbiter, so you probably know more about this than I do. In any case, with replica 3, I never had a problem. On Wed, Aug 23, 2017 at 09:13:28PM +0200, Pavel Szalbot wrote: > Hi, I believe it is not that simple. Even replica 2 + arbiter volume > with default network.ping-timeout will cause the underlying VM to > remount filesystem as
2017 Aug 23
0
GlusterFS as virtual machine storage
What he is saying is that, on a two-node volume, upgrading a node will cause the volume to go down. That's nothing weird; you really should use 3 nodes. On Wed, Aug 23, 2017 at 06:51:55PM +0200, Gionatan Danti wrote: > On 23-08-2017 18:14 Pavel Szalbot wrote: > > Hi, after many VM crashes during upgrades of Gluster, losing network > > connectivity on one node etc. I would
2017 Sep 06
2
GlusterFS as virtual machine storage
You need to set cluster.server-quorum-ratio to 51%. On 6 September 2017 at 10:12, Pavel Szalbot <pavel.szalbot at gmail.com> wrote: > Hi all, > > I have promised to do some testing and I finally found some time and > infrastructure. > > So I have 3 servers with Gluster 3.10.5 on CentOS 7. I created a > replicated volume with arbiter (2+1) and a VM on KVM (via
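The setting suggested above could be applied like this; gv0 is a placeholder volume name, and enabling server-side quorum on the volume is an assumption (the ratio only matters once server quorum is active):

```shell
# Server quorum is a cluster-wide setting, so it is applied to "all"
# volumes; 51% means a 3-node pool keeps serving while 2 nodes are up,
# and bricks are taken down when the pool drops to a single node.
gluster volume set all cluster.server-quorum-ratio 51%

# Enable server-side quorum enforcement on the volume itself.
gluster volume set gv0 cluster.server-quorum-type server
```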
2017 Sep 07
3
GlusterFS as virtual machine storage
*shrug* I don't use arbiter for VM workloads, just straight replica 3. There are some gotchas with using an arbiter for VM workloads. If quorum-type is auto and a brick that is not the arbiter drops out, then if the up brick is dirty as far as the arbiter is concerned (i.e. the only good copy is on the down brick), you will get ENOTCONN and your VMs will halt on IO. On 6 September 2017 at 16:06,
2017 Sep 06
0
GlusterFS as virtual machine storage
Hi all, I have promised to do some testing and I finally found some time and infrastructure. So I have 3 servers with Gluster 3.10.5 on CentOS 7. I created a replicated volume with arbiter (2+1) and a VM on KVM (via Openstack) with its disk accessible through gfapi. The volume group is set to virt (gluster volume set gv_openstack_1 virt). The VM runs current (all packages updated) Ubuntu Xenial. I set up
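The setup described above could be created roughly as follows; the hostnames and brick paths are placeholders, and note that the documented syntax for applying the virt profile is `group virt` (the post's `gluster volume set gv_openstack_1 virt` is likely shorthand):

```shell
# "replica 3 arbiter 1" creates two full data bricks plus one
# metadata-only arbiter brick (the 2+1 layout from the post).
gluster volume create gv_openstack_1 replica 3 arbiter 1 \
    srv1:/bricks/b1 srv2:/bricks/b1 srv3:/bricks/arb

# Apply the virt option group (sharding, quorum, cache settings
# recommended for VM images).
gluster volume set gv_openstack_1 group virt

gluster volume start gv_openstack_1
```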
2017 Sep 06
0
GlusterFS as virtual machine storage
Mh, I never had to do that and I never had that problem. Is that an arbiter-specific thing? With replica 3 it just works. On Wed, Sep 06, 2017 at 03:59:14PM -0400, Alastair Neil wrote: > you need to set > > cluster.server-quorum-ratio 51% > > On 6 September 2017 at 10:12, Pavel Szalbot <pavel.szalbot at gmail.com> wrote: > > > Hi all, > > >
2017 Sep 07
0
GlusterFS as virtual machine storage
Hi Neil, the docs mention two live nodes of a replica 3 blaming each other and refusing to do IO. https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/#1-replica-3-volume On Sep 7, 2017 17:52, "Alastair Neil" <ajneil.tech at gmail.com> wrote: > *shrug* I don't use arbiter for VM workloads, just straight replica 3.
2017 Sep 07
2
GlusterFS as virtual machine storage
True, but working your way into that problem with replica 3 is a lot harder than with just replica 2 + arbiter. On 7 September 2017 at 14:06, Pavel Szalbot <pavel.szalbot at gmail.com> wrote: > Hi Neil, docs mention two live nodes of replica 3 blaming each other and > refusing to do IO. > > https://gluster.readthedocs.io/en/latest/Administrator% >
2018 Jun 25
5
ZFS on Linux repository
Hi list, we all know why ZFS is not included in RHEL/CentOS distributions: its CDDL license is (or seems) not compatible with the GPL. I'm not a lawyer, and I do not have any strong opinion on the matter. However, as a sysadmin, I found ZFS to be extremely useful, especially considering BTRFS's sad state. I would *really* love to have ZFS on Linux more integrated with current CentOS. From
2017 Sep 04
0
GlusterFS as virtual machine storage
The latter one is the one I have been referring to. And it is pretty dangerous, IMHO. On 31 Aug 2017 01:19, <lemonnierk at ulrar.net> wrote: > Solved as of 3.7.12. The only bug left is when adding new bricks to > create a new replica set; not sure where we are now on that bug, but > that's not a common operation (well, at least for me). > > On Wed, Aug 30, 2017 at
2015 Feb 09
2
Per-protocol ssl_protocols settings
Sorry for the bump... Anyone know if it is possible to have multiple protocol instances with different ssl_protocols settings? Regards. On 07/02/15 00:03, Gionatan Danti wrote: > Hi all, > anyone with some ideas? > > Thanks. > > On 2015-02-02 23:08 Gionatan Danti wrote: >> Hi all, >> I have a question regarding the "ssl_protocols" parameter.