Displaying 20 results from an estimated 10000 matches similar to: "best shared storage solution ?"
2015 Dec 17
1
[PATCH] virtio_ring: use smp_store_mb
On Thu, Dec 17, 2015 at 04:09:17PM +0100, Peter Zijlstra wrote:
> On Thu, Dec 17, 2015 at 04:34:57PM +0200, Michael S. Tsirkin wrote:
> > On Thu, Dec 17, 2015 at 03:02:12PM +0100, Peter Zijlstra wrote:
>
> > > > commit 9e1a27ea42691429e31f158cce6fc61bc79bb2e9
> > > > Author: Alexander Duyck <alexander.h.duyck at redhat.com>
> > > > Date:
2017 Aug 25
0
GlusterFS as virtual machine storage
On 25-08-2017 08:32, Gionatan Danti wrote:
> Hi all,
> any other advice from who use (or do not use) Gluster as a replicated
> VM backend?
>
> Thanks.
Sorry, I was not seeing messages because I was not subscribed to the
list; I read it from the web.
So it seems that Pavel and WK have had vastly different experiences with
Gluster. Any plausible cause for that difference?
> WK
2012 Mar 13
2
libvirt with sanlock
Hello,
I configured libvirtd with the sanlock lock manager plugin:
# rpm -qa | egrep "libvirt-0|sanlock-[01]"
libvirt-lock-sanlock-0.9.4-23.el6_2.4.x86_64
sanlock-1.8-2.el6.x86_64
libvirt-0.9.4-23.el6_2.4.x86_64
# egrep -v "^#|^$" /etc/libvirt/qemu-sanlock.conf
auto_disk_leases = 1
disk_lease_dir = "/var/lib/libvirt/sanlock"
host_id = 4
# mount | grep sanlock
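For completeness, the sanlock plugin also has to be enabled on the qemu
driver side; a minimal sketch of the remaining pieces, assuming the stock
file locations and that disk_lease_dir sits on storage shared by every
host (host_id must be unique per host):
  # /etc/libvirt/qemu.conf
  lock_manager = "sanlock"
  # restart the daemons afterwards
  service sanlock restart
  service libvirtd restart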
2017 Aug 23
0
GlusterFS as virtual machine storage
What he is saying is that, on a two-node volume, upgrading a node will
cause the volume to go down. That's nothing weird; you really should use
three nodes.
On Wed, Aug 23, 2017 at 06:51:55PM +0200, Gionatan Danti wrote:
> On 23-08-2017 18:14, Pavel Szalbot wrote:
> > Hi, after many VM crashes during upgrades of Gluster, losing network
> > connectivity on one node etc. I would
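The replica-3 / arbiter layout recommended in this thread can be created
along these lines; a minimal sketch with hypothetical host and brick
names (the arbiter brick holds only metadata, so it needs little space):
  gluster volume create vmstore replica 3 arbiter 1 \
      host1:/bricks/vmstore host2:/bricks/vmstore host3:/bricks/vmstore
  # apply the recommended defaults for VM image workloads
  gluster volume set vmstore group virt
  gluster volume start vmstore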
2012 May 08
1
release open_disk error
Hello,
I wonder what the "open error -1" / "release open_disk error" messages in sanlock.log actually mean.
I saw these messages in the log on a KVM host that rebooted, and after running "/usr/sbin/virt-sanlock-cleanup" on that host.
The resources were disks from 2 guests running on another KVM host.
So in fact the disks are still in use, but got cleaned up by
2017 Aug 23
4
GlusterFS as virtual machine storage
On 23-08-2017 18:14, Pavel Szalbot wrote:
> Hi, after many VM crashes during upgrades of Gluster, losing network
> connectivity on one node etc. I would advise running replica 2 with
> arbiter.
Hi Pavel, this is bad news :(
So, in your case at least, Gluster was not stable? Something as simple
as an update could make it crash?
> I once even managed to break this setup (with
2017 Aug 25
2
GlusterFS as virtual machine storage
On 23-08-2017 18:51, Gionatan Danti wrote:
> On 23-08-2017 18:14, Pavel Szalbot wrote:
>> Hi, after many VM crashes during upgrades of Gluster, losing network
>> connectivity on one node etc. I would advise running replica 2 with
>> arbiter.
>
> Hi Pavel, this is bad news :(
> So, in your case at least, Gluster was not stable? Something as simple
> as an
2017 Nov 08
2
Does libvirt-sanlock support network disk?
Hello,
As we know, libvirt sanlock supports file-type storage. I wonder if it
supports network storage.
I tried iSCSI, but found it didn't generate any resource file:
Version: qemu-2.10 libvirt-3.9 sanlock-3.5
1. Set configuration:
qemu.conf:
lock_manager = "sanlock"
qemu-sanlock.conf:
auto_disk_leases = 1
disk_lease_dir = "/var/lib/libvirt/sanlock"
host_id =
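For a disk that auto_disk_leases does not cover, libvirt can attach a
manually defined sanlock lease to the domain instead; a minimal sketch,
with hypothetical lockspace, key and path values (the target path must
be on storage visible to every host that may run the guest):
  <lease>
    <lockspace>guest-disks</lockspace>
    <key>iscsi-guest1</key>
    <target path='/var/lib/libvirt/sanlock/guest-leases' offset='0'/>
  </lease>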
2013 Aug 02
3
Oracle RAC in libvirt+KVM environment
We want to set up two Oracle instances and make RAC work on them.
Both VMs are set up based on libvirt + KVM; we use an LVM LUN
formatted as qcow2, and set the shareable property in the disk
driver like this:
<disk type='block' device='disk'>
<driver name='qemu' type='qcow2' cache='none'/>
<source
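Note that qcow2 is not designed for concurrent writers; a shared cluster
disk is normally raw with caching disabled. A minimal sketch of the
truncated definition above in that form, with a hypothetical device path:
  <disk type='block' device='disk'>
    <driver name='qemu' type='raw' cache='none'/>
    <source dev='/dev/vg_rac/shared_disk'/>
    <target dev='sdb' bus='scsi'/>
    <shareable/>
  </disk>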
2016 Sep 02
0
Re: Ang: Ang: Re: Ang: Re: attaching storage pool error
On 08/24/2016 06:31 AM, Johan Kragsterman wrote:
>
> Hi again!
>
I saw this last week while I was at KVM Forum, but just haven't had the
time until now to start thinking about this stuff again ... as you point
out with your questions and replies - NPIV/vHBA is tricky and
complicated... I always have to try to "clear the decks" of anything else
before trying to page how this
2013 May 03
1
sanlockd, virtlock and GFS2
Hi,
I'm trying to put in place a KVM cluster (using clvm and gfs2), but I'm
running into some issues with either sanlock or virtlockd. All virtual
machines are handled via the cluster (in /etc/cluster/cluster.conf) but I
want some kind of locking to be in place as an extra security measure.
Sanlock
=======
At first I tried sanlock, but it seems if one node goes down
unexpectedly,
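For reference, the virtlockd alternative is wired up much like sanlock;
a minimal sketch, assuming a hypothetical lockspace directory on the
shared GFS2 mount:
  # /etc/libvirt/qemu.conf
  lock_manager = "lockd"
  # /etc/libvirt/qemu-lockd.conf
  auto_disk_leases = 1
  file_lockspace_dir = "/gfs2/shared/virtlockd"
With auto_disk_leases enabled, virtlockd holds a lease per disk image, so
a second host cannot start a guest against images already in use.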
2013 Aug 08
2
Re: Oracle RAC in libvirt+KVM environment
On 08/08/2013 03:54 AM, Timon Wang wrote:
> Anybody have an idea on it?
>
> I tried to set the disk to raw format and retried the setup process,
> but still can't get through.
Caveat: I know nothing in particular about Oracle RAC, but...
Assuming that RAC uses something like SCSI reservations in order to
share the disk, I would guess it doesn't like the disk being on the IDE
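If RAC really does rely on SCSI persistent reservations, the disk has to
pass SCSI commands through to the guest; a minimal sketch using
virtio-scsi and LUN passthrough (device path hypothetical;
sgio='unfiltered' depends on host kernel support and is not available
everywhere):
  <controller type='scsi' model='virtio-scsi' index='0'/>
  <disk type='block' device='lun' sgio='unfiltered'>
    <driver name='qemu' type='raw' cache='none'/>
    <source dev='/dev/mapper/shared_lun'/>
    <target dev='sda' bus='scsi'/>
    <shareable/>
  </disk>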
2013 Jan 31
1
Sanlock gives up lock when VM is paused
Hello,
I'm using libvirt and sanlock on qemu-kvm guests. Each guest has its
own Logical Volume for its root filesystem. Sanlock is configured and
working and prevents me from starting the same VM twice on multiple
nodes and corrupting its root filesystem. Each VM's domain XML resides
on 2 servers that share the LVM volume group over Fibre Channel.
In testing, I noticed
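A quick way to check what sanlock is actually holding on a given host is
to query the daemon directly; a minimal sketch (lockspace and resource
names vary; with auto disk leases, libvirt's entries normally appear
under its own lockspace):
  # list the lockspaces joined and resources held by this host
  sanlock client status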
2013 Aug 08
0
Re: Oracle RAC in libvirt+KVM environment
Anybody have an idea on it?
I tried to set the disk to raw format and retried the setup process,
but still can't get through.
On Fri, Aug 2, 2013 at 1:58 PM, Timon Wang <timonwst@gmail.com> wrote:
> We want to set up two Oracle instances and make RAC work on them.
> Both VMs are set up based on libvirt + KVM; we use an LVM LUN
> formatted as qcow2, and set the
2013 Aug 10
0
Re: Oracle RAC in libvirt+KVM environment
I have tried changing the disk bus to SCSI and adding a SCSI controller
whose model is virtio-scsi, but still can't set up the RAC instance.
I tried to use the Windows 2008 Failover Cluster feature to set up a
Failover Cluster instead, and I can't find any cluster disk to share
between the two nodes. So when the Failover Cluster is set up, I can't
add any cluster disk to it.
Have I missed
2016 Mar 30
0
Re: VM crash and lock manager
[moderator note: .pngs were stripped to avoid overwhelming the mail
server and list recipients with 1M of data]
-------- Forwarded Message --------
Date: Wed, 30 Mar 2016 11:25:09 +0200
Message-ID:
<20160330112509.Horde.0Oad-ZfkzdZnouXf-VkEmw1@webmailperso.univ-brest.fr>
From: villeneu@kassis.univ-brest.fr
To: Franky Van Liedekerke <liedekef@telenet.be>
Cc: libvirt-users@redhat.com
2017 Dec 05
1
[PATCH] lib: libvirt: stop using <shareable/> for appliance disk (RHBZ#1518517)
Commit aa9e0057b19e29f76c9a81f9aebeeb1cb5bf1fdb made the libvirt backend
use <shareable/> for the disk of the appliance, since at that time all
the instances were using the disk directly.
OTOH, commit 3ad44c866042919374e2d840502e53da2ed8aef0 switched to
overlays for read-only disks, including the appliance, so effectively
protecting the appliance.
Because of this, the libvirt backend does
2016 Mar 31
0
CEBA-2016:0541 CentOS 7 sanlock BugFix Update
CentOS Errata and Bugfix Advisory 2016:0541
Upstream details at : https://rhn.redhat.com/errata/RHBA-2016-0541.html
The following updated files have been uploaded and are currently
syncing to the mirrors: ( sha256sum Filename )
x86_64:
96a4bc7ef0285522786a39d02c0b427656aaa71e1abb779a75c65c2e1c600748 fence-sanlock-3.2.4-2.el7_2.x86_64.rpm
2016 Sep 03
0
Ang: Re: Ang: Ang: Re: Ang: Re: attaching storage pool error
Hi, John, and thank you!
This was a very thorough and welcome response; I was wondering where all the storage guys were...
I will get back to you with more details later, specifically about multipath, since this needs to be investigated thoroughly.
In the meantime I have, by trial and error, been able to attach the NPIV pool LUN to a virtio-scsi controller, and it seems it
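For the record, a pool-backed LUN attached through a virtio-scsi
controller looks roughly like this in the domain XML; a minimal sketch
with a hypothetical pool name and volume address:
  <controller type='scsi' model='virtio-scsi' index='0'/>
  <disk type='volume' device='lun'>
    <driver name='qemu' type='raw'/>
    <source pool='npiv_pool' volume='unit:0:0:1'/>
    <target dev='sda' bus='scsi'/>
  </disk>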