similar to: Ceph RBD locking for libvirt-managed LXC (someday) live migrations


2014 Jan 16
0
Re: Ceph RBD locking for libvirt-managed LXC (someday) live migrations
On Wed, Jan 15, 2014 at 05:47:35PM -0500, Joshua Dotson wrote: > Hi, > > I'm trying to build an active/active virtualization cluster using a Ceph > RBD as backing for each libvirt-managed LXC. I know live migration for LXC > isn't yet possible, but I'd like to build my infrastructure as if it were. > That is, I would like to be sure proper locking is in place for
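
For reference, RBD itself exposes advisory locks through the rbd CLI, which is the primitive the thread is circling around. A minimal sketch (pool, image and lock names are made up for illustration; the locks are advisory only, so nothing stops a misbehaving host from mapping the image anyway):

    # Take an exclusive advisory lock before starting the container
    rbd lock add vms/lxc-guest01 host-a
    # Inspect who holds locks (prints locker client id, lock id, address)
    rbd lock ls vms/lxc-guest01
    # Release: the locker id (e.g. client.4123) comes from 'rbd lock ls'
    rbd lock remove vms/lxc-guest01 host-a client.4123
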
2013 Nov 07
4
Re: RBD images locking
Eric, Well, in cases where several servers might start the same virtual machine after a reboot, for example. http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-August/003887.html I've seen this hook here: http://www.wogri.at/en/linux/ceph-libvirt-locking/ But it's a hook... Yes, I may try to write a patch. My coding skills are surely not as good as yours but I'd be glad to make
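
A hook of the kind referenced above could look roughly like the sketch below: a minimal /etc/libvirt/hooks/qemu script, assuming the guest name matches its RBD image name in a pool called "vms" (both assumptions for illustration, not from the thread):

    #!/bin/sh
    # Sketch: take an exclusive RBD advisory lock before a guest starts,
    # drop it after the guest stops.
    DOMAIN="$1"; OPERATION="$2"
    POOL="vms"
    LOCK_ID="libvirt-$(hostname -s)"
    case "$OPERATION" in
      prepare)
        # Non-zero exit here makes libvirt abort the start if another
        # host already holds the lock.
        rbd lock add "$POOL/$DOMAIN" "$LOCK_ID" || exit 1
        ;;
      release)
        # 'rbd lock ls' rows are: <locker client id> <lock id> <address>
        LOCKER=$(rbd lock ls "$POOL/$DOMAIN" | awk -v id="$LOCK_ID" '$2 == id {print $1}')
        [ -n "$LOCKER" ] && rbd lock remove "$POOL/$DOMAIN" "$LOCK_ID" "$LOCKER"
        ;;
    esac
    exit 0
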
2015 Mar 31
2
couple of ceph/rbd questions
Hi, I've recently been working on setting up a set of libvirt compute nodes that will be using a ceph rbd pool for storing vm disk image files. I've run into a couple of issues. First, per the standard ceph documentation examples [1], the way to add a disk is to create a block in the VM definition XML that looks something like this: <disk type='network'
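
The XML above is cut off; a complete RBD <disk> element from the libvirt docs usually continues along these lines (monitor host, pool/image name and secret UUID are placeholders to replace):

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <auth username='libvirt'>
        <secret type='ceph' uuid='REPLACE-WITH-SECRET-UUID'/>
      </auth>
      <source protocol='rbd' name='rbd-pool/vm-disk-1'>
        <host name='mon1.example.com' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>
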
2013 Nov 08
1
Re: RBD images locking
On Thu, Nov 07, 2013 at 09:08:58AM -0700, Eric Blake wrote: > On 11/07/2013 09:04 AM, NEVEU Stephane wrote: > > Eric, > > [please don't top-post on technical lists] > > > > > Well, in cases where several servers might start the same virtual machine after a reboot, for example. > > http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-August/003887.html
2014 Aug 06
2
python-guestfs rbd
How do I use python-guestfs to access an RBD device? The function I found is g.add_drive_opts, but I don't know how it receives Ceph's configuration. I found this link http://rwmj.wordpress.com/2013/03/12/accessing-ceph-rbd-sheepdog-etc-using-libguestfs/ Is that the only way I should use to access Ceph RBD? Can we use python-guestfs to get the same effect? Thanks
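
The Python binding does take the Ceph details through optional arguments to add_drive_opts, per the libguestfs add_drive documentation. A minimal sketch (monitor address, pool/image, username and secret are placeholders):

    import guestfs

    g = guestfs.GuestFS(python_return_dict=True)
    # For protocol="rbd" the filename is "pool/image"; server entries
    # are "host:port" strings; secret is the base64 cephx key.
    g.add_drive_opts("rbd-pool/vm-disk-1",
                     format="raw",
                     protocol="rbd",
                     server=["mon1.example.com:6789"],
                     username="libvirt",
                     secret="REPLACE-WITH-BASE64-KEY")
    g.launch()
    print(g.list_filesystems())
    g.close()
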
2013 Nov 07
2
RBD images locking
Hi, One short question: Is there any chance to see locks on rbd/images in the next release? Thank you :)
2016 Feb 01
2
virsh, virt-filesystems, guestmount, virt-install not working well with ceph rbd yet?
Hello everybody, This is a cross post to libvirt-users, libguestfs and ceph-users. I came back from FOSDEM 2016, my 7th year or so, and saw the awesome development going on around virtualization, and I want to thank everybody for their contributions. I saw presentations from oVirt, OpenStack and quite a few great Red Hat people, just like in previous years. I personally have been
2013 Jun 07
1
Re: [ceph-users] Setting RBD cache parameters for libvirt+qemu
On Jun 7, 2013, at 5:01 PM, Josh Durgin <josh.durgin@inktank.com> wrote: > On 06/07/2013 02:41 PM, John Nielsen wrote: >> I am running some qemu-kvm virtual machines via libvirt using Ceph RBD as the back-end storage. Today I was testing an update to libvirt-1.0.6 on one of my hosts and discovered that it includes this change: >> [libvirt] [PATCH] Forbid use of ':'
2015 Oct 12
3
[ovirt-users] CEPH rbd support in EL7 libvirt
On 12/10/15 10:13, Nux! wrote: > Hi Nir, > > I have not tried to use oVirt with Ceph, my question was about > libvirt and was directed to ask the question here, sorry for the > noise; I understand libvirt is not really the oVirt people's concern. > > The thing is, qemu can do Ceph RBD in EL7 but libvirt does not, > although
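
A quick way to see what the installed binaries actually support (paths are EL7-typical, adjust as needed; this is a diagnostic sketch, not a fix for the packaging gap discussed above):

    # Does the qemu binary link against librbd?
    ldd /usr/libexec/qemu-kvm | grep -i librbd
    # Does qemu-img list rbd among its supported formats?
    qemu-img --help | grep -i rbd
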
2015 Jun 08
2
ceph rbd pool and libvirt manageability (virt-install)
Hello everybody, I created an RBD pool and activated it, but I can't seem to create volumes in it with virsh or virt-install? # virsh pool-dumpxml myrbdpool <pool type='rbd'> <name>myrbdpool</name> <uuid>2d786f7a-2df3-4d79-ae60-1535bcf1c6b5</uuid> <capacity
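
For what it's worth, volume creation against a defined and started pool is normally done with vol-create-as; a sketch, assuming a libvirt version whose rbd backend supports volume creation (the pool name is taken from the message above):

    # Create a 10 GiB raw volume inside the rbd-backed pool
    virsh vol-create-as myrbdpool myvol 10G --format raw
    # List and inspect
    virsh vol-list myrbdpool
    virsh vol-dumpxml myvol --pool myrbdpool
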
2018 Aug 07
1
Re: ceph rbd pool and libvirt manageability (virt-install)
On Mon, Aug 06, 2018 at 09:19:59PM +0200, Jelle de Jong wrote: > Hello everybody, > > virt-install --version > 1.4.0 > > How do I create a ceph network disk with virt-install without having to > edit it? > > <disk type='network' device='disk'> > <driver name='qemu' type='raw'/> > <auth
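
One way around hand-editing, where a libvirt rbd storage pool is already defined, is to point virt-install at a pool volume instead of spelling out the network disk. A sketch (guest, pool and volume names are made up; --disk vol= references an existing libvirt volume):

    virt-install \
      --name guest01 \
      --memory 2048 --vcpus 2 \
      --disk vol=myrbdpool/guest01-disk,bus=virtio \
      --import --graphics none
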
2012 Aug 30
5
Ceph + RBD + Xen: Complete collapse -> Network issue in domU / Bad data for OSD / OOM Kill
Hi, A bit of explanation of what I'm trying to achieve: We have a bunch of homogeneous nodes that have CPU + RAM + Storage and we want to use that as some generic cluster. The idea is to have Xen on all of these and run a Ceph OSD in a domU on each to "export" the local storage space to the entire cluster. And then use RBD to store / access VM images from any of the machines.
2013 Oct 17
2
Create RBD Format 2 disk images with qemu-image
Hello, I would like to use RBD Format 2 images so I can take advantage of layering. However, when I use "qemu-img create -f rbd rbd:data/foo 10G", I get format 1 RBD images. (Actually, when I use the "-f rbd" flag, qemu-img core dumps, but it looks like that feature may have been deprecated [1]) Is there any way to have qemu-img create RBD Format 2 images or am I better off
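
The usual workaround is to create the image with the rbd tool itself, where the on-disk format is an explicit option, and only point qemu at it afterwards. A sketch (pool/image names are placeholders; older rbd releases spelled the flag --format 2):

    # Create a 10 GiB format-2 image directly with the rbd CLI
    rbd create --image-format 2 --size 10240 data/foo
    # Layering then works as usual: snapshot, protect, clone
    rbd snap create data/foo@base
    rbd snap protect data/foo@base
    rbd clone data/foo@base data/foo-clone
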
2013 Apr 18
39
Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
Hi, I've been working on getting a working blktap driver that allows access to Ceph RBD block devices without relying on the RBD kernel driver, and it has finally reached a point where it works and is testable. Some of the advantages are: - Easier to update to a newer RBD version - Allows functionality only available in the userspace RBD library (write cache, layering, ...) - Fewer issues when
2018 May 27
1
Using libvirt to access Ceph RBDs with Xen
Hi everybody, my background: I've been doing Xen for 10++ years, for many years with DRBD for high availability; for some time now I've preferred GlusterFS with FUSE as replicated storage, where I place the image files for the VMs. In my current project we started (successfully) with Xen/GlusterFS too, but because the provider where we placed the servers widely uses Ceph, we decided to
2013 Jun 07
2
Setting RBD cache parameters for libvirt+qemu
I am running some qemu-kvm virtual machines via libvirt using Ceph RBD as the back-end storage. Today I was testing an update to libvirt-1.0.6 on one of my hosts and discovered that it includes this change: [libvirt] [PATCH] Forbid use of ':' in RBD pool names ...People are known to be abusing the lack of escaping in current libvirt to pass arbitrary args to QEMU. I am one of those
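
With the ':' filename syntax forbidden, the usual place for these knobs is the [client] section of ceph.conf on the hypervisor, which librbd reads regardless of how qemu was invoked. A sketch (the values are illustrative, not tuning recommendations):

    [client]
        rbd cache = true
        rbd cache size = 67108864            # 64 MiB
        rbd cache max dirty = 50331648       # 48 MiB
        rbd cache writethrough until flush = true
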
2019 Apr 26
2
Libvirt pool cannot see or create rbd clones
Hello everyone, To increase my odds of finding an answer I also wanted to ask here. This is my post from serverfault[1], verbatim: While trying to get a cloned disk running from my OS snapshot, I ran into the problem that libvirt cannot see existing images cloned from a snapshot. Created via: # rbd -p vmdisks clone vmdisks/coreos_2023@base vmdisks/coreos00.disk The base image has the one
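
Two things worth checking in that situation (general Ceph/libvirt behaviour, not a confirmed fix for this report): whether the clone is visible to rbd itself, and whether the libvirt pool has been re-scanned since the clone was made. The libvirt pool name below is a placeholder:

    # Long listing shows clones together with their parent snapshot
    rbd -p vmdisks ls -l
    # libvirt caches pool contents; refresh before looking for the volume
    virsh pool-refresh vmdisks-pool
    virsh vol-list vmdisks-pool
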
2019 Apr 01
2
Re: guestfish Remote Images IPv6 Support
Unfortunately I do need to use the address explicitly as opposed to hostnames because the source of the data fed here is Ceph's monmap, which returns the addresses explicitly. I've tried all the common ways to escape the : in the v6 address to no avail. I definitely agree that the problem looks to be that it parses the colons as if the port comes next, and then everything after that is
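
For the record, the generic URI convention for literal IPv6 addresses is square brackets, as in the sketch below; whether libguestfs' parser accepts this form was exactly the open question in this thread, so treat it as the thing to test rather than a known fix (the address, pool and disk names are placeholders):

    guestfish --ro -a 'rbd://[2001:db8::1]:6789/pool/disk'
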
2013 May 03
1
sanlockd, virtlock and GFS2
Hi, I'm trying to put in place a KVM cluster (using clvm and gfs2), but I'm running into some issues with either sanlock or virtlockd. All virtual machines are handled via the cluster (in /etc/cluster/cluster.conf) but I want some kind of locking to be in place as an extra safety measure. Sanlock ======= At first I tried sanlock, but it seems that if one node goes down unexpectedly,
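
For the virtlockd side, the wiring is typically two config files on every node; a sketch using indirect leases on a shared directory (the GFS2 mount path is a placeholder, and libvirtd must be restarted afterwards):

    # /etc/libvirt/qemu.conf
    lock_manager = "lockd"

    # /etc/libvirt/qemu-lockd.conf
    # Hash each disk path into a lease file kept on shared storage, so
    # every node contends for the same lock file per disk.
    file_lockspace_dir = "/mnt/gfs2/virtlockd"
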