similar to: Create RBD Format 2 disk images with qemu-image

Displaying 20 results from an estimated 3000 matches similar to: "Create RBD Format 2 disk images with qemu-image"

2013 Apr 18
39
Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
Hi, I've been working on a blktap driver that allows access to Ceph RBD block devices without relying on the RBD kernel driver, and it has finally reached a point where it works and is testable. Some of the advantages are: - Easier to update to newer RBD versions - Allows functionality only available in the userspace RBD library (write cache, layering, ...) - Fewer issues when
2013 Jun 07
1
Re: [ceph-users] Setting RBD cache parameters for libvirt+qemu
On Jun 7, 2013, at 5:01 PM, Josh Durgin <josh.durgin@inktank.com> wrote: > On 06/07/2013 02:41 PM, John Nielsen wrote: >> I am running some qemu-kvm virtual machines via libvirt using Ceph RBD as the back-end storage. Today I was testing an update to libvirt-1.0.6 on one of my hosts and discovered that it includes this change: >> [libvirt] [PATCH] Forbid use of ':'
2013 Jun 07
2
Setting RBD cache parameters for libvirt+qemu
I am running some qemu-kvm virtual machines via libvirt using Ceph RBD as the back-end storage. Today I was testing an update to libvirt-1.0.6 on one of my hosts and discovered that it includes this change: [libvirt] [PATCH] Forbid use of ':' in RBD pool names ...People are known to be abusing the lack of escaping in current libvirt to pass arbitrary args to QEMU. I am one of those
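For reference, the documented route around that restriction is to put librbd cache settings in the client-side ceph.conf (which librbd reads when qemu connects) and keep only the cache mode in the domain XML. A minimal sketch with illustrative values:

    # Client-side ceph.conf, read by librbd at connect time:
    cat >> /etc/ceph/ceph.conf <<'EOF'
    [client]
        rbd cache = true
        rbd cache writethrough until flush = true
    EOF
    # The cache mode itself stays on libvirt's disk driver element:
    #   <driver name='qemu' type='raw' cache='writeback'/>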
2019 Apr 26
2
Libvirt pool cannot see or create rbd clones
Hello everyone, To increase my odds of finding an answer I also wanted to ask here. This is my post from serverfault[1], verbatim: While trying to get a cloned disk running from my OS snapshot, I ran into the problem that libvirt cannot see existing images cloned from a snapshot. Created via: # rbd -p vmdisks clone vmdisks/coreos_2023@base vmdisks/coreos00.disk The base image has the one
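For context, this is the usual clone workflow that produces such an image, plus the first thing to try when libvirt does not list it; the pool-refresh line assumes the libvirt pool is also named vmdisks, which is a guess:

    rbd snap create vmdisks/coreos_2023@base     # snapshot the base image
    rbd snap protect vmdisks/coreos_2023@base    # clones require a protected snapshot
    rbd clone vmdisks/coreos_2023@base vmdisks/coreos00.disk
    virsh pool-refresh vmdisks                   # force libvirt to rescan the pool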
2013 Oct 21
0
Re: Create RBD Format 2 disk images with qemu-image
On 10/17/2013 05:28 AM, Jon wrote: > Hello, > > I would like to use RBD Format 2 images so I can take advantage of layering. > > However, when I use "qemu-img create -f rbd rbd:data/foo 10G", I get > format 1 RBD images. (Actually, when I use the "-f rbd" flag, qemu-img > core dumps, but it looks like that feature may have been deprecated [1]) > > Is
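The short answer from this thread: format 2 images come from the rbd tool itself, and qemu then treats the result as raw. A sketch using the names from the quoted command:

    rbd create data/foo --size 10240 --image-format 2   # size is in megabytes
    qemu-img info rbd:data/foo     # qemu sees a raw device; '-f rbd' is the deprecated part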
2019 Jul 29
3
Why librbd disallow VM live migration if the disk cache mode is not none or directsync
I'm curious why librbd sets this limitation. The rule first appeared in librbd.git commit d57485f73ab. Theoretically, a write-through cache is also safe for VM migration, if the cache implementation guarantees that cache invalidation and disk writes are synchronous operations. For example, I'm using Ceph RBD images as VM storage backend. The Ceph librbd supports synchronous
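Until that rule changes, the enforced configuration for a migratable guest is cache mode none (or directsync), which qemu's rbd driver translates to a disabled librbd cache. A minimal sketch; the pool and image name are illustrative:

    qemu-system-x86_64 ... \
      -drive file=rbd:vmpool/vm1disk,format=raw,if=virtio,cache=none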
2015 Mar 31
2
couple of ceph/rbd questions
Hi, I've recently been working on setting up a set of libvirt compute nodes that will be using a ceph rbd pool for storing vm disk image files. I've run into a couple of issues. First, per the standard ceph documentation examples [1], the way to add a disk is to create a block in the VM definition XML that looks something like this: <disk type='network'
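For reference, the documented shape of that disk block continues along these lines; the pool/image name, monitor host, and secret UUID are placeholders:

    <disk type='network' device='disk'>
      <source protocol='rbd' name='rbd/myvm-disk'>
        <host name='mon1.example.com' port='6789'/>
      </source>
      <auth username='libvirt'>
        <secret type='ceph' uuid='...'/>
      </auth>
      <target dev='vda' bus='virtio'/>
    </disk>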
2013 Nov 26
3
Re: Problem Connecting to RBD images using Sys::Guestfs Perl Module
Hey Hilko, >> I'm guessing that you're using Ubuntu, am I right? Pretty good guess. :) For the time being I'm using 13.10. I know I've said it previously, but you're right, I should include that info so people don't have to go digging. How do you normally deal with that warning? I typically rename the kvm binary, then link qemu-system-x86_64, but my concern is
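Rather than renaming binaries, libguestfs can be pointed at a specific hypervisor binary through an environment variable; a sketch, assuming the usual Ubuntu path:

    # Tell libguestfs which qemu to run instead of the default 'kvm':
    export LIBGUESTFS_HV=/usr/bin/qemu-system-x86_64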
2012 Aug 06
2
using RBD with libvirt 0.9.13
I'm having some trouble creating KVM domains with RBD block devices using virsh. I've managed to get virsh to define the domain, but it gives an error when trying to start the domain: error: Failed to start domain test0 error: internal error process exited while connecting to monitor: char device redirected to /dev/pts/3 kvm: -drive
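When the -drive line dies at startup like this, missing cephx credentials are a common culprit; the usual wiring in libvirt is a ceph secret object, sketched here with an illustrative client name:

    cat > secret.xml <<'EOF'
    <secret ephemeral='no' private='no'>
      <usage type='ceph'>
        <name>client.libvirt secret</name>
      </usage>
    </secret>
    EOF
    virsh secret-define secret.xml    # note the UUID it prints
    virsh secret-set-value --secret "$UUID" \
      --base64 "$(ceph auth get-key client.libvirt)"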
2012 Aug 30
5
Ceph + RBD + Xen: Complete collapse -> Network issue in domU / Bad data for OSD / OOM Kill
Hi, A bit of explanation of what I'm trying to achieve: We have a bunch of homogeneous nodes that have CPU + RAM + Storage and we want to use that as some generic cluster. The idea is to have Xen on all of these and run Ceph OSD in a domU on each to "export" the local storage space to the entire cluster. And then use RBD to store / access VM images from any of the machines.
2013 May 07
7
[PATCH 0/5] rbd improvements
This series improves Ceph RBD support in libguestfs. It uses the servers list, adds support for a custom username, and starts to add support for a custom secret.
2016 Feb 01
2
virsh, virt-filesystems, guestmount, virt-install not working well with ceph rbd yet?
Hello everybody, This is a cross post to libvirt-users, libguestfs and ceph-users. I came back from FOSDEM 2016, my 7th year or so, and have seen the awesome development around virtualization going on, and want to thank everybody for their contributions. I saw presentations from oVirt, OpenStack and quite a few great Red Hat people, just like in previous years. I personally have been
2014 Aug 06
2
python-guestfs rbd
How can I use python-guestfs to access an RBD device? The function I found is g.add_drive_opts, but I don't know how it receives Ceph's configuration. I found this link http://rwmj.wordpress.com/2013/03/12/accessing-ceph-rbd-sheepdog-etc-using-libguestfs/ Is that the only way to access Ceph RBD? Can we use python-guestfs to get the same effect? Thanks
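Yes: the Python bindings take the same optional arguments, e.g. g.add_drive_opts("pool/disk", format="raw", protocol="rbd", server=["mon:6789"], username="admin"), so the approach in the linked post carries over directly. A quick way to sanity-check the same image from the shell first (host, pool, and image names are illustrative):

    guestfish --ro -a rbd://mon1.example.com:6789/vmpool/vm1disk <<'EOF'
    run
    list-filesystems
    EOF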
2014 Jan 15
2
Ceph RBD locking for libvirt-managed LXC (someday) live migrations
Hi, I'm trying to build an active/active virtualization cluster using a Ceph RBD as backing for each libvirt-managed LXC. I know live migration for LXC isn't yet possible, but I'd like to build my infrastructure as if it were. That is, I would like to be sure proper locking is in place for live migrations to someday take place. In other words, I'm building things as if I were
2013 Nov 07
4
Re: RBD images locking
Eric, Well, in the case where several servers may start the same virtual machines after a reboot, for example. http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-August/003887.html I've seen this hook here: http://www.wogri.at/en/linux/ceph-libvirt-locking/ But it's a hook... Yes, I may try to write a patch. My coding skills are surely not as good as yours, but I'd be glad to make
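For the reboot scenario, the rbd CLI's advisory locks are enough to experiment with before writing a libvirt hook or patch; image and lock names here are illustrative:

    rbd lock add vmdisks/vm1 boot-guard       # fails if another host holds it
    rbd lock list vmdisks/vm1                 # shows the lock id and locker
    rbd lock remove vmdisks/vm1 boot-guard <locker>   # release after clean shutdown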
2013 Oct 31
1
Fwd: libvirt unsupport rbd storage pool? "missing backend for pool type 8"
I use "virsh pool-define rbd.xml" to create a rbd storage pool,but get this error virsh pool-define /tmp/rbd.xml error: Failed to define pool from /tmp/rbd.xml error: internal error: missing backend for pool type 8 rbd.xml <pool type="rbd"> <name>cloudstack</name> <source> <name>cloudstack</name> <host
2014 Aug 25
2
help? looking for limits on in-flight write operations for virtio-blk
Hi, I'm trying to figure out what controls the number of in-flight virtio block operations when running Linux in qemu on top of a Linux host. The problem is that we're trying to run as many VMs as possible, using ceph/rbd for the rootfs. We've tripped over the fact that the memory consumption of qemu can spike noticeably when doing I/O (something as simple as "dd" from
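Two knobs bound this on reasonably current QEMU (newer than the version in this thread); values and image names are illustrative:

    # Cap the virtqueue depth, which caps in-flight requests per disk:
    qemu-system-x86_64 ... \
      -drive file=rbd:vmpool/vm1disk,format=raw,if=none,id=d0,cache=none \
      -device virtio-blk-pci,drive=d0,queue-size=128
    # Or rate-limit the drive with QEMU's built-in throttling:
    #   -drive ...,throttling.iops-total=500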
2015 Oct 12
3
[ovirt-users] CEPH rbd support in EL7 libvirt
On 12/10/15 10:13, Nux! wrote: > Hi Nir, > > I have not tried to use oVirt with Ceph, my question was about > libvirt and was directed to ask the question here, sorry for the > noise; I understand libvirt is not really oVirt people's concern. > > The thing is qemu can do ceph rbd in EL7, libvirt does not, > although
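Two quick checks for what the installed stack actually supports; the package name is a Fedora/RHEL assumption:

    qemu-img --help | grep -c rbd                # non-zero if qemu has the rbd block driver
    rpm -q libvirt-daemon-driver-storage-rbd     # libvirt's rbd pool backend, where packaged separately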