similar to: virsh, virt-filesystems, guestmount, virt-install not working well with ceph rbd yet?

Displaying 20 results from an estimated 3000 matches similar to: "virsh, virt-filesystems, guestmount, virt-install not working well with ceph rbd yet?"

2015 Jun 08
2
ceph rbd pool and libvirt manageability (virt-install)
Hello everybody, I created an rbd pool and activated it, but I can't seem to create volumes in it with virsh or virt-install?
# virsh pool-dumpxml myrbdpool
<pool type='rbd'>
  <name>myrbdpool</name>
  <uuid>2d786f7a-2df3-4d79-ae60-1535bcf1c6b5</uuid>
  <capacity
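For reference, volume creation in a libvirt RBD pool needs the pool's <source> to name the Ceph pool, at least one monitor, and the cephx auth secret; once the pool is started, the ordinary virsh volume commands work against it. A minimal sketch, assuming a hypothetical monitor mon1.example.com and a libvirt secret already defined for a cephx user named libvirt (see the secret-define example further down this page):

<pool type='rbd'>
  <name>myrbdpool</name>
  <source>
    <name>ceph-pool-name</name>
    <host name='mon1.example.com' port='6789'/>
    <auth type='ceph' username='libvirt'>
      <secret uuid='UUID-OF-LIBVIRT-SECRET'/>
    </auth>
  </source>
</pool>

# virsh pool-define pool.xml && virsh pool-start myrbdpool
# virsh vol-create-as myrbdpool vol1 10G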
2018 Aug 07
1
Re: ceph rbd pool and libvirt manageability (virt-install)
On Mon, Aug 06, 2018 at 09:19:59PM +0200, Jelle de Jong wrote:
> Hello everybody,
>
> virt-install --version
> 1.4.0
>
> How do I create a ceph network disk with virt-install without having to
> edit it?
>
> <disk type='network' device='disk'>
>   <driver name='qemu' type='raw'/>
>   <auth
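One route that avoids hand-editing the domain XML: define the RBD storage pool in libvirt first (as sketched above), then point virt-install at a volume in it with the --disk vol= syntax. A hedged sketch, assuming a pool named myrbdpool; whether the 1.4.0 build emits the network-disk XML correctly this way is exactly the open question here:

# virsh vol-create-as myrbdpool vol1 20G
# virt-install --name guest1 --memory 2048 \
    --disk vol=myrbdpool/vol1,bus=virtio --import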
2013 Jun 07
1
Re: [ceph-users] Setting RBD cache parameters for libvirt+qemu
On Jun 7, 2013, at 5:01 PM, Josh Durgin <josh.durgin@inktank.com> wrote:
> On 06/07/2013 02:41 PM, John Nielsen wrote:
>> I am running some qemu-kvm virtual machines via libvirt using Ceph RBD as the back-end storage. Today I was testing an update to libvirt-1.0.6 on one of my hosts and discovered that it includes this change:
>> [libvirt] [PATCH] Forbid use of ':'
2014 Jan 15
2
Ceph RBD locking for libvirt-managed LXC (someday) live migrations
Hi, I'm trying to build an active/active virtualization cluster using a Ceph RBD as backing for each libvirt-managed LXC. I know live migration for LXC isn't yet possible, but I'd like to build my infrastructure as if it were. That is, I would like to be sure proper locking is in place for live migrations to someday take place. In other words, I'm building things as if I were
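RBD's own advisory locks can serve as the fencing primitive here; the cluster stores but does not enforce them, so every host has to check before starting a container. A sketch against a hypothetical image vmpool/lxc01:

# rbd lock add vmpool/lxc01 host-a                  # take the advisory lock before starting the LXC
# rbd lock ls vmpool/lxc01                          # lists each lock id and its locker (an internal client id)
# rbd lock remove vmpool/lxc01 host-a client.4123   # release; the locker string comes from 'lock ls'

A start hook that runs 'lock add' and refuses to proceed on failure gives the migration-safe behaviour the poster wants, at the cost of manual lock cleanup after a host crash.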
2015 Mar 31
2
couple of ceph/rbd questions
Hi, I've recently been working on setting up a set of libvirt compute nodes that will be using a ceph rbd pool for storing vm disk image files. I've got a couple of issues I've run into. First, per the standard ceph documentation examples [1], the way to add a disk is to create a block in the VM definition XML that looks something like this:
<disk type='network'
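Completed from the same documentation pattern, with placeholder monitor and secret uuid, the block looks roughly like this:

<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <auth username='libvirt'>
    <secret type='ceph' uuid='UUID-OF-LIBVIRT-SECRET'/>
  </auth>
  <source protocol='rbd' name='rbd-pool/vm-disk-image'>
    <host name='mon1.example.com' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>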
2015 Oct 12
3
[ovirt-users] CEPH rbd support in EL7 libvirt
On 12/10/15 10:13, Nux! wrote:
> Hi Nir,
>
> I have not tried to use Ovirt with Ceph, my question was about
> libvirt and was directed to ask the question here, sorry for the
> noise; I understand libvirt is not really ovirt's people concern.
>
> The thing is qemu can do ceph rbd in EL7, libvirt does not,
> although
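Rather than guessing from package names, the installed stack can be probed directly; output formats vary by version and distro, so treat these as a sketch:

# qemu-img --help | grep -i 'supported formats'   # 'rbd' in the list means qemu was built with librbd
# ldd $(which qemu-img) | grep librbd             # checks the dynamic linkage (binary path varies by distro)
# virsh version                                   # shows which libvirt build is actually in use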
2012 Aug 30
5
Ceph + RBD + Xen: Complete collapse -> Network issue in domU / Bad data for OSD / OOM Kill
Hi, A bit of explanation of what I'm trying to achieve: We have a bunch of homogeneous nodes that have CPU + RAM + Storage and we want to use that as some generic cluster. The idea is to have Xen on all of these and run a Ceph OSD in a domU on each to "export" the local storage space to the entire cluster, and then use RBD to store / access VM images from any of the machines.
2013 Oct 17
2
Create RBD Format 2 disk images with qemu-image
Hello, I would like to use RBD Format 2 images so I can take advantage of layering. However, when I use "qemu-img create -f rbd rbd:data/foo 10G", I get format 1 RBD images. (Actually, when I use the "-f rbd" flag, qemu-img core dumps, but it looks like that feature may have been deprecated [1]) Is there any way to have qemu-img create RBD Format 2 images or am I better off
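As an alternative, the rbd CLI can create format 2 images directly, sidestepping qemu-img for the creation step; the image name is the poster's, the rest is a sketch:

# rbd create --image-format 2 --size 10240 data/foo   # size in MB
# rbd info data/foo                                   # 'format: 2' confirms the layering-capable format

Setting rbd_default_format = 2 in ceph.conf should make librbd-based tools default to format 2 as well, though whether qemu-img of that era honours it is worth verifying.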
2013 Apr 18
39
Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
Hi, I've been working on getting a working blktap driver allowing access to ceph RBD block devices without relying on the RBD kernel driver, and it finally got to a point where it works and is testable. Some of the advantages are:
- Easier to update to a newer RBD version
- Allows functionality only available in the userspace RBD library (write cache, layering, ...)
- Less issue when
2013 Jun 07
2
Setting RBD cache parameters for libvirt+qemu
I am running some qemu-kvm virtual machines via libvirt using Ceph RBD as the back-end storage. Today I was testing an update to libvirt-1.0.6 on one of my hosts and discovered that it includes this change: [libvirt] [PATCH] Forbid use of ':' in RBD pool names ...People are known to be abusing the lack of escaping in current libvirt to pass arbitrary args to QEMU. I am one of those
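Since that change, the supported way to influence the RBD cache from libvirt is the cache attribute on the driver element, rather than ':'-separated options smuggled into the source name; qemu's rbd driver derives rbd_cache from the cache mode (writeback/writethrough enable it, none disables it). A sketch with placeholder names:

<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source protocol='rbd' name='pool/image'>
    <host name='mon1.example.com' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>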
2014 Aug 06
2
python-guestfs rbd
How do I use python-guestfs to access an rbd device? The function I found is g.add_drive_opts, but I don't know how it receives ceph's configuration. I found this link http://rwmj.wordpress.com/2013/03/12/accessing-ceph-rbd-sheepdog-etc-using-libguestfs/ Is that the only way to access ceph rbd? Can we use python-guestfs to get the same effect? Thanks
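libguestfs can take the ceph connection details as optional arguments to add_drive_opts (protocol, server, username, secret), so no ceph.conf parsing is needed. A Python sketch with placeholder monitor, image, and key:

import guestfs

g = guestfs.GuestFS(python_return_dict=True)
# server is a list of 'host:port' monitor addresses;
# secret is the base64 cephx key for the given username
g.add_drive_opts("pool/image",
                 format="raw",
                 protocol="rbd",
                 server=["mon1.example.com:6789"],
                 username="admin",
                 secret="BASE64-CEPHX-KEY")
g.launch()
print(g.list_filesystems())
g.shutdown()
g.close()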
2018 May 27
1
Using libvirt to access Ceph RBDs with Xen
Hi everybody, my background: I have been doing Xen for 10++ years, many years with DRBD for high availability; for some time now I have preferred GlusterFS with FUSE as replicated storage, where I place the image files for the vms. In my current project we started (successfully) with Xen/GlusterFS too, but because the provider where we placed the servers uses CEPH widely, we decided to
2012 Aug 06
2
using RBD with libvirt 0.9.13
I'm having some trouble creating KVM domains with RBD block devices using virsh. I've managed to get virsh to define the domain, but it gives an error when trying to start the domain:
error: Failed to start domain test0
error: internal error process exited while connecting to monitor: char device redirected to /dev/pts/3
kvm: -drive
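One common cause of exactly this monitor-stage failure is a missing libvirt secret for the cephx key. The standard setup, with example names, is:

$ cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <usage type='ceph'>
    <name>client.libvirt secret</name>
  </usage>
</secret>
EOF
$ virsh secret-define secret.xml          # prints the uuid the disk <auth> element must reference
$ virsh secret-set-value --secret UUID --base64 $(ceph auth get-key client.libvirt)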
2019 Apr 26
2
Libvirt pool cannot see or create rbd clones
Hello everyone, To increase my odds of finding an answer I also wanted to ask here. This is my post from serverfault[1] verbatim: While trying to get a cloned disk running from my OS snapshot, I run into the problem that Libvirt cannot see existing images cloned from a snapshot. Created via:
# rbd -p vmdisks clone vmdisks/coreos_2023@base vmdisks/coreos00.disk
The base image has the one
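For comparison, the full rbd-side sequence plus the refresh that asks libvirt to re-enumerate the pool (assuming the libvirt pool is also named vmdisks); whether pool-refresh then lists the clone is the open question:

# rbd snap create vmdisks/coreos_2023@base
# rbd snap protect vmdisks/coreos_2023@base    # clones require a protected snapshot
# rbd clone vmdisks/coreos_2023@base vmdisks/coreos00.disk
# virsh pool-refresh vmdisks                   # re-scan the pool's volumes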
2013 Nov 07
4
Re: RBD images locking
Eric, Well, in the case where several servers may start the same virtual machines after a reboot, for example. http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-August/003887.html I've seen this hook here: http://www.wogri.at/en/linux/ceph-libvirt-locking/ But it's a hook... Yes, I may try to write a patch. My coding skills are surely not as good as yours but I'd be glad to make
2013 Nov 25
4
Problem Connecting to RBD images using Sys::Guestfs Perl Module
Hello, I'm having trouble connecting to rbd images. It seems like somewhere the name is getting chewed up. I wonder if this is related to my previous troubles [1] [2] with rbd images. I'm trying to add an rbd image, but when I launch the guestfs object I get an error:
>> libguestfs: trace: launch = -1 (error)
I'm adding a single RBD
>> libguestfs: trace: add_drive
2014 Mar 13
2
--rbd volume access--
http://rwmj.wordpress.com/2013/03/12/accessing-ceph-rbd-sheepdog-etc-using-libguestfs/#comment-8806 I came across this link and I was able to retrieve the rbd image:
$ guestfish
><fs> set-attach-method appliance
><fs> add-drive /dev/null
><fs> config -set drive.hd0.file=rbd:pool/volume
><fs> run
I was able to retrieve a file from the rbd image using the above
2018 Aug 06
0
Re: ceph rbd pool and libvirt manageability (virt-install)
Hello everybody,
virt-install --version
1.4.0
How do I create a ceph network disk with virt-install without having to edit it?
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <auth username='libvirt'>
    <secret type='ceph' uuid='ec9be0c4-a60f-490e-af83-f0f27aaf48c9'/>
2013 Nov 06
1
Re: Problem using virt-sysprep with RBD images
Hello Rich, Interesting. Thanks for the explanation. When you specify an rbd on the command line for virt-sysprep, do you expect the path to include the monitor address? e.g.:
>> virt-sysprep -a rbd://host-name/pool-name/device-name
If I understand correctly, libvirt is able to understand the ceph configuration, so when I create a device with qemu-img I only specify the protocol and
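For the record, the guestfs tools do accept rbd:// URIs on -a, with the monitor (and optional port) in the authority part; placeholder names throughout:

$ virt-sysprep -a rbd://mon1.example.com:6789/pool-name/device-name
$ guestfish --ro -a rbd://mon1.example.com:6789/pool-name/device-name -i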
2013 Nov 08
1
Re: RBD images locking
On Thu, Nov 07, 2013 at 09:08:58AM -0700, Eric Blake wrote:
> On 11/07/2013 09:04 AM, NEVEU Stephane wrote:
> > Eric,
>
> [please don't top-post on technical lists]
>
> > Well, in the case where several servers may start the same virtual machines after a reboot, for example.
> > http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-August/003887.html