similar to: VM with ceph backend snapshot-revert error

Displaying 20 results from an estimated 5000 matches similar to: "VM with ceph backend snapshot-revert error"

2016 Feb 01
2
virsh, virt-filesystems, guestmount, virt-install not working well with ceph rbd yet?
Hello everybody, This is a cross-post to libvirt-users, libguestfs and ceph-users. I came back from FOSDEM 2016, my 7th year or so, and saw the awesome development around virtualization going on and want to thank everybody for their contributions. I saw presentations from oVirt, OpenStack and quite a few great Red Hat people, just like in previous years. I have personally been
2014 Jan 15
2
Ceph RBD locking for libvirt-managed LXC (someday) live migrations
Hi, I'm trying to build an active/active virtualization cluster using a Ceph RBD as backing for each libvirt-managed LXC. I know live migration for LXC isn't yet possible, but I'd like to build my infrastructure as if it were. That is, I would like to be sure proper locking is in place for live migrations to someday take place. In other words, I'm building things as if I were
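For what it's worth, Ceph itself already exposes advisory locks on RBD images through the rbd CLI; they are purely advisory (nothing in libvirt or LXC enforces them), and the pool, image and lock names below are hypothetical:

    # take an advisory lock before starting the container on this host
    rbd lock add libvirt-pool/lxc-guest-1 host-a
    # list current lockers (shows the lock id and the locker's client id)
    rbd lock ls libvirt-pool/lxc-guest-1
    # release it after shutdown/migration; the locker id comes from 'lock ls'
    rbd lock rm libvirt-pool/lxc-guest-1 host-a client.4123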
2015 Mar 31
2
couple of ceph/rbd questions
Hi, I've recently been working on setting up a set of libvirt compute nodes that will be using a ceph rbd pool for storing vm disk image files. I've run into a couple of issues. First, per the standard ceph documentation examples [1], the way to add a disk is to create a block in the VM definition XML that looks something like this: <disk type='network'
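For reference, a complete RBD disk block per the libvirt documentation looks roughly like the following; the pool, image, secret UUID and monitor host are placeholders:

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <auth username='libvirt'>
        <secret type='ceph' uuid='REPLACE-WITH-SECRET-UUID'/>
      </auth>
      <source protocol='rbd' name='rbd-pool/vm-disk-1'>
        <host name='ceph-mon.example.com' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>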
2015 Oct 12
3
[ovirt-users] CEPH rbd support in EL7 libvirt
On 12/10/15 10:13, Nux! wrote: > Hi Nir, > > I have not tried to use oVirt with Ceph; my question was about > libvirt and I was directed to ask it here, sorry for the > noise; I understand libvirt is not really the oVirt people's concern. > > The thing is qemu can do Ceph RBD in EL7, libvirt does not, > although
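A quick way to see which layer is missing support: qemu's rbd block driver and libvirt's rbd storage backend are compiled in independently, so one can work while the other fails. The image spec, pool and monitor below are hypothetical:

    # succeeds if the qemu build includes the rbd block driver
    qemu-img create -f raw rbd:rbd/el7-test 1G
    # errors out (e.g. "missing backend for pool type") if this libvirt
    # build lacks the rbd storage backend
    virsh pool-define-as rbdtest rbd --source-host ceph-mon.example.com --source-name rbd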
2013 Jun 07
1
Re: [ceph-users] Setting RBD cache parameters for libvirt+qemu
On Jun 7, 2013, at 5:01 PM, Josh Durgin <josh.durgin@inktank.com> wrote: > On 06/07/2013 02:41 PM, John Nielsen wrote: >> I am running some qemu-kvm virtual machines via libvirt using Ceph RBD as the back-end storage. Today I was testing an update to libvirt-1.0.6 on one of my hosts and discovered that it includes this change: >> [libvirt] [PATCH] Forbid use of ':'
2018 May 27
1
Using libvirt to access Ceph RBDs with Xen
Hi everybody, my background: I've been doing Xen for 10++ years, many years with DRBD for high availability; for some time I have preferred GlusterFS with FUSE as replicated storage, where I place the image files for the VMs. In my current project we started (successfully) with Xen/GlusterFS too, but because the provider where we placed the servers widely uses CEPH, we decided to
2015 Jun 08
2
ceph rbd pool and libvirt manageability (virt-install)
Hello everybody, I created an rbd pool and activated it, but I can't seem to create volumes in it with virsh or virt-install. # virsh pool-dumpxml myrbdpool <pool type='rbd'> <name>myrbdpool</name> <uuid>2d786f7a-2df3-4d79-ae60-1535bcf1c6b5</uuid> <capacity
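Assuming a libvirt actually built with rbd pool support, volume creation would normally look like this (pool and volume names are placeholders):

    # create a 10G raw volume inside the defined rbd pool
    virsh vol-create-as myrbdpool vm-disk-1 10G --format raw
    # confirm it landed in the pool
    virsh vol-list myrbdpool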
2014 Jan 23
7
[PATCH 0/7] Various fixes for Ceph drives and parsing libvirt XML.
Miscellaneous fixes to: - Handling of Ceph drives now works end-to-end (RHBZ#1026688). - In particular, you can now use rbd:/// URIs in guestfish (and they work). - Parse Ceph & NBD network drives from libvirt XML correctly, so that existing domains with Ceph/NBD drives can be added (eg. using guestfish -d option). - Add more testing of the above.
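With those patches in place, an rbd URI on guestfish's -a option would look something like this; the monitor host, pool and image names are placeholders:

    # open an RBD image read-only, inspect-mount it, and list the root
    guestfish --ro -a rbd://ceph-mon.example.com:6789/rbd-pool/guest-disk -i ll /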
2018 Aug 07
1
Re: ceph rbd pool and libvirt manageability (virt-install)
On Mon, Aug 06, 2018 at 09:19:59PM +0200, Jelle de Jong wrote: > Hello everybody, > > virt-install --version > 1.4.0 > > How do I create a ceph network disk with virt-install without having to > edit it? > > <disk type='network' device='disk'> > <driver name='qemu' type='raw'/> > <auth
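One way to avoid hand-editing, assuming the rbd pool is already defined in libvirt and the volume already exists (all names below are placeholders): virt-install's --disk accepts a vol=pool/volume reference, which expands to the network disk XML:

    virt-install \
      --name testvm --memory 2048 --vcpus 2 \
      --disk vol=myrbdpool/vm-disk-1,bus=virtio \
      --import --os-variant generic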
2012 Aug 30
5
Ceph + RBD + Xen: Complete collapse -> Network issue in domU / Bad data for OSD / OOM Kill
Hi, A bit of explanation of what I'm trying to achieve: We have a bunch of homogeneous nodes that have CPU + RAM + Storage and we want to use that as some generic cluster. The idea is to have Xen on all of these and run a Ceph OSD in a domU on each to "export" the local storage space to the entire cluster, and then use RBD to store / access VM images from any of the machines.
2013 Apr 18
39
Xen blktap driver for Ceph RBD : Anybody wants to test ? :p
Hi, I've been working on a blktap driver that allows access to Ceph RBD block devices without relying on the RBD kernel driver, and it has finally got to a point where it works and is testable. Some of the advantages are: - Easier to update to a newer RBD version - Allows functionality only available in the userspace RBD library (write cache, layering, ...) - Fewer issues when
2016 May 27
2
migrate local storage to ceph | exchanging the storage system
TLDR: Why is virsh migrate --persistent --live domain qemu+ssh://root@host/system --xml domain.ceph.xml not persistent, and what can I do about it? Hi, after years of being pleased with local storage and migrating the complete storage from one host to another, it was time for ceph. After setting up a cluster and testing it, it's now time to move a lot of VMs onto that type of storage, without
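One likely culprit, assuming a reasonably recent libvirt: --xml only replaces the live (running) definition for the migration, while the persistent definition written on the target is controlled separately, so something along these lines may be needed:

    virsh migrate --live --persistent \
      --xml domain.ceph.xml \
      --persistent-xml domain.ceph.xml \
      domain qemu+ssh://root@host/system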
2018 Feb 27
1
Reply: Fail in virDomainUpdateDeviceFlags (libvirt-4.0.0 + Qemu-kvm 2.9.0 + Ceph 10.2.10)
Dear Michal, After fixing my local libvirt master branch following your patch and building an RPM for CentOS 7.4, virDomainUpdateDeviceFlags behaves as below: ================================================ 2018-02-27 09:27:43.782+0000: 16656: debug : virDomainUpdateDeviceFlags:8326 : dom=0x7f2084000c50, (VM: name=6ec499397d594ef2a64fcfc938f38225, uuid=6ec49939-7d59-4ef2-a64f-cfc938f38225), xml=<disk
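For context, virDomainUpdateDeviceFlags is what virsh issues for an update-device; a minimal reproduction from the shell, using the domain name from the log above and a placeholder XML file, would be:

    # disk-new.xml contains the updated <disk> element for the rbd-backed drive
    virsh update-device 6ec499397d594ef2a64fcfc938f38225 disk-new.xml --live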
2013 Oct 17
2
Create RBD Format 2 disk images with qemu-img
Hello, I would like to use RBD Format 2 images so I can take advantage of layering. However, when I use "qemu-img create -f rbd rbd:data/foo 10G", I get format 1 RBD images. (Actually, when I use the "-f rbd" flag, qemu-img core dumps, but it looks like that feature may have been deprecated [1]) Is there any way to have qemu-img create RBD Format 2 images or am I better off
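Two commonly suggested routes, with placeholder pool/image names: create the image with the rbd CLI instead of qemu-img, or flip librbd's default so anything created through the rbd protocol comes out as format 2 (on older rbd versions the flag is spelled --format 2):

    # create a format-2 image directly with the rbd tool
    rbd create data/foo --size 10240 --image-format 2

    # ...or set the default in ceph.conf so qemu-img inherits it:
    # [client]
    # rbd default format = 2
    qemu-img create -f raw rbd:data/foo2 10G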
2014 Jan 07
2
Issue with virt-sysprep
Hi, I'm trying to run virt-sysprep against a disk in Ceph RBD storage but I appear to be unable to do so. The commands and outputs I've had are: On Ceph node: virt-sysprep -a rbd://localhost/libvirt-pool/ubuntu-12-04-beanstalk001 libguestfs: new guestfs handle 0x113b060 rbd://localhost/libvirt-pool/ubuntu-12-04-beanstalk001: No such file or directory libguestfs: trace: close
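The "No such file or directory" suggests the -a argument is being treated as a local path; libguestfs only gained remote URI support on -a around version 1.22, so with a new enough build the invocation would look like this (monitor host is a placeholder):

    virt-sysprep -a rbd://ceph-mon.example.com:6789/libvirt-pool/ubuntu-12-04-beanstalk001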
2015 Oct 31
3
Libvirt enhancement requests
Hi Lucian, It seems to be upstream libvirt-1.2.15-2 with options with_xen and with_libxl enabled. http://cbs.centos.org/koji/buildinfo?buildID=1348 Regards, Jean-Marc On 28/10/2015 09:38, Nux! wrote: > Pasi, > > Where are these RPMs, how are they built, what exactly are the differences vs the stock ones? > > Regards, > Lucian > > -- > Sent from the Delta quadrant
2015 Nov 02
2
Libvirt enhancement requests
On 02/11/2015 18:28, Johnny Hughes wrote: > On 10/31/2015 04:34 PM, Jean-Marc LIGER wrote: >> Hi Lucian, >> >> It seems to be upstream libvirt-1.2.15-2 with options with_xen and >> with_libxl enabled. >> http://cbs.centos.org/koji/buildinfo?buildID=1348 >> > Right, and we can use that version, or a newer one and enable rbd as well. You might use this
2014 Aug 06
2
python-guestfs rbd
How do I use python-guestfs to access an rbd device? The function I found is g.add_drive_opts, but I don't know how it receives Ceph's configuration. I found this link http://rwmj.wordpress.com/2013/03/12/accessing-ceph-rbd-sheepdog-etc-using-libguestfs/ Is that the only way to access Ceph RBD? Can we use python-guestfs to get the same effect? Thanks
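What that blog post does from guestfish can be done from Python through the optional protocol/server arguments of add_drive_opts; the monitor address and image name below are placeholders, and with cephx enabled the username/secret optional arguments may also be needed:

    import guestfs

    g = guestfs.GuestFS()
    # name is "pool/image"; server is a list of "host:port" monitor addresses
    g.add_drive_opts("libvirt-pool/ubuntu-12-04-beanstalk001",
                     format="raw", readonly=True,
                     protocol="rbd",
                     server=["ceph-mon.example.com:6789"])
    g.launch()
    print(g.list_filesystems())
    g.shutdown()
    g.close()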
2013 Jun 07
2
Setting RBD cache parameters for libvirt+qemu
I am running some qemu-kvm virtual machines via libvirt using Ceph RBD as the back-end storage. Today I was testing an update to libvirt-1.0.6 on one of my hosts and discovered that it includes this change: [libvirt] [PATCH] Forbid use of ':' in RBD pool names ...People are known to be abusing the lack of escaping in current libvirt to pass arbitrary args to QEMU. I am one of those
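The usual resolution, for what it's worth: rather than smuggling options through the pool name, let qemu drive the RBD cache via libvirt's cache attribute and keep the tuning in ceph.conf (values below are illustrative):

    ceph.conf on the hypervisor:
        [client]
        rbd cache = true
        rbd cache size = 67108864

    domain XML (qemu >= 1.2 translates cache= into librbd cache settings):
        <driver name='qemu' type='raw' cache='writeback'/>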