Displaying 20 results from an estimated 2000 matches similar to: "libvirt + ceph rbd will hang"
2014 Jan 16
0
Re: Ceph RBD locking for libvirt-managed LXC (someday) live migrations
On Wed, Jan 15, 2014 at 05:47:35PM -0500, Joshua Dotson wrote:
> Hi,
>
> I'm trying to build an active/active virtualization cluster using a Ceph
> RBD as backing for each libvirt-managed LXC. I know live migration for LXC
> isn't yet possible, but I'd like to build my infrastructure as if it were.
> That is, I would like to be sure proper locking is in place for
2013 Jun 08
0
Re: [ceph-users] Setting RBD cache parameters for libvirt+qemu
On 06/07/2013 04:18 PM, John Nielsen wrote:
> On Jun 7, 2013, at 5:01 PM, Josh Durgin <josh.durgin@inktank.com> wrote:
>
>> On 06/07/2013 02:41 PM, John Nielsen wrote:
>>> I am running some qemu-kvm virtual machines via libvirt using Ceph RBD as the back-end storage. Today I was testing an update to libvirt-1.0.6 on one of my hosts and discovered that it includes this
2018 Aug 06
0
Re: ceph rbd pool and libvirt manageability (virt-install)
Hello everybody,
virt-install --version
1.4.0
How do I create a ceph network disk with virt-install without having to
edit it?
<disk type='network' device='disk'>
<driver name='qemu' type='raw'/>
<auth username='libvirt'>
<secret type='ceph' uuid='ec9be0c4-a60f-490e-af83-f0f27aaf48c9'/>
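For reference, the complete shape of this network-disk definition (per the usual Ceph/libvirt documentation; the pool/image name and monitor host below are placeholders, not values from the post):

```xml
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <auth username='libvirt'>
    <secret type='ceph' uuid='ec9be0c4-a60f-490e-af83-f0f27aaf48c9'/>
  </auth>
  <!-- pool/image and monitor address are placeholders -->
  <source protocol='rbd' name='libvirt-pool/my-image'>
    <host name='ceph-mon.example.com' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
```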
2015 Mar 31
0
Re: couple of ceph/rbd questions
On 03/31/2015 11:47 AM, Brian Kroth wrote:
> Hi, I've recently been working on setting up a set of libvirt compute
> nodes that will be using a ceph rbd pool for storing vm disk image
> files. I've got a couple of issues I've run into.
>
> First, per the standard ceph documentation examples [1], the way to add
> a disk is to create a block in the VM definition XML
2013 Jun 07
0
Re: [ceph-users] Setting RBD cache parameters for libvirt+qemu
On 06/07/2013 02:41 PM, John Nielsen wrote:
> I am running some qemu-kvm virtual machines via libvirt using Ceph RBD as the back-end storage. Today I was testing an update to libvirt-1.0.6 on one of my hosts and discovered that it includes this change:
> [libvirt] [PATCH] Forbid use of ':' in RBD pool names
> ...People are known to be abusing the lack of escaping in current
2013 Jun 07
1
Re: [ceph-users] Setting RBD cache parameters for libvirt+qemu
On Jun 7, 2013, at 5:01 PM, Josh Durgin <josh.durgin@inktank.com> wrote:
> On 06/07/2013 02:41 PM, John Nielsen wrote:
>> I am running some qemu-kvm virtual machines via libvirt using Ceph RBD as the back-end storage. Today I was testing an update to libvirt-1.0.6 on one of my hosts and discovered that it includes this change:
>> [libvirt] [PATCH] Forbid use of ':'
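With ':' forbidden in the source name, RBD cache options can no longer be smuggled into it; the cache behaviour is instead driven by qemu's cache mode on the disk's driver element. A minimal sketch (not the full thread resolution; pool/image names are placeholders):

```xml
<disk type='network' device='disk'>
  <!-- cache='writeback' lets qemu/librbd enable the RBD cache;
       cache='none' disables it -->
  <driver name='qemu' type='raw' cache='writeback'/>
  <source protocol='rbd' name='pool/image'/>
  <target dev='vda' bus='virtio'/>
</disk>
```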
2016 Feb 01
2
virsh, virt-filesystems, guestmount, virt-install not working well with ceph rbd yet?
Hello everybody,
This is a cross post to libvirt-users, libguestfs and ceph-users.
I came back from FOSDEM 2016 (my 7th year or so), have seen the
awesome development around virtualization going on, and want to thank
everybody for their contributions.
I saw presentations from oVirt, OpenStack and quite a few great Red Hat
people, just like in previous years.
I personally been
2015 Oct 13
0
[ovirt-users] CEPH rbd support in EL7 libvirt
hi
is ovirt usable with xen? is there any doc/howto on how to use it?
On 2015-10-12 15:04, Sven Kieske wrote:
> On 12/10/15 10:13, Nux! wrote:
>> Hi Nir,
>>
>> I have not tried to use Ovirt with Ceph, my question was about
>> libvirt and was directed to ask the question here, sorry for the
2014 Jan 15
2
Ceph RBD locking for libvirt-managed LXC (someday) live migrations
Hi,
I'm trying to build an active/active virtualization cluster using a Ceph
RBD as backing for each libvirt-managed LXC. I know live migration for LXC
isn't yet possible, but I'd like to build my infrastructure as if it were.
That is, I would like to be sure proper locking is in place for live
migrations to someday take place. In other words, I'm building things as
if I were
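One mechanism for the kind of locking asked about here is libvirt's virtlockd lock manager; a minimal sketch of enabling it on each host (file paths per a typical Linux libvirt install; the shared lockspace directory is an assumption and must be visible to every host, e.g. over NFS):

```
# /etc/libvirt/qemu.conf
lock_manager = "lockd"

# /etc/libvirt/qemu-lockd.conf
# Point indirect leases at a directory shared by all hosts, so two
# hosts cannot start a guest against the same disk image concurrently.
file_lockspace_dir = "/var/lib/libvirt/lockd/files"
```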
2015 Oct 12
3
[ovirt-users] CEPH rbd support in EL7 libvirt
On 12/10/15 10:13, Nux! wrote:
> Hi Nir,
>
> I have not tried to use Ovirt with Ceph, my question was about
> libvirt and was directed to ask the question here, sorry for the
> noise; I understand libvirt is not really the ovirt people's concern.
>
> The thing is qemu can do ceph rbd in EL7, libvirt does not,
> although
2015 Jun 08
2
ceph rbd pool and libvirt manageability (virt-install)
Hello everybody,
I created a rbd pool and activated it, but I can't seem to create
volumes in it with virsh or virt-install?
# virsh pool-dumpxml myrbdpool
<pool type='rbd'>
<name>myrbdpool</name>
<uuid>2d786f7a-2df3-4d79-ae60-1535bcf1c6b5</uuid>
<capacity
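Once such an rbd pool is defined and started, volumes are normally created through virsh; a sketch using the pool name from the post (volume name and size are placeholders):

```shell
# Create a 10 GiB raw volume in the rbd-backed pool
virsh vol-create-as myrbdpool myvolume 10G --format raw

# List volumes in the pool to confirm
virsh vol-list myrbdpool
```

virt-install can then reference the volume with --disk vol=myrbdpool/myvolume rather than hand-editing the domain XML.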
2019 Jun 21
0
Intermittent live migration hang with ceph RBD attached volume
Software in use:
Source hypervisor: Qemu: stable-2.12 branch; Libvirt: v3.2-maint branch; OS: CentOS 6
Destination hypervisor: Qemu: stable-2.12 branch; Libvirt: v4.9-maint branch; OS: CentOS 7
I'm experiencing an intermittent live migration hang of a virtual machine
(KVM) with a ceph RBD volume attached.
At the high level what I see is that when this does happen, the virtual
2018 Aug 07
1
Re: ceph rbd pool and libvirt manageability (virt-install)
On Mon, Aug 06, 2018 at 09:19:59PM +0200, Jelle de Jong wrote:
> Hello everybody,
>
> virt-install --version
> 1.4.0
>
> How do I create a ceph network disk with virt-install without having to
> edit it?
>
> <disk type='network' device='disk'>
> <driver name='qemu' type='raw'/>
> <auth
2015 Mar 31
2
couple of ceph/rbd questions
Hi, I've recently been working on setting up a set of libvirt compute
nodes that will be using a ceph rbd pool for storing vm disk image
files. I've got a couple of issues I've run into.
First, per the standard ceph documentation examples [1], the way to add a
disk is to create a block in the VM definition XML that looks something
like this:
<disk type='network'
2012 Aug 30
5
Ceph + RBD + Xen: Complete collapse -> Network issue in domU / Bad data for OSD / OOM Kill
Hi,
A bit of explanation of what I'm trying to achieve:
We have a bunch of homogeneous nodes that have CPU + RAM + Storage and
we want to use that as some generic cluster. The idea is to have Xen
on all of these and run Ceph OSD in a domU on each to "export" the
local storage space to the entire cluster. And then use RBD to store /
access VM images from any of the machines.
2017 Jul 02
2
Re: 答复: virtual drive performance
Just a little catch-up. This time I was able to resolve the issue by doing:
virsh blockjob domain hda --abort
virsh blockcommit domain hda --active --pivot
Last time I had to shut down the virtual machine and do this while being
offline.
Thanks Wang for your valuable input. As far as the memory goes, there's
plenty of head room:
$ free -h
total used free
2013 Oct 17
2
Create RBD Format 2 disk images with qemu-image
Hello,
I would like to use RBD Format 2 images so I can take advantage of layering.
However, when I use "qemu-img create -f rbd rbd:data/foo 10G", I get format
1 RBD images. (Actually, when I use the "-f rbd" flag, qemu-img core dumps,
but it looks like that feature may have been deprecated [1])
Is there any way to have qemu-img create RBD Format 2 images or am I better
off
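The commonly cited workaround (hedged; the exact flag spelling varies across Ceph releases, with older rbd using --format rather than --image-format) is to create the image with the rbd CLI, or to set the default format in ceph.conf and use '-f raw' with qemu-img:

```shell
# Create a format-2 image directly with the rbd tool
rbd create --size 10240 --image-format 2 data/foo

# Or set "rbd_default_format = 2" under [client] in ceph.conf,
# then create raw-on-rbd images with qemu-img:
qemu-img create -f raw rbd:data/foo 10G
```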
2013 Oct 21
0
Re: Create RBD Format 2 disk images with qemu-image
On 10/17/2013 05:28 AM, Jon wrote:
> Hello,
>
> I would like to use RBD Format 2 images so I can take advantage of layering.
>
> However, when I use "qemu-img create -f rbd rbd:data/foo 10G", I get
> format 1 RBD images. (Actually, when I use the "-f rbd" flag, qemu-img
> core dumps, but it looks like that feature may have been deprecated [1])
>
> Is
2015 Jan 08
0
Libvirt guest can't boot up when use ceph as storage backend with Selinux enabled
Hi there,
I hit a problem where a guest fails to boot when SELinux is enabled and the guest
storage is backed by ceph. However, I can boot the guest with qemu directly, and I
can also boot it with SELinux disabled. Not sure whether it is a libvirt bug or a wrong use case.
1. Enable Selinux
# getenforce && iptables -L
Enforcing
Chain INPUT (policy ACCEPT)
target prot opt source destination