Displaying 20 results from an estimated 30000 matches similar to: "RBD volume pools and locks"
2015 Mar 31
0
Re: couple of ceph/rbd questions
On 03/31/2015 11:47 AM, Brian Kroth wrote:
> Hi, I've recently been working on setting up a set of libvirt compute
> nodes that will be using a ceph rbd pool for storing vm disk image
> files. I've got a couple of issues I've run into.
>
> First, per the standard ceph documentation examples [1], the way to add
> a disk is to create a block in the VM definition XML
2020 Oct 26
0
Re: RBD volume not made available to Xen virtual guest on openSUSE 15.2 (with libvirt 6.0.0)
On a Friday in 2020, Marcel Juffermans wrote:
>Hi there,
>
>Since upgrading to openSUSE 15.2 (which includes libvirt 6.0.0) the
>virtual guests don't get their RBD disks made available to them. On
>openSUSE 15.1 (which includes libvirt 5.1.0) that worked fine. The XML
>is as follows:
>
[...]
>I tried to strace libvirtd. The results are as follows:
>
>On
2020 Oct 27
0
Re: RBD volume not made available to Xen virtual guest on openSUSE 15.2 (with libvirt 6.0.0)
Thanks Jim,
Looking at the logs for the working and non-working setups, the command
line for QEMU is identical and the QMP commands are almost the same: they
both do "query-chardev" and "query-vnc", and the working setup does an
additional "qmp_capabilities", which is likely not relevant.
So I guess it must be in QEMU - I'll head over to the bug tracker.
Thanks
2014 Mar 13
2
--rbd volume access--
http://rwmj.wordpress.com/2013/03/12/accessing-ceph-rbd-sheepdog-etc-using-libguestfs/#comment-8806
I came across this link and I was able to retrieve the rbd image.
$ guestfish
><fs> set-attach-method appliance
><fs> add-drive /dev/null
><fs> config -set drive.hd0.file=rbd:pool/volume
><fs> run
I was able to retrieve a file from the rbd image using the above
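For anyone landing on this later, the retrieval step could be done in the same guestfish session, for example (a sketch only; the filesystem device and paths below are placeholders, not from the original report):
><fs> list-filesystems
><fs> mount /dev/sda1 /
><fs> copy-out /etc/fstab /tmp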
2020 Oct 26
1
Re: RBD volume not made available to Xen virtual guest on openSUSE 15.2 (with libvirt 6.0.0)
It's QEMU 4.2.1-lp152.9.6.1.
I've tried updating it from the Open Build Service repos but there are too
many version conflicts.
Marcel
On 26/10/20 9:02 pm, Ján Tomko wrote:
> On a Friday in 2020, Marcel Juffermans wrote:
>> Hi there,
>>
>> Since upgrading to openSUSE 15.2 (which includes libvirt 6.0.0) the
>> virtual guests don't get their RBD disks
2015 Mar 31
2
couple of ceph/rbd questions
Hi, I've recently been working on setting up a set of libvirt compute
nodes that will be using a ceph rbd pool for storing vm disk image
files. I've got a couple of issues I've run into.
First, per the standard ceph documentation examples [1], the way to add a
disk is to create a block in the VM definition XML that looks something
like this:
<disk type='network'
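For context, a complete block of this kind typically looks roughly like the following (the pool/image name, secret UUID and monitor address are placeholders):
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <auth username='libvirt'>
    <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
  </auth>
  <source protocol='rbd' name='libvirt-pool/my-vm-image'>
    <host name='ceph-mon.example.com' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>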
2014 Jan 16
0
Re: Ceph RBD locking for libvirt-managed LXC (someday) live migrations
On Wed, Jan 15, 2014 at 05:47:35PM -0500, Joshua Dotson wrote:
> Hi,
>
> I'm trying to build an active/active virtualization cluster using a Ceph
> RBD as backing for each libvirt-managed LXC. I know live migration for LXC
> isn't yet possible, but I'd like to build my infrastructure as if it were.
> That is, I would like to be sure proper locking is in place for
2014 Mar 13
2
Re: --rbd volume access--
I will test it out and will update you.
Thank you so much for your reply.
kind regards
Shumaila Naeem
Software Engineer, Ovex Technologies
On Thu, Mar 13, 2014 at 5:15 PM, Richard W.M. Jones <rjones@redhat.com> wrote:
> On Thu, Mar 13, 2014 at 03:06:17PM +0500, Shumaila Naeem wrote:
> >
>
2020 Oct 23
2
RBD volume not made available to Xen virtual guest on openSUSE 15.2 (with libvirt 6.0.0)
Hi there,
Since upgrading to openSUSE 15.2 (which includes libvirt 6.0.0) the
virtual guests don't get their RBD disks made available to them. On
openSUSE 15.1 (which includes libvirt 5.1.0) that worked fine. The XML
is as follows:
<domain type='xen' id='7'>
<name>mytwotel-a</name>
<uuid>a56daa5d-c095-49d5-ae1b-00b38353614e</uuid>
2013 Oct 31
1
Fwd: libvirt unsupport rbd storage pool? "missing backend for pool type 8"
I use "virsh pool-define rbd.xml" to create a rbd storage pool,but get this
error
virsh pool-define /tmp/rbd.xml
error: Failed to define pool from /tmp/rbd.xml
error: internal error: missing backend for pool type 8
rbd.xml
<pool type="rbd">
<name>cloudstack</name>
<source>
<name>cloudstack</name>
<host
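That error generally means this libvirt build lacks the RBD storage backend (libvirt has to be compiled with rbd support for pool type "rbd" to work), rather than a problem with the XML itself. For comparison, a complete rbd pool definition usually looks roughly like this (the monitor host, auth username and secret UUID here are placeholders):
<pool type="rbd">
  <name>cloudstack</name>
  <source>
    <name>cloudstack</name>
    <host name="ceph-mon.example.com" port="6789"/>
    <auth username="admin" type="ceph">
      <secret uuid="00000000-0000-0000-0000-000000000000"/>
    </auth>
  </source>
</pool>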
2013 Nov 07
4
Re: RBD images locking
Eric,
Well, in the case where several servers may start the same virtual machine after a reboot, for example.
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-August/003887.html
I've seen this hook here: http://www.wogri.at/en/linux/ceph-libvirt-locking/
But it's a hook...
Yes, I may try to write a patch. My coding skills are surely not as good as yours but I'd be glad to make
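For reference, ceph itself already exposes advisory image locks through the rbd CLI, which is roughly what a hook like the one above would drive; a minimal sketch (the pool/image and lock id are placeholders, and the exact subcommand spelling can vary between ceph releases):
rbd lock add vmpool/vm01.disk vm01-lock                   # take an advisory lock before starting the guest
rbd lock list vmpool/vm01.disk                            # shows the lock id and the locker
rbd lock remove vmpool/vm01.disk vm01-lock client.4123    # release it; the locker comes from 'lock list'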
2013 Nov 07
2
RBD images locking
Hi,
One short question: is there any chance to see locks on rbd/images in the next release?
Thank you :)
2014 Mar 14
5
Re: --rbd volume access--
On Fri, Mar 14, 2014 at 12:47:08PM +0500, Shumaila Naeem wrote:
> Also I can't use the newest version of libguestfs as I'm on a production
> system and could not use it due to some limitations.
You can use the new version without installing it. However, you would
need to be able to install a compiler and other dependencies, which
may not be possible on a production
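Presumably this refers to building libguestfs from git and using its ./run wrapper, which runs the freshly built tools without installing anything; roughly (a sketch, not verified on the poster's system):
git clone https://github.com/libguestfs/libguestfs
cd libguestfs
./autogen.sh && make
./run guestfish --version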
2018 Aug 06
0
Re: ceph rbd pool and libvirt manageability (virt-install)
Hello everybody,
virt-install --version
1.4.0
How do I create a ceph network disk with virt-install without having to
edit the XML afterwards?
<disk type='network' device='disk'>
<driver name='qemu' type='raw'/>
<auth username='libvirt'>
<secret type='ceph' uuid='ec9be0c4-a60f-490e-af83-f0f27aaf48c9'/>
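One approach that is often suggested (an untested sketch; it assumes an rbd-backed storage pool named "rbdpool" is already defined in libvirt and the image already exists in it) is to point virt-install at the pool volume instead of hand-editing the XML:
virt-install --name myguest --memory 2048 --vcpus 2 \
  --disk vol=rbdpool/myguest.disk,bus=virtio \
  --import --network default
Whether this fills in the ceph auth secret automatically depends on the virt-install and libvirt versions involved, so the generated XML is worth checking with virsh dumpxml.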
2013 Nov 07
0
Re: RBD images locking
On 11/07/2013 07:56 AM, NEVEU Stephane wrote:
> Hi,
>
> One short question: is there any chance to see locks on rbd/images in the next release?
What exactly are you looking for? Are you willing to contribute the
patches?
--
Eric Blake eblake redhat com +1-919-301-3266
Libvirt virtualization library http://libvirt.org
2019 Apr 26
2
Libvirt pool cannot see or create rbd clones
Hello everyone,
To increase my odds of finding an answer I also wanted to ask here.
This is my post from serverfault[1], verbatim:
While trying to get a cloned disk running from my OS snapshot I ran
into the problem that Libvirt cannot see existing images cloned from a
snapshot. They were created via:
# rbd -p vmdisks clone vmdisks/coreos_2023@base vmdisks/coreos00.disk
The base image has the one
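One thing that often matters in this situation: a libvirt storage pool caches its volume list, so after cloning an image with the rbd CLI the pool usually has to be refreshed before libvirt can see the new volume, e.g.:
virsh pool-refresh vmdisks
virsh vol-list vmdisks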
2014 Jan 15
2
Ceph RBD locking for libvirt-managed LXC (someday) live migrations
Hi,
I'm trying to build an active/active virtualization cluster using a Ceph
RBD as backing for each libvirt-managed LXC. I know live migration for LXC
isn't yet possible, but I'd like to build my infrastructure as if it were.
That is, I would like to be sure proper locking is in place for live
migrations to someday take place. In other words, I'm building things as
if I were
2019 Jun 21
0
Intermittent live migration hang with ceph RBD attached volume
Software in use:
Source hypervisor: Qemu: stable-2.12 branch; Libvirt: v3.2-maint branch; OS: CentOS 6
Destination hypervisor: Qemu: stable-2.12 branch; Libvirt: v4.9-maint branch; OS: CentOS 7
I'm experiencing an intermittent live migration hang of a virtual machine
(KVM) with a ceph RBD volume attached.
At a high level, what I see is that when this does happen, the virtual
2020 Jan 23
0
Re: qemu hook: event for source host too
So, how likely is it that this feature (two new events for the qemu
hook) could be added?
On 22/01/2020 at 10:56, Guy Godfroy wrote:
> That's right, I also need that second hook event.
>
> For your information, for now I manage locks manually or via Ansible.
> To make the hook manage locks, I still need to find a secure way to
> run LVM commands from a non-root account, but
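For background, the hook interface itself is just an executable at /etc/libvirt/hooks/qemu that libvirtd calls with the guest name and an operation name, so a lock-managing hook is typically a small script along these lines (a sketch only; acquire_lock/release_lock are placeholders for whatever locking mechanism is used, and the missing source-host migration events are exactly what this thread is asking for):
#!/bin/sh
# /etc/libvirt/hooks/qemu  <guest_name> <operation> <sub-operation> <extra>
guest="$1"
op="$2"
case "$op" in
    prepare)            # runs on a host before the guest (or an incoming migration) starts
        acquire_lock "$guest"
        ;;
    release)            # runs once the guest has fully stopped on this host
        release_lock "$guest"
        ;;
esac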
2020 Jan 22
2
Re: qemu hook: event for source host too
That's right, I also need that second hook event.
For your information, for now I manage locks manually or via Ansible. To make the hook manage locks, I still need to find a secure way to run LVM commands from a non-root account, but this is another problem.
On 22 January 2020 at 10:24:53 GMT+01:00, Michal Privoznik <mprivozn@redhat.com> wrote:
>On 1/22/20 9:23 AM, Guy Godfroy