similar to: external snapshot is missing object secrets

Displaying 20 results from an estimated 500 matches similar to: "external snapshot is missing object secrets"

2012 Sep 18
1
libvirt 0.10 and cephx
Hello, the current version appends 'auth_supported=none' to the end of the disk path and fails to authenticate with the specified cephx key: <source protocol='rbd' name='rbd/vmxxxxxxxxx:id=qemukvm:key=[cut]==:auth_supported=cephx'> <host name='10.0.0.13' port='6789'/> <host name='10.0.0.10' port='6789'/>
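For context, libvirt's supported way to pass a cephx key is an <auth> element referencing a libvirt secret, rather than splicing id= and key= into the name attribute. A minimal sketch of the disk definition, assuming a ceph-type secret has already been created with virsh secret-define (the UUID below is a placeholder):

<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='rbd/vmxxxxxxxxx'>
    <host name='10.0.0.13' port='6789'/>
    <host name='10.0.0.10' port='6789'/>
  </source>
  <auth username='qemukvm'>
    <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
  </auth>
  <target dev='vda' bus='virtio'/>
</disk>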
2018 May 27
1
Using libvirt to access Ceph RBDs with Xen
Hi everybody, my background: I've been doing Xen for 10+ years, many of them with DRBD for high availability; for some time now I've preferred GlusterFS with FUSE as replicated storage, where I place the image files for the VMs. In my current project we started (successfully) with Xen/GlusterFS too, but because the provider where we placed the servers uses Ceph widely, we decided to
2019 Apr 01
2
Re: guestfish Remote Images IPv6 Support
I believe the bug lies in libguestfs. Taking the command being sent to QEMU and running qemu-img info directly, I can recreate the error: # qemu-img info "rbd:images/CentOS-7-x86_64-GenericCloud-1901:mon_host=[fd00::cefc:1]\:6789:auth_supported=none" qemu-img: Could not open 'rbd:images/CentOS-7-x86_64-GenericCloud-1901:mon_host=[fd00::cefc:1]\:6789:auth_supported=none': invalid
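One workaround sketch that avoids the colon parsing altogether (the config file path here is arbitrary): put the monitor address in a Ceph config file and reference it via the rbd driver's conf= option, so the option string no longer contains the bracketed IPv6 address at all:

cat > /tmp/ceph-v6.conf <<'EOF'
[global]
mon_host = [fd00::cefc:1]:6789
EOF
qemu-img info "rbd:images/CentOS-7-x86_64-GenericCloud-1901:conf=/tmp/ceph-v6.conf:auth_supported=none"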
2019 Apr 01
1
Re: guestfish Remote Images IPv6 Support
This worked wonderfully!  What are the odds of getting this upstream in the near future?  I'd rather not build from source in production. # ./run guestfish --format=raw --ro -a rbd://[fd00::cefc:1]:6789/images/CentOS-7-x86_64-GenericCloud-1901 libguestfs: trace: set_verbose true libguestfs: trace: set_verbose = 0 libguestfs: trace: set_tmpdir "/root/libguestfs/tmp" libguestfs:
2013 Nov 05
2
Problem using virt-sysprep with RBD images
Hello, I'm having a problem when trying to use virt-sysprep against VMs that have RBD disk images. When I run virt-sysprep I get the following error: >> root@kitt:~/libguestfs-1.22.4# virt-sysprep -d server-clone-test --firstboot firstboot.sh >> Examining the guest ... >> Fatal error: exception Guestfs.Error("rbd: image name must begin with a '/'") My
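The message comes from libguestfs's export-name check rather than from Ceph itself. As a hedged sketch for anyone hitting the same wall (monitor address, pool, and image name are placeholders), later libguestfs releases, with the fixes discussed in this thread, let you hand the RBD image to virt-sysprep directly as a URI, bypassing the libvirt domain lookup:

virt-sysprep --format raw -a rbd://10.0.0.13:6789/rbd/server-clone-test --firstboot firstboot.sh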
2019 Apr 01
2
Re: guestfish Remote Images IPv6 Support
Unfortunately I do need to use the address explicitly as opposed to hostnames, because the source of the data fed here is Ceph's monmap, which returns the addresses explicitly. I've tried all the common ways to escape the : in the v6 address, to no avail. I definitely agree that the problem looks to be that it parses the colons as if the port comes next, and then everything after that is
2013 Nov 25
4
Problem Connecting to RBD images using Sys::Guestfs Perl Module
Hello, I'm having trouble connecting to RBD images. It seems like somewhere the name is getting chewed up, and I wonder if this is related to my previous troubles [1] [2] with RBD images. I'm trying to add a single RBD image, but when I launch the guestfs object I get an error: >> libguestfs: trace: launch = -1 (error) >> libguestfs: trace: add_drive
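For reference, a minimal Sys::Guestfs sketch of adding an RBD drive (pool, image, and monitor addresses are placeholders; this assumes no cephx auth, otherwise see add_drive's username and secret options):

use Sys::Guestfs;

my $g = Sys::Guestfs->new ();
# note: 'pool/image', with no leading slash
$g->add_drive ("rbd-pool/rbd-image",
               format   => "raw",
               protocol => "rbd",
               server   => ["10.0.0.13:6789", "10.0.0.10:6789"]);
$g->launch ();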
2019 Mar 29
2
guestfish Remote Images IPv6 Support
I have scoured the web and can't find anything on the topic: is IPv6 supported for remote image targets? For example: guestfish --format=raw --ro -a rbd://[fd00::cefc:1]:6789/images/CentOS-7-x86_64-GenericCloud-1901 does not work, citing the following: libguestfs: trace: set_verbose true libguestfs: trace: set_verbose = 0 libguestfs: create: flags = 0, handle = 0x5560231bdfb0, program =
2013 Nov 25
2
Re: Problem Connecting to RBD images using Sys::Guestfs Perl Module
On Mon, Nov 25, 2013 at 09:58:50PM +0000, Richard W.M. Jones wrote: > On Mon, Nov 25, 2013 at 12:52:21PM -0700, Jon wrote: > > Hello, > > > > I'm having trouble connecting to rbd images. It seems like somewhere the > > name is getting chewed up. I wonder if this is related to my previous > > troubles [1] [2] with rbd images. > > > > I'm
2013 Nov 06
3
Re: Problem using virt-sysprep with RBD images
Hello Richard, Haha, ok, here's a good one: I commented that if statement out at line 300, applied your patch (I see you updated the GitHub repo for this code; perhaps that's the best place to grab it from), and when I run virt-sysprep, I get the following parameter for my disk drive: >> qemu-system-x86_64: -drive
2013 Nov 06
2
Re: Problem using virt-sysprep with RBD images
Hello Richard, Thanks for the reply. Indeed this behaviour exists in 1.25.6. Grepping through the source [1], there are a number of files in "./po/*.po[t]?" that contain this message, but I think it's ./src/drives.c where the fail condition is actually detected and set. On line 300 there is an if statement that checks that the first character is a slash: if (exportname[0] !=
2018 Feb 27
2
Fail in virDomainUpdateDeviceFlags (libvirt-4.0.0 + Qemu-kvm 2.9.0 + Ceph 10.2.10)
Hello Everyone, my PC runs CentOS 7.4 with libvirt-4.0.0 + Qemu-kvm 2.9.0 + Ceph 10.2.10, all in one. I use the Python SDK with libvirt and run [self.domain.updateDeviceFlags(xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)] on the CDROM (I want to change the media path). However, with the libvirt debug log enabled, the log reads as below: "2018-02-26 13:09:13.638+0000: 50524: debug :
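To isolate whether the failure is in the Python binding or in libvirt itself, the same live media change can be attempted from the command line. A sketch, with the domain name, target device, and ISO path as placeholders:

cat > cdrom.xml <<'EOF'
<disk type='file' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source file='/var/lib/libvirt/images/new.iso'/>
  <target dev='hdc' bus='ide'/>
  <readonly/>
</disk>
EOF
virsh update-device vm-name cdrom.xml --live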
2015 Oct 05
0
Storage statistics
Hi all, I'd like to get a few storage statistics from a running VM: read and write data for QCOW2 devices and for RBD volumes. I tried executing a command similar to this: virsh qemu-monitor-command vm-vnc '{ "execute": "query-block"}' as described here:
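Two hedged alternatives for per-device read/write counters (the domain and device names below are placeholders): the higher-level virsh wrapper, and the QMP command that actually carries the byte counters rather than the device listing that query-block returns:

virsh domblkstat vm-vnc vda
virsh qemu-monitor-command vm-vnc '{ "execute": "query-blockstats" }'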
2013 May 07
7
[PATCH 0/5] rbd improvements
This series improves Ceph RBD support in libguestfs. It uses the servers list, adds support for a custom username, and starts to add support for a custom secret.
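Sketched against the Perl binding, the three knobs this series adds surface roughly like this (all values are placeholders, and the secret option is the part still in progress at this point in the series):

use Sys::Guestfs;

my $g = Sys::Guestfs->new ();
$g->add_drive ("rbd-pool/rbd-image",
               protocol => "rbd",
               server   => ["10.0.0.13:6789", "10.0.0.10:6789"],  # servers list
               username => "qemukvm",                             # custom username
               secret   => "AQD[cut]==");                         # custom secret
$g->launch ();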
2018 Feb 27
1
Reply: Fail in virDomainUpdateDeviceFlags (libvirt-4.0.0 + Qemu-kvm 2.9.0 + Ceph 10.2.10)
Dear Michal, after fixing the local libvirt master branch following your patch and building an RPM for CentOS 7.4, virDomainUpdateDeviceFlags behaves as below: ================================================ 2018-02-27 09:27:43.782+0000: 16656: debug : virDomainUpdateDeviceFlags:8326 : dom=0x7f2084000c50, (VM: name=6ec499397d594ef2a64fcfc938f38225, uuid=6ec49939-7d59-4ef2-a64f-cfc938f38225), xml=<disk
2015 Jan 08
0
Libvirt guest can't boot up when use ceph as storage backend with Selinux enabled
Hi there, I've hit a problem where a guest fails to boot when SELinux is enabled and the guest's storage is based on Ceph. However, I can boot the guest with qemu directly, and I can also boot it with SELinux disabled. I'm not sure whether it is a libvirt bug or a wrong use case. 1. Enable SELinux # getenforce && iptables -L Enforcing Chain INPUT (policy ACCEPT) target prot opt source destination
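Standard SELinux triage narrows this kind of failure down; as a sketch (the module name is arbitrary), confirm the AVC denial and, if needed, generate a local policy module as a stop-gap while deciding whether it is a policy bug:

ausearch -m avc -ts recent | audit2why
ausearch -m avc -ts recent | audit2allow -M svirt-ceph
semodule -i svirt-ceph.pp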
2018 Mar 05
2
Re: Fail in virDomainUpdateDeviceFlags (libvirt-4.0.0 + Qemu-kvm 2.9.0 + Ceph 10.2.10)
On Fri, Mar 02, 2018 at 15:32:44 -0500, John Ferlan wrote: > > > On 03/02/2018 08:28 AM, Peter Krempa wrote: > > On Tue, Feb 27, 2018 at 09:53:00 +0100, Michal Privoznik wrote: > >> On 02/27/2018 03:06 AM, Star Guo wrote: > >>> Hello Everyone, > >>> > >>> > >>> > >>> My pc run in CentOS 7.4 and install
2020 Oct 23
2
RBD volume not made available to Xen virtual guest on openSUSE 15.2 (with libvirt 6.0.0)
Hi there, Since upgrading to openSUSE 15.2 (which includes libvirt 6.0.0) the virtual guests don't get their RBD disks made available to them. On openSUSE 15.1 (which includes libvirt 5.1.0) that worked fine. The XML is as follows: <domain type='xen' id='7'> <name>mytwotel-a</name> <uuid>a56daa5d-c095-49d5-ae1b-00b38353614e</uuid>
2020 Oct 26
1
Re: RBD volume not made available to Xen virtual guest on openSUSE 15.2 (with libvirt 6.0.0)
It's QEMU 4.2.1-lp152.9.6.1. I've tried updating it from the Open Build Service repos but there are too many version conflicts. Marcel On 26/10/20 9:02 pm, Ján Tomko wrote: > On a Friday in 2020, Marcel Juffermans wrote: >> Hi there, >> >> Since upgrading to openSUSE 15.2 (which includes libvirt 6.0.0) the >> virtual guests don't get their RBD disks
2016 May 18
2
[PATCH v2 0/2] lib: qemu: Memoize qemu feature detection.
v1 -> v2: - Rebase on top of Pino's version work. Two patches went upstream; these are the two remaining patches. Note the generation number is still inside the qemu.stat file. We could put it in the filename; I have no particular preference. Rich.