
Displaying 20 results from an estimated 600 matches similar to: "RBD volume not made available to Xen virtual guest on openSUSE 15.2 (with libvirt 6.0.0)"

2020 Oct 26
1
Re: RBD volume not made available to Xen virtual guest on openSUSE 15.2 (with libvirt 6.0.0)
It's QEMU 4.2.1-lp152.9.6.1. I've tried updating it from the Open Build Service repos but there are too many version conflicts. Marcel On 26/10/20 9:02 pm, Ján Tomko wrote: > On a Friday in 2020, Marcel Juffermans wrote: >> Hi there, >> >> Since upgrading to openSUSE 15.2 (which includes libvirt 6.0.0) the >> virtual guests don't get their RBD disks
2020 Oct 27
0
Re: RBD volume not made available to Xen virtual guest on openSUSE 15.2 (with libvirt 6.0.0)
Thanks Jim, Looking at the logs for the working and non-working setups, the command line for QEMU is identical and the QMP commands are almost the same: they both do "query-chardev" and "query-vnc", and the working setup does an additional "qmp_capabilities", which is likely not relevant. So I guess it must be in QEMU - I'll head over to the bug tracker. Thanks
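For anyone hitting the same symptom, a minimal sketch (libvirt python bindings; the connection URI and domain name are placeholders) of confirming that the RBD disk is at least present in the live domain XML before blaming QEMU:

```python
# Sketch: check the live domain XML for an RBD-backed <disk>.
# Works for Xen/libxl as well as QEMU/KVM guests.
import xml.etree.ElementTree as ET
import libvirt

conn = libvirt.openReadOnly('xen:///system')      # placeholder URI
dom = conn.lookupByName('my-xen-guest')           # placeholder domain name
root = ET.fromstring(dom.XMLDesc(0))
for disk in root.findall('./devices/disk'):
    src = disk.find('source')
    if src is not None and src.get('protocol') == 'rbd':
        print('RBD disk present in live XML:', src.get('name'))
conn.close()
```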
2020 Oct 26
0
Re: RBD volume not made available to Xen virtual guest on openSUSE 15.2 (with libvirt 6.0.0)
On a Friday in 2020, Marcel Juffermans wrote: >Hi there, > >Since upgrading to openSUSE 15.2 (which includes libvirt 6.0.0) the >virtual guests don't get their RBD disks made available to them. On >openSUSE 15.1 (which includes libvirt 5.1.0) that worked fine. The XML >is as follows: > [...] >I tried to strace libvirtd. The results are as follows: > >On
2012 Sep 18
1
libvirt 0.10 and cephx
Hello, The current version adds 'auth_supported=none' at the end of the disk path and fails to authenticate with the specified cephx key: <source protocol='rbd' name='rbd/vmxxxxxxxxx:id=qemukvm:key=[cut]==:auth_supported=cephx'> <host name='10.0.0.13' port='6789'/> <host name='10.0.0.10' port='6789'/>
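For comparison, a minimal sketch (current libvirt python bindings; the secret UUID, domain name and target device are placeholders, pool/image and monitors are taken from the snippet above) of attaching an RBD disk that authenticates through a libvirt secret via an <auth> element instead of embedding the cephx key in the source name:

```python
# Sketch: RBD disk using a libvirt 'ceph' secret for cephx auth.
import libvirt

DISK_XML = """
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='rbd/vmxxxxxxxxx'>
    <host name='10.0.0.13' port='6789'/>
    <host name='10.0.0.10' port='6789'/>
  </source>
  <auth username='qemukvm'>
    <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
  </auth>
  <target dev='vdb' bus='virtio'/>
</disk>
"""

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('vmxxxxxxxxx')            # placeholder domain name
dom.attachDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)
conn.close()
```

The cephx key itself is assumed to already live in a libvirt secret (virsh secret-define / secret-set-value) whose UUID is referenced above.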
2017 Aug 25
1
external snapshot is missing object secrets
Hello, I have virtual machines running with a ceph storage backend. When creating an external qcow2 snapshot with a libvirt version without support for the new object secret passing, the backing file info would list the ceph secret in plain text, e.g. # virsh snapshot-create-as vm-123 --no-metadata --disk-only --diskspec sda,file=/var/lib/libvirt/qemu/snapshot/vm-123-wrapper.qcow2 # qemu-img info
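For reference, a minimal sketch of the same disk-only, no-metadata external snapshot through the libvirt python bindings, reusing the domain name and wrapper path from the quoted virsh command:

```python
# Sketch: external disk-only snapshot, equivalent to the virsh call above.
import libvirt

SNAP_XML = """
<domainsnapshot>
  <disks>
    <disk name='sda' snapshot='external'>
      <source file='/var/lib/libvirt/qemu/snapshot/vm-123-wrapper.qcow2'/>
    </disk>
  </disks>
</domainsnapshot>
"""

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('vm-123')
flags = (libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY
         | libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_NO_METADATA)
dom.snapshotCreateXML(SNAP_XML, flags)
conn.close()
```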
2018 May 27
1
Using libvirt to access Ceph RBDs with Xen
Hi everybody, my background: I have been doing Xen for 10+ years, many of them with DRBD for high availability; for some time now I have preferred GlusterFS with FUSE as replicated storage, where I place the image files for the VMs. In my current project we started (successfully) with Xen/GlusterFS too, but because the provider where we placed the servers uses Ceph widely, we decided to
2014 Feb 11
2
Can you verify currently defined libvirt secret provides valid Cephx auth?
As the subject suggests, I am wondering if it's possible to verify whether the currently defined libvirt secret provides valid authentication via Cephx to a Ceph cluster. I ask because ideally I would like to verify that the given cephx credentials in my libvirt secret are valid before I even attempt the virsh attach-device on the domain. I tried searching for a solution to this, but I can't seem
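One way to do such a pre-flight check is to open a short-lived RADOS connection with the same credentials; a minimal sketch assuming the python rados bindings are installed, with placeholder monitor address, user name and key:

```python
# Sketch: probe the cluster so bad cephx credentials fail fast here
# instead of during virsh attach-device.
import rados

def cephx_credentials_work(mon_host, user, key, timeout=5):
    cluster = rados.Rados(rados_id=user,
                          conf={'mon_host': mon_host, 'key': key})
    try:
        cluster.connect(timeout=timeout)   # raises on auth/connection failure
        return True
    except rados.Error:
        return False
    finally:
        cluster.shutdown()

if cephx_credentials_work('10.0.0.13', 'qemukvm', 'AQD...placeholder...=='):
    print('cephx auth OK - safe to run virsh attach-device')
```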
2015 Jan 08
0
Libvirt guest can't boot up when use ceph as storage backend with Selinux enabled
Hi there, I ran into a problem where the guest fails to boot when SELinux is enabled and the guest storage is backed by Ceph. However, I can boot the guest with QEMU directly, and I can also boot it with SELinux disabled. I am not sure whether this is a libvirt bug or a wrong use case. 1. Enable SELinux # getenforce && iptables -L Enforcing Chain INPUT (policy ACCEPT) target prot opt source destination
2015 Oct 05
0
Storage statistics
Hi all, I'd like to get a few storage statistics from a running VM: read and write figures for QCOW2 devices and for RBD volumes. I tried to execute a command similar to this: virsh qemu-monitor-command vm-vnc '{ "execute": "query-block"}' as described here:
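Besides raw QMP, the libvirt python bindings expose per-device read/write counters via virDomainBlockStats, which covers qcow2 files and RBD volumes alike; a minimal sketch, with the domain name taken from the command above and placeholder device names:

```python
# Sketch: per-device I/O counters for a running guest.
import libvirt

conn = libvirt.openReadOnly('qemu:///system')
dom = conn.lookupByName('vm-vnc')
for dev in ('vda', 'vdb'):                        # placeholder target devices
    rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
    print(f'{dev}: read {rd_bytes} bytes ({rd_req} reqs), '
          f'wrote {wr_bytes} bytes ({wr_req} reqs), errors {errs}')
conn.close()
```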
2018 Feb 27
2
Fail in virDomainUpdateDeviceFlags (libvirt-4.0.0 + Qemu-kvm 2.9.0 + Ceph 10.2.10)
Hello Everyone, My PC runs CentOS 7.4 with libvirt-4.0.0 + qemu-kvm 2.9.0 + Ceph 10.2.10 installed all-in-one. I use the libvirt python-sdk and run [self.domain.updateDeviceFlags(xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)] on the CDROM (I want to change the media path). However, when I enable the libvirt debug log, the log is as below: "2018-02-26 13:09:13.638+0000: 50524: debug :
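For reference, a minimal sketch of the media-change pattern being described (libvirt python bindings; the domain name, target device and ISO path are placeholders, and the <target> must match the guest's existing cdrom definition):

```python
# Sketch: swap cdrom media by updating the device with a new <source>.
import libvirt

CDROM_XML = """
<disk type='file' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source file='/var/lib/libvirt/images/new-media.iso'/>
  <target dev='hda' bus='ide'/>
  <readonly/>
</disk>
"""

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('instance-00000001')      # placeholder domain name
dom.updateDeviceFlags(CDROM_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)
conn.close()
```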
2018 Mar 02
2
Re: Fail in virDomainUpdateDeviceFlags (libvirt-4.0.0 + Qemu-kvm 2.9.0 + Ceph 10.2.10)
On Tue, Feb 27, 2018 at 09:53:00 +0100, Michal Privoznik wrote: > On 02/27/2018 03:06 AM, Star Guo wrote: > > Hello Everyone, > > > > > > > > My pc run in CentOS 7.4 and install libvirt-4.0.0 + Qemu-kvm 2.9.0 + Ceph > > 10.2.10 ALL-in-One. > > > > > > > > I use python-sdk with libvirt and run [self.domain.updateDeviceFlags(xml,
2014 Feb 11
0
Re: Can you verify currently defined libvirt secret provides valid Cephx auth?
I know I would still have to provide the Mon addresses and Cephx user to go with the secret UUID; that's fine. The primary goal is to see whether I can open a RADOS connection to the cluster before I try a virsh attach-device. On Tue, Feb 11, 2014 at 9:37 AM, Scott Sullivan < scottgregorysullivan@gmail.com> wrote: > As the subject suggests, I am wondering if its possible to
2018 Feb 19
2
Migration from 3.6.25-0ubuntu0.12.04.10 to 4.x with passdb backend = ldapsam
Hi. I tried to migrate my storage (smb) server to a newer version, but ran into segfaults after (or during) client authentication, when Samba tries to start a new smbd instance (as I understand it). I saw the client authentication succeed, after which it is interrupted in the following places: In the case with
2014 Jul 19
1
Re: i686 guest failing to start at 50a2c45 (and earlier versions of 2.1-rc) with pc-i440fx-2.1
On Fri, Jul 18, 2014 at 11:58 PM, Andrey Korolyov <andrey@xdel.ru> wrote: > Hello, > > 2.0 model works fine > > 2.1 crashes with following: > > /tmp/buildd/qemu-2.0.92+rev1/hw/i386/smbios.c:825: smbios_get_tables: > Assertion `smbios_smp_sockets >= 1' failed > > Not sure if bisect will help much, but the commit which introduced > this platform works
2018 Feb 20
2
Migration from 3.6.25-0ubuntu0.12.04.10 to 4.x with passdb backend = ldapsam
Sure.
```
[global]
workgroup = EXAMPLE
server string =
dns proxy = no
interfaces = eth0
bind interfaces only = yes
log file = /var/log/samba/log.%m
max log size = 1000
# new options
log level = 5
netbios name = FILES
#panic action = /usr/share/samba/panic-action %d
server role = STANDALONE SERVER
local master = no
security = user
encrypt passwords =
```
2018 Mar 05
2
Re: Fail in virDomainUpdateDeviceFlags (libvirt-4.0.0 + Qemu-kvm 2.9.0 + Ceph 10.2.10)
On Fri, Mar 02, 2018 at 15:32:44 -0500, John Ferlan wrote: > > > On 03/02/2018 08:28 AM, Peter Krempa wrote: > > On Tue, Feb 27, 2018 at 09:53:00 +0100, Michal Privoznik wrote: > >> On 02/27/2018 03:06 AM, Star Guo wrote: > >>> Hello Everyone, > >>> > >>> > >>> > >>> My pc run in CentOS 7.4 and install
2018 Feb 02
1
failed to update cdrom device with rbd disk
Hello, I'm trying to use virsh update-device to update the CDROM from type='file' to a Ceph RBD ISO with type='network'. But I always get this error: Failed to update device from disk error: internal error: unable to execute QEMU command 'change': error connecting: Operation not supported I'm using libvirt-libs-3.10.0-1.el7.x86_64 on CentOS 7.4. My original cdrom xml:
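For reference, a sketch of what a network-type cdrom pointing at an RBD-hosted ISO can look like (libvirt python bindings; pool/image, monitor address, secret UUID and domain name are placeholders). Note that, as the error above shows, some QEMU/libvirt combinations of that era still refuse the live media change to a network source even when the XML itself is fine:

```python
# Sketch: cdrom whose media is an ISO stored in an RBD pool.
import libvirt

RBD_CDROM_XML = """
<disk type='network' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='isos/centos7.iso'>
    <host name='10.0.0.13' port='6789'/>
  </source>
  <auth username='libvirt'>
    <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
  </auth>
  <target dev='hda' bus='ide'/>
  <readonly/>
</disk>
"""

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('my-guest')               # placeholder domain name
dom.updateDeviceFlags(RBD_CDROM_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)
conn.close()
```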
2013 Oct 04
1
Retry interval for attempting to set up a tunnel
Hi, We set up tinc tunnels over 3G when the connection becomes available. It is a mobile environment so connections come and go frequently. We send tinc an ALRM signal to retry a connection, but somehow this fails once in a while. Is there a way to influence the retry interval for connections that are down and the interval increment? We would like to be able to set this to 1 minute after ALRM.
2012 Jun 26
1
Segmentation fault with latest 1.1 revision
Hello, I am trying the 1.1 branch and I experience a segmentation fault upon an ALRM signal. This looks like a race condition. I have my tincd daemon instantiated manually in if-up.d/jmuchemb (without IF_TINC_NET), and when if-up.d/tinc runs, it sends an ALRM signal that makes tincd crash. It fails here: Core was generated by `tincd -D -n jmuchemb -d -o ConnectTo srv -o srv.Address 81.x.y.z -o
2013 May 07
7
[PATCH 0/5] rbd improvements
This series improves Ceph RBD support in libguestfs. It uses the servers list, adds support for a custom username, and starts to add support for a custom secret.
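As a usage illustration of those features (server list, username, secret) with the current libguestfs python bindings, a minimal sketch; pool/image, monitors, user and key are placeholders:

```python
# Sketch: open an RBD image with libguestfs, passing monitors, a custom
# username and a cephx secret.
import guestfs

g = guestfs.GuestFS(python_return_dict=True)
g.add_drive_opts('rbd/vmimage', format='raw', readonly=True, protocol='rbd',
                 server=['10.0.0.13:6789', '10.0.0.10:6789'],
                 username='qemukvm', secret='AQD...placeholder...==')
g.launch()
print(g.list_devices())                           # e.g. ['/dev/sda']
g.shutdown()
g.close()
```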