
Displaying 20 results from an estimated 3000 matches similar to: "couple of questions"

2020 Aug 17
0
Re: couple of questions
On Sun, Aug 16, 2020 at 22:43:30 -0700, Vjaceslavs Klimovs wrote:
> Hey folks,
> I've been experimenting with native NBD live migration w/ TLS and have
> a couple of questions.
>
> 1) It appears that in some cases a modified default_tls_x509_cert_dir
> from qemu.conf is not respected; it seems like virsh always expects the
> default location and does not check
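
For context, the TLS knobs involved live in /etc/libvirt/qemu.conf on each host; a minimal sketch, assuming the standard option names (paths, guest name, and host name below are examples, not taken from the thread):

    # /etc/libvirt/qemu.conf (destination host)
    default_tls_x509_cert_dir = "/etc/pki/qemu"             # fallback for all TLS uses
    migrate_tls_x509_cert_dir = "/etc/pki/libvirt-migrate"  # migration-specific override

    # TLS for the migration/NBD channels is requested with --tls:
    virsh migrate guest1 qemu+tls://dst.lan/system --live --copy-storage-all --tls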
2020 Nov 19
1
unable to migrate when TLS is used
With libvirt 6.9.0, qemu 5.1.0, and the following configuration:

libvirt:
    key_file = "/etc/ssl/libvirt/server.lan.key"
    cert_file = "/etc/ssl/libvirt/server.lan.crt"
    ca_file = "/etc/ssl/libvirt/ca.crt"
    log_filters="3:remote 4:event 3:util.json 3:rpc 1:*"
    log_outputs="1:file:/var/log/libvirt/libvirtd.log"

qemu:
    default_tls_x509_cert_dir =
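
Note that qemu, unlike libvirtd, does not take individual key/cert paths: it looks for fixed file names inside the configured directory. A sketch of the layout it expects (the directory path is an example):

    /etc/pki/qemu/ca-cert.pem      # CA certificate
    /etc/pki/qemu/server-cert.pem  # server certificate
    /etc/pki/qemu/server-key.pem   # server private key
    /etc/pki/qemu/client-cert.pem  # client cert, for outgoing connections
    /etc/pki/qemu/client-key.pem   # client key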
2020 Aug 18
1
Re: unable to migrate non shared storage in tunneled mode
On Mon, Aug 17, 2020 at 3:24 AM Peter Krempa <pkrempa@redhat.com> wrote:
>
> On Sat, Aug 15, 2020 at 15:38:19 -0700, Vjaceslavs Klimovs wrote:
> > Hey all,
> > With libvirt 6.5.0 and qemu 5.1.0 migration of non shared disks in
> > tunneled mode does not work for me:
> >
> > virsh # migrate alpinelinux3.8 qemu+tls://ratchet.lan/system --live
> >
2020 Oct 12
3
unable to migrate: virPortAllocatorSetUsed:299 : internal error: Failed to reserve port 49153
On libvirt 6.8.0 and qemu 5.1.0, when trying to live migrate, an "internal error: Failed to reserve port" error is received and the migration does not succeed:

virsh # migrate cartridge qemu+tls://ratchet.lan/system --live --persistent --undefinesource --copy-storage-all --verbose
error: internal error: Failed to reserve port 49153
virsh #

On the target host with debug logs, nothing
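
Port 49153 falls inside libvirt's default migration port range. A couple of things worth checking on the destination, offered only as a sketch (the qemu.conf settings shown are the standard names with their default values):

    # is something else already holding the port?
    ss -tlnp | grep 49153

    # /etc/libvirt/qemu.conf - the range migration/NBD ports are allocated from
    migration_port_min = 49152
    migration_port_max = 49215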
2020 Aug 15
2
unable to migrate non shared storage in tunneled mode
Hey all,
With libvirt 6.5.0 and qemu 5.1.0, migration of non-shared disks in tunneled mode does not work for me:

virsh # migrate alpinelinux3.8 qemu+tls://ratchet.lan/system --live --persistent --undefinesource --copy-storage-all --tunneled --p2p
error: internal error: qemu unexpectedly closed the monitor: Receiving block device images
Error unknown block device
2020-08-15T21:21:48.995016Z
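
One variant worth comparing against, shown purely as a sketch (not a confirmed fix): the same migration without --tunneled/--p2p, which lets libvirt use NBD-based drive mirroring rather than the older tunneled block migration:

    virsh # migrate alpinelinux3.8 qemu+tls://ratchet.lan/system --live \
          --persistent --undefinesource --copy-storage-all --verbose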
2020 Oct 26
1
Re: unable to migrate: virPortAllocatorSetUsed:299 : internal error: Failed to reserve port 49153
On 10/26/20 9:39 AM, Michal Privoznik wrote:
> On 10/12/20 4:46 AM, Vjaceslavs Klimovs wrote:
>> On libvirt 6.8.0 and qemu 5.1.0, when trying to live migrate "error:
>> internal error: Failed to reserve port" error is received and
>> migration does not succeed:
>>
>> virsh # migrate cartridge qemu+tls://ratchet.lan/system --live
>> --persistent
2020 Aug 17
0
Re: unable to migrate non shared storage in tunneled mode
On Sat, Aug 15, 2020 at 15:38:19 -0700, Vjaceslavs Klimovs wrote:
> Hey all,
> With libvirt 6.5.0 and qemu 5.1.0 migration of non shared disks in
> tunneled mode does not work for me:
>
> virsh # migrate alpinelinux3.8 qemu+tls://ratchet.lan/system --live
> --persistent --undefinesource --copy-storage-all --tunneled --p2p
> error: internal error: qemu unexpectedly closed the
2020 Oct 26
0
Re: unable to migrate: virPortAllocatorSetUsed:299 : internal error: Failed to reserve port 49153
On 10/12/20 4:46 AM, Vjaceslavs Klimovs wrote:
> On libvirt 6.8.0 and qemu 5.1.0, when trying to live migrate "error:
> internal error: Failed to reserve port" error is received and
> migration does not succeed:
>
> virsh # migrate cartridge qemu+tls://ratchet.lan/system --live
> --persistent --undefinesource --copy-storage-all --verbose
> error: internal error:
2018 Jun 25
2
[PATCH nbdkit] tls: Implement Pre-Shared Keys (PSK) authentication.
This is ready for review but needs a bit more real-world testing before I'd be happy about it going upstream. It also needs tests. It does interoperate with qemu, at least in my limited tests. Rich.
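
For readers unfamiliar with the PSK flow being tested, a rough sketch of what qemu interop looks like (file names, port, and username are examples; --tls-psk is the option this patch introduces, and qemu's tls-creds-psk object expects the key file to be named keys.psk):

    # create a pre-shared key with GnuTLS's psktool
    mkdir -p /tmp/psk && psktool -u alice -p /tmp/psk/keys.psk

    # serve a disk image, requiring TLS with PSK authentication
    nbdkit --tls=require --tls-psk=/tmp/psk/keys.psk file file=disk.img

    # connect with qemu as the client
    qemu-img info \
        --object tls-creds-psk,id=tls0,dir=/tmp/psk,username=alice,endpoint=client \
        --image-opts driver=nbd,host=localhost,port=10809,tls-creds=tls0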
2018 Jun 25
1
[PATCH v2 nbdkit] tls: Implement Pre-Shared Keys (PSK)
v2:
 * Improved documentation.
 * Added a test (interop with qemu client).
2023 Sep 10
2
Question about encryption and tls
(Posted a few days ago on the qemu list, but no reactions.)
Do I understand correctly that TLS should be configured independently for libvirt and each hypervisor? I ask because I configured the libvirt connection as qemu+tls://bambus.kjonca/system?pkipath=... and on bambus (in /etc/libvirt/libvirtd.conf) I set:

    key_file = ...
    cert_file = ...
    ca_file = ...

But after connecting and launching (on bambus) a VM I
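
Broadly, yes: those are two separate layers. The pkipath URI parameter and libvirtd.conf's key_file/cert_file/ca_file only secure the libvirt connection itself; TLS used by the qemu processes (migration, NBD, VNC/SPICE) is configured separately in qemu.conf. A sketch of the two layers (paths are examples):

    # layer 1: the libvirt remote connection
    virsh -c 'qemu+tls://bambus.kjonca/system?pkipath=/home/user/pki'

    # layer 2: TLS for qemu itself, in /etc/libvirt/qemu.conf on the host
    default_tls_x509_cert_dir = "/etc/pki/qemu"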
2015 Mar 31
0
Re: couple of ceph/rbd questions
On 03/31/2015 11:47 AM, Brian Kroth wrote:
> Hi, I've recently been working on setting up a set of libvirt compute
> nodes that will be using a ceph rbd pool for storing vm disk image
> files. I've got a couple of issues I've run into.
>
> First, per the standard ceph documentation examples [1], the way to add
> a disk is to create a block in the VM definition XML
2018 Jun 14
4
[PATCH nbdkit 0/2] Fix a couple of problems found by Coverity.
There are a few other issues that Coverity found, but I believe all can be ignored ... except one: we don't set umask anywhere inside nbdkit. Coverity complains that this is a problem where we create temporary files, since the result of mkstemp depends implicitly on the umask value. I think we might consider setting umask anyway (e.g. to 022) just to make plugin behaviour more predictable.
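
Until that is decided, the behaviour can be pinned down from the outside; a trivial sketch (the path is an example):

    # files created by nbdkit or its plugins inherit the process umask,
    # so set it explicitly before starting the server
    umask 022
    nbdkit file file=/var/tmp/disk.img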
2015 Mar 31
2
couple of ceph/rbd questions
Hi, I've recently been working on setting up a set of libvirt compute nodes that will be using a ceph rbd pool for storing VM disk image files. I've got a couple of issues I've run into.

First, per the standard ceph documentation examples [1], the way to add a disk is to create a block in the VM definition XML that looks something like this:

<disk type='network'
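
(The snippet is cut off above; for reference, a complete rbd disk block typically looks roughly like the following, where the pool/image name, monitor host, and secret UUID are all placeholders:)

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='rbd' name='libvirt-pool/image1'>
        <host name='ceph-mon1.example.org' port='6789'/>
      </source>
      <auth username='libvirt'>
        <secret type='ceph' uuid='UUID-OF-CEPH-SECRET'/>
      </auth>
      <target dev='vda' bus='virtio'/>
    </disk>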
2017 Nov 03
4
corrupted db after upgrading to 4.7
Hi Maxence,

> FYI, I've updated to 4.7.1; the dbcheck still does not fix the broken links.
> Is the fix you talked about planned for a future release?
>
> Our customer reported to me that some users have issues when their logon
> server is DC1 but not when it's DC2.
>
> On DC1 some users have access to all shares, some don't have any
> access at all.

actually this last
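
For anyone landing here with similar symptoms, the usual dbcheck invocations look like this (a sketch; run on each DC, and back up the databases first):

    # report problems across all naming contexts
    samba-tool dbcheck --cross-ncs

    # attempt automatic repair without prompting
    samba-tool dbcheck --cross-ncs --fix --yes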
2019 Feb 10
2
virsh migrate --copy-storage-inc
Hello,
I use libvirt on machines without shared storage. My VMs all have one qcow2 disk, with the same name as the VM. When I want to migrate a VM, I check if there is a qcow2 image on the other host with that name. When that's not the case, I copy the image using rsync first. If the image exists, I don't do that, and I think that "--copy-storage-inc" will do it. But I
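
That workflow, written out as a sketch (host and path names are examples; --copy-storage-inc needs an existing destination image to increment onto, which the rsync pre-copy provides):

    VM=guest1
    IMG=/var/lib/libvirt/images/$VM.qcow2
    DST=otherhost

    # pre-seed the image if the destination does not have it yet
    ssh "$DST" test -f "$IMG" || rsync -avP "$IMG" "$DST:$IMG"

    virsh migrate "$VM" "qemu+ssh://$DST/system" --live --persistent \
        --undefinesource --copy-storage-inc --verbose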
2015 Feb 24
1
libvirt 1.2.12 + xen 4.4 won't migrate
Hi,
We have been trying to get live migration working between two xen boxes running libvirt 1.2.12; however, I seem to be getting the following error:

root@libvirt-xen1:~# virsh migrate --live trusty-image qemu+ssh://192.168.13.9/system --copy-storage-all --verbose --persistent --undefinesource
root@192.168.13.9's password:
error: unsupported flags (0x48) in function
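
A hedged decoding of that mask, using the flag values from libvirt's public headers: 0x48 corresponds to --copy-storage-all plus --persistent, which the Xen driver of that era evidently rejected:

    # virDomainMigrateFlags (libvirt-domain.h):
    #   VIR_MIGRATE_PERSIST_DEST    = 1 << 3   # 0x08  <-- --persistent
    #   VIR_MIGRATE_NON_SHARED_DISK = 1 << 6   # 0x40  <-- --copy-storage-all
    #   0x08 | 0x40 = 0x48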
2014 Jan 19
3
Should domain be undefined after migration, or not?
I have been running a lab using libvirt under Debian Wheezy (libvirt 0.9.12.3-1, qemu-kvm 1.1.2+dfsg-6, virt-manager 0.9.1-4). There are a number of machines as front-end servers and an nbd shared storage backend. When I live-migrate a domain from one machine to another, normally I observe that afterwards the domain remains on the source host (but in "shutdown" state), as well as on
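
For reference, the observed behaviour matches libvirt's defaults: the source domain stays defined unless --undefinesource is passed, and the destination copy is transient unless --persistent is passed. A sketch of the "move, don't copy" invocation (domain and host names are examples):

    virsh migrate somedomain qemu+ssh://desthost/system --live \
        --persistent --undefinesource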
2017 Nov 06
2
corrupted db after upgrading to 4.7
On Mon, 6 Nov 2017 11:39:50 +0100 (CET) Maxence SARTIAUX via samba <samba at lists.samba.org> wrote:

> Hello.
>
> To follow up on this issue: since the upgrade, when I do a named reload
> it crashes; it looks like there are duplicated zones.
>
> Here's the log when I trigger a reload:
>
> nov 05 03:09:02 data.contoso.com named[2807]: received control
2016 Sep 21
6
PHP vulnerability CVE-2016-4073
Hello,
My server with CentOS 6.8 just failed a PCI scan, so I'm looking into vulnerable packages. PHP 5.3.3 has multiple vulnerabilities; some of them are fixed/patched or have some kind of workaround, but I can't find a way to fix this one. Red Hat's status: under investigation. https://access.redhat.com/security/cve/cve-2016-4073 This CVE is 6 months old, and it doesn't look like it