Star Guo
2018-Feb-27 02:06 UTC
[libvirt-users] Fail in virDomainUpdateDeviceFlags (libvirt-4.0.0 + Qemu-kvm 2.9.0 + Ceph 10.2.10)
Hello Everyone,
My PC runs CentOS 7.4 with libvirt-4.0.0 + Qemu-kvm 2.9.0 + Ceph
10.2.10 installed all-in-one.
I use the libvirt Python bindings and call [self.domain.updateDeviceFlags(xml,
libvirt.VIR_DOMAIN_AFFECT_LIVE)] on a CDROM device (I want to change the media
path), but it fails. With libvirt debug logging enabled, the log is as below:
"2018-02-26 13:09:13.638+0000: 50524: debug : virDomainLookupByName:412 :
conn=0x7f7278000aa0, name=6ec499397d594ef2a64fcfc938f38225
2018-02-26 13:09:13.638+0000: 50515: debug : virDomainGetInfo:2431 :
dom=0x7f726c000c30, (VM: name=6ec499397d594ef2a64fcfc938f38225,
uuid=6ec49939-7d59-4ef2-a64f-cfc938f38225), info=0x7f72b9059b20
2018-02-26 13:09:13.638+0000: 50515: debug : qemuGetProcessInfo:1479 : Got
status for 71205/0 user=14674 sys=3627 cpu=5 rss=105105
2018-02-26 13:09:13.644+0000: 50519: debug : virDomainGetXMLDesc:2572 :
dom=0x7f7280002f20, (VM: name=6ec499397d594ef2a64fcfc938f38225,
uuid=6ec49939-7d59-4ef2-a64f-cfc938f38225), flags=0x0
2018-02-26 13:09:13.653+0000: 50516: debug : virDomainUpdateDeviceFlags:8326
: dom=0x7f7274000b90, (VM: name=6ec499397d594ef2a64fcfc938f38225,
uuid=6ec49939-7d59-4ef2-a64f-cfc938f38225), xml=<disk device="cdrom"
type="network"><source
name="zstack/08085a31f8c43f278ed2f649ee166b1f@08085a31f8c43f278ed2f649ee166b1f"
protocol="rbd"><host name="10.0.229.181" port="6789" /></source><auth
username="zstack"><secret type="ceph"
uuid="9b06bb70-dc13-4338-88fd-b0c72d5ab9e9" /></auth><target bus="ide"
dev="hdc" /><readonly /></disk>, flags=0x1
2018-02-26 13:09:13.653+0000: 50516: debug :
qemuDomainObjBeginJobInternal:4778 : Starting job: modify (vm=0x7f7294100af0
name=6ec499397d594ef2a64fcfc938f38225, current job=none async=none)
2018-02-26 13:09:13.653+0000: 50516: debug :
qemuDomainObjBeginJobInternal:4819 : Started job: modify (async=none
vm=0x7f7294100af0 name=6ec499397d594ef2a64fcfc938f38225)
2018-02-26 13:09:13.660+0000: 50516: debug : virQEMUCapsCacheLookup:5443 :
Returning caps 0x7f7294126ac0 for /usr/libexec/qemu-kvm
2018-02-26 13:09:13.664+0000: 50516: debug : virQEMUCapsCacheLookup:5443 :
Returning caps 0x7f7294126ac0 for /usr/libexec/qemu-kvm
2018-02-26 13:09:13.667+0000: 50516: debug : qemuSetupImageCgroupInternal:91
: Not updating cgroups for disk path
'08085a31f8c43f278ed2f649ee166b1f@08085a31f8c43f278ed2f649ee166b1f',
type: network
2018-02-26 13:09:13.667+0000: 50516: debug :
qemuDomainObjEnterMonitorInternal:5048 : Entering monitor
(mon=0x7f728c07f260 vm=0x7f7294100af0 name=6ec499397d594ef2a64fcfc938f38225)
2018-02-26 13:09:13.667+0000: 50516: debug : qemuMonitorEjectMedia:2487 :
dev_name=drive-ide0-1-0 force=0
2018-02-26 13:09:13.667+0000: 50516: debug : qemuMonitorEjectMedia:2489 :
mon:0x7f728c07f260 vm:0x7f7294100af0 json:1 fd:24
2018-02-26 13:09:13.667+0000: 50516: debug :
qemuMonitorJSONCommandWithFd:301 : Send command
'{"execute":"eject","arguments":{"device":"drive-ide0-1-0","force":false},"id":"libvirt-78"}'
for write with FD -1
2018-02-26 13:09:13.667+0000: 50516: info : qemuMonitorSend:1079 :
QEMU_MONITOR_SEND_MSG: mon=0x7f728c07f260
msg={"execute":"eject","arguments":{"device":"drive-ide0-1-0","force":false}
,"id":"libvirt-78"}
fd=-1
2018-02-26 13:09:13.667+0000: 50514: info : qemuMonitorIOWrite:553 :
QEMU_MONITOR_IO_WRITE: mon=0x7f728c07f260
buf={"execute":"eject","arguments":{"device":"drive-ide0-1-0","force":false}
,"id":"libvirt-78"}
len=93 ret=93 errno=0
2018-02-26 13:09:13.669+0000: 50514: debug :
qemuMonitorJSONIOProcessLine:193 : Line [{"return": {},
"id": "libvirt-78"}]
2018-02-26 13:09:13.669+0000: 50514: info : qemuMonitorJSONIOProcessLine:213
: QEMU_MONITOR_RECV_REPLY: mon=0x7f728c07f260 reply={"return": {},
"id":
"libvirt-78"}
2018-02-26 13:09:13.669+0000: 50516: debug :
qemuMonitorJSONCommandWithFd:306 : Receive command reply ret=0
rxObject=0x5561b7c6abc0
2018-02-26 13:09:13.669+0000: 50516: debug :
qemuDomainObjExitMonitorInternal:5071 : Exited monitor (mon=0x7f728c07f260
vm=0x7f7294100af0 name=6ec499397d594ef2a64fcfc938f38225)
2018-02-26 13:09:13.669+0000: 50516: debug :
qemuDomainObjEnterMonitorInternal:5048 : Entering monitor
(mon=0x7f728c07f260 vm=0x7f7294100af0 name=6ec499397d594ef2a64fcfc938f38225)
2018-02-26 13:09:13.669+0000: 50516: debug : qemuMonitorEjectMedia:2487 :
dev_name=drive-ide0-1-0 force=0
2018-02-26 13:09:13.669+0000: 50516: debug : qemuMonitorEjectMedia:2489 :
mon:0x7f728c07f260 vm:0x7f7294100af0 json:1 fd:24
2018-02-26 13:09:13.669+0000: 50516: debug :
qemuMonitorJSONCommandWithFd:301 : Send command
'{"execute":"eject","arguments":{"device":"drive-ide0-1-0","force":false},"id":"libvirt-79"}'
for write with FD -1
2018-02-26 13:09:13.669+0000: 50516: info : qemuMonitorSend:1079 :
QEMU_MONITOR_SEND_MSG: mon=0x7f728c07f260
msg={"execute":"eject","arguments":{"device":"drive-ide0-1-0","force":false}
,"id":"libvirt-79"}
fd=-1
2018-02-26 13:09:13.669+0000: 50514: info : qemuMonitorIOWrite:553 :
QEMU_MONITOR_IO_WRITE: mon=0x7f728c07f260
buf={"execute":"eject","arguments":{"device":"drive-ide0-1-0","force":false}
,"id":"libvirt-79"}
len=93 ret=93 errno=0
2018-02-26 13:09:13.670+0000: 50514: debug :
qemuMonitorJSONIOProcessLine:193 : Line [{"return": {},
"id": "libvirt-79"}]
2018-02-26 13:09:13.670+0000: 50514: info : qemuMonitorJSONIOProcessLine:213
: QEMU_MONITOR_RECV_REPLY: mon=0x7f728c07f260 reply={"return": {},
"id":
"libvirt-79"}
2018-02-26 13:09:13.670+0000: 50516: debug :
qemuMonitorJSONCommandWithFd:306 : Receive command reply ret=0
rxObject=0x5561b7c6a080
2018-02-26 13:09:13.670+0000: 50516: debug :
qemuDomainObjExitMonitorInternal:5071 : Exited monitor (mon=0x7f728c07f260
vm=0x7f7294100af0 name=6ec499397d594ef2a64fcfc938f38225)
2018-02-26 13:09:13.670+0000: 50516: debug :
qemuDomainObjEnterMonitorInternal:5048 : Entering monitor
(mon=0x7f728c07f260 vm=0x7f7294100af0 name=6ec499397d594ef2a64fcfc938f38225)
2018-02-26 13:09:13.670+0000: 50516: debug : qemuMonitorChangeMedia:2504 :
dev_name=drive-ide0-1-0
newmedia=rbd:zstack/08085a31f8c43f278ed2f649ee166b1f@08085a31f8c43f278ed2f64
9ee166b1f:auth_supported=none:mon_host=10.0.229.181\:6789 format=raw
2018-02-26 13:09:13.670+0000: 50516: debug : qemuMonitorChangeMedia:2506 :
mon:0x7f728c07f260 vm:0x7f7294100af0 json:1 fd:24
2018-02-26 13:09:13.670+0000: 50516: debug :
qemuMonitorJSONCommandWithFd:301 : Send command
'{"execute":"change","arguments":{"device":"drive-ide0-1-0","target":"rbd:zstack/08085a31f8c43f278ed2f649ee166b1f@08085a31f8c43f278ed2f649ee166b1f:auth_supported=none:mon_host=10.0.229.181\\:6789","arg":"raw"},"id":"libvirt-80"}'
for write with FD -1
2018-02-26 13:09:13.670+0000: 50516: info : qemuMonitorSend:1079 :
QEMU_MONITOR_SEND_MSG: mon=0x7f728c07f260
msg={"execute":"change","arguments":{"device":"drive-ide0-1-0","target":"rbd
:zstack/08085a31f8c43f278ed2f649ee166b1f@08085a31f8c43f278ed2f649ee166b1f:au
th_supported=none:mon_host=10.0.229.181\\:6789","arg":"raw"},"id":"libvirt-8
0"}
fd=-1
2018-02-26 13:09:13.670+0000: 50514: info : qemuMonitorIOWrite:553 :
QEMU_MONITOR_IO_WRITE: mon=0x7f728c07f260
buf={"execute":"change","arguments":{"device":"drive-ide0-1-0","target":"rbd
:zstack/08085a31f8c43f278ed2f649ee166b1f@08085a31f8c43f278ed2f649ee166b1f:au
th_supported=none:mon_host=10.0.229.181\\:6789","arg":"raw"},"id":"libvirt-8
0"}
len=229 ret=229 errno=0
2018-02-26 13:09:13.678+0000: 50514: debug :
qemuMonitorJSONIOProcessLine:193 : Line [{"id": "libvirt-80", "error":
{"class": "GenericError", "desc": "error connecting: Operation not
supported"}}]
2018-02-26 13:09:13.678+0000: 50514: info : qemuMonitorJSONIOProcessLine:213
: QEMU_MONITOR_RECV_REPLY: mon=0x7f728c07f260 reply={"id": "libvirt-80",
"error": {"class": "GenericError", "desc": "error connecting: Operation not
supported"}}
2018-02-26 13:09:13.678+0000: 50516: debug :
qemuMonitorJSONCommandWithFd:306 : Receive command reply ret=0
rxObject=0x5561b7c88f40
2018-02-26 13:09:13.678+0000: 50516: debug : qemuMonitorJSONCheckError:381 :
unable to execute QEMU command
{"execute":"change","arguments":{"device":"drive-ide0-1-0","target":"rbd:zstack/08085a31f8c43f278ed2f649ee166b1f@08085a31f8c43f278ed2f649ee166b1f:auth_supported=none:mon_host=10.0.229.181\\:6789","arg":"raw"},"id":"libvirt-80"}:
{"id":"libvirt-80","error":{"class":"GenericError","desc":"error connecting:
Operation not supported"}}
2018-02-26 13:09:13.678+0000: 50516: error : qemuMonitorJSONCheckError:392 :
internal error: unable to execute QEMU command 'change': error connecting:
Operation not supported
2018-02-26 13:09:13.678+0000: 50516: debug :
qemuDomainObjExitMonitorInternal:5071 : Exited monitor (mon=0x7f728c07f260
vm=0x7f7294100af0 name=6ec499397d594ef2a64fcfc938f38225)
2018-02-26 13:09:13.678+0000: 50516: debug : qemuTeardownImageCgroup:123 :
Not updating cgroups for disk path
'08085a31f8c43f278ed2f649ee166b1f@08085a31f8c43f278ed2f649ee166b1f',
type: network
2018-02-26 13:09:13.682+0000: 50516: debug : qemuDomainObjEndJob:4979 :
Stopping job: modify (async=none vm=0x7f7294100af0
name=6ec499397d594ef2a64fcfc938f38225)
2018-02-26 13:09:13.983+0000: 50520: debug : virDomainLookupByName:412 :
conn=0x7f7278000aa0, name=6ec499397d594ef2a64fcfc938f38225
2018-02-26 13:09:13.990+0000: 50518: debug : virDomainGetInfo:2431 :
dom=0x7f72700009b0, (VM: name=6ec499397d594ef2a64fcfc938f38225,
uuid=6ec49939-7d59-4ef2-a64f-cfc938f38225), info=0x7f72b7856b20
2018-02-26 13:09:13.990+0000: 50518: debug : qemuGetProcessInfo:1479 : Got
status for 71205/0 user=14675 sys=3628 cpu=0 rss=105119
2018-02-26 13:09:13.991+0000: 50515: debug : virDomainGetXMLDesc:2572 :
dom=0x7f726c000c30, (VM: name=6ec499397d594ef2a64fcfc938f38225,
uuid=6ec49939-7d59-4ef2-a64f-cfc938f38225), flags=0x0"
I see the flow is virDomainUpdateDeviceFlags -> qemuMonitorChangeMedia, but
the cephx auth is dropped along the way, which makes the update fail. Has
anybody else hit this error?
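For reference, the failing call can be reproduced with a sketch like the
following (a minimal illustration, assuming the libvirt Python bindings are
installed and the domain, RBD image, and ceph secret from the log above are
already defined; the helper name is mine, not part of any SDK):

```python
import xml.etree.ElementTree as ET

# Device XML rebuilt from the virDomainUpdateDeviceFlags debug line above.
DISK_XML = """
<disk device="cdrom" type="network">
  <source name="zstack/08085a31f8c43f278ed2f649ee166b1f@08085a31f8c43f278ed2f649ee166b1f"
          protocol="rbd">
    <host name="10.0.229.181" port="6789"/>
  </source>
  <auth username="zstack">
    <secret type="ceph" uuid="9b06bb70-dc13-4338-88fd-b0c72d5ab9e9"/>
  </auth>
  <target bus="ide" dev="hdc"/>
  <readonly/>
</disk>
"""

def change_cdrom_media(domain_name, uri="qemu:///system"):
    """Hypothetical helper: live-update the CDROM media of a running domain."""
    import libvirt  # requires the libvirt-python bindings
    conn = libvirt.open(uri)
    try:
        dom = conn.lookupByName(domain_name)
        # On unpatched libvirt-4.0.0 this raises libvirtError:
        # "internal error: unable to execute QEMU command 'change':
        #  error connecting: Operation not supported"
        dom.updateDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)
    finally:
        conn.close()
```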
Best Regards,
Star Guo
Michal Privoznik
2018-Feb-27 08:53 UTC
Re: [libvirt-users] Fail in virDomainUpdateDeviceFlags (libvirt-4.0.0 + Qemu-kvm 2.9.0 + Ceph 10.2.10)
On 02/27/2018 03:06 AM, Star Guo wrote:
> Hello Everyone,
>
> My pc run in CentOS 7.4 and install libvirt-4.0.0 + Qemu-kvm 2.9.0 + Ceph
> 10.2.10 ALL-in-One.
>
> I use python-sdk with libvirt and run [self.domain.updateDeviceFlags(xml,
> libvirt.VIR_DOMAIN_AFFECT_LIVE)] on CDROM (I want to change media path).
> However, I enable libvirt debug log, the log as below:
>
> <snip/>
>
> I see the flow is virDomainUpdateDeviceFlags -> qemuMonitorChangeMedia, but
> the cephx auth is drop, so make update error. Anybody meet this error?

Yes, this is a libvirt bug. I think this fixes the issue:

diff --git i/src/qemu/qemu_driver.c w/src/qemu/qemu_driver.c
index 96454c17c..0e5ad9971 100644
--- i/src/qemu/qemu_driver.c
+++ w/src/qemu/qemu_driver.c
@@ -7842,6 +7842,8 @@ qemuDomainChangeDiskLive(virDomainObjPtr vm,
                         virQEMUDriverPtr driver,
                         bool force)
 {
+    virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver);
+    qemuDomainObjPrivatePtr priv = vm->privateData;
     virDomainDiskDefPtr disk = dev->data.disk;
     virDomainDiskDefPtr orig_disk = NULL;
     virDomainDeviceDef oldDev = { .type = dev->type };
@@ -7850,6 +7852,9 @@ qemuDomainChangeDiskLive(virDomainObjPtr vm,
     if (virDomainDiskTranslateSourcePool(disk) < 0)
         goto cleanup;

+    if (qemuDomainPrepareDiskSource(disk, priv, cfg) < 0)
+        goto cleanup;
+
     if (qemuDomainDetermineDiskChain(driver, vm, disk, false, true) < 0)
         goto cleanup;

@@ -7898,6 +7903,7 @@ qemuDomainChangeDiskLive(virDomainObjPtr vm,
     ret = 0;

 cleanup:
+    virObjectUnref(cfg);
     return ret;
 }

Can you check and confirm?

Michal
Peter Krempa
2018-Mar-02 13:28 UTC
Re: [libvirt-users] Fail in virDomainUpdateDeviceFlags (libvirt-4.0.0 + Qemu-kvm 2.9.0 + Ceph 10.2.10)
On Tue, Feb 27, 2018 at 09:53:00 +0100, Michal Privoznik wrote:
> On 02/27/2018 03:06 AM, Star Guo wrote:
> > Hello Everyone,
> >
> > My pc run in CentOS 7.4 and install libvirt-4.0.0 + Qemu-kvm 2.9.0 + Ceph
> > 10.2.10 ALL-in-One.
> >
> > I use python-sdk with libvirt and run [self.domain.updateDeviceFlags(xml,
> > libvirt.VIR_DOMAIN_AFFECT_LIVE)] on CDROM (I want to change media path).
> > However, I enable libvirt debug log, the log as below:
> >
> > <snip/>
> >
> > I see the flow is virDomainUpdateDeviceFlags -> qemuMonitorChangeMedia, but
> > the cephx auth is drop, so make update error. Anybody meet this error?
>
> Yes, this is a libvirt bug. I think this fixes the issue:
>
> diff --git i/src/qemu/qemu_driver.c w/src/qemu/qemu_driver.c
> index 96454c17c..0e5ad9971 100644
> --- i/src/qemu/qemu_driver.c
> +++ w/src/qemu/qemu_driver.c
> @@ -7842,6 +7842,8 @@ qemuDomainChangeDiskLive(virDomainObjPtr vm,
>                          virQEMUDriverPtr driver,
>                          bool force)
>  {
> +    virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver);
> +    qemuDomainObjPrivatePtr priv = vm->privateData;
>      virDomainDiskDefPtr disk = dev->data.disk;
>      virDomainDiskDefPtr orig_disk = NULL;
>      virDomainDeviceDef oldDev = { .type = dev->type };
> @@ -7850,6 +7852,9 @@ qemuDomainChangeDiskLive(virDomainObjPtr vm,
>      if (virDomainDiskTranslateSourcePool(disk) < 0)
>          goto cleanup;
>
> +    if (qemuDomainPrepareDiskSource(disk, priv, cfg) < 0)
> +        goto cleanup;

It's not that easy. At this point you also need to hotplug the 'secret'
object. Without that the command will fail, as the secret object
referenced by the storage source definition will not be present.

There should be an upstream bugzilla tracking this, and I'm planning to
fix this during my work on using the new blockdev stuff in qemu.
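Peter's point — that a QEMU 'secret' object must exist before the media
change can authenticate — can be illustrated with the QMP passthrough from
the libvirt-qemu bindings. This is a debugging sketch under assumptions, not
libvirt's internal fix: the helper names and object id are mine, and objects
added behind libvirt's back are invisible to it, which is exactly why the
real fix belongs in the driver. (QEMU 2.9's `object-add` still wraps object
properties in a "props" dictionary.)

```python
import json

def build_secret_object_add(secret_id, base64_key):
    """Build the QMP command defining a QEMU 'secret' object that holds
    the base64-encoded cephx key (illustrative helper)."""
    return {
        "execute": "object-add",
        "arguments": {
            "qom-type": "secret",
            "id": secret_id,
            # QEMU 2.9-era QMP: object properties go under "props"
            "props": {"data": base64_key, "format": "base64"},
        },
    }

def hotplug_ceph_secret(dom, secret_id, base64_key):
    """Send the object-add over libvirt's QMP passthrough (debugging only;
    libvirt does not track objects added this way)."""
    import libvirt_qemu  # ships with the libvirt python bindings
    return libvirt_qemu.qemuMonitorCommand(
        dom,
        json.dumps(build_secret_object_add(secret_id, base64_key)),
        libvirt_qemu.VIR_DOMAIN_QEMU_MONITOR_COMMAND_DEFAULT)
```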