Richard W.M. Jones
2016-Jan-13 10:18 UTC
[Libguestfs] Quantifying libvirt errors in launching the libguestfs appliance
As people may know, we frequently encounter errors caused by libvirt
when running the libguestfs appliance.

I wanted to find out exactly how frequently these happen and classify
the errors, so I ran the 'virt-df' tool overnight 1700 times.  This
tool runs several parallel qemu:///session libvirt connections, each
creating a short-lived appliance guest.

Note that I have added Cole's patch to fix https://bugzilla.redhat.com/1271183
"XML-RPC error : Cannot write data: Transport endpoint is not connected"

Results:

The test failed 538 times (32% of the time), which is pretty dismal.
To be fair, virt-df is aggressive about how it launches parallel
libvirt connections.  Most other virt-* tools use only a single
libvirt connection and are consequently more reliable.

Of the failures, 518 (96%) were of the form:

  process exited while connecting to monitor: qemu: could not load kernel '/home/rjones/d/libguestfs/tmp/.guestfs-1000/appliance.d/kernel': Permission denied

which is https://bugzilla.redhat.com/921135 or maybe
https://bugzilla.redhat.com/1269975.  It's not clear to me if these
bugs have different causes, but if they do then potentially we're
seeing a mix of both, since my test has no way to distinguish them.

19 of the failures (4%) were of the form:

  process exited while connecting to monitor: fread() failed

which I believe is a previously unknown bug.  I have filed it as
https://bugzilla.redhat.com/1298122

Finally there was 1 failure:

  Unable to read from monitor: Connection reset by peer

which I believe is also a new bug.  I have filed it as
https://bugzilla.redhat.com/1298124

It would be good if libvirt could routinely test the case of multiple
parallel launches of qemu:///session, since it still contains bugs
even after Cole's fixes.

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-top is 'top' for virtual machines.  Tiny program with many
powerful monitoring features, net stats, disk stats, logging, etc.
http://people.redhat.com/~rjones/virt-top
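(For anyone wanting to reproduce a similar measurement, a loop along
these lines works.  This is a minimal sketch, not the harness actually
used above; the disk image names and exact virt-df invocation are
assumptions.)

  #!/bin/sh
  # Run virt-df repeatedly against a couple of scratch disks and count
  # failed runs.  stderr from failing runs accumulates in the log so
  # the error messages can be classified afterwards.
  runs=1700; fail=0; i=0
  while [ $i -lt $runs ]; do
      virt-df -a /var/tmp/test1.img -a /var/tmp/test2.img \
          >/dev/null 2>>virt-df-errors.log || fail=$((fail+1))
      i=$((i+1))
  done
  echo "failed $fail of $runs runs"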
Justin Clift
2016-Jan-13 12:28 UTC
Re: [Libguestfs] [libvirt] Quantifying libvirt errors in launching the libguestfs appliance
On 2016-01-13 10:18, Richard W.M. Jones wrote:
<snip>
> It would be good if libvirt could routinely test the case of multiple
> parallel launches of qemu:///session, since it still contains bugs
> even after Cole's fixes.

Sounds like this testing script would be useful as a (weekly?) cronjob
or similar. :)

+ Justin
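(A sketch of what such a crontab entry could look like; the script
path is hypothetical:)

  # m h dom mon dow  command
  # Run the parallel-launch stress test every Sunday at 03:00; cron
  # mails any output, i.e. the failure summary, to the crontab owner.
  0 3 * * 0  $HOME/bin/virt-df-stress.sh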
Martin Kletzander
2016-Jan-13 15:25 UTC
Re: [Libguestfs] [libvirt] Quantifying libvirt errors in launching the libguestfs appliance
On Wed, Jan 13, 2016 at 10:18:42AM +0000, Richard W.M. Jones wrote:
<snip>
> Of the failures, 518 (96%) were of the form:
>
>   process exited while connecting to monitor: qemu: could not load kernel '/home/rjones/d/libguestfs/tmp/.guestfs-1000/appliance.d/kernel': Permission denied
>
> which is https://bugzilla.redhat.com/921135 or maybe
> https://bugzilla.redhat.com/1269975.  It's not clear to me if these
> bugs have different causes, but if they do then potentially we're
> seeing a mix of both, since my test has no way to distinguish them.

It looks to me like the same problem, and the same problem we have
talked about a bunch of times without, apparently, reaching a
conclusion.

For each of the kernels, libvirt labels them (with both DAC and
selinux labels), then proceeds to launching qemu.  If this is done in
parallel, the race is pretty obvious.  Could you remind me why you
couldn't use <seclabel model='none'/> or <seclabel relabel='no'/> or
something that would mitigate this?  If we cannot use this, then we
need to implement the <seclabel/> element for kernel and initrd.

> 19 of the failures (4%) were of the form:
>
>   process exited while connecting to monitor: fread() failed
>
> which I believe is a previously unknown bug.  I have filed it as
> https://bugzilla.redhat.com/1298122

I think even this one might be the same case: maybe selinux stops qemu
from reading the kernel/initrd.

> Finally there was 1 failure:
>
>   Unable to read from monitor: Connection reset by peer
>
> which I believe is also a new bug.  I have filed it as
> https://bugzilla.redhat.com/1298124

This, I believe, means QEMU exited (as in the previous one), just at a
different point in time.
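(For reference, both variants mentioned above are per-domain settings
in the guest XML.  A sketch of the two, either of which would appear
at the top level of the <domain> definition; the static label value is
purely illustrative:)

  <!-- disable sVirt confinement for this guest entirely -->
  <seclabel type='none'/>

  <!-- or keep a fixed label and tell libvirt never to relabel the
       guest's resources on start/stop -->
  <seclabel type='static' model='selinux' relabel='no'>
    <label>system_u:system_r:svirt_t:s0:c100,c200</label>
  </seclabel>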
Richard W.M. Jones
2016-Jan-13 15:50 UTC
Re: [Libguestfs] [libvirt] Quantifying libvirt errors in launching the libguestfs appliance
On Wed, Jan 13, 2016 at 04:25:14PM +0100, Martin Kletzander wrote:
> For each of the kernels, libvirt labels them (with both DAC and
> selinux labels), then proceeds to launching qemu.  If this is done in
> parallel, the race is pretty obvious.  Could you remind me why you
> couldn't use <seclabel model='none'/> or <seclabel relabel='no'/> or
> something that would mitigate this?

We value having sVirt :-)

However I'm just about to rerun the tests with <seclabel type='none'/>
to see if the problem goes away.  Will let you know tomorrow once they
have run again.

> If we cannot use this, then we need to implement the <seclabel/>
> element for kernel and initrd.

Right, that could work for us I think.

> > 19 of the failures (4%) were of the form:
> >
> >   process exited while connecting to monitor: fread() failed
> >
> > which I believe is a previously unknown bug.  I have filed it as
> > https://bugzilla.redhat.com/1298122
>
> I think even this one might be the same case: maybe selinux stops
> qemu from reading the kernel/initrd.
>
> > Finally there was 1 failure:
> >
> >   Unable to read from monitor: Connection reset by peer
> >
> > which I believe is also a new bug.  I have filed it as
> > https://bugzilla.redhat.com/1298124
>
> This, I believe, means QEMU exited (as in the previous one), just at
> a different point in time.

OK - let's see if they go away when we get the kernel thing fixed.

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-top is 'top' for virtual machines.  Tiny program with many
powerful monitoring features, net stats, disk stats, logging, etc.
http://people.redhat.com/~rjones/virt-top
Cole Robinson
2016-Jan-13 23:45 UTC
Re: [Libguestfs] [libvirt] Quantifying libvirt errors in launching the libguestfs appliance
On 01/13/2016 05:18 AM, Richard W.M. Jones wrote:
<snip>
> Of the failures, 518 (96%) were of the form:
>
>   process exited while connecting to monitor: qemu: could not load kernel '/home/rjones/d/libguestfs/tmp/.guestfs-1000/appliance.d/kernel': Permission denied
>
> which is https://bugzilla.redhat.com/921135 or maybe
> https://bugzilla.redhat.com/1269975.  It's not clear to me if these
> bugs have different causes, but if they do then potentially we're
> seeing a mix of both, since my test has no way to distinguish them.

I just experimented with this; I think it's the issue I suggested at:

https://bugzilla.redhat.com/show_bug.cgi?id=1269975#c4

I created two VMs, kernel1 and kernel2, just booting off a kernel in
$HOME/session-kernel/vmlinuz.  Then I added this patch:

diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index f083f3f..5d9f0fa 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -4901,6 +4901,13 @@ qemuProcessLaunch(virConnectPtr conn,
                    incoming ? incoming->path : NULL) < 0)
         goto cleanup;
 
+    if (STREQ(vm->def->name, "kernel1")) {
+        for (int z = 0; z < 30; z++) {
+            printf("kernel1: sleeping %d of 30\n", z + 1);
+            sleep(1);
+        }
+    }
+
     /* Security manager labeled all devices, therefore
      * if any operation from now on fails, we need to ask the caller to
      * restore labels.

which lands right after selinux labels are set on VM startup.  This is
then easy to reproduce with:

  virsh start kernel1    (sleeps)
  virsh start kernel2 && virsh destroy kernel2

The shared vmlinuz is reset to user_home_t after kernel2 is shut down,
so kernel1 fails to start once the patch's delay expires.

When we detect similar issues with <disk> devices, for example when
the media already has the expected label, we encode relabel='no' in
the disk XML, which tells libvirt not to run restorecon on the disk's
path when the VM is shut down.  However the kernel/initrd XML doesn't
support this, so it won't work there.  Adding that could be one fix.

But I think there are longer-term plans for this type of issue using
ACLs, or virtlockd, or something; Michal had patches but I don't know
the specifics.

Unfortunately even hardlinks share selinux labels, so I don't think
there's any workaround on the libguestfs side short of using a
separate copy of the appliance kernel for each VM.

- Cole
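(For comparison, the existing per-disk escape hatch mentioned above
looks like this in the domain XML; the file path is made up.  The
suggested fix would allow the same <seclabel/> child for the kernel
and initrd, which today are plain text elements under <os>:)

  <disk type='file' device='disk'>
    <driver name='qemu' type='raw'/>
    <source file='/var/lib/libvirt/images/shared-media.img'>
      <!-- don't restorecon this path when the guest shuts down -->
      <seclabel model='selinux' relabel='no'/>
    </source>
    <target dev='vda' bus='virtio'/>
  </disk>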
Cole Robinson
2016-Jan-13 23:48 UTC
Re: [Libguestfs] [libvirt] Quantifying libvirt errors in launching the libguestfs appliance
On 01/13/2016 06:45 PM, Cole Robinson wrote:
<snip>
> Unfortunately even hardlinks share selinux labels, so I don't think
> there's any workaround on the libguestfs side short of using a
> separate copy of the appliance kernel for each VM.

Whoops, I should have checked my libvirt mail first: you guys already
came to this conclusion elsewhere in the thread :)

- Cole
Yaniv Kaul
2016-Jan-14 08:07 UTC
Re: [Libguestfs] [libvirt] Quantifying libvirt errors in launching the libguestfs appliance
On Wed, Jan 13, 2016 at 2:28 PM, Justin Clift <justin@postgresql.org> wrote:
> On 2016-01-13 10:18, Richard W.M. Jones wrote:
> <snip>
>> It would be good if libvirt could routinely test the case of multiple
>> parallel launches of qemu:///session, since it still contains bugs
>> even after Cole's fixes.
>
> Sounds like this testing script would be useful as a (weekly?)
> cronjob or similar. :)

I've actually seen such issues while running Lago [1].  It launches
multiple (5 or so) virt-* tools.

[1] https://github.com/oVirt/lago

Y.
Jiri Denemark
2016-Jan-14 09:51 UTC
Re: [Libguestfs] [libvirt] Quantifying libvirt errors in launching the libguestfs appliance
On Wed, Jan 13, 2016 at 16:25:14 +0100, Martin Kletzander wrote:
<snip>
> For each of the kernels, libvirt labels them (with both DAC and
> selinux labels), then proceeds to launching qemu.  If this is done in
> parallel, the race is pretty obvious.  Could you remind me why you
> couldn't use <seclabel model='none'/> or <seclabel relabel='no'/> or
> something that would mitigate this?  If we cannot use this, then we
> need to implement the <seclabel/> element for kernel and initrd.

Hmm, can't we just label kernel and initrd files the same way we label
<shareable/> disk images, i.e. with a non-exclusive label so that all
QEMU processes can access them, and avoid removing the label once a
domain disappears?

Jirka
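(For reference, marking an image shareable is just an extra element in
the disk definition; the path here is made up.  As described above,
such images get a non-exclusive label so several guests can use them
at once:)

  <disk type='file' device='disk'>
    <driver name='qemu' type='raw'/>
    <source file='/var/lib/libvirt/images/cluster-shared.img'/>
    <target dev='vdb' bus='virtio'/>
    <shareable/>
  </disk>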