Hi,

I wonder if someone can provide pointers on what might be wrong with my setup. I'm running a KVM host (Fedora 23 with a Rawhide kernel) on top of a Skylake CPU - Intel(R) Core(TM) i5-6260U CPU @ 1.80GHz (family: 0x6, model: 0x4e, stepping: 0x3).

As part of an experiment, I have enabled nested virt support on the host and created a Fedora 23 guest with virt-manager using host-passthrough as the CPU model. The guest comes up fine and was updated without issues.

Now if I install VirtualBox 5.0 and try to start a VM (Fedora 23 again), I end up with a general protection fault in the console (below) and the VM does not boot. A similar fault appeared when trying with a CentOS 7.2 guest and an Ubuntu VM, but in that case the guest reboots itself just after the fault.

What could be the problem here? Any help is very appreciated.

Thanks,

Daniel

[   77.538312] general protection fault: 0000 [#1] SMP
[   77.539235] Modules linked in: ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 xt_conntrack ip_set nfnetlink vboxpci(OE) vboxnetadp(OE) vboxnetflt(OE) ebtable_broute bridge stp llc ebtable_filter ebtable_nat ebtables ip6table_mangle ip6table_security ip6table_raw ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_filter ip6_tables iptable_mangle iptable_security iptable_raw iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack snd_hda_codec_generic vboxdrv(OE) iosf_mbi kvm_intel snd_hda_intel kvm snd_hda_codec snd_hda_core snd_hwdep snd_seq snd_seq_device joydev snd_pcm ppdev irqbypass crct10dif_pclmul crc32_pclmul snd_timer snd acpi_cpufreq virtio_balloon parport_pc tpm_tis soundcore tpm i2c_piix4 parport nfsd auth_rpcgss nfs_acl lockd grace sunrpc xfs libcrc32c virtio_console virtio_net virtio_blk qxl drm_kms_helper ttm drm crc32c_intel serio_raw virtio_pci virtio_ring virtio ata_generic pata_acpi
[   77.548011] CPU: 0 PID: 1991 Comm: EMT Tainted: G OE 4.4.9-300.fc23.x86_64 #1
[   77.548719] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.8.2-20150714_191134- 04/01/2014
[   77.549715] task: ffff8800b4e8dd00 ti: ffff8800b4fdc000 task.ti: ffff8800b4fdc000
[   77.550320] RIP: 0010:[<ffffffffa05ee506>]  [<ffffffffa05ee506>] 0xffffffffa05ee506
[   77.551053] RSP: 0018:ffff8800b4fdfd70  EFLAGS: 00050206
[   77.551548] RAX: 00000000003406f0 RBX: 00000000ffffffdb RCX: 000000000000009b
[   77.552193] RDX: 0000000000000000 RSI: ffff8800b4fdfd00 RDI: ffff8800b4fdfcc8
[   77.552952] RBP: ffff8800b4fdfd90 R08: 0000000000000004 R09: 00000000003406f0
[   77.553498] R10: 0000000049656e69 R11: 000000000f8bfbff R12: 0000000000000020
[   77.554042] R13: 0000000000000000 R14: ffffc90001c0207c R15: ffffffffa04c5220
[   77.554594] FS:  00007fc70f128700(0000) GS:ffff88013fc00000(0000) knlGS:0000000000000000
[   77.555208] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   77.555650] CR2: 00007fc70eecb000 CR3: 00000000ba695000 CR4: 00000000003406f0
[   77.556195] Stack:
[   77.556360]  0000000000000000 ffffffff00000000 0000000000000000 0000000000000002
[   77.556985]  ffff8800b4fdfdb0 ffffffffa0603db1 ffffc90001c02010 ffff88013822f6d0
[   77.557597]  ffff8800b4fdfe30 ffffffffa048b2f6 ffff8800b4ebc5c0 ffff8800b4fdc000
[   77.558210] Call Trace:
[   77.558421]  [<ffffffffa048b2f6>] ? supdrvIOCtl+0x2d36/0x3250 [vboxdrv]
[   77.558964]  [<ffffffff813c2e45>] ? copy_user_enhanced_fast_string+0x5/0x10
[   77.559526]  [<ffffffffa04845b0>] ? VBoxDrvLinuxIOCtl_5_0_20+0x150/0x250 [vboxdrv]
[   77.560133]  [<ffffffff81241648>] ? do_vfs_ioctl+0x298/0x480
[   77.560593]  [<ffffffff81338393>] ? security_file_ioctl+0x43/0x60
[   77.561081]  [<ffffffff812418a9>] ? SyS_ioctl+0x79/0x90
[   77.561503]  [<ffffffff817a0f2e>] ? entry_SYSCALL_64_fastpath+0x12/0x71
[   77.562033] Code: 88 e4 fc ff ff b9 3a 00 00 00 0f 32 48 c1 e2 20 89 c0 48 09 d0 48 89 05 f9 db 0e 00 0f 20 e0 b9 9b 00 00 00 48 89 05 d2 db 0e 00 <0f> 32 48 c1 e2 20 89 c0 b9 80 00 00 c0 48 09 d0 48 89 05 cb db
[   77.564260] RIP  [<ffffffffa05ee506>] 0xffffffffa05ee506
[   77.564701] RSP <ffff8800b4fdfd70>
[   77.565103] ---[ end trace 9cf480524482767b ]---
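For anyone reproducing this, the nested-virt prerequisites can be checked on both levels before installing VirtualBox. A sketch (assumes an Intel host using the kvm_intel module):

```shell
# On the L0 host: Y (or 1) means nested VMX is enabled for KVM.
cat /sys/module/kvm_intel/parameters/nested 2>/dev/null \
    || echo "kvm_intel module not loaded"

# To enable it persistently, then reload kvm_intel (or reboot):
#   echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm_intel.conf

# Inside the L1 guest: host-passthrough should expose the vmx flag,
# which VirtualBox needs for hardware virtualization.
grep -qm1 vmx /proc/cpuinfo 2>/dev/null && echo "vmx exposed" || echo "vmx missing"
```

With the guest defined as described above (host-passthrough, i.e. `<cpu mode='host-passthrough'/>` in the libvirt domain XML), the vmx flag should be visible inside the L1 guest.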
Daniel Sanabria
2016-May-16 13:18 UTC
Re: [libvirt-users] Nested Virtualization not working
I wonder if it has to do with this: https://www.virtualbox.org/ticket/14965

Has anybody here experienced a similar issue?

On 15 May 2016 at 20:05, Daniel Sanabria <sanabria.d@gmail.com> wrote:
> [original message and oops trace quoted in full above]
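For what it's worth, the Code: line in the oops can be decoded mechanically and it fits that theory. A quick sketch (the hex string is copied verbatim from the trace; the disassembly reading is my own interpretation):

```shell
code='88 e4 fc ff ff b9 3a 00 00 00 0f 32 48 c1 e2 20 89 c0 48 09 d0 48 89 05 f9 db 0e 00 0f 20 e0 b9 9b 00 00 00 48 89 05 d2 db 0e 00 <0f> 32 48 c1 e2 20 89 c0 b9 80 00 00 c0 48 09 d0 48 89 05 cb db'

# The byte wrapped in <> marks the start of the faulting instruction.
fault=$(printf '%s\n' "$code" | grep -o '<..> ..' | tr -d '<>')
echo "faulting opcode: $fault"          # 0f 32 = RDMSR

# The last "b9 imm32" (mov ecx, imm32) before the fault names the MSR
# being read; imm32 is little-endian, so reverse the four bytes.
before=${code%%<*}
imm=$(printf '%s\n' "$before" | grep -o 'b9 .. .. .. ..' | tail -n1 | cut -d' ' -f2-5)
set -- $imm
printf 'MSR being read: 0x%s%s%s%s\n' "$4" "$3" "$2" "$1"   # 0x0000009b
```

So the guest kernel faulted on RDMSR of MSR 0x9B (IA32_SMM_MONITOR_CTL per the Intel SDM), and RCX=0x9b in the register dump agrees. A #GP on reading an MSR that the L0 hypervisor does not emulate would match the failure mode described in that VirtualBox ticket.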