________________________________
From: centos-virt-bounces at centos.org <centos-virt-bounces at
centos.org> on behalf of Laurentiu Soica <laurentiu at soica.ro>
Sent: Sunday, August 14, 2016 10:17 AM
To: Discussion about the virtualization on CentOS
Subject: Re: [CentOS-virt] Nested KVM issue
More details on the subject:
I suspect it is a nested KVM issue because it appeared after I enabled the nested
KVM feature. Without nesting, in any case, the second-level VMs are unusable in
terms of performance.
I am using CentOS 7 with:
kernel: 3.10.0-327.22.2.el7.x86_64
qemu-kvm:1.5.3-105.el7_2.4
libvirt:1.2.17-13.el7_2.5
on both the baremetal and the compute VM.
Please post:
1) # virsh dumpxml VM-L1   (the L1 guest where you expect nested KVM to appear)
2) Log in to VM-L1 and run:
    # lsmod | grep kvm
3) I need the following outputs from VM-L1 (in case it is the compute node):
# cat /etc/nova/nova.conf | grep virt_type
# cat /etc/nova/nova.conf | grep cpu_mode
Boris.
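Alongside the outputs requested above, nested KVM enablement itself can be verified directly. A minimal sketch of the standard checks (the first two run on the L0 baremetal, the last inside VM-L1; the modprobe.d filename is an arbitrary choice):

```shell
# On the L0 baremetal: is nesting enabled for kvm_intel? ('Y' or '1' means yes)
cat /sys/module/kvm_intel/parameters/nested

# Make it persistent across reboots (the filename here is arbitrary):
echo 'options kvm_intel nested=1' > /etc/modprobe.d/kvm_intel.conf

# Inside VM-L1: with <cpu mode='host-passthrough'/>, the vmx flag should be visible
grep -c vmx /proc/cpuinfo
```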
The only workaround for now is to shut down the compute VM and start it again
from the baremetal with virsh start.
A simple restart of the compute node doesn't help; it looks like the
qemu-kvm process corresponding to the compute VM is the problem.
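That workaround, as run from the baremetal, can be sketched as follows (the domain name baremetalbrbm_1 is taken from the dumpxml later in this thread):

```shell
# From the L0 baremetal; 'baremetalbrbm_1' is the compute VM's libvirt domain name
virsh shutdown baremetalbrbm_1
# If the guest is hung and ignores the ACPI shutdown request, force it off instead:
#   virsh destroy baremetalbrbm_1
virsh start baremetalbrbm_1
```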
Laurentiu
On Sun, Aug 14, 2016 at 00:19, Laurentiu Soica <laurentiu at soica.ro> wrote:
Hello,
I have an OpenStack setup in a virtual environment on CentOS 7.
The baremetal has nested KVM enabled and one compute node running as a VM.
Inside the compute node I have multiple VMs running.
About every 3 days the VMs become inaccessible and the compute node reports
high CPU usage. The qemu-kvm process for each VM inside the compute node shows
full CPU usage.
Please help me with some hints to debug this issue.
Thanks,
Laurentiu
Hello,
1. <domain type='kvm' id='6'>
  <name>baremetalbrbm_1</name>
  <uuid>534e9b54-5e4c-4acb-adcf-793f841551a7</uuid>
  <memory unit='KiB'>104857600</memory>
  <currentMemory unit='KiB'>104857600</currentMemory>
  <vcpu placement='static'>36</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
    <boot dev='hd'/>
    <bootmenu enable='no'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <cpu mode='host-passthrough'/>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='unsafe'/>
      <source file='/var/lib/libvirt/images/baremetalbrbm_1.qcow2'/>
      <backingStore/>
      <target dev='sda' bus='sata'/>
      <alias name='sata0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <alias name='scsi0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <controller type='usb' index='0'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='sata' index='0'>
      <alias name='sata0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='00:f1:15:20:c5:46'/>
      <source network='brbm' bridge='brbm'/>
      <virtualport type='openvswitch'>
        <parameters interfaceid='654ad04f-fa0a-41dd-9d30-b84e702462fe'/>
      </virtualport>
      <target dev='vnet5'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <interface type='bridge'>
      <mac address='52:54:00:d3:c9:24'/>
      <source bridge='br57'/>
      <target dev='vnet6'/>
      <model type='rtl8139'/>
      <alias name='net1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/3'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/3'>
      <source path='/dev/pts/3'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='5903' autoport='yes' listen='127.0.0.1'>
      <listen type='address' address='127.0.0.1'/>
    </graphics>
    <video>
      <model type='cirrus' vram='16384' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
</domain>
2.
[root at overcloud-novacompute-0 ~]# lsmod | grep kvm
kvm_intel             162153  70
kvm                   525409  1 kvm_intel
[root at overcloud-novacompute-0 ~]# cat /etc/nova/nova.conf | grep virt_type | grep -v '^#'
virt_type=kvm
[root at overcloud-novacompute-0 ~]# cat /etc/nova/nova.conf | grep cpu_mode | grep -v '^#'
cpu_mode=host-passthrough
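As an aside, the two `cat ... | grep ... | grep -v` pipelines above can be collapsed into a single grep. A sketch against a hypothetical sample file rather than a live /etc/nova/nova.conf:

```shell
# Build a small hypothetical sample standing in for /etc/nova/nova.conf
cat > /tmp/nova.conf.sample <<'EOF'
#virt_type=qemu
virt_type=kvm
cpu_mode=host-passthrough
EOF

# One pass: print the active (uncommented) virt_type and cpu_mode settings
grep -E '^(virt_type|cpu_mode)=' /tmp/nova.conf.sample
# prints: virt_type=kvm
#         cpu_mode=host-passthrough
```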
Thanks,
Laurentiu
_______________________________________________
CentOS-virt mailing list
CentOS-virt at centos.org
https://lists.centos.org/mailman/listinfo/centos-virt
The reports posted look good to me. This config should provide the best available
performance for the cloud VMs (L2) on the compute node.
 1. Please remind me what goes wrong from your standpoint.
 2. Which CPU is installed on the compute node, and how much RAM?
    My actual concern is the number of cloud VMs versus the number of CPU cores
    (not threads).
    Please also check the `top` report with regard to swap usage.
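That cores-versus-VMs concern can be put in numbers. A sketch where the 36 cores come from this thread but the per-VM vCPU counts are purely illustrative:

```shell
# vCPU overcommit ratio: total guest vCPUs vs. physical cores (not threads)
CORES=36                # compute node core count, per the thread
VCPUS="4 4 4 4 4 4"     # illustrative per-VM vCPU counts, NOT from the thread
echo $VCPUS | tr ' ' '\n' |
  awk -v cores="$CORES" '{ total += $1 }
      END { printf "vCPUs=%d cores=%d ratio=%.2f\n", total, cores, total/cores }'
# prints: vCPUs=24 cores=36 ratio=0.67
```

A ratio well above 1 on a nested setup is a common cause of guests starving each other for CPU.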
Thanks.
Boris.
________________________________
From: centos-virt-bounces at centos.org <centos-virt-bounces at
centos.org> on behalf of Laurentiu Soica <laurentiu at soica.ro>
Sent: Sunday, August 14, 2016 3:06 PM
To: Discussion about the virtualization on CentOS
Subject: Re: [CentOS-virt] Nested KVM issue
Hello,
The issue reproduced again and it doesn't look like a swap problem. Some
details:
on the baremetal, from top:
top - 08:08:52 up 5 days, 16:43,  3 users,  load average: 36.19, 36.05, 36.05
Tasks: 493 total,   1 running, 492 sleeping,   0 stopped,   0 zombie
%Cpu(s):  3.5 us, 87.9 sy,  0.0 ni,  8.6 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 12357451+total, 14296000 free, 65634428 used, 43644088 buff/cache
KiB Swap:  4194300 total,  4073868 free,   120432 used. 56953888 avail Mem
19158 qemu      20   0  0.098t 0.041t  10476 S  3650 35.6  13048:24 qemu-kvm
The compute node has 36 CPUs and the usage is now 100%. There are more than 50
GB of memory still available on the baremetal. The swap is barely used, 120 MB.
On the compute node, from top:
top - 05:11:58 up 1 day, 15:08,  2 users,  load average: 40.46, 40.49, 40.74
%Cpu(s): 99.1 us,  0.7 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.1 si,  0.1 st
KiB Mem : 10296246+total, 78079936 free, 23671360 used,  1211160 buff/cache
KiB Swap:        0 total,        0 free,        0 used. 78939968 avail Mem
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 6032 qemu      20   0 10.601g 1.272g  12964 S 400.0  1.3 588:40.39 qemu-kvm
 5673 qemu      20   0 10.602g 1.006g  13020 S 399.7  1.0   1161:47 qemu-kvm
 5998 qemu      20   0 10.601g 1.192g  13028 S 367.9  1.2   1544:30 qemu-kvm
 5951 qemu      20   0 10.601g 1.246g  13020 S 348.3  1.3   1547:38 qemu-kvm
 5750 qemu      20   0 10.599g 990136  13060 S 339.1  1.0   1152:25 qemu-kvm
 5752 qemu      20   0 10.598g 1.426g  13040 S 313.9  1.5 663:13.65 qemu-kvm
....
There are more than 70 GB of memory available on the compute node. All VMs are
using 100% of their CPUs and they are not accessible anymore.
Laurentiu
Sorry, how do you trigger the problem?
B.
________________________________
From: centos-virt-bounces at centos.org <centos-virt-bounces at
centos.org> on behalf of Laurentiu Soica <laurentiu at soica.ro>
Sent: Tuesday, August 16, 2016 3:28 AM
To: Discussion about the virtualization on CentOS
Subject: Re: [CentOS-virt] Nested KVM issue
Hello,
The issue reproduced again and it doesn't look like a swap problem. Some
details:
on the baremetal, from top:
top - 08:08:52 up 5 days, 16:43,  3 users,  load average: 36.19, 36.05, 36.05
Tasks: 493 total,   1 running, 492 sleeping,   0 stopped,   0 zombie
%Cpu(s):  3.5 us, 87.9 sy,  0.0 ni,  8.6 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 12357451+total, 14296000 free, 65634428 used, 43644088 buff/cache
KiB Swap:  4194300 total,  4073868 free,   120432 used. 56953888 avail Mem
19158 qemu      20   0  0.098t 0.041t  10476 S  3650 35.6  13048:24 qemu-kvm
The compute node has 36 CPUs and the usage is now 100%. There are more than 50
GB of memory still available on the baremetal. The swap is barely used, 120 MB.
On compute node, from top:
top - 05:11:58 up 1 day, 15:08,  2 users,  load average: 40.46, 40.49, 40.74
%Cpu(s): 99.1 us,  0.7 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.1 si,  0.1 st
KiB Mem : 10296246+total, 78079936 free, 23671360 used,  1211160 buff/cache
KiB Swap:        0 total,        0 free,        0 used. 78939968 avail Mem
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 6032 qemu      20   0 10.601g 1.272g  12964 S 400.0  1.3 588:40.39 qemu-kvm
 5673 qemu      20   0 10.602g 1.006g  13020 S 399.7  1.0   1161:47 qemu-kvm
 5998 qemu      20   0 10.601g 1.192g  13028 S 367.9  1.2   1544:30 qemu-kvm
 5951 qemu      20   0 10.601g 1.246g  13020 S 348.3  1.3   1547:38 qemu-kvm
 5750 qemu      20   0 10.599g 990136  13060 S 339.1  1.0   1152:25 qemu-kvm
 5752 qemu      20   0 10.598g 1.426g  13040 S 313.9  1.5 663:13.65 qemu-kvm
....
There are more than 70 GB of memory available on the compute node. All VMs are
using 100% of their CPUs and they are not accessible anymore.
Laurentiu
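When every qemu-kvm process pegs its vCPUs like this, a couple of standard host-side diagnostics can narrow down whether the guests are genuinely busy or stuck in a KVM exit loop. A sketch (the PID is taken from the top listing above; these are generic pidstat/perf invocations, not commands from this thread):

```shell
PID=6032   # one qemu-kvm PID from the top output above
# Per-thread view: are the vCPU threads spinning, or an I/O thread?
pidstat -t -p "$PID" 1 5
# Sample KVM guest exits for ~10 seconds, then summarize by exit reason;
# an overwhelming share of a single exit reason points at where the loop is.
perf kvm stat record -p "$PID" sleep 10
perf kvm stat report
```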