Displaying 20 results from an estimated 10000 matches similar to: "Cputune causing VM to show usage as hardware interrupts"
2023 May 12
1
Question regarding correct usage of CPU shares
Hi there,
I have a question regarding the shares option of the cputune section. I
want to illustrate my question with the following example. Let's assume
I have two virtual machines, configured as follows, running on four
dedicated cores with two threads each:
VM1:
<cputune>
<shares>512</shares>
<vcpupin vcpu="0" cpuset="0"/>
<vcpupin
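The excerpt above is cut off; for illustration, a complete <cputune> block of
this shape could look like the sketch below. The shares value is the one shown
above, while the remaining pinning lines are hypothetical and only indicate how
the eight host threads (four cores, two threads each) might be assigned:

<cputune>
  <shares>512</shares>
  <vcpupin vcpu="0" cpuset="0"/>
  <vcpupin vcpu="1" cpuset="4"/>   <!-- hypothetical: sibling-thread numbering depends on the host topology -->
  <emulatorpin cpuset="0,4"/>      <!-- hypothetical: keep the emulator threads on the same core -->
</cputune>

Note that <shares> is a relative weight: it only influences scheduling when the
pinned host CPUs are actually contended, and on its own it does not cap the VM.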
2013 Dec 03
0
cputune shares with multiple CPUs and pinning
Hi,
I have found that CPU time partitioning based on CPU shares weights is not
very intuitive.
On RHEL64, I deployed two qemu/kvm VMs
VM1 with 1 vcpu and 512 cpu shares
VM2 with 2 vcpus and 1024 cpu shares
I pinned their vcpus to specific host pcpus:
VM1 vcpu 0 to host pcpu1
VM2 vcpu 0 to host pcpu1, VM2 vcpu 1 to host pcpu2
Inside the VMs I executed a simple process that consumes all
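Expressed as domain XML, the setup described above would look roughly like the
sketch below (reconstructed from the description, not the poster's actual
files):

<!-- VM1: 1 vcpu, 512 shares, vcpu 0 pinned to host pcpu 1 -->
<vcpu>1</vcpu>
<cputune>
  <shares>512</shares>
  <vcpupin vcpu="0" cpuset="1"/>
</cputune>

<!-- VM2: 2 vcpus, 1024 shares, pinned to host pcpus 1 and 2 -->
<vcpu>2</vcpu>
<cputune>
  <shares>1024</shares>
  <vcpupin vcpu="0" cpuset="1"/>
  <vcpupin vcpu="1" cpuset="2"/>
</cputune>

<shares> applies to the whole domain's cgroup rather than to individual vcpus,
so the weight VM2's vcpu 0 effectively gets on the contended pcpu 1 also
depends on how the kernel spreads the group weight across its runqueues, which
is one reason the resulting split can look unintuitive.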
2017 Apr 27
1
Does lxc support cputune/vcpusched option
2011 Apr 04
0
Release of libvirt-0.9.0
As scheduled, libvirt 0.9.0 was tagged and pushed today; it's
available from FTP at:
ftp://libvirt.org/libvirt/
This is a large release with respect to the number of features and
changes, and well worth bumping the middle version number. We are also
getting closer to a 1.0.0 release!
Features:
- Support cputune cpu usage tuning (Osier Yang and Nikunj A. Dadhania)
- Add public APIs for storage
2018 Sep 17
0
Re: NUMA issues on virtualized hosts
Hello,
so the current domain configuration:
<cpu mode='host-passthrough'>
  <topology sockets='8' cores='4' threads='1'/>
  <numa>
    <cell cpus='0-3' memory='62000000'/>
    <cell cpus='4-7' memory='62000000'/>
    <cell cpus='8-11' memory='62000000'/>
    <cell cpus='12-15'
2012 Oct 17
0
cgroup blkio.weight working, but not for KVM guests
I'm running libvirt 0.10.2 and qemu-kvm-1.2.0, both compiled from source, on
CentOS 6. I've got a working blkio cgroup hierarchy which I'm attaching
guests to using the following XML guest configs:
VM1 (foreground):
<cputune>
<shares>2048</shares>
</cputune>
<blkiotune>
<weight>1000</weight>
</blkiotune>
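For contrast, a lower-priority guest in the same setup would typically carry
smaller values. The sketch below is a hypothetical background guest, not the
poster's second config:

VM2 (background, hypothetical):
<cputune>
  <shares>512</shares>
</cputune>
<blkiotune>
  <weight>250</weight>
</blkiotune>

Both <shares> and <weight> are relative weights, so they only take effect when
the guests actually compete for the CPU or for the block device.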
2018 Sep 14
0
Re: NUMA issues on virtualized hosts
Hello,
OK, I found that the CPU pinning was wrong, so I corrected it to be 1:1. The
issue with iozone remains the same.
The spec is running; however, it runs slower than in the 1-NUMA case.
The corrected XML looks like follows:
<cpu mode='host-passthrough'>
  <topology sockets='8' cores='4' threads='1'/>
  <numa>
    <cell cpus='0-3'
2013 Mar 25
1
Failed to boot lxc with libvirt 1.0.3:2013-03-25 06:54:17.620+0000: 1: error : lxcContainerMountBasicFS:563 : Failed to mount /selinux on /selinux type selinuxfs flags=e opts=(null): No such device
Hi all, I am using lxc with libvirt, but I can't boot an lxc container with libvirt 1.0.3 (libvirt 0.9.8 works). Below is my environment. Am I missing something?
lxc1.xml:
<domain type='lxc'>
  <name>lxc1</name>
  <memory>1024000</memory>
  <cputune>
    <shares>100</shares>
  </cputune>
  <os>
    <type>exe</type>
2018 Sep 14
1
Re: NUMA issues on virtualized hosts
Hello again,
While the iozone writes are slow, this is what slabtop looks like:
    OBJS   ACTIVE  USE OBJ SIZE   SLABS OBJ/SLAB CACHE SIZE NAME
62476752 62476728   0%    0.10K 1601968       39   6407872K buffer_head
 1000678   999168   0%    0.56K  142954        7    571816K radix_tree_node
  132184   125911   0%    0.03K    1066      124      4264K kmalloc-32
  118496   118224   0%    0.12K    3703       32     14812K kmalloc-node
   73206    56467   0%    0.19K    3486       21
2016 Nov 08
3
Sharing network namespace between containers
Hello
Based on the lxc driver documentation, I am trying to create an XML definition
to share an existing network namespace with another container. I am running
libvirt 1.2.15.
Here is the xml:
<domain type='lxc' xmlns:lxc='http://libvirt.org/schemas/domain/lxc/1.0'>
<name>nt</name>
<uuid>43c00192-e114-4e29-8ce7-4b5487f60a75</uuid>
<memory
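The excerpt is cut off above; for reference, the namespace-sharing elements
described in the lxc driver documentation take roughly the following form
inside such a domain (a sketch; the peer container name is illustrative, and a
follow-up in this thread suggests libvirt 1.2.15 silently ignores these
elements):

<domain type='lxc' xmlns:lxc='http://libvirt.org/schemas/domain/lxc/1.0'>
  <name>nt</name>
  <!-- memory, os and devices elements as usual, omitted here -->
  <lxc:namespace>
    <!-- share the network namespace of another libvirt LXC container -->
    <lxc:sharenet type='name' value='other-container'/>
    <!-- alternatively, attach to a named network namespace: -->
    <!-- <lxc:sharenet type='netns' value='red'/> -->
  </lxc:namespace>
</domain>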
2012 Apr 13
3
Guests can't connect to each other
Hi,
I'm using libvirt and qemu on Debian Wheezy, and I'm seeing strange
behavior: guests can't connect to each other when they're on the same
host.
On the host I'm using bonding (in active/backup mode) and a VLAN. It
looks like this:
eth0 \                 / macvtap0
      bond0 --- vlan222
eth1 /                 \ macvtap1
So I've got two guests, let's say A and B. When
2013 Jul 31
0
Re: start lxc container on fedora 19
On Wed, Jul 31, 2013 at 12:46:58PM +0530, Aarti Sawant wrote:
> hello,
>
> i am new to lxc, i have created a lxc container on fedora 19
> i created a container rootfs of fedora 19 by using
> yum --installroot=/containers/test1 --releasever=19 install openssh
>
> test1.xml file for container test1
> <domain type="lxc">
> <name>test1</name>
2016 Nov 08
0
Re: Sharing network namespace between containers
On Tue, Nov 08, 2016 at 09:01:34AM +0530, Harish Vishwanath wrote:
>Hello
>
>Based on the lxc driver documentation, I am trying to create an xml to
>share an existing network namespace with another container. I am running
>libvirt 1.2.15.
>
>Here is the xml:
>
><domain type='lxc' xmlns:lxc='http://libvirt.org/schemas/domain/lxc/1.0'>
>
>
2016 Nov 08
1
Re: Sharing network namespace between containers
Thank you. It looks like after I 'ignore', nothing is persisted in the XML
for the app. Any idea what minimum libvirt version is required for this
feature?
Regards,
Harish
On Tue, Nov 8, 2016 at 1:36 PM, Martin Kletzander <mkletzan@redhat.com>
wrote:
> On Tue, Nov 08, 2016 at 09:01:34AM +0530, Harish Vishwanath wrote:
>
>> Hello
>>
>> Based on the lxc
2018 Sep 17
2
Re: NUMA issues on virtualized hosts
On 09/14/2018 03:36 PM, Lukas Hejtmanek wrote:
> Hello,
>
> ok, I found that cpu pinning was wrong, so I corrected it to be 1:1. The issue
> with iozone remains the same.
>
> The spec is running, however, it runs slower than 1-NUMA case.
>
> The corrected XML looks like follows:
[Reformatted XML for better reading]
<cpu mode="host-passthrough">
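For comparison, a complete definition of this shape usually spells out each
guest NUMA cell and pins guest memory to the matching host nodes via
<numatune>. The cell count, sizes and node IDs below are illustrative rather
than the values from this thread (and <cell memory=...> defaults to KiB):

<cpu mode="host-passthrough">
  <topology sockets="8" cores="4" threads="1"/>
  <numa>
    <cell id="0" cpus="0-3" memory="62000000"/>  <!-- unit defaults to KiB -->
    <cell id="1" cpus="4-7" memory="62000000"/>
  </numa>
</cpu>
<numatune>
  <memnode cellid="0" mode="strict" nodeset="0"/>
  <memnode cellid="1" mode="strict" nodeset="1"/>
</numatune>

Without the <numatune> part, the guest sees NUMA cells but its memory can still
be allocated from arbitrary host nodes, which is a common source of the kind of
slowdown discussed in this thread.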
2016 Jul 26
2
How can I run a command in a container from the host?
How can I run a command in a container from the host, just like the lxc
command lxc-attach?
I run:
virsh -c lxc:/// lxc-enter-namespace fedora2 --noseclabel /bin/ls
but get this error:
libvirt: error : Expected at least one file descriptor
error: internal error: Child process (14930) unexpected exit status 125
Here is my libvirt.xml
<domain type='lxc'>
<name>fedora2</name>
2014 Feb 12
2
Re: Help? Running into problems with migrateToURI2() and virDomainDefCheckABIStability()
On 02/11/2014 04:45 PM, Cole Robinson wrote:
> On 02/10/2014 06:46 PM, Chris Friesen wrote:
>> Hi,
>>
>> We've run into a problem with libvirt 1.1.2 and are looking for some comments
>> on whether this is a bug or design intent.
>>
>> We're trying to use migrateToURI() but we're using a few things (numatune,
>> vcpu mask, etc.) that may need
2013 Jul 31
2
start lxc container on fedora 19
Hello,
I am new to lxc. I have created an lxc container on Fedora 19.
I created a container rootfs of Fedora 19 using:
yum --installroot=/containers/test1 --releasever=19 install openssh
test1.xml file for container test1
<domain type="lxc">
<name>test1</name>
<vcpu placement="static">1</vcpu>
<cputune>
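The excerpt above is cut off. For orientation, a minimal bootable LXC domain
definition generally also needs an init command, a root filesystem mount and a
console in addition to the elements shown. The sketch below is illustrative
(memory size and init binary are assumptions), not the poster's actual
test1.xml:

<domain type="lxc">
  <name>test1</name>
  <memory unit="KiB">524288</memory>
  <vcpu placement="static">1</vcpu>
  <cputune>
    <shares>1024</shares>
  </cputune>
  <os>
    <type>exe</type>
    <init>/sbin/init</init>  <!-- assumption: a full distro rootfs; /bin/sh also works for testing -->
  </os>
  <devices>
    <filesystem type="mount">
      <source dir="/containers/test1"/>  <!-- the rootfs created with yum --installroot above -->
      <target dir="/"/>
    </filesystem>
    <console type="pty"/>
  </devices>
</domain>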
2017 Apr 26
0
Re: Tunnelled migrate Windows7 VMs halted
On Wed, Apr 26, 2017 at 08:51:39AM -0500, Eric Blake wrote:
> >
> > I migrated a Windows 7 VM with libvirtd tunnelled, the VM halted
> > on the target although the status is running.
What do you mean by "halted"? Has the guest OS shut down, has QEMU
crashed, or is it something else?
> >
> >
> > [root@test15 ~]# virsh migrate --live --p2p --tunnelled
2012 Jun 18
1
How to set CPU limits for a Xen domU in libvirt
Hello,
I looked in the documents. It looks like the <cputune> period and
quota are only supported by the qemu driver. For a Xen user, how can one
set a hard CPU limit?
--
Tony
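For reference, with the qemu/KVM driver the hard cap the question refers to is
expressed as below; the values are illustrative, the period and quota are in
microseconds, and the quota is enforced per vcpu thread:

<cputune>
  <period>100000</period>  <!-- enforcement interval: 100 ms -->
  <quota>50000</quota>     <!-- each vcpu may run at most 50 ms per period, i.e. 50% of one host CPU -->
</cputune>

For Xen at that time, the usual way to impose a similar hard limit was the
credit scheduler's cap parameter, which the libvirt Xen driver exposes through
virsh schedinfo.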