Displaying 20 results from an estimated 300 matches similar to: "cputune shares with multiple cpu and pinning"
2013 Jun 27
2
qemu-img convert to "sparse" LV
Apologies, as this is not a specific libvirt question.
Is qemu-img convert compatible with thin-provisioned LVs as targets?
I wanted to convert a file-based image to a LV image
where the file-based image has a capacity much larger than the actual data
it contains, so it has a small footprint on disk
(either a qcow2 or a raw, but sparse, image).
If I use qemu-img convert -O raw ... with a
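A minimal sketch of one way to attempt this, assuming a hypothetical thin pool vg0/pool0 and a 20G source image named guest1.qcow2:
  lvcreate -V 20G -T vg0/pool0 -n guest1        # thin LV with a 20G virtual size
  qemu-img convert -O raw -S 4096 guest1.qcow2 /dev/vg0/guest1
  # -S treats runs of zeros as unallocated, so zero regions need not be written out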
2013 Jun 14
0
can virsh set the cpuset attribute of <vcpu ..> (CPU Allocation) ?
Is it possible to use virsh to set the cpuset attribute of the CPU
Allocation element in a domain?
<domain>
...
<vcpu placement='static' cpuset="1-4,^3,6" current="1">2</vcpu>
...
</domain>
I have seen that virsh vcpupin and virsh emulatorpin can be used to query
and set
the cpusets of the <vcpupin> and <emulatorpin>
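For reference, a sketch of what virsh can already adjust (domain name hypothetical); the cpuset attribute on <vcpu> itself generally still has to be changed with virsh edit:
  virsh vcpupin mydomain 0 1-4,^3,6 --config    # per-vCPU <vcpupin> under <cputune>
  virsh emulatorpin mydomain 1-4,^3,6 --config  # <emulatorpin> under <cputune>
  virsh edit mydomain                           # edit cpuset= on <vcpu> directly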
2014 Jun 08
0
transient domain in virsh list as shut off
We have just seen libvirt ending up in
an inconsistent state: a shut off transient domain.
Has anyone seen this before?
2012 Sep 21
0
picking a NUMA cell for pinning using virsh freecell
Hi
I'd like to pin the vcpu of a guest to a pcpu.
The docs clearly say
https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/ch09s04.html
"Locking a guest to a particular NUMA node offers no benefit if that node
does not have sufficient free memory for that guest. libvirt stores
information on the free memory available on
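A sketch of the usual sequence (guest name and cell/cpu numbers hypothetical), using freecell to choose the cell before pinning:
  virsh freecell --all                       # free memory per NUMA cell
  virsh numatune guest1 --nodeset 1 --live   # bind guest memory to the chosen cell
  virsh vcpupin guest1 0 8-11 --live         # pin vCPU 0 to pCPUs local to that cell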
2014 Aug 27
0
Cputune causing VM to show usage as hardware interrupts
Hey All,
What is the correct method for using cputune to cap a VM at a percentage of a host
core? From testing I currently have period set to 10,000 and quota set to
3,500. This gets me 35% of a used core on the host (36 3.4 2:42.20
qemu-system-x86); however, when running stress inside the VM I am seeing
this usage in top:
%Cpu(s): 0.0 us, 0.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 35.2 hi, 0.0 si, 64.8
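A sketch of applying the same cap from the command line (domain name hypothetical); 3,500/10,000 works out to 35% of one pCPU per vCPU:
  virsh schedinfo guest1 --set vcpu_period=10000 --live
  virsh schedinfo guest1 --set vcpu_quota=3500 --live
  virsh schedinfo guest1     # verify vcpu_period / vcpu_quota took effect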
2017 Apr 27
1
Does lxc support cputune/vcpusched option
2019 Aug 29
0
[libvirtd] qemu_process: reset CPU affinity to all enabled CPUs, when runs in custom cpuset
Hello All,
Since version 4.5.0-23.el7 (Red Hat 7.7), when I launch a pinned VM,
libvirtd resets the CPU affinity to all enabled host CPUs if it runs in a
custom cpuset.
I can't reproduce this behavior with 4.5.0-10.el7_6.12 on the same
kernel version (Red Hat 7.7).
Libvirtd runs in a custom cpuset 'libvirt', where the number of
available cpus is restricted to 0,2,4,6,8.
And this
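A sketch for comparing what libvirt reports with the affinity the kernel actually applied (guest name hypothetical):
  virsh vcpupin guest1                               # pinning as libvirt reports it
  pid=$(pgrep -f 'qemu.*guest1' | head -n1)
  taskset -cp "$pid"                                 # affinity of the main qemu process
  grep Cpus_allowed_list /proc/$pid/task/*/status    # per-thread (per-vCPU) affinity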
2006 Sep 29
1
[Xen-ia64-devel] RE: IPF/Xen VTI domain testing report for Xen 3.0.3 RC1
>5. LTP testing might run very slow in SMP VTI Domain with credit scheduler. If binding VTI
>and Xen0 vcpu, this bug won't be there.
Hi Keir,
In the credit scheduler, two vcpus in the same domain may be scheduled on the same CPU. For instance, vcpu0 and vcpu1 are running on the same CPU; vcpu0 is doing spin_lock in the guest, then its time slice is due, and vcpu0 is scheduled out before doing
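A sketch of the binding described above with the Xen 3.x toolstack (domain name and pCPU numbers hypothetical):
  xm vcpu-pin 0 all 0-1        # keep dom0 vcpus on pcpus 0-1
  xm vcpu-pin vti-guest 0 2    # put the VTI domain's vcpus on distinct pcpus
  xm vcpu-pin vti-guest 1 3    # so the two vcpus never compete for the same pcpu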
2010 Sep 22
1
Question on CPU pinning in Python
Looking at the python API, once I have a domain object I can call domain.pinVcpu to pin a specific vcpu to a physical CPU. I found http://www.mail-archive.com/libvir-list at redhat.com/msg04562.html which mentioned some changes to the C API in the Python implementation, and was wondering if my understanding is correct.
Say that I have a host system with 16 logical CPUs, 0-15. If I wanted to
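For comparison, the virsh counterpart of the same pinning operation (guest name hypothetical); pinVcpu's cpumap is, as I understand it, a per-pCPU boolean tuple expressing the same cpulist:
  virsh vcpupin guest1 0 3 --live    # pin vCPU 0 to logical CPU 3
  virsh vcpupin guest1               # list current vCPU:pCPU placement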
2015 Oct 29
2
How to retrieve legacy cgroups location ?
Hi,
As described on the "Control Groups Resource Management" libvirt page:
Legacy cgroups layout
Prior to libvirt 1.0.5, the cgroups layout created by libvirt was different from that described above, and did not allow for administrator customization. Libvirt used a fixed, 3-level hierarchy libvirt/{qemu,lxc}/$VMNAME which was rooted at the point in the hierarchy where libvirtd itself was
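A sketch for locating the legacy cgroup of a running guest (guest name and mount point hypothetical):
  pid=$(pgrep -f 'qemu.*guest1' | head -n1)
  cat /proc/$pid/cgroup                          # path of the process within each controller
  ls /sys/fs/cgroup/cpu/libvirt/qemu/guest1/     # typical pre-1.0.5 3-level location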
2015 Oct 29
0
Re: How to retrieve legacy cgroups location ?
On Thu, Oct 29, 2015 at 10:40:44AM +0000, Jean-Pierre Ribeauville wrote:
>Hi,
>
>As told in "Control Groups Resource Management" libvirt page :
>Legacy cgroups layout
>Prior to libvirt 1.0.5, the cgroups layout created by libvirt was different from that described above, and did not allow for administrator customization. Libvirt used a fixed, 3-level hierarchy
2013 May 07
1
[PATCH v3] xen/gic: EOI irqs on the right pcpu
We need to write the irq number to GICC_DIR on the physical cpu that
previously received the interrupt, but currently we are doing it on the
pcpu that received the maintenance interrupt. As a consequence if a
vcpu is migrated to a different pcpu, the irq is going to be EOI'ed on
the wrong pcpu.
This covers the case where dom0 vcpu0 is running on pcpu1 for example
(you can test this
2013 Nov 05
2
syslinux.efi pxeboot across multiple subnets
The same client was used for syslinux.efi (both success on same subnet and
failure on different subnet) and grub.efi. The DHCP host block is setup
like:
host testing {
hardware ethernet {mac} ;
next-server 10.16.195.178 ;
filename "rhel64/syslinux.efi" ;
}
I'll pull a tcpdump filtering by the IP tomorrow when I get back to the
systems.
On Mon, Nov 4, 2013 at 6:41 PM,
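A sketch of such a capture, reusing the next-server address from the host block above (interface name hypothetical):
  tcpdump -ni eth0 -w pxe.pcap 'host 10.16.195.178 and (port 67 or port 68 or port 69)'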
2005 Aug 08
1
[PATCH] Fix domain CPU time calculation to count all VCPU times correctly
Currently, the getdomaininfo function (used to fill in a
dom0_getdomaininfo_t for a domain) calculates a domain's total CPU time
from its VCPU times using the code:
if ( v->cpu_time > cpu_time )
    cpu_time += v->cpu_time;
This causes a VCPU's time to only be counted if it is greater than the
current total; so if VCPU0 has 10 seconds and VCPU1 has 5, the
2005 Sep 20
1
timer interrupts, virqs, irq balance questions
I've been looking into bug #195 and I have a couple of questions on
how timer interrupts and virqs are handled. Is it possible for dom0
linux to irq balance timer interrupts to different cpus? That is, if
xen sends a VIRQ_TIMER to vcpu0, backed by cpu0, is it possible for that
interrupt to be handled by vcpu1, backed by cpu1 ?
After putting some debug code into timer_interrupt
2013 Apr 22
1
failure creating a snapshot volume within a lvm-based pool
Hi
I have defined a logical pool and a volume within it
# virsh vol-create-as images_lvm myvol 2G
Vol myvol created
# virsh vol-list images_lvm
Name                 Path
-----------------------------------------
myvol                /dev/libvirt_images_vg/myvol
If I try to create another volume using the previous one as the backing-vol,
the creation fails with what looks like an incorrect
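A sketch of the step that fails, using the volume above as the backing store (snapshot volume name hypothetical):
  virsh vol-create-as images_lvm mysnap 2G --backing-vol myvol --backing-vol-format raw
  virsh vol-info --pool images_lvm mysnap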
2013 Nov 05
0
syslinux.efi pxeboot across multiple subnets
On 2013-11-04 at 20:26, Jason Matthews wrote:
> The same client was used for syslinux.efi (both success on same subnet and
> failure on different subnet) and grub.efi. The DHCP host block is setup
> like:
>
> host testing {
> hardware ethernet {mac} ;
> next-server 10.16.195.178 ;
> filename "rhel64/syslinux.efi" ;
> }
>
> I'll pull a
2013 Jun 28
0
Re: qemu-img convert to "sparse" LV
On 27/06/13, Edoardo Comar wrote:
> Apologies, as this is not a specific libvirt question.
>
> Is qemu-img convert compatible with thin-provisioned LVs as targets?
>
> I wanted to convert a file-based image to a LV image
> where the file-based image has a capacity much larger than the actual data
> it contains, so it has a small footprint on disk
> (either a qcow2 or
2013 Nov 05
2
syslinux.efi pxeboot across multiple subnets
Sorry. Here are the tcpdumps on pastebin:
Filtered by IP taken on tftp server: http://pastebin.com/NgesF5p9
Taken from mirrored port: http://pastebin.com/kuw22GF2
On Tue, Nov 5, 2013 at 12:21 PM, Geert Stappers <stappers at stappers.nl> wrote:
> On 2013-11-04 at 20:26, Jason Matthews wrote:
> > The same client was used for syslinux.efi (both success on same subnet
> and
>
2011 Mar 14
0
cgroups limitations on Virtual machines
I have 2 VMs launched by 'virsh create <xml file>'. Both VMs get 2
vcpus (out of a total of 2 cores on the host).
I then try to bias their cpu cycle quota by manipulating the cpu_shares
(virsh schedinfo --set cpu_shares=<value> vm1/2) so that VM1 will get 3
times the cpu cycles VM2 gets
(e.g. VM1 cpu_shares = 150, VM2 cpu_shares = 50).
There are no other VMs defined or
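A sketch of the commands in question, with the example values from above:
  virsh schedinfo vm1 --set cpu_shares=150 --live
  virsh schedinfo vm2 --set cpu_shares=50 --live
  virsh schedinfo vm1     # confirm the value that was actually applied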