search for: maxcpus

Displaying 20 results from an estimated 89 matches for "maxcpus".

2014 Jan 13
2
how to detect if qemu supports live disk snapshot
...</secmodel> </host> <guest> <os_type>hvm</os_type> <arch name='i686'> <wordsize>32</wordsize> <emulator>/usr/bin/qemu-system-i386</emulator> <machine canonical='pc-i440fx-1.6' maxCpus='255'>pc</machine> <machine maxCpus='255'>pc-q35-1.4</machine> <machine maxCpus='255'>pc-q35-1.5</machine> <machine canonical='pc-q35-1.6' maxCpus='255'>q35</machine> <machine maxCpus=...
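One way to probe for this from outside the guest is to ask the qemu monitor which QMP commands it exposes; live disk snapshots need blockdev-snapshot-sync (and transaction for multi-disk consistency). A sketch, assuming a running domain whose name ('guest1') is hypothetical:

    # Lists the QMP commands this qemu binary understands.
    virsh qemu-monitor-command guest1 --pretty \
        '{"execute": "query-commands"}' | grep -E 'snapshot|transaction'
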
2019 Mar 27
6
[PATCH 0/2] Limit number of hw queues by nr_cpu_ids for virtio-blk and virtio-scsi
...s the number of hw queues by nr_cpu_ids. No matter how many hw queues are used by virtio-blk/virtio-scsi, as they both have (tag_set->nr_maps == 1), they can use at most nr_cpu_ids hw queues. In addition, specifically for the PCI scenario, when the 'num-queues' specified by qemu is more than maxcpus, virtio-blk/virtio-scsi would not be able to allocate more than maxcpus vectors in order to have a vector for each queue. As a result, they fall back to MSI-X with one vector for config and one shared for queues. Considering the above reasons, this patch set limits the number of hw queues used by nr...
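The change described here amounts to clamping the device-advertised queue count against nr_cpu_ids before vectors are allocated. A minimal sketch of the idea using the virtio-blk names (an illustration of the approach, not the verbatim patch):

    /* Read the queue count the device advertises, then cap it at
     * nr_cpu_ids so MSI-X can grant one vector per queue plus one
     * for config space. */
    err = virtio_cread_feature(vdev, VIRTIO_BLK_F_MQ,
                               struct virtio_blk_config, num_queues,
                               &num_vqs);
    if (err)
            num_vqs = 1;

    num_vqs = min_t(unsigned int, nr_cpu_ids, num_vqs);
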
2018 Jun 19
0
Re: [PATCH] v2v: Set machine type explicitly for outputs which support it (RHBZ#1581428).
...d down to libvirt. However libvirt > capabilities doesn't advertise these machine types exactly, but > something more like "pc-q35-2.6". Does libvirt map "q35" to something > intelligent? It'll report both - one as an alias of the other, e.g. <machine maxCpus='255'>pc-i440fx-2.11</machine> <machine canonical='pc-i440fx-2.11' maxCpus='255'>pc</machine> <machine maxCpus='1'>isapc</machine> <machine maxCpus='255'>pc-i440fx-2.9</machine> <machine...
2018 Jun 19
2
Re: [PATCH] v2v: Set machine type explicitly for outputs which support it (RHBZ#1581428).
On Tue, Jun 19, 2018 at 11:43:38AM +0100, Daniel P. Berrangé wrote: > I'd encourage apps to check the capabilities XML to see what > machine types are available. One issue is we don't always have access to the target hypervisor. For example in the Glance case we have to write something which will be picked up by Nova much later: > > + "hw_machine_type", >
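For the Glance path, hw_machine_type ends up as ordinary image metadata that Nova's libvirt driver consults at boot time, so it can also be set by hand. A sketch, with a hypothetical image name:

    # 'my-v2v-image' is hypothetical; the property name is the one
    # quoted in the patch above.
    openstack image set --property hw_machine_type=q35 my-v2v-image
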
2014 Jul 10
2
How to config qga to support dompmsuspend
...ype='qemu'>+0:+0</baselabel> </secmodel> </host> <guest> <os_type>hvm</os_type> <arch name='ppc'> <wordsize>32</wordsize> <emulator>/usr/bin/qemu-system-ppc</emulator> <machine maxCpus='1'>g3beige</machine> <machine maxCpus='32'>ppce500</machine> <machine maxCpus='1'>mac99</machine> <machine maxCpus='15'>mpc8544ds</machine> <machine maxCpus='1'>taihu</machine>...
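For dompmsuspend the guest needs qemu-guest-agent running, and the domain XML needs a virtio channel for it. A minimal sketch of the channel element (libvirt manages the socket path, so no explicit path is needed in the source element):

    <channel type='unix'>
      <source mode='bind'/>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
    </channel>
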
2008 Feb 15
4
Pin CPU of dom0
...any cpu
Domain-0     0     7     -   --p     0.6   any cpu
dom1         1     0     4   -b-     7.7   4
dom2         2     0     6   -b-     7.6   6

So, dom0 is actually using CPU 1,2,4,5 instead of 0,1,2,3. Then I added "maxcpus=4" in the grub file:

kernel /boot/xen.gz console=vga maxcpus=4

After reboot, sudo xm vcpu-list:

Name        ID  VCPU  CPU  State  Time(s)  CPU Affinity
Domain-0     0     0    0  r--        8.0  any cpu
Domain-0     0...
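Note that maxcpus= only caps how many physical CPUs the hypervisor brings up; it does not pin dom0's vcpus to particular CPUs. A hedged alternative, using hypervisor options whose availability depends on the Xen version (dom0_vcpus_pin pins dom0 vcpu N to physical CPU N):

    kernel /boot/xen.gz console=vga dom0_max_vcpus=4 dom0_vcpus_pin
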
2013 Nov 20
3
Arinc653 does not run VM
Hi, I am using Xen 4.2.3 installed on an Intel Atom with a Debian Dom0 and 3 Debian domUs installed. I am trying to run some benchmarks using the Arinc653 scheduler. I have edited my grub options to boot with the 'maxcpus=1 sched=arinc653' options. I can boot the dom0 and verify that the scheduler is enabled. However, when I xl create, the VM is created but I cannot connect a console to it. Running xl list shows the VM, but it has no state (------) and Time equals 0.0. The scheduler is clearly not alloca...
2012 Feb 07
1
Grub options for dom0
...ead that it is recommended to restrict one CPU for dom0. In the xen wiki there's a list of grub parameters http://wiki.xen.org/wiki/Xen_Hypervisor_Boot_Options but it seems deprecated? http://groups.google.com/group/ganeti/browse_thread/thread/a18979bdd00f6461 Do you guys use this option (maxcpus) as shown here as well? http://www.indiangnu.org/2009/how-to-disable-cores-of-cpu/ Do you need to modify /etc/xen/xend-config.sxp to match what's been set in grub? What other grub-related options should I consider for my dom0? Also, where to get an updated list of parameters...
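Regarding xend-config.sxp: the classic knob there is dom0-cpus, which trims dom0's vcpus after boot rather than hiding physical CPUs from the hypervisor. A sketch (the value is illustrative; 0 means "use all"):

    # /etc/xen/xend-config.sxp
    (dom0-cpus 1)
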
2019 Mar 27
0
[PATCH 1/2] virtio-blk: limit number of hw queues by nr_cpu_ids
...the block layer limits the number of hw queues by nr_cpu_ids. No matter how many hw queues are used by virtio-blk, as it has (tag_set->nr_maps == 1), it can use at most nr_cpu_ids hw queues. In addition, specifically for the PCI scenario, when the 'num-queues' specified by qemu is more than maxcpus, virtio-blk would not be able to allocate more than maxcpus vectors in order to have a vector for each queue. As a result, it falls back to MSI-X with one vector for config and one shared for queues. Considering the above reasons, this patch limits the number of hw queues used by virtio-blk by nr_cp...
2019 Mar 27
0
[PATCH 2/2] scsi: virtio_scsi: limit number of hw queues by nr_cpu_ids
...he block layer limits the number of hw queues by nr_cpu_ids. No matter how many hw queues are used by virtio-scsi, as it has (tag_set->nr_maps == 1), it can use at most nr_cpu_ids hw queues. In addition, specifically for the PCI scenario, when the 'num_queues' specified by qemu is more than maxcpus, virtio-scsi would not be able to allocate more than maxcpus vectors in order to have a vector for each queue. As a result, it falls back to MSI-X with one vector for config and one shared for queues. Considering the above reasons, this patch limits the number of hw queues used by virtio-scsi by nr_...
2019 Apr 27
0
[PATCH AUTOSEL 5.0 65/79] virtio-blk: limit number of hw queues by nr_cpu_ids
...the block layer limits the number of hw queues by nr_cpu_ids. No matter how many hw queues are used by virtio-blk, as it has (tag_set->nr_maps == 1), it can use at most nr_cpu_ids hw queues. In addition, specifically for the PCI scenario, when the 'num-queues' specified by qemu is more than maxcpus, virtio-blk would not be able to allocate more than maxcpus vectors in order to have a vector for each queue. As a result, it falls back to MSI-X with one vector for config and one shared for queues. Considering the above reasons, this patch limits the number of hw queues used by virtio-blk by nr_cp...
2019 Apr 27
0
[PATCH AUTOSEL 4.19 44/53] virtio-blk: limit number of hw queues by nr_cpu_ids
...the block layer limits the number of hw queues by nr_cpu_ids. No matter how many hw queues are used by virtio-blk, as it has (tag_set->nr_maps == 1), it can use at most nr_cpu_ids hw queues. In addition, specifically for the PCI scenario, when the 'num-queues' specified by qemu is more than maxcpus, virtio-blk would not be able to allocate more than maxcpus vectors in order to have a vector for each queue. As a result, it falls back to MSI-X with one vector for config and one shared for queues. Considering the above reasons, this patch limits the number of hw queues used by virtio-blk by nr_cp...
2019 Apr 27
0
[PATCH AUTOSEL 4.14 26/32] virtio-blk: limit number of hw queues by nr_cpu_ids
...the block layer limits the number of hw queues by nr_cpu_ids. No matter how many hw queues are used by virtio-blk, as it has (tag_set->nr_maps == 1), it can use at most nr_cpu_ids hw queues. In addition, specifically for the PCI scenario, when the 'num-queues' specified by qemu is more than maxcpus, virtio-blk would not be able to allocate more than maxcpus vectors in order to have a vector for each queue. As a result, it falls back to MSI-X with one vector for config and one shared for queues. Considering the above reasons, this patch limits the number of hw queues used by virtio-blk by nr_cp...
2019 Apr 27
0
[PATCH AUTOSEL 4.9 13/16] virtio-blk: limit number of hw queues by nr_cpu_ids
...the block layer limits the number of hw queues by nr_cpu_ids. No matter how many hw queues are used by virtio-blk, as it has (tag_set->nr_maps == 1), it can use at most nr_cpu_ids hw queues. In addition, specifically for the PCI scenario, when the 'num-queues' specified by qemu is more than maxcpus, virtio-blk would not be able to allocate more than maxcpus vectors in order to have a vector for each queue. As a result, it falls back to MSI-X with one vector for config and one shared for queues. Considering the above reasons, this patch limits the number of hw queues used by virtio-blk by nr_cp...
2011 Sep 07
0
[PATCH] libxl: vcpu_avail is a bitmask, use it as such
...* support a bitmask of available cpus but it supports a
+     * number of available cpus lower than the maximum number of
+     * cpus. Let's do that for now. */
      if (info->vcpu_avail)
-         flexarray_append(dm_args, libxl__sprintf(gc, "%d,maxcpus=%d", info->vcpus, info->vcpu_avail));
+         flexarray_append(dm_args, libxl__sprintf(gc, "%d,maxcpus=%d",
+             __builtin_popcount(info->vcpu_avail), info->vcpus));
      else
          flexarray_append(dm_args, libxl__sprin...
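The effect of the corrected argument order, shown as a standalone sketch with made-up values: the online count comes from the popcount of the bitmask and the maximum from vcpus, matching qemu's -smp <online>,maxcpus=<max> syntax.

    #include <stdio.h>

    int main(void)
    {
        int vcpus = 4;        /* maximum number of vcpus       */
        int vcpu_avail = 0x3; /* bitmask: vcpus 0 and 1 online */

        /* qemu expects "-smp <online>,maxcpus=<max>" */
        printf("-smp %d,maxcpus=%d\n",
               __builtin_popcount(vcpu_avail), vcpus);
        return 0;             /* prints: -smp 2,maxcpus=4 */
    }
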
2011 Mar 09
0
[PATCH 04/11] x86: cleanup mpparse.c
...y.h>
 #include <xen/sched.h>
-#include <asm/mc146818rtc.h>
 #include <asm/bitops.h>
 #include <asm/smp.h>
 #include <asm/acpi.h>
@@ -34,36 +33,31 @@
 #include <bios_ebda.h>

 /* Have we found an MP table */
-int smp_found_config;
-unsigned int __devinitdata maxcpus = NR_CPUS;
+bool_t __initdata smp_found_config;

 /*
  * Various Linux-internal data structures created from the
  * MP-table.
  */
-int apic_version [MAX_APICS];
-int mp_bus_id_to_type [MAX_MP_BUSSES];
-int mp_bus_id_to_node [MAX_MP_BUSSES];
-int mp_bus_id_to_local [MAX_MP_BUSSES];
-int mp_bus_id...
2007 Nov 29
6
PCI Passthrough to HVM on xen-unstable
...kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/VolGroup00/LogVol01 crashkernel=128M@16M
        initrd /initrd-2.6.18-8.el5.img

#1
title Red Hat Enterprise Linux Server (2.6.18-8.el5-up)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/VolGroup00/LogVol01 crashkernel=128M@16M maxcpus=1
        initrd /initrd-2.6.18-8.el5.img

#2
title RHEL5-XEN311-RC2
        root (hd0,0)
        kernel /xen311/xen-3.1.1-rc2.gz dom0_mem=1300M loopback.nloopbacks=16
        module /xen311/vmlinuz-2.6.18-xen-311 root=/dev/VolGroup00/LogVol01 ro showopts console=tty0
        module /xen311/initrd-2.6.18-xen-311....
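Passthrough setups of this era also had to hide the device from dom0, typically via pciback on the dom0 kernel line. A sketch with a hypothetical BDF, assuming pciback is built into the kernel (otherwise the hide list is given as a module option instead):

    module /xen311/vmlinuz-2.6.18-xen-311 root=/dev/VolGroup00/LogVol01 ro pciback.hide=(0000:00:1d.0)
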
2007 Mar 26
12
System time monotonicity
It seems that VCPU system time isn't monotonic (using 3.0.4). It may be correlated with a VCPU being switched across real CPUs, but I haven't conclusively proved that. For example:

{ old = { time = {
      version       = 0x4ec
      pad0          = 0xe8e0
      tsc_timestamp = 0x22cc8398b7194
      system_time   =
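Guests are expected to read vcpu_time_info under the version field's seqlock-style protocol: an odd version means the hypervisor is mid-update, and a changed version means the payload must be re-read. A minimal sketch, assuming x86 barriers and a mapped vcpu_time_info pointer 't' (hypothetical):

    uint32_t version;
    uint64_t tsc, stime;

    do {
            version = t->version;
            rmb();                  /* read version before payload  */
            tsc   = t->tsc_timestamp;
            stime = t->system_time;
            rmb();                  /* read payload before re-check */
    } while ((version & 1) || version != t->version);
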
2007 Nov 28
1
network-bridge does not create veth or peth devices
...0 21
update-rc.d xendomains defaults 21 20
nano -w /boot/grub/menu.lst

# edited the following:
## Xen hypervisor options to use with the default Xen boot option
# xenhopt=dom0_mem=512M com1=9600,8n1 vcpus=1
## Xen Linux kernel options to use with the default Xen boot option
# xenkopt=console=tty0 maxcpus=1 console=ttyS0,9600n8

update-grub
-----
After a reboot everything looks just fine, except that ifconfig shows only a xenbr0, no peth or veth. The network works, and creating a guest with xen-tools works; however, the bridging is weird:
-----
$ brctl show
bridge name     bridge id               STP ena...
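The peth/veth pair is created by xend's network-bridge wrapper, so a missing peth usually means that script is not being invoked. A hedged check of the usual xend-config.sxp lines:

    # /etc/xen/xend-config.sxp
    (network-script network-bridge)
    (vif-script vif-bridge)
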
2005 Dec 05
11
Xen 3.0 and Hyperthreading an issue?
Just gave 3.0 a spin. Had been running 2.0.7 for the past 3 months or so without problems (aside from intermittent failure during live migration). Anyway, 3.0 seems to have an issue with my machine. It starts up the 4 domains that I've got defined (was running 6 user domains with 2.0.7, but two of those were running 2.4 kernels which I can't seem to build with Xen 3.0 yet, and