Displaying 20 results from an estimated 30000 matches similar to: "Changing CPU cache size of guest"
2020 Jun 15
1
Reintroduce modern CPU in model selection
Hi list,
in virt-manager ver. 2.2.1 (fully upgraded CentOS 8.1), the CPU model
list only shows ancient CPUs (the most recent is Nehalem-IBRS).
On the other hand, virt-manager 1.5.x (fully upgraded CentOS 7.8) offers
a rich selection of CPUs (as recent as Icelake).
Why was the list trimmed so much in the newer virt-manager? Is it
possible to enlarge it?
Thanks.
--
Danti Gionatan
Supporto
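For what it is worth, newer virt-manager appears to list only the models
that libvirt reports as usable on the host, but a specific model can still
be set directly in the domain XML with virsh edit. A minimal sketch,
assuming the hypervisor actually supports the chosen model (check
virsh domcapabilities first):

  <cpu mode='custom' match='exact'>
    <model fallback='forbid'>Icelake-Server</model>
  </cpu>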
2020 Feb 28
1
kvm presenting wrong CPU Topology for cache
Folks,
I am having a major performance issue with my Erlang application running
on an OpenStack KVM hypervisor, and after many tests I found something
wrong with my KVM guest CPU topology.
This is the KVM host - http://paste.openstack.org/show/790120/
This is the KVM guest - http://paste.openstack.org/show/790121/
If you carefully observe the output of both host and guest, you can see
that the guest machine threads has
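If the goal is to make the guest mirror the host's socket/core/thread
layout (and its cache description), that is normally expressed in the
domain XML. A rough sketch, where the counts are placeholders for the real
host topology and the <cache> element needs a reasonably recent libvirt:

  <vcpu placement='static'>8</vcpu>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='4' threads='2'/>
    <cache mode='passthrough'/>
  </cpu>

The vCPU count should equal sockets x cores x threads; otherwise the guest
falls back to the flat one-core-per-socket topology QEMU presents by
default.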
2010 Oct 27
2
Why is cpu-to-node mapping different between Xen 4.0.2-rc1-pre and Xen 4.1-unstable?
My system is a dual Xeon E5540 (Nehalem) HP Proliant DL380G6. When
switching between Xen 4.0.2-rc1-pre and Xen 4.1-unstable I noticed
that the NUMA info as shown by the Xen 'u' debug-key is different.
More specifically, the CPU to node mapping is alternating for 4.0.2
and grouped sequentially for 4.1. This difference affects the
allocation (wrt node/socket) of pinned VCPUs to the
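A quick way to compare the two versions and to force a particular
placement by hand, roughly (xm syntax of that era; domain name and CPU
list are placeholders):

  xm debug-keys u           # ask Xen to dump its NUMA/topology info
  xm dmesg | tail -n 40     # the dump appears in the hypervisor log
  xm vcpu-pin mydomu 0 0-3  # pin VCPU 0 of "mydomu" to the first node's CPUs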
2012 Jul 22
0
Preferred CPU model not allowed by hypervisor
Hi, all. I posted this message to libvir-list last night, but just realized
that was geared toward development rather than support. Apologies to those
who are subscribed to both for the dupe.
I'm having a weird problem where libvirt/qemu/kvm won't let me use the
processor model I have defined in my domain's config file. Instead, I get
the following error message in libvirtd.log:
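When libvirt refuses a model, it is usually because the model is missing
from what the emulator (or libvirt's CPU map) advertises, so a first
sanity check is to compare the two; something like the following, with the
binary name and path depending on the distribution:

  virsh capabilities | grep -A 10 '<cpu>'   # host CPU as libvirt sees it
  qemu-kvm -cpu help                        # models known to the QEMU binary itself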
2016 Sep 19
2
How to set QEMU qcow2 l2-cache-size using libvirt xml?
QEMU's default qcow2 L2 cache size is too small for large images (and small cluster sizes), resulting in very bad performance.
https://blogs.igalia.com/berto/2015/12/17/improving-disk-io-performance-in-qemu-2-5-with-the-qcow2-l2-cache/
shows a huge performance hit for a 20 GB qcow2 with the default 64 kB cluster size:
  L2 cache (MiB)   Average IOPS
  1 (default)      5100
  1.5
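At the time of this thread libvirt had no dedicated element for the qcow2
metadata caches (much newer releases grew a <metadata_cache> element), so
the workarounds usually reported are either the plain QEMU drive option or
the qemu:commandline passthrough. A sketch, where the drive id is whatever
libvirt generated (see the command line it logs in
/var/log/libvirt/qemu/<domain>.log):

  # plain QEMU: full coverage needs disk_size * 8 / cluster_size bytes,
  # i.e. 2.5 MiB for a 20 GB image with 64 kB clusters
  qemu-system-x86_64 ... -drive file=big.qcow2,format=qcow2,l2-cache-size=4M

  # libvirt: requires xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'
  # to be declared on the <domain> element
  <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='drive.drive-virtio-disk0.l2-cache-size=4M'/>
  </qemu:commandline>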
2009 May 27
3
Xen 3.2 with Ubuntu 8.04 (64-Bit) on Intel Nehalem (i7)
Hello list,
I am testing a Dell M610 with two quad-core Intel Nehalem CPUs at the
moment, so 16 logical cores in total with Hyper-Threading. As mentioned in
the subject, I am using Ubuntu 8.04 64-bit and Xen 3.2 from the Ubuntu
package.
During these tests I hit some issues which are not clear to me. I would be
thankful for any comments/hints/thoughts on the following topics:
1. I noticed that
2010 May 11
0
[LLVMdev] How does SSEDomainFix work?
On May 10, 2010, at 9:07 PM, NAKAMURA Takumi wrote:
> Hello. This is my 1st post.
Welcome!
> I have tried SSE execution domain fixup pass.
> But I am not able to see any improvements.
Did you actually measure runtime, or did you look at assembly?
> I expect the example below to use MOVDQA, PAND &c.
> (On Nehalem, ANDPS is much slower than PAND)
Are you sure? The
2010 Apr 13
2
HVM DomU with Kernel 2.6.27-19 on CentOS 5 hangs with 100% CPU at Linux bootup on Xen 3.2.1
Dear Xen Community,
Today we tried to upgrade our base server from a Xeon E5430 to a Xeon
X5570 (Nehalem). This worked fine so far, but our DomUs with Kernel
2.6.27 no longer boot - they start up until the following message appears:
[..]
Mount-cache hash table entries: 256
Initializing cgroup subsys ns
Initializing cgroup subsys cpuacct
Initializing cgroup subsys memory
Initializing cgroup
2010 May 11
2
[LLVMdev] How does SSEDomainFix work?
Hello. This is my 1st post.
I have tried the SSE execution domain fixup pass, but I am not able to
see any improvement.
I expect the example below to use MOVDQA, PAND &c.
(On Nehalem, ANDPS is much slower than PAND)
Please tell me if I am doing something wrong.
Thank you.
Takumi
Host: i386-mingw32
Build: trunk at 103373
foo.ll:
define <4 x i32> @foo(<4 x i32> %x,
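(The excerpt cuts the test case off; it is presumably a function of
roughly this shape, i.e. a pure integer-vector AND that one would expect
to stay in the integer domain as MOVDQA/PAND rather than ANDPS. This is a
guess at the shape, not the original file:)

  define <4 x i32> @foo(<4 x i32> %x, <4 x i32> %y) nounwind readnone {
  entry:
    %a = and <4 x i32> %x, %y
    ret <4 x i32> %a
  }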
2010 Mar 14
1
Grant table corruption with HVM guest
I'm experiencing total grant table corruption on a system and I'm
hoping my symptoms will ring a bell with a member of the Xen developer
community. The setup is Xen 4.0.0-RC2 (OpenSUSE 11.2 package) on
a Nehalem system. The sole guest instance is 64bit FreeBSD running
in HVM mode, a single vcpu, and a PCI passed-through LSI Logic 1068e
SAS controller. FreeBSD is running
2018 Feb 08
5
Re: Nested KVM: L0 guest produces kernel BUG on wakeup from managed save (while a nested VM is running)
>> In short: there is no (live) migration support for nested VMX yet. So as
>> soon as your guest is using VMX itself ("nVMX"), this is not expected to
>> work.
>
> Hi David, thanks for getting back to us on this.
Hi Florian,
(somebody please correct me if I'm wrong)
>
> I see your point, except the issue Kashyap and I are describing does
> not
2018 Feb 07
5
Re: Nested KVM: L0 guest produces kernel BUG on wakeup from managed save (while a nested VM is running)
On 07.02.2018 16:31, Kashyap Chamarthy wrote:
> [Cc: KVM upstream list.]
>
> On Tue, Feb 06, 2018 at 04:11:46PM +0100, Florian Haas wrote:
>> Hi everyone,
>>
>> I hope this is the correct list to discuss this issue; please feel
>> free to redirect me otherwise.
>>
>> I have a nested virtualization setup that looks as follows:
>>
>> - Host:
2009 Sep 04
4
DRIVER_IRQL_NOT_LESS_OR_EQUAL by xennet.sys, Server 2008 x64
Hey guys,
I'm trying to get a Gentoo dom0 running on some new Nehalem boxes.
Our dozen or so existing servers run a xenified 2.6.21, but we can't get the
new Intel network chipset working with it.
So, I'm giving a pv_ops dom0 from xen-tip/master a go, and Xen 3.4.1 to go
with it. rebase/master is a bit too bleeding edge for me. So far, it's working
great.
2015 Nov 27
2
Re: How to disable kvm_steal_time feature
On 27 Nov. 2015 at 14:15, "Martin Kletzander" <mkletzan@redhat.com>
wrote:
>
> On Fri, Nov 20, 2015 at 04:31:56PM +0100, Piotr Rybicki wrote:
>>
>> Hi.
>>
>> I would like to work around a bug where, after live migration of a KVM
>> guest, 100% steal time is shown in the guest.
>>
>> I've read that disabling
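Steal-time accounting is exposed to the guest as one of the KVM
paravirtual CPU features, so the workaround usually reported is masking
that feature on the QEMU command line; the exact flag spelling depends on
the QEMU version, and from libvirt it generally has to be injected through
the qemu:commandline passthrough:

  -cpu host,-kvm_steal_time        # older QEMU spelling (underscores)
  -cpu host,kvm-steal-time=off     # newer QEMU spelling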
2010 Jun 22
4
New kernel causes hardware error?
I have recently upgraded to 2.6.18-194.3.1.el5 and within several days
the machine crashed with the following error (repeating in mcelog):
MCE 0
HARDWARE ERROR. This is *NOT* a software problem!
Please contact your hardware vendor
CPU 2 BANK 8 MISC 41
MCG status:
MCi status:
Error overflow
Uncorrected error
MCi_MISC register valid
Processor context corrupt
MCA: MEMORY CONTROLLER AC_CHANNEL0_ERR
2018 Feb 06
2
Nested KVM: L0 guest produces kernel BUG on wakeup from managed save (while a nested VM is running)
Hi everyone,
I hope this is the correct list to discuss this issue; please feel
free to redirect me otherwise.
I have a nested virtualization setup that looks as follows:
- Host: Ubuntu 16.04, kernel 4.4.0 (an OpenStack Nova compute node)
- L0 guest: openSUSE Leap 42.3, kernel 4.4.104-39-default
- Nested guest: SLES 12, kernel 3.12.28-4-default
The nested guest is configured with
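For a stack like this, the first things usually checked are whether
nested VMX is enabled on the host and whether the L0 guest's vCPUs expose
the vmx flag at all; on an Intel host, roughly:

  cat /sys/module/kvm_intel/parameters/nested   # Y or 1 means nested VMX is on
  grep -c vmx /proc/cpuinfo                     # run inside the L0 guest
  # enable persistently on the host (module reload or reboot required):
  echo 'options kvm_intel nested=1' > /etc/modprobe.d/kvm_intel.conf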
2007 Apr 20
2
question re multiple backends and the 'guest' backend
An inherited system has the following configuration:
passdb backend = ldapsam:ldap://10.10.10.10 smbpasswd guest
What is the purpose of using multiple backends? The smb.conf man page
simply states that each backend will be searched in turn but why would
one ever use such a setup?
Secondly, the man page does not mention the 'guest' backend. To me
such a backend implies that an
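For what it is worth, chaining several passdb backends (and the special
'guest' backend) is an old Samba 2.x/3.0 construct that was dropped in
later releases, if memory serves; the usual single-backend equivalent,
with guest access handled by mapping, looks roughly like this (parameters
as documented in smb.conf(5)):

  [global]
      passdb backend = ldapsam:ldap://10.10.10.10
      map to guest = Bad User
      guest account = nobody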
2009 May 03
1
[LLVMdev] L1, L2 Cache line sizes in TargetData?
Hello,
Is there any way for a pass to determine the L1 or L2 cacheline size
of the target before the IR is lowered to machine instructions?
Thanks,
--
Nick Johnson
2017 Oct 02
2
NUMA split mode?
John R Pierce <pierce at hogranch.com> writes:
> On 10/1/2017 8:38 AM, hw wrote:
>> HP says that what they call "NUMA split mode" should be disabled in the
>> BIOS of the Z800 workstation when running Linux. They are reasoning
>> that Linux kernels do not support this feature and might not even boot
>> if it's enabled.
>
> hmm, that workstation is
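Whatever that BIOS switch actually does, the topology the kernel ends up
seeing can be checked from inside Linux, for example:

  numactl --hardware      # nodes with their CPUs and memory
  lscpu | grep -i numa    # NUMA node count and per-node CPU lists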
2010 Mar 26
14
Xen on Intel i7 Nehalem?
Hello,
I've got a new Intel Nehalem server with an i7-920 CPU.
I installed a Debian Lenny OS ...
Now, I cannot find any hypervisor or kernel images that seem to be
runnable on that hardware. :-O
All of them are for AMD:
apt-cache search hypervisor
linux-image-2.6-xen-amd64 - Linux 2.6 image on AMD64, oldstyle Xen support
linux-image-xen-amd64 - Linux image on AMD64, oldstyle Xen support
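Worth noting: "amd64" in those Debian package names is simply the 64-bit
x86 architecture name, not an AMD-only build, so the standard Lenny Xen
packages should run on an Intel Nehalem i7 as well; roughly (package names
from memory, please check apt-cache):

  aptitude install xen-hypervisor-3.2-1-amd64 xen-linux-system-2.6.26-2-xen-amd64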