2023 May 08
Windows Guest on KVM running "single core" after windows update
Hey all,
I have a Windows 10 Pro (64-bit) guest that has run under libvirt/KVM for a long time and that I think Windows Update finally narfed.
The hardware is a Supermicro motherboard with dual Intel E5-2640 CPUs, for a total of 40 threads, and 64 GB of RAM.
The guest is allocated 2 sockets, 5 cores, 2 threads and 32 GB of RAM. (I've also tried 1 socket, 20 cores, 1 thread and 1s10c2t -- no help)
In an SSH session to the Linux host (CentOS 7, up to date), htop shows 1 core pegged and an abnormal distribution of load (like it used to), even with fun little tools in Windows like the Powe...
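For reference, a topology like the one described above is normally expressed in the libvirt domain XML roughly as follows. This is only a sketch: the element names are standard libvirt syntax, but the values are just the numbers from the post (20 vCPUs as 2 sockets x 5 cores x 2 threads, 32 GiB of RAM).

```xml
<!-- fragment of a libvirt domain definition (edit via `virsh edit <domain>`) -->
<memory unit='GiB'>32</memory>
<vcpu placement='static'>20</vcpu>
<cpu mode='host-model'>
  <!-- sockets * cores * threads must equal the vcpu count above -->
  <topology sockets='2' cores='5' threads='2'/>
</cpu>
```

The 1-socket/20-core variant the poster also tried would just swap the `topology` attribute values while keeping the vCPU count at 20.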
2011 Feb 10
Problem with Memory Throughput Difference between Two Nodes (sockets)
Hi all,
I installed Xen 4.0.1-rc3 & Linux 2.6.18.8 (dom0) on my machine (Intel Xeon X5650,
Westmere, 12 cores, 6 cores per socket, 2 sockets, 12 MB L3, ...)
After running the SPEC CPU2006 libquantum benchmark, I found that the two nodes
have different throughput.
I set up 6 VMs on each node and ran the workload in each VM.
A VM on node1 got a 1500 s execution time while a VM on node2 got 1990 s.
Previously, I also instal...
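For asymmetries like this, a common first step is to rule out remote-node memory access by pinning each VM's vCPUs to the node its memory was allocated on. In a Xen guest config this looks roughly like the sketch below; the exact pCPU ranges are an assumption, since they depend on how this particular box enumerates its two sockets.

```
# xl/xm guest config fragment: keep this VM on one NUMA node
vcpus = 2
# assuming pCPUs 0-5 are the six cores of socket/node 0 on this machine
cpus  = "0-5"
```

Comparing benchmark times with and without such pinning separates a genuine per-node hardware difference from scheduler-induced cross-node memory traffic.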
2023 May 09
Windows Guest on KVM running "single core" after windows update
...I have a Windows 10 Pro (64-bit) guest that has run under libvirt/KVM
>> for a long time and that I think Windows Update finally narfed.
>>
>> The hardware is a Supermicro motherboard with dual Intel E5-2640 CPUs,
>> for a total of 40 threads, and 64 GB of RAM.
>>
>> The guest is allocated 2 sockets, 5 cores, 2 threads and 32 GB of RAM.
>> (I've also tried 1 socket, 20 cores, 1 thread and 1s10c2t -- no help)
>>
>> In an SSH session to the Linux host (CentOS 7, up to date), htop shows 1
>> core pegged and an abnormal distribution of load (like it used to), even
>> wi...
2014 May 30
[PATCH] block: virtio_blk: don't hold spin lock during world switch
On Fri, May 30, 2014 at 11:19 AM, Jens Axboe <axboe at kernel.dk> wrote:
> On 2014-05-29 20:49, Ming Lei wrote:
>>
>> Firstly, it isn't necessary to hold the vblk->vq_lock spinlock
>> when notifying the hypervisor about queued I/O.
>>
>> Secondly, virtqueue_notify() will cause a world switch and
>> it may take a long time on some hypervisors (such as,