search for: 2vcpu

Displaying 9 results from an estimated 9 matches for "2vcpu".

2012 Mar 07
8
Puppet Agent on Windows - High CPU Usage
...s actually being launched by the puppet-agent service in Windows. The CPU on the host was pegged around 50% all day long. When I shut down the puppet-agent it went down to a reasonable level...hovering in the low single digits most of the time. Start puppet-agent and CPU Usage (on both CPUs in a 2vCPU host) immediately pegs to 40%-60%....stop puppet-agent and it drops to near zero. Is this a known issue? Puppet-agent eating up tons of CPU time (via the ruby interpreter)? This host is a Windows 2008 VMware VM (VSphere 5) with 2vCPUs and 4GB of RAM assigned. It has no CPU or Memory resource li...
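A commonly suggested mitigation for this symptom (not taken from the thread itself, just a hedged sketch) is lengthening the agent's check-in interval and randomizing its start time; runinterval and splay are standard [agent] settings in puppet.conf:

```
[agent]
# Run every 2 hours instead of the 30-minute default,
# and randomize the start of each run to spread CPU load.
runinterval = 7200
splay = true
```

This reduces how often the Ruby interpreter spins up a full catalog run, which is usually where the sustained CPU time goes.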
2009 Dec 27
3
windows xp domU consuming 200% cpu (dom0 is quad-core) at idle
What have I done wrong? I installed an xp (service pack 2) domU using the virt-install example from the man page, on a zfs volume. I noticed this behavior on build 129, and I replicated the behavior on build 130: root@opensolaris:/tmp# virsh version Compiled against library: libvir 0.7.0 Using library: libvir 0.7.0 Using API: Xen 3.0.1 Running hypervisor: Xen 3.4 root@opensolaris:/tmp#
2014 Mar 09
2
Syslinux EFI + TFTPBOOT Support
...e choking the VM. Have you considered >> just 1 socket and 1 core per socket? 1) My assumption would be that the VMware virtualized AMD 79C970A (PCNet32 driver; vlance virtualDev) lacks proper EFI64 support. 2) I have 0 speed issues using your VMX. If you only have two real cores for this 2vCPU VM, you're choking it as the host needs time to run. If you choke it, you mess with timers. If you mess with timers, interface polling and a HUGE slew of items (including a guest OS's clock) will be slowed to a crawl. In my experience (both on ESXi and VMware Workstation), general sizin...
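The 1-socket/1-core suggestion above corresponds to .vmx settings like the following (an illustrative fragment; numvcpus and cpuid.coresPerSocket are the standard VMware keys):

```
numvcpus = "1"
cpuid.coresPerSocket = "1"
```

Dropping an overcommitted guest to a single vCPU leaves the host a free core to run on, which is the point of the advice in this thread.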
2014 Mar 10
0
Syslinux EFI + TFTPBOOT Support
On 2014/3/10 ?? 05:48, Gene Cumm wrote: > 1) My assumption would be that the VMware virtualized AMD 79C970A > (PCNet32 driver; vlance virtualDev) lacks proper EFI64 support. > > 2) I have 0 speed issues using your VMX. If you only have two real > cores for this 2vCPU VM, you're choking it as the host needs time to > run. If you choke it, you mess with timers. If you mess with timers, > interface polling and a HUGE slew of items (including a guest OS's > clock) will be slowed to a crawl. > > In my experience (both on ESXi and VMware Work...
2014 Mar 08
4
Syslinux EFI + TFTPBOOT Support
On Mar 8, 2014 10:08 AM, "Gene Cumm" <gene.cumm at gmail.com> wrote: > > On Mar 8, 2014 9:27 AM, "Steven Shiau" <steven at nchc.org.tw> wrote: > > > > > > > > On 03/08/2014 10:06 PM, Gene Cumm wrote: > > >> Hi Gene, > > >> > Thanks. As you suggested, I did a test about 6.03-pre6, and I still got > >
2012 Oct 30
6
[rfc net-next v6 0/3] Multiqueue virtio-net
...2.40GHz, 8 cores 2 numa nodes - Two directly connected 82599 - Host/Guest kernel: net-next with the mq virtio-net patches and mq tuntap patches Pktgen test: - Local host generates 64 byte UDP packets to guest. - average of 20 runs #q #vcpu kpps +improvement 1q 1vcpu: 264kpps +0% 2q 2vcpu: 451kpps +70% 3q 3vcpu: 661kpps +150% 4q 4vcpu: 941kpps +250% Netperf Local VM to VM test: - VM1 and its vcpu/vhost thread in numa node 0 - VM2 and its vcpu/vhost thread in numa node 1 - a script is used to launch netperf in demo mode and do the postprocessing to measure the aggregate...
2012 Dec 07
6
[PATCH net-next v3 0/3] Multiqueue support in virtio-net
...mp and calculate the aggregate performance. available: 2 nodes (0-1) node 0 cpus: 0 1 2 3 node 0 size: 8175 MB node 0 free: 7359 MB node 1 cpus: 4 5 6 7 node 1 size: 8192 MB node 1 free: 7731 MB node distances node 0 1 0: 10 20 1: 20 10 Host/Guest kernel: net-next with mq patches 2.1 2vcpu 2q vs 1q: pin guest vcpu and vhost thread in the same numa node TCP_RR test: size|session|+thu%|+normalize% 1| 1| 0%| -2% 1| 20| +23%| +2% 1| 50| +9%| -1% 1| 100| +2%| -7% 64| 1| 0%| +1% 64| 20| +17%| -1% 64| 50| +6%| -4%...
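The "pin guest vcpu and vhost thread in the same numa node" step can be sketched with libvirt's vcpupin (the domain name here is a placeholder; the CPU range matches node 0 in the quoted topology):

```
# Pin the guest's two vCPUs to host CPUs 0-3 (NUMA node 0 in the
# quoted topology). "mqguest" is a placeholder domain name.
virsh vcpupin mqguest 0 0-3
virsh vcpupin mqguest 1 0-3

# The matching vhost kernel threads (vhost-<qemu-pid>) can be pinned
# to the same CPUs with taskset once their PIDs are known.
```

Keeping the vCPU and its vhost thread on one node avoids cross-node memory traffic, which is what the 2q-vs-1q comparison in this post is controlling for.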