Displaying 20 results from an estimated 410 matches for "overcommits".
2010 Apr 21
5
Xen 4.0 memory overcommitment
Hello,
I am running a functional Xen 4.0 platform. The objective of the project I
am currently running is to replace 4 VMware ESX hosts with 4 Xen 4.0 hypervisors. I
have been able to reproduce all the core VMware features in Xen (but better,
of course =D) except memory overcommitment.
I know this feature has been included since Xen 3.3 but I have found almost
no information about it.
When I try to
2009 Jan 15
3
overcommiting vcpus
My test box has an AMD dual core processor in it, giving me 2 physical
cpu's.
If I overcommit the vcpu's (e.g. vcpus=4) to simulate a 4 cpu machine, how
accurate a simulation should it be (apart from performance sucking
badly)?
James
_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
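For reference, vCPU overcommit of this kind is set directly in the domU configuration file; a minimal sketch (the guest name is hypothetical):

```
# domU config sketch -- guest name is hypothetical
name  = "testvm"
vcpus = 4        # 4 virtual CPUs on a 2-core host: 2:1 vCPU overcommit
```

The scheduler time-slices the 4 vCPUs across the 2 physical cores, so the guest sees 4 CPUs that are each, on average, half-speed.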
2011 Mar 14
3
Swap space for kvm virtual host
I have a kvm virtual host running on what will become CentOS 6 with 12GB
of memory and a Quad Xeon X5560 2.8Ghz . The store for virtual
machines will be a software raid 6 array of 6 disks with an LVM layered
on top. I'm not initially planning any major overcommitment of
resources, though there could be a need for some overcommitment with a
light workload on the guests.
In recent years
2014 May 07
0
[PATCH v10 18/19] pvqspinlock, x86: Enable PV qspinlock PV for KVM
This patch adds the necessary KVM specific code to allow KVM to
support the CPU halting and kicking operations needed by the queue
spinlock PV code.
Two KVM guests of 20 CPU cores (2 nodes) were created for performance
testing in one of the following three configurations:
1) Only 1 VM is active
2) Both VMs are active and they share the same 20 physical CPUs
(200% overcommit)
3) Both VMs are
2009 Sep 17
1
Memory and CPU Overcommit
Hi.
Xen 3.3.1
I set limits: memory via the memory directive at domain start, and cpu via
xm sched-credit.
But I can't overcommit cpu and memory. Is it possible?
--
Best Regards,
alex.faq8@gmail.com
2009 Sep 08
3
xen with dynamic slices
Hi.
I want use xen with dynamic slices. For example, I have 20 domU based on
FreeBSD, xen hypervisor 3.3.1, Debian Lenny dom0 system.
All domUs have 80GB LVM partitions, but really they only use 20 of those 80GB, and I
want to create more domUs.
How can I do it? I know that some virtualisation products can do
dynamic slices (for example, VirtualBox)
--
Best Regards,
alex.faq8@gmail.com
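One common approach to the "dynamic slices" the poster describes is thin provisioning with sparse files instead of fully allocated LVM slices; a sketch (the filename is illustrative):

```shell
# Thin-provisioned guest storage with a sparse file: the image reports an
# 80GB apparent size, but disk blocks are consumed only as the guest
# actually writes them, so many such domUs can share one store.
truncate -s 80G domU-disk.img
stat -c 'apparent size: %s bytes' domU-disk.img
du -h domU-disk.img      # actual usage starts near zero
rm domU-disk.img
```

Like memory overcommit, this trades safety for density: if every guest fills its disk, the underlying store runs out of real blocks.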
2012 Apr 08
4
[PATCH] Revert "Btrfs: increase the global block reserve estimates"
This reverts commit 5500cdbe14d7435e04f66ff3cfb8ecd8b8e44ebf.
We had numerous reports of premature ENOSPC that were bisected to this
patch. Reverting will not break things but a warning in 'use_block_rsv'
may show up in the syslog.
There's no alternative fix in sight and the ENOSPC problem affects all
3.3 btrfs users during normal filesystem use.
CC:
2014 Oct 29
0
[PATCH v13 10/11] pvqspinlock, x86: Enable PV qspinlock for KVM
This patch adds the necessary KVM specific code to allow KVM to
support the CPU halting and kicking operations needed by the queue
spinlock PV code.
Two KVM guests of 20 CPU cores (2 nodes) were created for performance
testing in one of the following three configurations:
1) Only 1 VM is active
2) Both VMs are active and they share the same 20 physical CPUs
(200% overcommit)
The tests run
2003 Jul 23
10
malloc does not return null when out of memory
We have a little soekris box running freebsd that uses racoon for key
management. It's used for setting up an ipsec tunnel. I noticed that
one of these devices lost the tunnel this morning. I looked in the
log and saw this
Jul 23 01:37:57 m0n0wall /kernel: pid 80 (racoon), uid 0, was killed: out of swap space
I reproduced this problem using this code.
#include <stdlib.h>
int
2008 Nov 12
0
MEMORY Overcommitment with XEN-3.x
Hi list,
i have successfully written a solution for memory overcommitment (for my
private management software).
I can migrate this technology to a standard xen source host.
If anyone is interested, you can send me an email at my private address
michael.schmidt =at= xvirt.net
2008 Jul 17
5
Memory Ballooning / Overcommitting
I have a xen server setup that I want to install a lot of vms on if
possible. The vms will have no utilization for the most part, and brief
utilization when active. The server has 2GB of RAM.
We were running into a problem where the dom0 would not let us add any more
vms because we were out of physical memory.
I have started to read about memory ballooning and hoping someone could
point me
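Ballooning lets each guest boot with a small memory commitment and grow only when needed; a minimal domU config sketch (the values are illustrative):

```
# domU config sketch -- values are illustrative
memory = 256     # MB committed when the guest boots
maxmem = 1024    # ceiling (MB) the balloon driver may grow to
```

At runtime, `xm mem-set <domain> <MB>` moves the guest's memory target anywhere within that ceiling, which is what makes packing many mostly idle guests into 2GB feasible.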
2005 Dec 09
5
Memory overcommit
I have been using Xen on a daily basis on a production (but not
critical) machine for a number of months now. It's looking really good.
One thing that I have not yet seen anyone mention as a feature that I
would really like to see is the ability to overcommit memory. I have 2G
of RAM in my machine. I would like to give a developer his own virtual
domain to sandbox his application
2014 Mar 17
2
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
On Thu, Mar 13, 2014 at 02:16:06PM +0100, Paolo Bonzini wrote:
> Il 13/03/2014 11:54, David Vrabel ha scritto:
> >On 12/03/14 18:54, Waiman Long wrote:
> >>Locking is always an issue in a virtualized environment as the virtual
> >>CPU that is waiting on a lock may get scheduled out and hence block
> >>any progress in lock acquisition even when the lock has been
2014 Mar 13
3
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
On 12/03/14 18:54, Waiman Long wrote:
> Locking is always an issue in a virtualized environment as the virtual
> CPU that is waiting on a lock may get scheduled out and hence block
> any progress in lock acquisition even when the lock has been freed.
>
> One solution to this problem is to allow unfair lock in a
> para-virtualized environment. In this case, a new lock acquirer
2009 Dec 11
1
New to the community - a few questions
Hello everyone,
I'm new to the list and I'm doing some research on all virtualization
products and solutions available, and I have two questions about Xen:
- Does it have memory overcommitment techniques (ballooning, TMPS,
swapping)?
- Does it do software thin provisioning?
- I read it has a very strong point when using paravirtualization with
Linux systems. Does it have any other kind of
2016 Aug 18
2
BUG: soft lockup - CPU#0 stuck for 36s! [swapper/0:0]
> 2016-08-18 12:39 GMT-04:00 correomm <correomm at gmail.com>:
>
>> This bug is reported only on the VM's with CentOS 7 running on VMware
>> ESXi 5.1.
>> The vSphere performance graph shows high CPU consumption and disk activity only
>> on VM's with CentOS 7. Sometimes I can not connect remotely with ssh
>> (timeout error).
>>
> I'm
2014 Mar 19
0
[PATCH v7 10/11] pvqspinlock, x86: Enable qspinlock PV support for KVM
This patch adds the necessary KVM specific code to allow KVM to support
the sleeping and CPU kicking operations needed by the queue spinlock PV
code.
Two KVM guests of 20 CPU cores (2 nodes) were created for performance
testing. With only one KVM guest powered on (no overcommit), the
disk workload of the AIM7 benchmark was run on both ext4 and xfs RAM
disks at 3000 users on a 3.14-rc6 based
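The 200% overcommit configuration in these patch postings comes from pinning both 20-vCPU guests to the same 20 physical CPUs, so 40 runnable vCPUs compete for 20 cores. A sketch of the pinning mechanism itself, demonstrated on a trivial command (the actual guest launch commands are omitted):

```shell
# taskset restricts a process (e.g. a guest's vCPU threads) to a CPU set;
# running two guests in the same set is what creates the overcommit.
taskset -c 0 echo "pinned to cpu 0"
```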