search for: overcommit

Displaying 20 results from an estimated 410 matches for "overcommit".

2010 Apr 21
5
Xen 4.0 memory overcommitment
Hello, I am running a functional Xen 4.0 platform. The objective of the project I am currently running is to replace 4 VMWare ESX hosts with 4 Xen 4.0 hypervisors. I have been able to reproduce all the core VMWare features in Xen (but better of course =D) except memory overcommitment. I know this feature has been included since Xen 3.3, but I have found almost no information about it. When I try to start a domain with more RAM than is available on the server I get the following message: *VmError: I need 1433600 KiB, but dom0_min_mem is 200704 and shrinking to 200704 KiB...
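The error above is usually xend trying to balloon dom0 down to free memory for the new guest, and refusing to shrink dom0 below its configured floor; the 200704 KiB it reports is exactly the stock 196 MiB dom0-min-mem default. A hedged sketch of the knobs involved, values illustrative only:

    # /etc/xen/xend-config.sxp -- the 200704 KiB floor in the error is 196 MiB
    (dom0-min-mem 196)

    # Alternative: cap dom0 at boot so guest creation never depends on
    # ballooning dom0 down (hypervisor line in the bootloader config):
    #   kernel /boot/xen.gz dom0_mem=512M

Note that shrinking dom0 only redistributes the RAM the host actually has; true overcommitment, where the guests' total memory exceeds host RAM, still needs ballooning or similar cooperation inside the guests.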
2009 Jan 15
3
overcommiting vcpus
My test box has an AMD dual core processor in it, giving me 2 physical CPUs. If I overcommit the vCPUs (e.g. vcpus=4) to simulate a 4 CPU machine, how accurate a simulation should it be (apart from performance sucking badly)? James
2011 Mar 14
3
Swap space for kvm virtual host
I have a KVM virtual host running on what will become CentOS 6 with 12GB of memory and a quad Xeon X5560 2.8GHz. The store for virtual machines will be a software RAID 6 array of 6 disks with LVM layered on top. I'm not initially planning any major overcommitment of resources, though there could be a need for some overcommitment with a light workload on the guests. In recent years people seem to configure a wide range of different swap allocations. I was thinking initially to spread swap across separate non-RAID partitions on 4 of these disks, but th...
2014 May 07
0
[PATCH v10 18/19] pvqspinlock, x86: Enable PV qspinlock PV for KVM
...the CPU halting and kicking operations needed by the queue spinlock PV code. Two KVM guests of 20 CPU cores (2 nodes) were created for performance testing in one of the following three configurations: 1) Only 1 VM is active 2) Both VMs are active and they share the same 20 physical CPUs (200% overcommit) 3) Both VMs are active and they share 30 physical CPUs (10 dedicated and 10 shared - 133% overcommit) The tests run included the disk workload of the AIM7 benchmark on both ext4 and xfs RAM disks at 3000 users on a 3.15-rc1 based kernel. The "ebizzy -m" test was also run and i...
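The "halting and kicking" the patch refers to is the PV slowpath: a vCPU that fails to get the lock quickly halts itself instead of burning its timeslice, and the lock holder kicks it awake at release time (in KVM, via the KVM_HC_KICK_CPU hypercall). A minimal userspace analogy, a sketch only, assuming Linux futexes stand in for halt and kick; all names are hypothetical, not the patch's code:

    /* Build with: cc -pthread demo.c */
    #include <linux/futex.h>
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static atomic_int released;           /* 0 = still locked, 1 = released */

    static long futex(void *addr, int op, int val)
    {
        return syscall(SYS_futex, addr, op, val, NULL, NULL, 0);
    }

    static void *waiter(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000; i++)    /* bounded spin before "halting" */
            if (atomic_load(&released))
                return NULL;
        while (!atomic_load(&released))   /* sleep until kicked; the recheck
                                             avoids a lost wakeup */
            futex(&released, FUTEX_WAIT, 0);
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, waiter, NULL);
        usleep(10000);                    /* let the waiter block */
        atomic_store(&released, 1);       /* release the "lock" */
        futex(&released, FUTEX_WAKE, 1);  /* "kick" the halted waiter */
        pthread_join(t, NULL);
        puts("waiter was kicked and exited");
        return 0;
    }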
2009 Sep 17
1
Memory and CPU Overcommit
Hi. Xen 3.3.1. I set limits: memory via the memory directive in the domain config, and xm sched-credit to limit CPU. But I can't overcommit CPU and memory. Is it possible? -- Best Regards, alex.faq8@gmail.com
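For reference, the credit scheduler settings this question refers to are applied per domain. A hedged sketch of the usual commands, with a hypothetical domain name:

    # weight is a relative share; cap is a hard percentage of one CPU (0 = uncapped)
    xm sched-credit -d mydomU -w 256 -c 50

    # memory can be lowered at run time if the guest's balloon driver cooperates
    xm mem-set mydomU 512

CPU overcommit then amounts to defining more vCPUs across domains than there are physical CPUs; memory overcommit on Xen of this era relies on ballooning rather than host-level swapping.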
2009 Sep 08
3
xen with dynamic slices
Hi. I want to use Xen with dynamic slices. For example, I have 20 domUs based on FreeBSD, Xen hypervisor 3.3.1, Debian Lenny dom0 system. All domUs have 80GB LVM partitions, but really they use only 20 of those 80GB, and I want to create more domUs. How can I do it? I know that some virtualization products can do dynamic slices (for example, VirtualBox). -- Best Regards, alex.faq8@gmail.com
2012 Apr 08
4
[PATCH] Revert "Btrfs: increase the global block reserve estimates"
This reverts commit 5500cdbe14d7435e04f66ff3cfb8ecd8b8e44ebf. We had numerous reports of premature ENOSPC that were bisected to this patch. Reverting will not break things but a warning in 'use_block_rsv' may show up in the syslog. There's no alternative fix in sight and the ENOSPC problem affects all 3.3 btrfs users during normal filesystem use. CC:
2014 Oct 29
0
[PATCH v13 10/11] pvqspinlock, x86: Enable PV qspinlock for KVM
...he CPU halting and kicking operations needed by the queue spinlock PV code. Two KVM guests of 20 CPU cores (2 nodes) were created for performance testing in one of the following three configurations: 1) Only 1 VM is active 2) Both VMs are active and they share the same 20 physical CPUs (200% overcommit) The tests run included the disk workload of the AIM7 benchmark on both ext4 and xfs RAM disks at 3000 users on a 3.17 based kernel. The "ebizzy -m" test and futextest were also run and their performance data were recorded. With two VMs running, the "idle=poll" kernel option...
2003 Jul 23
10
malloc does not return null when out of memory
We have a little Soekris box running FreeBSD that uses racoon for key management. It's used for setting up an IPsec tunnel. I noticed that one of these devices lost the tunnel this morning. I looked in the log and saw this: Jul 23 01:37:57 m0n0wall /kernel: pid 80 (racoon), uid 0, was killed: out of swap space I reproduced this problem using this code. #include <stdlib.h> int
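The reproducer is cut off by the search index, but the usual demonstration is short: allocate in a loop and touch every page. A minimal sketch, not the original poster's code:

    /* On a kernel that overcommits, malloc keeps succeeding because pages
     * are only reserved, not backed, at allocation time. Touching them
     * forces real allocation, and the process gets killed rather than
     * ever seeing malloc return NULL. */
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        for (;;) {
            char *p = malloc(1 << 20);   /* 1 MiB per iteration */
            if (p == NULL)
                return 1;                /* rarely reached when overcommitting */
            memset(p, 0xff, 1 << 20);    /* touch every page so it must be backed */
        }
    }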
2008 Nov 12
0
MEMORY Overcommitment with XEN-3.x
Hi list, I have successfully written a solution for memory overcommitment (for my private management software). I can migrate this technology to a standard Xen source host. If anyone is interested, you can send me an email at my private address michael.schmidt =at= xvirt.net
2008 Jul 17
5
Memory Ballooning / Overcommitting
I have a Xen server setup that I want to install a lot of VMs on if possible. The VMs will have no utilization for the most part, and brief utilization when active. The server has 2GB of RAM. We were running into a problem where the dom0 would not let us add any more VMs because we were out of physical memory. I have started to read about memory ballooning and am hoping someone could point me
2005 Dec 09
5
Memory overcommit
I have been using Xen on a daily basis on a production (but not critical) machine for a number of months now. It's looking really good. One thing that I have not yet seen anyone mention as a feature that I would really like to see is the ability to overcommit memory. I have 2G of RAM in my machine. I would like to give a developer his own virtual domain to sandbox his application development without having to dedicate a whole piece of hardware to just him. But I know he won't really log in and use it all that often. If I give him 512M of my 2G...
2014 Mar 17
2
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
...CPU to be rescheduled). > > Actually, I think the unfair version should be automatically > selected if running on a hypervisor. Per-hypervisor pvops can > choose to enable the fair one. > > Lock unfairness may be particularly evident on a virtualized guest > when the host is overcommitted, but problems with fair locks are > even worse. > > In fact, RHEL/CentOS 6 already uses unfair locks if > X86_FEATURE_HYPERVISOR is set. The patch was rejected upstream in > favor of pv ticketlocks, but pv ticketlocks do not cover all > hypervisors so perhaps we could revisit...
2014 Mar 13
3
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
On 12/03/14 18:54, Waiman Long wrote: > Locking is always an issue in a virtualized environment as the virtual > CPU that is waiting on a lock may get scheduled out and hence block > any progress in lock acquisition even when the lock has been freed. > > One solution to this problem is to allow unfair lock in a > para-virtualized environment. In this case, a new lock acquirer
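The trade-off discussed in this thread is easiest to see side by side: a plain test-and-set lock is unfair because whichever vCPU is actually running can take it, so a preempted waiter never stalls everyone else, while a fair ticket lock queues all waiters behind a ticket holder that may be scheduled out. A minimal sketch of both in C, illustrative only and not the patch's code:

    #include <stdatomic.h>
    #include <stdio.h>

    /* Unfair: whichever thread (vCPU) is running right now can grab the
     * lock, so a preempted waiter never blocks the others. */
    typedef struct { atomic_flag locked; } tas_lock;

    static void tas_acquire(tas_lock *l)
    {
        while (atomic_flag_test_and_set_explicit(&l->locked, memory_order_acquire))
            ;  /* spin; newcomers may win before older waiters */
    }

    static void tas_release(tas_lock *l)
    {
        atomic_flag_clear_explicit(&l->locked, memory_order_release);
    }

    /* Fair: strict FIFO by ticket number. If the vCPU whose turn is next
     * is scheduled out, every later waiter spins behind it. */
    typedef struct { atomic_uint next, owner; } ticket_lock;

    static void ticket_acquire(ticket_lock *l)
    {
        unsigned me = atomic_fetch_add_explicit(&l->next, 1, memory_order_relaxed);
        while (atomic_load_explicit(&l->owner, memory_order_acquire) != me)
            ;  /* spin until it is exactly our turn */
    }

    static void ticket_release(ticket_lock *l)
    {
        atomic_fetch_add_explicit(&l->owner, 1, memory_order_release);
    }

    int main(void)
    {
        tas_lock t = { ATOMIC_FLAG_INIT };
        ticket_lock k = { 0, 0 };
        tas_acquire(&t);    tas_release(&t);
        ticket_acquire(&k); ticket_release(&k);
        puts("both lock types acquire and release");
        return 0;
    }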
2009 Dec 11
1
New to the community - a few questions
Hello everyone, I'm new to the list and I'm doing some research on all virtualization products and solutions available, and I have two questions about Xen: - Does it have memory overcommitment techniques (ballooning, TMPS, swapping)? - Does it do software thin provisioning? - I read it has a very strong point when using paravirtualization with Linux systems. Does it have any other kind of virtualization method, like full? For now that's it. I thank you a lot for the attention. Hen...
2016 Aug 18
2
BUG: soft lockup - CPU#0 stuck for 36s! [swapper/0:0]
...imeout error). >> > I'm also seeing those errors in several servers, running under 5.5. > Currently investigating if this > <https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1009996> > has anything to do with it (the resource overcommit bit). Does this happen (only) while taking or consolidating snapshots? The VM is suspended during these operations and the OS isn't too crazy about it, especially if you have slow storage. Jack
2014 Mar 19
0
[PATCH v7 10/11] pvqspinlock, x86: Enable qspinlock PV support for KVM
This patch adds the necessary KVM specific code to allow KVM to support the sleeping and CPU kicking operations needed by the queue spinlock PV code. Two KVM guests of 20 CPU cores (2 nodes) were created for performance testing. With only one KVM guest powered on (no overcommit), the disk workload of the AIM7 benchmark was run on both ext4 and xfs RAM disks at 3000 users on a 3.14-rc6 based kernel. The JPM (jobs/minute) data of the test run were: kernel XFS FS %change ext4 FS %change ------ ------ ------- ------- ------...