similar to: locking domain memory

Displaying 20 results from an estimated 6000 matches similar to: "locking domain memory"

2016 Nov 22
1
Re: locking domain memory
On 21.11.2016 17:05, Michal Privoznik wrote: > On 18.11.2016 23:17, Dennis Jacobfeuerborn wrote: >> Hi, >> is there a way to lock a guest's memory so it doesn't get swapped out? I >> know there is memoryBacking->locked but that says it requires >> memtune->hard_limit and the description of that basically says "don't ever do >> this", rendering the
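The two elements being debated, as a minimal sketch (the 4 GiB limit is an illustrative value, not a recommendation from the thread):

  <memtune>
    <!-- hard cap on the QEMU process's memory consumption -->
    <hard_limit unit='KiB'>4194304</hard_limit>
  </memtune>
  <memoryBacking>
    <!-- mlock guest pages so the host cannot swap them out -->
    <locked/>
  </memoryBacking>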
2019 Apr 03
1
SEV machines and memory pinning
Hello, I am working on implementing SEV support in OpenStack. There are some questions that came up in the discussion of the spec [0] [0] https://review.openstack.org/#/c/641994/ As far as I understand, the memory for SEV machines needs to be pinned so that it is neither swapped out nor subject to page migration. ROMs, UEFI pflash and video RAM should be pinned too. Initially we planned to use
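On the libvirt side this involves the launchSecurity element plus a memtune hard_limit to raise the memory-locking bound; a minimal sketch (cbitpos, reducedPhysBits and the limit value are host-specific assumptions, not taken from the thread):

  <launchSecurity type='sev'>
    <cbitpos>47</cbitpos>
    <reducedPhysBits>1</reducedPhysBits>
    <policy>0x0003</policy>
  </launchSecurity>
  <memtune>
    <!-- must cover guest RAM plus ROMs, pflash and video RAM so they can all stay pinned -->
    <hard_limit unit='KiB'>4718592</hard_limit>
  </memtune>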
2016 Nov 21
0
Re: locking domain memory
On 18.11.2016 23:17, Dennis Jacobfeuerborn wrote: > Hi, > is there a way to lock a guest's memory so it doesn't get swapped out? I > know there is memoryBacking->locked but that says it requires > memtune->hard_limit and the description of that basically says "don't ever do > this", rendering the locked element kind of pointless. > How can I prevent the guests
2015 Feb 10
2
Re: HugePages - can't start guest that requires them
On 09.02.2015 18:19, G. Richard Bellamy wrote: > First I'll quickly summarize my understanding of how to configure NUMA... > > In "//memoryBacking/hugepages/page[@nodeset]" I am telling libvirt to > use hugepages for the guest, and to get those hugepages from a > particular host NUMA node. No, @nodeset refers to guest NUMA nodes. > > In
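The element under discussion, as a minimal sketch; per the correction, nodeset='0' here means guest NUMA node 0 (host-side placement is handled separately via numatune):

  <memoryBacking>
    <hugepages>
      <!-- nodeset = GUEST NUMA nodes -->
      <page size='2048' unit='KiB' nodeset='0'/>
    </hugepages>
  </memoryBacking>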
2013 Jul 31
2
start LXC container on Fedora 19
Hello, I am new to LXC. I have created an LXC container on Fedora 19: I built the container rootfs with yum --installroot=/containers/test1 --releasever=19 install openssh. The test1.xml file for container test1: <domain type="lxc"> <name>test1</name> <vcpu placement="static">1</vcpu> <cputune>
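For reference, a minimal complete libvirt-lxc definition that boots such a rootfs; a sketch (the memory size and /sbin/init path are assumptions):

  <domain type='lxc'>
    <name>test1</name>
    <memory unit='KiB'>524288</memory>
    <vcpu placement='static'>1</vcpu>
    <os>
      <!-- 'exe' is the container OS type; init is the first process run inside -->
      <type>exe</type>
      <init>/sbin/init</init>
    </os>
    <devices>
      <filesystem type='mount'>
        <source dir='/containers/test1'/>
        <target dir='/'/>
      </filesystem>
      <console type='pty'/>
    </devices>
  </domain>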
2014 Sep 15
2
cgroups inside LXC containers lose memory limits after some time
Hi all, I have CentOS Linux release 7.0.1406 with libvirt 1.2.7 installed. Just after creating and starting an LXC container, the cgroups are present inside it. Example for memory:

[root@ce7-t1 /]# ls -la /sys/fs/cgroup/memory/
total 0
drwxr-xr-x  2 root root   0 Sep 15 17:14 .
drwxr-xr-x 12 root root 280 Sep 15 17:14 ..
-rw-r--r--  1 root root   0 Sep 15 17:14 cgroup.clone_children
--w--w--w-  1 root root   0 Sep 15
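If the limits disappear at runtime, they can be re-imposed both live and in the persistent definition; a sketch (the domain name is a placeholder, size in KiB):

  # virsh -c lxc:/// memtune mycontainer --hard-limit 1048576 --live --config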
2015 Apr 08
4
CentOS 7.1.1503 + libvirt 1.2.14 = broken direct network mode
Hi all. I use LXC on CentOS 7 x86-64, with libvirt versions 1.2.6 and 1.2.12. My container has a bridged network: # virsh dumpxml test1 <domain type='lxc'> <name>test1</name> <uuid>518539ab-7491-45ab-bb1d-3d7f11bfb0b1</uuid> <memory unit='KiB'>1048576</memory> <currentMemory unit='KiB'>1048576</currentMemory>
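For contrast, the 'direct' mode from the subject attaches via macvtap straight onto a host NIC; a minimal sketch assuming the host interface is eth0:

  <interface type='direct'>
    <!-- macvtap attachment in bridge mode on the host NIC -->
    <source dev='eth0' mode='bridge'/>
  </interface>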
2018 Aug 01
3
LXC Memory Limits won't work
Hello, I am currently trying to run LXC containers with libvirt, but the memory limit doesn't work: inside the container I see the full 32 GB from the host OS. I am pretty sure I am missing a config line in the XML.

lxc-template ~ # free -m
              total        used        free      shared  buff/cache   available
Mem:          32108         626       31396         249          85
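The cap itself comes from the memory element in the domain XML; a sketch assuming a 2 GiB limit:

  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>

Note that free inside the container reads /proc/meminfo, which libvirt-lxc virtualizes through its FUSE mount; without that mount the container sees the host's figures regardless of the cgroup limit.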
2016 Jan 26
2
Re: starting a domain only when you have enough resources
On Tue, Jan 26, 2016 at 12:39 PM, Michal Privoznik <mprivozn@redhat.com> wrote: > On 26.01.2016 12:30, Andrei Perietanu wrote: > > Hi all, > > > > I am running KVM on a 3.18 kernel. The system runs an Atom processor > > with 2 GB RAM. > > > > Using KVM you obviously can over-allocate your resources: say you have 4 > > guests each configured
2016 Jan 26
2
Re: starting a domain only when you have enough resources
On Tue, Jan 26, 2016 at 1:51 PM, Michal Privoznik <mprivozn@redhat.com> wrote: > On 26.01.2016 14:35, Andrei Perietanu wrote: > > On Tue, Jan 26, 2016 at 12:39 PM, Michal Privoznik <mprivozn@redhat.com> > > wrote: > > > >> On 26.01.2016 12:30, Andrei Perietanu wrote: > >>> Hi all, > >>> > >>> I am running KVM on a 3.18
2016 Jan 26
2
starting a domain only when you have enough resources
Hi all, I am running KVM on a 3.18 kernel. The system runs an Atom processor with 2 GB RAM. Using KVM you obviously can over-allocate your resources: say you have 4 guests, each configured with 1 GB RAM. Running all four at the same time, depending on the workload, can crash the system - I get a kernel trace when this happens. But let's consider a simpler case: one guest with 1.5 GB RAM,
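The host's free memory can be checked from virsh itself before starting a guest; a sketch (guest1 is a placeholder name, output formats vary by version):

  # virsh freecell --all    # free memory per host NUMA cell
  # virsh nodememstats      # host totals: free, buffers, cached
  # virsh start guest1      # start only if enough memory is free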
2015 Feb 23
1
Re: HugePages - can't start guest that requires them
On 20.02.2015 21:32, G. Richard Bellamy wrote: > <snip/> > > I've modified my config [1] based on my understanding, and am running > into a new error. Basically I'm hitting the oom-killer [2] even though > the hard_limit [3] of memtune is below the total number of hugepages > set for that NUMA nodeset. > Just drop the hard_limit. It's a black box we should
2016 Mar 18
3
Incorrect memory usage returned from virsh
When I run `virsh dominfo <domain>` I get the following:

Id:             455
Name:           instance-000047e0
UUID:           50722aa0-d5c6-4a68-b4ef-9b27beba48aa
OS Type:        hvm
State:          running
CPU(s):         4
CPU time:       123160.4s
Max memory:     33554432 KiB
Used memory:    33554432 KiB
Persistent:     yes
Autostart:      disable
Managed save:   no
Security model:
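'Used memory' here is the balloon target, which frequently just mirrors 'Max memory'; actual consumption is usually read from the balloon statistics instead; a sketch using the domain from the post:

  # virsh dommemstat instance-000047e0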
2014 Jan 30
2
Re: Dynamically setting permanent memory libvirt-lxc
Eric, thank you for your response. Virsh memtune, setmaxmem and setmem won't survive a reboot. I'm hoping to find a solution that survives a reboot. On Thursday, January 30, 2014 11:36 AM, Eric Blake <eblake@redhat.com> wrote: On 01/30/2014 10:11 AM, mallu mallu wrote: > I'm trying to permanently change memory allocation for a libvirt-lxc domain. So far I tried
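For the record, those same commands do survive a reboot when given the --config flag, which writes the change into the persistent domain definition; a sketch with a placeholder domain name and sizes in KiB:

  # virsh setmaxmem mycontainer 1048576 --config
  # virsh setmem mycontainer 524288 --config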
2017 Apr 19
2
virsh error: driver is not whitelisted
Hi, I'm using virsh to instantiate a VM in my environment, but I'm running into some issues. I created the following domain file: <domain type='kvm'> <name>demovm</name> <uuid>4a9b3f53-fa2a-47f3-a757-dd87720d9d1d</uuid> <memory unit='KiB'>4194304</memory> <currentMemory
2015 Feb 20
0
Re: HugePages - can't start guest that requires them
On Tue, Feb 10, 2015 at 1:14 AM, Michal Privoznik <mprivozn@redhat.com> wrote: > On 09.02.2015 18:19, G. Richard Bellamy wrote: >> First I'll quickly summarize my understanding of how to configure NUMA... >> >> In "//memoryBacking/hugepages/page[@nodeset]" I am telling libvirt to >> use hugepages for the guest, and to get those hugepages from a
2014 Sep 28
2
Re: what is the XML format for memory-backend-file
On 2014/9/25 2:05, Eric Blake wrote: > On 09/24/2014 02:05 AM, Linhaifeng wrote: >> Hi, >> >> I want to use virsh to create a VM with the qemu parameter '-object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem'. > > Looking at tests/qemuxml2argvdata/qemuxml2argv-hugepages-pages.args, I > see several instances of
2014 Sep 24
2
what is the XML format for memory-backend-file
Hi, I want to use virsh to create a VM with the qemu parameter '-object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem'. How do I write the XML file?
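One plausible XML equivalent, assuming a hugetlbfs mount at /mnt/huge on the host and a single 2 GiB guest NUMA cell (the exact mapping depends on the libvirt version):

  <memoryBacking>
    <hugepages>
      <page size='2048' unit='KiB' nodeset='0'/>
    </hugepages>
  </memoryBacking>
  <cpu>
    <numa>
      <!-- memAccess='shared' corresponds to share=on on the qemu backend -->
      <cell id='0' cpus='0' memory='2097152' unit='KiB' memAccess='shared'/>
    </numa>
  </cpu>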
2015 Feb 04
2
Re: HugePages - can't start guest that requires them
*facepalm* Now that I'm re-reading the documentation it's obvious that <page/> and @nodeset are for the guest, "This tells the hypervisor that the guest should have its memory allocated using hugepages instead of the normal native page size." Pretty clear there. Thank you SO much for the guidance, I'll return to my tweaking. I'll report back here with my results.
2015 Feb 04
2
Re: HugePages - can't start guest that requires them
As I mentioned, I got the instances to launch... but they're only taking HugePages from "Node 0", when I believe my setup should pull from both nodes. [atlas] http://sprunge.us/FSEf [prometheus] http://sprunge.us/PJcR

2015-02-03 16:51:48 root@eanna i ~ # virsh start atlas
Domain atlas started
2015-02-03 16:51:58 root@eanna i ~ # virsh start prometheus
Domain prometheus started
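For this symptom, host-side placement of each guest cell can be pinned with numatune memnode elements; a sketch mapping guest cells 0 and 1 onto host nodes 0 and 1, plus a sysfs check of per-node hugepage usage (paths assume 2 MiB pages):

  <numatune>
    <memnode cellid='0' mode='strict' nodeset='0'/>
    <memnode cellid='1' mode='strict' nodeset='1'/>
  </numatune>

  # cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages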