similar to: LXC guest memory recycling under RHEL6

Displaying 20 results from an estimated 1000 matches similar to: "LXC guest memory recycling under RHEL6"

2014 Sep 15
0
Re: cgroups inside LXC containers lose memory limits after some time
Hi all >After an unpredictable time (1-5 days?), the cgroups inside LXC are >magically removed. The virsh dumpxml config looks like this: <domain type='lxc' id='3566'> <name>puppet</name> <uuid>6d49b280-5686-4e3c-b048-1b5d362fb137</uuid> <memory unit='KiB'>8388608</memory> <currentMemory
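A quick way to check for this symptom is to compare the limit libvirt thinks it set against what the kernel is actually enforcing. A minimal sketch, assuming the container from the report ("puppet") and a cgroup-v1 layout as on CentOS 7 (the exact path is an assumption):

    # limit according to the domain XML (KiB)
    virsh -c lxc:/// dumpxml puppet | grep -i '<memory'
    # limit according to the kernel (bytes); cgroup v1 path assumed
    cat /sys/fs/cgroup/memory/machine.slice/*puppet*/memory.limit_in_bytes

If the second command finds no file while the domain is still running, the cgroup has indeed disappeared.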
2013 Jul 31
0
Re: start lxc container on fedora 19
On Wed, Jul 31, 2013 at 12:46:58PM +0530, Aarti Sawant wrote: > hello, > > I am new to LXC. I have created an LXC container on Fedora 19; > I created a container rootfs of Fedora 19 by using > yum --installroot=/containers/test1 --releasever=19 install openssh > > test1.xml file for container test1: > <domain type="lxc"> > <name>test1</name>
2013 Jul 31
2
start lxc container on fedora 19
hello, I am new to LXC. I have created an LXC container on Fedora 19; I created a container rootfs of Fedora 19 by using yum --installroot=/containers/test1 --releasever=19 install openssh test1.xml file for container test1: <domain type="lxc"> <name>test1</name> <vcpu placement="static">1</vcpu> <cputune>
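Since the snippet above is cut off, here is a minimal sketch of what a complete libvirt LXC domain XML of this shape can look like; paths and sizes are illustrative, not the poster's actual file:

    <domain type='lxc'>
      <name>test1</name>
      <memory unit='KiB'>524288</memory>
      <vcpu placement='static'>1</vcpu>
      <os>
        <type>exe</type>
        <init>/sbin/init</init>
      </os>
      <devices>
        <!-- rootfs created with the yum installroot step above -->
        <filesystem type='mount'>
          <source dir='/containers/test1'/>
          <target dir='/'/>
        </filesystem>
        <console type='pty'/>
      </devices>
    </domain>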
2012 Oct 18
0
0.10.x incorrectly reporting currentMemory size
Hi, <memory unit='KiB'>16777216</memory> <currentMemory unit='KiB'>1394380</currentMemory> <memtune> <hard_limit unit='KiB'>1594380</hard_limit> <soft_limit unit='KiB'>1494380</soft_limit> </memtune> results in: 0.10.x, dominfo or dumpxml | grep -i currentmemory Max memory: 16777216
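For anyone reproducing this, the two views being compared are roughly as follows (the domain name is a placeholder); per the subject line, the complaint is that 0.10.x reports the currentMemory size incorrectly:

    virsh dominfo mydomain          # prints "Max memory" and "Used memory" lines
    virsh dumpxml mydomain | grep -i currentmemory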
2015 Apr 09
0
Re: Centos 7.1.1503 + libvirt 1.2.14 = broken direct network mode
On 04/08/2015 09:38 AM, mxs kolo wrote: > Hi all. > > I use LXC on CentOS 7 x86-64, with libvirt versions 1.2.6 and 1.2.12. > My container has a bridged network: > # virsh dumpxml test1 > <domain type='lxc'> > <name>test1</name> > <uuid>518539ab-7491-45ab-bb1d-3d7f11bfb0b1</uuid> > <memory
2015 Apr 08
4
Centos 7.1.1503 + libvirt 1.2.14 = broken direct network mode
Hi all. I use LXC on CentOS 7 x86-64, with libvirt versions 1.2.6 and 1.2.12. My container has a bridged network: # virsh dumpxml test1 <domain type='lxc'> <name>test1</name> <uuid>518539ab-7491-45ab-bb1d-3d7f11bfb0b1</uuid> <memory unit='KiB'>1048576</memory> <currentMemory unit='KiB'>1048576</currentMemory>
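For context, the "direct" mode in the subject attaches the container's NIC to a host interface via macvtap; the relevant interface element looks roughly like this (the device name is an assumption):

    <interface type='direct'>
      <source dev='eth0' mode='bridge'/>
    </interface>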
2019 Apr 03
1
SEV machines and memory pinning
Hello, I am working on implementing SEV support in OpenStack. Some questions came up in the discussion of the spec [0] [0] https://review.openstack.org/#/c/641994/ As far as I understand, the memory for SEV machines needs to be pinned so that it doesn't get swapped out or moved by page migration. ROMs, UEFI pflash and video RAM should be pinned too. Initially we planned to use
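For context, pinning all of a guest's RAM in libvirt is usually expressed by combining memoryBacking/locked with a memtune hard_limit sized above the plain RAM figure, precisely to leave headroom for ROMs, pflash and video RAM. A sketch with illustrative sizes:

    <memory unit='KiB'>4194304</memory>
    <memtune>
      <!-- must exceed guest RAM to cover ROMs, pflash, video RAM, QEMU overhead -->
      <hard_limit unit='KiB'>4718592</hard_limit>
    </memtune>
    <memoryBacking>
      <locked/>
    </memoryBacking>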
2014 Sep 16
2
1.2.7 and 1.2.8 fail to start container: libvirt_lxc[4904]: segfault at 0 ip ...error 4 in libc-2.17.so[
Hi all CentOS 7, 3.10.0-123.6.3.el7.x86_64, libvirt 1.2.7 and libvirt 1.2.8 built from source with ./configure --prefix=/usr && make && make install. LXC with a direct network fails to start: Sep 16 19:19:38 node01 kernel: device br502 entered promiscuous mode Sep 16 19:19:39 node01 kernel: device br502 left promiscuous mode Sep 16 19:19:39 node01 avahi-daemon[1532]: Withdrawing workstation
2015 Feb 23
1
Re: HugePages - can't start guest that requires them
On 20.02.2015 21:32, G. Richard Bellamy wrote: > <snip/> > > I've modified my config [1] based on my understanding, and am running > into a new error. Basically I'm hitting the oom-killer [2] even though > the hard_limit [3] of memtune is below the total number of hugepages > set for that NUMA nodeset. > Just drop the hard_limit. It's a black box we should
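The sizing relationship at issue: hard_limit caps everything the process may consume, hugepage-backed guest RAM plus emulator overhead included, so a hard_limit at or below the hugepage total practically invites the OOM killer. Host-side hugepage provisioning can be sanity-checked like this (the node number is an assumption):

    grep Huge /proc/meminfo
    cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages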
2016 Nov 18
2
locking domain memory
Hi, is there a way to lock a guest's memory so it doesn't get swapped out? I know there is memoryBacking->locked, but that says it requires memtune->hard_limit, and the description of that is basically "don't ever do this", rendering the locked element kind of pointless. How can I prevent the guest's memory from being swapped out without shooting myself in the foot? Regards,
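One way to check whether guest memory is actually being swapped before reaching for locked at all is to inspect the emulator process. A sketch, assuming a QEMU guest named myguest (the pgrep pattern is a guess at how the process is named on a given host):

    pid=$(pgrep -f 'qemu.*guest=myguest' | head -n1)
    grep -E 'VmSwap|VmLck' /proc/$pid/status   # swapped vs. locked amounts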
2016 Nov 22
1
Re: locking domain memory
On 21.11.2016 17:05, Michal Privoznik wrote: > On 18.11.2016 23:17, Dennis Jacobfeuerborn wrote: >> Hi, >> is there a way to lock a guest's memory so it doesn't get swapped out? I >> know there is memoryBacking->locked, but that says it requires >> memtune->hard_limit, and the description of that is basically "don't ever do >> this", rendering the
2016 Nov 21
0
Re: locking domain memory
On 18.11.2016 23:17, Dennis Jacobfeuerborn wrote: > Hi, > is there a way to lock a guest's memory so it doesn't get swapped out? I > know there is memoryBacking->locked, but that says it requires > memtune->hard_limit, and the description of that is basically "don't ever do > this", rendering the locked element kind of pointless. > How can I prevent the guest's
2014 Jan 30
2
Re: Dynamically setting permanent memory libvirt-lxc
Eric, thank you for your response. Virsh memtune, setmaxmem and setmem won't survive a reboot. I'm hoping to find a solution that survives a reboot. On Thursday, January 30, 2014 11:36 AM, Eric Blake <eblake@redhat.com> wrote: On 01/30/2014 10:11 AM, mallu mallu wrote: > I'm trying to permanently change memory allocation for a libvirt-lxc domain. So far I tried
2014 Jan 31
1
Re: Dynamically setting permanent memory libvirt-lxc
[please don't top-post on technical lists; also, it would be nice if you could convince your mailer to wrap long lines] > Eric, thank you for your response. Virsh memtune, setmaxmem and setmem won't survive a reboot. Ah, but they DO survive reboots, if you use the right options. 'virsh memtune --live --config' affects both the running guest and the next boot. -------
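A concrete illustration of that advice, with a placeholder domain name and sizes in KiB:

    # applies to the running guest AND is written into the persistent XML
    virsh memtune mydomain --hard-limit 1048576 --live --config
    # persistent current/maximum allocation for the next boot
    virsh setmem mydomain 1048576 --config
    virsh setmaxmem mydomain 2097152 --config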
2015 Feb 20
0
Re: HugePages - can't start guest that requires them
On Tue, Feb 10, 2015 at 1:14 AM, Michal Privoznik <mprivozn@redhat.com> wrote: > On 09.02.2015 18:19, G. Richard Bellamy wrote: >> First I'll quickly summarize my understanding of how to configure numa... >> >> In "//memoryBacking/hugepages/page[@nodeset]" I am telling libvirt to >> use hugepages for the guest, and to get those hugepages from a
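The element being summarized has roughly this shape (page size and nodesets are illustrative):

    <memoryBacking>
      <hugepages>
        <page size='2048' unit='KiB' nodeset='0'/>
      </hugepages>
    </memoryBacking>
    <numatune>
      <memory mode='strict' nodeset='0'/>
    </numatune>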
2014 Sep 15
2
cgroups inside LXC containers lose memory limits after some time
Hi all I have CentOS Linux release 7.0.1406 with libvirt 1.2.7 installed. Right after the LXC container is created and started, its cgroups are present. Example for memory: [root@ce7-t1 /]# ls -la /sys/fs/cgroup/memory/ total 0 drwxr-xr-x 2 root root 0 Sep 15 17:14 . drwxr-xr-x 12 root root 280 Sep 15 17:14 .. -rw-r--r-- 1 root root 0 Sep 15 17:14 cgroup.clone_children --w--w--w- 1 root root 0 Sep 15
2014 Jan 30
0
Re: Dynamically setting permanent memory libvirt-lxc
On 01/30/2014 01:26 PM, mallu mallu wrote: [please don't top-post on technical lists; also, it would be nice if you could convince your mailer to wrap long lines] > Eric, thank you for your response. Virsh memtune, setmaxmem and setmem won't survive a reboot. Ah, but they DO survive reboots, if you use the right options. 'virsh memtune --live --config' affects both the
2018 Aug 01
3
LXC Memory Limits won't work
Hello, I am currently trying to run LXC containers with libvirt, but the memory limit doesn't work: inside the container I see the full 32GB from the host OS. I am pretty sure I am missing a config line in the XML. lxc-template ~ # free -m total used free shared buff/cache available Mem: 32108 626 31396 249 85
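Two hedged notes for this symptom: the cap must actually be declared in the domain XML, roughly as below, and free inside a libvirt-lxc container only reflects it if libvirt's FUSE-backed /proc/meminfo is in place; otherwise the container simply reads the host's figures.

    <memory unit='KiB'>2097152</memory>
    <currentMemory unit='KiB'>2097152</currentMemory>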
2014 Jan 30
0
Re: Dynamically setting permanent memory libvirt-lxc
On 01/30/2014 10:11 AM, mallu mallu wrote: > I'm trying to permanently change memory allocation for a libvirt-lxc domain. So far I tried changing memory in memory.limit_in_bytes under /cgroup/memory/libvirt/lxc/<container>/. This didn't help; it appears that libvirt does not pick up changes made directly to the cgroup. > > My requirements are > > 1) Be able to dynamically change
2012 Nov 09
1
[LXC][Openstack] Clarifications needed on usage of libvirt-lxc for openstack
Hi everyone, I have some questions regarding the usage of libvirt-lxc with OpenStack. I'm doing a project titled "Low density virtualization for Storage cloud". 1. Can I use libvirt for LXC with OpenStack Swift alone (excluding Nova, Glance and Keystone)? If not, what other OpenStack components should I use for virtualization? (Is it necessary to install OpenStack Nova to do