similar to: LXC Memory Limits won't work

Displaying 20 results from an estimated 400 matches similar to: "LXC Memory Limits won't work"

2015 Apr 08
4
Centos 7.1.1503 + libvirt 1.2.14 = broken direct network mode
Hi all. I use LXC on CentOS 7 x86-64 with libvirt versions 1.2.6 and 1.2.12. My container has a bridged network: # virsh dumpxml test1 <domain type='lxc'> <name>test1</name> <uuid>518539ab-7491-45ab-bb1d-3d7f11bfb0b1</uuid> <memory unit='KiB'>1048576</memory> <currentMemory unit='KiB'>1048576</currentMemory>
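For reference, the direct network mode named in the subject is normally expressed in the domain XML with an interface element roughly like the sketch below (the eth0 device name is a placeholder, not taken from the original report):

  <interface type='direct'>
    <source dev='eth0' mode='bridge'/>
  </interface>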
2014 Sep 15
2
cgroups inside LXC containers lose memory limits after some time
Hi all, I have CentOS Linux release 7.0.1406 with libvirt 1.2.7 installed. Right after a container is created and started, the cgroups are present inside the LXC container. Example for memory: [root@ce7-t1 /]# ls -la /sys/fs/cgroup/memory/ total 0 drwxr-xr-x 2 root root 0 Sep 15 17:14 . drwxr-xr-x 12 root root 280 Sep 15 17:14 .. -rw-r--r-- 1 root root 0 Sep 15 17:14 cgroup.clone_children --w--w--w- 1 root root 0 Sep 15
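A quick way to confirm whether the limit is still in place inside the container (cgroup v1 layout, as used on CentOS 7) is to read the limit file directly, e.g.:

  cat /sys/fs/cgroup/memory/memory.limit_in_bytes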
2013 Jul 31
2
start lxc container on fedora 19
Hello, I am new to LXC. I have created an LXC container on Fedora 19; I built the container rootfs of Fedora 19 by using yum --installroot=/containers/test1 --releasever=19 install openssh. The test1.xml file for container test1: <domain type="lxc"> <name>test1</name> <vcpu placement="static">1</vcpu> <cputune>
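Beyond <vcpu> and <cputune>, a libvirt LXC domain for such a rootfs typically also needs an <os> block with an init and a filesystem mount. A minimal sketch, with illustrative values not taken from the original post:

  <domain type='lxc'>
    <name>test1</name>
    <memory unit='KiB'>524288</memory>
    <os>
      <type>exe</type>
      <init>/sbin/init</init>
    </os>
    <devices>
      <filesystem type='mount'>
        <source dir='/containers/test1'/>
        <target dir='/'/>
      </filesystem>
      <console type='pty'/>
    </devices>
  </domain>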
2018 Aug 06
0
Re: LXC Memory Limits won't work
On Wed, Aug 01, 2018 at 12:53:58PM +0200, Markus Raps wrote: > Hello, > > I am currently trying to run LXC containers with libvirt > but the memory limit doesn't want to work. > > In the container I see the full 32GB from the host OS. > I am pretty sure that I am missing a config line in the XML. > > lxc-template ~ # free -m > total used
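If the intent is to cap the container, the relevant pieces of the domain XML are <memory>/<currentMemory> plus an optional <memtune> hard limit, roughly as in this sketch (the 4 GiB value is illustrative):

  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <memtune>
    <hard_limit unit='KiB'>4194304</hard_limit>
  </memtune>

Note that free inside the container may still report the host's total if /proc/meminfo is not virtualized; the cgroup limit is enforced either way.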
2017 Nov 10
1
Some strange errors in logs
+1 "penny" :-) :-) for my thoughts.. :-p But yes, that should fix it. I also suggest, review your logs, you wil notice more messages. Rsyslog/apt/snmp/postfix/ntp need manual changes after the upgrade. At least these, maybe more but these are the one i've encountered. Greetz, Louis > -----Oorspronkelijk bericht----- > Van: samba [mailto:samba-bounces at
2017 Nov 10
0
Some strange errors in logs
Samba - General mailing list wrote > Hi, > > cat "/var/lib/samba/private/named.conf" also please. > And check if the correct bind9_dlz is enabled. > > dpkg -l | grep bind9 > Jessie should be 9.9, > Stretch should be 9.10. > If this server was upgraded then you need to manually adjust the file > above. It looks to me like bind9_dlz is enabled in smb.conf
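For comparison, the named.conf file generated by Samba typically contains a dlz block along these lines (the module directory and the dlz_bind9_* version suffix vary by distribution and BIND version, so treat the path as an assumed example):

  dlz "AD DNS Zone" {
      database "dlopen /usr/lib/x86_64-linux-gnu/samba/bind9/dlz_bind9_9.so";
  };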
2014 Sep 15
0
Re: cgroups inside LXC containers lose memory limits after some time
Hi all >After an unpredictable time has passed (1-5 days?), the cgroups inside LXC >are magically removed. The virsh dumpxml config looks like this: <domain type='lxc' id='3566'> <name>puppet</name> <uuid>6d49b280-5686-4e3c-b048-1b5d362fb137</uuid> <memory unit='KiB'>8388608</memory> <currentMemory
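One quick check is to ask libvirt what memory limits it believes are applied to the domain (domain name taken from the quoted XML):

  virsh memtune puppet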
2015 Apr 09
0
Re: Centos 7.1.1503 + libvirt 1.2.14 = broken direct network mode
On 04/08/2015 09:38 AM, mxs kolo wrote: > Hi all. > > I use LXC on Centos 7 x86-64, with libvirt version 1.2.6 and 1.2.12 > My container has bridged network: > # virsh dumpxml test1 > <domain type='lxc'> > <name>test1</name> > <uuid>518539ab-7491-45ab-bb1d-3d7f11bfb0b1</uuid> > <memory
2019 Jan 21
1
Slow ssh
Hello, I'm using libvirt/kvm/qemu on a stable Debian Linux, kernel 4.9.0-6-amd64. qemu-kvm : 1:2.8+dfsg-6+deb9u5 libvirt-daemon : 3.0.0-4+deb9u3 libvirt-clients : 3.0.0-4+deb9u3 virt-manager : 1:1.4.0-5 virt-viewer : 5.0-1 virtinst : 1:1.4.0-5 My guests are made with this kind of command line: virt-install --connect=qemu:///system --name=debian --ram=2048 --disk
2018 Jul 12
2
SSH Agent Forwarding Not Working
Hi, I know this might be the most asked question, so I've done everything possible to troubleshoot the problem myself, but still, my SSH agent forwarding is not working for me. The best troubleshooting guide that I found, and also the one I've been using, is the SSH forwarding guide on GitHub - https://help.github.com/articles/using-ssh-agent-forwarding I've checked all things there,
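For reference, the usual client-side setup is a ForwardAgent entry in ~/.ssh/config for the host in question plus a key loaded into the agent; on the remote side, ssh-add -L should then list the forwarded keys. A minimal sketch (the host name is a placeholder):

  Host example.com
      ForwardAgent yes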
2018 Nov 29
3
samba_dnsupdate REFUSED between Samba4 AD DC and Win 2008r2
Hi, I'm having some trouble getting the Samba internal DNS server in sync with the other (Windows) DNS servers of my AD domain. samba_dnsupdate returns: update failed: REFUSED Failed update of 1 entries I'm running Samba version 4.5.12-Debian root at mysamba4dc:~# dpkg -l | grep samba ii  python-samba                   2:4.5.12+dfsg-2+deb9u3 amd64        Python bindings for Samba ii 
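When chasing REFUSED errors it usually helps to run the update verbosely, which shows which name server and which names are being updated, e.g.:

  samba_dnsupdate --verbose --all-names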
2016 Nov 18
2
locking domain memory
Hi, is there a way to lock a guest's memory so it doesn't get swapped out? I know there is memoryBacking->locked, but that says it requires memtune->hard_limit, and the description of that is basically "don't ever do this", rendering the locked element kind of pointless. How can I prevent the guest's memory from being swapped out without shooting myself in the foot? Regards,
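For context, the combination being discussed looks roughly like the sketch below; the hard_limit value is illustrative and has to leave headroom for QEMU's own overhead:

  <memoryBacking>
    <locked/>
  </memoryBacking>
  <memtune>
    <hard_limit unit='KiB'>2097152</hard_limit>
  </memtune>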
2014 Sep 16
2
1.2.7 and 1.2.8 fail to start container: libvirt_lxc[4904]: segfault at 0 ip ...error 4 in libc-2.17.so[
Hi all. CentOS 7, 3.10.0-123.6.3.el7.x86_64, libvirt 1.2.7 and libvirt 1.2.8 built from source with ./configure --prefix=/usr, make && make install. An LXC container with a direct network fails to start: Sep 16 19:19:38 node01 kernel: device br502 entered promiscuous mode Sep 16 19:19:39 node01 kernel: device br502 left promiscuous mode Sep 16 19:19:39 node01 avahi-daemon[1532]: Withdrawing workstation
2015 Feb 10
2
Re: HugePages - can't start guest that requires them
On 09.02.2015 18:19, G. Richard Bellamy wrote: > First I'll quickly summarize my understanding of how to configure numa... > > In "//memoryBacking/hugepages/page[@nodeset]" I am telling libvirt to > use hugepages for the guest, and to get those hugepages from a > particular host NUMA node. No, @nodeset refers to guest NUMA nodes. > > In
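A sketch of the elements being discussed, with @nodeset referring to guest NUMA nodes and all sizes illustrative:

  <memoryBacking>
    <hugepages>
      <page size='2048' unit='KiB' nodeset='0'/>
    </hugepages>
  </memoryBacking>
  <cpu>
    <numa>
      <cell id='0' cpus='0-3' memory='4194304' unit='KiB'/>
    </numa>
  </cpu>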
2019 Apr 03
1
SEV machines and memory pinning
Hello, I am working on implementing SEV support in OpenStack. There are some questions that came up in the discussion of the spec [0] [0] https://review.openstack.org/#/c/641994/ As far as I understand, the memory for SEV machines needs to be pinned so that it is not swapped out or moved by page migration. ROMs, UEFI pflash and video RAM should be pinned too. Initially we planned to use
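In libvirt terms, the pinning is usually expressed as a memtune hard_limit alongside the SEV launch security element, roughly as below; the policy, cbitpos and reducedPhysBits values are illustrative and machine specific:

  <launchSecurity type='sev'>
    <policy>0x0003</policy>
    <cbitpos>47</cbitpos>
    <reducedPhysBits>1</reducedPhysBits>
  </launchSecurity>
  <memtune>
    <hard_limit unit='KiB'>2621440</hard_limit>
  </memtune>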
2020 Feb 05
2
ldb errors after upgrade, cause?
Hi! Recently we upgraded a Debian jessie server to Debian buster, with Samba being upgraded from 4.5.12 (+dfsg-2+deb9u3) to 4.9.5 (+dfsg-5+deb10u1). A few hours later we saw these errors in syslog: smbd[26024]: [2020/02/03 11:13:13.631613, 0] ../lib/ldb-samba/ldb_wrap.c:79(ldb_wrap_debug) smbd[26024]: ldb: Failure during ltdb_lock_read(): Locking error ? Busy smbd[26024]:
2016 Nov 22
1
Re: locking domain memory
On 21.11.2016 17:05, Michal Privoznik wrote: > On 18.11.2016 23:17, Dennis Jacobfeuerborn wrote: >> Hi, >> is there a way to lock a guest's memory so it doesn't get swapped out? I >> know there is memoryBacking->locked but that says it requires >> memtune->hard_limit and the description of that is basically "don't ever do >> this", rendering the
2019 Jan 13
2
winbind failed to reset devices.list was: samba.service is masked (Debian 9)
Am 13.01.2019 um 10:44 schrieb Rowland Penny via samba: > On Sun, 13 Jan 2019 08:09:52 +0100 > Anton Blau via samba <samba at lists.samba.org> wrote: > Am 12.01.2019 um 23:08 schrieb Rowland Penny via samba: >>> On Sat, 12 Jan 2019 22:04:50 +0100 >>> Anton Blau via samba <samba at lists.samba.org> wrote: >>> >>> Is this all you installed ? :
2017 Sep 05
1
error/crash on mount
Hi, I am having issues with libguestfs; I am unable to mount a disk either from cli or via python bindings. This was working previously, but in order to have access to python3.5 I am trying to upgrade from Debian 8/Jessie to Debian 9/Stretch. Unfortunately the libguestfs error is getting in the way. Output of libguestfs-test-tool is attached as per libguestfs.org help. My current environment:
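A quick way to reproduce the failure outside the application is to open the image with guestfish in inspection mode (the image path is a placeholder):

  guestfish --ro -a /tmp/test.img -i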
2018 Oct 24
0
samba 4.8.0 Time Machine crashes on Mac
Hello Adam, have you tried increasing the /run/lock tmpfs mount size on your server yet? On Debian it is 5 MB by default, which may not be enough for Samba. You can do it by adding the following line to /etc/fstab: none /run/lock tmpfs nodev,noexec,nosuid,size=10485760 0 0 and running `sudo mount -o remount /run/lock' (or just simply rebooting). Hope it helps. Regards, Andriy On Mar 21