similar to: How do you set real time bandwidth for a container?


2017 Feb 06
2
Real time threads don't work in libvirt containers under CentOS 7.3
We've been using libvirt-based containers under CentOS 7 and everything has been working fine. One application we run in our containers is ctdb, which uses SCHED_FIFO (real-time) threads. This had been working without problems until our recent upgrade to CentOS 7.3. For some reason, ctdb is no longer able to create real-time threads, and I've tried a simple program myself that
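A minimal first check for this, assuming the kernel has CONFIG_RT_GROUP_SCHED enabled (the cpu.rt_runtime_us files only exist then) and that the container lives under machine.slice; the exact scope path is not shown here and would need to be filled in:

    # On the host: a child cgroup with an rt_runtime_us of 0 cannot run SCHED_FIFO tasks.
    cat /sys/fs/cgroup/cpu/cpu.rt_runtime_us
    cat /sys/fs/cgroup/cpu/machine.slice/cpu.rt_runtime_us
    # Inside the container: a quick stand-in for a "simple program" test.
    chrt -f 1 true && echo "SCHED_FIFO ok" || echo "SCHED_FIFO denied"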
2011 Feb 15
2
monitoring cpu usage via cgroup
Hi, I was asking whether the Fedora 14 kernel is good enough for cgroup usage, because I am trying to set up a cgroup under the cpu subsystem (/dev/cgroup/cpu/group1/) that has a cpu.rt_runtime_us of 100000 while cpu.rt_period_us has a value of 1000000, i.e. a ratio of 1/10. Still, when I run a task (an endless loop) in that group (cgexec -g cpu,cpuset:group1 ./test), it gets all the CPU core time
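For reference, a minimal sketch of that setup, assuming the cpu controller really is mounted at /dev/cgroup/cpu as in the post. Note that cpu.rt_runtime_us/cpu.rt_period_us only throttle SCHED_FIFO/SCHED_RR tasks; a plain endless loop runs as SCHED_OTHER and is governed by cpu.shares and the CFS quota instead, which would explain it getting the whole core.

    mkdir -p /dev/cgroup/cpu/group1
    echo 1000000 > /dev/cgroup/cpu/group1/cpu.rt_period_us
    echo 100000  > /dev/cgroup/cpu/group1/cpu.rt_runtime_us   # 1/10 of the period
    # Run the test under a real-time policy so the RT bandwidth actually applies
    # ("./test" is the poster's binary; the cpuset group is assumed to exist too).
    cgexec -g cpu,cpuset:group1 chrt -f 10 ./test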
2015 Sep 24
1
Guest cpuacct counters and others location
Hi, my piece of code (C language) uses cgroups to retrieve counters related to CPU and memory usage for the KVM guests hosted by the machine where this code runs. I noticed that, depending on the OS running on the host, these counters are not found at the same location. CentOS 7:
ls /sys/fs/cgroup/cpuacct/machine.slice/machine-qemu\\x2drhel6.0.scope/vcpu0
cgroup.clone_children
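A hedged sketch of the two locations in question; the scope and guest names are placeholders taken from the post, and the older layout is the fixed libvirt/{qemu,lxc}/$VMNAME hierarchy described in the "legacy cgroups" thread further down (the vcpu subdirectory there is an assumption):

    # CentOS 7 (systemd machine.slice layout):
    cat /sys/fs/cgroup/cpuacct/machine.slice/machine-qemu\\x2drhel6.0.scope/vcpu0/cpuacct.usage
    # Hosts with the legacy libvirt layout:
    cat /sys/fs/cgroup/cpuacct/libvirt/qemu/rhel6.0/vcpu0/cpuacct.usage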
2012 Dec 13
1
RHEL6 cgroup error after a few days of uptime
I have a RHEL6 host that runs many KVM virtual machines. It has been working fine for a couple of years, and I apply errata updates about once a week. In the last couple of weeks I've run into a bug where the virtual machines start failing to start with a cgroup error message. If I reboot the host (very disruptive), things start working normally for a few days. Can I configure qemu/libvirt not to use
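If the truncated question is about disabling cgroup usage entirely, one hedged place to look is /etc/libvirt/qemu.conf: its cgroup_controllers option lists the controllers libvirt will use, and an empty list should stop libvirt from using cgroups for qemu guests (check the comments in qemu.conf on your version before relying on this).

    # In /etc/libvirt/qemu.conf:
    #   cgroup_controllers = [ ]
    # Then restart libvirtd (RHEL6 is sysvinit-based):
    service libvirtd restart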
2017 Oct 18
2
Can we disable write to /sys/fs/cgroup tree inside container ?
Hi all, each LXC container on the node has a tmpfs-based cgroup tree mounted:
[root-inside-lxc@tst1 ~]# mount | grep cgroup
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on
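If the aim is to stop processes inside the container from writing to these mounts, one hedged option is to remount them read-only from inside the container; this is only a sketch, since the container needs the privilege to remount and libvirt may set the mounts up again on restart.

    # Inside the container: flip each per-controller mount to read-only.
    for d in /sys/fs/cgroup/*; do
        mount -o remount,ro,nosuid,nodev,noexec "$d"
    done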
2016 Apr 12
2
Re: Networking issues with lxc containers in AWS EC2
On 04/11/2016 11:33 AM, Laine Stump wrote:
> Interesting. That functionality was moved out of the kernel's bridge
> module into br_netfilter some time back, but that was done later than
> the kernel 3.10 that is used by CentOS 7. Are you running some later
> kernel version?
>
> If your kernel doesn't have a message in dmesg that looks like this:
>
> bridge:
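A hedged way to check for the bridge-netfilter functionality discussed above on a given kernel (on newer kernels it is a separate br_netfilter module, on 3.10 it may still live in the bridge module; the dmesg wording varies by kernel):

    modprobe br_netfilter 2>/dev/null || true
    # This sysctl only exists once the bridge netfilter code is present.
    sysctl net.bridge.bridge-nf-call-iptables
    dmesg | grep -i 'bridge.*firewall'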
2016 Mar 23
7
/proc/meminfo
Has anyone seen this issue? We're running containers under CentOS 7.2, and some of these containers are reporting incorrect memory figures in /proc/meminfo. The output below comes from a system with 32 GB of memory and 84 GB of swap; the values reported are completely wrong.
# cat /proc/meminfo
MemTotal:       9007199254740991 kB
MemFree:        9007199224543267 kB
MemAvailable:   12985680
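A quick, hedged way to compare what the container reports against what the kernel actually enforces for it (paths assume cgroup v1, as on CentOS 7, with the memory controller mounted inside the container):

    # Inside the affected container:
    head -n 3 /proc/meminfo
    cat /sys/fs/cgroup/memory/memory.limit_in_bytes
    cat /sys/fs/cgroup/memory/memory.usage_in_bytes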
2019 Oct 28
1
libvirt_lxc memory limit, emulator process part of the cgroup?
Hi, I am currently investigating a bug with libvirt LXC. Whenever I do a systemctl daemon-reload on the host, my container loses its memory limit and then reports having access to 8 exabytes of memory. I have tracked the issue down to two parts: memory.limit_in_bytes jumps from the correct value to 9223372036854771712, and libvirt LXC appears to set the memory limit in a transient way without writing
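For what it's worth, 9223372036854771712 is the cgroup v1 "no limit" default (LONG_MAX rounded down to the page size), so the limit is effectively being cleared rather than corrupted. A hedged sketch to observe it; the scope directory name is a placeholder for the container's actual cgroup:

    cd /sys/fs/cgroup/memory/machine.slice/machine-lxc\\x2dmycontainer.scope
    cat memory.limit_in_bytes    # correct value right after the container starts
    systemctl daemon-reload      # run on the host
    cat memory.limit_in_bytes    # 9223372036854771712 once the bug hits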
2015 Oct 29
2
How to retrieve legacy cgroups location ?
Hi, as described on the libvirt "Control Groups Resource Management" page: Legacy cgroups layout - prior to libvirt 1.0.5, the cgroups layout created by libvirt was different from that described above, and did not allow for administrator customization. Libvirt used a fixed, 3-level hierarchy libvirt/{qemu,lxc}/$VMNAME which was rooted at the point in the hierarchy where libvirtd itself was
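A hedged sketch for locating a guest's cgroup under either layout; the guest name is a placeholder:

    VM=myguest
    # Legacy fixed three-level layout (libvirt older than 1.0.5):
    ls /sys/fs/cgroup/memory/libvirt/lxc/$VM
    # Current layout on a systemd host such as CentOS 7 (scope names escape '-' as \x2d):
    ls -d /sys/fs/cgroup/memory/machine.slice/machine-lxc* | grep "$VM"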
2015 Aug 04
1
Does CTDB run under LXC containers?
I'm using libvirt_lxc, which has an XML-based configuration. Based on what I've read, I think I need to add this to the ctdb container's config:
<features>
  <capabilities policy='default'>
    <sys_nice state='on'/>
  </capabilities>
</features>
That didn't do the trick though. I need to figure out how to turn on all caps to
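Before changing the XML further, a hedged way to confirm which capabilities PID 1 of the container actually ended up with (requires the libcap tools inside the container); CTDB's real-time threads need cap_sys_nice to show up in the list:

    # Inside the running container: decode the capability bounding set of PID 1.
    capsh --decode=$(awk '/CapBnd/ {print $2}' /proc/1/status)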
2016 Jan 29
2
Zombie processes being created when console buffer is full
We have been researching stuck zombie processes in our libvirt LXC containers. What we found was:
1) Each zombie's parent was pid 1, init, which symlinks to systemd.
2) In some cases the zombies were launched by systemd; in others the zombie was inherited.
3) While the child is in the zombie state, the parent process (systemd) shows no pending signals in /proc/1/status.
4) Attaching gdb to
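A small hedged helper to list the zombies and re-check the parent's signal state described above:

    # Zombie processes with their parent PID and command name.
    ps -eo pid,ppid,stat,comm | awk '$3 ~ /^Z/'
    # Pending/blocked signal masks of PID 1 (systemd).
    grep -E '^(SigPnd|ShdPnd|SigBlk)' /proc/1/status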
2013 Jul 31
2
start lxc container on fedora 19
Hello, I am new to LXC. I have created an LXC container on Fedora 19; I built the container rootfs for Fedora 19 by using yum --installroot=/containers/test1 --releasever=19 install openssh. The test1.xml file for container test1:
<domain type="lxc">
  <name>test1</name>
  <vcpu placement="static">1</vcpu>
  <cputune>
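For completeness, a hedged sketch of the remaining steps once the rootfs and test1.xml exist, using the names from the post:

    yum --installroot=/containers/test1 --releasever=19 install openssh
    virsh -c lxc:/// define test1.xml
    virsh -c lxc:/// start test1
    virsh -c lxc:/// console test1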
2016 Jul 10
1
lxc containers won't start in a f24 custom install - odd cgroup fs layout observed
Hi folks, I use libvirt to programmatically spawn LXC containers, and I am facing an issue when migrating from Fedora 23 to Fedora 24. I use the stock kernel and libvirt version on both deployments, i.e.:
f23: libvirt-1.2.18.3-2.fc23.x86_64 - kernel 4.5.7-202.fc23.x86_64
f24: libvirt-1.3.3.1-4.fc24.x86_64 - kernel 4.6.3-300.fc24.x86_64
First off, I need to outline that the host installation is done
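A hedged first comparison between the two hosts is simply the cgroup layout each one presents, since that is what the subject line says changed:

    # Run on both the f23 and the f24 host and compare the output.
    mount | grep cgroup
    cat /proc/self/cgroup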
2014 Sep 15
2
cgroups inside LXC containers lose memory limits after some time
Hi all, I have CentOS Linux release 7.0.1406 with libvirt 1.2.7 installed. Just after I create and start an LXC container, the cgroups are present inside it. Example for memory:
[root@ce7-t1 /]# ls -la /sys/fs/cgroup/memory/
total 0
drwxr-xr-x  2 root root   0 Sep 15 17:14 .
drwxr-xr-x 12 root root 280 Sep 15 17:14 ..
-rw-r--r--  1 root root   0 Sep 15 17:14 cgroup.clone_children
--w--w--w-  1 root root   0 Sep 15
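Since the limits disappear only after some time, a hedged way to catch the moment from the host is to poll the limit and timestamp it; the scope directory name below is a placeholder, and the systemctl daemon-reload trigger reported in the 2019 thread above is worth correlating with:

    F=/sys/fs/cgroup/memory/machine.slice/machine-lxc\\x2dce7\\x2dt1.scope/memory.limit_in_bytes
    while sleep 60; do echo "$(date) $(cat $F)"; done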
2013 Jul 06
2
Permission problem with /dev/net/tun
Hi lxc folks, the symptom my libvirt LXC container suffers from is:
root@depot:/dev/net# ls -la
total 0
drwxr-xr-x 2 root root  40 Jun 29 16:26 .
drwxr-xr-x 5 root root 480 Jun 29 16:26 ..
root@depot:/dev/net# mknod tun c 10 200
mknod: `tun': Operation not permitted
The host is an up-to-date AMD64 Ubuntu raring on 3.8.0-25-generic that was
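Assuming it is the devices cgroup that rejects the mknod (a dropped CAP_MKNOD gives the same error), a hedged host-side workaround is to whitelist the tun character device for the container's cgroup; the path below is a guess based on the legacy libvirt layout, and the cleaner fix would be to declare the device in the container XML instead.

    # On the host (path is an assumption -- locate the container's devices cgroup first):
    CG=/sys/fs/cgroup/devices/libvirt/lxc/depot
    echo 'c 10:200 rwm' > $CG/devices.allow
    # Then, inside the container:
    mknod /dev/net/tun c 10 200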
2016 Apr 26
0
Re: /proc/meminfo
Now reproduced 100% of the time: 1) create a container with a 1 GB memory limit; 2) run a simple memory-test allocator inside it:

    #include <malloc.h>
    #include <unistd.h>
    #include <memory.h>
    #include <stdio.h>

    #define MB (1024 * 1024)

    int main() {
        int total = 0;
        while (1) {
            void *p = malloc( 100*MB );
            memset( p, 0, 100*MB );
            total = total + 100;
            printf("Alloc %d Mb\n", total);
            sleep(1);
        }
        return 0;
    }
2015 Aug 04
3
Does CTDB run under LXC containers?
We're transitioning from a VM-based environment to one that uses LXC-based containers running under CentOS 7. CTDB runs fine under our CentOS 7 VMs; the same packages running under LXC, however, seem to have issues:
# systemctl start ctdb.service
Job for ctdb.service failed. See 'systemctl status ctdb.service' and 'journalctl -xn' for details.
# systemctl status ctdb.service
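Given that CTDB uses SCHED_FIFO threads (see the related 2017 thread above), a hedged first check from inside the container before digging through the journal:

    chrt -f 1 true && echo "RT scheduling ok" || echo "RT scheduling denied"
    ulimit -r    # maximum real-time priority allowed here
    journalctl -xn -u ctdb.service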
2008 Nov 13
6
[PATCH 0/8] I/O bandwidth controller and BIO tracking
Hi everyone, This is a new release of dm-ioband and bio-cgroup. With this release, the overhead of bio-cgroup is significantly reduced and the accuracy of block I/O tracking is much improved. These patches are for 2.6.28-rc2-mm1. Enjoy it! dm-ioband ========= Dm-ioband is an I/O bandwidth controller implemented as a device-mapper driver, which gives specified bandwidth to each job running on