Displaying 20 results from an estimated 300 matches similar to: "Centos 7.1.1503 + libvirt 1.2.14 = broken direct network mode"
2015 Apr 09
0
Re: Centos 7.1.1503 + libvirt 1.2.14 = broken direct network mode
On 04/08/2015 09:38 AM, mxs kolo wrote:
> Hi all.
>
> I use LXC on CentOS 7 x86-64, with libvirt versions 1.2.6 and 1.2.12.
> My container has a bridged network:
> # virsh dumpxml test1
> <domain type='lxc'>
> <name>test1</name>
> <uuid>518539ab-7491-45ab-bb1d-3d7f11bfb0b1</uuid>
> <memory
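For reference, a complete LXC domain definition with a 'direct' (macvlan) interface of the kind discussed in this thread would look roughly like the sketch below. The name and uuid are taken from the message; the memory size, rootfs path and host NIC are illustrative assumptions, since the dumpxml output above is cut off before those elements.

  <domain type='lxc'>
    <name>test1</name>
    <uuid>518539ab-7491-45ab-bb1d-3d7f11bfb0b1</uuid>
    <memory unit='KiB'>1048576</memory>            <!-- illustrative 1 GiB limit -->
    <os>
      <type>exe</type>
      <init>/sbin/init</init>
    </os>
    <devices>
      <filesystem type='mount'>
        <source dir='/var/lib/libvirt/lxc/test1'/> <!-- assumed rootfs location -->
        <target dir='/'/>
      </filesystem>
      <interface type='direct'>
        <!-- type='direct' creates a macvlan device (e.g. macvlan0) on top of the host NIC -->
        <source dev='eth0' mode='bridge'/>
      </interface>
      <console type='pty'/>
    </devices>
  </domain>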
2014 Sep 16
2
1.2.7 and 1.2.8 fail to start container: libvirt_lxc[4904]: segfault at 0 ip ...error 4 in libc-2.17.so[
Hi all
Centos 7, 3.10.0-123.6.3.el7.x86_64
libvirt 1.2.7 and libvirt 1.2.8, built from source with
./configure --prefix=/usr
make && make install
LXC with direct network failed to start:
Sep 16 19:19:38 node01 kernel: device br502 entered promiscuous mode
Sep 16 19:19:39 node01 kernel: device br502 left promiscuous mode
Sep 16 19:19:39 node01 avahi-daemon[1532]: Withdrawing workstation
2014 Sep 15
2
cgroups inside LXC containers lose memory limits after some time
Hi all
I have CentOS Linux release 7.0.1406, libvirt 1.2.7 installed.
Just after the container is created and started, the cgroups are present inside the LXC container.
Example for memory:
[root@ce7-t1 /]# ls -la /sys/fs/cgroup/memory/
total 0
drwxr-xr-x 2 root root 0 Sep 15 17:14 .
drwxr-xr-x 12 root root 280 Sep 15 17:14 ..
-rw-r--r-- 1 root root 0 Sep 15 17:14 cgroup.clone_children
--w--w--w- 1 root root 0 Sep 15
2014 Sep 15
0
Re: cgroups inside LXC containers lose memory limits after some time
Hi all
> After an unpredictable time has passed (1-5 days?), the cgroups inside the LXC
> container are magically removed.
The virsh dumpxml config looks like this:
<domain type='lxc' id='3566'>
<name>puppet</name>
<uuid>6d49b280-5686-4e3c-b048-1b5d362fb137</uuid>
<memory unit='KiB'>8388608</memory>
<currentMemory
2018 Aug 01
3
LXC Memory Limits won't work
Hello,
I am currently trying to run LXC containers with libvirt,
but the memory limit doesn't take effect:
in the container I see the full 32 GB from the host OS.
I am pretty sure that I am missing a config line in the XML.
lxc-template ~ # free -m
              total        used        free      shared  buff/cache   available
Mem:          32108         626       31396         249          85
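The limit such a container is missing is normally set with the top-level <memory> element of the domain XML (optionally backed by <memtune>). A minimal sketch with an illustrative 2 GiB cap, not values from the original posting:

  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
  <memtune>
    <!-- enforced through the container's memory cgroup -->
    <hard_limit unit='KiB'>2097152</hard_limit>
  </memtune>

Note that free(1) inside the container only reflects the limit if libvirt's fuse-backed /proc/meminfo is active (the LXC driver provides one when built with fuse support); without it the container keeps reporting the host's full 32 GB.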
2013 Jul 31
2
start lxc container on fedora 19
hello,
I am new to LXC; I have created an LXC container on Fedora 19.
I created a container rootfs of Fedora 19 by using
yum --installroot=/containers/test1 --releasever=19 install openssh
test1.xml file for container test1
<domain type="lxc">
<name>test1</name>
<vcpu placement="static">1</vcpu>
<cputune>
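Besides <vcpu>/<cputune>, an LXC domain generally needs an <os> block with an <init> binary and a root <filesystem> before virsh will start it. A minimal sketch of what the rest of test1.xml might contain, reusing the rootfs path from the yum --installroot command above; the memory value is only an illustrative assumption:

  <domain type="lxc">
    <name>test1</name>
    <memory unit="KiB">524288</memory>
    <vcpu placement="static">1</vcpu>
    <os>
      <type>exe</type>
      <init>/sbin/init</init>
    </os>
    <devices>
      <filesystem type="mount">
        <source dir="/containers/test1"/>
        <target dir="/"/>
      </filesystem>
      <console type="pty"/>
    </devices>
  </domain>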
2012 Oct 18
0
0.10.x incorrectly reporting currentMemory size
Hi,
<memory unit='KiB'>16777216</memory>
<currentMemory unit='KiB'>1394380</currentMemory>
<memtune>
<hard_limit unit='KiB'>1594380</hard_limit>
<soft_limit unit='KiB'>1494380</soft_limit>
</memtune>
results in:
0.10.x, 'dominfo' or dumpxml | grep -i currentmemory
Max memory: 16777216
2013 Jul 31
0
Re: start lxc container on fedora 19
On Wed, Jul 31, 2013 at 12:46:58PM +0530, Aarti Sawant wrote:
> hello,
>
> I am new to LXC; I have created an LXC container on Fedora 19.
> I created a container rootfs of Fedora 19 by using
> yum --installroot=/containers/test1 --releasever=19 install openssh
>
> test1.xml file for container test1
> <domain type="lxc">
> <name>test1</name>
2015 Apr 08
0
Re: Centos 7.1.1503 + libvirt 1.2.14 = broken direct network mode
> And all was fine before I accidentally upgraded CentOS to 7.1.1503.
> After the upgrade LXC can't start, with this diagnostic:
> [root@node14 ~]# virsh start test1
> error: Failed to start domain test1
> error: internal error: guest failed to start: internal error: Child
> process (ip link set macvlan0 netns 25263) unexpected exit status 2:
> RTNETLINK answers: Invalid argument
2019 Jan 21
2
libvirt 5.0.0 - LXC container still in "virsh list" output after shutdown
Hello.
CentOS 7.6 with libvirt built from the base "virt" repository:
libvirt-daemon-driver-lxc-5.0.0-1.el7.x86_64
libvirt-client-5.0.0-1.el7.x86_64
libvirt-daemon-5.0.0-1.el7.x86_64
libvirt-daemon-driver-network-5.0.0-1.el7.x86_64
libvirt-libs-5.0.0-1.el7.x86_64
+
systemd-219-62.el7_6.2.x86_64
Now lxc containers with type='direct' can be started, but can't be stopped :)
2016 Mar 23
7
/proc/meminfo
Has anyone seen this issue? We're running containers under CentOS 7.2
and some of these containers are reporting incorrect memory allocation
in /proc/meminfo. The output below comes from a system with 32 GB of
memory and 84 GB of swap. The values reported are completely wrong.
# cat /proc/meminfo
MemTotal: 9007199254740991 kB
MemFree: 9007199224543267 kB
MemAvailable: 12985680
2019 Mar 12
2
KVM-Docker-Networking using TAP and MACVLAN
Hi everyone!
I have the following requirement: I need to connect a set of Docker
containers to a KVM. The containers shall be isolated in a way that they
cannot communicate with each other without going through the KVM, which
will act as router/firewall. For this, I thought about the following
simple setup (as opposed to a more complex one involving a bridge with
vlan_filtering and a separate VLAN
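On the libvirt side, one way to hand the KVM guest a pre-created TAP device for such a router/firewall role is an <interface type='ethernet'> pointing at an existing tap. This is a rough sketch only: the device name tap-docker0, and the idea that the host routes it to the Docker macvlan network, are assumptions rather than details stated in the message.

  <interface type='ethernet'>
    <!-- use a tap device created and configured outside libvirt -->
    <target dev='tap-docker0' managed='no'/>
    <model type='virtio'/>
  </interface>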
2019 Apr 03
1
SEV machines and memory pinning
Hello,
I am working on implementing SEV support in OpenStack. There are some
questions that came up in the discussion of the spec [0]
[0] https://review.openstack.org/#/c/641994/
As far as I understand, the memory for SEV machines needs to be pinned so
that it is not swapped out or moved by page migration. ROMs, UEFI pflash
and video RAM should be pinned too.
Initially we planned to use
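In domain XML terms, that pinning usually ends up as a <memtune> hard limit sized above the guest RAM (to leave room for ROMs, pflash and video RAM) next to the SEV launch security element. A sketch with illustrative numbers only; cbitpos and reducedPhysBits really come from the host's SEV capabilities:

  <memory unit='KiB'>4194304</memory>
  <memtune>
    <!-- must exceed guest RAM to cover ROMs, UEFI pflash and video RAM -->
    <hard_limit unit='KiB'>4718592</hard_limit>
  </memtune>
  <launchSecurity type='sev'>
    <cbitpos>47</cbitpos>
    <reducedPhysBits>1</reducedPhysBits>
    <policy>0x0003</policy>
  </launchSecurity>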
2016 Nov 18
2
locking domain memory
Hi,
is there a way to lock a guest's memory so it doesn't get swapped out? I
know there is memoryBacking->locked, but that says it requires
memtune->hard_limit, and the description of that basically says "don't ever do
this", rendering the locked element kind of pointless.
How can I prevent the guests memory from being swapped out without
shooting myself in the foot?
Regards,
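For reference, the combination under discussion looks roughly like this in the domain XML; the 4 GiB guest size and the hard_limit value (guest RAM plus some headroom) are illustrative assumptions:

  <memory unit='KiB'>4194304</memory>
  <memoryBacking>
    <!-- mlock all guest memory so it can never be swapped out -->
    <locked/>
  </memoryBacking>
  <memtune>
    <!-- required alongside <locked/>; too low a value gets the guest OOM-killed -->
    <hard_limit unit='KiB'>4718592</hard_limit>
  </memtune>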
2012 Oct 14
0
LXC guest memory recycling under RHEL6
Hello, I'm running an LXC guest using libvirtd under RHEL6. The guest
has this particular memory behavior that I'm unable to explain.
I have the following memory settings for this guest. Inside the
guest I'm running an Apache webserver serving PHP scripts and static
content.
<memory unit='GB'>2</memory>
<currentMemory
2016 Apr 26
2
Re: /proc/meminfo
On 04/26/2016 07:44 AM, mxs kolo wrote:
> Now reproduced 100% of the time:
> 1) create a container with a memory limit of 1 GB
> 2) run a simple memory test allocator inside it:
> #include <malloc.h>
> #include <unistd.h>
> #include <memory.h>
> #define MB 1024 * 1024
> int main() {
> int total = 0;
> while (1) {
> void *p = malloc( 100*MB );
>
2016 Nov 22
1
Re: locking domain memory
On 21.11.2016 17:05, Michal Privoznik wrote:
> On 18.11.2016 23:17, Dennis Jacobfeuerborn wrote:
>> Hi,
>> is there a way to lock a guest's memory so it doesn't get swapped out? I
>> know there is memoryBacking->locked, but that says it requires
>> memtune->hard_limit, and the description of that basically says "don't ever do
>> this", rendering the
2015 Feb 23
1
Re: HugePages - can't start guest that requires them
On 20.02.2015 21:32, G. Richard Bellamy wrote:
> <snip/>
>
> I've modified my config [1] based on my understanding, and am running
> into a new error. Basically I'm hitting the oom-killer [2] even though
> the hard_limit [3] of memtune is below the total number of hugepages
> set for that NUMA nodeset.
>
Just drop the hard_limit. It's a blackbox we should
2015 Feb 10
2
Re: HugePages - can't start guest that requires them
On 09.02.2015 18:19, G. Richard Bellamy wrote:
> First I'll quickly summarize my understanding of how to configure NUMA...
>
> In "//memoryBacking/hugepages/page[@nodeset]" I am telling libvirt to
> use hugepages for the guest, and to get those hugepages from a
> particular host NUMA node.
No, @nodeset refers to guest NUMA nodes.
>
> In
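Put differently: in <memoryBacking> the nodeset attribute of <page> selects guest NUMA cells, while placement on host NUMA nodes is expressed separately in <numatune>. A hedged sketch with illustrative page sizes and node numbers, not the poster's actual config:

  <memoryBacking>
    <hugepages>
      <!-- nodeset = guest NUMA cell(s) to back with pages of this size -->
      <page size='1048576' unit='KiB' nodeset='0'/>
      <page size='2048' unit='KiB' nodeset='1'/>
    </hugepages>
  </memoryBacking>
  <numatune>
    <!-- host placement: which host node backs each guest cell -->
    <memnode cellid='0' mode='strict' nodeset='0'/>
    <memnode cellid='1' mode='strict' nodeset='1'/>
  </numatune>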
2016 Apr 26
1
Re: /proc/meminfo
On 04/26/2016 10:01 AM, mxs kolo wrote:
>> Cool, thanks for the info! Does this still affect libvirt 1.3.2 as well? You
>> mentioned elsewhere that you weren't hitting this issue with that version
> Sorry, I missed the version and other details.
> The test was made on CentOS Linux release 7.2.1511 (Core)
> and libvirt 1.3.2, built from sources with the following options:
> --without qemu \