Displaying 20 results from an estimated 600 matches similar to: "1.2.7 and 1.2.8 fail to start container: libvirt_lxc[4904]: segfault at 0 ip ...error 4 in libc-2.17.so["
2014 Sep 18
0
Re: 1.2.7 and 1.2.8 fail to start container: libvirt_lxc[4904]: segfault at 0 ip ...error 4 in libc-2.17.so[
On 16.09.2014 17:40, mxs kolo wrote:
> Hi all
>
> Centos 7, 3.10.0-123.6.3.el7.x86_64
> libvirt 1.2.7 and libvirt 1.2.8, built from source with
> ./configure --prefix=/usr
> make && make install
> LXC with direct network failed to start:
>
> Sep 16 19:19:38 node01 kernel: device br502 entered promiscuous mode
> Sep 16 19:19:39 node01 kernel: device br502 left
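For reference (not part of the quoted report): when libvirt_lxc segfaults at startup, the per-domain LXC log on the host usually holds the last messages; a minimal check, with the container name assumed:
# Host side: retry the start and inspect the per-domain log and kernel messages
virsh -c lxc:/// start mycontainer
cat /var/log/libvirt/lxc/mycontainer.log
journalctl -k | grep libvirt_lxc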
2013 Jul 08
4
Re: Permission problem with /dev/net/tun
Hi Daniel,
On 07/08/2013 11:41 AM, Daniel P. Berrange wrote:
>> the symptom my libvirt LXC container suffers from is:
>> root@depot:/dev/net# ls -la
>> total 0
>> drwxr-xr-x 2 root root  40 Jun 29 16:26 .
>> drwxr-xr-x 5 root root 480 Jun 29 16:26 ..
>> root@depot:/dev/net# mknod tun c 10 200
>> mknod: `tun': Operation
>>
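For context (not from the quoted mail): /dev/net/tun is character device 10,200, so the step being attempted is roughly the following; it normally fails inside a container unless CAP_MKNOD is retained and the devices cgroup permits it.
# Inside the container: recreate the TUN device node (char major 10, minor 200)
mkdir -p /dev/net
mknod /dev/net/tun c 10 200
chmod 0666 /dev/net/tun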
2015 Apr 08
4
CentOS 7.1.1503 + libvirt 1.2.14 = broken direct network mode
Hi all.
I use LXC on CentOS 7 x86-64, with libvirt versions 1.2.6 and 1.2.12.
My container has a bridged network:
# virsh dumpxml test1
<domain type='lxc'>
<name>test1</name>
<uuid>518539ab-7491-45ab-bb1d-3d7f11bfb0b1</uuid>
<memory unit='KiB'>1048576</memory>
<currentMemory unit='KiB'>1048576</currentMemory>
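The dumpxml output is cut off above; purely as a sketch (the source device name is an assumption), a direct-mode interface for such a container could be defined like this and merged into the domain by hand:
cat > /tmp/test1-direct-iface.xml <<'EOF'
<interface type='direct'>
  <source dev='eth0' mode='bridge'/>
</interface>
EOF
virsh -c lxc:/// edit test1   # paste the <interface> element under <devices>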
2010 Apr 29
2
Hardware error or ocfs2 error?
Hello,
today I noticed the following on *only* one node:
----- cut here -----
Apr 29 11:01:18 node06 kernel: [2569440.616036] INFO: task ocfs2_wq:5214 blocked for more than 120 seconds.
Apr 29 11:01:18 node06 kernel: [2569440.616056] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr 29 11:01:18 node06 kernel: [2569440.616080] ocfs2_wq D
2011 Jun 23
1
Public Folder ACL Problem
Hi All,
I am trying to set up ACLs on public folders. I have a public namespace in
dovecot.conf like this:
namespace public {
separator = /
prefix = Public/
location = maildir:/var/mail/public/
subscriptions = no
}
and now under the public folder I have two subfolders, .test and .test1. I have
created a dovecot-acl file under .test so that it can be seen and subscribed to,
but I can't see
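For reference (not from the quoted mail): a dovecot-acl file holds one identifier and its rights per line; a minimal sketch, assuming the maildir layout quoted above and that the acl plugin (acl = vfile) is enabled:
# Grant all users lookup/read/seen rights on the .test public folder
cat > /var/mail/public/.test/dovecot-acl <<'EOF'
anyone lrs
EOF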
2013 Jul 09
2
[PATCH 2/2] LXC: hostdev: create parent directory for hostdev atomically
Create the parent directory for a hostdev atomically when we
start an LXC domain or attach a hostdev to an LXC domain.
Signed-off-by: Gao feng <gaofeng@cn.fujitsu.com>
---
src/lxc/lxc_container.c | 42 ++++++++++++++++++++++++++++--------------
src/lxc/lxc_driver.c | 14 ++++++++++++++
2 files changed, 42 insertions(+), 14 deletions(-)
diff --git a/src/lxc/lxc_container.c
2019 May 03
3
Aw: Re: very high traffic without any load
2012 Sep 12
2
Ocfs2-users Digest, Vol 105, Issue 4
This seems to be an RPM compatibility issue with the OS kernel.
Check the OS kernel version and download the RPMs (4 of them) built for the same kernel.
Regards,
Yuvrajsinh Chauhan || Sr. DBA || CRESTEL-PSG
Elitecore Technologies Pvt. Ltd.
904, Silicon Tower || Off C.G.Road
Behind Pariseema Building || Ahmedabad || INDIA
[GSM]: +91 9727746022
-----Original Message-----
From: ocfs2-users-bounces at oss.oracle.com
[mailto:ocfs2-users-bounces
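As a quick reference (not part of the quoted reply), the kernel/ocfs2 package match can be checked like this:
# Compare the running kernel against the installed ocfs2 packages
uname -r
rpm -qa | grep -i ocfs2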
2014 Sep 15
2
cgroups inside LXC containers lose memory limits after some time
Hi all
I have CentOS Linux release 7.0.1406 with libvirt 1.2.7 installed.
Just after creating and starting an LXC container, the cgroups are present inside it.
Example for memory:
[root@ce7-t1 /]# ls -la /sys/fs/cgroup/memory/
total 0
drwxr-xr-x 2 root root 0 Sep 15 17:14 .
drwxr-xr-x 12 root root 280 Sep 15 17:14 ..
-rw-r--r-- 1 root root 0 Sep 15 17:14 cgroup.clone_children
--w--w--w- 1 root root 0 Sep 15
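For reference (not from the quoted mail), the effective limit sits in the same directory; a sketch of the check, run inside the container:
# A very large value here effectively means "no limit"
cat /sys/fs/cgroup/memory/memory.limit_in_bytes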
2017 Jul 19
2
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
[Adding gluster-users]
On Wed, Jul 19, 2017 at 2:52 PM, yayo (j) <jaganz at gmail.com> wrote:
> Hi all,
>
> We have a hyperconverged oVirt cluster with hosted engine on 3 fully
> replicated nodes. This cluster has 2 Gluster volumes:
>
> - data: volume for the Data (Master) Domain (for VMs)
> - engine: volume for the hosted_storage Domain (for the hosted engine)
>
>
2017 Jul 19
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
On 07/19/2017 08:02 PM, Sahina Bose wrote:
> [Adding gluster-users]
>
> On Wed, Jul 19, 2017 at 2:52 PM, yayo (j) <jaganz at gmail.com
> <mailto:jaganz at gmail.com>> wrote:
>
> Hi all,
>
> We have a hyperconverged oVirt cluster with hosted engine on 3
> fully replicated nodes. This cluster has 2 Gluster volumes:
>
> - data: volume for
2018 Feb 08
1
How to fix an out-of-sync node?
I have a setup with 3 nodes running GlusterFS.
gluster volume create myBrick replica 3 node01:/mnt/data/myBrick
node02:/mnt/data/myBrick node03:/mnt/data/myBrick
Unfortunately node1 seemed to stop syncing with the other nodes, but this
was undetected for weeks!
When I noticed it, I did a "service glusterd restart" on node1, hoping the
three nodes would sync again.
But this did not
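For reference (not from the quoted mail): a typical follow-up is to check the heal status and trigger a full self-heal, using the volume name from the create command above:
# Check replication status and ask the self-heal daemon to re-sync the volume
gluster volume status myBrick
gluster volume heal myBrick info
gluster volume heal myBrick full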
2007 Feb 03
1
GSSAPI authentication behind HA servers
Hi all,
We have 2 mail servers sitting behind Linux-HA machines. The mail
servers are currently running Dovecot 1.0rc2.
Looking to enable GSSAPI authentication, I exported krb keytabs for
imap/node01.domain at REALM and imap/node02.domain at REALM for both mail
servers.
However, clients are connecting to mail.domain.com, which results in a
mismatch as far as the keytab is concerned (and rightly
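For reference (not from the quoted mail): one common approach is to also create and export a principal for the service name the clients actually use; the realm and keytab path below are placeholders:
# On the KDC / via kadmin: add a principal for the HA name and export it to the keytab
kadmin -q "addprinc -randkey imap/mail.domain.com@REALM"
kadmin -q "ktadd -k /etc/dovecot.keytab imap/mail.domain.com@REALM"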
2014 Sep 15
0
Re: cgroups inside LXC containers lose memory limits after some time
Hi all
>After an unpredictable time (1-5 days?), the cgroups inside the LXC
>container are magically removed.
The virsh dumpxml config looks like this:
<domain type='lxc' id='3566'>
<name>puppet</name>
<uuid>6d49b280-5686-4e3c-b048-1b5d362fb137</uuid>
<memory unit='KiB'>8388608</memory>
<currentMemory
2017 Jul 20
2
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
Hi,
Thank you for the answer and sorry for the delay:
2017-07-19 16:55 GMT+02:00 Ravishankar N <ravishankar at redhat.com>:
> 1. What does the glustershd.log say on all 3 nodes when you run the
> command? Does it complain about these files at all?
>
No, glustershd.log is clean; there are no extra log entries after the command on all 3 nodes
> 2. Are these 12 files also present in the 3rd data brick?
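For reference, the command and log discussed above are presumably these (volume name taken from the thread):
gluster volume heal engine info
less /var/log/glusterfs/glustershd.log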
2019 May 01
4
very high traffic without any load
Hi everyone,
I am new to tinc and am currently trying to set up a full IPv6 mesh between 4 servers of mine. Setting it up went smoothly and all of the tinc clients connect properly. Routing through the network works fine as well. There is, however, a large amount of management traffic, which I assume should not be the case.
Here is a quick snapshot using "tinc -n netname top"
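For reference (not from the quoted mail): a minimal per-node tinc.conf for such a four-node mesh might look like this; the netname, node names, and mode are assumptions:
cat > /etc/tinc/netname/tinc.conf <<'EOF'
Name = node01
AddressFamily = ipv6
Mode = router
ConnectTo = node02
ConnectTo = node03
ConnectTo = node04
EOF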
2006 Apr 13
1
Prototyping for basejail distribution
Hi,
I attach 2 files to this email; the first is a Makefile and the second is
jail.conf.
To demonstrate my idea I created a "pseudo prototype"; to test it,
it is necessary to:
1 - Create dir /usr/local/basejail
2 - Copy Makefile to /usr/local/basejail
3 - Copy jail.conf to /etc
4 - The initial basejail is precompiled and distributed on CD1;
to simulate the basejail it is necessary a
2011 Apr 21
7
[PATCHv11 0/6] libvirt/qemu - persistent modification of devices
Here is v11. Fixed comments/bugs and updated against the latest libvirt.git.
Changes v10->v11:
- fixed comments on each patch
- fixed cgroup handling in patch 3.
- fixed MODIFY_CURRENT handling in patch 4.
most of the diff comes from refactoring qemu/qemu_driver.c
--
conf/domain_conf.c | 40 ++
conf/domain_conf.h | 5
libvirt_private.syms | 3
qemu/qemu_driver.c | 727
2016 Mar 23
7
/proc/meminfo
Has anyone seen this issue? We're running containers under CentOS 7.2
and some of these containers are reporting incorrect memory allocation
in /proc/meminfo. The output below comes from a system with 32GB of
memory and 84GB of swap. The values reported are completely wrong.
# cat /proc/meminfo
MemTotal: 9007199254740991 kB
MemFree: 9007199224543267 kB
MemAvailable: 12985680
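One observation (not in the quoted post): 9007199254740991 kB is exactly (2^63 - 1) bytes expressed in kB, i.e. an "unlimited" sentinel rather than a real memory size; whether that is what leaks through here is an assumption, but the arithmetic checks out:
echo $(( 9223372036854775807 / 1024 ))   # (2^63 - 1) / 1024 = 9007199254740991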
2017 Jul 20
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
On 07/20/2017 02:20 PM, yayo (j) wrote:
> Hi,
>
> Thank you for the answer and sorry for delay:
>
> 2017-07-19 16:55 GMT+02:00 Ravishankar N <ravishankar at redhat.com
> <mailto:ravishankar at redhat.com>>:
>
> 1. What does the glustershd.log say on all 3 nodes when you run
> the command? Does it complain about these files at all?
>
>
>