Displaying 20 results from an estimated 300 matches similar to: "/proc/meminfo"
2016 Apr 26
0
Re: /proc/meminfo
Now reproduced with 100% reliability:
1) create a container with a memory limit of 1 GB
2) run this simple memory test allocator inside it:
#include <stdio.h>
#include <malloc.h>
#include <unistd.h>
#include <memory.h>
#define MB 1024 * 1024
int main() {
    int total = 0;
    while (1) {
        /* allocate and touch 100 MB every second until allocation fails */
        void *p = malloc( 100*MB );
        if (p == NULL)
            break;
        memset(p, 0, 100*MB);
        total = total + 100;
        printf("Alloc %d Mb\n", total);
        sleep(1);
    }
    return 0;
}
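For reference, the 1 GB cap from step 1 can come either from the domain XML (<memory unit='KiB'>1048576</memory>) or from writing the cgroup limit directly. A minimal sketch of the latter, with the container's cgroup path left as a placeholder and cgroup v1 assumed:
echo $((1024*1024*1024)) > /sys/fs/cgroup/memory/<container cgroup>/memory.limit_in_bytes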
2016 Apr 26
2
Re: /proc/meminfo
On 04/26/2016 07:44 AM, mxs kolo wrote:
> Now reproduced with 100% reliability:
> 1) create a container with a memory limit of 1 GB
> 2) run this simple memory test allocator inside it:
> #include <malloc.h>
> #include <unistd.h>
> #include <memory.h>
> #define MB 1024 * 1024
> int main() {
> int total = 0;
> while (1) {
> void *p = malloc( 100*MB );
>
2014 Sep 15
2
cgroups inside LXC containers lose memory limits after some time
Hi all
I have CentOS Linux release 7.0.1406, libvirt 1.2.7 installed.
Just after creation and start, the cgroups are present inside the LXC container.
Example for memory:
[root@ce7-t1 /]# ls -la /sys/fs/cgroup/memory/
total 0
drwxr-xr-x 2 root root 0 Sep 15 17:14 .
drwxr-xr-x 12 root root 280 Sep 15 17:14 ..
-rw-r--r-- 1 root root 0 Sep 15 17:14 cgroup.clone_children
--w--w--w- 1 root root 0 Sep 15
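A quick way to check from inside such a container whether the limit is still in effect (assuming the cgroup v1 memory controller shown above) is to read the limit file directly:
cat /sys/fs/cgroup/memory/memory.limit_in_bytes
While the limit is applied this prints the configured value in bytes; if it has been dropped back to the default, it shows a huge value around 2^63 instead.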
2016 Mar 24
0
Re: /proc/meminfo
Hi all
> Has anyone seen this issue? We're running containers under CentOS 7.2
> and some of these containers are reporting incorrect memory allocation
> in /proc/meminfo. The output below comes from a system with 32G of
> memory and 84GB of swap. The values reported are completely wrong.
Yes, it occurs from time to time on our installations.
CentOS 7.2 + libvirt 1.2.18 and
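When it happens, a side-by-side check makes the mismatch easy to show (a trivial sketch; run it once on the host and once inside an affected container and compare):
grep -E 'MemTotal|SwapTotal' /proc/meminfo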
2010 Apr 29
2
Hardware error or ocfs2 error?
Hello,
today I noticed the following on *only* one node:
----- cut here -----
Apr 29 11:01:18 node06 kernel: [2569440.616036] INFO: task ocfs2_wq:5214 blocked for more than 120 seconds.
Apr 29 11:01:18 node06 kernel: [2569440.616056] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr 29 11:01:18 node06 kernel: [2569440.616080] ocfs2_wq D
2019 May 02
4
Aw: Re: very high traffic without any load
2014 Sep 16
2
1.2.7 and 1.2.8 fail to start container: libvirt_lxc[4904]: segfault at 0 ip ...error 4 in libc-2.17.so[
Hi all,
CentOS 7, 3.10.0-123.6.3.el7.x86_64
libvirt 1.2.7 and libvirt 1.2.8 built from source with
./configure --prefix=/usr
make && make install
An LXC container with a direct network interface failed to start:
Sep 16 19:19:38 node01 kernel: device br502 entered promiscuous mode
Sep 16 19:19:39 node01 kernel: device br502 left promiscuous mode
Sep 16 19:19:39 node01 avahi-daemon[1532]: Withdrawing workstation
2011 Jun 23
1
Public Folder ACL Problem
Hi All,
I am trying to set up ACLs on public folders. I have a public namespace in
dovecot.conf like this:
namespace public {
separator = /
prefix = Public/
location = maildir:/var/mail/public/
subscriptions = no
}
and now under the public folder I have two subfolders, .test and .test1. I have
created a dovecot-acl file under .test so that it can be seen and subscribed to,
but I can't see
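For comparison, a minimal dovecot-acl that makes a public subfolder visible and readable to all logged-in users could be a single line placed at /var/mail/public/.test/dovecot-acl (a sketch only; it assumes the acl plugin is enabled with acl = vfile in the plugin block, and the rights you actually need may differ):
authenticated lr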
2019 Oct 28
1
libvirt_lxc memory limit, emulator process part of the cgroup?
Hi,
I am currently investigating a bug with libvirt lxc. Whenever I do a
systemctl daemon-reload on the host, my container loses its memory limit
and then reports having access to 8 exabytes of memory.
I have tracked the issue down to two parts:
memory.limit_in_bytes jumps from the correct value to 9223372036854771712.
libvirt lxc appears to set the memory limit in a transient way, without
writing
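For what it's worth, that number is exactly what the cgroup v1 memory controller uses for "no limit": 2^63 = 9223372036854775808, and 9223372036854775808 - 4096 = 9223372036854771712, i.e. the largest signed 64-bit value rounded down to the 4 KiB page size. 2^63 bytes is 8 EiB, which matches the roughly 8 exabytes the container reports once its real limit is gone.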
2014 Jan 30
2
Dynamically setting permanent memory libvirt-lxc
I'm trying to permanently change the memory allocation for a libvirt-lxc domain. So far I have tried changing the value of memory.limit_in_bytes under /cgroup/memory/libvirt/lxc/<container>/. This didn't help; it appears that libvirt does not pick up changes made directly to the cgroup.
My requirements are:
1) Be able to dynamically change the memory of an LXC domain without a reboot
2) The memory change must survive
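One approach that may cover both points is to let libvirt make the change instead of editing the cgroup by hand, so the new value also lands in the persistent domain definition. A sketch, assuming the LXC driver accepts the same flags as the QEMU driver; the container name is a placeholder and the size is in KiB:
virsh -c lxc:/// setmem <container> 2097152 --live --config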
2017 Jul 19
2
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
[Adding gluster-users]
On Wed, Jul 19, 2017 at 2:52 PM, yayo (j) <jaganz at gmail.com> wrote:
> Hi all,
>
> We have an oVirt cluster, hyperconverged with hosted engine, on 3 fully
> replicated nodes. This cluster has 2 gluster volumes:
>
> - data: volume for the Data (Master) Domain (for VMs)
> - engine: volume for the hosted_storage Domain (for the hosted engine)
>
>
2017 Jul 19
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
On 07/19/2017 08:02 PM, Sahina Bose wrote:
> [Adding gluster-users]
>
> On Wed, Jul 19, 2017 at 2:52 PM, yayo (j) <jaganz at gmail.com
> <mailto:jaganz at gmail.com>> wrote:
>
> Hi all,
>
> We have an oVirt cluster, hyperconverged with hosted engine, on 3
> fully replicated nodes. This cluster has 2 gluster volumes:
>
> - data: volume for
2019 May 01
4
very high traffic without any load
Hi everyone,
I am new to using tinc and am currently trying to set up a full IPv6 mesh between 4 of my servers. Setting it up went smoothly and all of the tinc clients connect properly. Routing through the network works fine as well. There is, however, a large amount of management traffic, which I assume should not be the case.
Here is a quick snapshot using "tinc -n netname top"
2017 Jul 20
2
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
Hi,
Thank you for the answer, and sorry for the delay:
2017-07-19 16:55 GMT+02:00 Ravishankar N <ravishankar at redhat.com>:
> 1. What does the glustershd.log say on all 3 nodes when you run the
> command? Does it complain about these files at all?
>
No, glustershd.log is clean; no extra log entries appear after running the command on any of the 3 nodes.
> 2. Are these 12 files also present in the 3rd data brick?
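For reference, both checks boil down to listing what gluster itself still considers unsynced and then dumping the replication xattrs of one of those files on each brick. A sketch, with the volume name taken from this thread and the brick path left as a placeholder:
gluster volume heal engine info
getfattr -d -m . -e hex <brick-path>/<one of the reported files>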
2009 Feb 03
1
Linux HA or Heartbeat IP address question
Hi all,
I am following the guide on HowToForge to get Heartbeat going for two
Apache web servers
(http://www.howtoforge.com/high_availability_heartbeat_centos). A
quick question for anyone who might have a similar setup:
Do I have to assign the service IP to either of the NICs or does
Heartbeat do that automagically?
Thanks
--
"The secret impresses no-one, the trick you use it for is
2017 Jul 20
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
On 07/20/2017 02:20 PM, yayo (j) wrote:
> Hi,
>
> Thank you for the answer and sorry for delay:
>
> 2017-07-19 16:55 GMT+02:00 Ravishankar N <ravishankar at redhat.com
> <mailto:ravishankar at redhat.com>>:
>
> 1. What does the glustershd.log say on all 3 nodes when you run
> the command? Does it complain about these files at all?
>
>
>
2019 May 03
3
Aw: Re: very high traffic without any load
2018 Feb 08
1
How to fix an out-of-sync node?
I have a setup with 3 nodes running GlusterFS.
gluster volume create myBrick replica 3 node01:/mnt/data/myBrick
node02:/mnt/data/myBrick node03:/mnt/data/myBrick
Unfortunately node1 seemed to stop syncing with the other nodes, but this
was undetected for weeks!
When I noticed it, I did a "service glusterd restart" on node1, hoping the
three nodes would sync again.
But this did not
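In this situation the usual next step is to trigger a heal explicitly and watch its progress rather than waiting, e.g. (a sketch using the volume name from the create command above):
gluster volume heal myBrick full
gluster volume heal myBrick info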
2007 Feb 03
1
GSSAPI authentication behind HA servers
Hi all,
We have 2 mail servers sitting behind Linux-HA machines. The mail
servers are currently running dovecot 1.0rc2.
Looking to enable GSSAPI authentication, I exported krb keytabs for
imap/node01.domain at REALM and imap/node02.domain at REALM for both mail
servers.
However, clients are connecting to mail.domain.com, which results in a
mismatch as far as the keytab is concerned (and rightly
2013 Aug 07
2
libvirt possibly ignoring cache=none ?
Hi,
I have an instance with 8G of RAM assigned. All block devices have caching
disabled (cache=none) on the host. However, the cgroup is reporting 4G of
cache associated with the instance (on the host):
# cgget -r memory.stat libvirt/qemu/i-000009fa
libvirt/qemu/i-000009fa:
memory.stat: cache 4318011392
rss 8676360192
...
When I drop all system caches on host..
# echo 3 > /proc/sys/vm/drop_caches
#
..cache