Displaying 20 results from an estimated 45 matches for "node01".
2010 Apr 29
2
Hardware error or ocfs2 error?
...xa/0x20
Apr 29 11:01:18 node06 kernel: [2569440.616825] [<ffffffff810646f0>] ? kthread+0x0/0x81
Apr 29 11:01:18 node06 kernel: [2569440.616840] [<ffffffff81011ba0>] ? child_rip+0x0/0x20
----- cut here -----
On all the others I had the following:
----- cut here -----
Apr 29 11:00:23 node01 kernel: [2570880.752038] INFO: task o2quot/0:2971 blocked for more than 120 seconds.
Apr 29 11:00:23 node01 kernel: [2570880.752059] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr 29 11:00:23 node01 kernel: [2570880.752083] o2quot/0 D 000000000000000...
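For reference, a minimal sketch of the knobs that log message refers to; silencing the hung-task watchdog only hides the warning, it does not address the underlying ocfs2/storage stall:
sysctl kernel.hung_task_timeout_secs               # current threshold (default 120 s)
echo 0 > /proc/sys/kernel/hung_task_timeout_secs   # disables the warning only, not the hang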
2019 May 03
3
Aw: Re: very high traffic without any load
2019 May 02
4
Aw: Re: very high traffic without any load
2014 Sep 16
2
1.2.7 and 1.2.8 fail to start container: libvirt_lxc[4904]: segfault at 0 ip ...error 4 in libc-2.17.so[
Hi all
CentOS 7, 3.10.0-123.6.3.el7.x86_64
libvirt 1.2.7, libvirt 1.2.8 built from source with
./configure --prefix=/usr
make && make install
LXC with direct network failed to start:
Sep 16 19:19:38 node01 kernel: device br502 entered promiscuous mode
Sep 16 19:19:39 node01 kernel: device br502 left promiscuous mode
Sep 16 19:19:39 node01 avahi-daemon[1532]: Withdrawing workstation
service for macvlan0.
Sep 16 19:19:39 node01 kernel: XFS (dm-16): Mounting Filesystem
Sep 16 19:19:39 node01 kernel: XFS...
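A rough sketch for confirming the interface definition and reproducing the failure; the domain name here is hypothetical, and the <interface type='direct'> element is only assumed from "direct network" above:
virsh -c lxc:/// dumpxml mycontainer | grep -A2 "interface type='direct'"
virsh -c lxc:/// start mycontainer
grep libvirt_lxc /var/log/messages | tail    # the reported segfault line
dmesg | tail                                 # promiscuous mode / XFS mount messages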
2011 Jun 23
1
Public Folder ACL Problem
...lic/
location = maildir:/var/mail/public/
subscriptions = no
}
and now under the public folder I have two subfolders, .test and .test1. I have
created a dovecot-acl under .test so that it can be seen and subscribed to,
but I can't see those subfolders in the public folder. The logs say:
Jun 23 17:50:54 node01 dovecot: IMAP(shantanu at techblue.co.in): acl:
initializing backend with data: vfile
Jun 23 17:50:54 node01 dovecot: IMAP(shantanu at techblue.co.in): acl: acl
username = shantanu at techblue.co.in
Jun 23 17:50:54 node01 dovecot: IMAP(shantanu at techblue.co.in): acl: owner =
0
Jun 23 17:50:54 nod...
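A minimal sketch of a per-mailbox ACL file, assuming the standard "<identifier> <rights>" dovecot-acl format; the identities and rights letters are examples only, and 'l' (lookup) is what makes the folder visible at all:
cat > /var/mail/public/.test/dovecot-acl <<'EOF'
anyone lr
user=shantanu lrwstipekxa
EOF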
2014 Sep 18
0
Re: 1.2.7 and 1.2.8 fail to start container: libvirt_lxc[4904]: segfault at 0 ip ...error 4 in libc-2.17.so[
On 16.09.2014 17:40, mxs kolo wrote:
> Hi all
>
> CentOS 7, 3.10.0-123.6.3.el7.x86_64
> libvirt 1.2.7, libvirt 1.2.8 built from source with
> ./configure --prefix=/usr
> make && make install
> LXC with direct network failed to start:
>
> Sep 16 19:19:38 node01 kernel: device br502 entered promiscuous mode
> Sep 16 19:19:39 node01 kernel: device br502 left promiscuous mode
> Sep 16 19:19:39 node01 avahi-daemon[1532]: Withdrawing workstation
> service for macvlan0.
> Sep 16 19:19:39 node01 kernel: XFS (dm-16): Mounting Filesystem
> Sep 16 19...
2012 Sep 12
2
Ocfs2-users Digest, Vol 105, Issue 4
...keepalive delay in ms (>=1000) [2000]:
Specify network reconnect delay in ms (>=2000) [2000]:
Writing O2CB configuration: OK
Loading filesystem "ocfs2_dlmfs": Unable to load filesystem "ocfs2_dlmfs"
Failed
And in the /var/log/messages log I get the error below:
Sep 12 11:21:41 node01 modprobe: FATAL: Module ocfs2_stackglue not found.
Sep 12 11:21:41 node01 modprobe: FATAL: Module ocfs2_dlmfs not found.
Sep 12 11:33:13 node01 modprobe: FATAL: Module ocfs2_stackglue not found.
Sep 12 11:33:13 node01 modprobe: FATAL: Module ocfs2_dlmfs not found.
How can I fix this and get this w...
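The usual first checks for those FATAL messages, sketched below; package names vary by distribution, so the point is simply to confirm the ocfs2 kernel modules exist for the running kernel:
uname -r
find /lib/modules/$(uname -r) -name 'ocfs2*'   # should list ocfs2_stackglue.ko, ocfs2_dlmfs.ko, ...
modprobe -v ocfs2_stackglue
modprobe -v ocfs2_dlmfs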
2016 Apr 26
0
Re: /proc/meminfo
...out
Alloc 100 Mb
Alloc 200 Mb
Alloc 300 Mb
Alloc 400 Mb
Alloc 500 Mb
Alloc 600 Mb
Alloc 700 Mb
Alloc 800 Mb
Alloc 900 Mb
Alloc 1000 Mb
Killed
As you can see, the limit worked and "free" inside the container shows correct values
3) Check the situation outside the container, from the hardware node:
[root@node01]# cat
/sys/fs/cgroup/memory/machine.slice/machine-lxc\\x2d7445\\x2dtst\\x2dmxs2.test.scope/memory.limit_in_bytes
1073741824
4) Check the list of pids in the cgroup (this is the IMPORTANT moment):
[root@node01]# cat
/sys/fs/cgroup/memory/machine.slice/machine-lxc\\x2d7445\\x2dtst\\x2dmxs2.test.scope/tasks
7445...
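A small sketch for step 4, mapping each pid in that tasks file to a process name; the scope path is copied from the post (the \x2d escapes are literal parts of the directory name, written \\x2d on the shell command line above):
CG='/sys/fs/cgroup/memory/machine.slice/machine-lxc\x2d7445\x2dtst\x2dmxs2.test.scope'
for pid in $(cat "$CG/tasks"); do
    printf '%6s %s\n' "$pid" "$(cat /proc/$pid/comm 2>/dev/null)"
done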
2016 Mar 23
7
/proc/meminfo
Has anyone seen this issue? We're running containers under CentOS 7.2
and some of these containers are reporting incorrect memory allocation
in /proc/meminfo. The output below comes from a system with 32 GB of
memory and 84 GB of swap. The values reported are completely wrong.
# cat /proc/meminfo
MemTotal: 9007199254740991 kB
MemFree: 9007199224543267 kB
MemAvailable: 12985680
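Rough arithmetic on that MemTotal, as a sanity check rather than a diagnosis: 9007199254740991 kB is almost exactly 2^63 bytes, which looks like an "unlimited" 64-bit memory limit being reflected back through /proc/meminfo instead of the host's 32 GB:
echo $((9007199254740991 * 1024))   # 9223372036854774784, i.e. ~2^63 bytes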
2016 Apr 26
2
Re: /proc/meminfo
...Alloc 500 Mb
> Alloc 600 Mb
> Alloc 700 Mb
> Alloc 800 Mb
> Alloc 900 Mb
> Alloc 1000 Mb
> Killed
>
> As you can see, the limit worked and "free" inside the container shows correct values
>
> 3) Check the situation outside the container, from the hardware node:
> [root@node01]# cat
> /sys/fs/cgroup/memory/machine.slice/machine-lxc\\x2d7445\\x2dtst\\x2dmxs2.test.scope/memory.limit_in_bytes
> 1073741824
> 4) Check the list of pids in the cgroup (this is the IMPORTANT moment):
> [root@node01]# cat
> /sys/fs/cgroup/memory/machine.slice/machine-lxc\\x2d7445\\x2dtst\\x2dmx...
2017 Jul 19
2
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
...his problem: the "engine" gluster volume always has unsynced
> elements and we can't fix the problem; on the command line we have tried to use
> the "heal" command, but the elements always remain unsynced ....
>
> Below the heal command "status":
>
> [root at node01 ~]# gluster volume heal engine info
> Brick node01:/gluster/engine/brick
> /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.48
> /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.64
> /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.60
> /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.2
> /.shard...
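The commands usually tried at this point (volume name taken from the post); the split-brain listing is worth checking before forcing anything:
gluster volume heal engine info
gluster volume heal engine info split-brain
gluster volume heal engine full      # kick off a full self-heal sweep
gluster volume status engine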
2017 Jul 19
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
..." gluster volume have always unsynced
> elements and we cant' fix the problem, on command line we have
> tried to use the "heal" command but elements remain always
> unsynced ....
>
> Below the heal command "status":
>
> [root at node01 ~]# gluster volume heal engine info
> Brick node01:/gluster/engine/brick
> /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.48
> /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.64
> /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.60
> /.shard/8aa74564-6740-403e-ad51-f56d9ca...
2012 Sep 12
0
Ocfs2-users Digest, Vol 105, Issue 4
...keepalive delay in ms (>=1000) [2000]:
Specify network reconnect delay in ms (>=2000) [2000]:
Writing O2CB configuration: OK
Loading filesystem "ocfs2_dlmfs": Unable to load filesystem "ocfs2_dlmfs"
Failed
And in the /var/log/messages log I get the error below:
Sep 12 11:21:41 node01 modprobe: FATAL: Module ocfs2_stackglue not found.
Sep 12 11:21:41 node01 modprobe: FATAL: Module ocfs2_dlmfs not found.
Sep 12 11:33:13 node01 modprobe: FATAL: Module ocfs2_stackglue not found.
Sep 12 11:33:13 node01 modprobe: FATAL: Module ocfs2_dlmfs not found.
How can I fix this and get this w...
2018 Feb 08
1
How to fix an out-of-sync node?
I have a setup with 3 nodes running GlusterFS.
gluster volume create myBrick replica 3 node01:/mnt/data/myBrick
node02:/mnt/data/myBrick node03:/mnt/data/myBrick
Unfortunately node1 seemed to stop syncing with the other nodes, but this
was undetected for weeks!
When I noticed it, I did a "service glusterd restart" on node1, hoping the
three nodes would sync again.
But this did...
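A sketch of the checks normally done around that restart (volume name from the post above); whether a restart alone re-triggers healing depends on the state of the bricks:
gluster peer status                 # is node01 actually connected to the others?
gluster volume status myBrick       # are all three brick processes up?
gluster volume heal myBrick info    # per-brick list of entries still pending heal
gluster volume heal myBrick full    # force a full self-heal sweep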
2007 Feb 03
1
GSSAPI authentication behind HA servers
Hi all,
We have 2 mail servers sitting behind Linux-HA machines. The mail
servers are currently running dovecot 1.0rc2.
Looking to enable GSSAPI authentication, I exported krb keytabs for
imap/node01.domain at REALM and imap/node02.domain at REALM for both mail
servers.
However, clients are connecting to mail.domain.com, which results in a
mismatch as far as the keytab is concerned (and rightly so).
Connections directly to node01 and node02 work fine for gssapi auth.
I proceeded to export a k...
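The usual workaround, sketched with MIT Kerberos tooling and hypothetical paths: create a principal for the name clients actually use (imap/mail.domain.com) and install the same keytab on both nodes:
kadmin -q "addprinc -randkey imap/mail.domain.com"
kadmin -q "ktadd -k /etc/dovecot/mail.keytab imap/mail.domain.com"
scp /etc/dovecot/mail.keytab node02:/etc/dovecot/mail.keytab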
2019 May 01
4
very high traffic without any load
...er a large amount of management traffic which I assume should not be the case.
Here is a quick snapshot using "tinc -n netname top" showing cumulative values:
Tinc sites          Nodes: 4   Sort: name   Cumulative
Node          IN pkts       IN bytes    OUT pkts      OUT bytes
node01       98749248    67445198848    98746920    67443404800
node02       37877112    25869768704    37878860    25870893056
node03       34607168    23636463616    34608260    23637114880
node04       26262640    17937174528    26262956    17937287168
That's 67GB for node01 in approximately 1.5 ho...
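Back-of-the-envelope check on those numbers (assuming roughly 1.5 h, i.e. 5400 s): 67 GB of cumulative traffic on node01 works out to about 100 Mbit/s sustained, far beyond plausible keepalive/management overhead:
echo $((67445198848 / 5400))               # ~12.5 MB/s
echo $((67445198848 * 8 / 5400 / 1000000)) # ~99 Mbit/s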
2017 Jul 20
2
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
...ll 3 nodes
> 3. Can you provide the output of `gluster volume info` for this volume?
>
Volume Name: engine
Type: Replicate
Volume ID: d19c19e3-910d-437b-8ba7-4f2a23d17515
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node01:/gluster/engine/brick
Brick2: node02:/gluster/engine/brick
Brick3: node04:/gluster/engine/brick
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
storage.owner-uid: 36
performance.quick-read: off
performance.read-ahead: off
perfor...
2017 Jul 20
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
...ume?
>
>
>
> Volume Name: engine
> Type: Replicate
> Volume ID: d19c19e3-910d-437b-8ba7-4f2a23d17515
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: node01:/gluster/engine/brick
> Brick2: node02:/gluster/engine/brick
> Brick3: node04:/gluster/engine/brick
> Options Reconfigured:
> nfs.disable: on
> performance.readdir-ahead: on
> transport.address-family: inet
> storage.owner-uid: 36
>...
2006 Apr 13
1
Prototyping for basejail distribuition
...ame"
flags="-l -U root"
#
# JAIL RC.CONF
#
sendmail_enable="NO"
inetd_flags="-wW -a"
rpcbind_enable="NO"
network_interfaces=""
#
# FILES
#
copy_to_jail="/etc/localtime /etc/resolv.conf /etc/csh.cshrc
/etc/csh.login"
#
# JAILS
#
jail_node01_rootdir="/usr/jail/node01"
jail_node01_hostname="node01.example.com"
jail_node01_ip="127.0.0.1 "
jail_node02_rootdir="/usr/jail/node02"
jail_node02_hostname="node02.example.com"
jail_node02_ip="127.0.0.2 "
-------
At this moment it is possib...
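A guess at the remaining rc.conf knobs the excerpt does not show but that this prototype would need for the jails to start at boot (variable names are the stock rc.d/jail ones):
jail_enable="YES"
jail_list="node01 node02"
# then start them by hand and verify:
/etc/rc.d/jail start
jls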
2009 Feb 03
1
Linux HA or Heartbeat IP address question
Hi all,
I am following the guide on HowToForge to get Heartbeat going for two
Apache web servers
(http://www.howtoforge.com/high_availability_heartbeat_centos). A
quick question for anyone who might have a similar setup:
Do I have to assign the service IP to either of the NICs or does
Heartbeat do that automagically?
Thanks
--
"The secret impresses no-one, the trick you use it for is