search for: node02

Displaying 20 results from an estimated 27 matches for "node02".

2019 May 03
3
Aw: Re: very high traffic without any load
2019 May 02
4
Aw: Re: very high traffic without any load
2011 Jan 17
2
ping_pong using o2cb and cman
I was testing ocfs2 on a 2-node cluster setup. ocfs2-tools version is 1.6.3; ocfs2 kernel version is 2.6.36. Using cman on 2 nodes: node02 dw # ping_pong -rwm /data/test.dat 3 data increment = 2 14 locks/sec node01 dw # ping_pong -rw /data/test.dat 3 data increment = 2 10 locks/sec node02 dw # ping_pong -r /data/test.dat 3 1980 locks/sec Using cman on 1 node: node02 dw # ping_pong -rwm /data/test.dat 3 data incremen...
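A minimal sketch of the test being run, assuming the ping_pong utility from the ctdb/samba test tools; the convention is to pass one more lock slot than there are cluster nodes:

    # Contend on byte-range locks across the cluster; locks/sec is the
    # figure quoted above. On a 2-node cluster the argument is 3 (nodes + 1).
    ping_pong -rw /data/test.dat 3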
2018 Feb 08
1
How to fix an out-of-sync node?
I have a setup with 3 nodes running GlusterFS. gluster volume create myBrick replica 3 node01:/mnt/data/myBrick node02:/mnt/data/myBrick node03:/mnt/data/myBrick Unfortunately node1 seemed to stop syncing with the other nodes, but this was undetected for weeks! When I noticed it, I did a "service glusterd restart" on node1, hoping the three nodes would sync again. But this did not happen. Only the CPU...
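A hedged sketch of the usual recovery sequence for a replica volume in this state (volume name taken from the post; whether a full sweep is needed depends on what heal info reports):

    gluster volume status myBrick      # confirm all bricks and self-heal daemons are online
    gluster volume heal myBrick info   # list entries still pending heal on each brick
    gluster volume heal myBrick full   # force a full self-heal sweep across the replicas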
2007 Feb 03
1
GSSAPI authentication behind HA servers
Hi all, We have 2 mail servers sitting behind linux-HA machines. The mail servers are currently running dovecot 1.0rc2. Looking to enable GSSAPI authentication, I exported krb keytabs for imap/node01.domain@REALM and imap/node02.domain@REALM for both mail servers. However, clients are connecting to mail.domain.com, which results in a mismatch as far as the keytab is concerned (and rightly so). Connections directly to node01 and node02 work fine for gssapi auth. I proceeded to export a key for mail.domain.com into the...
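One common fix, sketched under the assumption of an MIT Kerberos KDC: create a service principal for the shared name clients actually connect to, export it once, and distribute the same keytab to both nodes (keytab path is illustrative):

    kadmin -q "addprinc -randkey imap/mail.domain.com"
    kadmin -q "ktadd -k /etc/dovecot.keytab imap/mail.domain.com"
    # copy the same keytab file to both nodes; extracting the key twice
    # would bump the kvno and leave the two servers with different keys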
2019 May 01
4
very high traffic without any load
...ot be the case. Here is a quick snapshot using "tinc -n netname top" showing cumulative values:

Tinc sites    Nodes: 4    Sort: name    Cumulative
Node      IN pkts     IN bytes       OUT pkts    OUT bytes
node01    98749248    67445198848    98746920    67443404800
node02    37877112    25869768704    37878860    25870893056
node03    34607168    23636463616    34608260    23637114880
node04    26262640    17937174528    26262956    17937287168

That's 67GB for node01 in approximately 1.5 hours. Needless to say, this kind of traffic is entirely prohibit...
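A quick sanity check on that figure (bytes taken from the node01 OUT column above, assuming roughly 1.5 hours of uptime):

    echo $(( 67443404800 / (90 * 60) ))   # ~12.5 MB/s, i.e. roughly 100 Mbit/s sustained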
2017 Jul 19
2
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
...; /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.61 > /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.1 > /8f215dd2-8531-4a4f-b6ed-ea789dd8821b/dom_md/ids > /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.20 > /__DIRECT_IO_TEST__ > Status: Connected > Number of entries: 12 > > Brick node02:/gluster/engine/brick > /8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/19d71267- > 52a4-42a3-bb1e-e3145361c0c2/7a215635-02f3-47db-80db-8b689c6a8f01 > <gfid:9a601373-bbaa-44d8-b396-f0b9b12c026f> > /8f215dd2-8531-4a4f-b6ed-ea789dd8821b/dom_md/ids > <gfid:1e309376-c62e-424f-9857-...
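When entries like these never drain, a common first check is whether any of them are genuinely in split-brain rather than merely pending heal (volume name from the thread):

    gluster volume heal engine info               # entries pending heal, per brick
    gluster volume heal engine info split-brain   # entries actually in split-brain, if any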
2006 Apr 13
1
Prototyping for basejail distribuition
...;NO" network_interfaces="" # # FILES # copy_to_jail="/etc/localtime /etc/resolv.conf /etc/csh.cshrc /etc/csh.login" # # JAILS # jail_node01_rootdir="/usr/jail/node01" jail_node01_hostname="node01.example.com" jail_node01_ip="127.0.0.1" jail_node02_rootdir="/usr/jail/node02" jail_node02_hostname="node02.example.com" jail_node02_ip="127.0.0.2" ------- At this point it is possible to create a large number of jails; I implemented it in the Makefile. [root@daemon:/usr/local/basejail] # make >>> Sample in /usr/share/exam...
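Following the same rc.conf pattern, each additional jail is just one more block of variables; an illustrative third jail:

    jail_node03_rootdir="/usr/jail/node03"
    jail_node03_hostname="node03.example.com"
    jail_node03_ip="127.0.0.3"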
2017 Jul 19
0
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
...-ad51-f56d9ca5d7a7.61 > /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.1 > /8f215dd2-8531-4a4f-b6ed-ea789dd8821b/dom_md/ids > /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.20 > /__DIRECT_IO_TEST__ > Status: Connected > Number of entries: 12 > > Brick node02:/gluster/engine/brick > /8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/19d71267-52a4-42a3-bb1e-e3145361c0c2/7a215635-02f3-47db-80db-8b689c6a8f01 > <gfid:9a601373-bbaa-44d8-b396-f0b9b12c026f> > /8f215dd2-8531-4a4f-b6ed-ea789dd8821b/dom_md/ids > <gfid:1e309376-c62e-...
2017 Jul 20
2
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
...e output of `gluster volume info` for this volume? > *Volume Name: engine* *Type: Replicate* *Volume ID: d19c19e3-910d-437b-8ba7-4f2a23d17515* *Status: Started* *Snapshot Count: 0* *Number of Bricks: 1 x 3 = 3* *Transport-type: tcp* *Bricks:* *Brick1: node01:/gluster/engine/brick* *Brick2: node02:/gluster/engine/brick* *Brick3: node04:/gluster/engine/brick* *Options Reconfigured:* *nfs.disable: on* *performance.readdir-ahead: on* *transport.address-family: inet* *storage.owner-uid: 36* *performance.quick-read: off* *performance.read-ahead: off* *performance.io-cache: off* *performance.stat-...
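The non-default options in that list are the kind normally applied per volume with `gluster volume set`; for example, using the volume name from the post:

    gluster volume set engine performance.quick-read off
    gluster volume set engine performance.read-ahead off
    gluster volume set engine performance.io-cache off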
2017 Jul 20
0
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
...e/ > /Type: Replicate/ > /Volume ID: d19c19e3-910d-437b-8ba7-4f2a23d17515/ > /Status: Started/ > /Snapshot Count: 0/ > /Number of Bricks: 1 x 3 = 3/ > /Transport-type: tcp/ > /Bricks:/ > /Brick1: node01:/gluster/engine/brick/ > /Brick2: node02:/gluster/engine/brick/ > /Brick3: node04:/gluster/engine/brick/ > /Options Reconfigured:/ > /nfs.disable: on/ > /performance.readdir-ahead: on/ > /transport.address-family: inet/ > /storage.owner-uid: 36/ > /performance.quick-read: off/ > /per...
2017 Jul 20
3
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
...3e-ad51-f56d9ca5d7a7.64* *trusted.afr.dirty=0x000000000000000000000000* *trusted.afr.engine-client-1=0x000000000000000000000000* *trusted.afr.engine-client-2=0x0000000a0000000000000000* *trusted.bit-rot.version=0x090000000000000059647d5b000447e9* *trusted.gfid=0x9ef88647cfe64a35a38ca5173c9e8fc0* *NODE02:* *getfattr: Removing leading '/' from absolute path names* *# file: gluster/engine/brick/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.68* *trusted.afr.dirty=0x000000000000000000000000* *trusted.afr.engine-client-0=0x000000000000000000000000* *trusted.afr.engine-client-2=0x0000001a0000000000...
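Dumps like these come from reading the trusted.* extended attributes directly on each brick; a sketch of the query, using the shard path quoted above:

    # run on the brick path itself, not through the FUSE mount;
    # -e hex matches the encoding shown in the output above
    getfattr -d -m . -e hex /gluster/engine/brick/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.68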
2008 Jun 07
56
Unable to create more than 1 VM
Hi, I have already set up a VM that can access the network using the NAT mode. The problem I have is that I'd like to create another VM that also has access to the network. The problem I get is that when a VM is started, the other one will refuse to start. Actually it starts, but when I want to "xm console" into it I get the following error message: "xenconsole: Could not
2017 Jul 20
0
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
...afr.engine-client-1=0x000000000000000000000000/ > /trusted.afr.engine-client-2=0x0000000a0000000000000000/ > /trusted.bit-rot.version=0x090000000000000059647d5b000447e9/ > /trusted.gfid=0x9ef88647cfe64a35a38ca5173c9e8fc0/ > / > / > */ > /* > */NODE02:/* > /getfattr: Removing leading '/' from absolute path names/ > /# file: > gluster/engine/brick/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.68/ > /trusted.afr.dirty=0x000000000000000000000000/ > /trusted.afr.engine-client-0=0x000000000000000000000000/ >...
2011 Sep 09
1
Slow performance - 4 hosts, 10 gigabit ethernet, Gluster 3.2.3
...n those as well). All of the hosts mount the volume using the FUSE module. The base filesystem on all of the nodes is XFS; however, tests with ext4 have yielded similar results. Command used to create the volume: gluster volume create cluster-volume replica 2 transport tcp node01:/mnt/local-store/ node02:/mnt/local-store/ node03:/mnt/local-store/ node04:/mnt/local-store/ Command used to mount the Gluster volume on each node: mount -t glusterfs localhost:/cluster-volume /mnt/cluster-volume Creating a 40GB file on a node's local storage (i.e. no Gluster involvement): dd if=/dev/zero of=/mnt/l...
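A sketch of the comparison being made, first against the local brick and then through the FUSE mount (conv=fdatasync added here so the page cache does not flatter either number; file names are illustrative):

    dd if=/dev/zero of=/mnt/local-store/test.bin bs=1M count=40960 conv=fdatasync      # local baseline
    dd if=/dev/zero of=/mnt/cluster-volume/test.bin bs=1M count=40960 conv=fdatasync   # via Gluster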
2017 Jul 24
3
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
...'true', hostId='4c89baa5-e8f7-4132-a4b3-af332247570c'}), log id: 29a62417 2017-07-24 15:54:01,066+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler2) [b7590c4] FINISH, GlusterServersListVDSCommand, return: [10.10.20.80/24:CONNECTED, node02.localdomain.local:CONNECTED, gdnode04:CONNECTED], log id: 29a62417 2017-07-24 15:54:01,076+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler2) [b7590c4] START, GlusterVolumesListVDSCommand(HostName = node01.localdomain.local, GlusterVolumesListVD...
2017 Jul 22
3
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
...> /Type: Replicate/ > /Volume ID: c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d/ > /Status: Started/ > /Snapshot Count: 0/ > /Number of Bricks: 1 x 3 = 3/ > /Transport-type: tcp/ > /Bricks:/ > /Brick1: gdnode01:/gluster/data/brick/ > /Brick2: gdnode02:/gluster/data/brick/ > /Brick3: gdnode04:/gluster/data/brick/ > /Options Reconfigured:/ > /nfs.disable: on/ > /performance.readdir-ahead: on/ > /transport.address-family: inet/ > /storage.owner-uid: 36/ > /performance.quick-read: off/ > /perfo...
2017 Jul 21
2
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
...> sources=[0] 1 sinks=2* > > Hi, following your suggestion, I've checked the "peer" status and I found that there are too many names for the hosts; I don't know if this can be the problem or part of it: *gluster peer status on NODE01:* *Number of Peers: 2* *Hostname: dnode02.localdomain.local* *Uuid: 7c0ebfa3-5676-4d3f-9bfa-7fff6afea0dd* *State: Peer in Cluster (Connected)* *Other names:* *192.168.10.52* *dnode02.localdomain.local* *10.10.20.90* *10.10.10.20* *gluster peer status on NODE02:* *Number of Peers: 2* *Hostname: dnode01.localdomain.local* *Uuid: a568bd6...
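Each entry under "Other names" is an alternate identity gluster has learned for that peer. The lists can be compared per node, and a canonical hostname can be attached by probing an already-connected peer under the new name (hostname taken from the thread):

    gluster peer status                            # run on every node; compare the Other names lists
    gluster pool list                              # compact UUID-to-hostname view of the whole pool
    gluster peer probe dnode02.localdomain.local   # probing a known peer by a new name records it as an alias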
2017 Jul 25
0
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
...'4c89baa5-e8f7-4132-a4b3-af332247570c'}), log id: 29a62417 > 2017-07-24 15:54:01,066+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler2) > [b7590c4] FINISH, GlusterServersListVDSCommand, return: [10.10.20.80/24:CONNECTED, > node02.localdomain.local:CONNECTED, gdnode04:CONNECTED], log id: 29a62417 > 2017-07-24 15:54:01,076+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler2) > [b7590c4] START, GlusterVolumesListVDSCommand(HostName = > node01.localdomain.local...
2017 Jul 24
0
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
...: > > > *Volume Name: data* > *Type: Replicate* > *Volume ID: c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d* > *Status: Started* > *Snapshot Count: 0* > *Number of Bricks: 1 x 3 = 3* > *Transport-type: tcp* > *Bricks:* > *Brick1: gdnode01:/gluster/data/brick* > *Brick2: gdnode02:/gluster/data/brick* > *Brick3: gdnode04:/gluster/data/brick* > *Options Reconfigured:* > *nfs.disable: on* > *performance.readdir-ahead: on* > *transport.address-family: inet* > *storage.owner-uid: 36* > *performance.quick-read: off* > *performance.read-ahead: off* > *pe...