search for: node04

Displaying 20 results from an estimated 21 matches for "node04".

2019 May 03
3
Aw: Re: very high traffic without any load
2019 Aug 25
2
tinc 1.1pre17 on fedora 30
2019 May 02
4
Aw: Re: very high traffic without any load
2017 Jul 20
2
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
...he this volume?
> Volume Name: engine
> Type: Replicate
> Volume ID: d19c19e3-910d-437b-8ba7-4f2a23d17515
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: node01:/gluster/engine/brick
> Brick2: node02:/gluster/engine/brick
> Brick3: node04:/gluster/engine/brick
> Options Reconfigured:
> nfs.disable: on
> performance.readdir-ahead: on
> transport.address-family: inet
> storage.owner-uid: 36
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> performance.low-prio-th...
2017 Jul 20
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
...D: d19c19e3-910d-437b-8ba7-4f2a23d17515
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: node01:/gluster/engine/brick
> Brick2: node02:/gluster/engine/brick
> Brick3: node04:/gluster/engine/brick
> Options Reconfigured:
> nfs.disable: on
> performance.readdir-ahead: on
> transport.address-family: inet
> storage.owner-uid: 36
> performance.quick-read: off
> performance.read-ahead: off
> performance....
2017 Jul 19
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
...4a4f-b6ed-ea789dd8821b/images/88d41053-a257-4272-9e2e-2f3de0743b81/6573ed08-d3ed-4d12-9227-2c95941e1ad6
> <gfid:9ad720b2-507d-4830-8294-ec8adee6d384>
> <gfid:d9853e5d-a2bf-4cee-8b39-7781a98033cf>
> Status: Connected
> Number of entries: 12
>
> Brick node04:/gluster/engine/brick
> Status: Connected
> Number of entries: 0
>
> running the "gluster volume heal engine" don't solve the problem...

1. What does the glustershd.log say on all 3 nodes when you run the command? Does it complain anything about the...
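The excerpt above is `gluster volume heal <volname> info` output, with one "Number of entries" count per brick. A small parser like the following can pull those counts out of the text for monitoring; this is a sketch against the quoted format only, not an official Gluster API (the sample string reuses the brick names from the excerpt):

```python
def entries_per_brick(heal_info: str) -> dict:
    """Map brick name -> 'Number of entries' from heal-info text output."""
    counts = {}
    brick = None
    for raw in heal_info.splitlines():
        line = raw.strip().lstrip("> ").strip()  # tolerate email quoting
        if line.startswith("Brick "):
            brick = line[len("Brick "):]
        elif line.startswith("Number of entries:") and brick is not None:
            counts[brick] = int(line.split(":", 1)[1])
            brick = None
    return counts

sample = """\
Brick node01:/gluster/engine/brick
Status: Connected
Number of entries: 12

Brick node04:/gluster/engine/brick
Status: Connected
Number of entries: 0
"""
print(entries_per_brick(sample))
```

A nonzero count that never drains to 0, as in the thread above, is what the poster means by "unsynced" elements.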
2017 Jul 20
3
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
...-ad51-f56d9ca5d7a7.64
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.engine-client-0=0x000000000000000000000000
trusted.afr.engine-client-2=0x000000120000000000000000
trusted.bit-rot.version=0x08000000000000005965ede0000c352d
trusted.gfid=0x9ef88647cfe64a35a38ca5173c9e8fc0

NODE04:
getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.68
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.bit-rot.version=0x050000000000000059662c390006b836
tru...
2019 Aug 26
0
tinc 1.1pre17 on fedora 30
...n fedora 30 hosts and I am running
> into a problem. Building and starting tinc works just fine. [...]
> However, the hosts cannot connect to each other. When checking the logs, the
> following appears over and over again, for any combination of hosts:
>
> Error while connecting to node04 (<redacted> port 655): Permission denied

That sounds like there is a local firewall rule that blocks outgoing TCP connections to <redacted> port 655. Either that, or the Address statement in hosts/node04 contains an error, so that it thinks it's a broadcast address. Check if you c...
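The firewall diagnosis above can be checked directly from the sending host with a plain TCP connection attempt to the peer's metaconnection port. A minimal sketch (the real address is redacted in the post, so the host name in the commented call is a placeholder; this is not tinc's own code):

```python
import socket

def can_connect(host, port, timeout=3.0):
    """Attempt a plain TCP connection, as tinc's metaconnection would.

    Returns True if the three-way handshake completes, False on refusal,
    timeout, or a local policy error (the 'Permission denied' case a
    firewall rule on outgoing traffic would surface as).
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder peer; substitute the Address from hosts/node04:
# print(can_connect("node04.example.net", 655))
```

If this fails the same way for every peer, the problem is local (firewall or SELinux policy) rather than in the individual hosts/* files.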
2017 Jul 19
2
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
...8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/88d41053-a257-4272-9e2e-2f3de0743b81/6573ed08-d3ed-4d12-9227-2c95941e1ad6
> <gfid:9ad720b2-507d-4830-8294-ec8adee6d384>
> <gfid:d9853e5d-a2bf-4cee-8b39-7781a98033cf>
> Status: Connected
> Number of entries: 12
>
> Brick node04:/gluster/engine/brick
> Status: Connected
> Number of entries: 0
>
> running the "gluster volume heal engine" don't solve the problem...
>
> Some extra info:
>
> We have recently changed the gluster from: 2 (full repliacated) + 1
> arbiter to 3 ful...
2017 Jul 20
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
...0000
> trusted.afr.engine-client-2=0x000000120000000000000000
> trusted.bit-rot.version=0x08000000000000005965ede0000c352d
> trusted.gfid=0x9ef88647cfe64a35a38ca5173c9e8fc0
>
> NODE04:
> getfattr: Removing leading '/' from absolute path names
> # file: gluster/engine/brick/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.68
> security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
> trusted.bit-rot.ve...
2019 May 04
0
very high traffic without any load
...f two of my hosts,
> through the VPN.

What do you mean with "session"? Some http-requests that you are sending through the VPN? Or something special?

> Warning, wall of text incoming:
> Source         Destination     Protocol Length Info
> node01-public  node04-public   TCP      929    tinc(655) → 40690 [PSH, ACK] Seq=1 Ack=1 Win=240 Len=843 TSval=66121145 TSecr=65947641
> node01-public  node04-public   TCP      1294   tinc(655) → 40690 [ACK] Seq=844 Ack=1 Win=240 Len=1208 TSval=66121145 TSecr=65947641
> [..]

The packets above be...
2019 May 01
4
very high traffic without any load
...        Nodes: 4  Sort: name        Cumulative
Node     IN pkts    IN bytes     OUT pkts   OUT bytes
node01   98749248   67445198848  98746920   67443404800
node02   37877112   25869768704  37878860   25870893056
node03   34607168   23636463616  34608260   23637114880
node04   26262640   17937174528  26262956   17937287168

That's 67GB for node01 in approximately 1.5 hours. Needless to say, this kind of traffic is entirely prohibiting me from using tinc. I have read a few messages in this mailing list, but the only thing I could find were traffic spikes,...
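The "67GB in approximately 1.5 hours" claim in the entry above can be sanity-checked from the counters themselves; the arithmetic below uses the poster's 1.5-hour figure, which is only their estimate of the observation window (the "67GB" matches node01's IN counter read as decimal gigabytes):

```python
in_bytes = 67_445_198_848      # node01 "IN bytes" from the counter table above
window_s = 1.5 * 3600          # poster's estimated observation window

gib = in_bytes / 2**30                    # binary GiB
rate_mib_s = in_bytes / window_s / 2**20  # sustained MiB/s over the window

print(f"{in_bytes / 10**9:.1f} GB = {gib:.1f} GiB, ~{rate_mib_s:.1f} MiB/s sustained")
```

Roughly 12 MiB/s of continuous background traffic on an idle VPN, which supports the poster's complaint that this is not ordinary keepalive overhead.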
2011 Sep 09
1
Slow performance - 4 hosts, 10 gigabit ethernet, Gluster 3.2.3
...e using the FUSE module. The base filesystem on all of the nodes is XFS, however tests with ext4 have yielded similar results.

Command used to create the volume:
gluster volume create cluster-volume replica 2 transport tcp node01:/mnt/local-store/ node02:/mnt/local-store/ node03:/mnt/local-store/ node04:/mnt/local-store/

Command used to mount the Gluster volume on each node:
mount -t glusterfs localhost:/cluster-volume /mnt/cluster-volume

Creating a 40GB file onto a node's local storage (ie no Gluster involvement):
dd if=/dev/zero of=/mnt/local-store/test.file bs=1M count=40000
4194304000...
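The byte count at the end of the excerpt above is truncated, so the following just recomputes the write size implied by the dd flags rather than filling in the cut-off number (dd's `M` suffix is binary, 1024*1024 bytes):

```python
bs = 1 * 1024 * 1024   # bs=1M
count = 40_000         # count=40000
total = bs * count     # total bytes dd writes

print(total)                      # bytes
print(f"{total / 10**9:.2f} GB")  # decimal GB, the "40GB file" of the post
```

Dividing that total by dd's reported elapsed seconds gives the local-disk baseline the poster is comparing the replica-2 Gluster mount against; with `replica 2`, each client write also goes out over the network to a second brick, so some slowdown versus local dd is expected.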
2017 Jul 21
2
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
...168.10.52
dnode02.localdomain.local
10.10.20.90
10.10.10.20

gluster peer status on NODE02:
Number of Peers: 2

Hostname: dnode01.localdomain.local
Uuid: a568bd60-b3e4-4432-a9bc-996c52eaaa12
State: Peer in Cluster (Connected)
Other names:
gdnode01
10.10.10.10

Hostname: gdnode04
Uuid: ce6e0f6b-12cf-4e40-8f01-d1609dfc5828
State: Peer in Cluster (Connected)
Other names:
192.168.10.54
10.10.10.40

gluster peer status on NODE04:
Number of Peers: 2

Hostname: dnode02.neridom.dom
Uuid: 7c0ebfa3-5676-4d3f-9bfa-7fff6afea0dd
State: Peer in Cluster (Connected)...
2017 Jul 22
3
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
...c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: gdnode01:/gluster/data/brick
> Brick2: gdnode02:/gluster/data/brick
> Brick3: gdnode04:/gluster/data/brick
> Options Reconfigured:
> nfs.disable: on
> performance.readdir-ahead: on
> transport.address-family: inet
> storage.owner-uid: 36
> performance.quick-read: off
> performance.read-ahead: off
> performance.io...
2019 May 01
0
very high traffic without any load
...: name Cumulative
> Node     IN pkts    IN bytes     OUT pkts   OUT bytes
> node01   98749248   67445198848  98746920   67443404800
> node02   37877112   25869768704  37878860   25870893056
> node03   34607168   23636463616  34608260   23637114880
> node04   26262640   17937174528  26262956   17937287168
>
> That's 67GB for node01 in approximately 1.5 hours. Needless to say, this
> kind of traffic is entirely prohibiting me from using tinc. I have read a
> few messages in this mailing list, but the only thing I could find...
2017 Jul 21
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
...are full replicated:

Volume Name: data
Type: Replicate
Volume ID: c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gdnode01:/gluster/data/brick
Brick2: gdnode02:/gluster/data/brick
Brick3: gdnode04:/gluster/data/brick
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
storage.owner-uid: 36
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-thre...
2011 Jul 11
0
Instability when using RDMA transport
...ing the RDMA transport.
- High memory usage on one Gluster server. In this case node06. Output is from 'top' command.
  - node06: 14850 root 16 0 23.1g 17g  1956 S 120.9 56.6 8:38.78 glusterfsd
  - node05: 12633 root 16 0 418m  157m 1852 S 0.0   0.5  2:56.02 glusterfsd
  - node04: 21066 root 15 0 355m  151m 1852 S 0.0   0.6  1:07.71 glusterfsd
- Temporary work around by using IPoIB instead of RDMA
- May take 10 - 15 minutes for first failure.

===== Version Information =====
- CentOS 5.6 kernel 2.6.18-238.9.1.el5
- OFED 1.5.3.1
- Gluster 3.2.1 RPMs
- Ext3 filesyst...
2017 Jul 24
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
...ype: Replicate
> Volume ID: c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: gdnode01:/gluster/data/brick
> Brick2: gdnode02:/gluster/data/brick
> Brick3: gdnode04:/gluster/data/brick
> Options Reconfigured:
> nfs.disable: on
> performance.readdir-ahead: on
> transport.address-family: inet
> storage.owner-uid: 36
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.s...
2017 Jul 24
3
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
...7-4132-a4b3-af332247570c'}), log id: 29a62417
2017-07-24 15:54:01,066+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler2) [b7590c4] FINISH, GlusterServersListVDSCommand, return: [10.10.20.80/24:CONNECTED, node02.localdomain.local:CONNECTED, gdnode04:CONNECTED], log id: 29a62417
2017-07-24 15:54:01,076+02 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler2) [b7590c4] START, GlusterVolumesListVDSCommand(HostName = node01.localdomain.local, GlusterVolumesListVDSParameters:{runAsync='true',...