
Displaying 16 results from an estimated 16 matches for "node03".

2018 Feb 08
1
How to fix an out-of-sync node?
I have a setup with 3 nodes running GlusterFS. gluster volume create myBrick replica 3 node01:/mnt/data/myBrick node02:/mnt/data/myBrick node03:/mnt/data/myBrick Unfortunately node01 seemed to stop syncing with the other nodes, but this went undetected for weeks! When I noticed it, I did a "service glusterd restart" on node01, hoping the three nodes would sync again. But this did not happen; only the CPU load went up on all three...
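A restart of glusterd alone does not resynchronise a replica; the self-heal daemon has to work through the backlog. A minimal sketch of checking and kicking off a heal, assuming the myBrick volume from the post:

    # confirm all three peers are connected
    gluster peer status
    # list entries still pending heal on each brick
    gluster volume heal myBrick info
    # trigger a full self-heal crawl, then re-run "info" until every brick reports zero entries
    gluster volume heal myBrick full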
2005 Nov 08
0
TX/RX ring buffer allocation (xen-unstable)
...: when booting a handful of domUs and connecting them (comprising a small experimental IP network with links and nodes etc.), some interfaces that have been given by the configuration files won't be set up properly inside the domUs. Example config snippet for such a domU: name = "node03" kernel = "/boot/vmlinuz-2.6.12.6-xenU" memory = 24 disk = [ 'phy:mapper/xenrootfs.node03,sda1,w' ] hostname = "node03" root = "/dev/sda1 ro" vif = [ 'bridge=hub01','bridge=hub02','bridge=h...
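A hedged diagnostic sketch for the missing-vif symptom described above, using period-appropriate tools (the domain and bridge names come from the config snippet; everything else is an assumption):

    # in dom0: confirm the domU is up, then see which vifs got attached to which bridge
    xm list node03
    brctl show
    # inside the domU: count the interfaces the kernel actually registered
    ifconfig -a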
2019 May 01
4
very high traffic without any load
...netname top" showing cumulative values:   Tinc sites             Nodes:    4  Sort: name        Cumulative Node                IN pkts   IN bytes   OUT pkts  OUT bytes node01             98749248 67445198848   98746920 67443404800 node02             37877112 25869768704   37878860 25870893056 node03             34607168 23636463616   34608260 23637114880 node04             26262640 17937174528   26262956 17937287168     That's 67GB for node01 in approximately 1.5 hours. Needless to say, this kind of traffic is entirely prohibiting me from using tinc. I have read a few messages in this mail...
2011 Sep 09
1
Slow performance - 4 hosts, 10 gigabit ethernet, Gluster 3.2.3
...the hosts mount the volume using the FUSE module. The base filesystem on all of the nodes is XFS; however, tests with ext4 have yielded similar results. Command used to create the volume: gluster volume create cluster-volume replica 2 transport tcp node01:/mnt/local-store/ node02:/mnt/local-store/ node03:/mnt/local-store/ node04:/mnt/local-store/ Command used to mount the Gluster volume on each node: mount -t glusterfs localhost:/cluster-volume /mnt/cluster-volume Creating a 40GB file on a node's local storage (i.e. no Gluster involvement): dd if=/dev/zero of=/mnt/local-store/test.file bs=1...
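A hedged sketch of the dd comparison described (the original block size is truncated above; bs=1M, the 40 GB count and conv=fdatasync are assumptions for illustration):

    # write to the local brick, no Gluster in the path
    dd if=/dev/zero of=/mnt/local-store/test.file bs=1M count=40960 conv=fdatasync
    # same write through the FUSE mount, for comparison
    dd if=/dev/zero of=/mnt/cluster-volume/test.file bs=1M count=40960 conv=fdatasync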
2017 Jul 19
0
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
On 07/19/2017 08:02 PM, Sahina Bose wrote: > [Adding gluster-users] > > On Wed, Jul 19, 2017 at 2:52 PM, yayo (j) <jaganz at gmail.com> wrote: > > Hi all, > > We have an ovirt cluster hyperconverged with hosted engine on 3 > fully replicated nodes. This cluster has 2 gluster volumes: > > - data: volume for
2017 Jul 20
2
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
...nged the gluster from: 2 (full replicated) + 1 >> arbiter to 3 full replicated cluster >> > > Just curious, how did you do this? `remove-brick` of arbiter brick > followed by an `add-brick` to increase to replica-3? > > Yes #gluster volume remove-brick engine replica 2 node03:/gluster/data/brick force *(OK!)* #gluster volume heal engine info *(no entries!)* #gluster volume add-brick engine replica 3 node04:/gluster/engine/brick *(OK!)* *After some minutes* [root at node01 ~]# gluster volume heal engine info Brick node01:/gluster/engine/brick Status: Connected Numbe...
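After an add-brick that raises the replica count, the new brick is populated by self-heal; a minimal sketch of verifying that, assuming the engine volume from the thread:

    # confirm all bricks and self-heal daemons are up
    gluster volume status engine
    # the heal backlog should drain to "Number of entries: 0" on every brick
    gluster volume heal engine info
    # if entries linger, start a full crawl explicitly
    gluster volume heal engine full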
2019 May 01
0
very high traffic without any load
...> values: > > > Tinc sites Nodes: 4 Sort: name Cumulative > Node IN pkts IN bytes OUT pkts OUT bytes > node01 98749248 67445198848 98746920 67443404800 > node02 37877112 25869768704 37878860 25870893056 > node03 34607168 23636463616 34608260 23637114880 > node04 26262640 17937174528 26262956 17937287168 > > > That's 67GB for node01 in approximately 1.5 hours. Needless to say, this > kind of traffic is entirely prohibiting me from using tinc. I have read a >...
2012 Apr 12
1
Console to RHEL6.1 container
...evpts mount in fstab The container xml file is attached. If the devpts entry in the container's fstab has the 'newinstance' option, I get a brief "domain test01 started" message on the stdout. When I run 'virsh console test01', I only see a few messages --- Setting hostname node03: [ OK ] Setting up Logical Volume Management: No volume groups found [ OK ] Checking filesystems [ OK ] mount: can't find / in /etc/fst...
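For reference, a hedged example of the kind of fstab line being discussed; the exact mount options are illustrative, 'newinstance' is the relevant one:

    # container /etc/fstab: give the container its own private devpts instance
    devpts  /dev/pts  devpts  newinstance,gid=5,mode=620,ptmxmode=0666  0 0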
2017 Jul 26
0
Web python framework under GlusterFS
...setup and start a GlusterFS volume: Volume Name: XXXXXXXXXXX Type: Replicate Volume ID:XXXXXXXXXXXXXXXXXXXXX Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: node01:/mnt/disks/brick01/XXXXXXXXXXX Brick2: node02:/mnt/disks/brick02/XXXXXXXXXXX Brick3: node03:/mnt/disks/brick03/XXXXXXXXXXX Options Reconfigured: transport.address-family: inet nfs.disable: on diagnostics.latency-measurement: on diagnostics.count-fop-hits: on Then install and mount the GlusterFS client on one of the application servers. GlusterFS client/application server spec 4 Vcore 8 GB...
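A minimal sketch of mounting such a volume from an application server (the volume name is masked in the post, so VOLNAME and the mount point are placeholders):

    # native FUSE mount from the application server
    mount -t glusterfs node01:/VOLNAME /mnt/app-data
    # optional fstab entry so the mount survives a reboot
    # node01:/VOLNAME  /mnt/app-data  glusterfs  defaults,_netdev  0 0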
2017 Jul 20
0
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
...replicated) + 1 arbiter to 3 full replicated cluster >> > > Just curious, how did you do this? `remove-brick` of arbiter > brick followed by an `add-brick` to increase to replica-3? > > > Yes > > > #gluster volume remove-brick engine replica 2 > node03:/gluster/data/brick force *(OK!)* > > #gluster volume heal engine info *(no entries!)* > > #gluster volume add-brick engine replica 3 > node04:/gluster/engine/brick *(OK!)* > > *After some minutes* > > [root at node01 ~]# gluster volume heal engine info > Brick node0...
2017 Jul 19
2
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
[Adding gluster-users] On Wed, Jul 19, 2017 at 2:52 PM, yayo (j) <jaganz at gmail.com> wrote: > Hi all, > > We have an ovirt cluster hyperconverged with hosted engine on 3 fully > replicated nodes. This cluster has 2 gluster volumes: > > - data: volume for the Data (Master) Domain (for VMs) > - engine: volume for the hosted_storage Domain (for hosted engine) > >
2017 Jul 20
3
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
...1 >>> arbiter to 3 full replicated cluster >>> >> >> Just curious, how did you do this? `remove-brick` of arbiter brick >> followed by an `add-brick` to increase to replica-3? >> >> > Yes > > > #gluster volume remove-brick engine replica 2 node03:/gluster/data/brick > force *(OK!)* > > #gluster volume heal engine info *(no entries!)* > > #gluster volume add-brick engine replica 3 node04:/gluster/engine/brick > *(OK!)* > > *After some minutes* > > [root at node01 ~]# gluster volume heal engine info > Brick n...
2017 Jul 20
0
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
...gt;>> >> >> Just curious, how did you do this? `remove-brick` of arbiter >> brick followed by an `add-brick` to increase to replica-3? >> >> >> Yes >> >> >> #gluster volume remove-brick engine replica 2 >> node03:/gluster/data/brick force *(OK!)* >> >> #gluster volume heal engine info *(no entries!)* >> >> #gluster volume add-brick engine replica 3 >> node04:/gluster/engine/brick *(OK!)* >> >> *After some minutes* >> >> [root at node0...
2010 Nov 18
46
[HOWTO] Running Xen 4.0 host (dom0) with Redhat Enterprise Linux 6 (RHEL6)
Hello, If you're interested in running Xen 4.0 hypervisor/dom0 on RHEL6, take a look here: http://wiki.xen.org/xenwiki/RHEL6Xen4Tutorial It explains the steps needed to rebuild the Xen 4.0.1 src.rpm from Fedora on RHEL6, and how to fetch a dom0-capable 2.6.32.x kernel from the upstream git repository. It also shows how to get libvirt/virt-manager working with Xen on RHEL6. Hopefully it helps :)
2011 Jul 11
0
Instability when using RDMA transport
...around by using IPoIB instead of RDMA - May take 10 - 15 minutes for first failure. ===== Version Information ===== - CentOS 5.6 kernel 2.6.18-238.9.1.el5 - OFED 1.5.3.1 - Gluster 3.2.1 RPMs - Ext3 filesystem ===== Roles of nodes ===== node04, node05, node06 - Gluster servers. node01, node02, node03 - Clients. Mount node04:/gluster-vol01 on /gluster and run the test script in /gluster/test ===== Gluster Volume Info. ===== Volume Name: gluster-vol01 Type: Distribute Status: Started Number of Bricks: 3 Transport-type: rdma Bricks: Brick1: node04:/gluster-raw-storage Brick2: node05:/gluster-raw-...
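A hedged reconstruction of the setup described (the brick paths on node05 and node06 are assumed to follow node04's pattern):

    # create and start the 3-brick distribute volume over RDMA
    gluster volume create gluster-vol01 transport rdma \
        node04:/gluster-raw-storage node05:/gluster-raw-storage node06:/gluster-raw-storage
    gluster volume start gluster-vol01
    # on a client (node01..node03 in the post), mount it and run the test script
    mount -t glusterfs node04:/gluster-vol01 /gluster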