Displaying 14 results from an estimated 14 matches for "ovirt3".
2018 May 30
0
[ovirt-users] Re: Gluster problems, cluster performance issues
...licate
> Volume ID: e670c488-ac16-4dd1-8bd3-e43b2e42cc59
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ovirt1.nwfiber.com:/gluster/brick2/data
> Brick2: ovirt2.nwfiber.com:/gluster/brick2/data
> Brick3: ovirt3.nwfiber.com:/gluster/brick2/data (arbiter)
> Options Reconfigured:
> diagnostics.count-fop-hits: on
> diagnostics.latency-measurement: on
> changelog.changelog: on
> geo-replication.ignore-pid-check: on
> geo-replication.indexing: on
> server.allow-insecure: on
> performance...
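For readers less familiar with the "1 x (2 + 1) = 3" layout quoted above, here is a minimal sketch of how such an arbiter volume is typically created. The brick hosts and paths are taken from the output; the volume name "data" is only a guess inferred from the brick path, and the command form is standard Gluster CLI rather than anything confirmed in this thread.

  # replica 3 with one arbiter: the third brick stores only metadata, not file data
  gluster volume create data replica 3 arbiter 1 \
      ovirt1.nwfiber.com:/gluster/brick2/data \
      ovirt2.nwfiber.com:/gluster/brick2/data \
      ovirt3.nwfiber.com:/gluster/brick2/data
  gluster volume start data

The arbiter brick lets the volume keep quorum while holding only two full copies of the data, which is why "Number of Bricks" reads 1 x (2 + 1) = 3.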
2017 Jun 20
2
[ovirt-users] Very poor GlusterFS performance
...>
> My volume configuration looks like this:
>
> Volume Name: vmssd
> Type: Distributed-Replicate
> Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 2 x (2 + 1) = 6
> Transport-type: tcp
> Bricks:
> Brick1: ovirt3:/gluster/ssd0_vmssd/brick
> Brick2: ovirt1:/gluster/ssd0_vmssd/brick
> Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
> Brick4: ovirt3:/gluster/ssd1_vmssd/brick
> Brick5: ovirt1:/gluster/ssd1_vmssd/brick
> Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
> Options Reconfigur...
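As a hedged sketch of how the "2 x (2 + 1) = 6" layout above maps onto a create command: bricks are consumed in groups of three, in the order listed, with the last brick of each group acting as the arbiter. Hostnames and paths are from the output; the exact command the poster used is not shown in the thread.

  # two replica sub-volumes, each 2 data bricks + 1 arbiter, with files distributed across them
  gluster volume create vmssd replica 3 arbiter 1 \
      ovirt3:/gluster/ssd0_vmssd/brick ovirt1:/gluster/ssd0_vmssd/brick ovirt2:/gluster/ssd0_vmssd/brick \
      ovirt3:/gluster/ssd1_vmssd/brick ovirt1:/gluster/ssd1_vmssd/brick ovirt2:/gluster/ssd1_vmssd/brick
  gluster volume start vmssd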
2018 May 29
0
[ovirt-users] Re: Gluster problems, cluster performance issues
...472
> [56474.249231] blk_update_request: I/O error, dev dm-2, sector 3905945584
> [56474.250221] blk_update_request: I/O error, dev dm-2, sector 2048
>
>
>
>
> On Tue, May 29, 2018 at 11:59 AM, Jim Kusznir <jim at palousetech.com> wrote:
>
>> I see in messages on ovirt3 (my 3rd machine, the one upgraded to 4.2):
>>
>> May 29 11:54:41 ovirt3 ovs-vsctl:
>> ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database
>> connection failed (No such file or directory)
>> May 29 11:54:51 ovirt3 ovs-vsctl:
>> ovs|00001|db_ctl_b...
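The ovs-vsctl errors quoted above indicate that the Open vSwitch database socket is missing, i.e. ovsdb-server is not running. A first-pass check on a CentOS/oVirt 4.2 node might look like the following; the service names are the stock openvswitch systemd units and are an assumption, not something stated in the thread.

  # is the database socket present, and are the OVS daemons running?
  ls -l /var/run/openvswitch/db.sock
  systemctl status ovsdb-server ovs-vswitchd
  # if not, restarting the umbrella service normally recreates the socket
  systemctl restart openvswitch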
2017 Jun 20
0
[ovirt-users] Very poor GlusterFS performance
...like this:
>>
>> Volume Name: vmssd
>> Type: Distributed-Replicate
>> Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 2 x (2 + 1) = 6
>> Transport-type: tcp
>> Bricks:
>> Brick1: ovirt3:/gluster/ssd0_vmssd/brick
>> Brick2: ovirt1:/gluster/ssd0_vmssd/brick
>> Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
>> Brick4: ovirt3:/gluster/ssd1_vmssd/brick
>> Brick5: ovirt1:/gluster/ssd1_vmssd/brick
>> Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
>...
2017 Jun 20
5
[ovirt-users] Very poor GlusterFS performance
...ume Name: vmssd
>>> Type: Distributed-Replicate
>>> Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 2 x (2 + 1) = 6
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: ovirt3:/gluster/ssd0_vmssd/brick
>>> Brick2: ovirt1:/gluster/ssd0_vmssd/brick
>>> Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
>>> Brick4: ovirt3:/gluster/ssd1_vmssd/brick
>>> Brick5: ovirt1:/gluster/ssd1_vmssd/brick
>>> Brick6: ovirt2:/gluster/ssd1_vmssd...
2017 Jun 20
0
[ovirt-users] Very poor GlusterFS performance
...ith GlusterFS on 10G: that doesn't
feel right at all.
My volume configuration looks like this:
Volume Name: vmssd
Type: Distributed-Replicate
Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: ovirt3:/gluster/ssd0_vmssd/brick
Brick2: ovirt1:/gluster/ssd0_vmssd/brick
Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
Brick4: ovirt3:/gluster/ssd1_vmssd/brick
Brick5: ovirt1:/gluster/ssd1_vmssd/brick
Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
Options Reconfigured:
nfs.disable: on
transport....
2018 May 30
1
[ovirt-users] Re: Gluster problems, cluster performance issues
...25.00 us 2216436.00 us 925421 READ
>> 46.30 1178.04 us 13.00 us 1700704.00 us 884635 INODELK
>>
>> Duration: 7485 seconds
>> Data Read: 71250527215 bytes
>> Data Written: 5119903744 bytes
>>
>> Brick: ovirt3.nwfiber.com:/gluster/brick2/data
>> ----------------------------------------------
>> Cumulative Stats:
>> Block Size: 1b+
>> No. of Reads: 0
>> No. of Writes: 3264419
>> %-latency Avg-latency Min-Latency...
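The per-brick statistics quoted above (cumulative stats, per-fop latency, data read/written) are what gluster volume profile reports once the diagnostics options seen earlier in the thread are enabled. A minimal sketch follows, with VOLNAME standing in for the volume the brick belongs to, since the name is cut off in this excerpt.

  # enable profiling (this also switches on the diagnostics.* counters), then dump stats
  gluster volume profile VOLNAME start
  gluster volume profile VOLNAME info
  # stop collection when done to avoid the extra bookkeeping overhead
  gluster volume profile VOLNAME stop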
2017 Aug 16
1
[ovirt-users] Recovering from a multi-node failure
...the less, I
>> waited for recovery to occur (while customers started calling asking why
>> everything stopped working....). As I waited, I was checking, and gluster
>> volume status only showed ovirt1 and ovirt2....Apparently gluster had
>> stopped/failed at some point on ovirt3. I assume that was the cause of the
>> outage, still, if everything was working fine with ovirt1 gluster, and
>> ovirt2 powers on with a very broken gluster (the volume status was showing
>> NA for the port fields for the gluster volumes), I would not expect to have
>> a wor...
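For the situation described above (brick processes silently down on one node), the usual way to confirm what gluster volume status was hinting at is roughly the following; this is standard Gluster CLI usage rather than anything quoted from the thread.

  # are all three peers connected, and does every brick show a PID and port?
  gluster peer status
  gluster volume status
  # a brick listed with "N/A" for its port means its brick process is not running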
2018 May 30
2
[ovirt-users] Re: Gluster problems, cluster performance issues
...te
> Volume ID: 87ad86b9-d88b-457e-ba21-5d3173c612de
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ovirt1.nwfiber.com:/gluster/brick1/engine
> Brick2: ovirt2.nwfiber.com:/gluster/brick1/engine
> Brick3: ovirt3.nwfiber.com:/gluster/brick1/engine (arbiter)
> Options Reconfigured:
> diagnostics.count-fop-hits: on
> diagnostics.latency-measurement: on
> performance.strict-o-direct: on
> nfs.disable: on
> user.cifs: off
> network.ping-timeout: 30
> cluster.shd-max-threads: 6
> clust...
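The "Options Reconfigured" list above is the result of per-volume gluster volume set calls. A hedged sketch of how a few of them would have been applied follows; "engine" is a hypothetical volume name inferred from the brick path, since the actual name is cut off in this excerpt.

  # options are applied one at a time and take effect without remounting clients
  gluster volume set engine network.ping-timeout 30
  gluster volume set engine cluster.shd-max-threads 6
  gluster volume set engine performance.strict-o-direct on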
2018 May 30
0
[ovirt-users] Re: Gluster problems, cluster performance issues
...-457e-ba21-5d3173c612de
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x (2 + 1) = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: ovirt1.nwfiber.com:/gluster/brick1/engine
>> Brick2: ovirt2.nwfiber.com:/gluster/brick1/engine
>> Brick3: ovirt3.nwfiber.com:/gluster/brick1/engine (arbiter)
>> Options Reconfigured:
>> diagnostics.count-fop-hits: on
>> diagnostics.latency-measurement: on
>> performance.strict-o-direct: on
>> nfs.disable: on
>> user.cifs: off
>> network.ping-timeout: 30
>> clust...
2018 May 30
1
[ovirt-users] Re: Gluster problems, cluster performance issues
...> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 1 x (2 + 1) = 3
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: ovirt1.nwfiber.com:/gluster/brick1/engine
>>> Brick2: ovirt2.nwfiber.com:/gluster/brick1/engine
>>> Brick3: ovirt3.nwfiber.com:/gluster/brick1/engine (arbiter)
>>> Options Reconfigured:
>>> diagnostics.count-fop-hits: on
>>> diagnostics.latency-measurement: on
>>> performance.strict-o-direct: on
>>> nfs.disable: on
>>> user.cifs: off
>>> network.pin...
2018 Jun 01
0
[ovirt-users] Re: Gluster problems, cluster performance issues
...;>> Snapshot Count: 0
>>>> Number of Bricks: 1 x (2 + 1) = 3
>>>> Transport-type: tcp
>>>> Bricks:
>>>> Brick1: ovirt1.nwfiber.com:/gluster/brick1/engine
>>>> Brick2: ovirt2.nwfiber.com:/gluster/brick1/engine
>>>> Brick3: ovirt3.nwfiber.com:/gluster/brick1/engine (arbiter)
>>>> Options Reconfigured:
>>>> diagnostics.count-fop-hits: on
>>>> diagnostics.latency-measurement: on
>>>> performance.strict-o-direct: on
>>>> nfs.disable: on
>>>> user.cifs: off...
2018 May 29
0
[ovirt-users] Gluster problems, cluster performance issues
...except for a gluster sync issue that showed up.
>
> My cluster is a 3 node hyperconverged cluster. I upgraded the hosted
> engine first, then engine 3. When engine 3 came back up, for some reason
> one of my gluster volumes would not sync. Here's sample output:
>
> [root@ovirt3 ~]# gluster volume heal data-hdd info
> Brick 172.172.1.11:/gluster/brick3/data-hdd
> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-
> 7ac5-4725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/647be733-
> f153-4cdc-85bd-ba...
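For a volume stuck with unhealed entries like data-hdd above, the standard follow-up commands are sketched below; they are ordinary Gluster heal commands, not steps confirmed in this thread.

  # list entries still pending heal, then trigger an index heal
  gluster volume heal data-hdd info
  gluster volume heal data-hdd
  # if entries never drain, a full heal re-scans everything (heavier)
  gluster volume heal data-hdd full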
2018 May 29
0
[ovirt-users] Gluster problems, cluster performance issues
...>>>
>>> My cluster is a 3 node hyperconverged cluster. I upgraded the hosted
>>> engine first, then engine 3. When engine 3 came back up, for some reason
>>> one of my gluster volumes would not sync. Here's sample output:
>>>
>>> [root@ovirt3 ~]# gluster volume heal data-hdd info
>>> Brick 172.172.1.11:/gluster/brick3/data-hdd
>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/48d7ecb8-7ac5-4
>>> 725-bca5-b3519681cf2f/0d6080b0-7018-4fa3-bb82-1dd9ef07d9b9
>>> /cc65f671-3377-494a-a7d4-1d9f7c3ae46c/images/...