Displaying 11 results from an estimated 11 matches for "ovirt1".
2017 Aug 16
1
[ovirt-users] Recovering from a multi-node failure
...erSD/192.168.8.11:_iso
>
> As you can see, in the process of rebuilding the hard drive for ovirt2, I
> did resize some things to give more space to data, where I desperately need
> it. If this goes well and the storage is given a clean bill of health at
> this time, then I will take ovirt1 down and resize to match ovirt2, and
> thus score a decent increase in storage for data. I fully realize that
> right now the gluster-mounted volumes should report the total size of the
> smallest brick (the least common denominator).
>
> So, is this size reduction appropriate? A big part of me thinks da...
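For context: a replicated Gluster volume can only expose the capacity of its
smallest replica, so mismatched bricks waste the difference. A minimal check
of what each brick and the clients actually see (the volume name "data" and
the mount path pattern here are assumptions based on the paths above):

    # Per-brick capacity and free space as glusterd counts it
    gluster volume status data detail
    # Capacity visible to clients on the FUSE mount
    df -h /rhev/data-center/mnt/glusterSD/192.168.8.11:_data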
2017 Jun 20
2
[ovirt-users] Very poor GlusterFS performance
...his:
>
> Volume Name: vmssd
> Type: Distributed-Replicate
> Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 2 x (2 + 1) = 6
> Transport-type: tcp
> Bricks:
> Brick1: ovirt3:/gluster/ssd0_vmssd/brick
> Brick2: ovirt1:/gluster/ssd0_vmssd/brick
> Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
> Brick4: ovirt3:/gluster/ssd1_vmssd/brick
> Brick5: ovirt1:/gluster/ssd1_vmssd/brick
> Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
> Options Reconfigured:
> nfs.disable: on
> transport.addres...
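A volume with this 2 x (2 + 1) layout is created by listing bricks in
replica-set order, with every third brick becoming the arbiter. A hedged
sketch of how the layout above would typically be built (not the poster's
actual command):

    # "replica 3 arbiter 1": the 3rd brick of each set holds metadata only
    gluster volume create vmssd replica 3 arbiter 1 \
        ovirt3:/gluster/ssd0_vmssd/brick \
        ovirt1:/gluster/ssd0_vmssd/brick \
        ovirt2:/gluster/ssd0_vmssd/brick \
        ovirt3:/gluster/ssd1_vmssd/brick \
        ovirt1:/gluster/ssd1_vmssd/brick \
        ovirt2:/gluster/ssd1_vmssd/brick
    gluster volume start vmssd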
2017 Jun 20
0
[ovirt-users] Very poor GlusterFS performance
...>> Type: Distributed-Replicate
>> Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 2 x (2 + 1) = 6
>> Transport-type: tcp
>> Bricks:
>> Brick1: ovirt3:/gluster/ssd0_vmssd/brick
>> Brick2: ovirt1:/gluster/ssd0_vmssd/brick
>> Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
>> Brick4: ovirt3:/gluster/ssd1_vmssd/brick
>> Brick5: ovirt1:/gluster/ssd1_vmssd/brick
>> Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
>> Options Reconfigured:
>> nfs.disable:...
2017 Jun 20
5
[ovirt-users] Very poor GlusterFS performance
...te
>>> Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 2 x (2 + 1) = 6
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: ovirt3:/gluster/ssd0_vmssd/brick
>>> Brick2: ovirt1:/gluster/ssd0_vmssd/brick
>>> Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
>>> Brick4: ovirt3:/gluster/ssd1_vmssd/brick
>>> Brick5: ovirt1:/gluster/ssd1_vmssd/brick
>>> Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
>>> Options Reconfigured:
>>>...
2018 May 30
0
[ovirt-users] Re: Gluster problems, cluster performance issues
...> yet.
>
> [root at ovirt2 images]# gluster volume info
>
> Volume Name: data
> Type: Replicate
> Volume ID: e670c488-ac16-4dd1-8bd3-e43b2e42cc59
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ovirt1.nwfiber.com:/gluster/brick2/data
> Brick2: ovirt2.nwfiber.com:/gluster/brick2/data
> Brick3: ovirt3.nwfiber.com:/gluster/brick2/data (arbiter)
> Options Reconfigured:
> diagnostics.count-fop-hits: on
> diagnostics.latency-measurement: on
> changelog.changelog: on
> geo-replicat...
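The diagnostics.latency-measurement and diagnostics.count-fop-hits options
shown above are the ones "gluster volume profile" toggles; the per-brick
latency figures discussed later in this thread come from output along these
lines (a sketch, assuming the "data" volume):

    gluster volume profile data start   # enables the two diagnostics options
    gluster volume profile data info    # per-brick FOP counts and latencies
    gluster volume profile data stop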
2017 Jun 20
0
[ovirt-users] Very poor GlusterFS performance
...el right at all.
My volume configuration looks like this:
Volume Name: vmssd
Type: Distributed-Replicate
Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: ovirt3:/gluster/ssd0_vmssd/brick
Brick2: ovirt1:/gluster/ssd0_vmssd/brick
Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
Brick4: ovirt3:/gluster/ssd1_vmssd/brick
Brick5: ovirt1:/gluster/ssd1_vmssd/brick
Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
Options Reconfigured:
nfs.disable: on
transport.address-family: inet6
performance.quick-r...
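For VM-store volumes like this one, oVirt normally applies its Gluster
tuning as a packaged option group rather than one option at a time; a
sketch (the group's exact contents vary by Gluster version):

    # Applies the virt profile: sharding, eager locking, quorum settings,
    # and related VM-image tuning in one step
    gluster volume set vmssd group virt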
2018 Jun 01
0
[ovirt-users] Re: Gluster problems, cluster performance issues
...seen to tell them to bring it online. It claims none of the servers
> can see that volume, but I've quadruple-checked that the volumes are
> mounted on the engines and are fully functional there. I have some more
> VMs I need to get back up and running. How do I fix this?
>
> ovirt1, for unknown reasons, will not work. Attempts to bring it online
> fail, and I haven't figured out what log file to look in yet for more
> details.
>
As Krutika mentioned before, the storage performance issues seem to stem
from the slow brick on ovirt1 - ovirt1.nwfiber.com:/gluster...
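A quick way to rule the disk itself in or out on the suspect host is to time
direct writes on the brick's backing filesystem, bypassing Gluster entirely
(the test file path below is a placeholder next to the truncated brick path
above):

    # Run on ovirt1; oflag=direct avoids the page cache. Delete the file after.
    dd if=/dev/zero of=/gluster/brick1/ddtest bs=1M count=1024 oflag=direct
    # Compare raw device utilization and await times between hosts
    iostat -x 5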
2018 May 30
1
[ovirt-users] Re: Gluster problems, cluster performance issues
...no obvious way I've
seen to tell them to bring it online. It claims none of the servers can
see that volume, but I've quadruple-checked that the volumes are mounted on
the engines and are fully functional there. I have some more VMs I need to
get back up and running. How do I fix this?
ovirt1, for unknown reasons, will not work. Attempts to bring it online
fail, and I haven't figured out what log file to look in yet for more
details.
On Wed, May 30, 2018 at 9:36 AM, Jim Kusznir <jim at palousetech.com> wrote:
> Hi all again:
>
> I'm now subscribed to gluster-us...
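On a hyperconverged oVirt/Gluster host, the logs that usually explain a host
refusing to come online are the following (default locations; treat them as
assumptions for this particular setup):

    less /var/log/vdsm/vdsm.log            # host activation and monitoring
    less /var/log/glusterfs/glusterd.log   # gluster management daemon
    ls /var/log/glusterfs/bricks/          # one log per local brick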
2018 May 30
0
[ovirt-users] Re: Gluster problems, cluster performance issues
...h periodic massive spikes in latency
(occasionally so bad as to cause ovirt to report a bad engine health
status). Often, if I check the logs just then, I'll see those call traces
in xfs_log_worker or other gluster processes, as well as hung task timeout
messages.
As to the profile suggesting ovirt1 had poorer performance than ovirt2, I
don't have an explanation. The output of "gluster volume info engine" on
both hosts is identical. The computers and drives are identical (Dell R610
with PERC 6/i controller configured to pass through the drive). ovirt1 and
ovirt2's partition scheme/map do vary somew...
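The hung-task and xfs call traces mentioned above land in the kernel ring
buffer, so a cheap check on each host is standard kernel tooling (nothing
specific to this thread):

    dmesg -T | grep -iE 'hung_task|blocked for more than|xfs'
    # or, on systemd hosts
    journalctl -k | grep -i 'blocked for more than'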
2018 May 30
2
[ovirt-users] Re: Gluster problems, cluster performance issues
The profile seems to suggest very high latencies on the brick at
ovirt1.nwfiber.com:/gluster/brick1/engine, while ovirt2.* shows decent
numbers. Is everything OK with the brick on ovirt1?
Are the bricks of the engine volume on both these servers identical in
terms of their config?
-Krutika
On Wed, May 30, 2018 at 3:07 PM, Jim Kusznir <jim at palousetech.com> wrote:
>...
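Comparing the two bricks' configuration, as asked above, can be done host by
host and diffed; the mount point is an assumption based on the brick path in
this thread:

    # Run on both ovirt1 and ovirt2 and compare the output
    xfs_info /gluster/brick1
    mount | grep /gluster/brick1
    lsblk -o NAME,ROTA,SIZE,MODEL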
2018 May 30
1
[ovirt-users] Re: Gluster problems, cluster performance issues
...n: 4834450432 bytes
>>
>>
>> On Tue, May 29, 2018 at 2:55 PM, Jim Kusznir <jim at palousetech.com> wrote:
>>
>>> Thank you for your response.
>>>
>>> I have 4 gluster volumes. Three are replica 2 + arbiter; the replica
>>> bricks are on ovirt1 and ovirt2, with the arbiter on ovirt3. The 4th
>>> volume is replica 3, with a brick on all three ovirt machines.
>>>
>>> The first 3 volumes are on an SSD disk; the 4th is on a Seagate SSHD
>>> (same in all three machines). On ovirt3, the SSHD has reported hard IO...
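Since the SSHD on ovirt3 is reporting hard I/O errors, its SMART state is
worth checking directly (the device name is a placeholder; behind a PERC
controller, smartctl may need -d megaraid,N):

    smartctl -a /dev/sdb | grep -iE 'reallocated|pending|uncorrect'
    # Kernel-side confirmation of the same errors
    dmesg -T | grep -i sdb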