Displaying 11 results from an estimated 11 matches for "ovirt2".
2017 Aug 16 (1) [ovirt-users] Recovering from a multi-node failure
...g outage, and hours and hours of
> work, including beginning to look at rebuilding my cluster....
>
> So, now my cluster is operating again, and everything looks good EXCEPT
> for one major Gluster issue/question that I haven't found any references or
> info on.
>
> my host ovirt2, one of the replica gluster servers, is the one that lost
> its storage and had to reinitialize it from the cluster. the iso volume is
> perfectly fine and complete, but the engine and data volumes are smaller on
> disk on this node than on the other node (and this node before the crash)....
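A hedged aside, not from the thread itself: after a replica brick has been rebuilt, the usual first check is whether self-heal has actually finished, since sparse allocation alone can make a freshly healed brick look smaller on disk than its peer. A minimal sketch, assuming the volume names engine and data and a brick layout like the /gluster/brick2 paths quoted later in these results:

    # List entries still pending heal on each volume (run on any peer).
    gluster volume heal engine info
    gluster volume heal data info
    # Compare allocated vs. apparent size on the rebuilt brick; sparse files
    # make plain "du" misleading. The brick path here is an assumption.
    du -sh /gluster/brick2/engine
    du -sh --apparent-size /gluster/brick2/engine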
2017 Jun 20 (2) [ovirt-users] Very poor GlusterFS performance
...istributed-Replicate
> Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 2 x (2 + 1) = 6
> Transport-type: tcp
> Bricks:
> Brick1: ovirt3:/gluster/ssd0_vmssd/brick
> Brick2: ovirt1:/gluster/ssd0_vmssd/brick
> Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
> Brick4: ovirt3:/gluster/ssd1_vmssd/brick
> Brick5: ovirt1:/gluster/ssd1_vmssd/brick
> Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
> Options Reconfigured:
> nfs.disable: on
> transport.address-family: inet6
> performance.quick-read: o...
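As a hedged aside (not part of the quoted thread): before tuning the performance.* options shown above, Gluster's built-in profiler can attribute the latency to individual bricks. A minimal sketch, assuming the volume name vmssd from the output above:

    # Start collecting per-brick FOP latency counters.
    gluster volume profile vmssd start
    # Reproduce the slow workload, then dump the statistics and stop profiling.
    gluster volume profile vmssd info
    gluster volume profile vmssd stop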
2017 Jun 20 (0) [ovirt-users] Very poor GlusterFS performance
...ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 2 x (2 + 1) = 6
>> Transport-type: tcp
>> Bricks:
>> Brick1: ovirt3:/gluster/ssd0_vmssd/brick
>> Brick2: ovirt1:/gluster/ssd0_vmssd/brick
>> Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
>> Brick4: ovirt3:/gluster/ssd1_vmssd/brick
>> Brick5: ovirt1:/gluster/ssd1_vmssd/brick
>> Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
>> Options Reconfigured:
>> nfs.disable: on
>> transport.address-family: inet6
>> ...
2017 Jun 20 (5) [ovirt-users] Very poor GlusterFS performance
...cfe464853
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 2 x (2 + 1) = 6
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: ovirt3:/gluster/ssd0_vmssd/brick
>>> Brick2: ovirt1:/gluster/ssd0_vmssd/brick
>>> Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
>>> Brick4: ovirt3:/gluster/ssd1_vmssd/brick
>>> Brick5: ovirt1:/gluster/ssd1_vmssd/brick
>>> Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
>>> Options Reconfigured:
>>> nfs.disable: on
>>> transport.addr...
2018 May 30 (0) [ovirt-users] Re: Gluster problems, cluster performance issues
...ther host, then delete that and see
> that it disappeared on the first host; it passed that test. Here's the
> info and status. (I have NOT performed the steps that Krutika and
> Ravishankar suggested yet, as I don't have my data volumes working again
> yet.
>
> [root@ovirt2 images]# gluster volume info
>
> Volume Name: data
> Type: Replicate
> Volume ID: e670c488-ac16-4dd1-8bd3-e43b2e42cc59
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ovirt1.nwfiber.com:/gluster/brick2...
2017 Jun 20 (0) [ovirt-users] Very poor GlusterFS performance
...looks like this:
Volume Name: vmssd
Type: Distributed-Replicate
Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: ovirt3:/gluster/ssd0_vmssd/brick
Brick2: ovirt1:/gluster/ssd0_vmssd/brick
Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
Brick4: ovirt3:/gluster/ssd1_vmssd/brick
Brick5: ovirt1:/gluster/ssd1_vmssd/brick
Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
Options Reconfigured:
nfs.disable: on
transport.address-family: inet6
performance.quick-read: off
performance.read-ahead: off
perf...
2018 May 30 (0) [ovirt-users] Re: Gluster problems, cluster performance issues
...y
(occasionally so bad as to cause ovirt to detect an engine bad health
status). Often, if I check the logs just then, I'll see those call traces
in xfs_log_worker or other gluster processes, as well as hung task timeout
messages.
As to the profile suggesting ovirt1 had poorer performance than ovirt2, I
don't have an explanation. gluster volume info engine on both hosts are
identical. The computers and drives are identical (Dell R610 with PERC 6/i
controller configured to pass through the drive). ovirt1 and ovirt2's
partition scheme/map do vary somewhat, but I figured that wouldn'...
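A hedged aside, not from the quoted message: when two nominally identical hosts profile this differently and the kernel is logging hung-task warnings, it is worth confirming the block devices themselves behave the same before blaming Gluster. A rough sketch to run on both ovirt1 and ovirt2 (the device name is an assumption):

    # Kernel-side symptoms: hung-task timeouts and XFS log stalls.
    dmesg -T | grep -Ei 'hung_task|xfs'
    # Compare device utilisation and average wait between the hosts under load.
    iostat -xz 5 3
    # Check the health and cache settings of the passed-through PERC 6/i disk.
    smartctl -a /dev/sdb    # device name is an assumption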
2018 May 30 (2) [ovirt-users] Re: Gluster problems, cluster performance issues
The profile seems to suggest very high latencies on the brick at
ovirt1.nwfiber.com:/gluster/brick1/engine
ovirt2.* shows decent numbers. Is everything OK with the brick on ovirt1?
Are the bricks of engine volume on both these servers identical in terms of
their config?
-Krutika
On Wed, May 30, 2018 at 3:07 PM, Jim Kusznir <jim at palousetech.com> wrote:
> Hi:
>
> Thank you. I was finally a...
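As a hedged aside (not from the thread): one way to answer Krutika's question above is to compare not just the Gluster options but the brick mounts and filesystem geometry on the two hosts, since gluster volume info alone will not reveal a per-host difference. The paths below are assumptions based on the brick name in the message; run the checks on both ovirt1 and ovirt2 and diff the output:

    # Volume options are cluster-wide, but confirm both hosts agree.
    gluster volume info engine
    # Per-host differences usually hide in the mount options or XFS geometry.
    grep '/gluster/brick1' /proc/mounts
    xfs_info /gluster/brick1    # mount point is an assumption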
2018 May 30 (1) [ovirt-users] Re: Gluster problems, cluster performance issues
...s to cause ovirt to detect an engine bad health
> status). Often, if I check the logs just then, I'll see those call traces
> in xfs_log_worker or other gluster processes, as well as hung task timeout
> messages.
>
> As to the profile suggesting ovirt1 had poorer performance than ovirt2, I
> don't have an explanation. gluster volume info engine on both hosts are
> identical. The computers and drives are identical (Dell R610 with PERC 6/i
> controller configured to pass through the drive). ovirt1 and ovirt2's
> partition scheme/map do vary somewhat, but I figu...
2018 Jun 01 (0) [ovirt-users] Re: Gluster problems, cluster performance issues
...detect an engine bad health
>> status). Often, if I check the logs just then, I'll see those call traces
>> in xfs_log_worker or other gluster processes, as well as hung task timeout
>> messages.
>>
>> As to the profile suggesting ovirt1 had poorer performance than ovirt2, I
>> don't have an explanation. gluster volume info engine on both hosts are
>> identical. The computers and drives are identical (Dell R610 with PERC 6/i
>> controller configured to pass through the drive). ovirt1 and ovirt2's
>> partition scheme/map do vary some...
2018 May 30 (1) [ovirt-users] Re: Gluster problems, cluster performance issues
...?
>
> --Jim
>
> On Tue, May 29, 2018 at 3:01 PM, Jim Kusznir <jim at palousetech.com> wrote:
>
>> I think this is the profile information for one of the volumes that lives
>> on the SSDs and is fully operational with no down/problem disks:
>>
>> [root@ovirt2 yum.repos.d]# gluster volume profile data info
>> Brick: ovirt2.nwfiber.com:/gluster/brick2/data
>> ----------------------------------------------
>> Cumulative Stats:
>> Block Size: 256b+ 512b+ 1024b+
>> No. of Reads:...