Displaying 6 results from an estimated 6 matches for "nwfiber".
2018 May 30 · 0 replies · [ovirt-users] Re: Gluster problems, cluster performance issues
....
>
> [root at ovirt2 images]# gluster volume info
>
> Volume Name: data
> Type: Replicate
> Volume ID: e670c488-ac16-4dd1-8bd3-e43b2e42cc59
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ovirt1.nwfiber.com:/gluster/brick2/data
> Brick2: ovirt2.nwfiber.com:/gluster/brick2/data
> Brick3: ovirt3.nwfiber.com:/gluster/brick2/data (arbiter)
> Options Reconfigured:
> diagnostics.count-fop-hits: on
> diagnostics.latency-measurement: on
> changelog.changelog: on
> geo-replication.igno...
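The reconfigured diagnostics options in this snippet (diagnostics.count-fop-hits, diagnostics.latency-measurement) are the ones Gluster sets when volume profiling is enabled. A minimal sketch of that workflow, assuming the `data` volume shown above and a host with the gluster CLI (this is commentary, not part of the original thread):

```shell
# Enabling profiling is what sets diagnostics.latency-measurement and
# diagnostics.count-fop-hits to "on", as seen in the volume info above.
gluster volume profile data start

# Print per-brick cumulative and interval latency stats.
gluster volume profile data info

# Stop collecting once done; profiling adds a small overhead.
gluster volume profile data stop
```

These commands require a running Gluster cluster, so they are shown as a command fragment rather than a runnable example.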
2018 May 30 · 2 replies · [ovirt-users] Re: Gluster problems, cluster performance issues
The profile seems to suggest very high latencies on the brick at
ovirt1.nwfiber.com:/gluster/brick1/engine
ovirt2.* shows decent numbers. Is everything OK with the brick on ovirt1?
Are the bricks of engine volume on both these servers identical in terms of
their config?
-Krutika
On Wed, May 30, 2018 at 3:07 PM, Jim Kusznir <jim at palousetech.com> wrote:
> Hi:
>...
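Krutika's question about whether the two servers' bricks are identically configured can be answered from the CLI. A hedged sketch, assuming a working gluster install; the `engine` volume name is taken from the brick path quoted above:

```shell
# Compare per-brick detail (disk free space, inode counts, underlying
# device and filesystem) across the servers hosting the volume.
gluster volume status engine detail

# Confirm both bricks share the same reconfigured volume options.
gluster volume info engine
```

Again a command fragment: both commands must run against the actual cluster.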
2018 May 30 · 0 replies · [ovirt-users] Re: Gluster problems, cluster performance issues
...x40/0x40
[13493.643624] perf: interrupt took too long (12026 > 11991), lowering
kernel.perf_event_max_sample_rate to 16000
On Wed, May 30, 2018 at 2:44 AM, Krutika Dhananjay <kdhananj at redhat.com>
wrote:
> The profile seems to suggest very high latencies on the brick at
> ovirt1.nwfiber.com:/gluster/brick1/engine
> ovirt2.* shows decent numbers. Is everything OK with the brick on ovirt1?
> Are the bricks of engine volume on both these servers identical in terms
> of their config?
>
> -Krutika
>
>
> On Wed, May 30, 2018 at 3:07 PM, Jim Kusznir <jim at pal...
2018 May 30 · 1 reply · [ovirt-users] Re: Gluster problems, cluster performance issues
...rupt took too long (12026 > 11991), lowering
> kernel.perf_event_max_sample_rate to 16000
>
>
> On Wed, May 30, 2018 at 2:44 AM, Krutika Dhananjay <kdhananj at redhat.com>
> wrote:
>
>> The profile seems to suggest very high latencies on the brick at
>> ovirt1.nwfiber.com:/gluster/brick1/engine
>> ovirt2.* shows decent numbers. Is everything OK with the brick on ovirt1?
>> Are the bricks of engine volume on both these servers identical in terms
>> of their config?
>>
>> -Krutika
>>
>>
>> On Wed, May 30, 2018 at 3:0...
2018 Jun 01 · 0 replies · [ovirt-users] Re: Gluster problems, cluster performance issues
...is?
>
> ovirt1, for unknown reasons, will not work. Attempts to bring it online
> fail, and I haven't figured out what log file to look in yet for more
> details.
>
As Krutika mentioned before, the storage performance issues seem to stem
from the slow brick on ovirt1 - ovirt1.nwfiber.com:/gluster/brick1/engine
Have you checked this?
When you say ovirt1 will not work - do you mean the host cannot be
activated in the oVirt engine? Can you look at/share the engine.log (on the
HE VM) to understand why it won't? Also, if the storage domain is not
accessible by any of the servers,...
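The suggested engine.log check can be done on the hosted-engine (HE) VM. A sketch assuming the default oVirt log location and the failing host name from the thread:

```shell
# Default engine log path on the hosted-engine VM.
tail -n 200 /var/log/ovirt-engine/engine.log

# Narrow to entries mentioning the host that fails to activate.
grep -i 'ovirt1' /var/log/ovirt-engine/engine.log | tail -n 50
```

This is a command fragment for an oVirt deployment, not a standalone runnable example.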
2018 May 30 · 1 reply · [ovirt-users] Re: Gluster problems, cluster performance issues
...jim at palousetech.com> wrote:
>
>> I think this is the profile information for one of the volumes that lives
>> on the SSDs and is fully operational with no down/problem disks:
>>
>> [root at ovirt2 yum.repos.d]# gluster volume profile data info
>> Brick: ovirt2.nwfiber.com:/gluster/brick2/data
>> ----------------------------------------------
>> Cumulative Stats:
> >>    Block Size:          256b+        512b+       1024b+
> >>  No. of Reads:            983         2696         1059
> >> No. of Writes:...