similar to: [ovirt-users] Re: Gluster problems, cluster performance issues

Displaying 20 results from an estimated 600 matches similar to: "[ovirt-users] Re: Gluster problems, cluster performance issues"

2018 May 30
0
[ovirt-users] Re: Gluster problems, cluster performance issues
[Adding gluster-users back] Nothing amiss with volume info and status. Can you check the agent.log and broker.log? They will be under /var/log/ovirt-hosted-engine-ha/. Also the gluster client logs, under /var/log/glusterfs/rhev-data-center-mnt-glusterSD<volume>.log. On Wed, May 30, 2018 at 12:08 PM, Jim Kusznir <jim at palousetech.com> wrote: > I believe the gluster data store for
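A quick way to pull those logs on the affected host (a sketch; the exact client-log filename depends on the mount path, hence the glob):

  tail -n 200 /var/log/ovirt-hosted-engine-ha/agent.log
  tail -n 200 /var/log/ovirt-hosted-engine-ha/broker.log
  # gluster fuse client log(s) for the storage-domain mount
  grep -i error /var/log/glusterfs/rhev-data-center-mnt-glusterSD*.log | tail -n 100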
2018 May 30
1
[ovirt-users] Re: Gluster problems, cluster performance issues
Adding Ravi to look into the heal issue. As for the fsync hang and subsequent IO errors, it seems a lot like https://bugzilla.redhat.com/show_bug.cgi?id=1497156 and Paolo Bonzini from qemu had pointed out that this would be fixed by the following commit: commit e72c9a2a67a6400c8ef3d01d4c461dbbbfa0e1f0 Author: Paolo Bonzini <pbonzini at redhat.com> Date: Wed Jun 21 16:35:46 2017
2018 May 29
0
[ovirt-users] Gluster problems, cluster performance issues
[Adding gluster-users to look at the heal issue] On Tue, May 29, 2018 at 9:17 AM, Jim Kusznir <jim at palousetech.com> wrote: > Hello: > > I've been having some cluster and gluster performance issues lately. I > also found that my cluster was out of date, and was trying to apply updates > (hoping to fix some of these), and discovered the ovirt 4.1 repos were > taken
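For anyone following along, the heal state can be checked from any of the gluster servers with something like the commands below (the volume name "engine" is only an example):

  gluster volume heal engine info               # entries still pending heal
  gluster volume heal engine info split-brain   # files in split-brain, if any
  gluster volume status engine                  # are all bricks and self-heal daemons up?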
2018 Jun 01
0
[ovirt-users] Re: Gluster problems, cluster performance issues
On Thu, May 31, 2018 at 3:16 AM, Jim Kusznir <jim at palousetech.com> wrote: > I've been back at it, and still am unable to get more than one of my > physical nodes to come online in ovirt, nor am I able to get more than the > two gluster volumes (storage domains) to show online within ovirt. > > In Storage -> Volumes, they all show offline (many with one brick down,
2018 May 30
1
[ovirt-users] Re: Gluster problems, cluster performance issues
I've been back at it, and still am unable to get more than one of my physical nodes to come online in ovirt, nor am I able to get more than the two gluster volumes (storage domains) to show online within ovirt. In Storage -> Volumes, they all show offline (many with one brick down, which is correct: I have one server off) However, in Storage -> domains, they all show down (although
2018 May 29
0
[ovirt-users] Gluster problems, cluster performance issues
Do you see errors reported in the mount logs for the volume? If so, could you attach the logs? Any issues with your underlying disks? Can you also attach the output of volume profiling? On Wed, May 30, 2018 at 12:13 AM, Jim Kusznir <jim at palousetech.com> wrote: > Ok, things have gotten MUCH worse this morning. I'm getting random errors > from VMs, right now, about a third of my
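Volume profiling output is typically gathered like this (a sketch; "data" stands in for the actual volume name):

  gluster volume profile data start
  # reproduce the problem, then capture the stats
  gluster volume profile data info > /tmp/data-profile.txt
  gluster volume profile data stop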
2018 May 30
2
[ovirt-users] Re: Gluster problems, cluster performance issues
The profile seems to suggest very high latencies on the brick at ovirt1.nwfiber.com:/gluster/brick1/engine. ovirt2.* shows decent numbers. Is everything OK with the brick on ovirt1? Are the bricks of the engine volume on both these servers identical in terms of their config? -Krutika On Wed, May 30, 2018 at 3:07 PM, Jim Kusznir <jim at palousetech.com> wrote: > Hi: > > Thank you. I
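A plausible way to compare the two bricks (brick path taken from the profile above; the commands are illustrative):

  gluster volume info engine            # volume options and brick list
  gluster volume status engine detail   # per-brick inode/disk statistics
  # on each host, check the device backing the brick
  df -h /gluster/brick1/engine
  grep brick1 /proc/mounts              # mount options should match across hosts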
2018 May 30
0
[ovirt-users] Re: Gluster problems, cluster performance issues
Hi all again: I'm now subscribed to gluster-users as well, so I should get any replies from that side too. At this point, I am seeing acceptable (although slower than I expect) performance much of the time, with periodic massive spikes in latency (occasionally so bad as to cause oVirt to detect a bad engine health status). Often, if I check the logs just then, I'll see those call traces
2017 Aug 16
1
[ovirt-users] Recovering from a multi-node failure
On Sun, Aug 6, 2017 at 4:42 AM, Jim Kusznir <jim at palousetech.com> wrote: > Well, after a very stressful weekend, I think I have things largely > working. Turns out that most of the above issues were caused by the Linux > permissions of the exports for all three volumes (they had been reset to > 600; setting them to 774 or 770 fixed many of the issues). Of course, I >
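For reference, restoring the permissions described above might look like this (the brick path and the vdsm:kvm ownership oVirt normally expects are assumptions; verify against a working domain first):

  # on each gluster server, for each volume's export/brick directory
  chmod 770 /gluster/brick1/engine
  chown vdsm:kvm /gluster/brick1/engine   # uid/gid 36:36 on stock oVirt hosts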
2017 Sep 08
0
GlusterFS as virtual machine storage
Back to replica 3 w/o arbiter. Two fio jobs running (direct=1 and direct=0), rebooting one node... and VM dmesg looks like: [ 483.862664] blk_update_request: I/O error, dev vda, sector 23125016 [ 483.898034] blk_update_request: I/O error, dev vda, sector 2161832 [ 483.901103] blk_update_request: I/O error, dev vda, sector 2161832 [ 483.904045] Aborting journal on device vda1-8. [ 483.906959]
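The exact fio parameters aren't shown; a rough equivalent of the two jobs (filename, size, and runtime are placeholders) would be:

  fio --name=buffered --filename=/mnt/test/fio.dat --size=1G --rw=randwrite \
      --bs=4k --direct=0 --ioengine=libaio --iodepth=16 --runtime=300 --time_based
  fio --name=odirect --filename=/mnt/test/fio.dat --size=1G --rw=randwrite \
      --bs=4k --direct=1 --ioengine=libaio --iodepth=16 --runtime=300 --time_based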
2016 Apr 12
0
Problems with scsi-target-utils when hosted on dom0 centos 7 xen box
On Mon, Apr 11, 2016 at 9:14 PM, Nathan Coulson <nathan at bravenet.com> wrote: > Hello > > We were attempting to use scsi-target-utils, hosted on a Xen dom0 VM using > localhost, and running into some problems. I was not able to reproduce this > on a CentOS 7.2 server using the default kernel. Have you tried booting the Virt SIG kernel natively and seeing if you can
2017 Aug 10
1
Errors on an SSD drive
On 08/09/2017 01:48 PM, hw wrote: > Robert Moskowitz wrote: >> I am building a new system using a Kingston 240GB SSD drive I pulled >> from my notebook (when I had to upgrade to a 500GB SSD drive). >> CentOS install went fine and ran for a couple of days, then got errors on >> the console. Here is an example: >> >> [168176.995064] sd 0:0:0:0: [sda] tag#14
2017 Aug 09
0
Errors on an SSD drive
Robert Moskowitz wrote: > I am building a new system using a Kingston 240GB SSD drive I pulled from my notebook (when I had to upgrade to a 500GB SSD drive). CentOS install went fine and ran for a couple of days, then got errors on the console. Here is an example: > > [168176.995064] sd 0:0:0:0: [sda] tag#14 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK > [168177.004050]
2017 Aug 10
0
Errors on an SSD drive
I have yet to see an SSD read/write error which wasn't related to disk issues like a bad sector, but the controller might have an issue with the drive. To verify it you will need to burn some read/write IOPS on the drive, but if it's under warranty then it's better to verify it now rather than later. Eliezer ---- Eliezer Croitoru Linux System Administrator Mobile: +972-5-28704261 Email:
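One way to burn those read/write IOPS and let the drive log its own verdict (the device name /dev/sda is taken from the earlier dmesg output; back up first, a failing drive may get worse):

  smartctl -t long /dev/sda    # extended self-test; check the self-test log when it finishes
  smartctl -a /dev/sda         # reallocated/pending sector counts, self-test results
  badblocks -nsv /dev/sda      # non-destructive read-write pass over the whole device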
2017 Aug 09
0
Errors on an SSD drive
If it's a bad sector problem, you'd write to sector 17066160 and see if the drive complies or spits back a write error. It looks like a bad sector in that the same LBA is reported each time but I've only ever seen this with both a read error and a UNC error. So I'm not sure it's a bad sector. What is DID_BAD_TARGET? And what do you get for smartctl -x <dev> Chris
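A sketch of that test (same device and LBA as reported above; the write destroys whatever data is in that sector):

  smartctl -x /dev/sda                            # full SMART and error-log report
  hdparm --read-sector 17066160 /dev/sda          # does a raw read of the LBA fail?
  hdparm --yes-i-know-what-i-am-doing --write-sector 17066160 /dev/sda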
2016 Apr 13
0
Problems with scsi-target-utils when hosted on dom0 centos 7 xen box
On 2016-04-12 09:43 AM, Nathan Coulson wrote: > By natively, I take it using > kernel /vmlinuz (vs kernel /xen) > > Not yet, but working on setting up such an environment. > > (At this time, I was using virt-install to reproduce the problem, and > the original server we are testing on did not support kvm but the 2nd > server does). > > On 2016-04-12 03:26 AM, George
2017 Jun 20
0
[ovirt-users] Very poor GlusterFS performance
Have you tried with performance.strict-o-direct: off and performance.strict-write-ordering: off? They can be changed dynamically. On 20 June 2017 at 17:21, Sahina Bose <sabose at redhat.com> wrote: > [Adding gluster-users] > > On Mon, Jun 19, 2017 at 8:16 PM, Chris Boot <bootc at bootc.net> wrote: > >> Hi folks, >> >> I have 3x servers in a
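Those options would be applied with gluster volume set, e.g. (the volume name is illustrative):

  gluster volume set data performance.strict-o-direct off
  gluster volume set data performance.strict-write-ordering off
  gluster volume get data all | grep strict       # confirm the values took effect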
2017 Jun 20
2
[ovirt-users] Very poor GlusterFS performance
[Adding gluster-users] On Mon, Jun 19, 2017 at 8:16 PM, Chris Boot <bootc at bootc.net> wrote: > Hi folks, > > I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10 > configuration. My VMs run off a replica 3 arbiter 1 volume comprised of > 6 bricks, which themselves live on two SSDs in each of the servers (one > brick per SSD). The bricks are
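For context, a volume with that shape is created along these lines (hostnames and brick paths here are made up; the last brick of each replica set is the arbiter):

  gluster volume create vmstore replica 3 arbiter 1 \
      host1:/gluster/ssd1/vmstore host2:/gluster/ssd1/vmstore host3:/gluster/arb1/vmstore \
      host2:/gluster/ssd2/vmstore host3:/gluster/ssd2/vmstore host1:/gluster/arb2/vmstore
  gluster volume start vmstore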
2017 Aug 10
0
Errors on an SSD drive
On 08/09/2017 10:44 PM, mad.scientist.at.large at tutanota.com wrote: > What file system are you using? SSD drives have different characteristics that need to be accommodated (including a relatively slow write process, which is obvious as soon as the buffer is full), and never, never put a swap partition on one; the high activity will wear it out rather quickly. Might also check cables, often a
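Checking whether swap actually sits on the SSD is quick (ROTA=0 marks non-rotational devices):

  swapon --show
  lsblk -o NAME,TYPE,SIZE,ROTA,MOUNTPOINT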
2017 Nov 15
2
virtlock - a VM goes read-only
Dear colleagues, I am facing a problem that has been troubling me for the last week and a half. Please help if you are able to, or offer some guidance. I have a non-prod POC environment with two fully updated CentOS 7 hypervisors and an NFS filer that serves as VM image storage. The overall environment works exceptionally well. However, starting a few weeks ago I have been trying to implement virtlock
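The lockd-based setup usually involved here looks roughly like the following on each hypervisor (a sketch; the lockspace path is an assumption and would need to live on shared storage, e.g. the NFS filer, for cross-host protection):

  # /etc/libvirt/qemu.conf
  lock_manager = "lockd"

  # /etc/libvirt/qemu-lockd.conf
  auto_disk_leases = 1
  file_lockspace_dir = "/var/lib/libvirt/lockd/files"
  require_lease_for_disks = 1

  systemctl enable --now virtlockd
  systemctl restart libvirtd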