similar to: [ovirt-users] Recovering from a multi-node failure

Displaying 20 results from an estimated 200 matches similar to: "[ovirt-users] Recovering from a multi-node failure"

2017 Jun 20
2
[ovirt-users] Very poor GlusterFS performance
[Adding gluster-users] On Mon, Jun 19, 2017 at 8:16 PM, Chris Boot <bootc at bootc.net> wrote: > Hi folks, > > I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10 > configuration. My VMs run off a replica 3 arbiter 1 volume comprised of > 6 bricks, which themselves live on two SSDs in each of the servers (one > brick per SSD). The bricks are
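For reference, a replica 3 arbiter 1 volume with six bricks like the one described above is typically created along these lines. Hostnames and brick paths here are illustrative, not the poster's actual layout; every third brick in each set of three becomes the arbiter:

  # gluster volume create datavol replica 3 arbiter 1 \
      host1:/bricks/ssd1/datavol host2:/bricks/ssd1/datavol host3:/bricks/ssd1/datavol \
      host2:/bricks/ssd2/datavol host3:/bricks/ssd2/datavol host1:/bricks/ssd2/datavol
  # gluster volume start datavol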
2017 Jun 20
0
[ovirt-users] Very poor GlusterFS performance
Have you tried with:

  performance.strict-o-direct : off
  performance.strict-write-ordering : off

They can be changed dynamically. On 20 June 2017 at 17:21, Sahina Bose <sabose at redhat.com> wrote: > [Adding gluster-users] > > On Mon, Jun 19, 2017 at 8:16 PM, Chris Boot <bootc at bootc.net> wrote: > >> Hi folks, >> >> I have 3x servers in a
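A minimal sketch of the suggestion above, assuming a volume named <VOL> (the thread does not name it); gluster volume set takes effect immediately, without remounting clients:

  # gluster volume set <VOL> performance.strict-o-direct off
  # gluster volume set <VOL> performance.strict-write-ordering off

The new values can be read back with 'gluster volume get <VOL> performance.strict-o-direct'.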
2017 Jun 20
5
[ovirt-users] Very poor GlusterFS performance
Couple of things:

1. Like Darrell suggested, you should enable stat-prefetch and increase client and server event threads to 4.

   # gluster volume set <VOL> performance.stat-prefetch on
   # gluster volume set <VOL> client.event-threads 4
   # gluster volume set <VOL> server.event-threads 4

2. Also glusterfs-3.10.1 and above has a shard performance bug fix -
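After applying the commands above, the active values can be confirmed with gluster volume get, using the same <VOL> placeholder:

  # gluster volume get <VOL> performance.stat-prefetch
  # gluster volume get <VOL> client.event-threads
  # gluster volume get <VOL> server.event-threads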
2017 Jun 20
0
[ovirt-users] Very poor GlusterFS performance
Dear Krutika, Sorry for asking so naively but can you tell me on what factor do you base that the client and server event-threads parameters for a volume should be set to 4? Is this metric for example based on the number of cores a GlusterFS server has? I am asking because I saw my GlusterFS volumes are set to 2 and would like to set these parameters to something meaningful for performance
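The thread does not settle on a formula, but the inputs to that decision are easy to gather: nproc reports the host's core count, and the current event-thread settings (which ship as 2) can be read back per volume. <VOL> is a placeholder:

  # nproc
  # gluster volume get <VOL> client.event-threads
  # gluster volume get <VOL> server.event-threads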
2018 May 30
0
[ovirt-users] Re: Gluster problems, cluster performance issues
Hi all again: I'm now subscribed to gluster-users as well, so I should get any replies from that side too. At this point, I am seeing acceptable (although slower than I expect) performance much of the time, with periodic massive spikes in latency (occasionally so bad as to cause ovirt to report a bad engine health status). Often, if I check the logs just then, I'll see those call traces
2018 May 30
1
[ovirt-users] Re: Gluster problems, cluster performance issues
I've been back at it, and still am unable to get more than one of my physical nodes to come online in ovirt, nor am I able to get more than the two gluster volumes (storage domains) to show online within ovirt. In Storage -> Volumes, they all show offline (many with one brick down, which is correct: I have one server off) However, in Storage -> domains, they all show down (although
2018 May 30
2
[ovirt-users] Re: Gluster problems, cluster performance issues
The profile seems to suggest very high latencies on the brick at ovirt1.nwfiber.com:/gluster/brick1/engine ovirt2.* shows decent numbers. Is everything OK with the brick on ovirt1? Are the bricks of engine volume on both these servers identical in terms of their config? -Krutika On Wed, May 30, 2018 at 3:07 PM, Jim Kusznir <jim at palousetech.com> wrote: > Hi: > > Thank you. I
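For anyone following along: per-brick latency figures like the ones discussed here come from gluster's built-in profiler, which can be run against the engine volume roughly as follows:

  # gluster volume profile engine start
  # gluster volume profile engine info
  # gluster volume profile engine stop

The info subcommand prints per-brick latency and call counts for each file operation.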
2018 May 30
0
[ovirt-users] Re: Gluster problems, cluster performance issues
[Adding gluster-users back] Nothing amiss with volume info and status. Can you check the agent.log and broker.log - will be under /var/log/ovirt-hosted-engine-ha/ Also the gluster client logs - under /var/log/glusterfs/rhev-data-center-mnt-glusterSD<volume>.log On Wed, May 30, 2018 at 12:08 PM, Jim Kusznir <jim at palousetech.com> wrote: > I believe the gluster data store for
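A quick way to watch those logs while reproducing the issue (the client log filename depends on the actual mount path, hence the placeholder):

  # tail -f /var/log/ovirt-hosted-engine-ha/agent.log \
            /var/log/ovirt-hosted-engine-ha/broker.log
  # tail -f /var/log/glusterfs/rhev-data-center-mnt-glusterSD<volume>.log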
2017 Jun 28
0
Gluster volume not mounted
The mount log file of the volume would help in debugging the actual cause. On Tue, Jun 27, 2017 at 6:33 PM, Joel Diaz <mrjoeldiaz at gmail.com> wrote: > Good morning Gluster users, > > I'm very new to the Gluster file system. My apologies if this is not the > correct way to seek assistance. However, I would appreciate some insight > into understanding the issue I have.
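For context: the GlusterFS FUSE client writes one log per mount under /var/log/glusterfs/, named after the mount point with slashes replaced by dashes. So, as an illustration, a volume mounted at /mnt/data would normally log to:

  /var/log/glusterfs/mnt-data.log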
2018 Jun 01
0
[ovirt-users] Re: Gluster problems, cluster performance issues
On Thu, May 31, 2018 at 3:16 AM, Jim Kusznir <jim at palousetech.com> wrote: > I've been back at it, and still am unable to get more than one of my > physical nodes to come online in ovirt, nor am I able to get more than the > two gluster volumes (storage domains) to show online within ovirt. > > In Storage -> Volumes, they all show offline (many with one brick down,
2017 Jun 27
2
Gluster volume not mounted
Good morning Gluster users, I'm very new to the Gluster file system. My apologies if this is not the correct way to seek assistance. However, I would appreciate some insight into understanding the issue I have. I have three nodes running two volumes, engine and data. The third node is the arbiter on both volumes. Both volumes were operating fine but one of the volumes, data, no longer
2018 May 30
1
[ovirt-users] Re: Gluster problems, cluster performance issues
Adding Ravi to look into the heal issue. As for the fsync hang and subsequent IO errors, it seems a lot like https://bugzilla.redhat.com/show_bug.cgi?id=1497156 and Paolo Bonzini from qemu had pointed out that this would be fixed by the following commit: commit e72c9a2a67a6400c8ef3d01d4c461dbbbfa0e1f0 Author: Paolo Bonzini <pbonzini at redhat.com> Date: Wed Jun 21 16:35:46 2017
2024 Jun 26
0
Trouble with data size in a Distributed-Replicated 2 x 3 = 6 volume
Hi, In my Distributed-Replicated volume [2 x 3 = 6] used as a data domain for an oVirt hypervisor, I had trouble with used space: it was previously a Replicated 1 x 3 = 3 volume, extended by adding one brick (one disk) in each server to reach a 2 x 3 = 6 volume. df -h on the mount point gives ~98% used (see below) Filesystem Size Used Avail Use% Mounted on
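A sanity check worth running in this situation is to compare the layout and per-brick usage as gluster itself reports them, to confirm the new bricks are being counted; <VOL> stands in for the actual volume name:

  # gluster volume info <VOL>
  # gluster volume status <VOL> detail

The detail output includes total and free disk space for every brick, which makes it easy to spot a brick whose filesystem is not contributing the expected capacity.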
2021 Jan 07
1
HCI Cluster - CentOS8 to Streams Upgrade Broken
I have a test environment: a three-node HCI cluster, CentOS8 build, with Gluster as the file system from a standard cockpit deploy of HCI. Converted to CentOS Stream, which seemed to go fine. Did a yum update with no issues. Did a reboot... and now the engine will no longer start, so I can no longer start my virtual machines. I posted it as bug https://bugzilla.redhat.com/show_bug.cgi?id=1911910 I posted to
2017 Jun 04
2
Rebalance + VM corruption - current status and request for feedback
Great news. Is this planned to be published in the next release? On 29 May 2017 at 3:27 PM, "Krutika Dhananjay" <kdhananj at redhat.com> wrote: > Thanks for that update. Very happy to hear it ran fine without any issues. > :) > > Yeah so you can ignore those 'No such file or directory' errors. They > represent a transient state where DHT in the client process
2017 Nov 24
2
[ovirt-users] slow performance with export storage on glusterfs
On Thu, Nov 23, 2017 at 4:56 PM, Jiří Sléžka <jiri.slezka at slu.cz> wrote: > Hi, > > On 11/22/2017 07:30 PM, Nir Soffer wrote: > > On Mon, Nov 20, 2017 at 5:22 PM Jiří Sléžka <jiri.slezka at slu.cz > > <mailto:jiri.slezka at slu.cz>> wrote: > > > > Hi, > > > > I am trying to realize why exporting of a vm to export storage on
2017 Jun 06
2
Rebalance + VM corruption - current status and request for feedback
Hi Mahdi, Did you get a chance to verify this fix again? If this fix works for you, is it OK if we move this bug to CLOSED state and revert the rebalance-cli warning patch? -Krutika On Mon, May 29, 2017 at 6:51 PM, Mahdi Adnan <mahdi.adnan at outlook.com> wrote: > Hello, > > > Yes, i forgot to upgrade the client as well. > > I did the upgrade and created a new volume,
2017 Nov 28
2
[ovirt-users] slow performance with export storage on glusterfs
What about mounting over NFS instead of the FUSE client? Or maybe libgfapi? Is that available for export domains? On Fri, Nov 24, 2017 at 3:48 AM Jiří Sléžka <jiri.slezka at slu.cz> wrote: > On 11/24/2017 06:41 AM, Sahina Bose wrote: > > > > > > On Thu, Nov 23, 2017 at 4:56 PM, Jiří Sléžka <jiri.slezka at slu.cz > > <mailto:jiri.slezka at slu.cz>>
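A sketch of the NFS alternative being floated here, assuming something is actually exporting the volume over NFS (gluster's built-in NFS server with nfs.disable off, or NFS-Ganesha); host, volume, and mount point are placeholders:

  # mount -t nfs -o vers=3 gluster-host:/myvol /mnt/export

vers=3 matters because gluster's built-in NFS server only speaks NFSv3.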
2017 Jun 05
0
Rebalance + VM corruption - current status and request for feedback
The fixes are already available in 3.10.2, 3.8.12 and 3.11.0 -Krutika On Sun, Jun 4, 2017 at 5:30 PM, Gandalf Corvotempesta < gandalf.corvotempesta at gmail.com> wrote: > Great news. > Is this planned to be published in next release? > > On 29 May 2017 at 3:27 PM, "Krutika Dhananjay" <kdhananj at redhat.com> > wrote: > >> Thanks for that update.
2017 Jun 05
1
Rebalance + VM corruption - current status and request for feedback
Great, thanks! On 5 Jun 2017 at 6:49 AM, "Krutika Dhananjay" <kdhananj at redhat.com> wrote: > The fixes are already available in 3.10.2, 3.8.12 and 3.11.0 > > -Krutika > > On Sun, Jun 4, 2017 at 5:30 PM, Gandalf Corvotempesta < > gandalf.corvotempesta at gmail.com> wrote: > >> Great news. >> Is this planned to be published in next