similar to: op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)

Displaying 20 results from an estimated 2000 matches similar to: "op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)"

2017 Jul 05 · 1 · op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
On Wed, Jul 5, 2017 at 5:02 PM, Sahina Bose <sabose at redhat.com> wrote: > > > On Wed, Jul 5, 2017 at 8:16 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com > > wrote: > >> >> >> On Wed, Jul 5, 2017 at 7:42 AM, Sahina Bose <sabose at redhat.com> wrote: >> >>> >>> >>>> ... >>>> >>>> then
2017 Jul 05 · 1 · op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
And what does glusterd log indicate for these failures? On Wed, Jul 5, 2017 at 8:43 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com> wrote: > > > On Wed, Jul 5, 2017 at 5:02 PM, Sahina Bose <sabose at redhat.com> wrote: > >> >> >> On Wed, Jul 5, 2017 at 8:16 PM, Gianluca Cecchi < >> gianluca.cecchi at gmail.com> wrote: >> >>>
2017 Jul 05 · 1 · op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
On Wed, Jul 5, 2017 at 5:22 PM, Atin Mukherjee <amukherj at redhat.com> wrote: > And what does glusterd log indicate for these failures? > See here in gzip format https://drive.google.com/file/d/0BwoPbcrMv8mvYmlRLUgyV0pFN0k/view?usp=sharing It seems that on each host the peer files have been updated with a new entry "hostname2": [root at ovirt01 ~]# cat
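The preview is cut off at the `cat` command, but the peer state files it refers to live under glusterd's working directory; each file can carry several `hostnameN` lines for the same peer, which is the "new entry" being described. A way to inspect them (paths are the standard glusterd layout, not quoted from the thread):

```shell
# Each peer known to this node has one state file, keyed by peer UUID.
# Look for multiple hostname1/hostname2/... entries in the same file.
cat /var/lib/glusterd/peers/*
```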
2017 Jul 06 · 1 · op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
On Thu, Jul 6, 2017 at 3:47 AM, Gianluca Cecchi <gianluca.cecchi at gmail.com> wrote: > On Wed, Jul 5, 2017 at 6:39 PM, Atin Mukherjee <amukherj at redhat.com> > wrote: > >> OK, so the log just hints to the following: >> >> [2017-07-05 15:04:07.178204] E [MSGID: 106123] >> [glusterd-mgmt.c:1532:glusterd_mgmt_v3_commit] 0-management: Commit >>
2017 Jul 06 · 2 · op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
On Thu, Jul 6, 2017 at 6:55 AM, Atin Mukherjee <amukherj at redhat.com> wrote: > > >> > You can switch back to info mode the moment this is hit one more time with > the debug log enabled. What I'd need here is the glusterd log (with debug > mode) to figure out the exact cause of the failure. > > >> >> Let me know, >> thanks >> >>
2017 Jul 06 · 1 · op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
On Thu, Jul 6, 2017 at 5:26 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com> wrote: > On Thu, Jul 6, 2017 at 8:38 AM, Gianluca Cecchi <gianluca.cecchi at gmail.com > > wrote: > >> >> Eventually I can destroy and recreate this "export" volume again with the >> old names (ovirt0N.localdomain.local) if you give me the sequence of >> commands,
2017 Jul 05 · 1 · op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
OK, so the log just hints to the following: [2017-07-05 15:04:07.178204] E [MSGID: 106123] [glusterd-mgmt.c:1532:glusterd_mgmt_v3_commit] 0-management: Commit failed for operation Reset Brick on local node [2017-07-05 15:04:07.178214] E [MSGID: 106123] [glusterd-replace-brick.c:649:glusterd_mgmt_v3_initiate_replace_brick_cmd_phases] 0-management: Commit Op Failed While going through the code,
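For context on the operation whose commit phase is failing in the log above: the documented reset-brick sequence (available since GlusterFS 3.9) has a `start` and a `commit force` step. Volume name and host/brick paths below are placeholders, not values from the thread:

```shell
# Take the brick offline for reconfiguration.
gluster volume reset-brick myvol ovirt01.localdomain.local:/gluster/brick1 start

# After changing the hostname/network, re-commit the brick under its new name.
gluster volume reset-brick myvol ovirt01.localdomain.local:/gluster/brick1 \
    gluster01.localdomain.local:/gluster/brick1 commit force
```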
2017 Jul 07 · 0 · op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
You'd need to allow some more time to dig into the logs. I'll try to get back on this by Monday. On Fri, Jul 7, 2017 at 2:23 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com> wrote: > On Thu, Jul 6, 2017 at 3:22 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com > > wrote: > >> On Thu, Jul 6, 2017 at 2:16 PM, Atin Mukherjee <amukherj at redhat.com>
2017 Jul 10 · 0 · op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
On Fri, Jul 7, 2017 at 2:23 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com> wrote: > On Thu, Jul 6, 2017 at 3:22 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com > > wrote: > >> On Thu, Jul 6, 2017 at 2:16 PM, Atin Mukherjee <amukherj at redhat.com> >> wrote: >> >>> >>> >>> On Thu, Jul 6, 2017 at 5:26 PM, Gianluca Cecchi
2017 Jul 05 · 1 · op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
On Wed, Jul 5, 2017 at 6:39 PM, Atin Mukherjee <amukherj at redhat.com> wrote: > OK, so the log just hints to the following: > > [2017-07-05 15:04:07.178204] E [MSGID: 106123] [glusterd-mgmt.c:1532:glusterd_mgmt_v3_commit] > 0-management: Commit failed for operation Reset Brick on local node > [2017-07-05 15:04:07.178214] E [MSGID: 106123] >
2017 Jun 20 · 2 · [ovirt-users] Very poor GlusterFS performance
[Adding gluster-users] On Mon, Jun 19, 2017 at 8:16 PM, Chris Boot <bootc at bootc.net> wrote: > Hi folks, > > I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10 > configuration. My VMs run off a replica 3 arbiter 1 volume comprised of > 6 bricks, which themselves live on two SSDs in each of the servers (one > brick per SSD). The bricks are
2017 Jun 20 · 0 · [ovirt-users] Very poor GlusterFS performance
Have you tried with: performance.strict-o-direct : off performance.strict-write-ordering : off They can be changed dynamically. On 20 June 2017 at 17:21, Sahina Bose <sabose at redhat.com> wrote: > [Adding gluster-users] > > On Mon, Jun 19, 2017 at 8:16 PM, Chris Boot <bootc at bootc.net> wrote: > >> Hi folks, >> >> I have 3x servers in a
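The two options suggested in this reply can be changed on a live volume with `gluster volume set`; `myvol` is a placeholder for the actual volume name:

```shell
# Relax strict O_DIRECT handling and write ordering (both take effect
# dynamically, no remount needed).
gluster volume set myvol performance.strict-o-direct off
gluster volume set myvol performance.strict-write-ordering off

# Confirm the values actually applied.
gluster volume get myvol performance.strict-o-direct
gluster volume get myvol performance.strict-write-ordering
```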
2017 Jun 20 · 5 · [ovirt-users] Very poor GlusterFS performance
Couple of things: 1. Like Darrell suggested, you should enable stat-prefetch and increase client and server event threads to 4. # gluster volume set <VOL> performance.stat-prefetch on # gluster volume set <VOL> client.event-threads 4 # gluster volume set <VOL> server.event-threads 4 2. Also glusterfs-3.10.1 and above has a shard performance bug fix -
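The tuning commands quoted in this preview, written out as a block (replace `<VOL>` with the volume name, as in the original message):

```shell
# 1. Enable stat-prefetch and raise client/server event threads to 4.
gluster volume set <VOL> performance.stat-prefetch on
gluster volume set <VOL> client.event-threads 4
gluster volume set <VOL> server.event-threads 4
```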
2017 Jun 20 · 0 · [ovirt-users] Very poor GlusterFS performance
Dear Krutika, Sorry for asking so naively but can you tell me on what factor do you base that the client and server event-threads parameters for a volume should be set to 4? Is this metric for example based on the number of cores a GlusterFS server has? I am asking because I saw my GlusterFS volumes are set to 2 and would like to set these parameters to something meaningful for performance
2017 Jun 12 · 0 · Gluster deamon fails to start
On Mon, Jun 12, 2017 at 7:30 PM, Langley, Robert <Robert.Langley at ventura.org > wrote: > As far as the peer status (and I now remember seeing this earlier) the > issue appears to be that the host name for gsaov07 is attempting to resolve > over the wrong network for gluster "ent...." and not "stor.local". > So, it may be as simple as removing gsaov07 as a
2017 Jun 12 · 2 · Gluster deamon fails to start
As far as the peer status (and I now remember seeing this earlier) the issue appears to be that the host name for gsaov07 is attempting to resolve over the wrong network for gluster "ent...." and not "stor.local". So, it may be as simple as removing gsaov07 as a peer, then probing over the correct network. I'll follow up with the Engine log. Sent using OWA for iPhone
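The fix sketched in this message (remove the peer, then re-probe it over the correct storage network) would look roughly like the following; hostnames are illustrative, following the thread's `stor.local` naming, and `peer detach` only succeeds when the peer hosts no bricks:

```shell
# Drop the peer that resolves over the wrong network...
gluster peer detach gsaov07

# ...then re-probe it by its storage-network name and verify.
gluster peer probe gsaov07.stor.local
gluster peer status
```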
2018 May 30 · 1 · [ovirt-users] Re: Gluster problems, cluster performance issues
Adding Ravi to look into the heal issue. As for the fsync hang and subsequent IO errors, it seems a lot like https://bugzilla.redhat.com/show_bug.cgi?id=1497156 and Paolo Bonzini from qemu had pointed out that this would be fixed by the following commit: commit e72c9a2a67a6400c8ef3d01d4c461dbbbfa0e1f0 Author: Paolo Bonzini <pbonzini at redhat.com> Date: Wed Jun 21 16:35:46 2017
2017 Jun 12 · 3 · Gluster deamon fails to start
On Mon, Jun 12, 2017 at 6:41 PM, Atin Mukherjee <amukherj at redhat.com> wrote: > > On Mon, 12 Jun 2017 at 17:40, Langley, Robert <Robert.Langley at ventura.org> > wrote: > >> Thank you for your response. There has been no change of IP addresses. >> And I have tried restarting the glusterd service multiple times. >> I am using fully qualified names with a
2018 Mar 22 · 2 · [ovirt-users] GlusterFS performance with only one drive per host?
On Mon, Mar 19, 2018 at 5:57 PM, Jayme <jaymef at gmail.com> wrote: > I'm spec'ing a new oVirt build using three Dell R720's w/ 256GB. I'm > considering storage options. I don't have a requirement for high amounts > of storage, I have a little over 1TB to store but want some overhead so I'm > thinking 2TB of usable space would be sufficient. > >
2016 Nov 21 · 1 · blockcommit and gluster network disk path
Hi, I'm running into problems with blockcommit and gluster network disks - wanted to check how to pass path for network disks. How's the protocol and host parameters specified? For a backing volume chain as below, executing virsh blockcommit fioo5 vmstore/912d9062-3881-479b-a6e5-7b074a252cb6/images/27b0cbcb-4dfd-4eeb-8ab0-8fda54a6d8a4/027a3b37-77d4-4fa9-8173-b1fedba1176c --base
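One way around spelling out the full gluster protocol/host path that this question asks about: `virsh blockcommit` also accepts the disk's target name plus a backing-chain index, so the network-disk source string never has to be written by hand. Domain `fioo5` is from the thread; the target name `vda` and the index are illustrative:

```shell
# List target names and their (gluster) source paths for the domain.
virsh domblklist fioo5

# Commit the active layer down into the first backing image, referencing
# the chain by index instead of by the gluster volume/image path.
virsh blockcommit fioo5 vda --base 'vda[1]' --verbose --wait
```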