Displaying 20 results from an estimated 5000 matches similar to: "Upgrading Gluster revision (3.8.12 to 3.8.13) caused underlying VM fs corruption"
2017 Jul 10
0
Upgrading Gluster revision (3.8.12 to 3.8.13) caused underlying VM fs corruption
I upgraded from 3.8.12 to 3.8.13 without issues.
Two replicated volumes with an online update; I upgraded the clients first, followed by the servers: "stop glusterd, pkill gluster*, update gluster*, start glusterd, monitor the healing process and logs, and after completion proceed to the other node".
Check the gluster logs for more information.
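For reference, the rolling-upgrade sequence described above would look roughly like this per server (a sketch only, assuming RPM-based packages and a hypothetical volume name "vmstore"; adapt the package manager and volume name to your setup):

  systemctl stop glusterd
  pkill glusterfs                     # remaining brick/shd/nfs processes, per the quoted procedure
  pkill glusterfsd
  yum update 'glusterfs*'             # assumption: RPM-based install; use your package manager
  systemctl start glusterd
  gluster volume heal vmstore info    # wait for healing to finish before moving to the next node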
--
Respectfully
Mahdi A. Mahdi
2017 Jul 11
3
Upgrading Gluster revision (3.8.12 to 3.8.13) caused underlying VM fs corruption
On Mon, Jul 10, 2017 at 10:33 PM, Mahdi Adnan <mahdi.adnan at outlook.com>
wrote:
> I upgraded from 3.8.12 to 3.8.13 without issues.
>
> Two replicated volumes with an online update; I upgraded the clients first,
> followed by the servers: "stop glusterd, pkill gluster*, update
> gluster*, start glusterd, monitor the healing process and logs, and after
> completion proceed to
2017 Jul 11
0
Upgrading Gluster revision (3.8.12 to 3.8.13) caused underlying VM fs corruption
Well, it was probably caused by running replica 2 and doing an online
upgrade. However, I added a brick, turned the volume into replica 3 with
an arbiter, and got a very strange issue which I will mail to this list in a
moment...
Thanks.
-ps
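For context, the conversion he describes (replica 2 to replica 3 with an arbiter) is normally done via add-brick; a minimal sketch with hypothetical volume, host and brick path names:

  gluster volume add-brick myvol replica 3 arbiter 1 arbiter-node:/bricks/myvol/arbiter
  gluster volume heal myvol info      # the new arbiter brick is then populated by self-heal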
On Tue, Jul 11, 2017 at 1:55 PM, Pranith Kumar Karampuri
<pkarampu at redhat.com> wrote:
>
>
> On Tue, Jul 11, 2017 at 5:12 PM, Diego Remolina <dijuremo at
2017 Jul 11
2
Upgrading Gluster revision (3.8.12 to 3.8.13) caused underlying VM fs corruption
On Tue, Jul 11, 2017 at 5:12 PM, Diego Remolina <dijuremo at gmail.com> wrote:
> >
> > You should first upgrade servers and then clients. New servers can
> > understand old clients, but it is not easy for old servers to understand new
> > clients in case they start doing something new.
>
> But isn't that the reason op-version exists? So that regardless
2017 Jul 11
0
Upgrading Gluster revision (3.8.12 to 3.8.13) caused underlying VM fs corruption
>
> You should first upgrade servers and then clients. New servers can
> understand old clients, but it is not easy for old servers to understand new
> clients in case they start doing something new.
But isn't that the reason op-version exists? So that regardless of
client/server mix, nobody tries to do "new" things above the current
op-version?
He is not changing major
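For reference, the op-version being discussed can be queried and raised from the CLI once all nodes run the new bits; a sketch (the number below is only illustrative, and the "get all" query may not be available on older releases):

  gluster volume get all cluster.op-version        # current cluster op-version (newer releases)
  gluster volume set all cluster.op-version 30800  # example value only; raise after all nodes are upgraded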
2017 Oct 06
2
Gluster 3.8.13 data corruption
Could you disable stat-prefetch on the volume and create another VM off
that template and see if it works?
-Krutika
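The suggestion above maps to a standard volume option; roughly (hypothetical volume name):

  gluster volume set vmstore performance.stat-prefetch off
  gluster volume get vmstore performance.stat-prefetch    # verify the setting took effect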
On Fri, Oct 6, 2017 at 8:28 AM, Lindsay Mathieson <
lindsay.mathieson at gmail.com> wrote:
> Any chance of a backup you could do bit compare with?
> *From: *Mahdi Adnan <mahdi.adnan at outlook.com>
2017 Oct 09
1
Gluster 3.8.13 data corruption
OK.
Is this problem unique to templates for a particular guest OS type? Or is
this something you see for all guest OSes?
Also, can you get the output of `getfattr -d -m . -e hex <path>` for the
following two "paths" from all of the bricks:
path to the file representing the VM created off this template, relative to the
brick. It will usually be $BRICKPATH/xxxx....xx/images/$UUID where $UUID
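To illustrate the requested command as it would be run directly on each brick (the path below is a placeholder, not the actual one):

  getfattr -d -m . -e hex $BRICKPATH/<path-to-image-file-on-brick>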
2017 Oct 06
0
Gluster 3.8.13 data corruption
Hi,
Thank you for your reply.
Lindsay,
Unfortunately, I do not have a backup for this template.
Krutika,
The stat-prefetch is already disabled on the volume.
--
Respectfully
Mahdi A. Mahdi
________________________________
From: Krutika Dhananjay <kdhananj at redhat.com>
Sent: Friday, October 6, 2017 7:39 AM
To: Lindsay Mathieson
Cc: Mahdi Adnan; gluster-users at gluster.org
Subject: Re:
2017 Oct 05
2
Gluster 3.8.13 data corruption
Hi,
We're running Gluster 3.8.13 replica 2 (SSDs); it's used as a storage domain for oVirt.
Today, we found an issue with one of the VM templates: after deploying a VM from this template it will not boot, it gets stuck mounting the root partition.
We've been using this template for months now and we have not had any issues with it.
Both the oVirt and Gluster logs are not showing any errors or
2017 Oct 06
0
Gluster 3.8.13 data corruption
Any chance of a backup you could do a bit-for-bit compare with?
From: Mahdi Adnan
Sent: Friday, 6 October 2017 12:26 PM
To: gluster-users at gluster.org
Subject: [Gluster-users] Gluster 3.8.13 data corruption
Hi,
We're running Gluster 3.8.13 replica 2 (SSDs); it's used as a storage domain for oVirt.
Today, we found an issue with one of the VM templates; after
2017 Jun 04
2
Rebalance + VM corruption - current status and request for feedback
Great news.
Is this planned to be published in the next release?
Il 29 mag 2017 3:27 PM, "Krutika Dhananjay" <kdhananj at redhat.com> ha
scritto:
> Thanks for that update. Very happy to hear it ran fine without any issues.
> :)
>
> Yeah so you can ignore those 'No such file or directory' errors. They
> represent a transient state where DHT in the client process
2017 Jun 05
1
Rebalance + VM corruption - current status and request for feedback
Great, thanks!
Il 5 giu 2017 6:49 AM, "Krutika Dhananjay" <kdhananj at redhat.com> ha scritto:
> The fixes are already available in 3.10.2, 3.8.12 and 3.11.0
>
> -Krutika
>
> On Sun, Jun 4, 2017 at 5:30 PM, Gandalf Corvotempesta <
> gandalf.corvotempesta at gmail.com> wrote:
>
>> Great news.
>> Is this planned to be published in next
2017 Jun 05
0
Rebalance + VM corruption - current status and request for feedback
The fixes are already available in 3.10.2, 3.8.12 and 3.11.0
-Krutika
On Sun, Jun 4, 2017 at 5:30 PM, Gandalf Corvotempesta <
gandalf.corvotempesta at gmail.com> wrote:
> Great news.
> Is this planned to be published in the next release?
>
> Il 29 mag 2017 3:27 PM, "Krutika Dhananjay" <kdhananj at redhat.com> ha
> scritto:
>
>> Thanks for that update.
2017 Jun 06
2
Rebalance + VM corruption - current status and request for feedback
Hi Mahdi,
Did you get a chance to verify this fix again?
If this fix works for you, is it OK if we move this bug to CLOSED state and
revert the rebalance-cli warning patch?
-Krutika
On Mon, May 29, 2017 at 6:51 PM, Mahdi Adnan <mahdi.adnan at outlook.com>
wrote:
> Hello,
>
>
> Yes, I forgot to upgrade the client as well.
>
> I did the upgrade and created a new volume,
2017 May 17
3
Rebalance + VM corruption - current status and request for feedback
Hi,
In the past couple of weeks, we've sent the following fixes concerning VM
corruption upon doing rebalance -
https://review.gluster.org/#/q/status:merged+project:glusterfs+branch:master+topic:bug-1440051
These fixes are very much part of the latest 3.10.2 release.
Satheesaran within Red Hat also verified that they work and he's not seeing
corruption issues anymore.
I'd like to
2018 Jan 17
1
Gluster endless heal
Hi,
I have an issue with Gluster 3.8.14.
The cluster is 4 nodes with replica count 2. One of the nodes went offline for around 15 minutes; when it came back online, self-heal triggered and it just did not stop afterward. It has been running for 3 days now, maxing out the bricks' utilization without actually healing anything.
The bricks are all SSDs, and the logs of the source node are spamming with
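In a situation like this, the usual way to see what self-heal thinks is still pending is something like the following (volume name is a placeholder):

  gluster volume heal myvol info                    # entries pending heal, per brick
  gluster volume heal myvol statistics heal-count   # quick count of pending entries
  gluster volume heal myvol info split-brain        # rule out split-brain entries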
2017 Sep 08
4
GlusterFS as virtual machine storage
Gandalf, SIGKILL (killall -9 glusterfsd) did not stop I/O after a few
minutes. SIGTERM, on the other hand, causes a crash, but this time it is
not a read-only remount, but around 10 IOPS tops and 2 IOPS on average.
-ps
On Fri, Sep 8, 2017 at 1:56 PM, Diego Remolina <dijuremo at gmail.com> wrote:
> I currently only have a Windows 2012 R2 server VM in testing on top of
> the gluster storage,
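A rough sketch of the kind of test being described here, assuming a FUSE-mounted client generating I/O and a brick host you are willing to disturb (paths are placeholders):

  # on the client: keep continuous I/O running against the volume
  dd if=/dev/zero of=/mnt/gluster/testfile bs=1M count=4096 oflag=direct &
  # on one brick host: hard kill of the brick processes (SIGKILL) ...
  killall -9 glusterfsd
  # ... versus a graceful termination (SIGTERM) for comparison
  killall glusterfsd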
2017 Sep 08
3
GlusterFS as virtual machine storage
2017-09-08 13:44 GMT+02:00 Pavel Szalbot <pavel.szalbot at gmail.com>:
> I did not test SIGKILL because I suppose if graceful exit is bad, SIGKILL
> will be as well. This assumption might be wrong. So I will test it. It would
> be interesting to see the client work in case of a crash (SIGKILL) and not in
> case of a graceful exit of glusterfsd.
Exactly. If this happens, probably there
2017 Sep 07
3
GlusterFS as virtual machine storage
*shrug* I don't use an arbiter for VM workloads, just straight replica 3.
There are some gotchas with using an arbiter for VM workloads. If
quorum-type is auto and a brick that is not the arbiter drops out, then if
the up brick is dirty as far as the arbiter is concerned (i.e. the only good
copy is on the down brick), you will get ENOTCONN and your VMs will halt on
I/O.
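The behavior described corresponds to the client quorum volume option; for reference (hypothetical volume name, defaults may differ per version):

  gluster volume get myvol cluster.quorum-type       # check the current setting
  gluster volume set myvol cluster.quorum-type auto  # the 'auto' mode discussed above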
On 6 September 2017 at 16:06,
2017 Sep 07
0
GlusterFS as virtual machine storage
Hi Neil, the docs mention two live nodes of a replica 3 volume blaming each other and
refusing to do I/O.
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/#1-replica-3-volume
On Sep 7, 2017 17:52, "Alastair Neil" <ajneil.tech at gmail.com> wrote:
> *shrug* I don't use arbiter for vm work loads just straight replica 3.