similar to: Gluster 3.8.13 data corruption

Displaying 20 results from an estimated 2000 matches similar to: "Gluster 3.8.13 data corruption"

2017 Oct 06 (0 replies)
Gluster 3.8.13 data corruption
Any chance of a backup you could do a bit-compare with? Sent from my Windows 10 phone From: Mahdi Adnan Sent: Friday, 6 October 2017 12:26 PM To: gluster-users at gluster.org Subject: [Gluster-users] Gluster 3.8.13 data corruption Hi, We're running Gluster 3.8.13 replica 2 (SSDs); it's used as a storage domain for oVirt. Today, we found an issue with one of the VM templates, after
2017 Oct 06 (2 replies)
Gluster 3.8.13 data corruption
Could you disable stat-prefetch on the volume and create another VM off that template and see if it works? -Krutika On Fri, Oct 6, 2017 at 8:28 AM, Lindsay Mathieson <lindsay.mathieson at gmail.com> wrote: > Any chance of a backup you could do a bit-compare with? > > > > Sent from my Windows 10 phone > > > > *From: *Mahdi Adnan <mahdi.adnan at outlook.com>
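For reference, stat-prefetch is an ordinary volume option and can be toggled from the CLI; a minimal sketch, assuming a hypothetical volume named "vmstore":

    # turn off the stat-prefetch translator for the volume
    gluster volume set vmstore performance.stat-prefetch off

    # verify the option's current value
    gluster volume get vmstore performance.stat-prefetch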
2017 Oct 06 (0 replies)
Gluster 3.8.13 data corruption
Hi, Thank you for your reply. Lindsay, unfortunately I do not have a backup for this template. Krutika, stat-prefetch is already disabled on the volume. -- Respectfully Mahdi A. Mahdi ________________________________ From: Krutika Dhananjay <kdhananj at redhat.com> Sent: Friday, October 6, 2017 7:39 AM To: Lindsay Mathieson Cc: Mahdi Adnan; gluster-users at gluster.org Subject: Re:
2017 Oct 09 (1 reply)
Gluster 3.8.13 data corruption
OK. Is this problem unique to templates for a particular guest OS type, or is this something you see for all guest OSes? Also, can you get the output of `getfattr -d -m . -e hex <path>` for the following two "paths" from all of the bricks: the path to the file representing the VM created off this template, relative to the brick. It will usually be $BRICKPATH/xxxx....xx/images/$UUID where $UUID
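A minimal sketch of that check, run directly on each brick's backend filesystem (the brick path and image file name below are hypothetical placeholders):

    # dump all extended attributes of the image file, hex-encoded
    getfattr -d -m . -e hex /bricks/brick1/vmstore/<...>/images/$UUID/<image-file>

Comparing the trusted.gfid and trusted.afr.* attributes across the bricks typically shows whether the replicas disagree about the file.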
2017 Jun 06 (2 replies)
Rebalance + VM corruption - current status and request for feedback
Hi Mahdi, Did you get a chance to verify this fix again? If this fix works for you, is it OK if we move this bug to CLOSED state and revert the rebalance-cli warning patch? -Krutika On Mon, May 29, 2017 at 6:51 PM, Mahdi Adnan <mahdi.adnan at outlook.com> wrote: > Hello, > > > Yes, i forgot to upgrade the client as well. > > I did the upgrade and created a new volume,
2017 Jun 04 (2 replies)
Rebalance + VM corruption - current status and request for feedback
Great news. Is this planned to be published in the next release? On 29 May 2017 at 3:27 PM, "Krutika Dhananjay" <kdhananj at redhat.com> wrote: > Thanks for that update. Very happy to hear it ran fine without any issues. > :) > > Yeah so you can ignore those 'No such file or directory' errors. They > represent a transient state where DHT in the client process
2017 Jun 05 (1 reply)
Rebalance + VM corruption - current status and request for feedback
Great, thanks! On 5 Jun 2017 at 6:49 AM, "Krutika Dhananjay" <kdhananj at redhat.com> wrote: > The fixes are already available in 3.10.2, 3.8.12 and 3.11.0 > > -Krutika > > On Sun, Jun 4, 2017 at 5:30 PM, Gandalf Corvotempesta < > gandalf.corvotempesta at gmail.com> wrote: > >> Great news. >> Is this planned to be published in the next
2017 Jun 05 (0 replies)
Rebalance + VM corruption - current status and request for feedback
The fixes are already available in 3.10.2, 3.8.12 and 3.11.0 -Krutika On Sun, Jun 4, 2017 at 5:30 PM, Gandalf Corvotempesta < gandalf.corvotempesta at gmail.com> wrote: > Great news. > Is this planned to be published in the next release? > > On 29 May 2017 at 3:27 PM, "Krutika Dhananjay" <kdhananj at redhat.com> > wrote: > >> Thanks for that update.
2017 May 17 (3 replies)
Rebalance + VM corruption - current status and request for feedback
Hi, In the past couple of weeks, we've sent the following fixes concerning VM corruption upon doing rebalance - https://review.gluster.org/#/q/status:merged+project:glusterfs+branch:master+topic:bug-1440051 These fixes are very much part of the latest 3.10.2 release. Satheesaran within Red Hat also verified that they work and he's not seeing corruption issues anymore. I'd like to
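For anyone wanting to re-verify, the rebalance under test is driven from the CLI; a minimal sketch, assuming a hypothetical volume "vmstore" that has just had bricks added:

    # start the rebalance, then poll until the status reads 'completed'
    gluster volume rebalance vmstore start
    gluster volume rebalance vmstore status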
2017 Jul 11 (3 replies)
Upgrading Gluster revision (3.8.12 to 3.8.13) caused underlying VM fs corruption
On Mon, Jul 10, 2017 at 10:33 PM, Mahdi Adnan <mahdi.adnan at outlook.com> wrote: > I upgraded from 3.8.12 to 3.8.13 without issues. > > Two replicated volumes with an online update; upgraded the clients first, > followed by the servers: "stop glusterd, pkill gluster*, update > gluster*, start glusterd, monitor healing process and logs, after > completion proceed to
2017 Jul 10 (0 replies)
Upgrading Gluster revision (3.8.12 to 3.8.13) caused underlying VM fs corruption
I upgraded from 3.8.12 to 3.8.13 without issues. Two replicated volumes with an online update; upgraded the clients first, followed by the servers: "stop glusterd, pkill gluster*, update gluster*, start glusterd, monitor healing process and logs, after completion proceed to the other node". Check the gluster logs for more information. -- Respectfully Mahdi A. Mahdi
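Spelled out as commands, that per-node sequence looks roughly like the sketch below (the yum package manager and the volume name "vmstore" are assumptions; adapt to your distribution):

    # on one server node at a time:
    systemctl stop glusterd
    pkill gluster                  # stop remaining brick and auxiliary gluster processes
    yum update 'gluster*'
    systemctl start glusterd

    # wait for healing to finish before moving to the next node
    gluster volume heal vmstore info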
2017 Jul 10 (2 replies)
Upgrading Gluster revision (3.8.12 to 3.8.13) caused underlying VM fs corruption
Hi, is there a recommended way to upgrade a Gluster cluster to a newer revision? I experienced filesystem corruption on several, but not all, VMs (KVM, FUSE) stored on Gluster during the upgrade. After upgrading one of the two nodes, I checked peer status and volume heal info and everything seemed fine, so I upgraded the second node, and then two VMs remounted root as read-only and dmesg
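The two checks mentioned map onto these CLI calls; a minimal sketch (the volume name "vmstore" is hypothetical):

    # every peer should show 'Peer in Cluster (Connected)'
    gluster peer status

    # the pending-entry list should be empty before the second node is upgraded
    gluster volume heal vmstore info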
2017 Jun 06 (0 replies)
Rebalance + VM corruption - current status and request for feedback
Hi, Sorry I didn't confirm the results sooner. Yes, it's working fine without issues for me. It would help if anyone else can confirm too, so we can be sure it's 100% resolved. -- Respectfully Mahdi A. Mahdi ________________________________ From: Krutika Dhananjay <kdhananj at redhat.com> Sent: Tuesday, June 6, 2017 9:17:40 AM To: Mahdi Adnan Cc: gluster-user; Gandalf Corvotempesta; Lindsay
2017 Jun 06 (0 replies)
Rebalance + VM corruption - current status and request for feedback
Any additional tests would be great, as a similar bug was detected and fixed some months ago, and after that this bug arose. It is still unclear to me why two very similar bugs were discovered at two different times for the same operation. How is this possible? If you fixed the first bug, why wasn't the second one triggered in your test environment? On 6 Jun 2017 at 10:35 AM, "Mahdi
2018 Jan 17 (1 reply)
Gluster endless heal
Hi, I have an issue with Gluster 3.8.14. The cluster is 4 nodes with replica count 2. One of the nodes went offline for around 15 minutes; when it came back online, self-heal triggered and it just did not stop afterward. It has been running for 3 days now, maxing out the bricks' utilization without actually healing anything. The bricks are all SSDs, and the logs of the source node are spamming with
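To tell whether such a heal is progressing or looping, the pending-heal counters can be polled over time; a minimal sketch (the volume name "vmstore" is hypothetical):

    # list entries still pending heal on each brick
    gluster volume heal vmstore info

    # per-brick counts; if these never shrink, the heal is not progressing
    gluster volume heal vmstore statistics heal-count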
2018 Jan 12 (3 replies)
Integration of GPU with glusterfs
On 12/01/2018 3:14 AM, Darrell Budic wrote: > It would also add physical resource requirements to future client > deploys, requiring more than 1U for the server (most likely), and I'm > not likely to want to do this if I'm trying to optimize for client > density, especially with the cost of GPUs today. Nvidia has banned their GPUs from being used in data centers now too, I imagine
2018 Jan 15 (2 replies)
[Gluster-devel] Integration of GPU with glusterfs
It is disappointing to see the limitation Nvidia is putting on low-cost GPU usage in data centers. https://www.theregister.co.uk/2018/01/03/nvidia_server_gpus/ We thought of providing an option in glusterfs by which we can control whether we want to use the GPU or not. So the concern of gluster eating up GPUs that could be used by others can be addressed. --- Ashish ----- Original
2017 Jun 11 (5 replies)
How to remove dead peer, sorry urgent again :(
On 11/06/2017 10:46 AM, WK wrote: > I thought you had removed vna as defective and then ADDED in vnh as > the replacement? > > Why is vna still there? Because I *can't* remove it. It died and was unable to be brought back up. The gluster peer detach command only works with live servers: a severe problem, IMHO. -- Lindsay Mathieson
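For a peer that is permanently gone, the detach normally has to be forced; a minimal sketch (the hostname is hypothetical). Note that even with force, glusterd will refuse to detach a peer that still hosts bricks in a volume, so those bricks generally have to be removed or replaced first:

    # 'force' skips the requirement that the peer be reachable
    gluster peer detach vna.example.com force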
2018 Jan 12 (0 replies)
Integration of GPU with glusterfs
On January 11, 2018 10:58:28 PM EST, Lindsay Mathieson <lindsay.mathieson at gmail.com> wrote: >On 12/01/2018 3:14 AM, Darrell Budic wrote: >> It would also add physical resource requirements to future client >> deploys, requiring more than 1U for the server (most likely), and I'm > >> not likely to want to do this if I'm trying to optimize for client >>
2017 Jun 18 (3 replies)
Debian 3.8.12 packages have been updated?
I installed 3.8.12 a while back and the packages seem to have been updated since (2017-06-13), prompting me for updates. I haven't seen any release announcements or notes on this though. -- Lindsay Mathieson