2017 Oct 04
2
data corruption - any update?
On Wed, Oct 4, 2017 at 10:51 AM, Nithya Balachandran <nbalacha at redhat.com>
wrote:
>
>
> On 3 October 2017 at 13:27, Gandalf Corvotempesta <
> gandalf.corvotempesta at gmail.com> wrote:
>
>> Any update on the multiple bugs regarding data corruption with
>> sharding enabled?
>>
>> Is 3.12.1 ready to be used in production?
>>
>
>
2017 Oct 04
0
data corruption - any update?
On 3 October 2017 at 13:27, Gandalf Corvotempesta <
gandalf.corvotempesta at gmail.com> wrote:
> Any update on the multiple bugs regarding data corruption with
> sharding enabled?
>
> Is 3.12.1 ready to be used in production?
>
Most issues have been fixed, but there appears to be one more race for
which a patch is being worked on.
@Krutika, is that correct?
Thanks,
2017 Oct 04
0
data corruption - any update?
Just so I know:
Is it correct to assume that this corruption issue occurs ONLY if
you are doing rebalancing with sharding enabled?
So if I am not doing rebalancing, I should be fine?
-bill
On 10/3/2017 10:30 PM, Krutika Dhananjay wrote:
>
>
> On Wed, Oct 4, 2017 at 10:51 AM, Nithya Balachandran
> <nbalacha at redhat.com> wrote:
2017 Oct 05
2
data corruption - any update?
On 4 October 2017 at 23:34, WK <wkmail at bneit.com> wrote:
> Just so I know:
>
> Is it correct to assume that this corruption issue occurs ONLY if you
> are doing rebalancing with sharding enabled?
>
> So if I am not doing rebalancing, I should be fine?
>
That is correct.
> -bill
>
>
>
> On 10/3/2017 10:30 PM, Krutika Dhananjay wrote:
>
>
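To check your own exposure, you can confirm whether sharding is enabled on a
volume from the CLI. A minimal sketch, assuming a volume named "myvol"
(substitute your own volume name):

    # Show the current value of the shard option for this volume
    gluster volume get myvol features.shard

    # Or grep it out of the volume info output
    gluster volume info myvol | grep shard

If the option reports "off", the rebalance-related corruption discussed here
should not apply to that volume.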
2017 Jun 04
2
Rebalance + VM corruption - current status and request for feedback
Great news.
Is this planned to be published in next release?
On 29 May 2017 at 3:27 PM, "Krutika Dhananjay" <kdhananj at redhat.com>
wrote:
> Thanks for that update. Very happy to hear it ran fine without any issues.
> :)
>
> Yeah so you can ignore those 'No such file or directory' errors. They
> represent a transient state where DHT in the client process
2017 Jun 05
1
Rebalance + VM corruption - current status and request for feedback
Great, thanks!
On 5 Jun 2017 at 6:49 AM, "Krutika Dhananjay" <kdhananj at redhat.com> wrote:
> The fixes are already available in 3.10.2, 3.8.12 and 3.11.0
>
> -Krutika
>
> On Sun, Jun 4, 2017 at 5:30 PM, Gandalf Corvotempesta <
> gandalf.corvotempesta at gmail.com> wrote:
>
>> Great news.
>> Is this planned to be published in next
2017 Jun 30
3
Very slow performance on Sharded GlusterFS
I already tried 512MB but retried just now and the results are the same, both without tuning:
Stripe 2 replica 2: dd performs ~250 MB/s but shard gives 77 MB/s.
I attached two logs (shard and stripe logs).
Note: I also noticed that you said "order". Do you mean that when we create the volume via volume set we have to specify an order for the bricks? I thought gluster handles that (and does the math) itself.
Gencer
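For context, a sequential-write comparison like the one described is typically
run with something along these lines; the mount point, file size, and flags
below are assumptions for illustration, not Gencer's exact invocation:

    # Sequential write against a FUSE-mounted gluster volume
    # (mount point and size are hypothetical)
    dd if=/dev/zero of=/mnt/glustervol/ddtest.img bs=1M count=512 \
       oflag=direct conv=fdatasync

Using oflag=direct (or at least conv=fdatasync) matters here: without it, dd
may largely measure the client's page cache rather than the volume.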
2017 Jun 05
0
Rebalance + VM corruption - current status and request for feedback
The fixes are already available in 3.10.2, 3.8.12 and 3.11.0
-Krutika
On Sun, Jun 4, 2017 at 5:30 PM, Gandalf Corvotempesta <
gandalf.corvotempesta at gmail.com> wrote:
> Great news.
> Is this planned to be published in next release?
>
> On 29 May 2017 at 3:27 PM, "Krutika Dhananjay" <kdhananj at redhat.com>
> wrote:
>
>> Thanks for that update.
2017 Jun 06
2
Rebalance + VM corruption - current status and request for feedback
Hi Mahdi,
Did you get a chance to verify this fix again?
If this fix works for you, is it OK if we move this bug to CLOSED state and
revert the rebalance-cli warning patch?
-Krutika
On Mon, May 29, 2017 at 6:51 PM, Mahdi Adnan <mahdi.adnan at outlook.com>
wrote:
> Hello,
>
>
> Yes, I forgot to upgrade the client as well.
>
> I did the upgrade and created a new volume,
2017 May 17
3
Rebalance + VM corruption - current status and request for feedback
Hi,
In the past couple of weeks, we've sent the following fixes concerning VM
corruption upon doing rebalance -
https://review.gluster.org/#/q/status:merged+project:glusterfs+branch:master+topic:bug-1440051
These fixes are very much part of the latest 3.10.2 release.
Satheesaran at Red Hat also verified that they work, and he's not seeing
corruption issues anymore.
I'd like to
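Before rebalancing, it is worth confirming that every server and client
actually runs a release containing these fixes, and watching the rebalance as
it proceeds. A rough sketch, with a hypothetical volume name:

    # Check the installed gluster release on each node and client
    gluster --version

    # After starting a rebalance, follow its progress
    gluster volume rebalance myvol status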
2017 Oct 11
0
data corruption - any update?
Just to clarify, as I'm planning to put gluster in production (after
fixing some issues, but for this I need community help):
corruption happens only in this case:
- volume with shard enabled
AND
- rebalance operation
In any other case, corruption should not happen (or at least is not
known to happen).
So, what if I have to replace a failed brick/disk? Will this trigger
a rebalance and
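For what it's worth, replacing a failed brick is normally handled by
replace-brick followed by self-heal, not by rebalance. A minimal sketch with
hypothetical host and brick paths:

    # Swap the dead brick for a new, empty one; self-heal (not
    # rebalance) copies the data onto the replacement
    gluster volume replace-brick myvol \
        server1:/bricks/old server1:/bricks/new commit force

    # Kick off and monitor the resulting heal
    gluster volume heal myvol full
    gluster volume heal myvol info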
2018 Apr 22
4
Reconstructing files from shards
On Sun 22 Apr 2018 at 10:46, Alessandro Briosi <ab1 at metalit.com> wrote:
> Imho the easiest path would be to turn off sharding on the volume and
> simply do a copy of the files (to a different directory, or rename and
> then copy, for instance).
>
> This should simply store the files without sharding.
>
If you turn off sharding on a sharded volume with data in it, all sharded
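For clarity, the switch under discussion is the one below; on a volume that
already holds sharded data, turning it off is exactly what this warning is
about, since existing files would then be read without their shards and appear
truncated. The volume name is hypothetical:

    # DO NOT run this on a volume that already contains sharded files
    gluster volume set myvol features.shard off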
2018 May 04
2
shard corruption bug
Hi to all
is the "famous" corruption bug when sharding enabled fixed or still a work
in progress ?
2018 Apr 23
1
Reconstructing files from shards
2018-04-23 9:34 GMT+02:00 Alessandro Briosi <ab1 at metalit.com>:
> Is that really so?
Yes, I've opened a bug asking developers to block removal of sharding
when a volume has data on it, or to write a huge warning message
saying that data loss will happen.
> I thought that sharding was an extended attribute on the files created when
> sharding is enabled.
>
> Turning off
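On the brick side you can see how sharding is actually recorded: the base file
carries shard-related extended attributes, and the remaining pieces live under
a hidden .shard directory named by GFID. A rough sketch, with hypothetical
brick paths, run as root on a brick server:

    # Inspect the xattrs on the base file; expect entries such as
    # trusted.glusterfs.shard.block-size and trusted.glusterfs.shard.file-size
    getfattr -d -m . -e hex /bricks/brick1/images/disk.img

    # The shards themselves are stored as <gfid>.<n> files here
    ls /bricks/brick1/.shard | head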
2017 Sep 08
4
GlusterFS as virtual machine storage
Gandalf, SIGKILL (killall -9 glusterfsd) did not stop I/O after a few
minutes. SIGTERM on the other hand causes a crash, but this time it is
not a read-only remount, but around 10 IOPS tops and 2 IOPS on average.
-ps
On Fri, Sep 8, 2017 at 1:56 PM, Diego Remolina <dijuremo at gmail.com> wrote:
> I currently only have a Windows 2012 R2 server VM in testing on top of
> the gluster storage,
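For anyone reproducing this, the two signals being compared map to something
like the following; the dd target inside the guest is an assumption for
illustration:

    # Graceful termination of the brick processes on one node (SIGTERM)
    killall glusterfsd

    # Forceful kill (SIGKILL), skipping any cleanup in the brick process
    killall -9 glusterfsd

    # Meanwhile, inside a guest VM backed by the volume, watch whether
    # I/O stalls or the filesystem remounts read-only
    dd if=/dev/zero of=/var/tmp/iotest bs=4k count=100000 oflag=direct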
2018 May 04
2
shard corruption bug
On Fri, 4 May 2018 at 14:06, Jim Kinney <jim.kinney at gmail.com>
wrote:
> It stopped being an outstanding issue at 3.12.7. I think it's now fixed.
So, is it not possible to extend and rebalance a working cluster with sharded
data?
Can someone confirm this? Maybe the ones that hit the bug in the past
2012 Apr 27
1
geo-replication and rsync
Hi,
Can someone tell me the difference between geo-replication and plain rsync?
At which frequency are files replicated with geo-replication?
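In short: geo-replication is continuous and asynchronous, driven by the
volume's changelog, rather than running on a schedule like a cron'd rsync (it
uses rsync internally as one of its transfer mechanisms). In recent releases
the lifecycle looks roughly like this; the master volume, slave host, and
slave volume names are placeholders:

    # One-time setup, run from a node in the master cluster
    gluster volume geo-replication mastervol slavehost::slavevol \
        create push-pem

    # Start continuous async replication and check its state
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status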
2018 May 30
2
shard corruption bug
What shard corruption bug? Bugzilla URL? I'm running into some odd behavior
in my lab with shards and RHEV/KVM data, trying to figure out if it's
related.
Thanks.
On Fri, May 4, 2018 at 11:13 AM, Jim Kinney <jim.kinney at gmail.com> wrote:
> I upgraded my oVirt stack to 3.12.9, added a brick to a volume and left it
> to settle. No problems. I am now running replica 4
2017 Jun 29
4
How to shutdown a node properly ?
Doesn't the init.d/systemd script kill gluster automatically on
reboot/shutdown?
On 29 Jun 2017 at 5:16 PM, "Ravishankar N" <ravishankar at redhat.com> wrote:
> On 06/29/2017 08:31 PM, Renaud Fortier wrote:
>
> Hi,
>
> Every time I shut down a node, I lose access (from clients) to the volumes
> for 42 seconds (network.ping-timeout). Is there a special way to
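Two approaches commonly come up for this; both sketches below use a
hypothetical volume name, and note that lowering ping-timeout trades faster
failover for more spurious disconnects under load:

    # Option 1: shorten how long clients wait before declaring a
    # brick dead (the default is 42 seconds)
    gluster volume set myvol network.ping-timeout 10

    # Option 2: before shutdown, stop the management daemon and kill
    # the brick processes so clients see the TCP connections close
    # immediately instead of waiting for the timeout
    systemctl stop glusterd
    pkill glusterfsd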
2017 Sep 08
3
GlusterFS as virtual machine storage
2017-09-08 13:44 GMT+02:00 Pavel Szalbot <pavel.szalbot at gmail.com>:
> I did not test SIGKILL because I suppose that if a graceful exit is bad,
> SIGKILL will be as well. This assumption might be wrong, so I will test it.
> It would be interesting to see the client work in case of a crash (SIGKILL)
> but not in case of a graceful exit of glusterfsd.
Exactly. If this happens, probably there