Displaying 20 results from an estimated 2000 matches similar to: "Very slow performance on Sharded GlusterFS"
2017 Jun 30
2
Very slow performance on Sharded GlusterFS
Hi,
I have 2 nodes with 20 bricks in total (10+10).
First test:
2 Nodes with Distributed - Striped - Replicated (2 x 2)
10GbE Speed between nodes
"dd" performance: 400mb/s and higher
Downloading a large file from internet and directly to the gluster:
250-300mb/s
Now same test without Stripe but with sharding. This results are same when I
set shard size 4MB or
2017 Jun 30
0
Very slow performance on Sharded GlusterFS
Could you please provide the volume-info output?
-Krutika
On Fri, Jun 30, 2017 at 4:23 PM, <gencer at gencgiyen.com> wrote:
> Hi,
>
>
>
> I have 2 nodes with 20 bricks in total (10+10).
>
>
>
> First test:
>
>
>
> 2 Nodes with Distributed - Striped - Replicated (2 x 2)
>
> 10GbE Speed between nodes
>
>
>
> "dd" performance:
2017 Jun 30
3
Very slow performance on Sharded GlusterFS
Hi Krutika,
Sure, here is volume info:
root@sr-09-loc-50-14-18:/# gluster volume info testvol
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 30426017-59d5-4091-b6bc-279a905b704a
Status: Started
Snapshot Count: 0
Number of Bricks: 10 x 2 = 20
Transport-type: tcp
Bricks:
Brick1: sr-09-loc-50-14-18:/bricks/brick1
Brick2: sr-09-loc-50-14-18:/bricks/brick2
Brick3:
2017 Jun 30
1
Very slow performance on Sharded GlusterFS
Just noticed that the way you have configured your brick order during
volume-create makes both replicas of every set reside on the same machine.
That apart, do you see any difference if you change shard-block-size to
512MB? Could you try that?
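For illustration, a minimal sketch of both suggestions (the second node's hostname and the brick paths are assumptions, not the poster's actual commands); with replica 2, each consecutive pair of bricks in the create command becomes one replica set, so alternating hosts keeps the two copies on different nodes:

# assumed sketch: interleave bricks from the two nodes so each replica pair spans both
gluster volume create testvol replica 2 \
    sr-09-loc-50-14-18:/bricks/brick1 sr-10-loc-50-14-18:/bricks/brick1 \
    sr-09-loc-50-14-18:/bricks/brick2 sr-10-loc-50-14-18:/bricks/brick2
gluster volume set testvol features.shard on
gluster volume set testvol features.shard-block-size 512MB
gluster volume start testvol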
If it doesn't help, could you share the volume-profile output for both the
tests (separately)?
Here's what you do:
1. Start profile before starting
2017 Jul 03
0
Very slow performance on Sharded GlusterFS
Hi Krutika,
Have you been able to look at my profiles? Do you have any clue, idea or suggestion?
Thanks,
-Gencer
From: Krutika Dhananjay [mailto:kdhananj at redhat.com]
Sent: Friday, June 30, 2017 3:50 PM
To: gencer at gencgiyen.com
Cc: gluster-user <gluster-users at gluster.org>
Subject: Re: [Gluster-users] Very slow performance on Sharded GlusterFS
Just noticed that the
2017 Jul 04
2
Very slow performance on Sharded GlusterFS
Hi Gencer,
I just checked the volume-profile attachments.
Things that seem really odd to me as far as the sharded volume is concerned:
1. Only the replica pair having bricks 5 and 6 on both nodes 09 and 10
seems to have witnessed all the IO. No other bricks witnessed any write
operations. This is unacceptable for a volume that has 8 other replica
sets. Why didn't the shards get distributed
2017 Jul 04
2
Very slow performance on Sharded GlusterFS
Thanks. I think reusing the same volume was the cause of lack of IO
distribution.
The latest profile output looks much more realistic and in line with what I
would expect.
Let me analyse the numbers a bit and get back.
-Krutika
On Tue, Jul 4, 2017 at 12:55 PM, <gencer at gencgiyen.com> wrote:
> Hi Krutika,
>
>
>
> Thank you so much for your reply. Let me answer all:
>
>
2017 Jul 04
0
Very slow performance on Sharded GlusterFS
Hi Krutika,
Thank you so much for your reply. Let me answer all:
1. I have no idea why it did not get distributed over all bricks.
2. Hm.. This is really weird.
And others;
No. I use only one volume. When I tested sharded and striped volumes, I manually stopped the volume, deleted it, purged the data (inside the bricks/disks) and re-created it by using this command:
sudo gluster
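An assumed sketch of such a stop/delete/re-create cycle (every name and path below is a placeholder, not the poster's original command):

sudo gluster volume stop testvol
sudo gluster volume delete testvol
# wipe leftover data and gluster metadata on each brick before reusing it
sudo rm -rf /bricks/brick1/* /bricks/brick1/.glusterfs
sudo gluster volume create testvol replica 2 \
    sr-09-loc-50-14-18:/bricks/brick1 sr-10-loc-50-14-18:/bricks/brick1 force
sudo gluster volume set testvol features.shard on
sudo gluster volume start testvol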
2017 Jul 06
0
Very slow performance on Sharded GlusterFS
What if you disabled eager lock and ran your test again on the sharded
configuration, along with the profile output?
# gluster volume set <VOL> cluster.eager-lock off
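A sketch of the full test sequence this might involve (the volume name and mount path are assumed from earlier messages, not part of this mail):

# assumed sequence: eager-lock off, profile wrapped around a dd run
gluster volume set testvol cluster.eager-lock off
gluster volume profile testvol start
dd if=/dev/zero of=/mnt/ddfile bs=1G count=1
gluster volume profile testvol info > profile-eager-lock-off.txt
gluster volume profile testvol stop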
-Krutika
On Tue, Jul 4, 2017 at 9:03 PM, Krutika Dhananjay <kdhananj at redhat.com>
wrote:
> Thanks. I think reusing the same volume was the cause of lack of IO
> distribution.
> The latest profile output
2017 Jul 06
2
Very slow performance on Sharded GlusterFS
Hi Krutika,
After that setting:
$ dd if=/dev/zero of=/mnt/ddfile bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 11.7351 s, 91.5 MB/s
$ dd if=/dev/zero of=/mnt/ddfile2 bs=2G count=1
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB, 2.0 GiB) copied, 23.7351 s, 90.5 MB/s
$ dd if=/dev/zero of=/mnt/ddfile3 bs=1G count=1
1+0 records
2017 Jul 27
0
Very slow performance on Sharded GlusterFS
The current sharding has very limited use cases, like VM storage, where only a
single client accesses a given sharded file at a time. Krutika will be
the right person to answer your questions.
Regards
Rafi KC
On 06/30/2017 04:28 PM, gencer at gencgiyen.com wrote:
>
> Hi,
>
>
>
> I have 2 nodes with 20 bricks in total (10+10).
>
>
>
> First test:
>
>
>
2017 Jul 06
0
Very slow performance on Sharded GlusterFS
Krutika, I'm sorry I forgot to add logs. I attached them now.
Thanks,
Gencer.
From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of gencer at gencgiyen.com
Sent: Thursday, July 6, 2017 10:27 AM
To: 'Krutika Dhananjay' <kdhananj at redhat.com>
Cc: 'gluster-user' <gluster-users at gluster.org>
Subject: Re:
2017 Jul 06
2
Very slow performance on Sharded GlusterFS
Hi Krutika,
I also did one more test. I re-created another volume (a single volume; the old one was destroyed and deleted), then ran 2 dd tests, one for 1GB and the other for 2GB. Both use a 32MB shard size with eager-lock off.
Samples:
sr:~# gluster volume profile testvol start
Starting volume profile on testvol has been successful
sr:~# dd if=/dev/zero of=/testvol/dtestfil0xb bs=1G count=1
1+0 records in
1+0
2017 Jul 12
1
Very slow performance on Sharded GlusterFS
Hi,
Sorry for the late response.
No, the eager-lock experiment was more to see if the implementation had any
new bugs.
It doesn't look like it does. I think having it on would be the right thing
to do. It will reduce the number of fops having to go over the network.
Coming to the performance drop, I compared the volume profile output for
stripe and 32MB shard again.
The only thing that is
2017 Jun 30
3
Very slow performance on Sharded GlusterFS
I already tried 512MB, but retried it again now and the results are the same. Both without tuning:
Stripe 2 replica 2: dd performs at ~250 MB/s but shard gives 77 MB/s.
I attached two logs (shard and stripe logs)
Note: I also noticed that you said "order". Do you mean that when we create the volume we have to specify an order for the bricks? I thought Gluster handles it (and does the math) itself.
Gencer
2017 Jul 10
0
Very slow performance on Sharded GlusterFS
Hi Krutika,
May I kindly ping you and ask whether you have any idea yet, or have figured out what the issue may be?
I am awaiting your reply with four eyes :)
Apologies for the ping :)
-Gencer.
From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of gencer at gencgiyen.com
Sent: Thursday, July 6, 2017 11:06 AM
To: 'Krutika
2018 Feb 27
1
On sharded tiered volume, only first shard of new file goes on hot tier.
Does anyone have any ideas about how to fix, or to work-around the
following issue?
Thanks!
Bug 1549714 - On sharded tiered volume, only first shard of new file
goes on hot tier.
https://bugzilla.redhat.com/show_bug.cgi?id=1549714
On sharded tiered volume, only first shard of new file goes on hot tier.
On a sharded tiered volume, only the first shard of a new file
goes on the hot tier, the rest
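One way this could be checked on disk is to compare the hidden .shard directories on a hot-tier brick and a cold-tier brick after writing a file (the brick paths below are assumptions):

ls -lh /bricks/hot-brick1/.shard/ | head
ls -lh /bricks/cold-brick1/.shard/ | head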
2018 Apr 23
1
Reconstructing files from shards
2018-04-23 9:34 GMT+02:00 Alessandro Briosi <ab1 at metalit.com>:
> Is that really so?
Yes, I've opened a bug asking the developers to block removal of sharding
when a volume has data on it, or to write a huge warning message
saying that data loss will happen.
> I thought that sharding was an extended attribute on the files created when
> sharding is enabled.
>
> Turning off
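For context, a sketch of how sharded files sit on the bricks and how they might be stitched back together by hand (the file name, gfid value and paths are illustrative assumptions; on a distributed volume the shards first have to be gathered from whichever bricks hold them):

# Shards live under the hidden .shard directory on the bricks, named
# <gfid-of-base-file>.<shard-number>; the base file holds the first block and
# the trusted.glusterfs.shard.* xattrs (block size, file size).
getfattr -d -m trusted.glusterfs.shard -e hex /bricks/brick1/bigfile.img
getfattr -n trusted.gfid -e hex /bricks/brick1/bigfile.img
# With the gfid known (hypothetical value below), the original contents could
# be rebuilt by concatenating the base file and its shards in numeric order:
GFID=be318638-e8a0-4c6d-977d-7a937aa84806
cat /bricks/brick1/bigfile.img \
    /bricks/brick1/.shard/${GFID}.1 \
    /bricks/brick1/.shard/${GFID}.2 > /tmp/bigfile-rebuilt.img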
2018 Apr 23
0
Reconstructing files from shards
On 22/04/2018 11:39, Gandalf Corvotempesta wrote:
> On Sun, 22 Apr 2018 at 10:46, Alessandro Briosi <ab1 at metalit.com
> <mailto:ab1 at metalit.com>> wrote:
>
> Imho the easiest path would be to turn off sharding on the volume and
> simply do a copy of the files (to a different directory, or rename
> and
> then copy, for instance)
>
> This
2018 Apr 22
4
Reconstructing files from shards
On Sun, 22 Apr 2018 at 10:46, Alessandro Briosi <ab1 at metalit.com> wrote:
> Imho the easiest path would be to turn off sharding on the volume and
> simply do a copy of the files (to a different directory, or rename and
> then copy, for instance)
>
> This should simply store the files without sharding.
>
If you turn off sharding on a sharded volume with data in it, all sharded