Displaying 20 results from an estimated 80000 matches similar to: "small files performance"
2017 Oct 13
1
small files performance
Where did you read 2k IOPS?
Each disk is able to do about 75 IOPS as I'm using SATA disks; getting even
close to 2000 is impossible.
On 13 Oct 2017 9:42 AM, "Szymon Miotk" <szymon.miotk at gmail.com> wrote:
> Depends what you need.
> 2K iops for small file writes is not a bad result.
> In my case I had a system that was just poorly written and it was
>
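For context, a rough back-of-the-envelope check of that claim; the disk count and replica factor below are assumptions for illustration, not details from the thread:

  # Assume 6 SATA disks at ~75 write IOPS each in a replica-3 volume.
  # Every write lands on 3 replicas, so only 2 disks' worth of
  # independent write capacity remains.
  echo $(( (6 / 3) * 75 ))   # => 150 aggregate write IOPS, far below 2000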
2017 Oct 10
0
small files performance
I just tried setting:
performance.parallel-readdir on
features.cache-invalidation on
features.cache-invalidation-timeout 600
performance.stat-prefetch
performance.cache-invalidation
performance.md-cache-timeout 600
network.inode-lru-limit 50000
performance.cache-invalidation on
and clients could not see their files with ls when accessing via a fuse
mount. The files and directories were there,
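For anyone reproducing this, a minimal sketch of how such options are applied and rolled back with the gluster CLI; the volume name, server, and mount point are placeholders:

  # Apply one of the md-cache / readdir options (placeholder volume name).
  gluster volume set myvol performance.parallel-readdir on
  gluster volume set myvol performance.md-cache-timeout 600
  # If FUSE clients stop seeing files, revert the suspect option...
  gluster volume reset myvol performance.parallel-readdir
  # ...and remount the client:
  umount /mnt/myvol && mount -t glusterfs server1:/myvol /mnt/myvol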
2017 Oct 10
2
small files performance
2017-10-10 8:25 GMT+02:00 Karan Sandha <ksandha at redhat.com>:
> Hi Gandalf,
>
> We have multiple tunings for small files which decrease the time for
> negative lookups, plus metadata caching and parallel readdir. Bumping the
> server and client event threads will also help you increase small-file
> performance.
>
> gluster v set <vol-name> group
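A sketch of the event-thread bump mentioned above; the volume name and thread counts are placeholders, and the truncated 'group' profile name is deliberately left unfilled:

  # Bump server- and client-side event threads (values are illustrative).
  gluster volume set myvol server.event-threads 4
  gluster volume set myvol client.event-threads 4
  # Verify the applied value:
  gluster volume get myvol server.event-threads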
2017 Jul 03
0
Very slow performance on Sharded GlusterFS
Hi,
I want to give an update on this. I also tested READ speed. It seems the sharded volume has a lower read speed than the striped volume.
This machine has 24 cores and 64GB of RAM, so I really don't think it's caused by an underpowered system. A stripe is kind of a shard, but with a fixed size based on the stripe value / file size. Hence, I would expect at least the same speed, or maybe a little slower. What I get is
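A minimal sequential-read check of the kind presumably behind these numbers; the mount path and sizes are assumptions:

  # Write a test file, drop page caches, then time a sequential read.
  dd if=/dev/zero of=/mnt/glustervol/readtest bs=1M count=4096 conv=fdatasync
  echo 3 > /proc/sys/vm/drop_caches
  dd if=/mnt/glustervol/readtest of=/dev/null bs=1M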
2017 Jun 30
3
Very slow performance on Sharded GlusterFS
I already tried 512MB, but I re-tried just now and the results are the same. Both without tuning:
Stripe 2 replica 2: dd performs ~250 MB/s, but shard gives 77 MB/s.
I attached two logs (shard and stripe logs).
Note: I also noticed that you said 'order'. Do you mean that when we create the volume we have to put the bricks in a specific order? I thought gluster handled that (and did the math) itself.
Gencer
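On the 'order' question: in a replicated volume, consecutive bricks on the create command line form a replica set, so alternating servers keeps both copies of a file off the same host. A hedged sketch with placeholder host and brick paths:

  # replica 2: bricks are paired in the order given, so alternate servers.
  gluster volume create myvol replica 2 \
      srv09:/bricks/b1 srv10:/bricks/b1 \
      srv09:/bricks/b2 srv10:/bricks/b2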
2017 Jul 01
0
Very slow performance on Sharded GlusterFS
I made the changes (one brick from the 09th server and one replica from the 10th server, continuing in this order) and re-tested. Nothing changed. Still slow (exactly the same result).
-Gencer.
From: Gandalf Corvotempesta [mailto:gandalf.corvotempesta at gmail.com]
Sent: Friday, June 30, 2017 8:19 PM
To: gencer at gencgiyen.com
Cc: Krutika Dhananjay <kdhananj at redhat.com>; gluster-user
2017 Jun 29
0
How to shutdown a node properly ?
On Thu, Jun 29, 2017 at 12:41 PM, Gandalf Corvotempesta <
gandalf.corvotempesta at gmail.com> wrote:
> The init.d/systemd script doesn't kill gluster automatically on
> reboot/shutdown?
>
> Sounds less like an issue with how it's shut down and more like an issue with
how it's mounted, perhaps. My gluster fuse mounts seem to handle any one node
being shut down just fine as long as
2017 Sep 23
1
EC 1+2
I already read that.
It seems I have to use a multiple of 512, so 512*(3-2) is 512.
Seems fine.
On 23 Sep 2017 5:00 PM, "Dmitri Chebotarov" <4dimach at gmail.com> wrote:
> Hi
>
> Take a look at this link (under 'Optimal volumes'), for Erasure Coded
> volume optimal configuration
>
> http://docs.gluster.org/Administrator%20Guide/Setting%20Up%20Volumes/
>
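The arithmetic being referenced: per that doc section, the stripe unit is 512 bytes times the number of data bricks (bricks minus redundancy), so file sizes should ideally be a multiple of that. A quick check for the 1+2 layout discussed here:

  # bricks=3, redundancy=2 -> 1 data brick -> 512-byte stripe unit
  echo $(( 512 * (3 - 2) ))   # => 512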
2017 Oct 04
0
data corruption - any update?
Just so I know.
Is it correct to assume that this corruption issue is ONLY involved if
you are doing rebalancing with sharding enabled?
So if I am not doing rebalancing, I should be fine?
-bill
On 10/3/2017 10:30 PM, Krutika Dhananjay wrote:
>
>
> On Wed, Oct 4, 2017 at 10:51 AM, Nithya Balachandran
> <nbalacha at redhat.com <mailto:nbalacha at redhat.com>> wrote:
2017 Oct 04
2
data corruption - any update?
On Wed, Oct 4, 2017 at 10:51 AM, Nithya Balachandran <nbalacha at redhat.com>
wrote:
>
>
> On 3 October 2017 at 13:27, Gandalf Corvotempesta <
> gandalf.corvotempesta at gmail.com> wrote:
>
>> Any update about the multiple bugs regarding data corruption with
>> sharding enabled?
>>
>> Is 3.12.1 ready to be used in production?
>>
>
>
2012 Jan 01
0
Possible bug and performance of small files (with limited use-case workaround)
I am testing gluster for possible deployment. The test is over internal
network between virtual machines, but if we go production it would
probably be infiniband.
Just pulled the latest binaries, namely 3.2.5-2.
First: can anything be done to help performance? It's rather slow when
doing a tar extract. Here are the volumes:
Volume Name: v1
Type: Replicate
Status: Started
Number
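A minimal way to reproduce the tar-extract comparison being described; the tarball name and mount path are placeholders:

  # Time the same untar onto the gluster mount and onto local disk.
  time tar xf sample.tar.gz -C /mnt/v1/
  time tar xf sample.tar.gz -C /tmp/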
2017 Jun 30
0
How to shutdown a node properly ?
Yes, but why does killing gluster notify all clients while a graceful shutdown
doesn't?
I think this is a bug: if I'm shutting down a server, it's obvious that all
clients should stop connecting to it....
On 30 Jun 2017 3:24 AM, "Ravishankar N" <ravishankar at redhat.com> wrote:
> On 06/30/2017 12:40 AM, Renaud Fortier wrote:
>
> On my nodes, when I use the
2017 Jun 05
1
Rebalance + VM corruption - current status and request for feedback
Great, thanks!
On 5 Jun 2017 6:49 AM, "Krutika Dhananjay" <kdhananj at redhat.com> wrote:
> The fixes are already available in 3.10.2, 3.8.12 and 3.11.0
>
> -Krutika
>
> On Sun, Jun 4, 2017 at 5:30 PM, Gandalf Corvotempesta <
> gandalf.corvotempesta at gmail.com> wrote:
>
>> Great news.
>> Is this planned to be published in next
2017 Sep 23
0
EC 1+2
Hi
Take a look at this link (under 'Optimal volumes'), for Erasure Coded
volume optimal configuration
http://docs.gluster.org/Administrator%20Guide/Setting%20Up%20Volumes/
On Sat, Sep 23, 2017 at 10:01 Gandalf Corvotempesta <
gandalf.corvotempesta at gmail.com> wrote:
> Is it possible to create a dispersed volume 1+2? (Almost the same as replica
> 3, the same as RAID-6)
>
> If
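For reference, a sketch of the dispersed-volume create syntax from the linked guide; the example below is 2 data + 1 redundancy with placeholder hosts, and whether a 1 data + 2 redundancy layout is accepted depends on the version's constraint that the brick count exceed twice the redundancy (worth verifying against your release):

  # General dispersed-volume syntax; this example is 2 data + 1 redundancy.
  gluster volume create ecvol disperse 3 redundancy 1 \
      server1:/bricks/ec server2:/bricks/ec server3:/bricks/ec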
2017 Jun 29
4
How to shutdown a node properly ?
The init.d/systemd script doesn't kill gluster automatically on
reboot/shutdown?
On 29 Jun 2017 5:16 PM, "Ravishankar N" <ravishankar at redhat.com> wrote:
> On 06/29/2017 08:31 PM, Renaud Fortier wrote:
>
> Hi,
>
> Every time I shut down a node, I lose access (from clients) to the volumes
> for 42 seconds (network.ping-timeout). Is there a special way to
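The 42-second freeze matches the default network.ping-timeout. A hedged sketch of inspecting and lowering it; the volume name and value are illustrative, and very low timeouts have their own trade-offs:

  # Check the current timeout, then lower it (value is illustrative).
  gluster volume get myvol network.ping-timeout
  gluster volume set myvol network.ping-timeout 10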
2017 Jun 30
2
How to shutdown a node properly ?
On 06/30/2017 12:40 AM, Renaud Fortier wrote:
>
> On my nodes, when I use the systemd script to kill gluster (service
> glusterfs-server stop), only glusterd is killed. So I guess the
> shutdown doesn't kill everything!
>
Killing glusterd does not kill other gluster processes.
When you shut down a node, everything obviously gets killed, but the
client does not get notified
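Since stopping glusterd alone leaves the brick (glusterfsd) and auxiliary (glusterfs) daemons running, a pre-shutdown sequence along these lines is commonly used; the helper script path is shipped by some glusterfs packages, so treat it as an assumption and check your install:

  # Stop the management daemon, then the remaining gluster daemons.
  systemctl stop glusterd          # or: service glusterfs-server stop
  pkill glusterfsd                 # brick processes
  pkill -f glusterfs               # self-heal daemon, NFS server, etc.
  # Some packages ship a helper that does this in one step (path may vary):
  # /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh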
2017 Oct 04
0
data corruption - any update?
On 3 October 2017 at 13:27, Gandalf Corvotempesta <
gandalf.corvotempesta at gmail.com> wrote:
> Any update about the multiple bugs regarding data corruption with
> sharding enabled?
>
> Is 3.12.1 ready to be used in production?
>
Most issues have been fixed but there appears to be one more race for which
the patch is being worked on.
@Krutika, is that correct?
Thanks,
2017 Jun 05
0
Rebalance + VM corruption - current status and request for feedback
The fixes are already available in 3.10.2, 3.8.12 and 3.11.0
-Krutika
On Sun, Jun 4, 2017 at 5:30 PM, Gandalf Corvotempesta <
gandalf.corvotempesta at gmail.com> wrote:
> Great news.
> Is this planned to be published in next release?
>
> On 29 May 2017 3:27 PM, "Krutika Dhananjay" <kdhananj at redhat.com>
> wrote:
>
>> Thanks for that update.
2017 Jun 06
0
Rebalance + VM corruption - current status and request for feedback
Hi,
Sorry I didn't confirm the results sooner.
Yes, it's working fine without issues for me.
It would be good if anyone else can confirm, so we can be sure it's 100% resolved.
--
Respectfully
Mahdi A. Mahdi
________________________________
From: Krutika Dhananjay <kdhananj at redhat.com>
Sent: Tuesday, June 6, 2017 9:17:40 AM
To: Mahdi Adnan
Cc: gluster-user; Gandalf Corvotempesta; Lindsay
2017 Jun 06
0
Rebalance + VM corruption - current status and request for feedback
Any additional tests would be great, as a similar bug was detected and
fixed some months ago, and after that this bug arose.
It's still unclear to me why two very similar bugs were discovered at two
different times for the same operation.
How is this possible?
If you fixed the first bug, why wasn't the second one triggered in your
test environment?
On 6 Jun 2017 10:35 AM, "Mahdi