2018 Mar 14
2
Expected performance for WORM scenario
We can't stick to a single server because of the law. Redundancy is a legal
requirement for our business.
I'm sort of giving up on gluster though. It would seem a pretty stupid
content-addressable storage would suit our needs better.
On 13 March 2018 at 10:12, Ondrej Valousek <Ondrej.Valousek at s3group.com>
wrote:
> Yes, I have had this in place already (well, except for the negative
2018 Mar 13
1
Expected performance for WORM scenario
On Tue, Mar 13, 2018 at 2:42 PM, Ondrej Valousek <
Ondrej.Valousek at s3group.com> wrote:
> Yes, I have had this in place already (well, except for the negative cache,
> but enabling that did not have much effect).
>
> To me, this is no surprise; nothing can match NFS performance for small
> files, for obvious reasons:
>
Could you give profile info of the run you did with
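For context, the profile info being requested here comes from Gluster's built-in volume profiler. A minimal sketch of collecting it (VOLNAME is a placeholder, not the poster's actual volume name):

# enable per-brick FOP counters and latency stats
gluster volume profile VOLNAME start
# ...run the workload being measured...
# dump cumulative and per-interval stats (FOP counts, avg/min/max latency)
gluster volume profile VOLNAME info
# stop collecting once done
gluster volume profile VOLNAME stop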
2018 Mar 13
0
Expected performance for WORM scenario
Yes, I have had this in place already (well, except for the negative cache, but enabling that did not have much effect).
To me, this is no surprise; nothing can match NFS performance for small files, for obvious reasons:
1. A single server does not have to deal with distributed locks.
2. AFAIK, Gluster does not support read/write delegations the same way NFS does.
3. Glusterfs is
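For readers comparing against the NFS setup discussed in this thread: the asynchronous behaviour referred to is a server-side export option. A minimal sketch, assuming a hypothetical export path that is not from the thread:

# /etc/exports -- 'async' lets the server acknowledge writes before they
# reach stable storage, which is why async NFS is so fast on small files
/export/data  *(rw,async,no_subtree_check)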
2018 Mar 13
3
Expected performance for WORM scenario
On Tue, Mar 13, 2018 at 1:37 PM, Ondrej Valousek <
Ondrej.Valousek at s3group.com> wrote:
> Well, it might be close to _synchronous_ NFS, but it is still well
> behind asynchronous NFS performance.
>
> Simple script (a bit extreme, I know, but it helps to draw the picture):
>
> #!/bin/csh
> set HOSTNAME=`/bin/hostname`
> set j=1
2018 Mar 12
0
Expected performance for WORM scenario
Hi,
Can you send us the following details:
1. gluster volume info
2. What client you are using to run this?
Thanks,
Nithya
On 12 March 2018 at 18:16, Andreas Ericsson <andreas.ericsson at findity.com>
wrote:
> Heya fellas.
>
> I've been struggling quite a lot to get glusterfs to perform even
> half-decently with a write-intensive workload. Test numbers are from Gluster
>
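The details requested above map to standard commands; a sketch, again with VOLNAME as a placeholder:

# 1. volume layout, replica/distribute setup and configured options
gluster volume info VOLNAME
# 2. how the volume is mounted on the client (a FUSE mount shows as
#    type fuse.glusterfs; an NFS mount as type nfs)
mount | grep -i gluster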
2018 Mar 13
0
Expected performance for WORM scenario
Well, it might be close to _synchronous_ NFS, but it is still well behind asynchronous NFS performance.
Simple script (a bit extreme, I know, but it helps to draw the picture):
#!/bin/csh
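# Create 7000 tiny files, then delete them all: a small-file write microbenchmark.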
set HOSTNAME=`/bin/hostname`
set j=1
while ($j <= 7000)
echo ahoj > test.$HOSTNAME.$j
@ j++
end
rm -rf test.$HOSTNAME.*
Takes 9 seconds to execute on the NFS share, but 90 seconds on
2018 Mar 12
4
Expected performance for WORM scenario
Heya fellas.
I've been struggling quite a lot to get glusterfs to perform even
half-decently with a write-intensive workload. Test numbers are from Gluster
3.10.7.
We store a bunch of small files in a doubly-tiered sha1 hash fanout
directory structure. The directories themselves aren't overly full. Most of
the data we write to gluster is "write once, read probably never", so 99%
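To illustrate the layout described above (file and path names here are made up, not from the post), a doubly-tiered sha1 fanout stores each blob under two directory levels derived from its hash:

#!/bin/bash
# e.g. a blob whose sha1 is 356a192b... lands at 35/6a/356a192b...
h=$(sha1sum blob.bin | awk '{print $1}')
mkdir -p "${h:0:2}/${h:2:2}"
cp blob.bin "${h:0:2}/${h:2:2}/${h}"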
2018 Mar 13
5
Expected performance for WORM scenario
On Mon, Mar 12, 2018 at 6:23 PM, Ondrej Valousek <
Ondrej.Valousek at s3group.com> wrote:
> Hi,
>
> Gluster will never perform well for small files.
>
> I believe there is nothing you can do about this.
>
It is bad compared to a disk filesystem, but I believe it is much closer to
NFS now.
Andreas,
Looking at your workload, I suspect there are a lot of LOOKUPs
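The "negative cache" mentioned earlier in the thread caches failed (ENOENT) lookups on the client so repeated misses avoid network round-trips. A tuning sketch; VOLNAME is a placeholder and the option names are as shipped in Gluster 3.11+, so they may vary by release:

# cache negative lookup results client-side
gluster volume set VOLNAME performance.nl-cache on
# seconds a negative entry stays cached
gluster volume set VOLNAME performance.nl-cache-timeout 600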
2018 Mar 14
0
Expected performance for WORM scenario
That seems unlikely. I pre-create the directory layout and then write to
directories I know exist.
I don't quite understand how any settings at all can reduce performance to
1/5000 of what I get when writing straight to a ramdisk, though, especially
when running on a single node instead of in a cluster. Has anyone else set
this up and managed to get better write performance?
On 13 March