similar to: Poor gluster performance on large files.

Displaying 20 results from an estimated 900 matches similar to: "Poor gluster performance on large files."

2017 Oct 30
0
Poor gluster performance on large files.
Hi Brandon, Can you please turn OFF client-io-threads? We have seen performance degradation with io-threads ON for both sequential and random reads/writes. Server event threads default to 1 and client event threads to 2. Thanks & Regards On Fri, Oct 27, 2017 at 12:17 PM, Brandon Bates <brandon at brandonbates.com> wrote: > Hi gluster users, > I've spent several
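A minimal sketch of the tuning being suggested (the volume name "gvol" and the thread counts are assumptions, not taken from the thread):
    gluster volume set gvol performance.client-io-threads off
    gluster volume set gvol server.event-threads 1
    gluster volume set gvol client.event-threads 2
    gluster volume get gvol all | grep -E 'io-threads|event-threads'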
2017 Oct 27
0
Poor gluster performance on large files.
Why don't you set the LSI to passthrough mode and use one brick per HDD? Regards, Bartosz > Message written by Brandon Bates <brandon at brandonbates.com> on 27.10.2017 at 08:47: > > Hi gluster users, > I've spent several months trying to get any kind of high performance out of gluster. The current XFS/samba array is used for video editing and 300-400MB/s for
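A sketch of the layout being proposed, one brick per HDD spread across two servers (server names, mount points and the replica count are placeholders, not from the thread):
    gluster volume create gvol replica 2 \
        server1:/bricks/hdd1/brick server2:/bricks/hdd1/brick \
        server1:/bricks/hdd2/brick server2:/bricks/hdd2/brick \
        server1:/bricks/hdd3/brick server2:/bricks/hdd3/brick
    gluster volume start gvol
Each /bricks/hddN is assumed to be a single HDD with its own filesystem, exposed to the OS directly by the controller in passthrough/JBOD mode.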
2017 Sep 28
2
Bandwidth and latency requirements
Interesting table, Karan! Could you please tell us how you did the benchmark? fio or iozone or similar? Thanks, Arman. On Wed, Sep 27, 2017 at 1:20 PM, Karan Sandha <ksandha at redhat.com> wrote: > Hi Collin, > > During our arbiter latency testing for completion of ops we found the > below results:- an arbiter node in another data centre and both the data > bricks in the
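For reference, a small-file fio run of the sort that could produce such numbers might look like this (mount path, file count and sizes are assumptions, not what Karan actually ran):
    fio --name=smallfile-create --directory=/mnt/glustervol \
        --rw=write --bs=1k --filesize=1k --nrfiles=10000 \
        --ioengine=sync --numjobs=1 --group_reporting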
2017 Sep 25
2
Bandwidth and latency requirements
Hi all, I've googled but can't find an answer to my question. I have two data centers. Currently, I have a replica (count of 2 plus arbiter) in one data center, but it is used by both. I want to change this to be a distributed replica across the two data centers. There is a 20Mbps pipe and approx 22 ms latency. Is this sufficient? I really don't want to do the geo-replication in its
2017 Sep 27
0
Bandwidth and latency requirements
Hi Collin, During our arbiter latency testing for completion of ops we found the below results (an arbiter node in another data centre and both the data bricks in the same data centre): 1) File size 1 KB (10000 files) 2) mkdir

    Ops      5ms        10ms        20ms        50ms        100ms       200ms
    Create   755 secs   1410 secs   2717 secs   5874 secs   12908 secs  26113 secs
    Mkdir    922 secs   1725 secs   3325 secs   8127
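A rough sketch of how such a test could be reproduced on a mounted volume, with the WAN latency to the arbiter simulated by netem (interface name, mount path and delay value are assumptions):
    # add 20 ms of delay on the arbiter node's interface
    tc qdisc add dev eth0 root netem delay 20ms
    # time creation of 10000 1 KB files and 10000 directories from a client mount
    time bash -c 'for i in $(seq 1 10000); do dd if=/dev/zero of=/mnt/glustervol/f$i bs=1k count=1 status=none; done'
    time bash -c 'for i in $(seq 1 10000); do mkdir /mnt/glustervol/d$i; done'
    # remove the simulated delay afterwards
    tc qdisc del dev eth0 root netem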
2013 Aug 21
1
Gluster 3.4 Samba VFS writes slow in Win 7 clients
Hello, We have used glusterfs 3.4 with the latest samba-glusterfs-vfs library to test Samba performance from Windows clients. Two glusterfs server nodes export a share named "gvol". Hardware: each brick uses a RAID 5 logical disk made of 8 * 2T SATA HDDs, with a 10G network connection. One Linux client mounts "gvol" with the command: [root at localhost current]# mount.cifs //192.168.100.133/gvol
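For context, a minimal vfs_glusterfs share definition of the kind used in such setups looks roughly like this (share name, volume name and log path are assumptions, not the poster's actual smb.conf):
    [gvol]
        path = /
        read only = no
        vfs objects = glusterfs
        glusterfs:volume = gvol
        glusterfs:logfile = /var/log/samba/glusterfs-gvol.log
With this, smbd talks to the volume through libgfapi instead of a FUSE mount.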
2017 Sep 07
2
3.10.5 vs 3.12.0 huge performance loss
It is a sequential write with a file size of 2GB. The same behavior is observed with 3.11.3 too. On Thu, Sep 7, 2017 at 12:43 AM, Shyam Ranganathan <srangana at redhat.com> wrote: > On 09/06/2017 05:48 AM, Serkan Çoban wrote: >> >> Hi, >> >> Just did some ingestion tests on a 40 node 16+4 EC 19PB single volume. >> 100 clients are writing, each has 5 threads, total 500 threads.
2008 Jul 03
2
iozone remove_suid oops...
Having done a current checkout, creating a new FS and running iozone [1] on it results in an oops [2]. remove_suid is called, accessing offset 14 of a NULL pointer. Let me know if you'd like me to test any fix, do further debugging or get more information. Thanks, Daniel --- [1] # mkfs.btrfs /dev/sda4 # mount /dev/sda4 /mnt /mnt# iozone -a . --- [2] [ 899.118926] BUG: unable to
2009 Dec 15
1
IOZone: Number of outstanding requests..
Hello: Sorry for asking an iozone question in this mailing list, but I couldn't find any mailing list for iozone... In iozone, is there a way to configure the number of outstanding requests the client sends to the server side? Something along the lines of the IOMeter option "Number of outstanding requests". Thanks a lot!
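As far as I know iozone has no exact equivalent of IOMeter's outstanding-request setting; the usual ways to drive concurrency are throughput mode with multiple threads, or (if the build supports it) POSIX async I/O with a chosen queue depth. A hedged sketch, with arbitrary sizes:
    # 8 threads, write (-i 0) and read (-i 1), 64k records, 1g per thread
    iozone -t 8 -i 0 -i 1 -r 64k -s 1g
    # roughly comparable: 8 outstanding async operations, if -H is available in your build
    iozone -H 8 -i 0 -i 1 -r 64k -s 1g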
2015 Apr 14
3
VM Performance using KVM Vs. VMware ESXi
Hi All We are currently testing our product using KVM as the hypervisor. We are not using KVM as a bare-metal hypervisor; we use it on top of a RHEL installation, so basically RHEL acts as our host and using KVM we deploy guests on this system. We have all along tested and shipped our application image for VMware ESXi installations, so this is the first time we are trying our application
2008 Feb 19
1
ZFS and small block random I/O
Hi, We're doing some benchmarking at a customer (using IOzone) and for some specific small block random tests, performance of their X4500 is very poor (~1.2 MB/s aggregate throughput for a 5+1 RAIDZ). Specifically, the test is the IOzone multithreaded throughput test of an 8GB file size and 8KB record size, with the server physmem'd to 2GB. I noticed a couple of peculiar
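A sketch of an IOzone invocation along those lines (the thread count is a guess; in throughput mode -s is the size per thread, so the per-thread size here is scaled down from the 8GB total mentioned):
    # initial write pass (-i 0) plus random read/write (-i 2), 8k records, 4 threads of 2g each
    iozone -t 4 -i 0 -i 2 -r 8k -s 2g -e
The -e flag includes flush (fsync/fflush) time in the results.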
2008 Aug 01
1
file descriptor in bad state
I've just set up a simple gluster storage system on CentOS 5.2 x64 w/ gluster 1.3.10. I have three storage bricks and one client. Every time I run iozone across this setup, I seem to get a bad file descriptor around the 4k mark. Any thoughts why? I'm sure more info is wanted, I'm just not sure what else to include at this point. thanks [root at green gluster]# cat
2010 Mar 06
3
Monitoring my disk activity
Recently, I've been benchmarking all kinds of stuff on my systems. And one question I can't intelligently answer is what blocksize I should use in these tests. I assume there is something which monitors present disk activity, that I could run on my production servers, to give me some statistics of the block sizes that the users are actually performing on the production server.
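On Solaris-family systems one way to get that distribution is the DTrace io provider; a minimal sketch (run as root, stop with Ctrl-C to print the histogram):
    dtrace -n 'io:::start { @sizes["I/O size (bytes)"] = quantize(args[0]->b_bcount); }'
On Linux, iostat -x reports the average request size per device rather than a full histogram.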
2009 Apr 09
8
ZIL SSD performance testing... -IOzone works great, others not so great
Hi folks, I would appreciate it if someone can help me understand some weird results I'm seeing with trying to do performance testing with an SSD offloaded ZIL. I'm attempting to improve my infrastructure's burstable write capacity (ZFS based WebDav servers), and naturally I'm looking at implementing SSD based ZIL devices. I have a test machine with the
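For reference, a slog device is attached like this (pool and device names are placeholders):
    # single SSD log device
    zpool add tank log c1t2d0
    # or mirrored, so losing one SSD does not lose the in-flight ZIL
    zpool add tank log mirror c1t2d0 c1t3d0
    zpool status tank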
2018 Sep 18
1
Re: NUMA issues on virtualized hosts
On 09/17/2018 04:59 PM, Lukas Hejtmanek wrote: > Hello, > > so the current domain configuration: > <cpu mode='host-passthrough'><topology sockets='8' cores='4' threads='1'/><numa><cell cpus='0-3' memory='62000000' /><cell cpus='4-7' memory='62000000' /><cell cpus='8-11'
2018 Sep 17
2
Re: NUMA issues on virtualized hosts
On 09/14/2018 03:36 PM, Lukas Hejtmanek wrote: > Hello, > > ok, I found that cpu pinning was wrong, so I corrected it to be 1:1. The issue > with iozone remains the same. > > The spec is running; however, it runs slower than in the 1-NUMA case. > > The corrected XML looks like follows: [Reformatted XML for better reading] <cpu mode="host-passthrough">
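A hedged sketch of checking and applying 1:1 pinning from the host (the domain name "guest", the vCPU count and the nodeset are placeholders, not taken from the thread):
    # show host NUMA topology and the guest's current pinning
    numactl --hardware
    virsh vcpuinfo guest
    # pin guest vCPU N to host CPU N, 1:1, for a 32-vCPU guest
    for i in $(seq 0 31); do virsh vcpupin guest $i $i; done
    # bind guest memory allocation to the matching host nodes
    virsh numatune guest --nodeset 0-7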
2008 Jul 16
1
[Fwd: [Fwd: The results of iozone stress on NFS/ZFS and SF X4500 shows the very bad performance in read but good in write]]
Dear ALL, IHAC (I have a customer) who would like to use Sun Fire X4500 to be the NFS server for the backend services, and would like to see the potential performance gain compared to their existing systems. However, the outputs of the I/O stress test with iozone show mixed results as follows: * The read performance sharply degrades (almost down to 1/20, i.e. from 2,000,000 down to 100,000) when the
2010 Jul 20
16
zfs raidz1 and traditional raid 5 perfomrance comparision
Hi, for ZFS raidz1, I know that for random I/O the IOPS of a raidz1 vdev equal the IOPS of one physical disk. Since raidz1 is like RAID 5, does RAID 5 have the same performance as raidz1, i.e. random IOPS equal to one physical disk's IOPS? Regards Victor -- This message posted from opensolaris.org
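As a rough worked example (rule-of-thumb numbers, not from the thread): assuming ~100 random IOPS per 7200 rpm SATA disk, a 6-disk raidz1 vdev still delivers roughly 100 random IOPS, because each block is written as a full stripe across the vdev and a random read touches every disk; a pool striped over 8 such vdevs therefore gives roughly 8 x 100 = 800. A traditional RAID 5 set, by contrast, can serve independent small random reads from each data disk, so its random read IOPS scale closer to the number of disks, while its small random writes pay the usual parity read-modify-write penalty.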
2014 Oct 14
3
Filesystem writes unexpectedly slow (CentOS 6.4)
I have a rather large box (2x8-core Xeon, 96GB RAM) with a couple of disk arrays connected to an Areca controller. I just added a new external array, 8 3TB drives in RAID5, and the testing I'm doing right now is on this array, but this seems to be a problem on this machine in general, on all file systems (even, possibly, NFS, but I'm not sure about that one yet). So, if I use
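A quick way to separate page-cache effects from the array itself is to compare buffered, direct and synced dd writes (the mount point and sizes are placeholders):
    # buffered write, mostly measures RAM
    dd if=/dev/zero of=/mnt/array/testfile bs=1M count=8192
    # direct I/O, bypasses the page cache
    dd if=/dev/zero of=/mnt/array/testfile bs=1M count=8192 oflag=direct
    # buffered but flushed before dd reports, so the final sync is counted
    dd if=/dev/zero of=/mnt/array/testfile bs=1M count=8192 conv=fdatasync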
2016 Jan 25
2
How to make performance test in samba4
Hi everybody, I have several VMs running on XenServer 6.5. I have 2 Samba servers (1 DC & 1 file server), both running Debian Linux 8.2 Jessie, with the Debian samba package (4.1.17). My VM has 8G RAM and 4 vCPUs. How can I test whether read/write performance is good or not? Could I get better performance with the latest Samba release? Thanks, Pierre --
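One simple way to get a baseline is to push a large file through smbclient, which prints the average transfer rate (server, share and credentials are placeholders):
    dd if=/dev/zero of=/tmp/testfile bs=1M count=1024
    smbclient //fileserver/share -U user%password -c 'put /tmp/testfile testfile'
    smbclient //fileserver/share -U user%password -c 'get testfile /dev/null'
Comparing those rates with a plain scp or a local dd on the file server shows whether Samba or the underlying storage/network is the bottleneck.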