similar to: ZFS and small block random I/O

Displaying 20 results from an estimated 800 matches similar to: "ZFS and small block random I/O"

2009 Dec 15
1
IOZone: Number of outstanding requests..
Hello: Sorry for asking an iozone question on this mailing list, but I couldn't find a mailing list dedicated to iozone... In IOzone, is there a way to configure the number of outstanding requests the client sends to the server side? Something along the lines of the IOMeter option "Number of outstanding requests". Thanks a lot!
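As far as I know, iozone has no exact equivalent of IOMeter's "Number of outstanding requests"; the closest knobs are the number of worker threads in throughput mode and, in builds with POSIX async I/O support, the async operation depth. A minimal sketch, with illustrative values that are not from the thread:

    # -t 8 : throughput mode with 8 worker threads
    # -H 16: keep about 16 POSIX async I/O operations outstanding per worker (if the build supports it)
    # -i 0 -i 1 : run only the write/rewrite and read/reread tests
    iozone -t 8 -H 16 -s 512m -r 64k -i 0 -i 1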
2015 Apr 14
3
VM Performance using KVM Vs. VMware ESXi
Hi All, We are currently testing our product using KVM as the hypervisor. We are not using KVM as a bare-metal hypervisor; we use it on top of a RHEL installation. So basically RHEL acts as our host, and using KVM we deploy guests on this system. Until now we have tested and shipped our application image for VMware ESXi installations, so this is the first time we are trying our application
2008 Jul 03
2
iozone remove_suid oops...
Having done a current checkout, creating a new FS and running iozone [1] on it results in an oops [2]. remove_suid is called, accessing offset 14 of a NULL pointer. Let me know if you'd like me to test any fix, do further debugging or get more information. Thanks, Daniel --- [1] # mkfs.btrfs /dev/sda4 # mount /dev/sda4 /mnt /mnt# iozone -a . --- [2] [ 899.118926] BUG: unable to
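For readability, the reproduction steps embedded in [1] above, one command per line:

    # as reported: create a fresh btrfs filesystem, mount it, run an automatic iozone pass on it
    mkfs.btrfs /dev/sda4
    mount /dev/sda4 /mnt
    cd /mnt
    iozone -a .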
2017 Oct 27
0
Poor gluster performance on large files.
Why don't you set the LSI controller to passthrough mode and use one brick per HDD? Regards, Bartosz > Message written by Brandon Bates <brandon at brandonbates.com> on 27.10.2017 at 08:47: > > Hi gluster users, > I've spent several months trying to get any kind of high performance out of gluster. The current XFS/samba array is used for video editing and 300-400MB/s for
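A minimal sketch of the suggested layout, with hostnames, paths and replica count as placeholders rather than Brandon's actual setup: each HDD carries its own filesystem and its own brick, instead of hiding all disks behind one LSI RAID volume.

    # one filesystem per disk, one brick per filesystem (all names are hypothetical)
    gluster volume create gvol replica 2 \
        server1:/bricks/disk1/brick server2:/bricks/disk1/brick \
        server1:/bricks/disk2/brick server2:/bricks/disk2/brick
    gluster volume start gvol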
2017 Oct 30
0
Poor gluster performance on large files.
Hi Brandon, Can you please turn OFF client-io-threads, as we have seen performance degrade with io-threads ON for sequential and random reads/writes. Server event threads default to 1 and client event threads default to 2. Thanks & Regards On Fri, Oct 27, 2017 at 12:17 PM, Brandon Bates <brandon at brandonbates.com> wrote: > Hi gluster users, > I've spent several
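A sketch of the corresponding commands, assuming a volume named "gvol"; the option names are standard Gluster volume settings, and the thread counts are example values only:

    # disable client-side io-threads as suggested above
    gluster volume set gvol performance.client-io-threads off
    # event thread counts, should you want to raise them from the defaults mentioned above
    gluster volume set gvol server.event-threads 2
    gluster volume set gvol client.event-threads 4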
2017 Oct 27
5
Poor gluster performance on large files.
Hi gluster users, I've spent several months trying to get any kind of high performance out of gluster. The current XFS/samba array is used for video editing, and 300-400MB/s for at least 4 clients is the minimum (currently a single Windows client gets at least 700/700 over samba, peaking at 950 at times using the Blackmagic speed test). Gluster has been getting me as low as
2011 Jan 08
1
how to graph iozone output using OpenOffice?
Hi all, Can anyone please steer me in the right direction with this one? I've searched the net but couldn't find a clear answer. How do I actually generate graphs from iozone using OpenOffice? Every website I've been to simply mentions that iozone can output an xls file which can be used in MS Excel to generate a 3D graph, but I can't see how it's actually done. Can anyone
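The usual workflow is to have iozone write the Excel-compatible file itself, then open that file in OpenOffice/LibreOffice Calc, select one of the result matrices (e.g. the writer report) and insert a 3D chart from it. A minimal sketch of the iozone side, with a placeholder output filename:

    # -a : full automatic mode
    # -R : generate Excel-style report output
    # -b : write the Excel-compatible results to a file
    iozone -a -R -b iozone_results.xls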
2008 Jul 16
1
[Fwd: [Fwd: The results of iozone stress on NFS/ZFS and SF X4500 shows the very bad performance in read but good in write]]
Dear ALL, IHAC (I have a customer) who would like to use the Sun Fire X4500 as the NFS server for their backend services, and would like to see the potential performance gain compared to their existing systems. However, the output of the I/O stress test with iozone shows mixed results as follows: * The read performance sharply degrades (almost down to 1/20, i.e. from 2,000,000 down to 100,000) when the
2009 Apr 09
8
ZIL SSD performance testing... -IOzone works great, others not so great
Hi folks, I would appreciate it if someone can help me understand some weird results I'm seeing while trying to do performance testing with an SSD-offloaded ZIL. I'm attempting to improve my infrastructure's burstable write capacity (ZFS based WebDav servers), and naturally I'm looking at implementing SSD based ZIL devices. I have a test machine with the
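For reference, attaching an SSD as a separate log (ZIL) device is a one-liner; the pool and device names below are placeholders, not the poster's configuration:

    # single SSD log device
    zpool add tank log c1t2d0
    # or a mirrored pair, so losing one SSD does not risk in-flight synchronous writes
    zpool add tank log mirror c1t2d0 c1t3d0
    zpool status tank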
2017 Sep 07
2
3.10.5 vs 3.12.0 huge performance loss
It is a sequential write with a file size of 2GB. The same behavior was observed with 3.11.3 too. On Thu, Sep 7, 2017 at 12:43 AM, Shyam Ranganathan <srangana at redhat.com> wrote: > On 09/06/2017 05:48 AM, Serkan Çoban wrote: >> >> Hi, >> >> Just did some ingestion tests on a 40-node, 16+4 EC, 19PB single volume. >> 100 clients are writing, each with 5 threads, 500 threads in total.
2018 Sep 18
1
Re: NUMA issues on virtualized hosts
On 09/17/2018 04:59 PM, Lukas Hejtmanek wrote: > Hello, > > so the current domain configuration: > <cpu mode='host-passthrough'><topology sockets='8' cores='4' threads='1'/><numa><cell cpus='0-3' memory='62000000' /><cell cpus='4-7' memory='62000000' /><cell cpus='8-11'
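A hedged sketch of how one might verify, inside the guest, that the NUMA topology from this XML actually arrived; these are generic Linux tools, not commands taken from the thread:

    # how many NUMA nodes the guest sees, per-node memory sizes and distances
    numactl --hardware
    # per-node allocation and miss counters while the workload runs
    numastat -m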
2013 Aug 21
1
Gluster 3.4 Samba VFS writes slow in Win 7 clients
Hello, We have used glusterfs 3.4 with the latest samba-glusterfs-vfs lib to test Samba performance from a Windows client. Two glusterfs server nodes export a share named "gvol". Hardware: each brick uses a RAID 5 logical disk made of 8 * 2TB SATA HDDs; 10G network connection. One Linux client mounts "gvol" with the command: [root at localhost current]# mount.cifs //192.168.100.133/gvol
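For context, a samba-glusterfs-vfs share of this kind is normally defined with a stanza roughly like the one sketched in the comments below; the values are illustrative and the option names come from the vfs_glusterfs documentation, not from the post:

    # illustrative share stanza for the gluster volume:
    #   [gvol]
    #       path = /
    #       vfs objects = glusterfs
    #       glusterfs:volume = gvol
    #       glusterfs:volfile_server = localhost
    #       kernel share modes = no
    testparm -s                      # sanity-check smb.conf after editing
    smbcontrol all reload-config     # make smbd pick up the change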
2004 Jun 26
1
OCFS Performance on a Hitachi SAN
I've been reading this group for a while and I've noticed a variety of comments regarding running OCFS on top of path-management packages such as EMC's Powerpath, and it brought to mind a problem I've been having. I'm currently testing a six-node cluster connected to a Hitachi 9570V SAN storage array, using OCFS 1.0.12. I have six LUNs presented to the hosts using HDLM,
2010 Oct 04
1
samba 3.3 - poor performance (compared to NFS)
I have a system that I'm vetting as a NAS server. It has a 2.0TB XFS filesystem mounted on /storage, and I'm doing benchmarks using nfs3, nfs4, and samba. I'm testing via iozone by mounting the filesystem from my "nas client" box and then running iozone on the mounted filesystem. NFS seems pretty fast - i.e., several orders of magnitude faster than samba - and I'm
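When comparing NFS and Samba mounts with iozone, it helps to run the identical invocation against each mount and to include close/fsync in the timing; otherwise a network filesystem can look fast simply by buffering writes. A sketch with placeholder mount points:

    # -c: include close() in the timings, -e: include fsync()/fflush() in the timings
    iozone -a -c -e -f /mnt/nfs_test/iozone.tmp
    iozone -a -c -e -f /mnt/smb_test/iozone.tmp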
2007 Sep 13
3
3Ware 9550SX and latency/system responsiveness
Dear list, I thought I'd just share my experiences with this 3Ware card, and see if anyone might have any suggestions. System: Supermicro H8DA8 with 2 x Opteron 250 2.4GHz and 4GB RAM installed. 9550SX-8LP hosting 4x Seagate ST3250820SV 250GB in a RAID 1 plus 2 hot spare config. The array is properly initialized, write cache is on, as is queueing (and supported by the drives). StoreSave
2014 Oct 14
3
Filesystem writes unexpectedly slow (CentOS 6.4)
I have a rather large box (2x8-core Xeon, 96GB RAM) with a couple of disk arrays connected to an Areca controller. I just added a new external array, 8 3TB drives in RAID5, and the testing I'm doing right now is on this array, but this seems to be a problem on this machine in general, on all file systems (even, possibly, NFS, but I'm not sure about that one yet). So, if I use
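The message is cut off before the actual test command, but a typical quick write test of the kind usually quoted in these threads looks like the sketch below; the paths and sizes are placeholders, not the poster's commands:

    # buffered write, but fdatasync before dd exits so the reported rate is honest
    dd if=/dev/zero of=/array/testfile bs=1M count=4096 conv=fdatasync
    # the same size with O_DIRECT, bypassing the page cache
    dd if=/dev/zero of=/array/testfile bs=1M count=4096 oflag=direct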
2018 Sep 14
1
Re: NUMA issues on virtualized hosts
Hello again, when the iozone writes are slow, this is how slabtop looks:

    OBJS      ACTIVE    USE  OBJ SIZE  SLABS    OBJ/SLAB  CACHE SIZE  NAME
    62476752  62476728  0%   0.10K     1601968  39        6407872K    buffer_head
    1000678   999168    0%   0.56K     142954   7         571816K     radix_tree_node
    132184    125911    0%   0.03K     1066     124       4264K       kmalloc-32
    118496    118224    0%   0.12K     3703     32        14812K      kmalloc-node
    73206     56467     0%   0.19K     3486     21
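One way to check whether that large buffer_head slab (roughly 6 GB of cached block-device buffers) is what is squeezing memory during the runs; a hedged sketch, not taken from the thread:

    # how much is sitting in buffers / dirty pages right now
    grep -iE 'buffers|dirty' /proc/meminfo
    # drop clean page cache, dentries and inodes, then re-run the iozone test
    sync; echo 3 > /proc/sys/vm/drop_caches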
2007 Nov 29
2
Balancing I/O Load
We are seeing some disturbing (probably due to our ignorance) behavior from lustre 1.6.3 right now. We have 8 OSSs with 3 OSTs per OSS (24 physical LUNs). We just created a brand new lustre file system across this configuration using the default mkfs.lustre formatting options. We have this file system mounted across 400 clients. At the moment, we have 63 IOzone threads running
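Two quick checks that are often the first step when load across OSTs looks unbalanced; a sketch with a placeholder mount point, not from the thread:

    # per-OST capacity and usage; a single nearly-full or idle OST stands out here
    lfs df -h /mnt/lustre
    # stripe the benchmark directory across all OSTs so each IOzone thread is not
    # pinned to whichever OST its file happened to land on
    lfs setstripe -c -1 /mnt/lustre/iozone_run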
2006 Oct 31
0
6256083 Need a lightweight file page mapping mechanism to substitute segmap
Author: praks
Repository: /hg/zfs-crypto/gate
Revision: 4c3b7ab574cc73502effa96c11c293e04fd54309
Log message:
6256083 Need a lightweight file page mapping mechanism to substitute segmap
6387639 segkpm segment set to incorrect size for amd64
Files:
create: usr/src/uts/common/vm/vpm.c
create: usr/src/uts/common/vm/vpm.h
update: usr/src/pkgdefs/SUNWhea/prototype_com
update:
2010 Aug 06
0
Re: PATCH 3/6 - direct-io: do not merge logically non-contiguous requests
On Fri, May 21, 2010 at 15:37:45AM -0400, Josef Bacik wrote: > On Fri, May 21, 2010 at 11:21:11AM -0400, Christoph Hellwig wrote: >> On Wed, May 19, 2010 at 04:24:51PM -0400, Josef Bacik wrote: >> > Btrfs cannot handle having logically non-contiguous requests submitted. For >> > example if you have >> > >> > Logical: [0-4095][HOLE][8192-12287]