search for: filebench

Displaying results from an estimated 33 matches for "filebench".

2010 Mar 02
9
Filebench Performance is weird
Greetings All, I am using the Filebench benchmark in "interactive mode" to test ZFS performance with the randomread workload. My Filebench settings & run results are as follows ------------------------------------------------------------------------------------------ filebench> set $filesize=5g filebench> set $dir=/hdd/...
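
A run like the one quoted above boils down to only a few interactive statements; here is a minimal sketch, assuming the stock randomread personality (the directory and run length are illustrative, and recent filebench releases drop the interactive prompt in favour of putting the same statements in a .f file passed to filebench -f):

    # ./filebench
    filebench> load randomread        # load the random-read personality
    filebench> set $dir=/hdd/testdir  # directory under test (placeholder; the real path is truncated above)
    filebench> set $filesize=5g       # matches the setting quoted in the post
    filebench> run 60                 # run for 60 seconds, then print ops/s, MB/s and latency
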
2007 Oct 08
16
Fileserver performance tests
...y doing something like the following: zpool create zfs_raid10_16_disks mirror c3t0d0 c4t0d0 mirror c3t1d0 c4t1d0 mirror c3t2d0 c4t2d0 mirror c3t3d0 c4t3d0 mirror c3t4d0 c4t4d0 mirror c3t5d0 c4t5d0 mirror c3t6d0 c4t6d0 mirror c3t7d0 c4t7d0 then I set "noatime" and ran the following filebench tests: root at sun1 # ./filebench filebench> load fileserver 12746: 7.445: FileServer Version 1.14 2005/06/21 21:18:52 personality successfully loaded 12746: 7.445: Usage: set $dir=<dir> 12746: 7.445: set $filesize=<size> defaults to 131072 12746: 7.445: set $...
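
Condensed, the sequence described above amounts to the following (pool name and run length are illustrative; the real pool used eight mirror pairs, and the "noatime" step maps to the ZFS atime property):

    # zpool create tank mirror c3t0d0 c4t0d0 mirror c3t1d0 c4t1d0   # ...and so on for all 8 mirror pairs
    # zfs set atime=off tank                                        # the "noatime" setting mentioned above
    # ./filebench
    filebench> load fileserver
    filebench> set $dir=/tank
    filebench> run 60
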
2007 Oct 30
2
[osol-help] Squid Cache on a ZFS file system
...> Basically, I want to know if somebody here on this list is using a ZFS > file system for a proxy cache and what will be it''s performance? Will it > improve and degrade Squid''s performance? Or better still, is there any > kind of benchmark tools for ZFS performance? filebench sounds like it''d be useful for you. It''s coming in the next Nevada release, but since it looks like you''re on Solaris 10, take a look at: http://blogs.sun.com/erickustarz/entry/filebench Remember to ''zfs set atime=off mypool/cache'' - there''s...
2006 Nov 03
2
Filebench, X4200 and Sun Storagetek 6140
...busy with some tests on the above hardware and will post some scores soon. For those that do _not_ have the above available for tests, I'm open to suggestions on potential configs that I could run for you. Pop me a mail if you want something specific _or_ you have suggestions concerning filebench (varmail) config setup. Cheers This message posted from opensolaris.org
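
For anyone mailing in a config, the baseline varmail run on such a box is short; a sketch with placeholder values (the personality's own defaults, such as $nfiles and $nthreads in the shipped varmail.f, apply unless overridden):

    # ./filebench
    filebench> load varmail
    filebench> set $dir=/pool/varmail-test   # placeholder test directory on the 6140-backed pool
    filebench> run 60
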
2007 Nov 29
10
ZFS write time performance question
Hi, The question is a ZFS performance question in regard to SAN traffic. We are trying to benchmark ZFS vs VxFS file systems and I get the following performance results. Test Setup: Solaris 10 11/06 Dual port Qlogic HBA with SFCSM (for ZFS) and DMP (for VxFS) Sun Fire v490 server LSI RAID 3994 on backend ZFS Record Size: 128KB (default) VxFS Block Size: 8KB (default) The only thing
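
One knob worth pinning down in a ZFS-vs-VxFS comparison like this is the ZFS recordsize: if the benchmark issues 8KB I/Os, aligning the dataset with the VxFS block size is a common first step. This is a general tuning note rather than something the poster reports doing; the dataset name is a placeholder:

    # zfs set recordsize=8k tank/bench   # match the 8KB VxFS block size; only newly written files use it
    # zfs get recordsize tank/bench
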
2009 Apr 23
1
ZFS SMI vs EFI performance using filebench
I have been testing the performance of zfs vs. ufs using filebench. The setup is a v240, 4GB RAM, 2 CPUs @ 1503MHz, 1 320GB _SAN_-attached LUN, and a ZFS mirrored root disk. Our SAN is a top-notch NVRAM-based SAN. There are lots of discussions about using ZFS with SAN-based storage, and it seems ZFS is designed to perform best with dumb disks (JBODs). The test I r...
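
For context on how the two labels arise on Solaris 10: handing zpool a whole disk gets it an EFI label, while handing it a slice keeps the SMI (VTOC) label, which is also what a ZFS root mirror requires. A hedged illustration with placeholder device names (use one form or the other, not both):

    # zpool create tank c2t0d0      # whole-disk vdev: zpool writes an EFI label on the LUN
    # zpool create tank c2t0d0s0    # slice vdev: the disk keeps its SMI/VTOC label
    # format -e                     # interactive way to switch a disk between SMI and EFI labels
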
2014 Aug 21
2
[PATCH] vhost: Add polling mode
...> Netperf, 1 vm: > > The polling patch improved throughput by ~33% (1516 MB/sec -> 2046 MB/sec). > > Number of exits/sec decreased 6x. > > The same improvement was shown when I tested with 3 vms running netperf > > (4086 MB/sec -> 5545 MB/sec). > > > > filebench, 1 vm: > > ops/sec improved by 13% with the polling patch. Number of exits > was reduced by > > 31%. > > The same experiment with 3 vms running filebench showed similar numbers. > > > > Signed-off-by: Razya Ladelsky <razya at il.ibm.com> > > This rea...
2017 Jul 12
1
Hi all
I have set up a distributed glusterfs volume with 3 servers. The network is 1GbE, and I ran a filebench test from a client. Refer to this link: https://s3.amazonaws.com/aws001/guided_trek/Performance_in_a_Gluster_Systemv6F.pdf -- the more servers in the Gluster volume, the more throughput it should gain. I have tested the network; the bandwidth is 117 MB/s, so with 3 servers I should gain about 300 MB/s (3*11...
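
For reference, a plain 3-brick distributed volume of the kind described is created roughly like this (hostnames and brick paths are placeholders):

    # gluster volume create distvol server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1
    # gluster volume start distvol
    # mount -t glusterfs server1:/distvol /mnt/distvol   # on the client that runs filebench

Note that a pure distribute volume places each whole file on a single brick, so aggregate scaling toward 3x assumes the I/O is spread across files that land on different bricks, and a single client is still bounded by its own 1GbE link.
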
2014 Aug 20
0
[PATCH] vhost: Add polling mode
...I/O acceleration features described in: > KVM Forum 2013: Efficient and Scalable Virtio (by Abel Gordon) > https://www.youtube.com/watch?v=9EyweibHfEs > and > https://www.mail-archive.com/kvm at vger.kernel.org/msg98179.html > > I ran some experiments with TCP stream netperf and filebench (having 2 threads > performing random reads) benchmarks on an IBM System x3650 M4. > I have two machines, A and B. A hosts the vms, B runs the netserver. > The vms (on A) run netperf, its destination server is running on B. > All runs loaded the guests in a way that they were (cpu) satu...
2014 Aug 10
7
[PATCH] vhost: Add polling mode
...imate goal is to implement the I/O acceleration features described in: KVM Forum 2013: Efficient and Scalable Virtio (by Abel Gordon) https://www.youtube.com/watch?v=9EyweibHfEs and https://www.mail-archive.com/kvm at vger.kernel.org/msg98179.html I ran some experiments with TCP stream netperf and filebench (having 2 threads performing random reads) benchmarks on an IBM System x3650 M4. I have two machines, A and B. A hosts the vms, B runs the netserver. The vms (on A) run netperf, its destination server is running on B. All runs loaded the guests in a way that they were (cpu) saturated. For example,...
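
The guest-side commands for the two benchmarks would look roughly like the following (hostname, duration and target directory are placeholders; the filebench variable names follow the stock randomread personality, so check the shipped .f file if they differ):

    # netperf -H B -t TCP_STREAM -l 60   # TCP stream test against the netserver on machine B
    # ./filebench
    filebench> load randomread
    filebench> set $nthreads=2           # the "2 threads performing random reads" described above
    filebench> set $dir=/benchdir
    filebench> run 60
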
2014 Aug 21
0
[PATCH] vhost: Add polling mode
...; > The polling patch improved throughput by ~33% (1516 MB/sec -> 2046 MB/sec). > > > Number of exits/sec decreased 6x. > > > The same improvement was shown when I tested with 3 vms running netperf > > > (4086 MB/sec -> 5545 MB/sec). > > > > > > filebench, 1 vm: > > > ops/sec improved by 13% with the polling patch. Number of exits > > > was reduced by 31%. > > > The same experiment with 3 vms running filebench showed similar numbers. > > > > > > Signed-off-by: Razya Ladelsky <razya at il.ibm.com> &g...
2008 Jul 06
2
Measuring ZFS performance - IOPS and throughput
Can anybody tell me how to measure the raw performance of a new system I'm putting together? I'd like to know what it's capable of in terms of IOPS and raw throughput to the disks. I've seen Richard's raidoptimiser program, but I've only seen results for random read iops performance, and I'm particularly interested in write
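
One low-effort way to get both numbers is to drive the pool with a filebench write workload while watching zpool iostat, which reports per-interval operations and bandwidth (pool name, directory and interval are placeholders):

    # zpool iostat tank 1        # one-second samples: read/write ops and read/write bandwidth
    # ./filebench                # in a second terminal, generate the write load
    filebench> load randomwrite
    filebench> set $dir=/tank/bench
    filebench> run 60
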
2008 Aug 21
3
ZFS handling of many files
Hello, I have been experimenting with ZFS on a test box, preparing to present it to management. One thing I cannot test right now is our real-world application load. We currently write small files to CIFS shares, about 250,000 files a day, in various sizes (1KB to 500MB). Some directories accumulate a lot of individual files (sometimes 50,000 or more in a single directory). We spoke to a Sun
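
A load like this (many small files, very wide directories) can be approximated ahead of time with filebench's createfiles personality; a hedged sketch, assuming the tunables in the stock createfiles.f (verify the variable names against the shipped workload file):

    # ./filebench
    filebench> load createfiles
    filebench> set $dir=/tank/smoketest
    filebench> set $nfiles=250000        # roughly a day's worth of files from the description above
    filebench> set $meandirwidth=50000   # mimic the very wide directories mentioned above
    filebench> run 60
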
2009 Jan 17
2
Comparison between the S-TEC Zeus and the Intel X25-E ??
I'm looking at the newly-orderable (via Sun) STEC Zeus SSDs, and they're outrageously priced. http://www.stec-inc.com/product/zeusssd.php I just looked at the Intel X25-E series, and they look comparable in performance, at about 20% of the cost. http://www.intel.com/design/flash/nand/extreme/index.htm Can anyone enlighten me as to any possible difference between an STEC
2007 Jan 10
1
Solaris 10 11/06
...rt() 6401400 zfs(1) usage output is excessively long 6405330 swap on zvol isn't added during boot 6405966 Hot Spare support in ZFS 6409228 typo in aclutils.h 6409302 passing a non-root vdev via zpool_create() panics system 6415739 assertion failed: !(zio->io_flags & 0x00040) 6416482 filebench oltp workload hangs in zfs 6416759 ::dbufs does not find bonus buffers anymore 6416794 zfs panics in dnode_reallocate during incremental zfs restore 6417978 double parity RAID-Z a.k.a. RAID6 6420204 root filesystem's delete queue is not running 6421216 ufsrestore should use acl_set() for s...
2014 Aug 10
0
[PATCH] vhost: Add polling mode
...I/O acceleration features described in: > KVM Forum 2013: Efficient and Scalable Virtio (by Abel Gordon) > https://www.youtube.com/watch?v=9EyweibHfEs > and > https://www.mail-archive.com/kvm at vger.kernel.org/msg98179.html > > I ran some experiments with TCP stream netperf and filebench (having 2 threads > performing random reads) benchmarks on an IBM System x3650 M4. > I have two machines, A and B. A hosts the vms, B runs the netserver. > The vms (on A) run netperf, its destination server is running on B. > All runs loaded the guests in a way that they were (cpu) satu...
2008 Nov 17
0
Overhead evaluation of my nfsv3client probe implementation
...th kmem_zalloc(). To quantify the overhead caused by tsd_get() and tsd_set(), I did an experiment to measure it. In this experiment, I ran a dtrace script to enable the nfsv3client probes and measure the time consumed by each nfsv3 operation and by tsd_get() and tsd_set(). Then I ran some workloads in *filebench* to perform file operations in an NFS-mounted folder. I use the ratio of the time spent in tsd_get()+tsd_set() to the time spent on the whole operation to evaluate the overhead. The workloads I selected are: randomrw, filemicro_rwrite, filemicro_rread, randomread, randomwrite. Summary of the...
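
The tsd_get()/tsd_set() half of that measurement can be approximated without the custom nfsv3client probes by timing the kernel functions directly with the fbt provider (assuming they are not inlined); a rough sketch to run alongside the filebench workloads, not the poster's actual script:

    # dtrace -n '
        fbt::tsd_get:entry, fbt::tsd_set:entry { self->ts = timestamp; }
        fbt::tsd_get:return, fbt::tsd_set:return /self->ts/ {
            @ns[probefunc] = sum(timestamp - self->ts);   /* total nanoseconds per function */
            self->ts = 0;
        }'
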