Displaying 20 results from an estimated 21 matches for "tiobench".
2008 Feb 14
2
btrfs v0.11 & btrfs v0.12 benchmark results
...Fu-Si Primergy RX330 S1
* AMD Opteron 2210 1.8 GHz
* 1 GB RAM
* 3 x 73 GB, 3Gb/s, hot plug, 10k rpm, 3.5" SAS HDD
* LSI RAID 128 MB
Fu-Si Econel 200
* Intel Xeon 5110
* 512 MB RAM
* 2 x 160 GB SATA HDD
Summary, graphs, etc.:
----------------------
tiobench:
http://hup.hu/old/HUP/FS_test_2008/tiobench.ods
bonnie++:
http://hup.hu/old/HUP/FS_test_2008/bonnieplusplus.ods
dbench:
http://hup.hu/old/HUP/FS_test_2008/dbench.ods
Test outputs:
-------------
Fu-Si Econel 200 - btrfs v0.11
http://hup.hu/old/HUP/FS_test_2008/meresek/btrfs_v0.11__eco200_l...
2009 Dec 24
6
benchmark results
I've had the chance to use a test system here and couldn't resist running a
few benchmark programs on it: bonnie++, tiobench, dbench and a few
generic ones (cp/rm/tar/etc...) on ext{234}, btrfs, jfs, ufs, xfs, zfs.
All with standard mkfs/mount options, plus noatime for each of them.
Here are the results, no graphs - sorry:
http://nerdbynature.de/benchmarks/v40z/2009-12-22/
Reiserfs is locking up during dbench, so I...
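For readers who want to reproduce a run like the one described above, here is a
minimal driver sketch (my own, not the poster's harness). It assumes each
filesystem is already created and mounted with -o noatime under /mnt/<fs>, and
that bonnie++, dbench and the tiobench.pl wrapper are installed; flags may
differ between tool versions.

#!/usr/bin/env python3
# Minimal sketch, not the original poster's script; mount points, sizes and
# flags are assumptions and follow common versions of the tools.
import subprocess
from pathlib import Path

FILESYSTEMS = ["ext2", "ext3", "ext4", "btrfs", "jfs", "xfs", "zfs"]  # assumed list
RESULTS = Path("results")
RESULTS.mkdir(exist_ok=True)

def run(cmd, logfile):
    # Run one benchmark and capture combined stdout/stderr in a log file.
    with open(logfile, "w") as log:
        subprocess.run(cmd, stdout=log, stderr=subprocess.STDOUT, check=False)

for fs in FILESYSTEMS:
    mnt = f"/mnt/{fs}"   # assumed mount point, mounted with -o noatime
    run(["bonnie++", "-d", mnt, "-s", "2048", "-u", "root"],
        RESULTS / f"bonnie.{fs}.log")
    run(["dbench", "-D", mnt, "4"],   # 4 clients
        RESULTS / f"dbench.{fs}.log")
    run(["tiobench.pl", "--dir", mnt, "--size", "2048", "--threads", "4"],
        RESULTS / f"tiobench.{fs}.log")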
2003 Nov 30
1
bad performance on 2.4.23
...2.4.21-rc6-ac1 105.1 3.1
2.4.21-pre4-ac3 104.7 2.3
2.4.18 101.8 1.8
2.4.19 100.2 2.6
2.4.20-rc1 96.5 2.8
2.4.23-rc1 94.8 2.4
- tiobench(ext3):
In 'Sequential Reads' and 'Sequential Writes', 'Maximum Latency' is far too high
tiobench
========
Sequential Reads ext3
File Blk Num Avg Maximum Lat% Lat% CPU
Kernel Siz...
2008 Feb 06
0
[ANNOUNCE] Btrfs v0.12 released
...all bug fixes, and I wanted to get them out there
before the (destabilizing) work on multiple-devices took over.
So, here's v0.12. It comes with a shiny new disk format (sorry), but the gain
is dramatically better random writes to existing files. In testing here, the
random write phase of tiobench went from 1MB/s to 30MB/s. The fix was to
change the way back references for file extents were hashed.
You can download v0.12 here:
http://oss.oracle.com/projects/btrfs
Other changes:
Insert and delete multiple items at once in the btree where possible. Back
references added more tree balan...
2008 Feb 01
0
More performance fixes pushed out to btrfs-unstable
...ting 1 million empty files in a
single dir).
* When inserting back refs for file data extents, hash the offset of the
extent in the file when creating the key. For extents with many references,
this makes a huge difference in CPU time spent creating the back ref.
For the random write phase of tiobench, btrfs v0.11 writes at 2MB/s, using
100% system cpu time. The changes I just pushed out change that to writing
at disk speed.
The bad news is that changing the hashing is a disk format change. I had
originally planned on including the file offset in the hash, but missed it
when finishing off...
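The two btrfs posts above come down to the same point: if the key for a
file-extent back reference ignores the file offset, every reference from one
file collapses onto a single key value and each new insert has to walk the
existing entries, whereas hashing the offset in gives each reference its own
slot. A conceptual Python sketch of that effect (not the btrfs kernel code;
field names and the hash are purely illustrative):

# Conceptual sketch only -- not btrfs on-disk code.
import hashlib

def backref_key(root, inode, file_offset, hash_offset=True):
    # Build the hashed portion of a back-reference key. With
    # hash_offset=False (the v0.11-style behaviour described above) all
    # references from one file share a single key value.
    data = f"{root}:{inode}".encode()
    if hash_offset:
        data += f":{file_offset}".encode()
    return int.from_bytes(hashlib.sha1(data).digest()[:8], "big")

# Many back references from the same root/inode at different file offsets,
# e.g. a large file written in 4 KiB chunks:
refs = [(5, 257, off * 4096) for off in range(100_000)]

old_keys = {backref_key(r, i, o, hash_offset=False) for r, i, o in refs}
new_keys = {backref_key(r, i, o, hash_offset=True) for r, i, o in refs}
print(len(old_keys), "distinct key(s) without the offset -> long scans per insert")
print(len(new_keys), "distinct keys with the offset hashed in -> cheap inserts")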
2005 May 11
0
Re: Hardware RAID Controller -- not a "bug"
From: Joshua Baker-LePain <jlb17 at duke.edu>
> I'm in the midst of testing a dual 9500-12 based system, and I've got all
> sorts of results (I posted tiobench numbers for XFS and ext3 recently).
Until the Escalade 9500S series matures, I've been recommending the following:
3Ware Escalade 7506/8506 for RAID-10.
3Ware Escalade 7506/8506 for RAID-5 when it is largely a read-only setup.
LSI Logic MegaRAID SATA 300-8X (XScale) for RAID-5.
Unless you...
2005 May 08
0
2.6.12-rc3-mm2 benchmarks
...eplying !!]
hi all,
from time to time I do some benchmarks for several filesystems and several
crypto-algorithms too; details here:
http://nerdbynature.de/bench/
latest results here:
http://nerdbynature.de/bench/prinz/2.6.12-rc3-mm2/bonnie.html
http://nerdbynature.de/bench/prinz/2.6.12-rc3-mm2/tiobench.txt
Christian.
- --
BOFH excuse #173:
Recursive traversal of loopback mount points
2001 Jul 23
2
Is anyone using Ext3 on 2.4.x and software raid 5?
Is anyone using Ext3 on a 2.4.x kernel with software raid5? Does it
work correctly? Last I checked no one had tried it yet, but that was
before I went away on vacation.
--
Daniel R. Bidwell | bidwell@andrews.edu
Andrews University Information Technology Services
If two always agree, one of them is unnecessary
"Friends don't let friends do DOS"
"In theory, theory and practice
2010 Oct 01
2
Format details for a raid partition....
So I have been playing with a RAID 10 f2 (2 disks, far layout)
setup... thanks for all of the advice. Now I am playing with the format and
want to make sure I have it set up the best that I can. My raid was built
using the raid 10 option with 2 disks with layout=far, chunk size
512. Now I have read all of the docs I could find about format and stride and
stripe size, and this is what I came up
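For what it's worth, the usual ext3/ext4 arithmetic is stride = chunk size /
filesystem block size and stripe_width = stride x data-bearing disks. The
numbers below are assumptions, not confirmed by this thread: the 512 above is
taken as mdadm's chunk size in KiB, the block size as ext4's default 4 KiB,
and a two-disk RAID10 as holding one disk's worth of data.

# Worked example with assumed numbers; adjust to your actual chunk and
# block sizes.
chunk_kib = 512          # mdadm chunk size (from the post)
block_kib = 4            # ext4 block size (default; not stated in the post)
data_disks = 1           # 2 disks, 2 copies -> 1 disk's worth of data
                         # (some guides count 2 for layout=far; check docs)

stride = chunk_kib // block_kib        # 128 filesystem blocks per chunk
stripe_width = stride * data_disks     # 128

# mke2fs documents the extended option as stripe_width (check your man page).
print(f"mkfs.ext4 -E stride={stride},stripe_width={stripe_width} /dev/md0")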
2006 Sep 13
4
benchmarking large RAID arrays
I'm just wondering what folks are using to benchmark/tune large arrays
these days. I've always used bonnie with file sizes 2-3 times physical
RAM. Maybe there's a better way?
Cheers,
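One way to automate the "file size 2-3 times physical RAM" rule above is to
read MemTotal and pass roughly twice that to bonnie++, so the page cache
cannot absorb the working set. A small sketch (mine, not from the thread); the
target directory is an assumption and the flags follow common bonnie++
releases, so verify against your version's man page.

# Sketch: size the bonnie++ working set at ~2x physical RAM.
import subprocess

def mem_total_mb():
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) // 1024   # kB -> MB
    raise RuntimeError("MemTotal not found in /proc/meminfo")

ram_mb = mem_total_mb()
subprocess.run(["bonnie++",
                "-d", "/mnt/array",        # assumed mount point of the array
                "-s", str(2 * ram_mb),     # file size in MB, ~2x RAM
                "-r", str(ram_mb),         # tell bonnie++ the RAM size
                "-u", "root"],
               check=False)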
2005 May 21
3
Standardized Benchmarking?
Hello all,
I'm creating a site where people can share their benchmarks. If you are
interested, the site is at (I just started on it so it has the stock
graphics and color scheme still):
www.dcsnow.com/mambo
I would like some thoughts on what would be a good way to standardize the
testing so the results are more comparable. Is Bonnie+ a good program for
hard drive speed testing? Is there
2008 Oct 03
2
[PATCH 0/2] dm-ioband: I/O bandwidth controller v1.7.0: Introduction
Hi everyone,
This is the dm-ioband version 1.7.0 release.
Dm-ioband is an I/O bandwidth controller implemented as a device-mapper
driver, which gives specified bandwidth to each job running on the same
physical device.
- Can be applied to the kernel 2.6.27-rc5-mm1.
- Changes from 1.6.0 (posted on Sep 24, 2008):
- Fix a problem where processes issuing I/Os are permanently blocked
when I/O
2008 Sep 22
7
performance of pv drivers for windows
Hello everybody,
I tried to measure the performance of the available drivers for Windows as an
HVM guest.
I used the gplpv drivers 0.9.11-pre17, the PV drivers from Novell, and the
drivers from Citrix XenSource with XenServer 5.
The Novell and gplpv drivers were more or less at the same speed, for both
network and disk performance.
The disk performance was about 10MB/s reading and
2006 Nov 26
1
ext3 4TB fs limit on amd64 (FAQ?)
...data.
Afterwards I checked the data with md5sum and did a fsck, everything seems
to be fine so far.
Was it just luck that I didn't see any data corruption? Can I use ext3 for
filesystems >4TB and <8TB on amd64 these days? I also tried xfs, but unlike ext3 it
repeatably froze the system when I ran the tiobench benchmark.
Ralf
2007 Apr 12
7
Looking for a good disk exerciser
I recently added a Seagate 400GB SATA drive to my system, and it has been
behaving strangely since I put it in. For one thing, the BIOS S.M.A.R.T.
came up with a warning the last time I booted with it enabled, saying that I
should back up my data and replace the disk (!).
I still have not made any irreversible data transfers to this drive, and I
have some time yet to take it back, but I'd
2013 Mar 12
2
ext4 and extremely slow filesystem traversal
...served blocks.
In other words the fs was created with 'mkfs -t ext4 -E stride=16 -m 0
-L volname /dev/vgX/Y'. I'm attaching the mke2fs.conf for reference too.
Everything is running with Debian Squeeze and its 2.6.32 kernel (amd64
flavour), on a 4 cores and 4 GB RAM server.
I ran a tiobench tonight on an idle instance (I have two identical
systems - hw, sw, data - with exactly the same problem). I've attached
results as plain text to protect them from line wrapping. They look fine
to me.
When I try to back up the problematic filesystem with tar, rsync or
whatever tool traversing the...
2008 Feb 15
38
Performance with Sun StorageTek 2540
Under Solaris 10 on a 4-core Sun Ultra 40 with 20GB RAM, I am setting
up a Sun StorageTek 2540 with 12 300GB 15K RPM SAS drives connected
via load-shared 4Gbit FC links. This week I have tried many
different configurations, using firmware-managed RAID, ZFS-managed
RAID, and with the controller cache enabled or disabled.
My objective is to obtain the best single-file write performance.