similar to: server disk subsystem benchmarks, bonnie++ and/or others?

Displaying 20 results from an estimated 3000 matches similar to: "server disk subsystem benchmarks, bonnie++ and/or others?"

2007 Nov 26
4
Filesystem for Maildir
Hi all, Last year I did some research and benchmarks based on CentOS 4 to find out which filesystem is better for
2010 May 05
6
Benchmark Disk IO
What is the best way to benchmark disk IO? I'm looking to move one of my servers, which is rather IO intense, but not without first benchmarking the current and new disk arrays, to make sure this isn't a complete waste of time. Thanks
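A reproducible starting point, assuming a Linux box with bonnie++ installed and a scratch mount at /mnt/test (both illustrative), is to size the test file well past RAM so the page cache can't hide the array:

    # hypothetical paths and sizes; use ~2-3x physical RAM for -s
    bonnie++ -d /mnt/test -s 16g -n 0 -u nobody
    # raw sequential write, bypassing the page cache
    dd if=/dev/zero of=/mnt/test/ddfile bs=1M count=8192 oflag=direct

Run the same commands on the old and new arrays and compare like with like.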
2011 Nov 23
1
SSD diagnostics / test suite?
Hey folks, I looked back through the list archives and there are surprisingly few threads with "SSD" in the subject. In my new job I've been handed over a number of things that were outstanding with the previous Sys Admin, and one of them was an SSD that was suspect. I just plugged it into a diagnostics laptop to use the Linux Smart software to check it - and it all seems fine
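For a suspect SSD, smartmontools gives a quicker first pass than a full benchmark; a minimal sketch (the device name is illustrative):

    smartctl -a /dev/sda           # dump SMART attributes (reallocated sectors, wear indicators)
    smartctl -t long /dev/sda      # start the drive's extended self-test
    smartctl -l selftest /dev/sda  # read the self-test log once it finishes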
2007 Jun 29
2
poor read performance
I am seeing what seems to be a notable limit on read performance of an ext3 filesystem. If anyone could offer some insight it would be helpful. Background: 12 x 500G SATA disks in a Hardware RAID enclosure connected via 2Gb/s FC to a 4 x 2.6 Ghz system with 4GB ram running RHEL4.5. Initially the enclosure was configured RAID5 10+1 parity, although I've also tried RAID 50 and currently RAID 0.
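Two low-level checks worth trying before blaming the filesystem, sketched here with illustrative device names: rule out the page cache with a direct read, and see whether readahead suits large sequential reads:

    dd if=/mnt/test/bigfile of=/dev/null bs=1M iflag=direct  # uncached sequential read
    blockdev --getra /dev/sdb       # current readahead, in 512-byte sectors
    blockdev --setra 8192 /dev/sdb  # larger readahead often helps big sequential reads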
2008 Nov 13
7
Kernel oops when running bonnie++ on btrfs
I wanted to see how btrfs compares to other filesystems, so I have been running bonnie++ on it. While the results are good (much faster than ext2), every once in a while I get a kernel oops. I am testing on xubuntu 8.10 with the 2.6.27-7-686 kernel using the latest git sources. Most of the time the oops happens within 20min of running bonnie++ but sometimes it takes a few hours. This happens with and
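To narrow down an intermittent oops like this, one approach is to loop the benchmark and capture the kernel log as soon as a run fails; a minimal sketch with an illustrative mount point:

    while bonnie++ -d /mnt/btrfs -s 8g -n 0 -u root; do :; done
    dmesg | tail -n 60   # grab the oops once the loop exits on a failed run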
2007 Sep 13
3
3Ware 9550SX and latency/system responsiveness
Dear list, I thought I'd just share my experiences with this 3Ware card, and see if anyone might have any suggestions. System: Supermicro H8DA8 with 2 x Opteron 250 2.4GHz and 4GB RAM installed. 9550SX-8LP hosting 4x Seagate ST3250820SV 250GB in a RAID 1 plus 2 hot spare config. The array is properly initialized, write cache is on, as is queueing (and supported by the drives). StoreSave
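Latency complaints with this class of card are often about writeback flooding rather than the controller itself; one commonly suggested mitigation on 2.6 kernels is to shrink the dirty-page thresholds and the request queue (values below are illustrative, not a confirmed fix from this thread):

    sysctl -w vm.dirty_ratio=10
    sysctl -w vm.dirty_background_ratio=5
    echo 64 > /sys/block/sda/queue/nr_requests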
2006 Sep 13
4
benchmarking large RAID arrays
I'm just wondering what folks are using to benchmark/tune large arrays these days. I've always used bonnie with file sizes 2-3 times physical RAM. Maybe there's a better way? Cheers,
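The same idea in bonnie++ terms, as a sketch (paths and sizes illustrative): pass -s explicitly at 2-3x RAM, and -f to skip the slow per-character phases that say little about an array:

    bonnie++ -d /array/test -s 24g -n 128 -f -u nobody   # 24g on an 8GB host = 3x RAM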
2009 Nov 03
8
recommend benchmarking SW
Hey folks, We've got some new hardware and are trying to figure out what best to do with it: either run CentOS right on the bare metal, or virtualize, or several combination options. Mainly looking at:
- CentOS on bare metal
- CentOS on ESXi 4.0 with local disk
- CentOS on ESXi with 1 VM running Openfiler to serve disk to other VMs
We want to benchmark these 3 scenarios. So far all we
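Whatever tool is chosen, the key is running an identical job in all three scenarios; a hedged sketch using fio (all job parameters are illustrative):

    fio --name=randrw --directory=/mnt/test --rw=randrw --rwmixread=70 \
        --bs=4k --size=4g --numjobs=4 --iodepth=32 --ioengine=libaio \
        --direct=1 --runtime=120 --time_based --group_reporting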
2007 Jan 11
4
Help understanding some benchmark results
G'day, all, So, I've decided to migrate my home server from Linux+swRAID+LVM to Solaris+ZFS, because it seems to hold much better promise for data integrity, which is my primary concern. However, naturally, I decided to do some benchmarks in the process, and I don't understand why the results are what they are. I thought I had a reasonable understanding of ZFS, but now
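For anyone reproducing this kind of comparison, a minimal ZFS-side sketch (device names illustrative) that also watches per-vdev throughput while the benchmark runs:

    zpool create tank raidz c0t1d0 c0t2d0 c0t3d0
    zfs set atime=off tank
    zpool iostat -v tank 5   # per-device throughput, sampled every 5 seconds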
2018 May 03
1
Finding performance bottlenecks
Tony's performance sounds significantly sub-par from my experience. I did some testing with gluster 3.12 and Ovirt 3.9; on my running production cluster, when I enabled the glfsapi, even my pre-gfapi numbers are significantly better than what Tony is reporting. Before using gfapi:
]# dd if=/dev/urandom of=test.file bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824
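One caveat with this test: /dev/urandom is CPU-bound and on kernels of that era tops out at a few tens of MB/s, so it can cap the measured rate regardless of the storage underneath. A sketch of a variant that takes the RNG out of the picture (standard GNU dd flags):

    dd if=/dev/zero of=test.file bs=1M count=1024 oflag=direct conv=fsync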
2009 Dec 24
6
benchmark results
I've had the chance to use a testsystem here and couldn't resist running a few benchmark programs on them: bonnie++, tiobench, dbench and a few generic ones (cp/rm/tar/etc...) on ext{234}, btrfs, jfs, ufs, xfs, zfs. All with standard mkfs/mount options and +noatime for all of them. Here are the results, no graphs - sorry: http://nerdbynature.de/benchmarks/v40z/2009-12-22/ Reiserfs
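For reference, the noatime part of that setup is just a mount option, e.g. (device and mount point illustrative):

    mount -o noatime /dev/sdb1 /mnt/test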
2009 Feb 13
3
Bonnie++ run with RAID-1 on a single SSD (2.6.29-rc4-224-g4b6136c)
Hi folks, For people who might be interested, here is how btrfs performs with two partitions on a single SSD drive in a RAID-1 mirror. This is on a Dell E4200 with Core 2 Duo U9300 (1.2GHz), 2GB RAM and a Samsung SSD (128GB Thin uSATA SSD).
Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--
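For anyone wanting to reproduce the setup, a sketch of the btrfs side (partition names illustrative; the mirror profile is set for both data and metadata):

    mkfs.btrfs -m raid1 -d raid1 /dev/sda2 /dev/sda3
    mount /dev/sda2 /mnt/btrfs
    bonnie++ -d /mnt/btrfs -s 4g -n 0 -u nobody   # 2x the machine's 2GB RAM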
2009 Jun 24
3
Unexplained reboots in DRBD82 + OCFS2 setup
We're trying to set up a dual-primary DRBD environment, with a shared disk with either OCFS2 or GFS. The environment is CentOS 5.3 with DRBD82 (but we also tried DRBD83 from testing). Setting up a single primary disk and running bonnie++ on it works. Setting up a dual-primary disk, only mounting it on one node (ext3) and running bonnie++ works. When setting up ocfs2 on the /dev/drbd0
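For context, the dual-primary part hinges on one DRBD setting; a minimal drbd.conf sketch (hostnames, disks and addresses are illustrative, DRBD 8.2-era syntax):

    resource r0 {
      protocol C;                # synchronous replication; required for dual-primary
      net {
        allow-two-primaries;     # lets both nodes hold the Primary role at once
      }
      on node1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.1:7788;
        meta-disk internal;
      }
      on node2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk internal;
      }
    }

Unexplained reboots in this kind of setup are often OCFS2 self-fencing after a missed heartbeat, so the cluster timeouts are worth checking alongside the DRBD config.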
2009 Jan 10
3
Poor RAID performance new Xeon server?
I have just purchased an HP ProLiant ML110 G5 server and installed CentOS 5.2 x86_64 on it. It has the following spec: Intel(R) Xeon(R) CPU 3065 @ 2.33GHz, 4GB ECC memory, 4 x 250GB SATA hard disks running at 1.5Gb/s. The onboard RAID controller is enabled but at the moment I have used mdadm to configure the array. RAID bus controller: Intel Corporation 82801 SATA RAID Controller. For a simple
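For comparison, a sketch of the mdadm side (partition names illustrative), plus one tunable that frequently lifts md RAID5 write throughput:

    mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[abcd]1
    cat /proc/mdstat                                  # wait for the initial resync to finish
    echo 8192 > /sys/block/md0/md/stripe_cache_size   # the default of 256 is often too small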
2010 Jul 20
16
zfs raidz1 and traditional raid 5 performance comparison
Hi, for zfs raidz1, I know that for random io, the iops of a raidz1 vdev equal the iops of one physical disk. Since raidz1 is like raid5, does raid5 have the same performance as raidz1, i.e. random iops equal to one physical disk's iops? Regards Victor -- This message posted from opensolaris.org
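The short answer differs by workload, and a back-of-the-envelope with six 100-IOPS disks shows why: a raidz1 vdev reads a full stripe for every block, so its random reads land around 100 IOPS (one disk's worth); raid5 can serve small random reads from single disks, so reads scale to roughly 6 x 100 = 600 IOPS; raid5 small random writes pay the read-modify-write penalty of 4 IOs each, so writes come in near 600 / 4 = 150 IOPS, while raidz1 sidesteps that penalty by always writing full stripes.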
2012 Jan 04
9
Stress test zfs
Hi all, I've got a solaris 10 running 9/10 on a T3. It's an Oracle database box with 128GB of memory. I've been trying to load test the box with bonnie++. I can seem to get 80 to 90 K writes, but can't seem to get more than a couple K for reads. Any suggestions? Or should I take this to a bonnie++ mailing list? Any help is appreciated. I'm kinda
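With 128GB of RAM the usual trap is a test file small enough to live in the ARC; a sketch of a cache-defeating invocation (path illustrative; -r tells bonnie++ the RAM size in MB):

    bonnie++ -d /tank/test -s 256g -r 131072 -n 0 -u nobody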
2009 Apr 09
8
ZIL SSD performance testing... -IOzone works great, others not so great
Hi folks, I would appreciate it if someone can help me understand some weird results I'm seeing with trying to do performance testing with an SSD offloaded ZIL. I'm attempting to improve my infrastructure's burstable write capacity (ZFS based WebDav servers), and naturally I'm looking at implementing SSD based ZIL devices. I have a test machine with the
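For anyone replicating this, adding and watching the slog is straightforward (device names illustrative); the key caveat is that a separate ZIL device only absorbs synchronous writes, so benchmarks doing buffered I/O will never touch it:

    zpool add tank log c2t0d0   # dedicated slog device
    zpool iostat -v tank 5      # confirm sync writes actually land on the log vdev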
2005 Oct 31
4
Best mkfs.ext2 performance options on RAID5 in CentOS 4.2
I can't seem to get the read and write performance better than approximately 40MB/s on an ext2 file system. IMO, this is horrible performance for a 6-drive, hardware RAID 5 array. Please have a look at what I'm doing and let me know if anybody has any suggestions on how to improve the performance... System specs:
- 2 x 2.8GHz Xeons
- 6GB RAM
- 1 x 3ware 9500S-12
- 2 x 6-drive,
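One thing worth checking on ext2-over-RAID5 is whether the filesystem was built to match the array geometry; a sketch, assuming a 64KB per-disk stripe and 4KB blocks, so stride = 64/4 = 16 (older mke2fs releases spell the option -R stride= instead of -E):

    mkfs.ext2 -b 4096 -E stride=16 /dev/sda1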