search for: bonnie++

Displaying 20 results from an estimated 152 matches for "bonnie++".

2012 Jan 04
9
Stress test zfs
Hi all, I've got a Solaris 10 9/10 system running on a T3. It's an Oracle box with 128GB of memory. I've been trying to load test the box with bonnie++. I can get 80 to 90 K writes, but can't seem to get more than a couple K for reads. Any suggestions? Or should I take this to a bonnie++ mailing list? Any help is appreciated. I'm kinda new to load testing. Thanks.
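
A run along these lines would make the test meaningful on a 128GB box (a sketch; the target directory is a placeholder, -s is set to twice RAM so the page cache cannot absorb the fileset, and -f skips the slow per-character phases):

    # sketch: fileset at 2x RAM, skipping per-char tests
    bonnie++ -d /pool/bench -s 256g -u nobody -f
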
2006 Oct 04
2
server disk subsystem benchmarks, bonnie++ and/or others?
Greetings. I've searched to no avail so far... there is bound to be something more intelligible out there. I am playing with bonnie++ for the first time. May I please get some advice and list experience on using this or other disk subsystem benchmark programs properly, with or without a GUI? The test system in this case is a Compaq DL360 with 2 to 4 GB DRAM and qty (2) 36GB 10k drives in hardware RAID1 (raid1 default 128k) an...
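
A quick sequential baseline to run alongside bonnie++ (a sketch; assumes /data is the RAID1 mount and a GNU dd with O_DIRECT support):

    # write then read 4GB with the page cache bypassed
    dd if=/dev/zero of=/data/ddtest bs=1M count=4096 oflag=direct
    dd if=/data/ddtest of=/dev/null bs=1M iflag=direct
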
2009 Jun 24
3
Unexplained reboots in DRBD82 + OCFS2 setup
We're trying to set up a dual-primary DRBD environment with a shared disk running either OCFS2 or GFS. The environment is CentOS 5.3 with DRBD82 (but we also tried DRBD83 from testing). Setting up a single-primary disk and running bonnie++ on it works. Setting up a dual-primary disk, mounting it on only one node (ext3) and running bonnie++ also works. When we set up ocfs2 on the /dev/drbd0 disk and mount it on both nodes, basic functionality seems in place, but usually less than 5-10 minutes after I start bonnie++ as a test on one o...
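
For reference, the failing test sequence looks roughly like this (a sketch; node count and mount point are placeholders, and it assumes the o2cb cluster configuration is already in place):

    mkfs.ocfs2 -N 2 /dev/drbd0             # two node slots
    mount -t ocfs2 /dev/drbd0 /mnt/shared  # on both nodes
    bonnie++ -d /mnt/shared -u nobody      # the step that triggers the reboots
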
2009 Jan 10
3
Poor RAID performance new Xeon server?
...have used mdadm to configure the array. RAID bus controller: Intel Corporation 82801 SATA RAID Controller. For a simple striped array I ran:

    # mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
    # mke2fs -j /dev/md0
    # mount -t ext3 /dev/md0 /mnt

Attached are the results of 2 bonnie++ tests I made to test the performance:

    # bonnie++ -s 256m -d /mnt -u 0 -r 0
    # bonnie++ -s 1g -d /mnt -u 0 -r 0

I also tried 3 of the drives in a RAID 5 setup, which gave similar results. Is it me or are the results poor? Is this the best I can expect from the hardware or is something wrong...
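
To separate the array from the filesystem, it may help to measure /dev/md0 directly (a sketch; the 2GB read size is arbitrary):

    # raw read throughput from the md device, bypassing ext3
    hdparm -t /dev/md0
    dd if=/dev/md0 of=/dev/null bs=1M count=2048 iflag=direct
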
2008 Feb 27
8
Physical disks + Software Raid + lvm in domU
...use the two devices to assemble a software RAID device (/dev/md0). On /dev/md0 I create an LVM volume group and LVM volumes. Everything seems to work fine if the LVM volumes on the DomU are lightly used. Under heavy load the DomU freezes up immediately. The I/O tests are performed with bonnie++. This is the DomU configuration file:

    # -*- mode: python; -*-
    kernel = "/boot/vmlinuz-2.6.18-6-xen-amd64"
    ramdisk = "/boot/initrd.img-2.6.18-6-xen-amd64"
    memory = 256
    name = "apollo"
    vif = ['bridge=xenbr0']
    disk = ['fil...
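
The in-DomU stack described above would be assembled roughly like this (a sketch; device names, volume group name and sizes are placeholders):

    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/xvdb /dev/xvdc
    pvcreate /dev/md0
    vgcreate vg0 /dev/md0
    lvcreate -L 8G -n data vg0
    mkfs.ext3 /dev/vg0/data
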
2007 Jan 11
4
Help understanding some benchmark results
...r promise for data integrity, which is my primary concern. However, naturally, I decided to do some benchmarks in the process, and I don't understand why the results are what they are. I thought I had a reasonable understanding of ZFS, but now I'm not so sure. I've used bonnie++ and a variety of Linux RAID configs below to approximate equivalent ZFS configurations and compare. I do realise they're not exactly the same thing, but it seems to me they're reasonable comparisons and should return at least somewhat similar performance. I also realise bonnie...
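
One such approximate pairing, as a sketch (disk names are placeholders; a three-disk raidz is treated as the rough analogue of a three-disk RAID5):

    # ZFS side
    zpool create tank raidz c0t1d0 c0t2d0 c0t3d0
    # Linux side
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
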
2005 Oct 31
4
Best mkfs.ext2 performance options on RAID5 in CentOS 4.2
...he deadline scheduler.

mkfs.ext2 options used:
------------------------
mkfs.ext2 -b 4096 -L /d01 -m 1 -O sparse_super,dir_index -R stride=64 -T largefile /dev/sda1

I'm using a stride size of 64 since the ext2 block size is 4KB and the array stripe size is 256KB (256/4 = 64).

Output of using bonnie++:
---------------------------
$ /usr/local/bonnie++/sbin/bonnie++ -d /d01/test -r 6144 -m anchor_ext2_4k_64s
Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP...
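
The stride arithmetic generalizes; on newer e2fsprogs the same hint is spelled with -E, and RAID5 also wants a stripe_width (a sketch, assuming a 5-disk RAID5, i.e. 4 data disks):

    # stride = stripe / block = 256KB / 4KB = 64
    # stripe_width = stride * data disks = 64 * 4 = 256
    mkfs.ext2 -b 4096 -E stride=64,stripe_width=256 /dev/sda1
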
2007 Nov 26
4
Filesystem for Maildir
...erformance but unreliable, with bad recovery tools.
- XFS: My choice, good performance and reliability.

On CentOS 5.0, I ran the same benchmarks and now EXT3 and XFS seem to have better or equivalent performance on Read and Create Random files. One of these tests, using bonnie++, shows this:

# bonnie++ -d /mnt/sdc1/testfile -s 8192 -m `hostname` -n 50:150000:5000:1000

XFS:
Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr-...
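
As I read the bonnie++ man page, the colon-separated -n spec is num:max-size:min-size:num-directories, with num in multiples of 1024, so the run above exercises the Maildir-like case of many small files:

    # 50*1024 files, 5000..150000 bytes each, spread over 1000 directories
    bonnie++ -d /mnt/sdc1/testfile -s 8192 -m `hostname` -n 50:150000:5000:1000
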
2008 Nov 13
7
Kernel oops when running bonnie++ on btrfs
I wanted to see how btrfs compares to other filesystems, so I have been running bonnie++ on it. While the results are good (much faster than ext2), every once in a while I get a kernel oops. I am testing on xubuntu 8.10 with the 2.6.27-7-686 kernel using the latest git sources. Most of the time the oops happens within 20 minutes of running bonnie++, but sometimes it takes a few hours. This ha...
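
A reproduction loop under those conditions might look like this (a sketch; the mount point is a placeholder):

    while true; do
        bonnie++ -d /mnt/btrfs -u root || break
        dmesg | tail -n 20    # watch for oops traces between runs
    done
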
2009 Dec 24
6
benchmark results
I've had the chance to use a test system here and couldn't resist running a few benchmark programs on it: bonnie++, tiobench, dbench and a few generic ones (cp/rm/tar/etc...) on ext{234}, btrfs, jfs, ufs, xfs, zfs. All with standard mkfs/mount options, plus noatime for all of them. Here are the results, no graphs - sorry: http://nerdbynature.de/benchmarks/v40z/2009-12-22/ Reiserfs is locking up during d...
2005 May 11
5
Xen reboots on dom-U disk stress...
Hi all, I tried to run the bonnie++ disk stresser in dom-U, whose disk is backed by a non-local (on NFS) loop-back file. The machine rebooted pretty quickly. So, how do I tell what's barfing? Is it Xen? Is it dom-0 (NFS or loop-back)? I looked in dom-0's /var/log/messages and didn't see any o...
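
Places worth checking first, as a sketch (xm is the Xen toolstack of that era; paths may vary by distribution):

    xm dmesg                       # hypervisor message buffer
    tail -f /var/log/xen/xend.log  # dom-0 xend log
    # a serial console is the surest way to catch a panic that kills the box
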
2005 Jul 14
1
a comparison of ext3, jfs, and xfs on hardware raid
...ill send me some tips for increasing ext3 performance. The system is using an Areca hardware RAID controller with 5 7200RPM SATA disks. The RAID controller has 128MB of cache and the disks each have 8MB. The cache is write-back. The system is Linux 2.6.12 on amd64 with 1GB system memory. Using bonnie++ with a 10GB fileset, in MB/s:

            ext3   jfs   xfs
Read         112   188   141
Write         97   157   167
Rewrite       51    71    60

These numbers were obtained using the mkfs defaults for all filesystems and the deadline scheduler. As you can see, JFS is kicking butt on this test. Nex...
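
For completeness, the scheduler selection mentioned above is runtime-switchable via sysfs (a sketch; sda is a placeholder):

    echo deadline > /sys/block/sda/queue/scheduler
    cat /sys/block/sda/queue/scheduler   # current choice shown in brackets
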
2008 Mar 04
2
7.0-Release and 3ware 9550SXU w/BBU - horrible write performance
...sly wrong but I'm not sure what. I've got a 4-disk RAID 10 array. According to 3dm2 the cache is on. I even tried setting the StorSave preference to "Performance" with no real benefit. There seems to be something really wrong with disk performance. Here are the results from bonnie:

File './Bonnie.2551', size: 104857600
Writing with putc()...done
Rewriting...done
Writing intelligently...done
Reading with getc()...done
Reading intelligently...done
Seeker 1...Seeker 2...Seeker 3...start 'em...done...done...done...
              -------Sequential Output-------- ---Se...
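
If 3ware's CLI is installed, the unit's cache and StorSave state can be confirmed directly (a sketch; the controller/unit numbers are assumptions and tw_cli syntax varies by version):

    tw_cli /c0/u0 show   # unit status, cache setting, StorSave policy
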
2009 Feb 13
3
Bonnie++ run with RAID-1 on a single SSD (2.6.29-rc4-224-g4b6136c)
Hi folks, For people who might be interested, here is how btrfs performs with two partitions on a single SSD drive in a RAID-1 mirror. This is on a Dell E4200 with Core 2 Duo U9300 (1.2GHz), 2GB RAM and a Samsung SSD (128GB Thin uSATA SSD).

Version 1.03c      ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--
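
Such a mirror is typically created along these lines (a sketch; partition names are placeholders, and older btrfs tools may need a device scan first):

    mkfs.btrfs -d raid1 -m raid1 /dev/sda2 /dev/sda3
    mount /dev/sda2 /mnt/btrfs
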
2008 Sep 25
4
Help with b97 HVM zvol-backed DomU disk performance
Hi Folks, I was wondering if anyone has any pointers/suggestions on how I might increase the disk performance of an HVM zvol-backed DomU? This is my first DomU, so hopefully it's something obvious. Running bonnie++ shows the DomU's performance to be 3 orders of magnitude worse than Dom0's, which itself is half as good as when not running xVM at all (see bottom for bonnie++ results). I'm using a 2*raidz2 SAN for the DomU backing. I'm also using a separate disk (a compact...
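
Roughly how a zvol-backed HVM disk is declared, as a sketch (pool, volume name and size are placeholders):

    zfs create -V 10G tank/domu1
    # then in the domU config:
    # disk = ['phy:/dev/zvol/dsk/tank/domu1,hda,w']
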
2013 Sep 04
4
Linux tool to check random I/O performance
We just purchased a new I/O card and would like to check its I/O performance. For a sequential I/O performance test we can use "hdparm -t /dev/xxx", but for a random I/O performance test, which Linux command can I use? ** Our environment does NOT allow installing third-party software. Thanks
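
With only coreutils available, a crude random-read probe can be built from dd (a sketch; the device, block size and iteration count are placeholders, and iflag=direct needs a GNU dd):

    # 100 random 4k reads scattered over the first ~4GB of the device
    for i in $(seq 1 100); do
        dd if=/dev/sdb of=/dev/null bs=4k count=1 \
           skip=$((RANDOM * RANDOM % 1000000)) iflag=direct 2>/dev/null
    done
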
2003 Sep 22
3
Fwd: privsep in ssh
...ses a small amount of extra CPU power for root logins, and on systems such as SE Linux it provides security benefits. Anyone who wants to use the SE Linux PAM module for sshd probably wants this.
--
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/    Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page
2007 Nov 08
1
XEN HVMs on LVM over iSCSI - test results, (crashes) and questions
...I have successfully installed a Windows XP HVM, Scientific Linux CERN - SLC - 4 and 3 (RedHat EL 4 and 3) HVMs, and other non-HVM machines. The volume I imported from the iSCSI is used with LVM: each HVM domain has an 8GB drive partitioned with 1GB swap and the rest on /. These are the bonnie++ figures (Per Chr column, write and read):

iSCSI dom0:      w: 54M, r: 47M (no domU)
Local disk dom0: w: 59M, r: 51M (3 idle domU)
HVM single:      w: 18M, r: 37M

I've then launched bonnie++ on two separate SLC4 HVMs (cloned):

HVM 1: w: 8M, r: 33M
HVM 2: w: 8.5M, r: 3...
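
The per-HVM layout described above, as a sketch (volume group name and size are placeholders):

    lvcreate -L 8G -n hvm1 vg_iscsi   # one 8GB LV per HVM; 1GB swap + / inside the guest
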