similar to: Best mkfs.ext2 performance options on RAID5 in CentOS 4.2

Displaying 20 results from an estimated 900 matches similar to: "Best mkfs.ext2 performance options on RAID5 in CentOS 4.2"

2006 Apr 14
1
Ext3 and 3ware RAID5
I run a decent amount of 3ware hardware, all under centos-4. There seems to be some sort of fundamental disagreement between ext3 and 3ware's hardware RAID5 mode that trashes write performance. As a representative example, one current setup is 2 9550SX-12 boards in hardware RAID5 mode (256KB stripe size) with a software RAID0 stripe on top (also 256KB chunks). bonnie++ results look
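For reference, that kind of layering is usually built along these lines; the device names for the two exported 3ware units (/dev/sda, /dev/sdb) are assumptions:

    # Stripe the two hardware RAID5 units together with md RAID0, keeping the
    # md chunk size equal to the controller's 256KB stripe size.
    mdadm --create /dev/md0 --level=0 --chunk=256 --raid-devices=2 /dev/sda /dev/sdb
    mkfs.ext3 /dev/md0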
2013 Mar 15
0
[PATCH] btrfs-progs: mkfs: add missing raid5/6 description
Signed-off-by: Matias Bjørling <m@bjorling.me> --- man/mkfs.btrfs.8.in | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/man/mkfs.btrfs.8.in b/man/mkfs.btrfs.8.in index 41163e0..db8c57c 100644 --- a/man/mkfs.btrfs.8.in +++ b/man/mkfs.btrfs.8.in @@ -37,7 +37,7 @@ mkfs.btrfs uses all the available storage for the filesystem. .TP \fB\-d\fR, \fB\-\-data
2012 Jan 04
9
Stress test zfs
Hi all, I've got a Solaris 10 9/10 running on a T3. It's an Oracle box with 128GB of memory. I've been trying to load test the box with bonnie++. I seem to get 80 to 90 K writes, but can't seem to get more than a couple K for writes. Any suggestions? Or should I take this to a bonnie++ mailing list? Any help is appreciated. I'm kinda
2005 Jul 28
3
Tyan Thunder K8SE S2892 Report
I had my eye on the Tyan dual-Opteron mobos for a while. I tried to find a posting *anywhere* sharing experiences with these boards under Linux. No such luck. So, placing myself under the heading "Where Angels Fear to Tread," I went ahead and built a system anyway. Here's what I've learned. The specs: Tyan Thunder K8SE S2892, BIOS 1.01 2x Opteron 270, 2GHz Dual-Core, retail
2004 Sep 13
2
CentOS 3.1: sshd and pam /etc/security/limits.conf file descriptor settings problem
Why can't non-uid-0 users have more than 1024 file descriptors when logging in via ssh? I'm trying to allow a user to have a hard limit of 8192 file descriptors (the system defaults to 1024) via the following setting in /etc/security/limits.conf: jdoe hard nofile 8192 But when jdoe logs in via ssh and runs 'ulimit -Hn' he gets '1024' as a response. If he tries to
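The usual culprit on that vintage of CentOS is that pam_limits never runs for ssh sessions, so the limits.conf entry is silently ignored. A sketch of the relevant pieces (exact stock file contents vary by release):

    # /etc/security/limits.conf -- raise jdoe's hard limit
    jdoe    hard    nofile    8192

    # /etc/pam.d/sshd -- pam_limits.so must appear in the session stack
    session    required    pam_limits.so

    # OpenSSH 3.9 and later only consult PAM when sshd_config contains:
    UsePAM yes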
2008 Feb 27
8
Physical disks + Software Raid + lvm in domU
Hi, I'm trying to set up my box as described below: - The Dom0 exports two disks as physical devices to the DomU - The DomU uses the two devices to assemble a software RAID device (/dev/md0) - On /dev/md0 I create an LVM volume group and LVM volumes on it. Everything seems to work fine if the LVM volumes on the DomU are lightly used. Under heavy load the DomU freezes up immediately. The
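A minimal sketch of that layout, assuming the dom0 passes through /dev/sdb and /dev/sdc, the domU sees them as xvdb/xvdc, and RAID1 is used since only two devices are exported (all names illustrative):

    # dom0 guest config: export two whole disks to the domU
    disk = [ 'phy:/dev/sdb,xvdb,w',
             'phy:/dev/sdc,xvdc,w' ]

    # inside the domU: assemble the RAID and layer LVM on top
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/xvdb /dev/xvdc
    pvcreate /dev/md0
    vgcreate vg0 /dev/md0
    lvcreate -L 20G -n data vg0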
2009 Jan 10
3
Poor RAID performance new Xeon server?
I have just purchased an HP ProLiant ML110 G5 server and installed CentOS 5.2 x86_64 on it. It has the following spec: Intel(R) Xeon(R) CPU 3065 @ 2.33GHz, 4GB ECC memory, 4 x 250GB SATA hard disks running at 1.5Gb/s. The onboard RAID controller is enabled, but at the moment I have used mdadm to configure the array. RAID bus controller: Intel Corporation 82801 SATA RAID Controller For a simple
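For a box like that, a 4-disk md RAID5 is typically created along these lines; the partition names and the 256KB chunk size are assumptions:

    mdadm --create /dev/md0 --level=5 --raid-devices=4 --chunk=256 \
          /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
    # performance will be lower until the initial resync finishes
    cat /proc/mdstat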
2006 Oct 04
2
server disk subsystem benchmarks, bonnie++ and/or others?
Greetings. I've searched to no avail so far... there is bound to be something more intelligible out there...??? I am playing with bonnie++ for the first time... May I please get some advice and list experience on using this or other disk subsystem benchmark programs properly, with or without a GUI? The test system in this case is a Compaq DL360 with 2 to 4 GB of DRAM and qty (2) 36GB 10k drives
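A typical bonnie++ invocation for a box like this looks roughly as follows; the mount point and the 8GB file size (about twice RAM, so the page cache doesn't mask disk speed) are assumptions:

    # -d test directory, -s total file size in MB, -n number of small files (x1024),
    # -u user to run as when started as root, -m label for the results
    bonnie++ -d /mnt/test -s 8192 -n 128 -u nobody -m dl360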
2007 Nov 26
4
Filesystem for Maildir
Hi all, Over the last year I did some research and benchmarks based on CentOS 4 to find out which filesystem is better for
2009 Jun 24
3
Unexplained reboots in DRBD82 + OCFS2 setup
We're trying to set up a dual-primary DRBD environment, with a shared disk running either OCFS2 or GFS. The environment is CentOS 5.3 with DRBD82 (but we also tried DRBD83 from testing). Setting up a single primary disk and running bonnie++ on it works. Setting up a dual-primary disk, only mounting it on one node (ext3), and running bonnie++ works. When setting up ocfs2 on the /dev/drbd0
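Dual-primary mode has to be enabled explicitly in the DRBD resource before OCFS2 can mount on both nodes; a sketch of the relevant drbd.conf fragment (option names differ slightly between 8.2 and 8.3, and the split-brain policies shown are only examples):

    resource r0 {
      net {
        allow-two-primaries;
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
      }
      startup {
        become-primary-on both;
      }
    }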
2008 Mar 04
2
7.0-Release and 3ware 9550SXU w/BBU - horrible write performance
Hi, I've got a new server with a 3ware 9550SXU with the battery. I am using FreeBSD 7.0-Release (tried both 4BSD and ULE) on AMD64, and the 3ware write performance is just plain horrible. Something is obviously wrong but I'm not sure what. I've got a 4-disk RAID 10 array. According to 3dm2 the cache is on. I even tried setting the StorSave preference to
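On the 9550 series those settings are usually inspected and changed with tw_cli; a sketch, with the controller and unit numbers assumed:

    tw_cli /c0/u0 show all          # confirm cache and StorSave state
    tw_cli /c0/u0 set cache=on      # write cache (only safe with a working BBU)
    tw_cli /c0/u0 set storsave=perform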
2004 Jul 14
3
ext3 performance with hardware RAID5
I'm setting up a new fileserver. It has two RAID controllers, a PERC 3/DI providing mirrored system disks and a PERC 3/DC providing a 1TB RAID5 volume consisting of eight 144GB U160 drives. This will serve NFS, Samba and sftp clients for about 200 users. The logical drive was created with the following settings: RAID = 5 stripe size = 32kb write policy = wrback read policy =
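Aligning ext3 to the array usually means passing a stride of stripe-size / block-size at mkfs time; with a 32KB stripe and 4KB blocks that works out to 8. A sketch, with the device name assumed:

    # stride = 32KB stripe / 4KB block = 8
    # (older e2fsprogs take -R stride=8 instead of -E stride=8)
    mke2fs -j -b 4096 -E stride=8 /dev/sdb1
    mount -o noatime /dev/sdb1 /export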
2009 Dec 24
6
benchmark results
I've had the chance to use a test system here and couldn't resist running a few benchmark programs on it: bonnie++, tiobench, dbench and a few generic ones (cp/rm/tar/etc...) on ext{234}, btrfs, jfs, ufs, xfs, zfs. All with standard mkfs/mount options plus noatime for all of them. Here are the results, no graphs - sorry: http://nerdbynature.de/benchmarks/v40z/2009-12-22/ Reiserfs
2013 Sep 04
4
Linux tool to check random I/O performance
We just purchased a new I/O card and would like to check its I/O performance. For a sequential I/O performance test we can use "hdparm -t /dev/xxx", but for a random I/O performance test, which Linux command can I use? ** Our environment does NOT allow installing third-party software. Thanks
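Without third-party tools, one rough option is a shell loop issuing small direct-I/O reads at random offsets with dd; a sketch only (device name and iteration count are assumptions, and this is far cruder than a real benchmark like fio or iozone):

    DEV=/dev/sdb
    SECTORS=$(blockdev --getsz $DEV)   # device size in 512-byte sectors
    BLOCKS=$((SECTORS / 8))            # number of 4KB blocks
    time for i in $(seq 1 1000); do
        off=$(( (RANDOM * 32768 + RANDOM) % BLOCKS ))
        dd if=$DEV of=/dev/null bs=4k count=1 skip=$off iflag=direct 2>/dev/null
    done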
2005 May 11
5
Xen reboots on dom-U disk stress...
Hi all, I tried to run the bonnie++ disk stresser in a dom-U whose disk is backed by a non-local (on NFS) loop-back file. The machine rebooted pretty quickly. So, how do I tell what's barfing? Is it Xen? Is it dom-0 (NFS or loop-back)? I looked in dom-0's /var/log/messages and didn't see any obvious record of a dom-0 whoopsie (but is that the right place to look
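Beyond /var/log/messages in dom-0, the hypervisor's own ring buffer and xend's logs are usually the first places to check; a sketch:

    xm dmesg                          # messages from the Xen hypervisor itself
    less /var/log/xen/xend.log        # domain lifecycle events
    less /var/log/xen/xend-debug.log
    # a serial console helps when the box reboots before anything reaches disk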
2005 Jul 14
1
a comparison of ext3, jfs, and xfs on hardware raid
I'm setting up a new file server and I just can't seem to get the expected performance from ext3. Unfortunately I'm stuck with ext3 due to my use of Lustre. So I'm hoping you dear readers will send me some tips for increasing ext3 performance. The system is using an Areca hardware raid controller with 5 7200RPM SATA disks. The RAID controller has 128MB of cache and the disks
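Beyond stripe alignment at mkfs time, the common ext3 knobs on hardware RAID are a larger journal and a relaxed journaling mode; a sketch with the device name assumed (note that data=writeback trades crash-consistency guarantees for speed):

    mke2fs -j -J size=128 -b 4096 /dev/sda1
    mount -o noatime,data=writeback /dev/sda1 /mnt/data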
2007 Jan 11
4
Help understanding some benchmark results
G'day, all. So, I've decided to migrate my home server from Linux+swRAID+LVM to Solaris+ZFS, because it seems to hold much better promise for data integrity, which is my primary concern. However, naturally, I decided to do some benchmarks in the process, and I don't understand why the results are what they are. I thought I had a reasonable understanding of ZFS, but now
2008 Nov 13
7
Kernel oops when running bonnie++ on btrfs
I wanted to see how btrfs compares to other filesystems, so I have been running bonnie++ on it. While the results are good (much faster than ext2), every once in a while I get a kernel oops. I am testing on Xubuntu 8.10 with the 2.6.27-7-686 kernel using the latest git sources. Most of the time the oops happens within 20 minutes of running bonnie++, but sometimes it takes a few hours. This happens with and