Hello fellow sysadmins!

I've assembled a whitebox system with a SuperMicro motherboard, case, 8GB of memory and a single quad-core Xeon processor.

I have two 9650SE-8LPML cards (8 ports each) in each server with 12 1TB SATA drives total: three drives per "lane" on each card.

CentOS 5.2 x86_64.

I'm looking for advice on tuning this thing for performance, especially for the role of an NFS datastore for VMware, or an iSCSI one.

I set up two volumes (one per card): one RAID6 and one RAID5. I used the default 64K block size and am trying various filesystems in tandem with it. I stumbled across the following recommendations on 3Ware's site:

    echo "64" > /sys/block/sda/queue/max_sectors_kb
    blockdev --setra 16384 /dev/sda
    echo "512" > /sys/block/sda/queue/nr_requests

But I am wondering if there are other things I should be looking at, including changing the I/O scheduler. Any particular options I should use at filesystem creation to match up with my RAID block size?

I also noted that there is a newer 3Ware driver (2.26.08.004) available than the one that comes stock with CentOS 5.2 (2.26.02.008). I'm not sure whether I can expect a performance improvement by "upgrading", and I imagine I'd have to mess with my initrd file in any case, or boot with a driver-disk option and blacklist the built-in driver...

Thanks for any feedback!

Ray
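For reference, a minimal sketch of how the I/O scheduler can be inspected and switched, and how the 3Ware-recommended settings above can be made to survive a reboot on CentOS 5. The device name /dev/sda and the choice of the deadline scheduler are assumptions for illustration, not recommendations from the thread:

    # Show the available schedulers; the active one appears in brackets
    cat /sys/block/sda/queue/scheduler

    # Switch to the deadline scheduler (illustrative choice; cfq is the
    # CentOS 5 default)
    echo deadline > /sys/block/sda/queue/scheduler

    # Persist the tunings across reboots, e.g. by appending to /etc/rc.local
    cat >> /etc/rc.local <<'EOF'
    echo 64 > /sys/block/sda/queue/max_sectors_kb
    echo 512 > /sys/block/sda/queue/nr_requests
    blockdev --setra 16384 /dev/sda
    echo deadline > /sys/block/sda/queue/scheduler
    EOF

If the newer 3w-9xxx driver were installed as a kernel module, the initrd would also need rebuilding so the boot path picks it up, e.g. mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r) (standard CentOS 5 usage; the exact procedure depends on how the driver is packaged).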
Ray Van Dolson wrote:
> I'm looking for advice on tuning this thing for performance.
> [...]

Ray, I've had good performance from XFS with large filesystems. What kind of files are you looking to use: lots of smaller files, or large media files?
On Thu, Jan 15, 2009 at 06:04:59PM -0500, Rob Kampen wrote:
> Ray, I've had good performance from XFS with large filesystems.
> What kind of files are you looking to use: lots of smaller files, or
> large media files?

I was leaning towards using XFS as well. We'll probably be handling a lot of large files (VMware datastore).

In my initial tests the iSCSI target (tgtd) appears to run quite quickly, whereas the NFS daemon gets bogged down and results in high system load. I'm not yet sure whether I should be using tgtd [1] or IET (iSCSI Enterprise Target) [2].

Thanks,
Ray

[1] http://stgt.berlios.de
[2] http://iscsitarget.sourceforge.net/
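On the earlier question of matching filesystem-creation options to the RAID block size: XFS can be told the array geometry at mkfs time. A minimal sketch, assuming the controller's 64K stripe unit and one six-drive array per card (so four data disks on the RAID6 volume and five on the RAID5 one); the device names and mount point are hypothetical:

    # RAID6 volume: 6 drives, 2 parity -> 4 data disks
    # su = stripe unit (controller block size), sw = stripe width in data disks
    mkfs.xfs -d su=64k,sw=4 /dev/sdb

    # RAID5 volume: 6 drives, 1 parity -> 5 data disks
    mkfs.xfs -d su=64k,sw=5 /dev/sdc

    # After mounting, confirm the recorded geometry
    # (sunit/swidth are reported in filesystem blocks)
    mount /dev/sdb /mnt/datastore
    xfs_info /mnt/datastore

Aligning su to the controller's stripe size keeps XFS allocation boundaries on full hardware stripes, which matters most for the parity RAID levels discussed here since misaligned writes force read-modify-write cycles.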