If you dig into the email archives you'll see lots of threads about where to use ZFS or hardware-level RAID, the tradeoffs, possible performance hits, etc. It really is context sensitive.

Karen Chau wrote:
> Hi Torrey, thanks for your response.
> I'm not sure if I can create a LUN using a single disk on the 6130.
> If I use 6 disks to create 3 LUNs (2 disks per LUN) and create a
> raidz pool, I will have stripe-with-parity at *BOTH* the LUN level and
> the ZFS level. Would this cause a performance issue? How about recovery?
>
> Torrey McMahon wrote On 01/03/07 09:56,:
>> You want to give ZFS multiple LUNs so it can have redundancy within
>> the pool (mirror or RAIDZ). Otherwise, you will not be able to
>> recover from certain types of errors. A zpool with a single LUN would
>> only let you detect the errors.
>>
>> Karen Chau wrote:
>>>
>>> Subject: Re: ZFS and storage array
>>> From: Karen Chau <Karen.Chau at Sun.COM>
>>> Date: Wed, 03 Jan 2007 08:55:56 -0800
>>> To: zfs-interest at Sun.COM
>>> CC: ITSM-eng <itsm-engineering at Sun.COM>
>>>
>>> http://docs.sun.com/source/819-0032-10/chapter9.html
>>>
>>> The Sun StorEdge 6130 array software is configured with a default
>>> storage profile, storage pool, and storage domain:
>>>
>>> * The default storage profile configures associated volumes to
>>>   have a RAID-5 RAID level, 512-Kbyte segment size, enabled
>>>   read-ahead mode, FC disk type, and a variable number of drives.
>>> * The default storage pool uses the Default profile (RAID-5) and
>>>   groups all volumes with the same storage characteristics, as
>>>   defined by the storage profile.
>>>
>>> If I create a raidz pool using the default storage pool (raidz on
>>> top of RAID-5), will this make sense or work?
>>>
>>> Karen Chau wrote On 01/02/07 15:27,:
>>>> We're upgrading to a new server with a 6130 storage array (14x
>>>> 279.396 GB); I want to create a 700G RAID-Z zpool just like our
>>>> current setup.
>>>>
>>>> How should I do this to get the best performance?
>>>>
>>>> 1) Create a volume using 3 disks and create the zpool using this volume?
>>>> 2) Create 3 volumes using 3 disks and create the zpool using 3 volumes?
>>>>
>>>> Our application, Canary, has approx 750 clients uploading to the
>>>> server every 10 mins; that's approx 108,000 gzip tarballs per day
>>>> written to the /upload directory. The parser untars each tarball,
>>>> which consists of 8 ASCII files, into the /archives directory. /app
>>>> is our application and tools (apache, tomcat, etc.) directory. We
>>>> also have batch jobs that run throughout the day; I would say we
>>>> read 2 to 3 times more than we write.
>>>>
>>>> directory info
>>>> --------------
>>>>
>>>> /app      - 40G
>>>> /upload   - 20G
>>>> /archives - 640G
>>>>
>>>> itsm-mpk-2# zpool status canary
>>>>   pool: canary
>>>>  state: ONLINE
>>>>  scrub: none requested
>>>> config:
>>>>
>>>>         NAME        STATE     READ WRITE CKSUM
>>>>         canary      ONLINE       0     0     0
>>>>           raidz     ONLINE       0     0     0
>>>>             c1t1d0  ONLINE       0     0     0
>>>>             c1t2d0  ONLINE       0     0     0
>>>>             c1t3d0  ONLINE       0     0     0
>>>>
>>>> errors: No known data errors
>>>>
>>>> Thanks,
>>>> Karen
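Torrey's advice above (give ZFS multiple LUNs so the pool itself carries the redundancy) corresponds to Karen's option 2. A minimal sketch of that layout, assuming the three 6130 volumes show up on the host as c2t0d0, c2t1d0, and c2t2d0 (hypothetical device names; substitute whatever LUNs the array actually exports, e.g. as listed by `format`):

```shell
# Option 2: present three array volumes (LUNs) to the host and let ZFS
# build the redundancy across them. Device names are hypothetical.
zpool create canary raidz c2t0d0 c2t1d0 c2t2d0

# With ZFS-level redundancy, checksum errors can be repaired, not just
# detected. A scrub walks the pool and self-heals from parity:
zpool scrub canary
zpool status -v canary
```

By contrast, option 1 (one large RAID-5 volume presented as a single LUN) leaves ZFS with no redundant copy of its own, so it can detect corruption via checksums but cannot repair it, which is the recovery limitation Torrey describes.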
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss