Dedhi Sujatmiko
2010-Mar-31 09:05 UTC
[zfs-discuss] Need advice on handling 192 TB of Storage on hardware raid storage
Dear all,

I have a hardware-based array storage with a capacity of 192TB, sliced into 64 LUNs of 3TB each. What will be the best way to configure ZFS on this? Of course we are not requiring the self-healing capability of ZFS; we just want the capability of handling big file systems, and performance.

Currently we are running Solaris 10 May 2009 (Update 7), and have configured ZFS as follows (a rough sketch of the commands, with example device names, is at the end of this message):

a. 1 hardware LUN (3TB) becomes 1 zpool
b. 1 zpool becomes 1 ZFS file system
c. 1 ZFS file system becomes 1 mountpoint (obviously)

The problem we have is that when the customer runs I/O in parallel to the 64 file systems, the kernel usage (%sys) shoots up very high, into the 90% region, and the IOPS level degrades. It can also be seen that during that time the storage's own front-end CPU does not change much, which means the bottleneck is not at the hardware storage level, but somewhere inside the Solaris box.

Does anyone have experience with a similar setup to the one I have? Or can anybody point me to information on the best way to deal with hardware storage of this size?

Please advise, and thanks in advance,
Dedhi
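For illustration, the layout described in points a-c above amounts to roughly the following; the device and pool names are examples only, not the real ones on the box:

  # One pool, one file system, one mountpoint per 3TB LUN,
  # repeated for each of the 64 LUNs (illustrative names).
  zpool create -m /data/data01 data01 c4t1d0
  zpool create -m /data/data02 data02 c4t2d0
  # ...and so on through data64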
Richard Elling
2010-Mar-31 16:50 UTC
[zfs-discuss] Need advice on handling 192 TB of Storage on hardware raid storage
On Mar 31, 2010, at 2:05 AM, Dedhi Sujatmiko wrote:

> Dear all,
>
> I have a hardware-based array storage with a capacity of 192TB, sliced into 64 LUNs of 3TB each.
> What will be the best way to configure ZFS on this? Of course we are not requiring the self-healing capability of ZFS; we just want the capability of handling big file systems, and performance.

Answers below, based on the assumption that you value performance over space over dependability.

> Currently we are running Solaris 10 May 2009 (Update 7), and have configured ZFS as follows:

First, upgrade or patch to the latest Solaris 10 kernel/zfs bits.

> a. 1 hardware LUN (3TB) becomes 1 zpool

The RAID configuration of the LUs will be critical. ZFS can easily be configured to overrun most RAID arrays using modest server hardware.

> b. 1 zpool becomes 1 ZFS file system
> c. 1 ZFS file system becomes 1 mountpoint (obviously)

I see no reason to do this. For best performance, put multiple LUs into the pool (a rough sketch follows at the end of this message).

> The problem we have is that when the customer runs I/O in parallel to the 64 file systems, the kernel usage (%sys) shoots up very high, into the 90% region, and the IOPS level degrades. It can also be seen that during that time the storage's own front-end CPU does not change much, which means the bottleneck is not at the hardware storage level, but somewhere inside the Solaris box.

The cause of the high system time should be investigated. I have seen huge amounts of I/O to RAID arrays consume relatively little system time.

> Does anyone have experience with a similar setup to the one I have? Or can anybody point me to information on the best way to deal with hardware storage of this size?

In general, spread the I/O across all resources to get the best overall response time.

> Please advise, and thanks in advance

HTH,
 -- richard

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
Las Vegas, April 29-30, 2010 http://nexenta-vegas.eventbrite.com
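To make the "multiple LUs into the pool" suggestion concrete, here is a minimal sketch. The device, pool, and file system names are invented for illustration; on the real box, use the LUNs as reported by format. Since the array already provides the RAID protection, the LUNs are simply striped as top-level vdevs:

  # One pool striped across several of the 3TB LUNs (example devices),
  # then multiple file systems carved out of it, instead of one pool per LUN.
  zpool create datapool c4t1d0 c4t2d0 c4t3d0 c4t4d0 \
      c4t5d0 c4t6d0 c4t7d0 c4t8d0
  zfs create datapool/fs01
  zfs create datapool/fs02
  # ...every file system now shares the bandwidth of all LUNs in the pool

For investigating the high system time, the usual starting points on Solaris 10 are mpstat and a lockstat kernel profile, for example:

  # Per-CPU breakdown of usr/sys time and cross-calls
  mpstat 5
  # Sample where the kernel spends its time for 30 seconds, top 20 entries
  lockstat -kIW -D 20 sleep 30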