Hi!  I'm new to the list and new to ZFS.

I have the following hardware and would like opinions on implementation.

Sun Enterprise T5220
FC HBA
Brocade 200E 4 Gbit switch
Sun 2540 FC Disk Array w/ 12 1TB disk drives

My plan is to create a small SAN fabric with the T5220 as the initiator
(additional initiators to be added later) connected to the switch and the
2540 as the target.

My desire is to create two 5-disk RAID 5 sets with one hot spare each, then
use ZFS to pool the two sets into one 8 TB pool with several ZFS file
systems in the pool.

Now I have several questions:

1) Does this plan seem ok?
2) Does anyone have experience with the 2540?
3) I've read that it's best practice to create the RAID set utilizing
hardware RAID utilities vice using ZFS raidz.  Any wisdom on this?

Thanks in advance.

--Kenny
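A rough sketch of what that plan looks like from the host side once the 2540
presents the two RAID 5 LUNs (device names here are placeholders; note that a
plain stripe of the two LUNs carries no ZFS-level redundancy, since the
redundancy lives inside the array):

    # Pool the two ~4TB hardware-RAID LUNs into one ZFS pool (placeholder names)
    zpool create tank c4t0d0 c4t1d0

    # Then create as many file systems as needed inside the pool
    zfs create tank/home
    zfs create tank/projects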
James C. McPherson
2008-May-16 12:40 UTC
[zfs-discuss] ZFS and Sun Disk arrays - Opinions?
Kenny wrote:
> Hi!  I'm new to the list and new to ZFS.
>
> I have the following hardware and would like opinions on implementation.
>
> Sun Enterprise T5220, FC HBA, Brocade 200E 4 Gbit switch, Sun 2540 FC Disk
> Array w/ 12 1TB disk drives
>
> My plan is to create a small SAN fabric with the T5220 as the initiator
> (additional initiators to be added later) connected to the switch and the
> 2540 as the target.
>
> My desire is to create two 5-disk RAID 5 sets with one hot spare each, then
> use ZFS to pool the two sets into one 8 TB pool with several ZFS file
> systems in the pool.
>
> Now I have several questions:
>
> 1) Does this plan seem ok?

There doesn't seem to be anything inherently wrong with it :-)

> 2) Does anyone have experience with the 2540?

Kinda. I worked on adding MPxIO support to the mpt driver so
we could support the SAS version of this unit - the ST2530.

What sort of experience are you after? I've never used one
of these boxes in production - only ever for benchmarking and
bugfixing :-) I think Robert Milkowski might have one or two
of them, however.

> 3) I've read that it's best practice to create the RAID set utilizing
> hardware RAID utilities vice using ZFS raidz.  Any wisdom on this?

You've got a whacking great cache in the ST2540, so you might as
well make use of it.

Once you've got more questions after reading the Best Practices guide
(http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide)
post a followup to this thread.

You _will_ have questions. You will, I just know it! :-)

cheers,
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp       http://www.jmcp.homeunix.com/blog
Hello James,

>> 2) Does anyone have experience with the 2540?

JCM> Kinda. I worked on adding MPxIO support to the mpt driver so
JCM> we could support the SAS version of this unit - the ST2530.

JCM> What sort of experience are you after? I've never used one
JCM> of these boxes in production - only ever for benchmarking and
JCM> bugfixing :-) I think Robert Milkowski might have one or two
JCM> of them, however.

Yeah, I do have several of them (both 2530 and 2540).

2530 (SAS) - cables tend to pop out sometimes when you are around
servers... then MPxIO does not work properly if you just hot-unplug
and hot-replug the SAS cable... there is still a 2TB LUN size limit
IIRC... other than that it is generally a good value

2540 (FC) - 2TB LUN size limit IIRC, other than that it is a good
value array

--
Best regards,
 Robert Milkowski                       mailto:milek at task.gda.pl
                                        http://milek.blogspot.com
James C. McPherson
2008-May-16 14:11 UTC
[zfs-discuss] ZFS and Sun Disk arrays - Opinions?
Robert Milkowski wrote:
....
> Yeah, I do have several of them (both 2530 and 2540).
>
> 2530 (SAS) - cables tend to pop out sometimes when you are around
> servers... then MPxIO does not work properly if you just hot-unplug
> and hot-replug the SAS cable...

If you plug the cable back in within 20 seconds of it coming loose,
that might just give MPxIO a bit of a headache.

James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp       http://www.jmcp.homeunix.com/blog
On May 16, 2008, at 10:04 AM, Robert Milkowski wrote:

> Hello James,
>
>>> 2) Does anyone have experience with the 2540?
>
> JCM> Kinda. I worked on adding MPxIO support to the mpt driver so
> JCM> we could support the SAS version of this unit - the ST2530.
>
> JCM> What sort of experience are you after? I've never used one
> JCM> of these boxes in production - only ever for benchmarking and
> JCM> bugfixing :-) I think Robert Milkowski might have one or two
> JCM> of them, however.
>
> Yeah, I do have several of them (both 2530 and 2540).

We did a try and buy of the 2510, 2530 and 2540.

> 2530 (SAS) - cables tend to pop out sometimes when you are around
> servers... then MPxIO does not work properly if you just hot-unplug
> and hot-replug the SAS cable... there is still a 2TB LUN size limit
> IIRC... other than that it is generally a good value

Yeah, the SFF-8088 connectors are a bit rigid and clumsy, but the
performance was better than everything else we tested in the 2500 series.

> 2540 (FC) - 2TB LUN size limit IIRC, other than that it is a good
> value array

Echo.  We like the 2540 as well, and will be buying lots of them shortly.

> --
> Best regards,
>  Robert Milkowski                       mailto:milek at task.gda.pl
>                                         http://milek.blogspot.com

-Andy
Hello Robert,

Friday, May 16, 2008, 3:04:48 PM, you wrote:

RM> Hello James,

>>> 2) Does anyone have experience with the 2540?

JCM>> Kinda. I worked on adding MPxIO support to the mpt driver so
JCM>> we could support the SAS version of this unit - the ST2530.

JCM>> What sort of experience are you after? I've never used one
JCM>> of these boxes in production - only ever for benchmarking and
JCM>> bugfixing :-) I think Robert Milkowski might have one or two
JCM>> of them, however.

RM> Yeah, I do have several of them (both 2530 and 2540).

RM> 2530 (SAS) - cables tend to pop out sometimes when you are around
RM> servers... then MPxIO does not work properly if you just hot-unplug
RM> and hot-replug the SAS cable... there is still a 2TB LUN size limit
RM> IIRC... other than that it is generally a good value

RM> 2540 (FC) - 2TB LUN size limit IIRC, other than that it is a good
RM> value array

+ on both arrays you need to make sure that you tweak the array to
ignore SCSI cache flushes, or tweak ZFS not to send them.

--
Best regards,
 Robert                                 mailto:milek at task.gda.pl
                                        http://milek.blogspot.com
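For reference, the ZFS-side version of that tweak is the zfs_nocacheflush
tunable (a sketch only; the setting is global to every pool on the host, so
it is only appropriate when all pools sit behind battery-backed array cache).
The array-side alternative, telling the array to ignore SYNCHRONIZE CACHE
commands, is done through the array's own configuration tools and varies by
model.

    # /etc/system - takes effect at the next boot
    set zfs:zfs_nocacheflush = 1

    # or toggle it on a running Solaris 10 / OpenSolaris system
    echo zfs_nocacheflush/W0t1 | mdb -kw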
On Fri, 16 May 2008, Kenny wrote:

> Sun 2540 FC Disk Array w/ 12 1TB disk drives

It is interesting that the 2540 is available with large disks now.

> My desire is to create two 5-disk RAID 5 sets with one hot spare each,
> then use ZFS to pool the two sets into one 8 TB pool with several
> ZFS file systems in the pool.
>
> Now I have several questions:
>
> 1) Does this plan seem ok?

Another option is to export each entire drive as a LUN and put 10
active drives into one ZFS pool as two raidzs, or one raidz2.  The
other two drives can be retained as spares for the pool.  If
replacement drives are readily sourced on demand, you could use all
12 drives as two raidz2s.  This approach does not allow other systems
to use the 2540 since one host owns the pool.  However, you could
move the ZFS pool to another system if need be.

> 2) Does anyone have experience with the 2540?

Yes.  Please see my white paper at
http://www.simplesystems.org/users/bfriesen/zfs-discuss/2540-zfs-performance.pdf
which discusses my experience with ZFS and the 2540.  The paper was
written back in February and I have yet to experience a hiccup with
the 2540 or ZFS.  Not even one bad block.

> 3) I've read that it's best practice to create the RAID set
> utilizing hardware RAID utilities vice using ZFS raidz.  Any wisdom
> on this?

This is really a philosophical or requirements issue.  The 2540
allows you to create pools and then export only part of the pool as a
LUN to be used by an initiator.  This allows you to create LUNs on
disks which are shared by multiple hosts (initiators), each of which
has its own ZFS pool (or traditional filesystem).  If you really need
to divide up storage at this level, then the 2540 offers flexibility
that you won't get from ZFS.  A drawback to sharing sliced pools in
this way is that if there is a problem with the underlying disks, then
multiple hosts may be impacted during recovery.

The 2540 CAM provides a 4-disk RAID5 config which claims to be tuned
for ZFS.  Someone on the list created three 4-disk RAID5 LUNs this way
and put them all in one ZFS pool, obtaining very good performance.  If
one of those LUNs were to irreparably fail, his entire pool would be
toast.

ZFS experts will tell you that you should not be trusting the 2540 or
its firmware to catch all errors, and so there should always be
redundancy (e.g. mirroring) at the ZFS level.  By exporting each 2540
disk as a LUN, any of the redundancy schemes supported by ZFS (mirror,
raidz, raidz2) can be used from the initiator, essentially ignoring
the ones built into the 2540.

While the 2540's CAM interface is nice, you will find that it is far
slower than ZFS at incorporating your disks (25 tedious minutes in the
CAM admin tool vs less than a second for ZFS).

Bob
======================================
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
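A sketch of the whole-disk-LUN layouts described above, with placeholder
device names (the real names on a multipathed 2540 will be long
scsi_vhci/WWN-style names):

    # Ten data disks as two 5-disk raidz vdevs, plus two hot spares
    zpool create tank \
        raidz  c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 \
        raidz  c2t5d0 c2t6d0 c2t7d0 c2t8d0 c2t9d0 \
        spare  c2t10d0 c2t11d0

    # ...or all twelve disks as two raidz2 vdevs and no spares
    zpool create tank \
        raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0  c2t5d0 \
        raidz2 c2t6d0 c2t7d0 c2t8d0 c2t9d0 c2t10d0 c2t11d0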
On Fri, 16 May 2008, James C. McPherson wrote:
>
>> 3) I've read that it's best practice to create the RAID set utilizing
>> hardware RAID utilities vice using ZFS raidz.  Any wisdom on this?
>
> You've got a whacking great cache in the ST2540, so you might as
> well make use of it.

Exporting each disk as a LUN for use by ZFS does not cause the 2540 to
disable its cache.  In fact, it is clear that this cache is quite
valuable to ZFS write performance when NFS is involved.  I am able to
obtain 90MB/second NFS write performance from a single NFS client
using the 2540.

Due to the inherent design of ZFS, it is not necessary for RAID writes
to be synchronized as they must be for traditional mirroring or RAID5.
If there is a power loss or crash, ZFS will discover where it left off
and bring all redundant copies to a coherent state.  The 2540's cache
will help protect against losing data if there is a power failure.

Bob
======================================
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
I run 3510FC and 2540 units in pairs.

I build 2 5-disk RAID5 LUNs in each array, with 2 disks as global
spares.  Each array has dual controllers and I'm doing multipath.

Then from the server I have access to 2 LUNs from 2 arrays, and I build
a ZFS RAID-10 set from these 4 LUNs, being sure each mirror pair is
constructed with LUNs from both arrays.  Thus I can survive a complete
failure of one array and multiple other failures and keep on trucking.

Performance is quite good since I started using this in /etc/system:

set zfs:zfs_nocacheflush = 1

And since the recent ZFS patches for 10u4, which fixed FSYNC
performance issues, my arrays and servers are hardly breaking a sweat.

I very much like that the arrays can handle lower-level problems for me
like sparing, and ZFS ensures correctness on top of that.  This is for
Cyrus mail-stores, so availability and correctness are paramount, in
case you are wondering if all this belt & suspenders paranoia is
worthwhile.

If/when ZFS acquires a method to ensure that spare#1 in chassis#1 only
gets used to replace failed disks in chassis#1, then I'll reconsider my
position.  Currently there is no mechanism to ensure this, so I could
easily see a spare being pulled from the other chassis and leaving me
with an undesirable dependency if I were doing ZFS with JBOD.
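A sketch of that layout with placeholder device names, pairing one LUN from
each array in every mirror so a whole-chassis failure only takes out one side
of each pair:

    # c6t0d0/c6t1d0 are the two LUNs from array A, c6t2d0/c6t3d0 from array B
    zpool create mailstore \
        mirror c6t0d0 c6t2d0 \
        mirror c6t1d0 c6t3d0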
On Fri, 16 May 2008, Vincent Fox wrote:

> If/when ZFS acquires a method to ensure that spare#1 in chassis#1
> only gets used to replace failed disks in chassis#1, then I'll
> reconsider my position.  Currently there is no mechanism to ensure
> this, so I could easily see a spare being pulled from the other
> chassis and leaving me with an undesirable dependency if I were
> doing ZFS with JBOD.

Good point!  However, I think that the spare is only used until the
original is re-constructed, so its usage should not be very long.

Bob
======================================
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
James C. McPherson
2008-May-16 23:33 UTC
[zfs-discuss] ZFS and Sun Disk arrays - Opinions?
Bob Friesenhahn wrote:
> On Fri, 16 May 2008, James C. McPherson wrote:
>>> 3) I've read that it's best practice to create the RAID set utilizing
>>> hardware RAID utilities vice using ZFS raidz.  Any wisdom on this?
>> You've got a whacking great cache in the ST2540, so you might as
>> well make use of it.
>
> Exporting each disk as a LUN for use by ZFS does not cause the 2540 to
> disable its cache.  In fact, it is clear that this cache is quite
> valuable to ZFS write performance when NFS is involved.  I am able to
> obtain 90MB/second NFS write performance from a single NFS client
> using the 2540.
>
> Due to the inherent design of ZFS, it is not necessary for RAID writes
> to be synchronized as they must be for traditional mirroring or RAID5.
> If there is a power loss or crash, ZFS will discover where it left off
> and bring all redundant copies to a coherent state.  The 2540's cache
> will help protect against losing data if there is a power failure.

Hi Bob,
You've made an assumption about what I wrote.  That assumption is
incorrect.  Kenny, in addition, did not say that he was or was not
going to do what you suggested, and I suggested to him that he go and
look into the ZFS Best Practices wiki to get some ideas.

I'm very, very well aware of the design and behaviour of ZFS; I have
been using it since the build in which it was first integrated.

I am also quite well aware of the design and behaviour of the RAID
engine in the ST2530.

Please re-read my email to Kenny, and don't put in words that I didn't
write.

James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp       http://www.jmcp.homeunix.com/blog
On Sat, 17 May 2008, James C. McPherson wrote:

> Bob Friesenhahn wrote:
>> On Fri, 16 May 2008, James C. McPherson wrote:
>>>> 3) I've read that it's best practice to create the RAID set utilizing
>>>> hardware RAID utilities vice using ZFS raidz.  Any wisdom on this?
>>> You've got a whacking great cache in the ST2540, so you might as
>>> well make use of it.
>
> Hi Bob,
> You've made an assumption about what I wrote.  That assumption
> is incorrect.  Kenny, in addition, did not say that he was or

My assumption, based on your "You've got a whacking great cache in the
ST2540, so you might as well make use of it", was that it was intended
to imply that if the hardware RAID utilities were not used, the 2540's
NV write cache would not be available/useful.

> I am also quite well aware of the design and behaviour of the
> RAID engine in the ST2530.

Since there seems to be no specification of the internal architecture
of the 2530 and 2540 (quite odd for a Sun product!), perhaps you can
create a whitepaper which describes this architecture so that Sun
customers can better understand how to use the product.

I have nothing but praise for the two Sun engineers who helped me
understand and optimize for the 2540 back in February.  Most Sun
engineers on this list are very helpful and we are very thankful for
their kind assistance.

Bob
======================================
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
James C. McPherson
2008-May-16 23:59 UTC
[zfs-discuss] ZFS and Sun Disk arrays - Opinions?
Bob Friesenhahn wrote:
> On Sat, 17 May 2008, James C. McPherson wrote:
>
>> Bob Friesenhahn wrote:
>>> On Fri, 16 May 2008, James C. McPherson wrote:
>>>>> 3) I've read that it's best practice to create the RAID set utilizing
>>>>> hardware RAID utilities vice using ZFS raidz.  Any wisdom on this?
>>>> You've got a whacking great cache in the ST2540, so you might as
>>>> well make use of it.
>>
>> Hi Bob,
>> You've made an assumption about what I wrote.  That assumption
>> is incorrect.  Kenny, in addition, did not say that he was or
>
> My assumption, based on your "You've got a whacking great cache in the
> ST2540, so you might as well make use of it", was that it was intended
> to imply that if the hardware RAID utilities were not used, the 2540's
> NV write cache would not be available/useful.

Indeed.  And there is absolutely no justification for that assumption.
I had hoped that the following sentence suggesting a perusal of the
Best Practices guide would have made it clear that it is indeed
possible (I would say, recommended) to maximise the usage of a cache
and ZFS' specific design features.

>> I am also quite well aware of the design and behaviour of the
>> RAID engine in the ST2530.
>
> Since there seems to be no specification of the internal architecture of
> the 2530 and 2540 (quite odd for a Sun product!), perhaps you can create
> a whitepaper which describes this architecture so that Sun customers can
> better understand how to use the product.

I'm not the person to do that, but I will forward your suggestion on to
somebody who is better placed to do so.

James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp       http://www.jmcp.homeunix.com/blog
My thanks to all for their replies.  Now for a few responses...

James McP. - Yes, I have indeed read the Best Practices Guide and have
a couple of questions for a new thread. <grin>  Thanks for the
suggestion about the cache; I didn't know about this and will research
more.

Bob M. - Thanks for the heads up on the 2 (1.998) TB LUN limit.  This
has me a little concerned, esp. since I have 1 TB drives being
delivered!  Also thanks for the SCSI cache flushing heads up, yet
another item to look up! <grin>

Bob F. - Thanks for the white paper and insight into the CAM interface.

I'm going to re-read the Best Practices guide, Bob's white paper, and
hopefully find the CAM documentation.  Then I can return with more
intelligent questions.

Thanks again to all.

--Kenny
On Mon, 19 May 2008, Kenny wrote:

> Bob M. - Thanks for the heads up on the 2 (1.998) TB LUN limit.
> This has me a little concerned, esp. since I have 1 TB drives being
> delivered!  Also thanks for the SCSI cache flushing heads up, yet
> another item to look up! <grin>

I am not sure if this LUN size limit really exists, or if it exists, in
which cases it actually applies.  On my drive array, I created a 3.6TB
RAID-0 pool with all 12 drives included during the testing process.
Unfortunately, I don't recall if I created a LUN using all the space.

I don't recall ever seeing mention of a 2TB limit in the CAM user
interface or in the documentation.

Bob
======================================
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
Bob Friesenhahn wrote:
> On Mon, 19 May 2008, Kenny wrote:
>
>> Bob M. - Thanks for the heads up on the 2 (1.998) TB LUN limit.
>> This has me a little concerned, esp. since I have 1 TB drives being
>> delivered!  Also thanks for the SCSI cache flushing heads up, yet
>> another item to look up! <grin>
>
> I am not sure if this LUN size limit really exists, or if it exists,
> in which cases it actually applies.  On my drive array, I created a
> 3.6TB RAID-0 pool with all 12 drives included during the testing
> process.  Unfortunately, I don't recall if I created a LUN using all
> the space.
>
> I don't recall ever seeing mention of a 2TB limit in the CAM user
> interface or in the documentation.

The Solaris LUN limit is gone if you're using Solaris 10 and recent
patches.  The array limit(s) are tied to the type of array you're
using.  (Which type is this again?)  CAM shouldn't be enforcing any
limits of its own but only reporting back when the array complains.
The limitation existed in every Sun-branded Engenio array we tested -
2510, 2530, 2540, 6130, 6540.  This limitation is on volumes.  You will
not be able to present a LUN larger than that magical 1.998TB.  I think
it is a combination of both CAM and the firmware.  Can't do it with
sscs either...

Warm and fuzzy: Sun engineers told me they would have a new release of
CAM (and firmware bundle) in late June which would "resolve" this
limitation.

Or just do a ZFS (or even SVM) setup like Bob and I did.  It's actually
pretty nice because the traffic will split to both controllers, giving
you theoretically more throughput so long as MPxIO is functioning
properly.  Only (minor) downside is parity is being transmitted from
the host to the disks rather than living on the controller entirely.

-Andy

________________________________

From: zfs-discuss-bounces at opensolaris.org on behalf of Torrey McMahon
Sent: Mon 5/19/2008 1:59 PM
To: Bob Friesenhahn
Cc: zfs-discuss at opensolaris.org; Kenny
Subject: Re: [zfs-discuss] ZFS and Sun Disk arrays - Opinions?

> The Solaris LUN limit is gone if you're using Solaris 10 and recent
> patches.  The array limit(s) are tied to the type of array you're
> using.  (Which type is this again?)  CAM shouldn't be enforcing any
> limits of its own but only reporting back when the array complains.
On Mon, 19 May 2008, Andy Lubel wrote:

> Or just do a ZFS (or even SVM) setup like Bob and I did.  It's
> actually pretty nice because the traffic will split to both
> controllers, giving you theoretically more throughput so long as
> MPxIO is functioning properly.  Only (minor) downside is parity is
> being transmitted from the host to the disks rather than living on
> the controller entirely.

The bottleneck is in the StorageTek 2540 storage array itself rather
than the connections to it.  Note that for mirroring, ZFS needs to send
more data (2X) over the fiber channel when writing than if the storage
array was doing the RAID.  Regardless, there is only a very tiny
reduction in sequential write performance due to using ZFS with a LUN
per disk rather than RAID-1 in the storage array.  The sequential read
performance is improved considerably since ZFS can intelligently
load-share its reads across the mirrors without depending on the RAID
array to do that.

Giving more of the responsibility to ZFS allows performance to improve
since ZFS is more aware of the task to be performed than the drive
array is.

Bob
======================================
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
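If you want to watch that read load-sharing happen, per-vdev statistics make
it visible (pool name assumed from the earlier sketches):

    # Per-device bandwidth every 5 seconds; under a read workload the
    # traffic should spread across both sides of each mirror
    zpool iostat -v tank 5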
The release should be out any day now.  I think it's being pushed to
the external download site whilst we type/read.

Andy Lubel wrote:
> The limitation existed in every Sun-branded Engenio array we tested -
> 2510, 2530, 2540, 6130, 6540.  This limitation is on volumes.  You
> will not be able to present a LUN larger than that magical 1.998TB.
> I think it is a combination of both CAM and the firmware.  Can't do
> it with sscs either...
>
> Warm and fuzzy: Sun engineers told me they would have a new release
> of CAM (and firmware bundle) in late June which would "resolve" this
> limitation.
>
> Or just do a ZFS (or even SVM) setup like Bob and I did.  It's
> actually pretty nice because the traffic will split to both
> controllers, giving you theoretically more throughput so long as
> MPxIO is functioning properly.  Only (minor) downside is parity is
> being transmitted from the host to the disks rather than living on
> the controller entirely.
>
> -Andy
Hi,

It's my understanding that CAM doesn't bundle the new ST6x40 firmware
(7.1) at this point.  However, the new firmware is available today by
request and it does remove the 2TB limitation for the 6140 and 6540.
As Andy had suggested, it does require a new version of CAM though,
6.1.

The ST25x0 firmware that fixes the 2TB limitation is still coming
though.

Regards.

-------- Original Message --------
Subject: Re: [zfs-discuss] ZFS and Sun Disk arrays - Opinions?
From: Torrey McMahon <tmcmahon2 at yahoo.com>
To: Andy Lubel <andy.Lubel at gtsi.com>
CC: zfs-discuss at opensolaris.org, Kenny <knoe at bigfoot.com>
Date: Mon May 19 15:18:51 2008

> The release should be out any day now.  I think it's being pushed to
> the external download site whilst we type/read.
Hi All;

The 2 TB limit on the 6000 series will be removed when we release CAM
6.1 and Crystal firmware.  I can't give an actual date at the moment
but it's pretty close.  The same will happen for the 2500 series but it
will take some more time.

Mertol

Mertol Ozyoney
Storage Practice - Sales Manager

Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +902123352222
Email mertol.ozyoney at Sun.COM

-----Original Message-----
From: zfs-discuss-bounces at opensolaris.org
[mailto:zfs-discuss-bounces at opensolaris.org] On Behalf Of Andy Lubel
Sent: Tuesday, 20 May 2008 00:30
To: Torrey McMahon; Bob Friesenhahn
Cc: zfs-discuss at opensolaris.org; Kenny
Subject: Re: [zfs-discuss] ZFS and Sun Disk arrays - Opinions?

The limitation existed in every Sun-branded Engenio array we tested -
2510, 2530, 2540, 6130, 6540.  This limitation is on volumes.  You will
not be able to present a LUN larger than that magical 1.998TB.  I think
it is a combination of both CAM and the firmware.  Can't do it with
sscs either...