Hi,

I am trying to find out some definite answers on what needs to be done on an STK 2540 to set the Ignore Cache Sync option. The best I could find is Bob's "Sun StorageTek 2540 / ZFS Performance Summary" (dated Feb 28, 2008, thank you, Bob), in which he quotes a posting of Joel Miller:

To set new values:

service -d arrayname -c set -q nvsram region=0xf2 offset=0x17 value=0x01 host=0x00
service -d arrayname -c set -q nvsram region=0xf2 offset=0x18 value=0x01 host=0x00
service -d arrayname -c set -q nvsram region=0xf2 offset=0x21 value=0x01 host=0x00

Host region 00 is Solaris (w/Traffic Manager).

Is this information still current for F/W 07.35.44.10? I have an LSI/Sun presentation stating that it should be sufficient to set byte 0x21 - what is correct?

Bonus question: Is there a way to determine the setting which is currently active, if I don't know whether the controller has been booted since the nvsram was potentially modified?

Thank you, Nils
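For convenience, the three settings quoted above can be applied in one loop. This is only a sketch: "array1" is a placeholder array name, and with DRYRUN=1 (the default here) it just prints the commands instead of invoking the CAM service CLI:

```shell
# Sketch: apply all three NVSRAM overrides from Joel Miller's posting.
# ARRAY_NAME is a placeholder; set DRYRUN=0 on a live system with the
# CAM service CLI installed.
ARRAY_NAME=${ARRAY_NAME:-array1}   # placeholder array name
DRYRUN=${DRYRUN:-1}                # default: only print the commands
for OFFSET in 0x17 0x18 0x21; do
    CMD="service -d $ARRAY_NAME -c set -q nvsram region=0xf2 offset=$OFFSET value=0x01 host=0x00"
    if [ "$DRYRUN" = 1 ]; then
        echo "$CMD"
    else
        $CMD   # requires the CAM service CLI on a live system
    fi
done
```

Whether all three bytes or only 0x21 are needed on current firmware is exactly the open question of this thread.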
On Tue, 13 Oct 2009, Nils Goroll wrote:

> I am trying to find out some definite answers on what needs to be done on an
> STK 2540 to set the Ignore Cache Sync option. The best I could find is Bob's
> "Sun StorageTek 2540 / ZFS Performance Summary" (dated Feb 28, 2008, thank
> you, Bob), in which he quotes a posting of Joel Miller:

I should update this paper since the performance is now radically different and the StorageTek 2540 CAM configurables have changed.

> Is this information still current for F/W 07.35.44.10?

I suspect that the settings don't work the same as before, but don't know how to prove it.

> Bonus question: Is there a way to determine the setting which is currently
> active, if I don't know if the controller has been booted since the nvsram
> potentially got modified?

From what I can tell, the controller does not forget these settings due to a reboot or firmware update. However, new firmware may not provide the same interpretation of the values.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Hi Bob and all,

> I should update this paper since the performance is now radically
> different and the StorageTek 2540 CAM configurables have changed.

That would be great. I think you'd do the community (and Sun, probably) a big favor.

>> Is this information still current for F/W 07.35.44.10?
>
> I suspect that the settings don't work the same as before, but don't
> know how to prove it.

So this sounds like we need to wait for someone to come up with a definite answer.
Hi Bob and all,

> So this sounds like we need to wait for someone to come up with a definite
> answer.

I've received some helpful information on this:

> Byte 17 is for "Ignore Force Unit Access".
> Byte 18 is for Ignore Disable Write Cache.
> Byte 21 is for Ignore Cache Sync.
>
> Change ALL settings to 1 to make sure all "bad" commands are ignored.
>
> Byte 21 is the most important one, the other two settings are for safety.

Note: Personally, I think that talking about "safety" in this context can be a little misleading. My understanding of what is meant here is to make sure that the cache is always being used - which can mean the opposite of (data) safety (I've just learned from Wikipedia that "Force Unit Access" means to bypass any read cache).

> Newer Solaris (05/08 and higher) should automatically detect a Sun Storage
> array and should handle the ICS correctly without any modification by reading
> the Sync-NV bit.

Can anyone make a definite statement on this? My understanding is that it does NOT yet work as it should, see also:

http://www.opensolaris.org/jive/thread.jspa?messageID=245256

In other words, my understanding is that we DO still need the hacks on the 61xx/25xx or zfs:zfs_nocacheflush=1 for optimal performance.

Regarding my bonus question: I haven't yet found a definite answer on whether there is a way to read the currently active controller setting. I still assume that the nvsram settings which can be read with

service -d <arrayname> -c read -q nvsram region=0xf2 host=0x00

do not necessarily reflect the current configuration, and that the only way to make sure the controller is running with that configuration is to reset it.

Nils
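For reference, the host-side zfs:zfs_nocacheflush=1 workaround mentioned above is a single line in /etc/system (applied at the next reboot). A minimal sketch, deliberately writing to a local scratch file unless SYSTEM_FILE is pointed at the real /etc/system:

```shell
# Sketch: add the global ZFS cache-flush override mentioned above.
# Defaults to a local scratch file; point SYSTEM_FILE at /etc/system on
# a real host. Note this disables cache flushes for ALL pools on the
# host, so it is only sane when every pool sits on battery-backed
# array cache.
SYSTEM_FILE=${SYSTEM_FILE:-./system.sketch}
touch "$SYSTEM_FILE"
grep -q 'zfs_nocacheflush' "$SYSTEM_FILE" || \
    echo 'set zfs:zfs_nocacheflush = 1' >> "$SYSTEM_FILE"
```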
On Tue, 13 Oct 2009, Nils Goroll wrote:

> Regarding my bonus question: I haven't yet found a definite answer on whether
> there is a way to read the currently active controller setting. I still
> assume that the nvsram settings which can be read with
>
> service -d <arrayname> -c read -q nvsram region=0xf2 host=0x00
>
> do not necessarily reflect the current configuration and that the only way to
> make sure the controller is running with that configuration is to reset it.

I believe that in the STK 2540 the controllers operate Active/Active, except that each controller is Active for half the drives and Standby for the others. Each controller has a copy of the configuration information. Whichever one you communicate with is likely required to mirror the changes to the other.

In my setup I load-share the fiber channel traffic by assigning six drives as active on one controller and six drives as active on the other controller, and the drives are individually exported with a LUN per drive. I used CAM to do that. MPxIO sees the changes and maps half the paths down each FC link, for more performance than one FC link offers.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
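A quick way to check that MPxIO is really spreading paths as described is the per-LUN path counts reported by `mpathadm list lu` on Solaris. A sketch that parses a captured sample (the device name and the exact output layout here are assumptions, not live output):

```shell
# Sketch: pull MPxIO path counts out of `mpathadm list lu` output.
# SAMPLE is a made-up stand-in; on a live host you would use:
#   SAMPLE=$(mpathadm list lu)
SAMPLE='/dev/rdsk/c4t600A0B800029E5D20000000000000000d0s2
        Total Path Count: 2
        Operational Path Count: 2'
TOTAL=$(echo "$SAMPLE" | awk '/Total Path Count:/ {print $NF}')
OPER=$(echo "$SAMPLE" | awk '/Operational Path Count:/ {print $NF}')
echo "paths: $TOTAL total, $OPER operational"
```

With two FC links and both controllers active, each LUN should show two total paths, all operational.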
Hi Bob,

>> Regarding my bonus question: I haven't yet found a definite answer on
>> whether there is a way to read the currently active controller setting. I
>> still assume that the nvsram settings which can be read with
>>
>> service -d <arrayname> -c read -q nvsram region=0xf2 host=0x00
>>
>> do not necessarily reflect the current configuration and that the only
>> way to make sure the controller is running with that configuration is
>> to reset it.
>
> I believe that in the STK 2540, the controllers operate Active/Active
> except that each controller is Active for half the drives and Standby
> for the others. Each controller has a copy of the configuration
> information. Whichever one you communicate with is likely required to
> mirror the changes to the other.

I would assume the same. My question was more along the lines of "does 'service read' read the stored config or rather the active config, or are the two always the same?" Anyway, this subtle detail might not make a difference in most scenarios.

Nils
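Even if `service read` only shows the stored configuration, a small helper can at least decode the three ICS-related bytes from it. The hex-dump layout below ("0xNN: b0 b1 ... b15") is an assumption for illustration, not captured output, and as discussed above the stored values match the active config only after a controller reset:

```shell
# Sketch: decode offsets 0x17, 0x18 and 0x21 from an NVSRAM region dump.
# NVSRAM_DUMP is a made-up sample; the real `service ... read` output
# format may differ.
NVSRAM_DUMP='0x10: 00 00 00 00 00 00 00 01 01 00 00 00 00 00 00 00
0x20: 00 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00'

byte_at() {  # byte_at <row base> <column 0-15>
    echo "$NVSRAM_DUMP" | awk -v base="$1" -v col="$2" \
        '$1 == base ":" {print $(col + 2)}'
}
echo "0x17 (Ignore Force Unit Access)   = $(byte_at 0x10 7)"
echo "0x18 (Ignore Disable Write Cache) = $(byte_at 0x10 8)"
echo "0x21 (Ignore Cache Sync)          = $(byte_at 0x20 1)"
```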
7.x FW on the 2500 and 6000 series does not operate the same way as 6.x FW does, so on some/most loads the ignore cache sync commands option may not improve performance as expected.

Best regards
Mertol

Mertol Ozyoney
Storage Practice - Sales Manager

Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +902123352222
Email mertol.ozyoney at sun.com

-----Original Message-----
From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-bounces at opensolaris.org] On Behalf Of Bob Friesenhahn
Sent: Tuesday, October 13, 2009 6:05 PM
To: Nils Goroll
Cc: zfs-discuss at opensolaris.org
Subject: Re: [zfs-discuss] STK 2540 and Ignore Cache Sync (ICS)

[...]

_______________________________________________
zfs-discuss mailing list
zfs-discuss at opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Hi Bob,

In all 2500 and 6000 series arrays you can assign RAID sets to a controller, and that controller becomes the owner of the set. Drives generally do not switch between controllers: one controller always owns a disk while the other waits in standby. Some arrays use ALUA and re-route traffic arriving at the "not preferred" controller to the preferred controller. While some companies market this as a "true" active/active setup, it reduces performance significantly if the host is not 100% ALUA aware, although this architecture does solve the problem of setting up MPxIO on hosts. It's likely that sometime in the future Sun may release a FW to enable ALUA on the controllers, but this definitely won't improve performance.

The advantage of the 2540 against its bigger brothers (the 6140, which is EOL'ed) and competitors is that the 2540 uses dedicated data paths for cache mirroring, just like the higher-end units (6180, 6580, 6780), improving write performance significantly.

Splitting load between controllers can increase performance most of the time, but you do not need to split into two equal partitions.

Also do not forget that the first tray has dedicated data lines to the controller, so generally it's wise not to mix those drives with other drives on other trays.

Best regards
Mertol

Mertol Ozyoney
Storage Practice - Sales Manager

Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +902123352222
Email mertol.ozyoney at sun.com
On Mon, Oct 26, 2009 at 09:58:05PM +0200, Mertol Ozyoney wrote:

> In all 2500 and 6000 series you can assign RAID sets to a controller and
> that controller becomes the owner of the set.

When I configured all 32 drives on a 6140 array and the expansion chassis, CAM automatically split the drives evenly amongst the controllers.

> The advantage of the 2540 against its bigger brothers (the 6140, which is
> EOL'ed) and competitors is that the 2540 uses dedicated data paths for
> cache mirroring, just like the higher-end units (6180, 6580, 6780),
> improving write performance significantly.
>
> Splitting load between controllers can increase performance most of the
> time, but you do not need to split into two equal partitions.
>
> Also do not forget that the first tray has dedicated data lines to the
> controller, so generally it's wise not to mix those drives with other
> drives on other trays.

But if you have an expansion chassis and create a zpool with drives on the first tray and subsequent trays, what's the difference? You cannot tell zfs which vdev to assign writes to, so it seems pointless to balance your pool based on the chassis when reads/writes are potentially spread across all vdevs.

--
albert chin (china at thewrittenword.com)
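One way to read the tray advice above is to keep each vdev inside a single tray, even though, as noted, ZFS still stripes writes across all vdevs in the pool. A hypothetical layout sketch (all device names are placeholders):

```shell
# Sketch: build a zpool vdev list where mirror pairs never span trays,
# so no vdev mixes tray-1's dedicated data lines with an expansion tray.
# Device names are made up for illustration.
TRAY1="c4t0d0 c4t1d0 c4t2d0 c4t3d0"   # disks in the controller tray
TRAY2="c5t0d0 c5t1d0 c5t2d0 c5t3d0"   # disks in the expansion tray

vdevs_for() {  # emit "mirror dA dB" pairs from one tray's disk list
    set -- $1
    while [ $# -ge 2 ]; do
        printf ' mirror %s %s' "$1" "$2"
        shift 2
    done
}
VDEVS="$(vdevs_for "$TRAY1")$(vdevs_for "$TRAY2")"
# On a live host this would be run as a real command:
echo "zpool create tank$VDEVS"
```

Whether this per-tray separation measurably helps, given that reads and writes spread across all vdevs anyway, is exactly Albert's open question.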