Hello again... Now that I've got my 2540 up and running, I'm considering which configuration is best. I have a proposed config and wanted your opinions and comments on it.

Background: I have a requirement to host syslog data from approx 30 servers. Currently the data is about 3.5TB in size, spread across several servers. The end game is to have the data in one location with some redundancy built in. I have a Sun 2540 disk array with 12 1TB (931GB metric) drives connected to a Sun T5220 server (32GB RAM). My plan was to maximize usable space on the disk array by presenting each disk as a volume (LUN), having ZFS on the server raidz 11 drives, and then adding the 12th drive as a spare. We have Sun service contracts in place to replace drives as soon as they go down. With this config, I have a single zpool showing 9.06T available.

Now questions:

1) I didn't do raidz2 because I didn't want to lose the space. Is this a bad idea?

2) I did only one vdev (all 11 LUNs) and added the spare. Would it be better to break this up into 2 vdevs (one w/ 6 LUNs and the other w/ 5)? Why?

3) I have intentionally skipped the hardware hot spare and RAID methods. Is this a good idea? What would be the best method to integrate both hardware and software?

4) A fellow admin here voiced concern with having ZFS handle the spare and raid functions, specifically that the overhead processing would affect performance. Does anyone have experience with server performance in this manner?

5) If I wanted to add an additional disk tray in the near future (12 more 1TB disks), what would be the recommended method? I was thinking of simply creating additional vdevs and adding them to the zpool.

Thanks in advance for the discussions!!

Regards,

--Kenny
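(For reference, the proposed layout translates into roughly the following commands. This is only a sketch: the pool name "logpool" and the cXtYdZ device names are placeholders, not the actual LUN names the 2540 presents to the T5220.)

# One 11-disk raidz vdev plus a hot spare, as proposed above
# (placeholder pool and device names).
zpool create logpool raidz \
    c6t0d0 c6t1d0 c6t2d0 c6t3d0 c6t4d0 c6t5d0 \
    c6t6d0 c6t7d0 c6t8d0 c6t9d0 c6t10d0 \
    spare c6t11d0
zpool list logpool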
On Fri, 29 Aug 2008, Kenny wrote:

> 1) I didn't do raidz2 because I didn't want to lose the space. Is
> this a bad idea?

Raidz2 is the most reliable vdev configuration other than triple-mirror. The pool is only as strong as its weakest vdev. In private email I suggested using all 12 drives in two raidz2 vdevs. Other than due to natural disaster or other physical mishap, the probability that enough drives will independently fail to cause data loss in raidz2 is similar to winning the state lottery jackpot.

Your Sun service contract should be able to get you a replacement drive by the next day. A lot depends on whether there are system administrators paying attention to the system who can take care of issues right away. If system administration is spotty or there is no one on site, then the ZFS spare is much more useful.

Using more vdevs provides more multi-user performance, which is important for your logging requirements. If you do use the two raidz2 vdevs, then if you pay attention to how MPxIO works, you can balance the load across your two fiber channel links for best performance. Each raidz2 vdev can be served (by default) by a different FC link.

If you do enable compression, then that will surely make up for the additional space overhead of two raidz2 vdevs.

> 3) I have intentionally skipped the hardware hot spare and RAID
> methods. Is this a good idea? What would be the best method to
> integrate both hardware and software?

With the hardware JBOD approach, having the 2540 manage the hot spare would not make sense.

> 4) A fellow admin here voiced concern with having ZFS handle the
> spare and raid functions, specifically that the overhead processing
> would affect performance. Does anyone have experience with server
> performance in this manner?

Having ZFS manage the spare costs nothing. There will be additional overhead when rebuilding onto the replacement drive, but this overhead would be seen if the drive array handled it too. Regardless, the drive array does not have the required information to rebuild the drive. ZFS does have that information, so ZFS should be in charge of the spare drive.

> 5) If I wanted to add an additional disk tray in the near future (12
> more 1TB disks), what would be the recommended method? I was
> thinking of simply creating additional vdevs and adding them to the
> zpool.

That is a sensible approach. If you know you will be running out of space, then it is best to install the additional hardware sooner rather than later, since otherwise most of the data will be on the vdevs which were active first. ZFS does not currently provide a way to re-write a pool so that it is better balanced across vdevs.

Bob
======================================
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
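(A sketch of the two-raidz2 layout suggested above, again with placeholder pool and device names; compression is enabled per the comment about making up the space overhead.)

# Two 6-disk raidz2 vdevs from the 12 LUNs, with compression
# (placeholder names; not the actual 2540 LUN device names).
zpool create logpool \
    raidz2 c6t0d0 c6t1d0 c6t2d0 c6t3d0 c6t4d0  c6t5d0 \
    raidz2 c6t6d0 c6t7d0 c6t8d0 c6t9d0 c6t10d0 c6t11d0
zfs set compression=on logpool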
On Fri, 29 Aug 2008, Bob Friesenhahn wrote:

> If you do use the two raidz2 vdevs, then if you pay attention to how
> MPxIO works, you can balance the load across your two fiber channel
> links for best performance. Each raidz2 vdev can be served (by
> default) by a different FC link.

As a follow-up, here is a small script which will show how MPxIO is creating paths to your devices. The output of this when all paths and devices are healthy may be useful for deciding how to create your storage pool, since then you can load-balance the I/O:

#!/bin/sh
# Test path access to multipathed devices
devs=`mpathadm list lu | grep /dev/rdsk/`
for dev in $devs
do
  echo "=== $dev ==="
  mpathadm show lu $dev | egrep '(Access State)|(Current Load Balance)'
done

Bob
======================================
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
Bob Friesenhahn wrote:
> On Fri, 29 Aug 2008, Bob Friesenhahn wrote:
>
>> If you do use the two raidz2 vdevs, then if you pay attention to how
>> MPxIO works, you can balance the load across your two fiber channel
>> links for best performance. Each raidz2 vdev can be served (by
>> default) by a different FC link.
>
> As a follow-up, here is a small script which will show how MPxIO is
> creating paths to your devices. The output of this when all paths and
> devices are healthy may be useful for deciding how to create your
> storage pool since then you can load-balance the I/O:
>
> #!/bin/sh
> # Test path access to multipathed devices
> devs=`mpathadm list lu | grep /dev/rdsk/`
> for dev in $devs
> do
>   echo "=== $dev ==="
>   mpathadm show lu $dev | egrep '(Access State)|(Current Load Balance)'
> done

What would one look for to decide which vdev to place each LUN in?

All mine have the same Current Load Balance value: round robin.

 -Kyle
On Fri, 29 Aug 2008, Kyle McDonald wrote:

> What would one look for to decide which vdev to place each LUN in?
>
> All mine have the same Current Load Balance value: round robin.

That is a good question and I will have to remind myself of the answer. The "round robin" is good because it means that there are two working paths to the device.

There are two "Access State:" lines printed. One is the status of the first path ('active' means used to transmit data), and the other is the status of the second path. The controllers on the 2540 each "own" six of the drives by default (they operate active/standby at the drive level), so presumably (only an assumption) MPxIO directs traffic to the controller which has best access to the drive.

Assuming that you use a pool design which allows balancing, you would want to choose six disks which have 'active' in the first line and six disks which have 'active' in the second line, and assure that your pool or vdev design takes advantage of this. For example, my pool uses mirrored devices, so I would split my mirrors so that one device is from the first set and the other device is from the second set.

If you choose to build your pool with two raidz2s, then you could put all the devices active on the first fiber channel interface into the first raidz2, and the rest in the other. This way you get balancing due to the vdev load sharing. Another option with raidz2 is to make sure that half of the six disks are from each set so that writes to the vdev produce distributed load across the interfaces.

The reason why you might want to prefer load sharing at the vdev level is that if there is a performance problem with one vdev, the other vdev should still perform well and take more of the load. The reason why you might want to load share within a vdev is that I/Os to the vdev might be more efficient.

Bob
======================================
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
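(To do that sorting mechanically, a variation on the earlier script can bucket the LUNs by which path is currently active. This is only a sketch, and it assumes, as above, that the first "Access State:" line printed corresponds to the first FC path.)

#!/bin/sh
# Group multipathed LUNs by the state of their first reported path
# (sketch; assumes the first "Access State:" line is the first FC path).
devs=`mpathadm list lu | grep /dev/rdsk/`
for dev in $devs
do
  first=`mpathadm show lu $dev | grep 'Access State' | head -1`
  case "$first" in
    *active*)  echo "$dev -> first path active" ;;
    *standby*) echo "$dev -> first path standby (second path active)" ;;
    *)         echo "$dev -> unexpected state: $first" ;;
  esac
done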
Personally I'd go for an 11 disk raid-z2, with one hot spare. You lose some capacity, but you've got more than enough for your current needs, and with 1TB disks single parity raid means a lot of time with your data unprotected when one fails.

You could split this into two raid-z2 sets if you wanted; that would have a bit better performance, but if you can cope with the speed of a single pool for now I'd be tempted to start with that. It's likely that by Christmas you'll be able to buy flash devices to use as read or write cache with ZFS, at which point the speed of the disks becomes academic for many cases.

Adding a further 12 disks sounds fine, just as you suggest. You can add another 11 disk raid-z2 set to your pool very easily. ZFS can't yet restripe your existing data across the new disks, so you'll have some data on the old 12 disk array, some striped across all 24, and some on the new array.

ZFS probably does add some overhead compared to hardware raid, but unless you have a lot of load on that box I wouldn't expect it to be a problem. I don't know the T5220 servers though, so you might want to double check that. I do agree that you don't want to use the hardware raid though; ZFS has plenty of advantages and it's best to let it manage the whole lot.

Could you do me a favour though and see how ZFS copes on that array if you just pull a disk while the ZFS pool is running? I've had some problems on a home built box after pulling disks. I suspect a proper raid array will cope fine, but I haven't been able to get that tested yet.

thanks,

Ross
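(Growing the pool when the second tray arrives would look something like the following sketch; the pool name and the c7tXd0 names are placeholders for whatever the new tray's LUNs actually appear as.)

# Add the second tray as another 11-disk raidz2 vdev plus a spare
# (placeholder device names).
zpool add logpool raidz2 \
    c7t0d0 c7t1d0 c7t2d0 c7t3d0 c7t4d0 c7t5d0 \
    c7t6d0 c7t7d0 c7t8d0 c7t9d0 c7t10d0
zpool add logpool spare c7t11d0
zpool status logpool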
With the restriping: wouldn't it be as simple as creating a new folder/dataset/whatever on the same pool and doing an rsync to the same pool/new location? This would obviously cause a short downtime to switch over and delete the old dataset, but it seems like it should work fine. If you're doubling the pool size, space shouldn't be an issue.

On 8/31/08, Ross <myxiplx at hotmail.com> wrote:
> Adding a further 12 disks sounds fine, just as you suggest. You can add
> another 11 disk raid-z2 set to your pool very easily. ZFS can't yet
> restripe your existing data across the new disks, so you'll have some data
> on the old 12 disk array, some striped across all 24, and some on the new
> array.
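(A sketch of that manual rebalance, assuming a hypothetical dataset named logpool/syslog; anything written after the new vdevs are added gets striped across all of them.)

# Rough rebalance by copying into a fresh dataset (dataset names are
# examples only).
zfs create logpool/syslog-new
rsync -a /logpool/syslog/ /logpool/syslog-new/
# Pause logging, run a final catch-up rsync, then swap the datasets:
rsync -a /logpool/syslog/ /logpool/syslog-new/
zfs destroy logpool/syslog
zfs rename logpool/syslog-new logpool/syslog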
On Sun, 31 Aug 2008, Ross wrote:

> You could split this into two raid-z2 sets if you wanted, that would
> have a bit better performance, but if you can cope with the speed of
> a single pool for now I'd be tempted to start with that. It's
> likely that by Christmas you'll be able to buy flash devices to use
> as read or write cache with ZFS, at which point the speed of the
> disks becomes academic for many cases.

We have not heard how this log server is going to receive the log data. Receiving the logs via the BSD logging protocol is much different than receiving the logs via an NFS mount. If the logs are received via the BSD logging protocol then the writes will be asynchronous and there is no need at all for an NV write cache. If the logs are received via NFS, then the writes are synchronous, so there may be a need for an NV write cache in order to maintain adequate performance. Luckily the StorageTek 2540 provides a reasonable NV write cache already.

Without performing any actual testing to prove it, I would assume that two raidz2 sets will offer almost 2X the transactional performance of one big raidz2 set, which may be important for a logging server which is receiving simultaneous input from many places.

For reliability, I definitely recommend something like the BSD logging protocol if it can be used, since it is more likely to capture all of the logs if there is a problem.

Bob
======================================
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
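(For the BSD logging protocol route, the change on each source server is just a forwarding rule in syslog.conf. A sketch only: 'loghost' stands in for the T5220's hostname, and the selector is something you would pick yourself.)

# /etc/syslog.conf on each source server (sketch; the whitespace
# between selector and action must be a tab on Solaris syslogd)
*.debug         @loghost
# then restart syslogd, e.g. on Solaris 10:
#   svcadm restart svc:/system/system-log:default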
Hello Bob,

Friday, August 29, 2008, 7:25:14 PM, you wrote:

BF> On Fri, 29 Aug 2008, Kyle McDonald wrote:
>> What would one look for to decide which vdev to place each LUN in?
>>
>> All mine have the same Current Load Balance value: round robin.

BF> That is a good question and I will have to remind myself of the
BF> answer. The "round robin" is good because it means that there are
BF> two working paths to the device. There are two "Access State:" lines

Assuming that you have only two paths, each one connected to a different controller, you will be using only one path for each LUN, as the 25x0 is an asymmetric disk array.

Sometimes people tend to create several LUNs and assign some of them to one controller and some of them to another one. Now if you do striping or raid-10 between these LUNs you will end up using both controllers, which will improve performance for some workloads. The drawback is that in case one controller fails you may expect performance degradation.

--
Best regards,
Robert Milkowski                          mailto:milek at task.gda.pl
                                          http://milek.blogspot.com
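(A sketch of that raid-10 idea in ZFS terms, assuming, purely for illustration, that the even-numbered LUNs were assigned to one controller and the odd-numbered LUNs to the other; names are placeholders.)

# Mirror pairs spanning the two controllers (placeholder device names;
# assumes c6t0/2/4d0 are owned by controller A and c6t1/3/5d0 by B).
zpool create logpool \
    mirror c6t0d0 c6t1d0 \
    mirror c6t2d0 c6t3d0 \
    mirror c6t4d0 c6t5d0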
A few quick notes.

The 2540's first 12 drives are extremely fast due to the fact that they have direct unshared connections. I do not mean that additional disks are slow; I want to say that the first 12 are extremely fast compared to any other disk system.

So although it's a little bit expensive, it could be a lot faster to add a second 2540 than to add a second drive expansion. We generally use a few 2540's with 12 drives running in parallel for extreme performance.

Again, with the additional disk tray the 2540 will still perform quite well, for the extreme performance....

Mertol Ozyoney
Storage Practice - Sales Manager
Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +902123352222
Email mertol.ozyoney at sun.com
Robert Milkowski wrote:
> Assuming that you have only two paths, each one connected to a
> different controller, you will be using only one path for each LUN,
> as the 25x0 is an asymmetric disk array.
>
> Sometimes people tend to create several LUNs and assign some of them
> to one controller and some of them to another one. Now if you do
> striping or raid-10 between these LUNs you will end up using both
> controllers, which will improve performance for some workloads.
>
> The drawback is that in case one controller fails you may expect
> performance degradation.

And sometimes that failover can happen at the most inconvenient moment, too. Another reason why I prefer active-active arrays :) (Just wish I could afford one for home use!)

Still, if you're using MPxIO that failover shouldn't be a problem.

James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp       http://www.jmcp.homeunix.com/blog
On Mon, Sep 1, 2008 at 5:18 PM, Mertol Ozyoney <Mertol.Ozyoney at sun.com> wrote:
> A few quick notes.
>
> The 2540's first 12 drives are extremely fast due to the fact that they
> have direct unshared connections. I do not mean that additional disks are
> slow; I want to say that the first 12 are extremely fast compared to any
> other disk system.
>
> So although it's a little bit expensive, it could be a lot faster to add
> a second 2540 than to add a second drive expansion.
>
> We generally use a few 2540's with 12 drives running in parallel for
> extreme performance.
>
> Again, with the additional disk tray the 2540 will still perform quite
> well, for the extreme performance....

For this application the 2540 is overkill and a poor fit. I'd recommend a J4xxx series JBOD array and matching SAS controller(s). With enough memory in the ZFS host, you don't need hardware RAID with buffer RAM. Spend your dollars where you'll get the best payback: buy more drives and max out the RAM on the ZFS host!! In fact, if it's not too late, I'd return the 2540....

Regards,

--
Al Hopper  Logical Approach Inc, Plano, TX.  al at logical-approach.com
           Voice: 972.379.2133 Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
That's exactly what I said in a private email. A J4200 or J4400 can offer better price/performance. However, the price difference is not as much as you think. Besides, the 2540 has a few functions that cannot be found on the J series, like SAN connectivity, internal redundant raid controllers [redundancy is good, and you can make use of the controllers when connected to some other hosts like Windows servers], the ability to change stripe size/raid level and other parameters on the go, etc.

Mertol Ozyoney
Storage Practice - Sales Manager
Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +902123352222
Email mertol.ozyoney at sun.com
On Tue, 2 Sep 2008, Mertol Ozyoney wrote:

> That's exactly what I said in a private email. A J4200 or J4400 can offer
> better price/performance. However, the price difference is not as much as
> you think. Besides, the 2540 has a few functions that cannot be found on
> the J series, like SAN connectivity, internal redundant raid controllers
> [redundancy is good, and you can make use of the controllers when connected
> to some other hosts like Windows servers], the ability to change stripe
> size/raid level and other parameters on the go, etc.

It seems that the cost is the base chassis cost plus the price that Sun charges per disk drive. Unless you choose cheap SATA drives, or do not fully populate the chassis, the chassis cost will not be a big factor. Compared with other Sun products and other vendors (e.g. IBM), Sun is fairly competitive with its disk drive pricing for the 2540.

The fiber channel can be quite a benefit since it does not care about distance and offers a bit more bandwidth than SAS. With SAS, the server and the drive array pretty much need to be in the same rack, and close together as well. A drawback of fiber channel is that the host adaptor is much more (3X to 4X) expensive.

A big difference between the J series and the 2540 is how Sun sells it. The J series is sold in a minimal configuration with the user adding drives as needed, whereas the 2540 is sold in certain pre-configured maximal configurations. This means that the starting cost for the J series is much lower.

Bob
======================================
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
Bob,

I used your script (thanks) but I fail to see which controller controls which disk... Your white paper shows six LUNs with the active state first and then six with the active state second; however, mine all show the active state first.

Yes, I've verified that both controllers are up and CAM sees them both. mpathadm reports 4 paths to each LUN.

Access state output.....

bash-3.00# ./mpath.sh
=== /dev/rdsk/c6t600A0B800049E81C000003BF48BD0510d0s2 ===
    Current Load Balance:  round-robin
        Access State:  active
        Access State:  standby
=== /dev/rdsk/c6t600A0B800049E81C000003BC48BD04D2d0s2 ===
    Current Load Balance:  round-robin
        Access State:  active
        Access State:  standby
=== /dev/rdsk/c6t600A0B800049E81C000003B948BD0494d0s2 ===
    Current Load Balance:  round-robin
        Access State:  active
        Access State:  standby
=== /dev/rdsk/c6t600A0B800049E81C000003B648BD044Ed0s2 ===
    Current Load Balance:  round-robin
        Access State:  active
        Access State:  standby
=== /dev/rdsk/c6t600A0B800049E81C000003B348BD03FAd0s2 ===
    Current Load Balance:  round-robin
        Access State:  active
        Access State:  standby
=== /dev/rdsk/c6t600A0B800049E81C000003B048BD03BAd0s2 ===
    Current Load Balance:  round-robin
        Access State:  active
        Access State:  standby
=== /dev/rdsk/c6t600A0B800049E81C000003AD48BD0376d0s2 ===
    Current Load Balance:  round-robin
        Access State:  active
        Access State:  standby
=== /dev/rdsk/c6t600A0B800049E81C000003AA48BD0338d0s2 ===
    Current Load Balance:  round-robin
        Access State:  active
        Access State:  standby
=== /dev/rdsk/c6t600A0B800049E81C000003A748BD02FAd0s2 ===
    Current Load Balance:  round-robin
        Access State:  active
        Access State:  standby
=== /dev/rdsk/c6t600A0B800049E81C000003A448BD02BAd0s2 ===
    Current Load Balance:  round-robin
        Access State:  active
        Access State:  standby
=== /dev/rdsk/c6t600A0B800049E81C000003A148BD0276d0s2 ===
    Current Load Balance:  round-robin
        Access State:  active
        Access State:  standby
=== /dev/rdsk/c6t600A0B800049E81C000003A448BD02BAd0s2 ===
    Current Load Balance:  round-robin
        Access State:  active
        Access State:  standby
=== /dev/rdsk/c6t600A0B800049E81C000003A148BD0276d0s2 ===
    Current Load Balance:  round-robin
        Access State:  active
        Access State:  standby

Suggestions on where I might have messed up??

Thanks!!

--Kenny
On Tue, Sep 2, 2008 at 11:44, Bob Friesenhahn <bfriesen at simple.dallas.tx.us> wrote:
> The fiber channel ... offers a bit more bandwidth than SAS.

The bandwidth part of this statement is not accurate. SAS uses wide ports composed of (usually; other widths are possible) four 3 Gbit links. Each of these has a data rate of up to 300 MB/s (not 375, due to 8b/10b coding). Thus, a "single" SAS cable carries 1.2 GB/s, while a single FC link carries 400 MB/s. SAS links can be up to 8 meters long, although of course this does not compete with the km-long links FC can achieve.

Will
On Tue, 2 Sep 2008, Kenny wrote:

> I used your script (thanks) but I fail to see which controller
> controls which disk... Your white paper shows six LUNs with the
> active state first and then six with the active state second;
> however, mine all show the active state first.
>
> Yes, I've verified that both controllers are up and CAM sees them
> both. mpathadm reports 4 paths to each LUN.

This is very interesting. You are saying that 'mpathadm list lu' produces 'Total Path Count: 4' and 'Operational Path Count: 4'? Do you have four FC connections between the server and the array? What text is output from 'mpathadm show lu' for one of the LUNs?

> Suggestions on where I might have messed up??

Perhaps not at all. The approach used in my white paper is based on suppositions of my own, since Sun has not offered any architectural documentation on the 2540 or the internal workings of MPxIO. If you do have four FC connections it may be that MPxIO works differently and the trick I exploited is not valid.

Bob
======================================
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/