David J. Orman
2007-Jan-20 01:59 UTC
[zfs-discuss] External drive enclosures + Sun Server for mass storage
Hi,

I'm looking at Sun's 1U x64 server line, and at most they support two drives. This is fine for the root OS install, but obviously not sufficient for many users.

Specifically, I am looking at the X2200 M2: http://www.sun.com/servers/x64/x2200/

It only has a "Riser card assembly with two internal 64-bit, 8-lane, low-profile, half length PCI-Express slots" for expansion.

What I'm looking for is a SAS/SATA card that would allow me to add an external SATA enclosure (or some such device) to add storage. The supported list on the HCL is pretty slim, and I see no PCI-E stuff. A card that supports SAS would be *ideal*, but I can settle for normal SATA too.

So, anybody have any good suggestions for these two things:

#1 - SAS/SATA PCI-E card that would work with the Sun X2200 M2.
#2 - Rack-mountable external enclosure for SAS/SATA drives, supporting hot swap of drives.

Basically, I'm trying to get around using Sun's extremely expensive storage solutions while waiting on them to release something reasonable now that ZFS exists.

Cheers,
David
Jason J. W. Williams
2007-Jan-20 02:24 UTC
[zfs-discuss] External drive enclosures + Sun Server for mass storage
Hi David,

I don't know if your company qualifies as a startup under Sun's regs, but you can get an X4500/Thumper for $24,000 under this program: http://www.sun.com/emrkt/startupessentials/

Best Regards,
Jason

On 1/19/07, David J. Orman <ormandj at corenode.com> wrote:
> Basically, I'm trying to get around using Sun's extremely expensive
> storage solutions while waiting on them to release something reasonable
> now that ZFS exists.
Erik Trimble
2007-Jan-20 02:47 UTC
[zfs-discuss] External drive enclosures + Sun Server for mass storage
On Fri, 2007-01-19 at 17:59 -0800, David J. Orman wrote:
> What I'm looking for is a SAS/SATA card that would allow me to add an
> external SATA enclosure (or some such device) to add storage. [...]
> #1 - SAS/SATA PCI-E card that would work with the Sun X2200 M2.
> #2 - Rack-mountable external enclosure for SAS/SATA drives, supporting
> hot swap of drives.

Not to be picky, but the X2100 and X2200 series are NOT designed/targeted for disk serving (they don't even have redundant power supplies). They're compute boxes. The X4100/X4200 are what you are looking for to get a flexible box more oriented towards disk I/O and expansion.

That said (if you're set on an X2200 M2), you are probably better off getting a PCI-E SCSI controller and attaching it to an external SCSI->SATA JBOD. There are plenty of external JBODs out there which use Ultra320/Ultra160 as the host interface and SATA as the drive interface. Sun will sell you a supported SCSI controller with the X2200 M2 (the "Sun StorageTek PCI-E Dual Channel Ultra320 SCSI HBA").

SCSI is a far better host-attachment mechanism than eSATA if you plan on driving more than a couple of drives, which it sounds like you are. While the SCSI HBA is going to cost quite a bit more than an eSATA HBA, the external JBODs run about the same, and the total difference is going to be $300 or so across the whole setup (which will cost you $5000 or more fully populated). So the cost of SCSI vs eSATA as the host attach is a rounding error.

-- 
Erik Trimble
Java System Support
Mailstop: usca14-102
Phone: x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)
Dan Mick
2007-Jan-20 06:01 UTC
[zfs-discuss] External drive enclosures + Sun Server for mass storage
David J. Orman wrote:
> #1 - SAS/SATA PCI-E card that would work with the Sun X2200 M2.

Scouting around a bit, I see SIIG makes an eSATA II card based on the Silicon Image 3132 chip, available in PCIe and PCIe ExpressCard form factors. I can't promise, but chances seem good that it's supported by the si3124 driver in Solaris:

si3124 "pci1095,3124"
si3124 "pci1095,3132"

Street price for the PCIe card is $30-35.

Also, the first hit for "PCIe eSATA" was a card based on the JMicron JMB360, which is supposed to support AHCI, and so should be supported by the brand-new ahci driver (just back in snv_56). Street prices for the most popular card were showing as $29.99, quantity 1.

I don't know whether either of these will work, but it looks promising. I also don't know about eSATA vs. SCSI. Keep in mind that you'll only be able to support two drives with the SIIG card, and one with the other; port multipliers may or may not be working yet.
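If you want to check up front whether a given card will bind to si3124, the usual approach is to compare the card's PCI ID against the driver's aliases. A minimal sketch, assuming the card reports the stock SiI3132 ID shown above (verify what yours actually reports with prtconf -pv):

  # list the PCI IDs the si3124 driver already claims
  grep si3124 /etc/driver_aliases

  # if the card's ID is missing, bind it to the driver and
  # rebuild the /dev links
  update_drv -a -i '"pci1095,3132"' si3124
  devfsadm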
Frank Cusack
2007-Jan-20 07:23 UTC
[zfs-discuss] External drive enclosures + Sun Server for mass storage
On January 19, 2007 5:59:13 PM -0800 "David J. Orman" <ormandj at corenode.com> wrote:
> card that supports SAS would be *ideal*,

Except that SAS support on Solaris is not very good. One major problem is that they treat it like SCSI when instead they should treat it like FC (or native SATA).

> #1 - SAS/SATA PCI-E card that would work with the Sun X2200 M2.

I had the LSI Logic 3442-E working on x86, but not reliably. That is the only SAS controller Sun supports, AFAIK.

> #2 - Rack-mountable external enclosure for SAS/SATA drives, supporting
> hot swap of drives.

The Promise VTrak J300s is the cheapest one I've found. Adaptec's been advertising one forever (6+ months?) but it's not in production; at the least, you won't be able to find one without hard drives, and you won't be able to find the dual-controller model.

> Basically, I'm trying to get around using Sun's extremely expensive
> storage solutions while waiting on them to release something reasonable
> now that ZFS exists.

Thumper (X4500) seems pretty reasonable ($/GB).

-frank
Frank Cusack
2007-Jan-20 07:28 UTC
[zfs-discuss] External drive enclosures + Sun Server for mass storage
On January 19, 2007 6:47:30 PM -0800 Erik Trimble <Erik.Trimble at Sun.COM> wrote:
> Not to be picky, but the X2100 and X2200 series are NOT
> designed/targeted for disk serving (they don't even have redundant
> power supplies). They're compute boxes. The X4100/X4200 are what you
> are looking for to get a flexible box more oriented towards disk I/O
> and expansion.

But the X4100/X4200 only accept expensive 2.5" SAS drives, which have small capacities. That doesn't seem oriented towards disk serving.

-frank
Shannon Roddy
2007-Jan-20 08:16 UTC
[zfs-discuss] External drive enclosures + Sun Server for mass storage
Frank Cusack wrote:
> Thumper (X4500) seems pretty reasonable ($/GB).

I am always amazed that people consider Thumper to be reasonable in price. A 450% or greater markup per drive over July 2006 street prices doesn't seem reasonable to me, even after subtracting the cost of the system. I like the X4500; I wish I had one. But I can't pay what Sun wants for it, so instead I am stuck buying lower-end Sun systems and third-party SCSI/SATA JBODs. I like Sun. I like their products, but I can't understand their storage pricing most of the time.

-Shannon
Frank Cusack
2007-Jan-20 08:31 UTC
[zfs-discuss] External drive enclosures + Sun Server for mass storage
On January 20, 2007 2:16:45 AM -0600 Shannon Roddy <sroddy at ligo-la.caltech.edu> wrote:
> I am always amazed that people consider Thumper to be reasonable in
> price. A 450% or greater markup per drive over July 2006 street prices
> doesn't seem reasonable to me, even after subtracting the cost of the
> system.

But what data throughput do you get? Thumper is phenomenal.

It is a shame (for the consumer) that it's not available without drives. Sun has always had an obscene markup on drives.

-frank
Shannon Roddy
2007-Jan-20 09:02 UTC
[zfs-discuss] External drive enclosures + Sun Server for mass storage
Frank Cusack wrote:
> It is a shame (for the consumer) that it's not available without
> drives. Sun has always had an obscene markup on drives.

To me, hard drives today are as much a commodity item as network cables, GBICs, NICs, DVD drives, etc. Sun should not be marking them up at the rate that they do. I would be happy to buy a Thumper at whatever engineering cost they have calculated into the system without the drives. For Sun to charge 4-8 times street price for hard drives that they order just the same as I do, from the same manufacturers that I order from, is infuriating. It doesn't make that much difference on a two-drive X2100, but when you are talking about 48 drives in a Thumper, paying that markup becomes just insane. I still buy my X2100s without drives for the same reason, though. My local Sun service guy out here hears this from me all the time, and it is probably the only complaint I really have about Sun.

I pay ~$1K/TB right now for my ZFS JBOD storage. It is mostly just bulk storage (user home directories and the like) and does not require huge bandwidth. So, when Jonathan Schwartz decides to sell them without drives, maybe I'll buy a few, just to have a nicely engineered system instead of the cabling mess currently in my racks.

-Shannon
Anton B. Rang
2007-Jan-20 14:45 UTC
[zfs-discuss] Re: External drive enclosures + Sun Server for mass storage
> To me, hard drives today are as much a commodity item as network
> cables, GBICs, NICs, DVD drives, etc.

They are and they aren't. Reliability, particularly in high-heat and high-vibration environments, can vary quite a bit.

> For Sun to charge 4-8 times street price for hard drives that they
> order just the same as I do, from the same manufacturers that I order
> from, is infuriating.

I won't argue with that; I remember when all the vendors were doing it. Maybe they still are, at least the ones who still sell drives. :-)

But in the particular case of a Thumper, I think Sun is doing the right thing by selling only qualified drives. That is a very dense case. Not every drive with the right form factor will work reliably in it. Even drives which work in another dense case may not work reliably, because the heat and vibration profile is different.

That's a separate issue from the price charged for the drives; but I'd be very hesitant to sell and support a system without drives if I knew that only certain drives would work without "cooking" or excessive seek errors.
Ed Gould
2007-Jan-20 18:12 UTC
[zfs-discuss] External drive enclosures + Sun Server for mass storage
Shannon Roddy wrote:
> For Sun to charge 4-8 times street price for hard drives that they
> order just the same as I do, from the same manufacturers that I order
> from, is infuriating.

Are you sure they're really the same drives? Mechanically, they probably are, but last I knew (I don't work in the storage part of Sun, so I have no particular knowledge of current practices), Sun and other systems vendors (I know both Apple and DEC did) shipped custom firmware in the drives they resell. One reason for this is that the systems vendors qualified the drives with a particular firmware load, and did not simply buy the latest firmware the drive manufacturer wanted to ship, for quality-control reasons. At least some of the time, there were custom functionality changes as well.

--Ed
Rich Teer
2007-Jan-20 18:18 UTC
[zfs-discuss] External drive enclosures + Sun Server for mass storage
On Fri, 19 Jan 2007, Frank Cusack wrote:
> But the X4100/X4200 only accept expensive 2.5" SAS drives, which have
> small capacities. [...]

... and only 2 or 4 drives each. Hence my blog entry a while back, wishing for a Sun-badged 1U SAS JBOD with room for 8 drives. I'm amazed that Sun hasn't got a product to fill this obvious (to me, at least) hole in their storage catalogue.

-- 
Rich Teer, SCSA, SCNA, SCSECA, OpenSolaris CAB member

President,
Rite Online Inc.

Voice: +1 (250) 979-1638
URL: http://www.rite-group.com/rich
Jason J. W. Williams
2007-Jan-20 18:49 UTC
[zfs-discuss] External drive enclosures + Sun Server for mass storage
Hi Shannon,

The markup is still pretty high on a per-drive basis. That being said, $1-2/GB is darn low for that much capacity in a server. Plus, you're also paying for having enough HyperTransport I/O to feed the PCI-E I/O.

Does anyone know what problems they had with the 250GB version of the Thumper that caused them to pull it?

Best Regards,
Jason

On 1/20/07, Shannon Roddy <sroddy at ligo-la.caltech.edu> wrote:
> I am always amazed that people consider Thumper to be reasonable in
> price.
David J. Orman
2007-Jan-20 20:59 UTC
[zfs-discuss] Re: External drive enclosures + Sun Server for mass storage
> Hi David,
>
> I don't know if your company qualifies as a startup under Sun's regs,
> but you can get an X4500/Thumper for $24,000 under this program:
> http://www.sun.com/emrkt/startupessentials/

I'm already a part of the Startup Essentials program. Perhaps I should have been more clear, my apologies: I am not looking for 48 drives' worth of storage. That is beyond our means to purchase at this point, regardless of the $/GB. I do agree, it is quite a good deal.

I was talking about the huge gap in storage solutions from Sun for the middle ground. While $24,000 is a wonderful deal, it's absolute overkill for what I'm thinking about doing. I was looking for more like 6-8 drives.

David
David J. Orman
2007-Jan-20 21:07 UTC
[zfs-discuss] Re: External drive enclosures + Sun Server for mass storage
> ... and only 2 or 4 drives each. Hence my blog entry a while back,
> wishing for a Sun-badged 1U SAS JBOD with room for 8 drives. I'm
> amazed that Sun hasn't got a product to fill this obvious (to me,
> at least) hole in their storage catalogue.

This is exactly what I am looking for. I apparently was not clear in my original post: I am looking for a 6-8 drive external solution to tie into Sun servers. The existing Sun solutions in this range are very expensive. For instance, the 3511 is ~$37,000 with 12x500GB drives. I can buy good-quality Seagate drives for $200 each, which comes to a grand total of $2,400. Somehow I doubt the enclosure and drive controllers are worth the remaining ~$34,000; it's an insane markup. That's why I was asking for an external JBOD solution. The Sun servers I've looked at are all priced excellently, and I'd love to use them, but the storage solutions are a bit crazy. Not to mention, I don't want to get tied into FC, seeing as 10GbE is around the corner; I'd rather use some kind of external interface that's reasonable.

On that note, I've recently read that the 1U Sun servers might not have hot-swappable disk drives... is this really true? That would make this whole plan silly: I could just go out and buy a Supermicro machine, save money all around, and have the 6-8 drives in the same box as the server.

Thanks,
David
Marion Hakanson
2007-Jan-20 21:18 UTC
[zfs-discuss] Re: External drive enclosures + Sun Server for mass storage
ormandj at corenode.com said:
> I was talking about the huge gap in storage solutions from Sun for the
> middle ground. While $24,000 is a wonderful deal, it's absolute
> overkill for what I'm thinking about doing. I was looking for more
> like 6-8 drives.

How about a Sun V40z? It's available with up to six drives (300GB each), and a low-end configuration (CPU/RAM-wise) might not be out of your price range, depending on your discount. There are plenty of slots if you want to add external enclosures later, too. Of course, Dell probably has cheaper 64-bit systems with six internal drives available as well.

Regards,
Marion
Frank Cusack
2007-Jan-20 22:18 UTC
[zfs-discuss] Re: External drive enclosures + Sun Server for mass storage
On January 20, 2007 1:07:27 PM -0800 "David J. Orman" <ormandj at corenode.com> wrote:
> On that note, I've recently read that the 1U Sun servers might not have
> hot-swappable disk drives... is this really true?

Only for the X2100 (and X2100 M2). It's not that the hardware isn't hot-swappable, it's that Solaris doesn't support it. If you run Windows, you will get hot swap.

-frank
Erik Trimble
2007-Jan-20 22:20 UTC
[zfs-discuss] External drive enclosures + Sun Server for mass storage
Frank Cusack wrote:
> But the X4100/X4200 only accept expensive 2.5" SAS drives, which have
> small capacities. That doesn't seem oriented towards disk serving.

Those are boot drives, and for those with small amounts of data (and you can get 73GB, and soon 146GB, drives in that form factor, which isn't really any different from typical 3.5" SCSI drive sizes).

No, I was talking about the internal architecture. The X4100/X4200 have multiple independent I/O buses, with lots of PCI-E and PCI-X slots. So if you were looking to hook up external storage (which was the original poster's intent), the X4100/X4200 is a much better match.

-Erik

-- 
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)
Erik Trimble
2007-Jan-20 22:42 UTC
[zfs-discuss] External drive enclosures + Sun Server for mass storage
Rich Teer wrote:
> ... and only 2 or 4 drives each. Hence my blog entry a while back,
> wishing for a Sun-badged 1U SAS JBOD with room for 8 drives. I'm
> amazed that Sun hasn't got a product to fill this obvious (to me,
> at least) hole in their storage catalogue.

The Sun 3120 does 4 x 3.5" SCSI drives in 1U, and the Sun 3320 does 12 x 3.5" in 2U. Both come in JBOD configs (and the 3320 has HW RAID if you want it).

Yes, I'm certain that having 8-10 SAS drives in a 1U might be useful; HP thinks so: the MSA50 (http://h18004.www1.hp.com/storage/disk_storage/msa_diskarrays/drive_enclosures/ma50/index.html)

But, given that Sun doesn't seem to be really targeting small business right now (at least, it appears that way), the 3120 works quite well, feature-wise, for medium-business/enterprise areas. I priced out the HP MSA series vs the Sun StorageTek 3000 series, and the HP stuff is definitely cheaper, by a noticeable amount. So I'd say Sun has less of a hardware-selection gap than a pricing gap. The current "low end" of the Sun line just isn't cheap enough.

Of course, the opinions expressed herein are my own, and I have no special knowledge of anything relevant to this discussion. (TM) :-)

-- 
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)
Erik Trimble
2007-Jan-20 22:48 UTC
[zfs-discuss] Re: External drive enclosures + Sun Server for mass storage
Frank Cusack wrote:
> Only for the X2100 (and X2100 M2). It's not that the hardware isn't
> hot-swappable, it's that Solaris doesn't support it. If you run
> Windows, you will get hot swap.

I believe this also applies to the X2200 M2. Essentially, all the low-end x64 servers using SATA have Nvidia chipsets which theoretically support hot swap of SATA; as noted, the Windows drivers do support this feature, while the Solaris 10 drivers don't (and I don't know whether there are plans to add it).

Personally, I've always been a bit nervous about using chipset-based RAID and expecting hot swap to actually work, particularly with SATA. I've been bitten on various (non-Sun) hardware trying this, and it has made me gun-shy of thinking I can actually pull a SATA drive while its mirror is still mounted...

-- 
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)
Rich Teer
2007-Jan-20 23:43 UTC
[zfs-discuss] External drive enclosures + Sun Server for mass storage
On Sat, 20 Jan 2007, Erik Trimble wrote:
> The Sun 3120 does 4 x 3.5" SCSI drives in 1U, and the Sun 3320 does
> 12 x 3.5" in 2U. Both come in JBOD configs (and the 3320 has HW RAID
> if you want it).

Yep; I know about those products. But the entry-level 3120 (with 2 x 73GB disks) has a list price of $5K! I'm a Sun supporter, but those kinds of prices are akin to daylight robbery! Or, to put it another way, the list price of that simple JBOD is more than twice that of the X4100, a server it would probably be connected to!

But more to the point, SAS seems to be the future, so it would be really nice to have a Sun SAS JBOD array. As I said in my blog about this, if Sun could produce an 8-drive 1U SAS JBOD array with a starting price (say, 2 x 36GB drives and 2 hot-swappable PSUs) of $2K, they'd sell 'em by the truckload. I mean, let's be honest: when we're talking about low-end JBOD arrays, we're talking about one or two PSUs, some mechanism for holding the drives, a bit of electronics, and a metal case to put it all in. No expensive rocket science necessary.

> Yes, I'm certain that having 8-10 SAS drives in a 1U might be useful;
> HP thinks so: the MSA50
> (http://h18004.www1.hp.com/storage/disk_storage/msa_diskarrays/drive_enclosures/ma50/index.html)

Yep, that's what I'm thinking of, only in a nice case similar to the X4100 (for economies of scale and pretty data centers).

> But, given that Sun doesn't seem to be really targeting small business
> right now (at least, it appears that way), the 3120 works quite well,
> feature-wise, for medium-business/enterprise areas.

But that's the point: Sun IS targeting small business; that's what the Sun Startup Essentials program is all about! Not to mention the programs aimed at developers. Agreed, Sun isn't targeting the mum-and-dad kind of business, but there are a huge number of businesses that need more storage than will fit into an X4200/T2000, but less than what's available with (say) the 3320.

> So I'd say Sun has less of a hardware-selection gap than a pricing gap.
> The current "low end" of the Sun line just isn't cheap enough.

Couldn't agree more.

-- 
Rich Teer, SCSA, SCNA, SCSECA, OpenSolaris CAB member

President,
Rite Online Inc.

Voice: +1 (250) 979-1638
URL: http://www.rite-group.com/rich
Richard Elling
2007-Jan-21 01:58 UTC
[zfs-discuss] Re: External drive enclosures + Sun Server for mass storage
Frank Cusack wrote:
> On January 20, 2007 1:07:27 PM -0800 "David J. Orman"
> <ormandj at corenode.com> wrote:
>> On that note, I've recently read that the 1U Sun servers might not
>> have hot-swappable disk drives... is this really true?

Yes.

> Only for the X2100 (and X2100 M2). It's not that the hardware isn't
> hot-swappable, it's that Solaris doesn't support it. If you run
> Windows, you will get hot swap.

No. To be clear, Sun defines "hot swap" as a device which can be inserted or removed without any system administration tasks being required.

Sun defines "hot plug" as a device which can be inserted or removed without causing damage or interruption to a running system, but which may require system administration. The vast majority of the disks Sun sells are hot pluggable.

That said, this definition is not always used consistently, as is the case with the X2100. I filed a bug against the docs in this case, and unfortunately it was closed as "will not fix." :-(

-- richard
Richard Elling
2007-Jan-21 02:08 UTC
[zfs-discuss] External drive enclosures + Sun Server for mass storage
Frank Cusack wrote:
> Except that SAS support on Solaris is not very good.
>
> One major problem is that they treat it like SCSI when instead they
> should treat it like FC (or native SATA).

uhmm... SAS is Serial Attached SCSI; why wouldn't we treat it like SCSI? BTW, the sd driver and the ssd (SCSI over Fibre Channel) driver are built from the same source. SATA will also use the sd driver, as Pawel describes in his blogs on the SATA framework at http://blogs.sun.com/pawelblog

What I gather from this is that today, SATA drives will look like either IDE drives or SCSI drives, to some extent. When they look like IDE drives, you don't get all of the cfgadm or luxadm management options, and you have to do things like hot plug in a more-rather-than-less manual mode. When they look like SCSI drives, you also get the more-automatic hot plug features.

-- richard
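To illustrate the more-automatic case: when a SATA disk is handled by the SATA framework, a swap is driven through cfgadm. A minimal sketch, assuming the controller enumerates as sata0 and the disk sits in port 3 (check your own attachment points with cfgadm -al):

  # list attachment points; framework-managed ports show up as sataN/M
  cfgadm -al

  # take the failed disk offline before pulling it
  cfgadm -c unconfigure sata0/3

  # physically swap the drive, then bring the new one online
  cfgadm -c configure sata0/3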
Rich Teer
2007-Jan-21 02:12 UTC
[zfs-discuss] Re: External drive enclosures + Sun Server for mass storage
On Sat, 20 Jan 2007, Richard Elling wrote:
> To be clear, Sun defines "hot swap" as a device which can be inserted
> or removed without any system administration tasks being required.
>
> Sun defines "hot plug" as a device which can be inserted or removed
> without causing damage or interruption to a running system, but which
> may require system administration. The vast majority of the disks Sun
> sells are hot pluggable.

OK; given the above definitions, could you please confirm one way or the other that the disks in the X2100 are hot pluggable? In other words, if I have a pair of mirrored drives in an X2100 and one of those drives dies, can I take out and replace the defective drive without downtime?

-- 
Rich Teer, SCSA, SCNA, SCSECA, OpenSolaris CAB member

President,
Rite Online Inc.

Voice: +1 (250) 979-1638
URL: http://www.rite-group.com/rich
Toby Thain
2007-Jan-21 02:15 UTC
[zfs-discuss] Re: External drive enclosures + Sun Server for mass storage
On 20-Jan-07, at 8:48 PM, Erik Trimble wrote:
> I believe this also applies to the X2200 M2. Essentially, all the
> low-end x64 servers using SATA have Nvidia chipsets which
> theoretically support hot swap of SATA; as noted, the Windows drivers
> do support this feature, while the Solaris 10 drivers don't.
> Personally, I've always been a bit nervous about using chipset-based
> RAID and expecting hot swap to actually work, particularly with SATA.

Some of us don't give a damn about "chipset RAID" and want hotswap/hotplug drives with SVM and/or ZFS.

Richard Elling wrote:
> To be clear, Sun defines "hot swap" as a device which can be inserted
> or removed without any system administration tasks being required.
>
> Sun defines "hot plug" as a device which can be inserted or removed
> without causing damage or interruption to a running system, but which
> may require system administration. The vast majority of the disks Sun
> sells are hot pluggable.

To be clear: the X2100 drives are neither "hot swap" nor "hot plug" under Solaris. Replacing a failed drive requires a reboot.

--Toby
Toby Thain
2007-Jan-21 02:55 UTC
[zfs-discuss] Re: External drive enclosures + Sun Server for mass storage
On 21-Jan-07, at 12:12 AM, Rich Teer wrote:
> OK; given the above definitions, could you please confirm one way or
> the other that the disks in the X2100 are hot pluggable? In other
> words, if I have a pair of mirrored drives in an X2100 and one of
> those drives dies, can I take out and replace the defective drive
> without downtime?

NO, unless you're running Windows AND "chipset RAID" (or whatever you want to call it). This is easily verified by experiment with Solaris 10. More information is in the links I posted earlier in this thread, or buried in the X2100 documentation.

--Toby
James C. McPherson
2007-Jan-21 09:17 UTC
[zfs-discuss] External drive enclosures + Sun Server for mass storage
Frank Cusack wrote:
> Except that SAS support on Solaris is not very good.
> One major problem is that they treat it like SCSI when instead they
> should treat it like FC (or native SATA).

Uh ... you do know that SAS stands for "Serial Attached SCSI", right? Native SATA is a subset of native SAS, too.

What I'm intrigued by is your assertion that we should treat SAS the same way we treat FC. Would you please expand upon this? I'm really interested in your thoughts..... since I work on Sun's SAS driver :)

I would also like to get some feedback on what you and others would like to see in Sun's SAS support. Not guaranteeing anything, but I'm happy to act as a channel to the relevant people who have signoff on things like this.

cheers,
James C. McPherson
--
Solaris kernel software engineer
Sun Microsystems
Casper.Dik at Sun.COM
2007-Jan-21 10:36 UTC
[zfs-discuss] Re: External drive enclosures + Sun Server for mass storage
>That said, this definition is not always used consistently, as is the
>case with the X2100. I filed a bug against the docs in this case, and
>unfortunately it was closed as "will not fix." :-(

In the context of a hardware platform, it makes little sense to distinguish between hot plug and hot swap. The distinction is purely based on the capabilities of the software.

Casper
Casper.Dik at Sun.COM
2007-Jan-21 10:37 UTC
[zfs-discuss] External drive enclosures + Sun Server for mass storage
>What I gather from this is that today, SATA drives will look like
>either IDE drives or SCSI drives, to some extent. When they look like
>IDE drives, you don't get all of the cfgadm or luxadm management
>options, and you have to do things like hot plug in a
>more-rather-than-less manual mode. When they look like SCSI drives,
>you also get the more-automatic hot plug features.

In the one case the controller is running in compatibility mode; for the other case you'll need the appropriate SATA controller driver.

Casper
Al Hopper
2007-Jan-21 13:38 UTC
[zfs-discuss] External drive enclosures + Sun Server for mass storage
On Sun, 21 Jan 2007, James C. McPherson wrote:
... snip ....
> Would you please expand upon this? I'm really interested in your
> thoughts..... since I work on Sun's SAS driver :)

Hi James - just the man I have a couple of questions for... :)

Will the LSI Logic 3041E-R (4-port internal, SAS/SATA, PCIe) HBA work as a generic ZFS/JBOD SATA controller? There are a few white-box hackers on this list looking for a solid, reliable SATA HBA with a PCIe (PCI Express) connector, rather than the rock-solid Supermicro/Marvell board, which is only available with a 64-bit PCI-X connector at the moment.

Thanks,

Al Hopper  Logical Approach Inc, Plano, TX.  al at logical-approach.com
           Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris.Org Community Advisory Board (CAB) Member - Apr 2005
OpenSolaris Governing Board (OGB) Member - Feb 2006
James C. McPherson
2007-Jan-21 22:43 UTC
[zfs-discuss] External drive enclosures + Sun Server for mass storage
Al Hopper wrote:
> Will the LSI Logic 3041E-R (4-port internal, SAS/SATA, PCIe) HBA work
> as a generic ZFS/JBOD SATA controller?

Hi Al,
according to the 3041E-R two-page PDF, which I found at
http://www.lsi.com/documentation/storage/scg/hbas/sas/lsisas3041e-r_pb.pdf
the SAS ASIC is the LSISAS1064E, which, to the best of my knowledge, is supported by the mpt driver. So the answer to your question is "I don't see why not" :)

That chip is also the onboard controller in the T1000, T2000, Ultra 25 and Ultra 45.

cheers,
James C. McPherson
--
Solaris kernel software engineer
Sun Microsystems
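A quick way to confirm that sort of binding on a live box; a sketch only, since the exact PCI ID a given card reports should be checked with prtconf -pv (LSI's vendor ID is 0x1000):

  # list the PCI IDs the mpt driver claims
  grep mpt /etc/driver_aliases

  # after installing the card, confirm mpt actually attached
  prtconf -D | grep -i mpt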
Frank Cusack
2007-Jan-22 16:15 UTC
[zfs-discuss] External drive enclosures + Sun Server for mass storage
On January 21, 2007 7:38:01 AM -0600 Al Hopper <al at logical-approach.com> wrote:
> Will the LSI Logic 3041E-R (4-port internal, SAS/SATA, PCIe) HBA work
> as a generic ZFS/JBOD SATA controller?

It does (I've used it). Kind of.

When I've had it attached to an external JBOD, it works fine with only one or two drives, but when the JBOD (Promise J300s) is fully populated with 12 drives, it flakes out (I/O errors). Windows had no problems.

It works better with the LSI drivers than with the Sun mpt driver.

Sorry, I don't remember many more details than that. You can search comp.unix.solaris for a thread about it from a few months ago.

It only works on x86.

I ended up selling it and the JBOD; I just couldn't get it working reliably.

-frank
Frank Cusack
2007-Jan-22 16:44 UTC
[zfs-discuss] External drive enclosures + Sun Server for mass storage
On January 22, 2007 8:15:46 AM -0800 Frank Cusack <fcusack at fcusack.com> wrote:
>> Will the LSI Logic 3041E-R (4-port internal, SAS/SATA, PCIe) HBA work
>> as a generic ZFS/JBOD SATA controller?
>
> It does (I've used it). Kind of.

Eh, sorry, I had a 3042E-R; I think that was the model number. Same thing, though, just with 2 external and 2 internal ports instead of 4 internal. I also had the PCI-X version and had the same issues.

-frank
Frank Cusack
2007-Jan-22 16:47 UTC
[zfs-discuss] External drive enclosures + Sun Server for mass storage
On January 20, 2007 6:08:07 PM -0800 Richard Elling <Richard.Elling at Sun.COM> wrote:
> uhmm... SAS is Serial Attached SCSI; why wouldn't we treat it like
> SCSI?

On January 21, 2007 8:17:10 PM +1100 "James C. McPherson" <James.McPherson at Sun.COM> wrote:
> Uh ... you do know that SAS stands for "Serial Attached SCSI", right?

Uh ... you do know that the SCSI part of SAS refers to the command set, right? Not the physical topology and associated things. (Please forgive any terminology errors; you know what I mean.) That seems like saying, "Uh ... you do know that there is no SCSI in FC, right?" (Yet FC is still SCSI.)

> Would you please expand upon this? I'm really interested in your
> thoughts..... since I work on Sun's SAS driver :)

SAS is limited, by the Solaris driver, to 16 devices. Not even that: it's limited to devices with SCSI IDs 0-15, so if you have 16 drives and they start at ID 10, you only get access to 6 of them. But SAS doesn't even really have SCSI target IDs; it has WWN-like identifiers. I guess HBAs do some kind of mapping, but it's not reliable and can change, and inspecting or hardcoding device-to-ID mappings requires changing settings in the card's BIOS/OF. Also, the HBA may renumber devices, which can be a big problem. It would be better to use the SAS address the way the Fibre Channel drivers use the WWN. Drives could still be mapped to SCSI IDs, but it should be done by the Solaris driver, not the HBA. And when multipathing, the names should change as they do with FC.

That's one thing. The other is unreliability with many devices attached. I've talked to others who have had this problem as well. I offered to send my controller(s) and JBOD to Sun for testing, through the support channel (I had a bug open on this for a while), but they didn't want them. I think it came down to the classic "we don't sell that hardware" problem.

The onboard SAS controllers (X4100, V215, etc.) work fine due to the limited topology. I wonder how you fix (hardcode) the SCSI IDs with those, because you're not doing it with a PCI card.

-frank
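For comparison, the FC model being described here is what you get with MPxIO enabled: device names are derived from the LUN's WWN, so they survive HBA renumbering and path changes. A sketch, assuming a Solaris 10 box with FC HBAs (the device name below is illustrative):

  # enable MPxIO for fibre channel HBAs (prompts for a reboot)
  stmsboot -e

  # afterwards, disks are named by a WWN-derived GUID rather than a
  # controller/target number assigned by the HBA, e.g.:
  #   /dev/rdsk/c4t600A0B80002999660000123448500EB1d0s2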
Frank Cusack
2007-Jan-22 17:39 UTC
[zfs-discuss] Re: External drive enclosures + Sun Server for mass storage
On January 21, 2007 12:15:22 AM -0200 Toby Thain <toby at smartgames.ca> wrote:
> To be clear: the X2100 drives are neither "hot swap" nor "hot plug"
> under Solaris. Replacing a failed drive requires a reboot.

Also, adding a drive that wasn't present at boot requires a reboot.

-frank
Richard Elling
2007-Jan-22 18:03 UTC
[zfs-discuss] Re: External drive enclosures + Sun Server for mass storage
Casper.Dik at Sun.COM wrote:
>> That said, this definition is not always used consistently, as is the
>> case with the X2100. I filed a bug against the docs in this case, and
>> unfortunately it was closed as "will not fix." :-(
>
> In the context of a hardware platform, it makes little sense to
> distinguish between hot plug and hot swap. The distinction is purely
> based on the capabilities of the software.

Agree. I filed the bug against the docs with the justification that it confuses customers. The bug was closed, and we continue to have confused customers :-(

Toby Thain wrote:
> To be clear: the X2100 drives are neither "hot swap" nor "hot plug"
> under Solaris. Replacing a failed drive requires a reboot.

I do not believe this is true, though I don't have one to test. If this were true, then we would have had to rewrite the disk drivers to not allow us to open a device more than once, even if we also closed the device. I can't imagine anyone allowing such code to be written.

However, I don't believe this is the context of the issue. I believe that this release note deals with the use of NVRAID (NVidia's MCP RAID controller), which does not have a systems-management interface under Solaris. The solution is to not use NVRAID for Solaris. Rather, use the proven techniques that we've been using for decades to manage hot-plugging drives.

In short, the release note is confusing, so ignore it. Use X2100 disks as hot pluggable, like you've always used hot-plug disks in Solaris.

-- richard
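For reference, that proven technique with a ZFS mirror looks roughly like this; a minimal sketch, assuming the failed half of the mirror is c1t1d0 in a pool named tank, and that the platform really does support hot plug:

  # identify the failed device
  zpool status tank

  # physically swap the drive (using cfgadm to unconfigure/configure
  # the port first, if the controller requires it), then resilver:
  zpool replace tank c1t1d0

  # watch the resilver complete
  zpool status tank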
Frank Cusack
2007-Jan-22 18:54 UTC
[zfs-discuss] Re: External drive enclosures + Sun Server for mass storage
On January 22, 2007 10:03:14 AM -0800 Richard Elling <Richard.Elling at Sun.COM> wrote:
> Toby Thain wrote:
> > To be clear: the X2100 drives are neither "hot swap" nor "hot plug"
> > under Solaris. Replacing a failed drive requires a reboot.
>
> I do not believe this is true, though I don't have one to test.

Well, if you won't accept multiple technically adept people's word on it, I highly suggest you get one to test instead of speculating.

> If this were true, then we would have had to rewrite the disk drivers
> to not allow us to open a device more than once, even if we also closed
> the device. I can't imagine anyone allowing such code to be written.

Obviously you have not rewritten the disk drivers to do this, so this is the wrong line of reasoning.

> However, I don't believe this is the context of the issue. I believe
> that this release note deals with the use of NVRAID (NVidia's MCP RAID
> controller), which does not have a systems-management interface under
> Solaris. [...]

No, the release note is not about NVRAID.

> In short, the release note is confusing, so ignore it. Use X2100 disks
> as hot pluggable, like you've always used hot-plug disks in Solaris.

Again, NO, these drives are not hot pluggable, and the release note is accurate. PLEASE get a system to test. Or take our word for it.

-frank
Jason J. W. Williams
2007-Jan-22 19:02 UTC
[zfs-discuss] Re: External drive enclosures + Sun Server for mass storage
Hi Frank,

I'm sure Richard will check it out. He's a very good guy and not trying to jerk you around. I'm sure the hostility isn't warranted. :-)

Best Regards,
Jason

On 1/22/07, Frank Cusack <fcusack at fcusack.com> wrote:
> Again, NO, these drives are not hot pluggable, and the release note is
> accurate. PLEASE get a system to test. Or take our word for it.
Frank Cusack
2007-Jan-22 19:12 UTC
[zfs-discuss] Re: External drive enclosures + Sun Server for mass storage
I certainly did NOT mean any hostility whatsoever. I highly value what Richard offers in this forum. I'm just frustrated at the misinformation which is being presented as authoritative. Repeatedly.

But to be clear, in my mind Richard is one of the "good ones" and I eagerly read what he has to say, to the point that when he chimes in on a thread I've been ignoring, I start reading it, and I always learn something.

OK, enough of that; back to the bashing! :-)

-frank

On January 22, 2007 12:02:03 PM -0700 "Jason J. W. Williams" <jasonjwwilliams at gmail.com> wrote:
> I'm sure Richard will check it out. He's a very good guy and not
> trying to jerk you around. I'm sure the hostility isn't warranted. :-)
David J. Orman
2007-Jan-22 19:19 UTC
[zfs-discuss] Re: Re: External drive enclosures + Sun Server for mass storage
> Hi Frank,
>
> I'm sure Richard will check it out. He's a very good guy and not
> trying to jerk you around. I'm sure the hostility isn't warranted. :-)

I'm very confused now. Do the X2200 M2s support "hot plug" of drives or not? I can't believe it's that confusing/difficult; they do or they don't. I don't care whether I can just yank a drive out of a running system and have no problems, but I *do* need to be able to swap a failed disk in a mirror without downtime. Does Sun not have an official word on this? I'm losing my faith very rapidly given the lack of a definitive response to this question.

Along the same lines, what is the roadmap for ZFS on boot disks? I've not heard anything about it in quite some time, and Google doesn't yield any current information either.
Frank Cusack
2007-Jan-22 19:28 UTC
[zfs-discuss] Re: External drive enclosures + Sun Server for mass storage
>>> > In short, the release note is confusing, so ignore it. Use X2100
>>> > disks as hot pluggable, like you've always used hot-plug disks in
>>> > Solaris.
>>>
>>> Again, NO, these drives are not hot pluggable, and the release note
>>> is accurate. PLEASE get a system to test. Or take our word for it.

Hmm, I think I may have just figured out the problem here. YES, the X2100 is that bad. I too found it quite hard to believe that Sun would sell this without hot-plug drives. It seems like a step backwards. (And of course I don't mean that the X2100 is awful; it's great hardware and very well priced ... now if only hot plug worked!)

My main issue is that the X2100 is advertised as having working hot plug. You have to dig pretty deep, deeper than would be expected of a "typical" buyer, to find that Solaris does not support it.

-frank
David J. Orman
2007-Jan-22 19:30 UTC
[zfs-discuss] Re: External drive enclosures + Sun Server for mass storage
> Not to be picky, but the X2100 and X2200 series are NOT
> designed/targeted for disk serving (they don't even have redundant
> power supplies). They're compute boxes. The X4100/X4200 are what you
> are looking for to get a flexible box more oriented towards disk I/O
> and expansion.

I don't see those as being any better suited to external disks, other than:

#1 - They have the capacity for redundant PSUs, which is irrelevant to my needs.
#2 - They only have PCI Express slots, and I can't find any good external SATA interface cards on PCI Express.

I can't wrap my head around the idea that I should buy a lot more than I need when it still doesn't serve my purposes. The four disks in an X4100 still aren't enough, and the machine is a fair amount more costly. I just need mirrored boot drives and an external disk array.

> That said (if you're set on an X2200 M2), you are probably better off
> getting a PCI-E SCSI controller and attaching it to an external
> SCSI->SATA JBOD. [...] So the cost of SCSI vs eSATA as the host attach
> is a rounding error.

I understand your comments in some ways; in others I do not. It sounds like we're moving backwards in time. Exactly why is SCSI "better" than SAS/SATA for external devices? In my experience (with other OSes/hardware platforms), the opposite is true. A nice SAS/SATA controller with external ports (especially one that carries multiple SAS/SATA drives over one cable, whichever tech you use) works wonderfully for me, and I get a nice thin, clean cable, which makes cable management much more "enjoyable" in higher-density situations.

I also don't agree with the logic of "just spend a mere $300 extra to use older technology!" $300 may not be much to a large business, but things like this nickel-and-dime small-business owners. There are a lot of things I'd prefer to spend $300 on rather than an expensive SCSI HBA which offers no advantages over a SAS counterpart; in fact, it offers disadvantages instead.

Your input is of course highly valued, and it's quite possible I'm missing an important piece of the puzzle somewhere here, but I am not convinced this is the ideal solution; it seems like a "stick with the old stuff, it's easier" solution, which I am very much against.

Thanks,
David
Frank Cusack
2007-Jan-22 19:31 UTC
[zfs-discuss] Re: Re: External drive enclosures + Sun Server for mass storage
On January 22, 2007 11:19:40 AM -0800 "David J. Orman"
<ormandj at corenode.com> wrote:
> I''m very confused now. Do the x2200m2s support "hot plug" of drives
> or not? I can''t believe it''s that confusing/difficult. They do or
> they don''t.

Running Solaris, they do not.

> I don''t care if I can just yank a drive in a running system out and
> have no problems, but I *do* need to be able to swap a failed disk
> in a mirror without downtime.

Then the x2100/x2200 is not for you in a standard configuration. You
might be able to find a PCI-E sata card and use that instead of the
onboard SATA. I''m hoping to find such a card.

-frank
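For reference, on hardware where hot plug does work, replacing the
failed half of a ZFS mirror needs no reboot. A rough sketch - the pool
name, attachment point, and device name below are made up for
illustration, not taken from any particular system:

    # confirm which side of the mirror failed
    zpool status tank

    # unconfigure the dead disk, swap it physically, reconfigure
    cfgadm -c unconfigure sata0/1
    cfgadm -c configure sata0/1

    # resilver onto the replacement (same slot, so a one-device replace)
    zpool replace tank c1t1d0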
Jason J. W. Williams
2007-Jan-22 19:38 UTC
[zfs-discuss] Re: External drive enclosures + Sun Server for mass storage
Hi David,

Depending on the I/O you''re doing, the X4100/X4200 are much better
suited because of the dual HyperTransport buses. As a storage box with
GigE outputs you''ve got a lot more I/O capacity with two HT buses than
one. That plus the X4100 is just a more solid box. The X2100 M2, while
a vast improvement over the X2100 in terms of reliability and
features, is still an OEM''d whitebox. We use the X2100 M2s for
application servers, but for anything that needs solid reliability or
I/O we go Galaxy.

Best Regards,
Jason

On 1/22/07, David J. Orman <ormandj at corenode.com> wrote:
> > Not to be picky, but the X2100 and X2200 series are NOT
> > designed/targeted for disk serving (they don''t even have redundant
> > power supplies). They''re compute-boxes. The X4100/X4200 are what
> > you are looking for to get a flexible box more oriented towards
> > disk i/o and expansion.
>
> I don''t see those as being any better suited to external disks other
> than:
>
> #1 - They have the capacity for redundant PSUs, which is irrelevant
> to my needs.
> #2 - They only have PCI Express slots, and I can''t find any good
> external SATA interface cards on PCI Express.
>
> I can''t wrap my head around the idea that I should buy a lot more
> than I need, which still doesn''t serve my purposes. The 4 disks in an
> x4100 still aren''t enough, and the machine is a fair amount more
> costly. I just need mirrored boot drives and an external disk array.
>
> > That said (if you''re set on an X2200 M2), you are probably better
> > off getting a PCI-E SCSI controller, and then attaching it to an
> > external SCSI->SATA JBOD. There are plenty of external JBODs out
> > there which use Ultra320/Ultra160 as a host interface and SATA as a
> > drive interface. Sun will sell you a supported SCSI controller with
> > the X2200 M2 (the "Sun StorageTek PCI-E Dual Channel Ultra320 SCSI
> > HBA").
> >
> > SCSI is far better for a host attachment mechanism than eSATA if
> > you plan on doing more than a couple of drives, which it sounds
> > like you are. While the SCSI HBA is going to cost quite a bit more
> > than an eSATA HBA, the external JBODs run about the same, and the
> > total difference is going to be $300 or so across the whole setup
> > (which will cost you $5000 or more fully populated). So the cost to
> > use SCSI vs eSATA as the host-attach is a rounding error.
>
> I understand your comments in some ways, in others I do not. It
> sounds like we''re moving backwards in time. Exactly why is SCSI
> "better" than SAS/SATA for external devices? From my experience (with
> other OSs/hardware platforms) the opposite is true. A nice SAS/SATA
> controller with external ports (especially those that allow multiple
> SAS/SATA drives via one cable - whichever tech you use) works
> wonderfully for me, and I get a nice thin/clean cable which makes
> cable management much more "enjoyable" in higher density situations.
>
> I also don''t agree with the logic of "just spend a mere $300 extra to
> use older technology!"
>
> $300 may not be much to a large business, but things like this
> nickel-and-dime small business owners. There''s a lot of things I''d
> prefer to spend $300 on than an expensive SCSI HBA that offers no
> advantages over a SAS counterpart - in fact, it offers disadvantages
> instead.
>
> Your input is of course highly valued, and it''s quite possible I''m
> missing an important piece of the puzzle somewhere here, but I am not
> convinced this is the ideal solution - simply a "stick with the old
> stuff, it''s easier" solution, which I am very much against.
>
> Thanks,
> David
>
>
> This message posted from opensolaris.org
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
David J. Orman
2007-Jan-22 19:38 UTC
[zfs-discuss] Re: Re: Re: External drive enclosures + Sun Server for mass storage
> On January 22, 2007 11:19:40 AM -0800 "David J. Orman"
> <ormandj at corenode.com> wrote:
> > I''m very confused now. Do the x2200m2s support "hot plug" of
> > drives or not? I can''t believe it''s that confusing/difficult. They
> > do or they don''t.
>
> Running Solaris, they do not.

Wow. What was/is Sun thinking here? Glad I asked the question by
happenstance; this makes the X2* series a total waste to purchase.

> > I don''t care if I can just yank a drive in a running system out
> > and have no problems, but I *do* need to be able to swap a failed
> > disk in a mirror without downtime.
>
> Then the x2100/x2200 is not for you in a standard configuration. You
> might be able to find a PCI-E sata card and use that instead of the
> onboard SATA. I''m hoping to find such a card.

I''m not going to pay for hardware that can''t handle very basic things
such as mirrored boot drives on the vendor-provided OS. That''s insane.

Guess it''s time to investigate Supermicro and Tyan solutions,
startup-essentials program or not - that makes no hardware sense.

Who do I gripe to concerning this (we''re starting to stray from
discussion pertinent to this list...)? Would I gripe to my sales rep?

Thanks for the clarity,
David


This message posted from opensolaris.org
Jason J. W. Williams
2007-Jan-22 19:42 UTC
[zfs-discuss] Re: External drive enclosures + Sun Server for mass storage
Hi Guys,

The original X2100 was a pile of doggie doo-doo. All of our problems
with it go back to the atrocious quality of the nForce 4 Pro chipset.
The NICs in particular are just crap. The M2s are better, but the
MCP55 chipset has not resolved all of its flakiness issues.

That being said, Sun designed that case with hot-plug bays; if Solaris
isn''t going to support them, then they shouldn''t be there, in my
opinion.

Best Regards,
Jason

On 1/22/07, Frank Cusack <fcusack at fcusack.com> wrote:
> >>> > In short, the release note is confusing, so ignore it. Use
> >>> > x2100 disks as hot pluggable like you''ve always used hot plug
> >>> > disks in Solaris.
> >>>
> >>> Again, NO these drives are not hot pluggable and the release note
> >>> is accurate. PLEASE get a system to test. Or take our word for
> >>> it.
>
> hmm, I think I may have just figured out the problem here.
>
> YES, the x2100 is that bad. I too found it quite hard to believe that
> Sun would sell this without hot plug drives. It seems like a step
> backwards.
>
> (and of course I don''t mean that the x2100 is awful, it''s great
> hardware and very well priced ... now if only hot plug worked!)
>
> My main issue is that the x2100 is advertised as supporting hot plug.
> You have to dig pretty deep -- deeper than would be expected of a
> "typical" buyer -- to find that Solaris does not support it.
>
> -frank
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
Frank Cusack
2007-Jan-22 19:42 UTC
[zfs-discuss] External drive enclosures + Sun Server for mass storage
On January 19, 2007 10:01:43 PM -0800 Dan Mick <dan.mick at sun.com> wrote:
> Scouting around a bit, I see SIIG has a 3132 chip, for which they
> make a card, eSATA II, available in PCIe and PCIe ExpressCard
> formfactors. I can''t promise, but chances seem good that it''s
> supported by si3124 driver in Solaris:
>
> si3124 "pci1095,3124"
> si3124 "pci1095,3132"
>
> Street price for the PCIe card is $30-35.

Myself, I''d just like to have internal SATA with hot plug support.
(I''m using FC for external storage.) I''ve only found cards like this:

<http://www.cdw.com/shop/products/default.aspx?EDC=1070554>

which is $57. Could you share where I might find one for $30?

-frank
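Whether the si3124 driver will actually claim a given card can be
checked against the alias table that ships with Solaris. A quick
sketch - this assumes the card really does carry the 3132 PCI ID:

    # is the PCI ID already bound to the driver?
    grep si3124 /etc/driver_aliases

    # if not, an alias can be added by hand, then device nodes rebuilt
    update_drv -a -i '"pci1095,3132"' si3124
    devfsadm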
David J. Orman
2007-Jan-22 19:50 UTC
[zfs-discuss] Re: Re: External drive enclosures + Sun Server for mass storage
> Hi David,
>
> Depending on the I/O you''re doing, the X4100/X4200 are much better
> suited because of the dual HyperTransport buses. As a storage box
> with GigE outputs you''ve got a lot more I/O capacity with two HT
> buses than one. That plus the X4100 is just a more solid box.

That much makes sense, thanks for clearing that up.

> The X2100 M2, while a vast improvement over the X2100 in terms of
> reliability and features, is still an OEM''d whitebox. We use the
> X2100 M2s for application servers, but for anything that needs solid
> reliability or I/O we go Galaxy.

Ahh. That explains a lot. Thank you once again!

Sounds like the X2* is the red-headed stepchild of Sun''s product line.
They should slap disclaimers up on the product information pages so we
know better than to purchase into something that doesn''t fully
function.

Still unclear on the SAS/SATA solutions, but hopefully that''ll
progress further now in the thread.

Cheers,
David


This message posted from opensolaris.org
Frank Cusack
2007-Jan-22 19:52 UTC
[zfs-discuss] Re: Re: Re: External drive enclosures + Sun Server for mass storage
On January 22, 2007 11:38:35 AM -0800 "David J. Orman"
<ormandj at corenode.com> wrote:
> Guess it''s time to investigate Supermicro and Tyan solutions,
> startup-essentials program or not - that makes no hardware sense.

I know it seems ridiculous to HAVE to buy a 3rd party card, but come
on, it is only $50 or so. Assuming you don''t need both pci slots for
other uses.

I personally wouldn''t want to deal with "PC" hardware suppliers
directly. Putting together and maintaining those kinds of systems is a
PITA. The $50 is worth it. Assuming it will work.

Especially under the startup program you''re going to have as good or
better prices from Sun, and good support.

-frank
Jason J. W. Williams
2007-Jan-22 19:57 UTC
[zfs-discuss] Re: Re: External drive enclosures + Sun Server for mass storage
Hi David,

Glad to help! I don''t want to bad-mouth the X2100 M2s that much,
because they have been solid. I believe the M2s are made/designed just
for Sun by Quanta Computer (http://www.quanta.com.tw/e_default.htm),
whereas the mobo in the original X2100 was a Tyan Tiger with some
slight modifications. That all being said, the problem is that Nvidia
chipset. The MCP55 in the X2100 M2 is an alright chipset; the nForce 4
Pro just had bugs.

Best Regards,
Jason

On 1/22/07, David J. Orman <ormandj at corenode.com> wrote:
> > Hi David,
> >
> > Depending on the I/O you''re doing, the X4100/X4200 are much better
> > suited because of the dual HyperTransport buses. As a storage box
> > with GigE outputs you''ve got a lot more I/O capacity with two HT
> > buses than one. That plus the X4100 is just a more solid box.
>
> That much makes sense, thanks for clearing that up.
>
> > The X2100 M2, while a vast improvement over the X2100 in terms of
> > reliability and features, is still an OEM''d whitebox. We use the
> > X2100 M2s for application servers, but for anything that needs
> > solid reliability or I/O we go Galaxy.
>
> Ahh. That explains a lot. Thank you once again!
>
> Sounds like the X2* is the red-headed stepchild of Sun''s product
> line. They should slap disclaimers up on the product information
> pages so we know better than to purchase into something that doesn''t
> fully function.
>
> Still unclear on the SAS/SATA solutions, but hopefully that''ll
> progress further now in the thread.
>
> Cheers,
> David
>
>
> This message posted from opensolaris.org
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
David J. Orman
2007-Jan-22 20:03 UTC
[zfs-discuss] Re: Re: Re: Re: External drive enclosures + Sun Server for mass storage
> I know it seems ridiculous to HAVE to buy a 3rd party card, but come
> on, it is only $50 or so. Assuming you don''t need both pci slots for
> other uses.

I do. Two would have gone to external access for a JBOD (if that ever
gets sorted out, haha) - most external adapters seem to support 4
disks.

> I personally wouldn''t want to deal with "PC" hardware suppliers
> directly.

Neither would I, hence looking to Sun. :)

> Putting together and maintaining those kinds of systems is a PITA.

Well, the Supermicro and Tyan systems generally are not.

> The $50 is worth it. Assuming it will work.

Herein lies the problem, more following...

> Especially under the startup program you''re going to have as good or
> better prices from Sun,

With the program, the prices are still more than I would pay from
Supermicro/Tyan, but they are acceptably higher as the
integration/support would be much better, of course. Except, this does
not seem to be the case on the X2* series.

> and good support.

Here is the big problem. I''d be buying a piece of Sun hardware
specifically for this reason, already paying more (even with the
startup essentials program) - but do you think Sun is going to support
that SAS/SATA controller I bought? If something doesn''t work, or later
gets broken (for example, the driver disappears/breaks in a later
version of Solaris) - what will I do then? Nothing. :) Might as well
buy whitebox if I''m going to build the system out in a whitebox way.
;)

I''d much prefer Sun products, however - I just expect them to support
Sun''s flagship OS, and be supported fully. I''m going to look into the
X4* series assuming they don''t have such problems with supported boot
disk mirroring/hot plugging/etc.

Thanks,
David


This message posted from opensolaris.org
Frank Cusack
2007-Jan-22 20:29 UTC
[zfs-discuss] Re: Re: Re: Re: External drive enclosures + Sun Server for mass storage
On January 22, 2007 12:03:51 PM -0800 "David J. Orman"
<ormandj at corenode.com> wrote:
> I''d much prefer Sun products, however - I just expect them to support
> Sun''s flagship OS, and be supported fully. I''m going to look into the
> X4* series assuming they don''t have such problems with supported boot
> disk mirroring/hot plugging/etc.

I have had great success with an x4100. It just works. I wish it had
OBP or EFI instead of the BIOS, but whatever.

-frank
Frank Cusack
2007-Jan-22 20:40 UTC
[zfs-discuss] Re: Re: Re: Re: External drive enclosures + Sun Server for mass storage
On January 22, 2007 12:03:51 PM -0800 "David J. Orman"
<ormandj at corenode.com> wrote:
>> I know it seems ridiculous to HAVE to buy a 3rd party card, but come
>> on, it is only $50 or so. Assuming you don''t need both pci slots for
>> other uses.
>
> I do. Two would have gone to external access for a JBOD (if that ever
> gets sorted out, haha) - most external adapters seem to support 4
> disks.

You can''t actually use those adapters in the x2100/x2200 or even the
x4100/x4200. The slots are "MD2" low profile slots and the 4 port
adapters require a full height slot. Even the x4600 only has MD2
slots. So you can only use 2 port adapters. I think there are esata
cards that use the infiniband (SAS style) connector, which will fit in
an MD2 slot and still access 4 drives, but I''m not aware of any that
Solaris supports.

Unfortunately, Solaris does not support SATA port multipliers (yet) so
I think you''re pretty limited in how many esata drives you can
connect.

External SAS is pretty much a non-starter on Solaris (today) so I
think you''re left with iscsi or FC if you need more than just a few
drives and you want to use Sun servers instead of building your own.

-frank
David J. Orman
2007-Jan-22 20:45 UTC
[zfs-discuss] Re: Re: Re: Re: Re: External drive enclosures + Sun Server for mass storage
> You can''t actually use those adapters in the x2100/x2200 or even the
> x4100/x4200. The slots are "MD2" low profile slots and the 4 port
> adapters require a full height slot. Even the x4600 only has MD2
> slots. So you can only use 2 port adapters. I think there are esata
> cards that use the infiniband (SAS style) connector, which will fit
> in an MD2 slot and still access 4 drives, but I''m not aware of any
> that Solaris supports.

Fair enough. :)

> Unfortunately, Solaris does not support SATA port multipliers (yet)
> so I think you''re pretty limited in how many esata drives you can
> connect.

Gotcha.

> External SAS is pretty much a non-starter on Solaris (today) so I
> think you''re left with iscsi or FC if you need more than just a few
> drives and you want to use Sun servers instead of building your own.

iSCSI is interesting to me. Are there any JBOD iSCSI external arrays
that would allow me to use SAS/SATA drives? I''d actually prefer this
to eSATA, as network cable is even more easily dealt with. Toss in one
of the dual/quad gigabit cards and run iSCSI to a JBOD filled with
SATA/SAS drives == winning solution for me. 4gbit via network,
avoiding all of the expense of FC, is nothing to sniff at.

Would this still be workable with ZFS? Ideally, I''d like 8-10 drives,
running RaidZ2. Know of any products out there I should be looking at,
in terms of the hardware enclosure/iSCSI interface for the drives?

David


This message posted from opensolaris.org
Frank Cusack
2007-Jan-22 20:53 UTC
[zfs-discuss] Re: Re: Re: Re: Re: External drive enclosures + Sun Server for mass storage
On January 22, 2007 12:45:29 PM -0800 "David J. Orman"
<ormandj at corenode.com> wrote:
>> External SAS is pretty much a non-starter on Solaris (today) so I
>> think you''re left with iscsi or FC if you need more than just a few
>> drives and you want to use Sun servers instead of building your own.

I should add: or scsi of course.

> iSCSI is interesting to me. Are there any JBOD iSCSI external arrays
> that would allow me to use SAS/SATA drives?

There''s a few. I''m thinking about a Promise M500i. They make smaller
ones also. Not 100% sure though; I might just stick with FC.

> I''d actually prefer this to eSATA, as network cable is even more
> easily dealt with. Toss in one of the dual/quad gigabit cards and run
> iSCSI to a JBOD filled with SATA/SAS drives == winning solution for
> me. 4gbit via network, avoiding all of the expense of FC, is nothing
> to sniff at.

The Promise only has 2 gbit ports, and I''m not sure if they can load
balance. So if you want multi-gigabit performance you should look to
other enclosures.

> Would this still be workable with ZFS?

My understanding is yes, but I haven''t done this personally yet.

> Know of any products out there

Promise, DNF, and Nexsan are the ones I know of. Oh, I think Adaptec
might have one also.

-frank
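For what it''s worth, ZFS over the Solaris iSCSI initiator is
straightforward once the array presents each disk as its own LUN. A
minimal sketch - the portal address and device names are placeholders,
not taken from any particular enclosure:

    # point the initiator at the array, enable SendTargets discovery
    iscsiadm add discovery-address 192.168.0.50:3260
    iscsiadm modify discovery --sendtargets enable

    # create device nodes for the discovered LUNs
    devfsadm -i iscsi

    # build the raidz2 pool across the exported disks
    zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0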
mike
2007-Jan-22 21:12 UTC
[zfs-discuss] Re: External drive enclosures + Sun Server for mass storage
Areca makes excellent PCI Express cards - but they probably have zero
support in Solaris/OpenSolaris. I use them in both Windows and Linux.
They work natively in FreeBSD too. I believe they''re still the fastest
cards on the market. However, they''re probably not very appropriate
for this, since it''s a Solaris-based OS :(

On 1/22/07, David J. Orman <ormandj at corenode.com> wrote:
> #2 - They only have PCI Express slots, and I can''t find any good
> external SATA interface cards on PCI Express.
Toby Thain
2007-Jan-22 21:14 UTC
[zfs-discuss] Re: External drive enclosures + Sun Server for mass storage
On 22-Jan-07, at 5:28 PM, Frank Cusack wrote:
>>>> > In short, the release note is confusing, so ignore it. Use
>>>> > x2100 disks as hot pluggable like you''ve always used hot plug
>>>> > disks in Solaris.

Won''t work - some of us have tested it.

>>>>
>>>> Again, NO these drives are not hot pluggable and the release
>>>> note is accurate. PLEASE get a system to test. Or take our word
>>>> for it.
>
> hmm, I think I may have just figured out the problem here.
>
> YES, the x2100 is that bad. I too found it quite hard to believe that
> Sun would sell this without hot plug drives. It seems like a step
> backwards.
>
> (and of course I don''t mean that the x2100 is awful, it''s great
> hardware and very well priced ... now if only hot plug worked!)
>
> My main issue is that the x2100 is advertised as supporting hot plug.

Agree 100% with the above. In all other respects I like the X2100.
I''ve come to accept the lack of hotswap as an indirect consequence of
market segmentation, but it would be great if it worked (like Frank, I
saw nothing to indicate the contrary until someone pointed me to fine
print in the release notes, long after we purchased).

--Toby

> You have to dig pretty deep -- deeper than would be expected of a
> "typical" buyer -- to find that Solaris does not support it.
>
> -frank
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Toby Thain
2007-Jan-22 21:16 UTC
[zfs-discuss] Re: External drive enclosures + Sun Server for mass storage
On 22-Jan-07, at 4:03 PM, Richard Elling wrote:
> Casper.Dik at Sun.COM wrote:
>>> That said, this definition is not always used consistently, as is
>>> the case with the x2100. I filed a bug against the docs in this
>>> case, and unfortunately it was closed as "will not fix." :-(
>>
>> In the context of a hardware platform it makes little sense to
>> distinguish between hot-plug and hot-swap. The distinction is purely
>> based on the capabilities of the software.
>
> Agree. I filed the bug against the docs with the justification that
> it confuses customers. The bug was closed and we continue to have
> confused customers :-(
>
> Toby Thain wrote:
> > To be clear: the X2100 drives are neither "hotswap" nor "hotplug"
> > under Solaris. Replacing a failed drive requires a reboot.
>
> I do not believe this is true, though I don''t have one to test.

This error has been sufficiently addressed in later posts, I think...

> If this were true, then we would have had to rewrite the disk
> drivers to not allow us to open a device more than once, even if we
> also closed the device. I can''t imagine anyone allowing such code to
> be written.
>
> However, I don''t believe this is the context of the issue. I believe
> that this release note deals with the use of NVRAID (NVidia''s MCP
> RAID controller) which does not have a systems management interface
> under Solaris. The solution is to not use NVRAID for Solaris.
> Rather, use the proven techniques that we''ve been using for decades
> to manage hot plugging drives.

I have no interest in NVRAID whatsoever. I use SVM and ZFS.
Furthermore, NVRAID is the only method that *does* allow hotswap on
the X2100! (Bizarrely, only with Windows, which is of course useless
to me too.)

--Toby

> In short, the release note is confusing, so ignore it. Use x2100
> disks as hot pluggable like you''ve always used hot plug disks in
> Solaris.
> -- richard
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
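Since mirrored boot keeps coming up in this thread: with ZFS root not
yet an option, SVM is the stock way to mirror the boot disks under
Solaris. A condensed sketch, assuming two identically sliced disks
with s7 reserved for the state databases:

    # state database replicas on both disks
    metadb -a -f -c2 c0t0d0s7 c0t1d0s7

    # submirrors for root, then the one-way mirror
    metainit -f d10 1 1 c0t0d0s0
    metainit d20 1 1 c0t1d0s0
    metainit d0 -m d10

    # point /etc/vfstab and /etc/system at the mirror, then reboot
    metaroot d0

    # after the reboot, attach the second half and let it sync
    metattach d0 d20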
mike
2007-Jan-22 21:17 UTC
[zfs-discuss] Re: Re: Re: Re: External drive enclosures + Sun Server for mass storage
I''m dying here - does anyone know when, or even if, they will support
these?

I had this whole setup planned out, but it requires eSATA + port
multipliers.

I want to use ZFS, but currently cannot in that fashion. I''d still
have to buy some [more expensive, noisier, bulky internal drive]
solution for ZFS. Unless anyone has other ideas. I''m looking to run a
5-10 drive system (with easy ability to expand) in my home office; not
in a datacenter.

Even opening up to iSCSI seems to not get me much - there aren''t any
SOHO type NAS enclosures that act as iSCSI targets. There are however
handfuls of eSATA based 4, 5, and 10 drive enclosures perfect for
this... but all require the port multiplier support.

On 1/22/07, Frank Cusack <fcusack at fcusack.com> wrote:
> Unfortunately, Solaris does not support SATA port multipliers (yet)
> so I think you''re pretty limited in how many esata drives you can
> connect.
James C. McPherson
2007-Jan-22 21:53 UTC
SAS support on Solaris, was Re: [zfs-discuss] External drive enclosures + Sun Server for mass storage
Hi Frank,

Frank Cusack wrote:
> On January 20, 2007 6:08:07 PM -0800 Richard Elling
>> Frank Cusack wrote:
>>> On January 19, 2007 5:59:13 PM -0800 "David J. Orman"
>>>> card that supports SAS would be *ideal*,
>>> Except that SAS support on Solaris is not very good.
>>> One major problem is they treat it like scsi when instead they
>>> should treat it like FC (or native SATA).
>> uhmm... SAS is serial attached SCSI, why wouldn''t we treat it like
>> SCSI?
> On January 21, 2007 8:17:10 PM +1100 "James C. McPherson"
>> Uh ... you do know that the second "S" in SAS stands for
>> "serial-attached SCSI", right?
> Uh ... you do know that the SCSI part of SAS refers to the command
> set, right? And not the physical topology and associated things.
> (Please forgive any terminology errors, you know what I mean.)
> That seems like saying, "Uh ... you do know that there is no SCSI in
> FC, right?" (Yet FC is still SCSI.)

Sorry, I should have been more specific there. I was responding to
your "(or native SATA)" comment.

>> Would you please expand upon this, because I''m really interested
>> in what your thoughts are..... since I work on Sun''s SAS driver :)
> SAS is limited, by the Solaris driver, to 16 devices.

Correct.

> Not even that, it''s limited to devices with SCSI id''s 0-15, so if
> you have 16 drives and they start at id 10, well you only get access
> to 6 of them.

Why would you start your numbering at 10?

> But SAS doesn''t even really have scsi target id''s. It has WWN-like
> identifiers. I guess HBAs do some kind of mapping but it''s not
> reliable and can change, and inspecting or hardcoding device->id
> mappings requires changing settings in the card''s BIOS/OF.

SAS has WWNs because that is what the standard requires. SAS hba
implementors are free to map WWNs to relatively user-friendly
identifiers, which is what the LSI SAS1064/SAS1064E chips do.

> Also, the HBA may renumber devices. That can be a big problem.

Agreed. No argument there!

> It would be better to use the SASAddress the way the fibre channel
> drivers use the WWN. Drives could still be mapped to scsi id''s, but
> it should be done by the Solaris driver, not the HBA. And when
> multipathing the names should change like with FC.

That too is my preference. We''re currently working on multipathing
with SAS.

> That''s one thing. The other is unreliability with many devices
> attached. I''ve talked to others that have had this problem as well.
> I offered to send my controller(s) and JBOD to Sun for testing,
> through the support channel (I had a bug open on this for awhile),
> but they didn''t want it. I think it came down to the classic "we
> don''t sell that hardware" problem. The onboard SAS controllers
> (x4100, v215 etc) work fine due to the limited topology. I wonder
> how you fix (hardcode) the scsi id''s with those. Because you''re not
> doing it with a PCI card.

With a physically limited topology numbering isn''t an issue because
of the way that the ports are connected to the onboard devices. It''s
external devices (requiring a plugin hba) where it''s potentially a
problem. Of course, to fully exploit that situation you''d need to
have 64K addressable targets attached to a single controller, and
that hasn''t happened yet. So we do have a window of opportunity :)

best regards,
James C. McPherson
--
Solaris kernel software engineer
Sun Microsystems
Dan Mick
2007-Jan-22 22:28 UTC
[zfs-discuss] Re: External drive enclosures + Sun Server for mass storage
Casper.Dik at sun.com wrote:
>> That said, this definition is not always used consistently, as is
>> the case with the x2100. I filed a bug against the docs in this
>> case, and unfortunately it was closed as "will not fix." :-(
>
> In the context of a hardware platform it makes little sense to
> distinguish between hot-plug and hot-swap. The distinction is purely
> based on the capabilities of the software.

Well, back when I tried (in vain) to apply some common terminology to
this, there were SCSI backplanes that had sequenced logic-vs-power
connections on insert vs. remove, and had "generate an interrupt on
insert or remove" capability... and there were backplanes that did
not. The former class was maybe kinda practical to support unassisted
"surprise" plugging. The latter made it impossible.

I''m sure no one knows what their hardware capabilities ever are,
because the industry has completely failed to come up with sane
nomenclature for the hardware capabilities... and then we multiply
that confusion by having no sane nomenclature for OS capabilities
either, and the OS capabilities are never discussed as though they
depend on the hardware, which, of course, they do.
Dan Mick
2007-Jan-23 01:39 UTC
[zfs-discuss] External drive enclosures + Sun Server for mass storage
Frank Cusack wrote:
> On January 19, 2007 10:01:43 PM -0800 Dan Mick <dan.mick at sun.com> wrote:
>> Scouting around a bit, I see SIIG has a 3132 chip, for which they
>> make a card, eSATA II, available in PCIe and PCIe ExpressCard
>> formfactors. I can''t promise, but chances seem good that it''s
>> supported by si3124 driver in Solaris:
>>
>> si3124 "pci1095,3124"
>> si3124 "pci1095,3132"
>>
>> Street price for the PCIe card is $30-35.
>
> Myself, I''d just like to have internal SATA with hot plug support.
> (I''m using FC for external storage.) I''ve only found cards like this:
>
> <http://www.cdw.com/shop/products/default.aspx?EDC=1070554>
>
> which is $57. Could you share where I might find one for $30?
>
> -frank

I went to Froogle.com and searched for eSataII. That led me to, among
others, this:

http://froogle.google.com/froogle_cluster?q=SIIG+eSata+II+PCI&btnG=Search&lmode=online&oid=14674630309093109908

but that shows a PCI card, even though it says PCIe... so there may be
some confusion. CDW isn''t where I''d generally look for low prices,
though.
Frank Cusack
2007-Jan-23 03:07 UTC
[zfs-discuss] Re: External drive enclosures + Sun Server for mass storage
On January 22, 2007 12:12:19 PM -0600 Brian Hechinger
<wonko at ia64.int.dittman.net> wrote:
> On Mon, Jan 22, 2007 at 09:39:19AM -0800, Frank Cusack wrote:
>> On January 21, 2007 12:15:22 AM -0200 Toby Thain <toby at smartgames.ca>
>> wrote:
>> > To be clear: the X2100 drives are neither "hotswap" nor "hotplug"
>> > under Solaris. Replacing a failed drive requires a reboot.
>>
>> Also, adding a drive that wasn''t present at boot requires a reboot.
>
> This couldn''t possibly be true, unless we''ve taken major steps
> backwards, as this has always been possible (at least on sparc).

It is true. Try it.

[Sorry to send a reply to a personal mail back to the list, but your
email address bounces:

450 <wonko at ia64.int.dittman.net>: Recipient address rejected: Domain
not found]

-frank
Frank Cusack
2007-Jan-23 04:17 UTC
SAS support on Solaris, was Re: [zfs-discuss] External drive enclosures + Sun Server for mass storage
On January 23, 2007 8:53:30 AM +1100 "James C. McPherson"
<James.McPherson at Sun.COM> wrote:
>
> Hi Frank,
>
> Frank Cusack wrote:
>>> Would you please expand upon this, because I''m really interested
>>> in what your thoughts are..... since I work on Sun''s SAS driver :)
>> SAS is limited, by the Solaris driver, to 16 devices.
>
> Correct.
>
>> Not even that, it''s limited to devices with SCSI id''s 0-15, so if
>> you have 16 drives and they start at id 10, well you only get
>> access to 6 of them.
>
> Why would you start your numbering at 10?

Because you don''t have a choice. It is up to the HBA, and getting it
to do the right thing (ie, what you want) isn''t always easy. IIRC,
the LSI Logic HBA(s) I had would automatically remember SASAddress-to-
SCSI-ID mappings. So if you had attached 16 drives, removed one and
replaced it with a different one (even in a JBOD, ie it would be
attached to the same PHY), it would be id 16, because the first 16
scsi id''s (0-15) were already accounted for. And then the new drive,
let''s call it a replacement for a failed drive, would be inaccessible
under Solaris.

Why it would ever start at something other than 0, I''m not sure. I
also kind of remember that scsi.conf had some setting to map the HBA
to target 7 (which doesn''t apply to SAS! yet the reference there was
specifically for LSI 1068. again IIRC). I think that I was seeing
that drives started at 8 because of this initialization, and that
removing it allowed the drives to start at 0 -- once I reset the HBA
BIOS to forget the mappings it had already made.

>> But SAS doesn''t even really have scsi target id''s. It has WWN-like
>> identifiers. I guess HBAs do some kind of mapping but it''s not
>> reliable and can change, and inspecting or hardcoding device->id
>> mappings requires changing settings in the card''s BIOS/OF.
>
> SAS has WWNs because that is what the standard requires. SAS hba
> implementors are free to map WWNs to relatively user-friendly
> identifiers, which is what the LSI SAS1064/SAS1064E chips do.
>
>> Also, the HBA may renumber devices. That can be a big problem.
>
> Agreed. No argument there!
>
>> It would be better to use the SASAddress the way the fibre channel
>> drivers use the WWN. Drives could still be mapped to scsi id''s, but
>> it should be done by the Solaris driver, not the HBA. And when
>> multipathing the names should change like with FC.
>
> That too is my preference. We''re currently working on multipathing
> with SAS.

That is good to hear.

>> That''s one thing. The other is unreliability with many devices
>> attached. I''ve talked to others that have had this problem as well.
>> I offered to send my controller(s) and JBOD to Sun for testing,
>> through the support channel (I had a bug open on this for awhile),
>> but they didn''t want it. I think it came down to the classic "we
>> don''t sell that hardware" problem. The onboard SAS controllers
>> (x4100, v215 etc) work fine due to the limited topology. I wonder
>> how you fix (hardcode) the scsi id''s with those. Because you''re not
>> doing it with a PCI card.
>
> With a physically limited topology numbering isn''t an issue because
> of the way that the ports are connected to the onboard devices. It''s
> external devices (requiring a plugin hba) where it''s potentially a
> problem. Of course, to fully exploit that situation you''d need to
> have 64K addressable targets attached to a single controller, and
> that hasn''t happened yet. So we do have a window of opportunity :)

I believe SAS supports a maximum of 128 devices per controller,
including multipliers.

-frank
James C. McPherson
2007-Jan-23 04:38 UTC
SAS support on Solaris, was Re: [zfs-discuss] External drive enclosures + Sun Server for mass storage
Frank Cusack wrote:
> On January 23, 2007 8:53:30 AM +1100 "James C. McPherson"
...
>> Why would you start your numbering at 10?
> Because you don''t have a choice. It is up to the HBA, and getting it
> to do the right thing (ie, what you want) isn''t always easy. IIRC,
> the LSI Logic HBA(s) I had would automatically remember SASAddress-to-
> SCSI-ID mappings. So if you had attached 16 drives, removed one and
> replaced it with a different one (even in a JBOD, ie it would be
> attached to the same PHY), it would be id 16, because the first 16
> scsi id''s (0-15) were already accounted for. And then the new drive,
> let''s call it a replacement for a failed drive, would be inaccessible
> under Solaris.

Oh heck. That sounds like one helluva broken way of doing things.

> Why it would ever start at something other than 0, I''m not sure. I
> also kind of remember that scsi.conf had some setting to map the HBA
> to target 7 (which doesn''t apply to SAS! yet the reference there was
> specifically for LSI 1068. again IIRC). I think that I was seeing
> that drives started at 8 because of this initialization, and that
> removing it allowed the drives to start at 0 -- once I reset the HBA
> BIOS to forget the mappings it had already made.

/me groans ... more brokenness. I''ll pass this on to some others in
our team who''ve been working on a similar issue.

...
>> With a physically limited topology numbering isn''t an issue because
>> of the way that the ports are connected to the onboard devices. It''s
>> external devices (requiring a plugin hba) where it''s potentially a
>> problem. Of course, to fully exploit that situation you''d need to
>> have 64K addressable targets attached to a single controller, and
>> that hasn''t happened yet. So we do have a window of opportunity :)
> I believe SAS supports a maximum of 128 devices per controller,
> including multipliers.

Not quite correct - each expander device can have 128 connections, up
to a max of 16256 devices in a single SAS domain. My figure of 64K
addressable targets makes an assumption about the number of SAS
domains that a controller can have :)

Even so, we''ve still got that window.

cheers,
James C. McPherson
--
Solaris kernel software engineer
Sun Microsystems
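The 16256 figure falls out of the fan-out arithmetic: a fanout
expander''s 128 ports can each feed an edge expander, which spends one
of its own 128 connections on the uplink and can offer the remaining
127 to end devices (this decomposition is my reading of the SAS-1
topology limits, not stated in the thread):

    echo $((128 * 127))    # 16256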
On January 23, 2007 3:38:32 PM +1100 "James C. McPherson"
<James.McPherson at Sun.COM> wrote:
> /me groans ... more brokenness. I''ll pass this on to some others in
> our team who''ve been working on a similar issue.

Cool. I really hope Solaris gets good SAS support; it''s a great
technology and a good complement to SATA. But I have my doubts about
how useful it is in the short term. AFAIK only Adaptec and LSI Logic
are making controllers today. With so few manufacturers it''s a scary
investment. (Of course, someone please correct me if you know of
other players.)

-frank
Peter Karlsson
2007-Jan-23 05:02 UTC
[zfs-discuss] Re: External drive enclosures + Sun Server for mass storage
Hi Frank,

Try man devfsadm; it will update devfs with your new disk drives.
disks is an older command that does about the same thing.

Cheers,
Peter

Frank Cusack wrote:
> On January 22, 2007 12:12:19 PM -0600 Brian Hechinger
> <wonko at ia64.int.dittman.net> wrote:
>> On Mon, Jan 22, 2007 at 09:39:19AM -0800, Frank Cusack wrote:
>>> On January 21, 2007 12:15:22 AM -0200 Toby Thain <toby at smartgames.ca>
>>> wrote:
>>> > To be clear: the X2100 drives are neither "hotswap" nor "hotplug"
>>> > under Solaris. Replacing a failed drive requires a reboot.
>>>
>>> Also, adding a drive that wasn''t present at boot requires a reboot.
>>
>> This couldn''t possibly be true, unless we''ve taken major steps
>> backwards, as this has always been possible (at least on sparc).
>
> It is true. Try it.
>
> [Sorry to send a reply to a personal mail back to the list, but your
> email address bounces:
>
> 450 <wonko at ia64.int.dittman.net>: Recipient address rejected: Domain
> not found]
>
> -frank
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Frank Cusack
2007-Jan-23 05:07 UTC
[zfs-discuss] Re: External drive enclosures + Sun Server for mass storage
Yes, I am an experienced Solaris admin and know all about devfsadm :-)
and the older disks command.

It doesn''t help in this case. I think it''s a BIOS thing. Linux and
Windows can''t see IDE drives that aren''t there at boot time either,
and on Solaris the SATA controller runs in some legacy mode, so I
guess that''s why you can''t see the newly added drive.

Unfortunately all my x2100 hardware is in production and I can''t
readily retest this to verify.

-frank

On January 23, 2007 12:02:48 PM +0700 Peter Karlsson
<Peter.Karlsson at Sun.COM> wrote:
> Hi Frank,
>
> Try man devfsadm; it will update devfs with your new disk drives.
> disks is an older command that does about the same thing.
>
> Cheers,
> Peter
>
> Frank Cusack wrote:
>> On January 22, 2007 12:12:19 PM -0600 Brian Hechinger
>> <wonko at ia64.int.dittman.net> wrote:
>>> On Mon, Jan 22, 2007 at 09:39:19AM -0800, Frank Cusack wrote:
>>>> On January 21, 2007 12:15:22 AM -0200 Toby Thain <toby at smartgames.ca>
>>>> wrote:
>>>> > To be clear: the X2100 drives are neither "hotswap" nor "hotplug"
>>>> > under Solaris. Replacing a failed drive requires a reboot.
>>>>
>>>> Also, adding a drive that wasn''t present at boot requires a reboot.
>>>
>>> This couldn''t possibly be true, unless we''ve taken major steps
>>> backwards, as this has always been possible (at least on sparc).
>>
>> It is true. Try it.
>>
>> [Sorry to send a reply to a personal mail back to the list, but your
>> email address bounces:
>>
>> 450 <wonko at ia64.int.dittman.net>: Recipient address rejected: Domain
>> not found]
>>
>> -frank
>> _______________________________________________
>> zfs-discuss mailing list
>> zfs-discuss at opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Samuel Hexter
2007-Jan-23 14:40 UTC
[zfs-discuss] Re: Re: External drive enclosures + Sun Server for mass storage
> Areca makes excellent PCI Express cards - but they probably have zero
> support in Solaris/OpenSolaris. I use them in both Windows and Linux.
> They work natively in FreeBSD too. I believe they''re still the
> fastest cards on the market.
>
> However, they''re probably not very appropriate for this, since it''s a
> Solaris-based OS :(

We''ve got two Areca ARC-1261ML cards (PCI-E x8, up to 16 SATA disks
each) running a 12TB zpool on snv54 and Areca''s arcmsr driver. They''re
a bit of an expensive/over-the-top solution since the cards do
hardware RAID-6 and cost roughly $1k each, but we''re just using them
as JBOD controllers. The hardware RAID-6 capability will be a nice
backup if we ever ditch Solaris/ZFS, but I can''t see that happening
any time soon; ZFS is just too good.


This message posted from opensolaris.org
Robert Suh
2007-Jan-23 17:32 UTC
[zfs-discuss] Re: Re: Re: Re: External drive enclosures + Sun Server for mass storage
People trying to hack together systems might want to look at the HP
DL320s:

http://h10010.www1.hp.com/wwpc/us/en/ss/WF05a/15351-241434-241475-241475-f79-3232017.html

12 drive bays, Intel Woodcrest, SAS (and SATA) controller. If you
snoop around, you might be able to find drive carriers on eBay or
elsewhere (*cough* search "HP drive sleds" or "HP drive carriers").
$3k for the chassis - a mini thumper.

Though I''m not sure if Solaris supports the Smart Array controller.

Rob

-----Original Message-----
From: zfs-discuss-bounces at opensolaris.org
[mailto:zfs-discuss-bounces at opensolaris.org] On Behalf Of mike
Sent: Monday, January 22, 2007 1:17 PM
To: zfs-discuss at opensolaris.org
Subject: Re: [zfs-discuss] Re: Re: Re: Re: External drive enclosures +
Sun

I''m dying here - does anyone know when, or even if, they will support
these?

I had this whole setup planned out, but it requires eSATA + port
multipliers.

I want to use ZFS, but currently cannot in that fashion. I''d still
have to buy some [more expensive, noisier, bulky internal drive]
solution for ZFS. Unless anyone has other ideas. I''m looking to run a
5-10 drive system (with easy ability to expand) in my home office; not
in a datacenter.

Even opening up to iSCSI seems to not get me much - there aren''t any
SOHO type NAS enclosures that act as iSCSI targets. There are however
handfuls of eSATA based 4, 5, and 10 drive enclosures perfect for
this... but all require the port multiplier support.

On 1/22/07, Frank Cusack <fcusack at fcusack.com> wrote:
> Unfortunately, Solaris does not support SATA port multipliers (yet)
> so I think you''re pretty limited in how many esata drives you can
> connect.
_______________________________________________
zfs-discuss mailing list
zfs-discuss at opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
*snip snip*

> AFAIK only Adaptec and LSI Logic are making controllers today. With
> so few manufacturers it''s a scary investment. (Of course, someone
> please correct me if you know of other players.)

There''s a few others. Those are (of course) the major players (and
with big names like that making them, you can be pretty sure they are
going to be around for a while...) That said, I know of ARIO Data
( http://www.ariodata.com/products/controllers/ ) making some (or
ramping up to make them.) I''m sure there are some others.

It''s certainly not as common as SATA/SCSI/etc right now; up until
recently you couldn''t even buy drives. Now, the fastest drive I''ve
seen is SAS only (the 15k 2.5" Seagate). I''m pretty sure that when
Seagate is making its fastest product SAS, SAS has been accepted. :p

http://techreport.com/onearticle.x/11638


This message posted from opensolaris.org
Bart Smaalders
2007-Jan-23 18:51 UTC
[zfs-discuss] Re: External drive enclosures + Sun Server for mass storage
Frank Cusack wrote:
> Yes, I am an experienced Solaris admin and know all about devfsadm
> :-) and the older disks command.
>
> It doesn''t help in this case. I think it''s a BIOS thing. Linux and
> Windows can''t see IDE drives that aren''t there at boot time either,
> and on Solaris the SATA controller runs in some legacy mode, so I
> guess that''s why you can''t see the newly added drive.
>
> Unfortunately all my x2100 hardware is in production and I can''t
> readily retest this to verify.
>
> -frank

This is exactly the issue; some of the simple SATA controllers are
used in PATA compatibility mode. The ide driver doesn''t know a thing
about hot anything, so we would need a proper SATA driver for these
chips. Since they work (with the exception of hot *), it is difficult
to prioritize this work above getting some other piece of hardware
working under Solaris. In addition, switching drivers & bios configs
during upgrade is a non-trivial exercise.

- Bart

--
Bart Smaalders			Solaris Kernel Performance
barts at cyber.eng.sun.com		http://blogs.sun.com/barts
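Whether a given box is running its SATA controller in this legacy mode
shows up in the driver binding. A quick check - the grep patterns are
only illustrative, and output varies by chipset:

    # in compatibility mode the controller binds to the ide/ata stack
    prtconf -D | grep -i ide

    # with a native SATA HBA driver, cfgadm exposes the ports instead
    cfgadm -al | grep -i sata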
Jason J. W. Williams
2007-Jan-23 19:28 UTC
[zfs-discuss] Re: Re: Re: Re: External drive enclosures + Sun Server for mass storage
I believe the SmartArray is an LSI, like the Dell PERC, isn''t it?

Best Regards,
Jason

On 1/23/07, Robert Suh <roberts at bluenile.com> wrote:
> People trying to hack together systems might want to look at the HP
> DL320s:
>
> http://h10010.www1.hp.com/wwpc/us/en/ss/WF05a/15351-241434-241475-241475-f79-3232017.html
>
> 12 drive bays, Intel Woodcrest, SAS (and SATA) controller. If you
> snoop around, you might be able to find drive carriers on eBay or
> elsewhere (*cough* search "HP drive sleds" or "HP drive carriers").
> $3k for the chassis - a mini thumper.
>
> Though I''m not sure if Solaris supports the Smart Array controller.
>
> Rob
>
> -----Original Message-----
> From: zfs-discuss-bounces at opensolaris.org
> [mailto:zfs-discuss-bounces at opensolaris.org] On Behalf Of mike
> Sent: Monday, January 22, 2007 1:17 PM
> To: zfs-discuss at opensolaris.org
> Subject: Re: [zfs-discuss] Re: Re: Re: Re: External drive enclosures +
> Sun
>
> I''m dying here - does anyone know when, or even if, they will support
> these?
>
> I had this whole setup planned out, but it requires eSATA + port
> multipliers.
>
> I want to use ZFS, but currently cannot in that fashion. I''d still
> have to buy some [more expensive, noisier, bulky internal drive]
> solution for ZFS. Unless anyone has other ideas. I''m looking to run a
> 5-10 drive system (with easy ability to expand) in my home office;
> not in a datacenter.
>
> Even opening up to iSCSI seems to not get me much - there aren''t any
> SOHO type NAS enclosures that act as iSCSI targets. There are however
> handfuls of eSATA based 4, 5, and 10 drive enclosures perfect for
> this... but all require the port multiplier support.
>
> On 1/22/07, Frank Cusack <fcusack at fcusack.com> wrote:
> > Unfortunately, Solaris does not support SATA port multipliers (yet)
> > so I think you''re pretty limited in how many esata drives you can
> > connect.
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
mike
2007-Jan-23 20:11 UTC
[zfs-discuss] Re: Re: External drive enclosures + Sun Server for mass storage
ooh. they support it? cool. i''ll have to explore that option now.
however i still really want eSATA.

On 1/23/07, Samuel Hexter <samuel.hexter at gmail.com> wrote:
> We''ve got two Areca ARC-1261ML cards (PCI-E x8, up to 16 SATA disks
> each) running a 12TB zpool on snv54 and Areca''s arcmsr driver.
> They''re a bit of an expensive/over-the-top solution since the cards
> do hardware RAID-6 and cost roughly $1k each, but we''re just using
> them as JBOD controllers. The hardware RAID-6 capability will be a
> nice backup if we ever ditch Solaris/ZFS, but I can''t see that
> happening any time soon; ZFS is just too good.
Toby Thain
2007-Jan-23 22:11 UTC
[zfs-discuss] X2100 not hotswap, was Re: External drive enclosures + Sun Server for mass storage
On 23-Jan-07, at 4:51 PM, Bart Smaalders wrote:> Frank Cusack wrote: >> yes I am an experienced Solaris admin and know all about devfsadm :-) >> and the older disks command. >> It doesn''t help in this case. I think it''s a BIOS thing. Linux and >> Windows can''t see IDE drives that aren''t there at boot time either, >> and on Solaris the SATA controller runs in some legacy mode so I >> guess >> that''s why you can''t see the newly added drive. >> Unfortunately all my x2100 hardware is in production and I can''t >> readily retest this to verify. >> -frank > > This is exactly the issue; some of the simple SATA drives > are used in PATA compatibility mode. The ide driver doesn''t > know a thing about hot anything, so we would need a proper > SATA driver for these chips. Since they work (with the exception > of hot *) it is difficult to prioritize this workDisappointing but not completely surprising - "What do you expect, it''s an entry level product, not a high end product." Still, would be nice for those of us who bought them. And judging by other posts on this thread it seems just about everyone assumes hotswap "just works". --Toby> above getting > some other piece of hardware working under Solaris. In addition, > switching drivers & bios configs during upgrade is a non-trivial > exercise. > > > - Bart > > > > -- > Bart Smaalders Solaris Kernel Performance > barts at cyber.eng.sun.com http://blogs.sun.com/barts > _______________________________________________ > zfs-discuss mailing list > zfs-discuss at opensolaris.org > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
It''s interesting the topics that come up here, which really have
little to do with zfs. I guess it just shows how great zfs is. I mean,
you would never have a ufs list that talked about the merits of sata
vs sas and what hardware to buy. Also interesting is that zfs exposes
hardware bugs, yet I don''t think that''s what really drives the
hardware questions here.

-frank
Frank Cusack wrote:
> It''s interesting the topics that come up here, which really have
> little to do with zfs. I guess it just shows how great zfs is. I
> mean, you would never have a ufs list that talked about the merits of
> sata vs sas and what hardware to buy. Also interesting is that zfs
> exposes hardware bugs, yet I don''t think that''s what really drives
> the hardware questions here.

Actually, I think it''s the easy admin of more than a simple mirror...
so all of a sudden it''s simple to deal with multiple drives, add more
later, etc., so connectivity to low end boxes becomes important.
Also, of course, SATA is still relatively new and we don''t yet have
extensive controller support (understatement).

- Bart

--
Bart Smaalders			Solaris Kernel Performance
barts at cyber.eng.sun.com		http://blogs.sun.com/barts
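And the "add more later" part really is a one-liner, which is probably
why connectivity questions are what people hit first. A small sketch
with hypothetical pool and device names:

    # grow the pool by another mirrored pair; no downtime, no relayout
    zpool add tank mirror c3t0d0 c3t1d0
    zpool status tank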
Frank Cusack
2007-Jan-25 07:09 UTC
[zfs-discuss] X2100 not hotswap, was Re: External drive enclosures + Sun Server for mass storage
On January 23, 2007 8:11:24 PM -0200 Toby Thain <toby at smartgames.ca> wrote:
> Still, would be nice for those of us who bought them. And judging by
> other posts on this thread it seems just about everyone assumes
> hotswap "just works".

hot *plug* :-)

-frank
Toby Thain
2007-Jan-25 11:48 UTC
[zfs-discuss] X2100 not hotswap, was Re: External drive enclosures + Sun Server for mass storage
On 25-Jan-07, at 5:09 AM, Frank Cusack wrote:
> On January 23, 2007 8:11:24 PM -0200 Toby Thain
> <toby at smartgames.ca> wrote:
>> Still, would be nice for those of us who bought them. And judging by
>> other posts on this thread it seems just about everyone assumes
>> hotswap "just works".
>
> hot *plug* :-)

Hmm, yes, sloppy of me.

--T

> -frank

Richard Elling wrote:
> To be clear, Sun defines "hot swap" as a device which can be inserted
> or removed without system administration tasks required.
>
> Sun defines "hot plug" as a device which can be inserted or removed
> without causing damage or interruption to a running system, but which
> may require system administration.