Hi,

I know (from the zfs-discuss archives and other places [1,2,3,4]) that a lot of people are looking to use ZFS as a storage server in the 10-100TB range.

I'm in the same boat, but I've found that hardware choice is the biggest issue. I'm struggling to find something which will work nicely under Solaris and which meets my expectations in terms of hardware. Because of the compatibility issues, I thought I should ask here to see what solutions people have already found.

I'm learning as I go here, but as far as I've been able to determine, the basic choices for attaching drives seem to be:

1) SATA port multipliers
2) SAS multilane enclosures
3) SAS expanders

In option 1 the controller can only talk to one device at a time, in option 2 each miniSAS connector can talk to 4 drives at a time, and in option 3 the expander allows communication with up to 128 drives. I'm thinking about having ~8-16 drives on each controller (PCIe card), so I think I want option 3. Additionally, because I might get greedier in the future and decide to add more drives on each controller, option 3 seems the best way to go: I can have a motherboard with a lot of PCIe slots and one controller card for each expander.

Cases like the Supermicro 846E1-R900B have 24 hot-swap bays accessible via a single (4U) LSI SASX36 SAS expander chip, but I'm worried about controller death and having the backplane as a single point of failure.

I guess, ideally, I'd like a 4U enclosure with 2x 2U SAS expanders. If I wanted hardware redundancy, I could then use mirrored vdevs with one side of each mirror on one controller/expander pair and the other side on a separate pair. This would allow me to survive controller or expander death as well as hard drive failure.

Replace motherboard: ~500
Replace backplane:   ~500
Replace controller:  ~300
Replace disk (SATA): ~100

Does anyone have any example systems they have built or any thoughts on what I could do differently?

Best regards,
Ian.

[1] http://www.mail-archive.com/zfs-discuss at opensolaris.org/msg27234.html
[2] http://www.avsforum.com/avs-vb/showthread.php?p=17543496
[3] http://www.stringliterals.com/?p=53
[4] http://www.mail-archive.com/zfs-discuss at opensolaris.org/msg22761.html
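(As an illustration of the mirrored-vdev layout described above: a minimal sketch, with hypothetical device names where the c1* disks sit behind one controller/expander pair and the c2* disks behind the other:

    # each mirror pairs one disk from controller 1 with one from controller 2
    zpool create tank \
        mirror c1t0d0 c2t0d0 \
        mirror c1t1d0 c2t1d0 \
        mirror c1t2d0 c2t2d0

If either controller or expander dies, every mirror still has one healthy side, so the pool stays online.)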
On Nov 17, 2009, at 12:50 PM, Ian Allison wrote:

> Cases like the Supermicro 846E1-R900B have 24 hot swap bays accessible
> via a single (4U) LSI SASX36 SAS expander chip, but I'm worried about
> controller death and having the backplane as a single point of failure.

There will be dozens of single points of failure in your system. Don't worry about controllers or expanders because they will be at least 10x more reliable than your disks. If you want to invest for better reliability, invest in enterprise-class disks, preferably SSDs.
 -- richard
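(If the SSD suggestion is taken up, a common ZFS use for them is as separate log and cache devices. A minimal sketch, assuming an existing pool named tank and hypothetical SSD device names:

    # dedicate one SSD as a separate ZIL (slog) and one as an L2ARC cache device
    zpool add tank log c3t0d0
    zpool add tank cache c3t1d0

The slog mainly improves synchronous write latency (NFS, databases), while the cache device extends the read cache; neither changes the pool's redundancy.)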
Hi,

I currently have a 1U server (Sun X2200) with 2 LSI HBAs attached to Supermicro JBOD chassis, each with 24 disks (1TB SATA), and so far so good. So I have 48 TB raw capacity, with a mirror configuration for NFS usage (Xen VMs), and I feel that for the price I paid I have a very nice system.

Bruno

Ian Allison wrote:
> Does anyone have any example systems they have built or any thoughts
> on what I could do differently?
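(For context, exporting a pool like this over NFS for the VMs is mostly a matter of ZFS properties; a minimal sketch, with a hypothetical pool and dataset name:

    # create a dataset for the Xen images and share it over NFS
    zfs create tank/vms
    zfs set sharenfs=on tank/vms

VM workloads usually also warrant attention to sync write handling and record size, so treat this as the bare wiring only.)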
Also, if you are a startup, there are some ridiculously sweet deals on Sun hardware through the Sun Startup Essentials program: http://sun.com/startups

This way you do not need to worry about compatibility and you get all the enterprise RAS features at a pretty low price point.

-Angelo

On Nov 17, 2009, at 4:14 PM, Bruno Sousa wrote:

> I currently have a 1U server (Sun X2200) with 2 LSI HBAs attached to
> Supermicro JBOD chassis, each with 24 disks (1TB SATA), and so far so
> good.
Hi Richard,

Richard Elling wrote:
> There will be dozens of single points of failure in your system. Don't
> worry about controllers or expanders because they will be at least 10x
> more reliable than your disks. If you want to invest for better
> reliability, invest in enterprise-class disks, preferably SSDs.

I agree about the points of failure, but I guess I'm not looking as much for reliability as I am for replaceability. The motherboard, backplane and controllers are all reasonably priced (to the extent that if I had a few of these machines I would keep spares of everything on hand). They are also pretty generic, so I could recycle them if I decided to go in a different direction.

Thanks,
Ian.
Hi Bruno,

Bruno Sousa wrote:
> I currently have a 1U server (Sun X2200) with 2 LSI HBAs attached to
> Supermicro JBOD chassis, each with 24 disks (1TB SATA), and so far so
> good. So I have 48 TB raw capacity, with a mirror configuration for NFS
> usage (Xen VMs), and I feel that for the price I paid I have a very
> nice system.

Sounds good. I understand from

http://www.mail-archive.com/zfs-discuss at opensolaris.org/msg27248.html

that you need something like Supermicro's CSE-PTJBOD-CB1 to cable the drive trays up. Do you do anything about monitoring the power supply?

Cheers,
    Ian.
Hi Ian,

I use the Supermicro SuperChassis 846E1-R710B, and I added the JBOD kit that has:

 * Power Control Card
 * SAS 846EL2/EL1 BP External Cascading Cable
 * SAS 846EL1 BP 1-Port Internal Cascading Cable

I don't do any monitoring in the JBOD chassis.

Bruno

Ian Allison wrote:
> that you need something like Supermicro's CSE-PTJBOD-CB1 to cable the
> drive trays up. Do you do anything about monitoring the power supply?
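(An aside on the power-supply monitoring question: if the chassis or head node exposes a BMC over IPMI, ipmitool can read the power-supply sensor records. A minimal sketch; the BMC address and credentials are hypothetical, and whether the JBOD kit's Power Control Card reports its supplies this way depends on the chassis, so treat it as a starting point rather than a confirmed feature of that hardware:

    # query power-supply sensor records from the BMC over the LAN interface
    ipmitool -I lanplus -H 192.168.1.50 -U admin -P secret sdr type "Power Supply"
)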
You can get the E2 version of the chassis that supports multipathing, but you have to use dual-port SAS disks. Or you can use separate SAS HBAs to connect to separate JBOD chassis and mirror over the 2 chassis. The backplane is just a pass-through fabric which is very unlikely to die.

Then, like others said, your storage head unit is a single point of failure. Unless you implement some cluster design, there is always a single point of failure.
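(For the dual-port/multipath route on Solaris, the usual mechanism is MPxIO. A minimal sketch, assuming the LSI HBA is handled by the mpt driver; check the driver name for your particular controller before relying on this:

    # enable MPxIO for the mpt driver; a reboot is needed for the multipathed device names
    stmsboot -D mpt -e

    # after rebooting, list the multipathed logical units and their paths
    mpathadm list lu

If you mirror across two separate chassis instead, ZFS itself provides the redundancy and no multipath layer is required.)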
On Tuesday 17 November 2009 22:50, Ian Allison wrote:
> I'm learning as I go here, but as far as I've been able to determine,
> the basic choices for attaching drives seem to be
>
> 1) SATA port multipliers
> 2) SAS multilane enclosures
> 3) SAS expanders

What about PCI(-X) cards? As stated in

http://opensolaris.org/jive/thread.jspa?messageID=247226

you can use

http://www.supermicro.com/products/accessories/addon/AOC-SAT2-MV8.cfm

to add 8 more SATA disks. Why not use that (now that 2TB disks have been released)?

-- 
Real programmers don't document. If it was hard to write, it should be hard to understand.
Hi Bruno,

Bruno Sousa wrote:
> I use the Supermicro SuperChassis 846E1-R710B, and I added the JBOD kit
> that has:
>
>  * Power Control Card

Sorry to keep bugging you, but which card is this? I like the sound of your setup.

Cheers,
    Ian.

-- 
Ian Allison
PIMS-UBC/SFU System and Network Administrator
the Pacific Institute for the Mathematical Sciences
Phone: (778) 991 1522   email: iana at pims.math.ca
And I like the cut of your jib, my young fellow me lad!
Chris Du wrote:
> You can get the E2 version of the chassis that supports multipathing,
> but you have to use dual-port SAS disks. Or you can use separate SAS
> HBAs to connect to separate JBOD chassis and mirror over the 2 chassis.
> The backplane is just a pass-through fabric which is very unlikely to
> die.
>
> Then, like others said, your storage head unit is a single point of
> failure. Unless you implement some cluster design, there is always a
> single point of failure.

Thanks, I think I'll go with the single SAS expander; I'm less worried about that setup now. As you say, I should probably just cluster similar machines when I'm looking for redundancy. At the moment I just want to get something working with reasonably priced parts which I can expand on in the future.

Thanks,
Ian.
On Wed, Nov 18, 2009 at 3:24 AM, Bruno Sousa <bsousa at epinfante.com> wrote:
> I use the Supermicro SuperChassis 846E1-R710B, and I added the JBOD kit
> that has:
>
>  * Power Control Card
>  * SAS 846EL2/EL1 BP External Cascading Cable
>  * SAS 846EL1 BP 1-Port Internal Cascading Cable
>
> I don't do any monitoring in the JBOD chassis.

I have some really newbie questions here about such a chassis:
- Do we need to buy a motherboard as well?
- Which motherboard model do you have for such a chassis?
- Does the motherboard accept dual power supplies?
Hi,

You have two options to use this chassis:

 * add a motherboard that can hold redundant power supplies, and this will just be a 4U server with several disks, or
 * use a separate server with the LSI card (or another HBA) and connect it with a SAS cable to the chassis, which "only" has disks, the disk backplane, the JBOD power interface and the power supplies.

Hope this helps...

Bruno

Sriram Narayanan wrote:
> I have some really newbie questions here about such a chassis:
> - Do we need to buy a motherboard as well?
> - Which motherboard model do you have for such a chassis?
> - Does the motherboard accept dual power supplies?