Hi,

Sorry for cross-posting; I don't know which mailing list I should post this
message to.

I would like to use FreeBSD with ZFS on some Dell servers with some MD1200
enclosures (classic DAS).

When we buy an MD1200 we need a PERC H800 RAID card in the server, so we
have two options:

	1/ create one logical volume on the PERC H800 so the server sees a
	single volume, put the zpool on that unique volume, and let the
	hardware manage the RAID.

	2/ create 12 logical volumes on the PERC H800 (so without RAID) and
	let FreeBSD and ZFS manage the RAID.

Which one is the best solution?

Any advice about the RAM I need on the server (currently one MD1200, so
12x2TB disks)?

Regards.

JAS
--
Albert SHIH
DIO batiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
Heure local/Local time:
mer 19 oct 2011 16:11:40 CEST
On Wed, Oct 19, 2011 at 11:14 AM, Albert Shih <Albert.Shih at obspm.fr> wrote:
> When we buy an MD1200 we need a PERC H800 RAID card in the server, so we
> have two options:
>
>	1/ create one logical volume on the PERC H800 so the server sees a
>	single volume, put the zpool on that unique volume, and let the
>	hardware manage the RAID.
>
>	2/ create 12 logical volumes on the PERC H800 (so without RAID)
>	and let FreeBSD and ZFS manage the RAID.
>
> Which one is the best solution?

For the ZFS approach, the second option is, in my opinion, the better one.

> _______________________________________________
> freebsd-questions at freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-questions
> To unsubscribe, send any mail to "freebsd-questions-unsubscribe at freebsd.org"

--
Jorge Andrés Medina Oliva.
Computer engineer.
IT consultant
http://www.bsdchile.cl
On Wed, Oct 19, 2011 at 9:14 PM, Albert Shih <Albert.Shih at obspm.fr> wrote:
> When we buy an MD1200 we need a PERC H800 RAID card in the server, so we
> have two options: [...]
>
> Which one is the best solution?

Neither.

The best solution is to find a controller which can pass the disks through
as JBOD (not encapsulated as virtual disks). Failing that, I'd go with (1)
(though others might disagree).

> Any advice about the RAM I need on the server (currently one MD1200, so
> 12x2TB disks)?

The more the better :)

Just make sure you do NOT use dedup until you REALLY know what you're doing
(which usually means buying lots of RAM and SSDs for L2ARC).

--
Fajar
On 10/19/11 15:30, Fajar A. Nugraha wrote:
> On Wed, Oct 19, 2011 at 9:14 PM, Albert Shih <Albert.Shih at obspm.fr> wrote:
>> 	1/ create one logical volume on the PERC H800 [...] and let the
>> 	hardware manage the RAID.
>>
>> 	2/ create 12 logical volumes on the PERC H800 (so without RAID)
>> 	and let FreeBSD and ZFS manage the RAID.
>>
>> Which one is the best solution?
>
> Neither.
>
> The best solution is to find a controller which can pass the disks
> through as JBOD (not encapsulated as virtual disks). Failing that, I'd
> go with (1) (though others might disagree).

No: go with 2. ALWAYS let ZFS manage the redundancy, otherwise it can't
self-heal.

--
Darren J Moffat
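To make the "let ZFS manage the redundancy" option concrete, a pool over 12
per-disk volumes might be built like this. This is only a sketch: the
device names assume the H800 attaches via FreeBSD's mfi(4) driver and
exposes its logical volumes as /dev/mfid0 through /dev/mfid11, which will
vary with controller and driver.

```shell
# Hypothetical device names: a PERC H800 under FreeBSD's mfi(4) driver
# typically exposes its logical volumes as /dev/mfid0 .. /dev/mfid11.
# One raidz2 vdev over 12 single-drive volumes: ZFS owns the redundancy,
# so a checksum error on one device can be repaired from parity.
zpool create tank raidz2 \
    mfid0 mfid1 mfid2 mfid3 mfid4 mfid5 \
    mfid6 mfid7 mfid8 mfid9 mfid10 mfid11

# Verify layout and health.
zpool status tank
```

With a real JBOD HBA the same command would name da(4) devices instead; the
point is that ZFS, not the controller, sees one vdev member per physical
disk.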
On Wed, Oct 19, 2011 at 10:14 AM, Albert Shih <Albert.Shih at obspm.fr> wrote:
> When we buy an MD1200 we need a PERC H800 RAID card in the server, so we
> have two options: [...]
>
> Which one is the best solution?

I know the PERC H200 can be flashed with IT firmware, making it in effect a
"dumb" HBA, perfect for ZFS usage. Perhaps the H800 has the same? (If not,
can you get the machine configured with an H200?)

If that's not an option, I think option 2 will work. My first ZFS server
ran on a PERC 5/i, and I was forced to make 8 single-drive RAID 0s in the
PERC option ROM, but Solaris did not seem to mind that.

--khd
I have several Dells with the PERC controller, and I can say that the best
solution is to use single-drive RAID 0 volumes (so ZFS sees both disks) and
let ZFS mirror them. This works OK, and you can use the ZFS tools to manage
the disks.

However, this does not solve the problem I had with the PERC controller:
all of your storage sits behind it, and even though it is reliable, when it
"breaks", all of your storage (and so the computer...) is useless. You
cannot move the drives to another machine, because a "normal" controller
(ad, ada) will not recognize the disks. Even if you have a spare PERC
controller of the same kind at hand (and I bet you do not...), the disks
are "signed" by the other (broken) controller and so will not be recognized
by the new one.

In my case I had to call Dell support, and only after several hours could I
put the drives online again. I mounted only one disk (of the ZFS pool) on
the new controller, and even with Dell support on the phone, the new
controller wiped out the disk. A second call (with a different Dell support
engineer) managed to re-initialize the disk, which included reinstalling
FreeBSD... After that I "attached" the old disk: using zpool detach
followed by zpool attach (of the old disk), it then reconstructed the
mirror, resulting in almost 6 hours of downtime and the loss of a working
day for the whole company.

Also, Dell support (here) will not help you if you say the OS is FreeBSD;
you must tell them you are installing Linux to get support.

Conclusion: now I prefer the IBM 32XX series...

OK, that is my story.

Sergio.
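The detach/attach recovery Sergio describes maps onto two zpool
subcommands. A sketch, with hypothetical pool and device names, assuming a
two-way mirror called "tank" whose surviving member is mfid0 and whose
re-initialized replacement appears as mfid1:

```shell
# Sketch of rebuilding a degraded two-way mirror (hypothetical names).
zpool status tank               # mirror shows DEGRADED, one member UNAVAIL
zpool detach tank mfid1         # drop the dead member from the mirror
zpool attach tank mfid0 mfid1   # re-attach the replacement; resilver starts
zpool status tank               # watch the resilver until it completes
```

The resilver copies only live data to the new member, so the rebuild time
scales with the amount of data in the pool rather than the raw disk size.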
On 10/19/11 9:14 AM, "Albert Shih" <Albert.Shih at obspm.fr> wrote:
> When we buy an MD1200 we need a PERC H800 RAID card on the server

No, you need a card that includes 2 external x4 SFF-8088 SAS connectors.
I'd recommend an LSI SAS 9200-8e HBA flashed with the IT firmware -- then
it presents the individual disks, and ZFS can handle redundancy and
recovery.
--
Dave Pooser
Manager of Information Services
Alford Media  http://www.alfordmedia.com
I also recommend the LSI 9200-8E, or the new 9205-8E with the IT firmware,
based on past experience.

Also, LSI's own HBAs normally get firmware releases earlier than the OEM
versions. Plus, most users in the community use LSI HBAs.

Rocky

-----Original Message-----
From: zfs-discuss-bounces at opensolaris.org
[mailto:zfs-discuss-bounces at opensolaris.org] On Behalf Of Dave Pooser
Sent: Wednesday, October 19, 2011 5:56 PM
To: freebsd-questions at freebsd.org; zfs-discuss at opensolaris.org
Subject: Re: [zfs-discuss] ZFS on Dell with FreeBSD

On 10/19/11 9:14 AM, "Albert Shih" <Albert.Shih at obspm.fr> wrote:
> When we buy an MD1200 we need a PERC H800 RAID card on the server

No, you need a card that includes 2 external x4 SFF-8088 SAS connectors.
I'd recommend an LSI SAS 9200-8e HBA flashed with the IT firmware -- then
it presents the individual disks, and ZFS can handle redundancy and
recovery.
--
Dave Pooser
Manager of Information Services
Alford Media  http://www.alfordmedia.com
_______________________________________________
zfs-discuss mailing list
zfs-discuss at opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On Thu, Oct 20, 2011 at 7:56 AM, Dave Pooser <dave.zfs at alfordmedia.com> wrote:
> On 10/19/11 9:14 AM, "Albert Shih" <Albert.Shih at obspm.fr> wrote:
>
>> When we buy an MD1200 we need a PERC H800 RAID card on the server
>
> No, you need a card that includes 2 external x4 SFF-8088 SAS connectors.
> I'd recommend an LSI SAS 9200-8e HBA flashed with the IT firmware -- then
> it presents the individual disks, and ZFS can handle redundancy and
> recovery.

Exactly -- thanks for suggesting an exact controller model that can present
disks as JBOD.

With hardware RAID you pretty much rely on the controller to behave nicely,
which is why I suggested simply creating one big volume for ZFS to use (so
you pretty much only get features like snapshots, clones, etc., but not
ZFS's self-healing). Again, others might (and do) disagree and suggest one
volume per individual disk (even when you're still relying on the hardware
RAID controller). But ultimately there's no question that the best possible
setup is to present the disks as JBOD and let ZFS handle them directly.

--
Fajar
On Thu, 20 Oct 2011, Fajar A. Nugraha wrote:
> [...]
> But ultimately there's no question that the best possible setup is to
> present the disks as JBOD and let ZFS handle them directly.

I saw something interesting and different today, which I'll just throw out.

A buddy has an HP370 loaded with disks (not the only machine that provides
these services, rather the one he was showing off). The 370's disks are
managed by the underlying hardware RAID controller, which he built as
multiple RAID 1 volumes.

ESXi 5.0 is loaded and in control of the volumes, some of which are
partitioned. Consequently, his result is vendor-supported interfaces
between disks, RAID controller, ESXi, and the managing/reporting software.

The HP370 hosts multiple FreeNAS instances whose "disks" are the "disks"
(volumes/partitions) from ESXi (all on the same physical hardware). The
FreeNAS instances are partitioned according to their physical and logical
function within the infrastructure, whether by physical or logical
connections.
The FreeNAS instances then serve their "disks" to consumers. We have not
done any performance testing. Generally, his NAS consumers are not I/O
pigs, though we want the best performance possible (some consumers are over
the WAN, possibly making any HP/ESXi/FreeNAS performance issues moot). (I
want to do some performance testing because, well, it may have significant
amusement value.) A question we have is whether ZFS (ARC, maybe L2ARC)
within FreeNAS is possible, or would provide any value.
On 20 Oct 2011, at 05:24, Dennis Glatting <freebsd at penx.com> wrote:
> I saw something interesting and different today, which I'll just throw
> out.
>
> A buddy has an HP370 loaded with disks. The 370's disks are managed by
> the underlying hardware RAID controller, which he built as multiple
> RAID 1 volumes.
>
> ESXi 5.0 is loaded and in control of the volumes, some of which are
> partitioned. Consequently, his result is vendor-supported interfaces
> between disks, RAID controller, ESXi, and the managing/reporting
> software.
>
> The HP370 hosts multiple FreeNAS instances whose "disks" are the "disks"
> (volumes/partitions) from ESXi (all on the same physical hardware).
> The FreeNAS instances are partitioned according to their physical and
> logical function within the infrastructure, whether by physical or
> logical connections. The FreeNAS instances then serve their "disks" to
> consumers.
>
> We have not done any performance testing. [...] A question we have is
> whether ZFS (ARC, maybe L2ARC) within FreeNAS is possible, or would
> provide any value.

Possible, yes. Provides value, somewhat. You still get to use snapshots,
compression, dedup... You don't get ZFS self-healing though, which IMO is a
big loss.

Regarding the ARC, it totally depends on the kind of files you serve and
the amount of RAM you have available. If you keep serving huge, different
files all the time, it won't help as much as when clients request the same
small/average files over and over again.
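Whether the ARC is actually earning its RAM on a given workload can be
checked empirically. On FreeBSD (and FreeNAS builds that expose the same
sysctl tree), the ARC counters live under kstat.zfs.misc.arcstats; a rough
hit-rate check, as a sketch:

```shell
# Rough ARC effectiveness check via the standard ZFS kstat sysctls.
hits=$(sysctl -n kstat.zfs.misc.arcstats.hits)
misses=$(sysctl -n kstat.zfs.misc.arcstats.misses)
echo "ARC hit rate: $(( 100 * hits / (hits + misses) ))%"

# Current ARC size in bytes, to compare against installed RAM.
sysctl -n kstat.zfs.misc.arcstats.size
```

A consistently low hit rate on a streaming workload (huge, rarely repeated
files) is exactly the "won't help much" case described above; a high rate
on repeated small-file traffic is where more RAM, or an L2ARC device, pays
off.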
On 19/10/2011 at 21:30:31+0700, Fajar A. Nugraha wrote:
> > When we buy an MD1200 we need a PERC H800 RAID card in the server, so
> > we have two options: [...]
> >
> > Which one is the best solution?
>
> Neither.
>
> The best solution is to find a controller which can pass the disks
> through as JBOD (not encapsulated as virtual disks). Failing that, I'd
> go with (1) (though others might disagree).

Thanks. That's going to be very complicated... but I'm going to try.

> > Any advice about the RAM I need on the server (currently one MD1200,
> > so 12x2TB disks)?
>
> The more the better :)

Well, my employer is not so rich.

It's the first time I'm going to use ZFS on FreeBSD in production (I use it
on my laptop, but that means nothing), so what in your opinion is the
minimum RAM I need? Is something like 48 GB enough?

> Just make sure you do NOT use dedup until you REALLY know what you're
> doing (which usually means buying lots of RAM and SSDs for L2ARC).

Ok.

Regards.

JAS
--
Albert SHIH
DIO batiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
Heure local/Local time:
jeu 20 oct 2011 11:30:49 CEST
On Thu, Oct 20, 2011 at 4:33 PM, Albert Shih <Albert.Shih at obspm.fr> wrote:
>> > Any advice about the RAM I need on the server (currently one MD1200,
>> > so 12x2TB disks)?
>>
>> The more the better :)
>
> Well, my employer is not so rich.
>
> It's the first time I'm going to use ZFS on FreeBSD in production (I use
> it on my laptop, but that means nothing), so what in your opinion is the
> minimum RAM I need? Is something like 48 GB enough?

If you don't use dedup (recommended), that should be more than enough. If
you do use dedup, search the zfs-discuss archive for the calculation
methods that have been posted.

For comparison purposes, you could also look at Oracle's ZFS storage
appliance configurations:
https://shop.oracle.com/pls/ostore/f?p=dstore:product:3479784507256153::NO:RP,6:P6_LPI,P6_PROD_HIER_ID:424445158091311922637762,114303924177622138569448

--
Fajar
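The dedup-table sizing calculation that gets posted to the list is a simple
back-of-the-envelope estimate, and can be sketched in shell arithmetic. The
~320 bytes per unique block is the commonly quoted in-core DDT entry cost,
not an exact figure, and the 128 KiB block size assumes the default
recordsize; pools with smaller blocks need proportionally more RAM.

```shell
# Back-of-the-envelope dedup-table RAM estimate for 12 x 2 TB,
# assuming ~320 bytes of core per unique 128 KiB block (commonly
# quoted figure; actual usage varies with blocksize and dedup ratio).
pool_bytes=$((12 * 2000000000000))   # 12 disks x 2 TB
block_bytes=131072                   # default 128 KiB recordsize
ddt_entries=$((pool_bytes / block_bytes))
ddt_bytes=$((ddt_entries * 320))
echo "$((ddt_bytes / 1073741824)) GiB of RAM just for the DDT"
```

For this 24 TB pool the estimate lands around 54 GiB for a fully
deduplicated pool, which is why "48 GB is enough" holds only if dedup stays
off (or the DDT is pushed out to an L2ARC SSD).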
On 19/10/2011 at 10:52:07-0400, Krunal Desai wrote:
> On Wed, Oct 19, 2011 at 10:14 AM, Albert Shih <Albert.Shih at obspm.fr> wrote:
> > When we buy an MD1200 we need a PERC H800 RAID card in the server, so
> > we have two options: [...]
>
> I know the PERC H200 can be flashed with IT firmware, making it in
> effect a "dumb" HBA, perfect for ZFS usage. Perhaps the H800 has the
> same? (If not, can you get the machine configured with an H200?)

I'm not sure what you mean when you say "H200 flashed with IT firmware"?

> If that's not an option, I think option 2 will work. My first ZFS
> server ran on a PERC 5/i, and I was forced to make 8 single-drive RAID
> 0s in the PERC option ROM, but Solaris did not seem to mind that.

OK. I don't have a choice (too complex to explain, and it doesn't matter
here), but at this moment I can only buy from Dell. On the Dell website I
have the choice between:

	SAS 6Gbps External Controller
	PERC H800 RAID Adapter for External JBOD, 512MB Cache, PCIe
	PERC H800 RAID Adapter for External JBOD, 512MB NV Cache, PCIe
	PERC H800 RAID Adapter for External JBOD, 1GB NV Cache, PCIe
	PERC 6/E SAS RAID Controller, 2x4 Connectors, External, PCIe 256MB Cache
	PERC 6/E SAS RAID Controller, 2x4 Connectors, External, PCIe 512MB Cache
	LSI2032 SCSI Internal PCIe Controller Card

I have no idea what the first item is. But from what I understand, the best
solution is the first or the last?

Regards.
JAS
--
Albert SHIH
DIO batiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
Heure local/Local time:
jeu 20 oct 2011 11:44:39 CEST
> On the Dell website I have the choice between:
>
>	SAS 6Gbps External Controller
>	PERC H800 RAID Adapter for External JBOD, 512MB Cache, PCIe
>	PERC H800 RAID Adapter for External JBOD, 512MB NV Cache, PCIe
>	PERC H800 RAID Adapter for External JBOD, 1GB NV Cache, PCIe
>	PERC 6/E SAS RAID Controller, 2x4 Connectors, External, PCIe 256MB Cache
>	PERC 6/E SAS RAID Controller, 2x4 Connectors, External, PCIe 512MB Cache
>	LSI2032 SCSI Internal PCIe Controller Card

The first one is probably an LSI card. However, check with Dell (and if it
is LSI, check exactly which card). Also check whether, with that
controller, they support seeing all the individual drives in the chassis as
JBOD. Otherwise, consider buying the chassis without the controller and
getting just the LSI card from someone else.

Regards,
JP
On Thu, Oct 20, 2011 at 5:49 AM, Albert Shih <Albert.Shih at obspm.fr> wrote:
> I'm not sure what you mean when you say "H200 flashed with IT firmware"?

IT is "Initiator Target", and many LSI chips have a version of their
firmware available that will put them into this mode, which is desirable
for ZFS. This is opposed to other LSI firmware modes like "IR" (Integrated
RAID), which you do not want. Since the H200 uses an LSI chip, you can
download that firmware from LSI and flash it to the card, turning it into
an IT-mode card and a simple HBA.

--khd
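The crossflash procedure is roughly the following, using LSI's sas2flash
utility. The firmware and BIOS image names here are examples for the
SAS2008-based 9211/H200 family; use the exact files from LSI's download
site for your board, and treat the whole thing as at-your-own-risk: losing
power between the erase and the write can brick the card.

```shell
# Hedged sketch of crossflashing to IT firmware with LSI's sas2flash.
# Image names (2118it.bin, mptsas2.rom) are examples, not gospel.
sas2flash -listall              # note the adapter index and current firmware
sas2flash -o -e 6               # erase the flash (do NOT lose power here)
sas2flash -o -f 2118it.bin      # write the IT-mode firmware image
sas2flash -o -b mptsas2.rom     # optional boot BIOS; skip for a pure HBA
sas2flash -listall              # confirm the card now reports IT firmware
```

Skipping the boot ROM entirely is common for ZFS boxes that boot from other
disks; the card comes up a few seconds faster and presents the drives with
no option-ROM interference.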
On 19/10/2011 at 19:23:26-0700, Rocky Shek wrote:

Hi. Thanks for this information.

> I also recommend the LSI 9200-8E, or the new 9205-8E with the IT
> firmware, based on past experience.

Do you know if the LSI 9205-8E HBA or the LSI 9202-16E HBA works under
FreeBSD 9.0?

Best regards.

--
Albert SHIH
DIO batiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
Heure local/Local time:
jeu 27 oct 2011 17:20:11 CEST
On Thu, October 27, 2011 11:32, Albert Shih wrote:
>> I also recommend the LSI 9200-8E, or the new 9205-8E with the IT
>> firmware, based on past experience.
>
> Do you know if the LSI 9205-8E HBA or the LSI 9202-16E HBA works under
> FreeBSD 9.0?

Check the man page for mps(4), the driver for these SAS2-generation LSI
controllers:

http://www.freebsd.org/cgi/man.cgi?query=mps&manpath=FreeBSD+9-current
http://www.freebsd.org/cgi/man.cgi?query=mps&manpath=FreeBSD+8.2-RELEASE

Or LSI's site:

http://www.lsi.com/products/storagecomponents/Pages/LSISAS9205-8e.aspx
http://www.lsi.com/products/storagecomponents/Pages/LSISAS9202-16e.aspx

Do you know how to use a search engine?
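Once the card is in the machine, driver support is easy to confirm from a
running FreeBSD system. A sketch (output shapes are illustrative, not
verbatim):

```shell
# Which kernel driver claimed the HBA? A supported SAS2 LSI board shows
# up attached to a driver instance such as mps0.
pciconf -lv | grep -i -B3 lsi

# Disks behind an IT-mode HBA appear as plain da(4) devices via CAM.
camcontrol devlist
```

If pciconf shows the device with a "none0@..." prefix instead of a driver
name, the running kernel has no driver for it, which answers the
"does it work" question before any pool is built.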