Hi, all,

I just wanted to say a big, big thank you to Kip and all the developers
who made ZFS on FreeBSD real. And to everyone who provided helpful
comments over the last couple of days.

I had to destroy and rebuild my zpool to switch from a 12-disk raidz2
to two 6-disk ones, but yesterday I could replace the raw devices with
glabel devices and practice replacing a failed disk at the same time. ;-)

So now we have this setup:

	NAME               STATE     READ WRITE CKSUM
	zfs                ONLINE       0     0     0
	  raidz2           ONLINE       0     0     0
	    label/disk100  ONLINE       0     0     0
	    label/disk101  ONLINE       0     0     0
	    label/disk102  ONLINE       0     0     0
	    label/disk103  ONLINE       0     0     0
	    label/disk104  ONLINE       0     0     0
	    label/disk105  ONLINE       0     0     0
	  raidz2           ONLINE       0     0     0
	    label/disk106  ONLINE       0     0     0
	    label/disk107  ONLINE       0     0     0
	    label/disk108  ONLINE       0     0     0
	    label/disk109  ONLINE       0     0     0
	    label/disk110  ONLINE       0     0     0
	    label/disk111  ONLINE       0     0     0

which will soon get another enclosure with six 750 GB disks.

I really like the way I can manage storage from the operating system,
without proprietary controller management software or even rebooting
into the BIOS.

Kind regards,
Patrick
--
punkt.de GmbH * Kaiserallee 13a * 76133 Karlsruhe
Tel. 0721 9109 0 * Fax 0721 9109 100
info@punkt.de   http://www.punkt.de
Gf: Jürgen Egeling   AG Mannheim 108285
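For anyone wanting to repeat the raw-device-to-glabel switch, the per-disk sequence can be sketched roughly as below. The device name /dev/da0 and the label name are made up for illustration; the actual devices depend on your controller.

```shell
# Write a GEOM label to the disk (glabel stores its metadata in the
# disk's last sector, so the labeled provider is one sector smaller
# than the raw device).
glabel label disk100 /dev/da0

# Swap the raw device in the pool for the labeled provider; ZFS
# resilvers onto it just as if a failed disk had been replaced.
zpool replace zfs da0 label/disk100

# Watch the resilver progress.
zpool status zfs
```

Doing one disk at a time keeps the pool redundant throughout, since each replace is just a single-disk resilver.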
On Thu, July 9, 2009 08:25, Patrick M. Hausen wrote:
> Hi, all,
>
> I just wanted to say a big big thank you to Kip and all the
> developers who made ZFS on FreeBSD real.
>
> And to everyone who provided helpful comments in the
> last couple of days.
>
> I had to delete and rebuild my zpool to switch from a
> 12-disk raidz2 to two 6-disk ones, but yesterday I could
> replace the raw devices with glabel devices and practice
> replacing a failed disk at the same time. ;-)
>
> So now we have this setup:
>
> 	NAME               STATE     READ WRITE CKSUM
> 	zfs                ONLINE       0     0     0
> 	  raidz2           ONLINE       0     0     0
> 	    label/disk100  ONLINE       0     0     0
> 	    label/disk101  ONLINE       0     0     0
> 	    label/disk102  ONLINE       0     0     0
> 	    label/disk103  ONLINE       0     0     0
> 	    label/disk104  ONLINE       0     0     0
> 	    label/disk105  ONLINE       0     0     0
> 	  raidz2           ONLINE       0     0     0
> 	    label/disk106  ONLINE       0     0     0
> 	    label/disk107  ONLINE       0     0     0
> 	    label/disk108  ONLINE       0     0     0
> 	    label/disk109  ONLINE       0     0     0
> 	    label/disk110  ONLINE       0     0     0
> 	    label/disk111  ONLINE       0     0     0
>
> which will get another enclosure with 6 750-GB-disks, soon.
>
> I really like the way I can manage storage from the operating
> system without proprietary controller management software or
> even rebooting into the BIOS.
>
> Kind regards,
> Patrick

I've always been curious about this. It is said that having many disks in one pool is not good. OK then, but won't the layout you're using here have the same effect as the twelve disks in a single group? (Is the space here the sum of both raidz2 groups?)

thanks, matheus

--
We will call you cygnus,
The God of balance you shall be

A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
http://en.wikipedia.org/wiki/Posting_style
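On the space part of the question: a pool's usable capacity is the sum across its vdevs, and each raidz2 vdev spends two disks on parity, so splitting 12 disks into two raidz2 groups gives up two extra disks' worth of data compared with one wide raidz2. A back-of-the-envelope check (the 500 GB disk size is an assumed figure, purely for illustration):

```shell
# Patrick's layout: two 6-disk raidz2 vdevs in one pool.
# Each vdev contributes (disks - 2 parity) * disk_size.
split=$(( 2 * (6 - 2) * 500 ))

# The old layout: one 12-disk raidz2 vdev.
single=$(( 1 * (12 - 2) * 500 ))

echo "two 6-disk raidz2:  ${split} GB"   # 8 data disks' worth
echo "one 12-disk raidz2: ${single} GB"  # 10 data disks' worth
```

So the split layout trades two disks of capacity for smaller, safer rebuild domains; the pool still stripes writes across both vdevs.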
On Sat, Jul 11, 2009 at 11:40 AM, Peter Jeremy <peterjeremy@optushome.com.au> wrote:
> On 2009-Jul-09 15:39:35 +0300, Dan Naumov <dan.naumov@gmail.com> wrote:
>> A single 40 disk raidz (DO NOT DO THIS) will have 40 disks total, 39
>> disks worth of space and will definitely explode on you sooner rather
>> than later (probably on the first import, export or scrub).
>
> Can you provide a reference for this statement? AFAIK, the only
> reason for the upper recommended limit of 9 disks is performance.
>
> --
> Peter Jeremy

Searching the various FreeBSD mailing lists (which have had discussions about such cases) and googling should yield results.

- Sincerely,
Dan Naumov
On 2009-07-11 10:40, Peter Jeremy wrote:
> On 2009-Jul-09 15:39:35 +0300, Dan Naumov <dan.naumov@gmail.com> wrote:
>> A single 40 disk raidz (DO NOT DO THIS) will have 40 disks total, 39
>> disks worth of space and will definitely explode on you sooner rather
>> than later (probably on the first import, export or scrub).
>
> Can you provide a reference for this statement? AFAIK, the only
> reason for the upper recommended limit of 9 disks is performance.

The more disks you use in one RAID set, the higher the probability that more than one disk will fail at the same time. An interesting read can be found here:

http://blogs.zdnet.com/storage/?p=162
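That risk can be made concrete with a toy model. Assume (a made-up number for illustration, not a real drive-reliability figure) that each disk independently has a 5% chance of dying during the rebuild window; the chance that at least one more disk fails while an n-disk set is already degraded grows quickly with n:

```shell
# Probability that at least one of the remaining n-1 disks fails
# during the rebuild, assuming independent failures with per-disk
# probability p. Purely illustrative numbers.
for n in 6 12 24 40; do
    awk -v n="$n" -v p=0.05 'BEGIN {
        printf "%2d disks: %.3f\n", n, 1 - (1 - p) ^ (n - 1)
    }'
done
```

Under these assumptions the 6-disk set sees a second failure about 23% of the time, while the 40-disk set is at roughly 86%, which is why wide single-parity-group sets are discouraged.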
On Sat, Jul 11, 2009 at 1:40 AM, Peter Jeremy <peterjeremy@optushome.com.au> wrote:
> On 2009-Jul-09 15:39:35 +0300, Dan Naumov <dan.naumov@gmail.com> wrote:
>> A single 40 disk raidz (DO NOT DO THIS) will have 40 disks total, 39
>> disks worth of space and will definitely explode on you sooner rather
>> than later (probably on the first import, export or scrub).
>
> Can you provide a reference for this statement? AFAIK, the only
> reason for the upper recommended limit of 9 disks is performance.

We found it impossible to resilver a new/replacement drive in a 24-drive raidz2 vdev. Even after almost two weeks of trying, it never got above 20-30% complete before restarting. That led me to do a bunch of web searches, where I found several blogs by Sun people that went over how the raidz implementation works, what its limitations are (a raidz vdev is limited to the IOPS of a single drive), and the recommendation to never use more than 8 or 9 drives in any single vdev.

--
Freddie Cash
fjwcash@gmail.com
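Following that 8-9 drive guidance, a 24-drive chassis would be laid out as several narrow raidz2 vdevs striped in one pool rather than a single wide vdev. A sketch of what that might look like (the pool name and da0-da23 device names are made up for illustration):

```shell
# Four 6-disk raidz2 vdevs in one pool: the pool stripes across all
# four vdevs (so IOPS scale with the vdev count), each vdev can lose
# two disks, and a resilver only touches 6 drives instead of 24.
zpool create tank \
    raidz2 da0  da1  da2  da3  da4  da5  \
    raidz2 da6  da7  da8  da9  da10 da11 \
    raidz2 da12 da13 da14 da15 da16 da17 \
    raidz2 da18 da19 da20 da21 da22 da23
```

The total parity overhead (8 of 24 disks) is higher than one wide raidz2's, but that is the trade the resilver behaviour above argues for.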