hi, i just set up snv_54 on an old p4 celeron system and even tho the
processor is crap, it's got 3 7200RPM HDs: 1 80GB and 2 40GBs. so i'm
wondering if there is an optimal way to lay out the ZFS pool(s) to make
this old girl as fast as possible....

as it stands now i've got the following drive layout:

  pri master: 80GB (call it drive 1)
  pri slave:  40GB (drive 2)

  sec master: 40GB (drive 3)
  sec slave:  DVD

(all connected with 80-conductor ribbons)

my partitions are:

  drive 1: two 10GB UFS root slices (so i can do live upgrades) and a
  1GB swap slice

i've got one big zpool consisting of a 50GB slice on drive 1 and all of
drives 2 & 3. i'm not sure that this is the optimal layout for
striping. i don't need mirroring or redundancy -- just speed. i'm
thinking maybe i'd be better off booting from one of the smaller
drives, putting the other two on one controller, and putting the zpool
on those only. spanning the two IDE controllers with a single zpool
seems like it might be a bad idea, but i am just postulating here....
Patrick P Korsnick wrote:
> i've got one big zpool consisting of a 50GB slice on drive 1 and all
> of drives 2 & 3.

If the zpool is a dynamic stripe, then yes, you are in the most
performant config.

> i'm not sure that this is the optimal layout for striping. i don't
> need mirroring or redundancy -- just speed. i'm thinking maybe i'd be
> better off booting from one of the smaller drives, putting the other
> two on one controller, and putting the zpool on those only. spanning
> the two IDE controllers with a single zpool seems like it might be a
> bad idea, but i am just postulating here....

Friends don't let friends use RAID-0 (aka just dynamic striping).

I don't think the IDE controllers will be the bottleneck. More likely
you will hit limitations on memory with only a 32-bit processor. Of
course, that all depends on what you are doing, but I wouldn't spend
much time trying to make it be a speed demon, because time is money.
 -- richard
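For anyone following along, a dynamic stripe like the one Patrick
describes would be built with something along these lines (c0d0, c0d1
and c1d0 are placeholder Solaris IDE device names, with s4 standing in
for the 50GB slice on the 80GB drive):

  # plain dynamic stripe (RAID-0): one 50GB slice plus two whole 40GB disks
  zpool create tank c0d0s4 c0d1 c1d0
  zpool status tank

ZFS spreads writes across all three top-level devices, which is why it
is fast but offers no redundancy at all.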
Patrick P Korsnick wrote:
> my partitions are:
> drive 1: i've got 2 10GB UFS root slices (so i can do live upgrades),
> and 1GB swap slice
> i've got one big zpool consisting of a 50GB slice on drive 1 and all
> of drives 2 & 3.
>
> i'm not sure that this is the optimal layout for striping. i don't
> need mirroring or redundancy -- just speed.

Why not take all but 40GB of the 80GB drive for the OS/boot, then take
the remaining 40GB plus the 40GB from each of drive 2 and drive 3 and
put them in a 3-device RAID-Z? That will at least give you some
redundancy.

 -Kyle
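A rough sketch of that layout (same placeholder device names as above;
s4 stands in for the 40GB slice left on the 80GB drive after the OS
slices):

  # 3-device RAID-Z: one 40GB slice plus the two whole 40GB drives,
  # roughly 80GB usable, survives the loss of any one device
  zpool create tank raidz c0d0s4 c0d1 c1d0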
On 1/12/07, Kyle McDonald <Kyle.McDonald at bigbandnet.com> wrote:
> Patrick P Korsnick wrote:
> > i'm thinking maybe i'd be better off booting from one of the smaller
> > drives, putting the other two on one controller, and putting the
> > zpool on those only. spanning the two IDE controllers with a single
> > zpool seems like it might be a bad idea, but i am just postulating
> > here....

That is backwards: you want them split, one on each controller, so that
if one controller chip dies the data still remains intact and safe,
provided the two drives are mirrored.

For the most space with safety, take a 40GB slice off the 80GB drive
and combine it with the other 2x 40GB drives in a raidz group composed
of 3x 40GB pieces. That gives you about 80GB of usable disk space.

Your other choice is the 2x 40GB drives together, mirrored, in one
pool, so it can survive the loss of a drive and still maintain data,
and the remaining slice off the 80GB drive as a single-drive pool that
holds temporary data that isn't important, because that data is gone if
the drive dies.

Needless to say I would recommend the first idea, unless you can find
another 40GB drive and another controller and use 4x pieces to make a
120GB pool. But for best performance it's best to allocate full drives
so ZFS can activate write caching on the drives.

James Dickens
uadmin.blogspot.com
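James's second option might look roughly like this (placeholder device
names again; s4 is whatever slice is left on the 80GB drive):

  # mirror the two whole 40GB drives -- survives the loss of either one
  zpool create tank mirror c0d1 c1d0

  # separate, non-redundant pool on the leftover 80GB-drive slice,
  # for throwaway data only
  zpool create scratch c0d0s4

Since the mirror is built from whole disks rather than slices, ZFS can
turn on the drives' write caches for it; the scratch pool lives on a
slice, so it doesn't get that benefit.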
[attempt to clean up the text, sorry if I miss something]

James Dickens wrote:
> that is backwards: you want them split, one on each controller, so
> that if one controller chip dies the data still remains intact and
> safe, provided the two drives are mirrored.

It is not a good assumption that if you have two IDE controllers you
have two IDE controller chips. In fact, I can't remember the last time
I saw such a beast... 1997 perhaps? Don't worry about the controllers.
See http://blogs.sun.com/relling/entry/zfs_raid_recommendations_space_vs

> for the most space with safety, take a 40GB slice off the 80GB drive
> and combine it with the other 2x 40GB drives in a raidz group composed
> of 3x 40GB pieces. That gives you about 80GB of usable disk space.
>
> your other choice is the 2x 40GB drives together, mirrored, in one
> pool, so it can survive the loss of a drive and still maintain data,
> and the remaining slice off the 80GB drive as a single-drive pool that
> holds temporary data that isn't important.

And a third choice is cutting 40 GBytes of each drive into two 20 GByte
partitions, so that you have a total of 6x 20 GByte partitions spread
across your 80 and 40 GByte drives. Then install three 2-way mirrors
across the disks. Some people like such things, and there is merit to
such designs.

  disk1 20 GBytes <- mirror -> disk2 20 GBytes
  disk2 20 GBytes <- mirror -> disk3 20 GBytes
  disk3 20 GBytes <- mirror -> disk1 20 GBytes

Total available space: 60 GBytes. Better performance for some workloads
than raidz.

BTW, I recently bought a 160 GByte IDE disk, new in the box, for $20
from a major retail chain. If it were my choice, I'd spend $20.
 -- richard
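A sketch of that split-mirror layout, assuming each disk has already
been carved into two 20 GByte slices with format(1M) (the slice numbers
below are made up):

  # three 2-way mirrors, each pairing 20GB slices from two different disks
  zpool create tank mirror c0d0s4 c0d1s0 \
                    mirror c0d1s1 c1d0s0 \
                    mirror c1d0s1 c0d0s5

Any single disk can fail and every mirror still has one surviving side,
and small random reads can be serviced from either side of each mirror.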
thanks all for the feedback! i definitely learned a lot -- storage
isn't anywhere near my field of expertise, so it's great to get some
real examples to go with all the buzzwords you hear around the
watercooler. ;)

i'll probably give one of the raid-z or mirroring setups suggested a
try when i migrate to the next sx:cr. i'm not too worried about the
data on this machine -- it's more something to play with while i wait
for my ultra 40 to arrive. ;D

richard: great blog post about the current state of storage topologies!
i'd heard about new technologies like SAS, but your post really helped
get me up to speed with what's current.
On 1/13/07, Richard Elling <Richard.Elling at sun.com> wrote:
> And a third choice is cutting 40 GBytes of each drive into two 20
> GByte partitions, so that you have a total of 6x 20 GByte partitions
> spread across your 80 and 40 GByte drives. Then install three 2-way
> mirrors across the disks. Some people like such things, and there is
> merit to such designs.
>   disk1 20 GBytes <- mirror -> disk2 20 GBytes
>   disk2 20 GBytes <- mirror -> disk3 20 GBytes
>   disk3 20 GBytes <- mirror -> disk1 20 GBytes
> Total available space: 60 GBytes. Better performance for some
> workloads than raidz.

You will then lose disk write caching on the 2 40G disks. Are there any
benchmarks on the impact of these choices?

  1) raidz of 3 x 40G
  2) your suggestion above

Of course, disk1 is really 80G, so ZFS does not have the whole disk
anyway.

> BTW, I recently bought a 160 GByte IDE disk, new in the box, for $20
> from a major retail chain. If it were my choice, I'd spend $20.

Lucky you.

--
Just me,
Wire ...
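On the write-cache point: whether a drive's cache is currently enabled
can usually be inspected from format's expert mode, at least on drives
and driver stacks that expose it -- the menu path below is an
assumption and may not be available on older IDE disks:

  # expert mode; pick the disk from the list, then walk the menus
  format -e
  #   cache -> write_cache -> display   (or 'enable' to turn it on)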