Hi,

I have these two pools, four LUNs each. One has two mirrors x two LUNs,
the other is one mirror x four LUNs.

I am trying to figure out what the pros and cons are of these two configs.

One thing I have noticed is that the single-mirror four-LUN config can
survive as many as three LUN failures; the other config only two.
I am thinking that space efficiency is similar, because ZFS stripes across
all the LUNs in both configs.

So, that being said, I would like to hear from others on the pros and cons
of these two approaches.

Thanks ahead,
-tomg

        NAME              STATE     READ WRITE CKSUM
        mypool            ONLINE       0     0     0
          mirror          ONLINE       0     0     0
            /export/lun5  ONLINE       0     0     0
            /export/lun2  ONLINE       0     0     0
          mirror          ONLINE       0     0     0
            /export/lun3  ONLINE       0     0     0
            /export/lun4  ONLINE       0     0     0

        NAME              STATE     READ WRITE CKSUM
        newpool           ONLINE       0     0     0
          mirror          ONLINE       0     0     0
            /export/luna  ONLINE       0     0     0
            /export/lunb  ONLINE       0     0     0
            /export/lund  ONLINE       0     0     0
            /export/lunc  ONLINE       0     0     0
Hello Tom,

Tuesday, May 23, 2006, 9:46:24 PM, you wrote:

TG> I have these two pools, four LUNs each. One has two mirrors x two LUNs,
TG> the other is one mirror x four LUNs.
TG>
TG> One thing I have noticed is that the single-mirror four-LUN config can
TG> survive as many as three LUN failures; the other config only two.
TG> I am thinking that space efficiency is similar, because ZFS stripes across
TG> all the LUNs in both configs.

In the first config you should get a pool with usable capacity equal to
'2x LUN size'. In the second config, only '1x LUN size'.
So in the second config you get better redundancy, but only half the
storage capacity.

-- 
Best regards,
Robert                       mailto:rmilkowski at task.gda.pl
                             http://milek.blogspot.com
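A quick way to see the capacity difference is to recreate both layouts with
small file-backed vdevs and compare what 'zpool list' reports. This is only a
sketch (the backing-file names and the 1 GB size are made up for illustration,
not taken from Tom's setup), but the commands themselves are standard ZFS:

        # create four 1 GB backing files to act as LUNs (hypothetical paths)
        mkfile 1g /export/lun1 /export/lun2 /export/lun3 /export/lun4

        # layout 1: two 2-way mirrors striped together -> roughly 2 GB usable
        zpool create mypool mirror /export/lun1 /export/lun2 \
                            mirror /export/lun3 /export/lun4
        zpool list
        zpool destroy mypool

        # layout 2: one 4-way mirror -> roughly 1 GB usable
        zpool create newpool mirror /export/lun1 /export/lun2 \
                                    /export/lun3 /export/lun4
        zpool list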
Robert Milkowski wrote:

> In the first config you should get a pool with usable capacity equal to
> '2x LUN size'. In the second config, only '1x LUN size'.
> So in the second config you get better redundancy, but only half the
> storage capacity.

Ok, I see that; df shows it explicitly.

root at chopin> df -F zfs -h
Filesystem             size   used  avail capacity  Mounted on
mypool                 2.0G    39M   1.9G     2%    /mypool
newpool               1000M     8K  1000M     1%    /newpool

What confused me is that ZFS does dynamic striping, and if I write to the
pool built from two 2-LUN mirrors, all of the disks get I/O. But my error
in thought was in how the data gets spread out. It must be that the writes
get striped for bandwidth utilization, but a given block and its copy are
not spread across the mirrors. I'd like to understand that better.

It sure is good to be able to experiment with devices.

-tomg
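If it helps to see where the writes actually land, one thing to try is to
watch the per-vdev I/O counters while pushing data into the pool. A small
sketch, assuming the mypool layout above (the dd target file is hypothetical):

        # generate some writes into the pool built from two 2-LUN mirrors
        dd if=/dev/zero of=/mypool/testfile bs=128k count=1000 &

        # show per-mirror and per-LUN read/write activity every 2 seconds
        zpool iostat -v mypool 2

The -v output breaks the pool totals down by mirror vdev and then by LUN, so
you can watch both mirrors taking writes (the dynamic striping) while each
block and its copy stay within a single mirror pair.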
Hello Tom,

Tuesday, May 23, 2006, 10:37:31 PM, you wrote:

TG> What confused me is that ZFS does dynamic striping, and if I write to the
TG> pool built from two 2-LUN mirrors, all of the disks get I/O. But my error
TG> in thought was in how the data gets spread out. It must be that the writes
TG> get striped for bandwidth utilization, but a given block and its copy are
TG> not spread across the mirrors. I'd like to understand that better.

Well,

        mirror A B mirror C D

with ZFS actually behaves like RAID-10 (a stripe over mirrors). The main
difference here is the variable stripe width, but when it comes to
protection it's just RAID-10 plus checksums for data and metadata.

You can imagine such a config as stacked RAID - the same as if you had
created two mirrors on a HW RAID array, exposed those two "disks" to the
host, and then just did the striping with ZFS (zpool create pool X Y,
where X is one mirror made from two disks and Y is another mirror made
from two disks). The difference is in using variable stripe width and
checksums (and a more clever I/O scheduler?).

-- 
Best regards,
Robert                       mailto:rmilkowski at task.gda.pl
                             http://milek.blogspot.com
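To make the stacked-RAID analogy concrete, here is a hedged sketch of the two
ways of layering it (the cXtYd0 device names are hypothetical; the first pair
stands in for X and Y, the LUNs a hardware array exposes after mirroring
internally):

        # striping in ZFS over hardware mirrors: ZFS sees two plain "disks"
        # (X = c2t0d0, Y = c2t1d0), so the redundancy lives in the array
        zpool create pool c2t0d0 c2t1d0

        # the all-ZFS equivalent (what mypool already is): ZFS owns both the
        # striping and the mirroring
        zpool create pool mirror c3t0d0 c3t1d0 mirror c3t2d0 c3t3d0

One practical consequence of the second form is that when a checksum catches a
bad block, ZFS still has a second copy it controls and can repair from, which
it cannot do when the mirroring is hidden inside the array.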