Constantin Gonzalez
2006-Aug-21 09:16 UTC
[zfs-discuss] ZFS Load-balancing over vdevs vs. real disks?
Hi,

my ZFS pool for my home server is a bit unusual:

  pool: pelotillehue
 state: ONLINE
 scrub: scrub completed with 0 errors on Mon Aug 21 06:10:13 2006
config:

        NAME            STATE     READ WRITE CKSUM
        pelotillehue    ONLINE       0     0     0
          mirror        ONLINE       0     0     0
            c0d1s5      ONLINE       0     0     0
            c1d0s5      ONLINE       0     0     0
          raidz1        ONLINE       0     0     0
            c0d0s3      ONLINE       0     0     0
            c0d1s3      ONLINE       0     0     0
            c1d0s3      ONLINE       0     0     0
            c1d1s3      ONLINE       0     0     0
          raidz1        ONLINE       0     0     0
            c0d1s4      ONLINE       0     0     0
            c1d0s4      ONLINE       0     0     0
            c1d1s4      ONLINE       0     0     0

The reason is simple: I have 4 differently-sized disks (80, 80, 200 and
250 GB. It's a home server, so I crammed whatever I could find elsewhere
into that box :) ) and my goal was to create the biggest pool possible
while retaining some level of redundancy.

The above config therefore groups the biggest slices that can be created
on all four disks into the 4-disk RAID-Z vdev, then the biggest slices
that can be created on 3 disks into the 3-disk RAID-Z, and the two large
slices that remain are mirrored. It's like playing Tetris with disk
slices... But the pool can tolerate 1 broken disk and it gave me maximum
storage capacity, so be it.

This means that we have one pool with 3 vdevs that access up to 3
different slices on the same physical disk.

Question: Does ZFS consider the underlying physical disks when
load-balancing, or does it only load-balance across vdevs, thereby
potentially overloading physical disks with up to 3 parallel requests
per physical disk at once?

I'm pretty sure ZFS is very intelligent and will do the right thing, but
a confirmation would be nice here.

Best regards,
   Constantin

--
Constantin Gonzalez                      Sun Microsystems GmbH, Germany
Platform Technology Group, Client Solutions          http://www.sun.de/
Tel.: +49 89/4 60 08-25 91            http://blogs.sun.com/constantin/
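[Editor's note: the "Tetris with disk slices" layout above can be sketched as a tiny greedy algorithm. The following Python is purely illustrative — the function name and the greedy rule are assumptions, not anything ZFS itself does: carve the largest slice that every disk with remaining space can hold, group those slices into one redundant vdev, and repeat. With the raw sizes from the post it yields a 4-disk RAID-Z plus a mirror; the real pool also ends up with a 3-disk RAID-Z because actual slice sizes are constrained by other partitions on the disks.]

```python
# Hypothetical sketch of the slice-Tetris layout: repeatedly carve the
# largest slice that fits on every disk with free space left, and group
# those slices into one redundant vdev.  Not actual ZFS behavior.

def tetris_layout(disks):
    """Return a list of (vdev_type, slice_size_gb, n_disks) tuples."""
    free = sorted(disks, reverse=True)   # remaining free space per disk
    vdevs = []
    while True:
        free = [f for f in free if f > 0]
        n = len(free)
        if n < 2:                        # no redundancy possible any more
            break
        slice_size = min(free)           # biggest slice all n disks can hold
        vdev_type = "mirror" if n == 2 else f"raidz1-{n}"
        vdevs.append((vdev_type, slice_size, n))
        free = [f - slice_size for f in free]
    return vdevs

# Disk sizes (GB) from the post:
layout = tetris_layout([80, 80, 200, 250])
for vdev_type, size, n in layout:
    print(f"{vdev_type}: {n} slices of {size} GB")
```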
eric kustarz
2006-Aug-21 19:48 UTC
[zfs-discuss] ZFS Load-balancing over vdevs vs. real disks?
Constantin Gonzalez wrote:

> Question: Does ZFS consider the underlying physical disks when
> load-balancing, or does it only load-balance across vdevs, thereby
> potentially overloading physical disks with up to 3 parallel requests
> per physical disk at once?

ZFS only does dynamic striping across the (top-level) vdevs.

I understand why you set up your pool that way, but ZFS really likes
whole disks instead of slices. Trying to keep track of which devices
are really slices that belong to other vdevs seems overly complicated
for the gain achieved.

eric
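[Editor's note: a toy sketch of why per-vdev striping can stack requests on one physical disk. The slice-to-disk mapping is taken from the `zpool status` output above; the assumption that a full-width I/O touches every slice of a vdev is a simplification for illustration.]

```python
# If one concurrent full-width I/O is issued to each of the three
# top-level vdevs, count how many requests each physical disk serves.
from collections import Counter

vdevs = {
    "mirror":   ["c0d1s5", "c1d0s5"],
    "raidz1-a": ["c0d0s3", "c0d1s3", "c1d0s3", "c1d1s3"],
    "raidz1-b": ["c0d1s4", "c1d0s4", "c1d1s4"],
}

def disk(slice_name):
    return slice_name[:4]   # e.g. "c0d1s5" -> physical disk "c0d1"

load = Counter(disk(s) for slices in vdevs.values() for s in slices)
print(load)   # c0d1 and c1d0 each end up serving 3 requests at once
```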
Constantin Gonzalez
2006-Aug-22 10:22 UTC
[zfs-discuss] ZFS Load-balancing over vdevs vs. real disks?
Hi Eric,

>> This means that we have one pool with 3 vdevs that access up to 3
>> different slices on the same physical disk.

minor correction: 1 pool, 3 vdevs, 3 slices per disk on 4 disks.

>> Question: Does ZFS consider the underlying physical disks when
>> load-balancing, or does it only load-balance across vdevs, thereby
>> potentially overloading physical disks with up to 3 parallel requests
>> per physical disk at once?
>
> ZFS only does dynamic striping across the (top-level) vdevs.
>
> I understand why you set up your pool that way, but ZFS really likes
> whole disks instead of slices.

OK, understood. When I run out of storage, I'll try to get 4 cheap SATA
drives of equal size and migrate everything over.

> Trying to keep track of which devices are really slices that belong to
> other vdevs seems overly complicated for the gain achieved.

So what data does ZFS base its dynamic striping on? Does it count IOPS
per vdev, or does it try to sense the load on the vdevs by measuring,
say, response times, queue lengths, etc.?

Best regards,
   Constantin

--
Constantin Gonzalez                      Sun Microsystems GmbH, Germany
Platform Technology Group, Client Solutions          http://www.sun.de/
Tel.: +49 89/4 60 08-25 91            http://blogs.sun.com/constantin/
eric kustarz
2006-Aug-22 21:19 UTC
[zfs-discuss] ZFS Load-balancing over vdevs vs. real disks?
Constantin Gonzalez wrote:

> So what data does ZFS base its dynamic striping on? Does it count IOPS
> per vdev, or does it try to sense the load on the vdevs by measuring,
> say, response times, queue lengths, etc.?

It's currently done by capacity. We're planning on adding the ability to
factor in the "speed" of the device (so a slower drive would get less
work compared to a faster drive).

eric
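[Editor's note: the "by capacity" answer can be illustrated with a toy allocator. This is a hypothetical sketch, not the real ZFS allocation code: each write simply goes to whichever top-level vdev currently has the most free space, so emptier vdevs absorb more writes until the pool levels out. The vdev names and free-space figures are made up.]

```python
# Toy capacity-driven striping: always allocate from the vdev with the
# most free space.  Real ZFS weights allocations rather than picking a
# strict maximum, but the tendency is the same.

def pick_vdev(free_space):
    """Pick the vdev with the most free space for the next allocation."""
    return max(free_space, key=free_space.get)

free = {"mirror": 120, "raidz1-a": 240, "raidz1-b": 80}  # GB free (made up)

writes = []
for _ in range(8):
    v = pick_vdev(free)
    writes.append(v)
    free[v] -= 20          # pretend each write consumes 20 GB

print(writes)   # the emptiest vdev absorbs most of the writes
```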