Hi,

the ZFS_Best_Practises_Guide states this: "Keep vdevs belonging to one zpool of similar sizes; Otherwise, as the pool fills up, new allocations will be forced to favor larger vdevs over smaller ones and this will cause subsequent reads to come from a subset of underlying devices leading to lower performance."

I am setting up a zpool comprised of mirrored LUNs, each one exported as a JBOD from my FC RAIDs. As the zpool fills up, I intend to add more mirror vdevs to it, and I am wondering whether I have understood that correctly.

Let's assume I create the initial zpool like this: zpool create tank mirror disk1a disk1b mirror disk2a disk2b mirror disk3a disk3b mirror disk4a disk4b. After some time the zpool has filled up and I add another mirror vdev: zpool add tank mirror disk5a disk5b. This would mean that all new data takes a performance hit, since it can only be stored on the new mirror instead of being distributed across all vdevs, right?

So, to circumvent this, it would be mandatory to add at least as many vdevs at once as are needed to satisfy the desired performance?

How do you guys handle this?

Cheers,
budy
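A note on the commands involved, since "attach" and "add" do different things in ZFS: zpool attach adds a disk to an existing vdev (for example, turning a single disk into a two-way mirror), while zpool add creates a new top-level vdev, which is what growing the pool by another mirror requires. A minimal sketch using the disk names from the post:

    # initial pool: four two-way mirrors; writes are striped across all of them
    zpool create tank mirror disk1a disk1b mirror disk2a disk2b \
        mirror disk3a disk3b mirror disk4a disk4b

    # later: grow the pool with a fifth top-level mirror vdev
    # (note: "add", not "attach" -- attach would extend an existing mirror)
    zpool add tank mirror disk5a disk5b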
The way I understand it, you should add new mirror vdevs of the same size as the vdevs already attached to the pool. That is, if your vdevs are mirrors of 2 TB drives, don't add a new mirror of, say, 1 TB drives. I might be wrong, but this is my understanding.
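A quick way to check this before expanding is to look at the capacities the pool already has; zpool iostat -v prints allocated and free space for each top-level vdev, so a mismatched mirror is easy to spot before it goes in:

    # per-vdev capacity (alloc/free); a new mirror should be roughly
    # the same size as the vdevs already listed here
    zpool iostat -v tank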
a.smith at ukgrid.net
2011-Jan-13 16:45 UTC
[zfs-discuss] zpool scalability and performance
Basically, yes: I think you need to add all the vdevs you require, in the circumstances you describe. You just have to consider what ZFS is able to do with the disks that you give it. If you have 4 mirrors to start with, then all writes are spread across all disks and you get nice performance using all 8 spindles. If you fill all of these up and then add one more mirror, it is logical that newly written data goes only to the free space on the new mirror, and you get the performance of writing to a single mirrored vdev.

To handle this, you would either have to add sufficient new vdevs to give you your required performance, or, if there is a fair amount of data turnover on your pool (i.e. you are deleting old data, including from snapshots), you might get reasonable performance by adding a new mirror at some point before your existing pool is completely full. Data will initially be written and spread across all disks, as there will be free space on all of them, and over time old data will be removed from the older vdevs. That way, most of the time reads and writes would benefit from all vdevs, but it's not going to give you any guarantees of that, I guess...

Anyway, that's what occurred to me on the subject! ;)

cheers Andy.
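To make the first option concrete: several vdevs can be added in a single command, so new writes stripe across all of them from the start. A sketch, assuming a second spare pair of LUNs (disk6a/disk6b, hypothetical names) alongside disk5a/disk5b:

    # add two mirror vdevs at once; new allocations spread across both,
    # plus whatever free space remains on the original four mirrors
    zpool add tank mirror disk5a disk5b mirror disk6a disk6b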
Roy Sigurd Karlsbakk
2011-Jan-15 15:08 UTC
[zfs-discuss] zpool scalability and performance
> the ZFS_Best_Practises_Guide states this:
>
> "Keep vdevs belonging to one zpool of similar sizes; Otherwise, as the
> pool fills up, new allocations will be forced to favor larger vdevs
> over smaller ones and this will cause subsequent reads to come from a
> subset of underlying devices leading to lower performance."
>
> I am setting up a zpool comprised of mirrored LUNs, each one
> exported as a JBOD from my FC RAIDs. As the zpool fills up, I
> intend to add more mirror vdevs to it, and I am wondering whether I
> have understood that correctly.
>
> Let's assume I create the initial zpool like this: zpool create
> tank mirror disk1a disk1b mirror disk2a disk2b mirror disk3a disk3b
> mirror disk4a disk4b. After some time the zpool has filled up and I
> add another mirror vdev: zpool add tank mirror disk5a disk5b. This
> would mean that all new data takes a performance hit, since it can
> only be stored on the new mirror instead of being distributed across
> all vdevs, right?
>
> So, to circumvent this, it would be mandatory to add at least as many
> vdevs at once as are needed to satisfy the desired performance?

If you make a pool and then fill it to more than 80% or so, it will slow down. If you then add more drives to that pool (mirrors, raidz or whatever), new writes will basically go to the new ones, since the rest of them are full. To fix this properly, block rewrite needs to be implemented to provide a way of rebalancing an existing pool, and I don't think the ZFS developers are there yet. To avoid it, replacing the existing drives with larger drives (with autoexpand=on set on the pool) might be a solution.

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
roy at karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It is an elementary imperative for all pedagogues to avoid excessive use of idioms of foreign origin. In most cases, adequate and relevant synonyms exist in Norwegian.
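For reference, the replace-and-autoexpand route described above looks roughly like this; the larger-LUN names are made up for illustration:

    # let the pool grow automatically once every device in a vdev is larger
    zpool set autoexpand=on tank

    # swap each side of a mirror for a larger LUN, one disk at a time,
    # waiting for the resilver to finish (check zpool status) in between
    zpool replace tank disk1a disk1a-big
    zpool replace tank disk1b disk1b-big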