Hi,

at the moment I am running a pool consisting of 4 disks (Seagate Enterprise SATA) assembled into 2 mirror vdevs. Now I want to add two more drives to extend the capacity to 1.5 times the old capacity.

As these mirrors will be "striped" in the pool, I want to know what will happen to the existing data of the pool. Will it stay at its current location and only new data be written to the new mirror, or will the existing data be spread over all 3 mirrors? Will there be a benefit, resulting in more IOPS/bandwidth, or will there only be more space?

(I hope I expressed my considerations understandably despite English not being my mother tongue.)

Regards,
Matthias
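For reference, adding the new pair as a third top-level mirror vdev is a single zpool add; the pool name "tank" and the device names below are assumptions, not taken from the post:

    # Sketch only: pool and device names are assumptions.
    # This adds a third top-level mirror vdev; existing data is not moved.
    zpool add tank mirror c2t3d0 c2t4d0

    # Confirm the new mirror shows up next to the two existing ones.
    zpool status tank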
On Tue, 20 Oct 2009, Matthias Appel wrote:

> As these mirrors will be "striped" in the pool, I want to know what will
> happen to the existing data of the pool.
>
> Will it stay at its current location and only new data be written to the
> new mirror, or will the existing data be spread over all 3 mirrors?

The existing data will remain in its current location. If the data is re-written, then it should be somewhat better distributed across the disks.

> Will there be a benefit, resulting in more IOPS/bandwidth, or will there
> only be more space?

You will see more IOPS/bandwidth, but if your existing disks are very full, then more traffic may be sent to the new disks, which results in less benefit.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
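One way to watch how new writes are spread across the vdevs after the expansion is per-vdev iostat; the pool name is again an assumption:

    # Per-vdev I/O statistics, refreshed every 5 seconds. While the new
    # mirror is emptier it will typically receive a larger share of writes.
    zpool iostat -v tank 5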
> You will see more IOPS/bandwidth, but if your existing disks are very
> full, then more traffic may be sent to the new disks, which results in
> less benefit.

OK, that means that, over time, data will be distributed across all mirrors? (Assuming all blocks are written at least once.)

I think a useful extension to ZFS would be a background task which redistributes all used blocks across all vdevs. I don't know if this can be done within ZFS, but it would amount to reading one block of the pool after another and writing it back, on the assumption that by rewriting it, it will be spread across the mirrors.
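Short of such a feature, a crude user-level approximation is simply to rewrite existing files so ZFS reallocates their blocks. A minimal sketch, assuming a dataset mounted at /tank/data and enough free space for a temporary copy of each file:

    # Sketch only: the path is an assumption, and this rewrites whole files,
    # not individual blocks. Each rewritten file is allocated across all
    # vdevs, including the new mirror.
    for f in /tank/data/*; do
        [ -f "$f" ] || continue              # plain files only
        cp -p "$f" "$f.rebalance" && mv "$f.rebalance" "$f"
    done

Note that any existing snapshots keep referencing the old blocks, so this temporarily increases space usage.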
Hi,

Something like http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6855425 ?

Bruno

Matthias Appel wrote:
>> You will see more IOPS/bandwidth, but if your existing disks are very
>> full, then more traffic may be sent to the new disks, which results in
>> less benefit.
>
> OK, that means that, over time, data will be distributed across all
> mirrors? (Assuming all blocks are written at least once.)
>
> I think a useful extension to ZFS would be a background task which
> redistributes all used blocks across all vdevs.
>
> I don't know if this can be done within ZFS, but it would amount to reading
> one block of the pool after another and writing it back, on the assumption
> that by rewriting it, it will be spread across the mirrors.
From: Bruno Sousa [mailto:bsousa at epinfante.com]
Sent: Tuesday, 20 October 2009 22:20
To: Matthias Appel
Cc: zfs-discuss at opensolaris.org
Subject: Re: [zfs-discuss] Adding another mirror to storage pool

> Hi,
>
> Something like http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6855425 ?
>
> Bruno

Yes, thanks for mentioning that. To my embarrassment, I must confess I did not comb through the bug database first.
On Tue, 20 Oct 2009, Matthias Appel wrote:

> OK, that means that, over time, data will be distributed across all
> mirrors? (Assuming all blocks are written at least once.)

Yes, but it is quite rare for all files to be re-written. If you have reliable storage somewhere else, you could send your existing pool to it and then re-create your pool from scratch. ZFS's existing limitations are a good reason to over-provision the pool initially rather than waiting until the pool is close to full before adding more disks. Regardless, the only real loss is the boost to available IOPS that you would get if all disks could be used to store new data.

> I think a useful extension to ZFS would be a background task which
> redistributes all used blocks across all vdevs.

Yes, that would be a useful option. It could be combined with a file optimizer which attempts to re-lay out large files for most efficient access.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
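A rough sketch of that send-and-re-create approach; the pool, host, and device names are all assumptions, so double-check everything before destroying a pool:

    # 1. Snapshot everything and send it to reliable storage elsewhere.
    zfs snapshot -r tank@migrate
    zfs send -R tank@migrate | ssh backuphost "zfs receive -F -d backup"

    # 2. Re-create the pool with all three mirrors present from the start.
    zpool destroy tank
    zpool create tank mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0 \
        mirror c2t4d0 c2t5d0

    # 3. Send the data back; the restored blocks are now allocated
    #    across all three mirrors.
    ssh backuphost "zfs send -R backup@migrate" | zfs receive -F -d tank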