hello all,

I have the following scenario of using zfs.
- I have an HDD image containing an NTFS partition, stored in a zfs dataset in a file called images.img
- I have X physical machines that boot from my server via iSCSI from such an image
- Every time a machine sends a boot request to my server, a clone of the zfs dataset is created and the machine is given the clone to boot from

I want to make an optimization to my framework: a ramdisk pool would store the initial hdd images, while the clones of the image would be stored on a disk-based pool. I tried to do this with zfs, but it wouldn't let me do cross-pool clones.

If someone has any idea on how to proceed, please let me know. It does not have to be exactly as I proposed, but it has to be something in this direction: a ramdisk-backed initial image and disk-backed clones.

thank you,
Mihai
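For illustration, the per-boot clone step described above, and the cross-pool variant that zfs rejects, look roughly like this (the pool and dataset names ramdisk, tank, images and client1 are made up for the example):

    # Per-boot provisioning with everything in one pool: snapshot the
    # golden image dataset once, then clone it for each booting client.
    zfs snapshot tank/images@golden
    zfs clone tank/images@golden tank/clones/client1

    # The cross-pool variant fails: a clone must live in the same pool as
    # the snapshot it is based on, because it shares blocks with it.
    zfs clone ramdisk/images@golden tank/clones/client1

    # The closest cross-pool equivalent is send/receive, but that copies
    # the data instead of sharing blocks, so each "clone" costs full space.
    zfs send ramdisk/images@golden | zfs receive tank/clones/client1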
Mihai wrote:
> hello all,
>
> I have the following scenario of using zfs.
> - I have an HDD image containing an NTFS partition, stored in a zfs
>   dataset in a file called images.img
> - I have X physical machines that boot from my server via iSCSI from
>   such an image
> - Every time a machine sends a boot request to my server, a clone of
>   the zfs dataset is created and the machine is given the clone to
>   boot from
>
> I want to make an optimization to my framework: a ramdisk pool would
> store the initial hdd images, while the clones of the image would be
> stored on a disk-based pool.
> I tried to do this with zfs, but it wouldn't let me do cross-pool clones.
>
> If someone has any idea on how to proceed, please let me know. It does
> not have to be exactly as I proposed, but it has to be something in
> this direction: a ramdisk-backed initial image and disk-backed clones.

You haven't said what your requirement is - i.e. what are you hoping to
improve by making this change? I can only guess.

If you are reading blocks from your initial hdd images (golden images)
frequently enough, and you have enough memory on your system, these
blocks will end up in the ARC (memory) anyway.

If you don't have enough RAM for this to help, then you could add more
memory, and/or an SSD as an L2ARC device ("cache" device in zpool
command line terms).

--
Andrew Gabriel
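Adding an SSD as a cache device is a single zpool operation; the pool name tank and the device name c3t0d0 below are placeholders:

    # Attach an SSD to an existing pool as an L2ARC ("cache") device.
    zpool add tank cache c3t0d0

    # Verify it appears under the "cache" heading of the pool layout.
    zpool status tank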
> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-
> bounces at opensolaris.org] On Behalf Of Andrew Gabriel
>
> If you are reading blocks from your initial hdd images (golden images)
> frequently enough, and you have enough memory on your system, these
> blocks will end up in the ARC (memory) anyway. If you don't have enough
> RAM for this to help, then you could add more memory, and/or an SSD as
> an L2ARC device ("cache" device in zpool command line terms).

Andrew's right. If you've got enough RAM in the system for a ramdisk to
contain the whole image, then the kernel will automatically use that RAM
for the ARC cache anyway, so there should be no need for the ramdisk.
The first time a block is read from the disk, it will subsequently be
read from RAM.

If there's enough activity on all the clone disks to push the original
disk out of the ARC ram cache, that means the clones are benefiting more
and the original disk is benefiting less. You should let that happen,
and optionally add more RAM.

But one more thing: if I'm not mistaken, L2ARC cached blocks will not
get striped across more than one device in your L2ARC, which means your
L2ARC only helps with latency, not throughput. (I'm really not certain
about this, but I think so.) Given the stated usage scenario, I'm not
sure whether latency or throughput would be more vital.
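One way to test the "it will end up in the ARC anyway" assumption is to watch the ARC kstats while the clients boot; the counters below come from the zfs:0:arcstats kstat:

    # Current ARC size, its ceiling, and the hit/miss counters.
    kstat -p zfs:0:arcstats:size zfs:0:arcstats:c_max
    kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses

    # Run it again during a boot storm: hits climbing much faster than
    # misses suggests the golden image blocks are already served from RAM.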
----- "Mihai" <imcu.zfs at gmail.com> wrote:

> hello all,
>
> I have the following scenario of using zfs.
> - I have an HDD image containing an NTFS partition, stored in a zfs
>   dataset in a file called images.img

Wouldn't it be better to use zfs volumes? AFAIK they are way faster than
using files.

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
roy at karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented
intelligibly. It is an elementary imperative for all pedagogues to avoid
excessive use of idioms of foreign origin. In most cases adequate and
relevant synonyms exist in Norwegian.
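If you switch to zvols, the golden image becomes a volume rather than a file in a dataset, and the per-boot clones become zvol clones. The names, the 40G size and the COMSTAR commands below are an assumed setup, not anything from the original post:

    # Create a zvol for the golden NTFS image instead of images.img.
    zfs create -V 40G tank/images/golden

    # One-off migration of the existing image file into the volume.
    dd if=/tank/images/images.img of=/dev/zvol/dsk/tank/images/golden bs=1M

    # Export the zvol over iSCSI (COMSTAR); add-view needs the LU GUID
    # printed by create-lu.
    sbdadm create-lu /dev/zvol/rdsk/tank/images/golden
    stmfadm add-view <lu-guid-from-create-lu>

    # Per-boot clones are then clones of a snapshot of the golden volume.
    zfs snapshot tank/images/golden@golden
    zfs clone tank/images/golden@golden tank/clones/client1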
> If I'm not mistaken, L2ARC cached blocks will not get striped across more
> than one device in your L2ARC, which means your L2ARC only helps for
> latency, and not throughput.

Regardless of whether it does or not, it can still help overall system
throughput by avoiding having to read from slower (maybe significantly
slower) disks.

> (I'm really not certain about this, but I think so.)

A description of how the L2ARC works is here in the source code:

http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/arc.c#3605

The L2ARC does cycle through all the L2ARC vdevs each time the feeder runs:

http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/arc.c#4390

It chooses the next L2ARC vdev to use with this function:

http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/arc.c#3808

--
Darren J Moffat
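To see the feeder actually writing to the cache devices, the l2_* counters in the same arcstats kstat can be watched (assuming at least one cache device is configured):

    # L2ARC feed activity and how much the cache devices currently hold.
    kstat -p zfs:0:arcstats:l2_feeds zfs:0:arcstats:l2_write_bytes
    kstat -p zfs:0:arcstats:l2_size zfs:0:arcstats:l2_hits zfs:0:arcstats:l2_misses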
Edward Ned Harvey wrote:
> But one more thing:
>
> If I'm not mistaken, L2ARC cached blocks will not get striped across more
> than one device in your L2ARC, which means your L2ARC only helps for
> latency, and not throughput. (I'm really not certain about this, but I
> think so.) Given the stated usage scenario, I'm not sure if latency or
> throughput would be more vital.

You are correct - L2ARC cache devices don't stripe - they're used
round-robin.

For multiple-file access like the OP describes, that provides the same
level of performance boost as striping, i.e. N times the performance for
N cache devices. So, getting 4 x 40GB smaller devices is likely to boost
performance 4 x more than 1 x 160GB device.

--
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
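Following that reasoning, several smaller cache devices can be added in one command and their round-robin use observed per device; the pool and device names are placeholders:

    # Add four smaller SSDs as cache devices in one go.
    zpool add tank cache c2t0d0 c2t1d0 c2t2d0 c2t3d0

    # zpool iostat -v lists each cache device separately, so the
    # round-robin spread of reads and writes across them is visible.
    zpool iostat -v tank 5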