Hello there,

The Hitachi USP-V (sold as the 9990V by Sun) provides thin provisioning, known as Hitachi Dynamic Provisioning (HDP). This makes the OS believe that a huge LUN is available while its full size is not physically allocated on the storage-system side. A simple example: 100 GB seen by the OS, but only 50 GB physically allocated in the frame, drawn from a stock of physical devices (called an HDP pool).

The USP-V is now able to reclaim zero pages that are not used by a filesystem, putting them back into the physical pool as many free 42 MB blocks.

As far as I know, when a file is deleted, ZFS just stops referencing the blocks associated with that file, much as an MMU does with RAM. The blocks are neither deleted nor zeroed (which sounds very good for getting files back after a crash!). Is there a way to transform these "unreferenced blocks" into "zero blocks" - either a posteriori or a priori - so that the HDS frame can reclaim them? I know that this would create some overhead... It might lead to a shorter "block allocation history", but it could be very useful for zero-page reclaim.

I hope my question was clear enough. Thanks for your hints,

Cyril Payet
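(A common host-side workaround for arrays that reclaim zero pages is to zero the free space yourself: write a zero-filled file until the filesystem is nearly full, then delete it. This is only a sketch - the mount point is a hypothetical placeholder, and ZFS's copy-on-write allocation and compression settings may change what actually reaches the array:)

```shell
# Sketch of a host-side zero-fill pass. FS is a hypothetical stand-in for
# the thin-provisioned filesystem's mount point. In real use you would drop
# the 'count=' limit and write until ENOSPC, so every free block gets zeroed.
FS="${FS:-/tmp/zerofill-demo}"
mkdir -p "$FS"
dd if=/dev/zero of="$FS/zerofill" bs=1M count=16 2>/dev/null || true
sync                       # push the zero blocks down to the array
rm "$FS/zerofill"          # freed space now consists of zeroed, reclaimable pages
```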
Cyril Payet wrote:
> Is there a way to transform these "unreferenced blocks" into "zero
> blocks" - either a posteriori or a priori - so that the HDS frame can
> reclaim them?

Out of curiosity, is there any filesystem which zeros blocks as they are freed up?

--
Andrew
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On Mon, 29 Dec 2008 19:17:27 +0000, Andrew Gabriel <agabriel at opensolaris.org> wrote:
> Out of curiosity, is there any filesystem which
> zeros blocks as they are freed up?

The native filesystem of the Fujitsu Siemens mainframe operating system "BS2000/OSD" does:

- if a file is deleted with the DELETE-FILE command using the DESTROY=*YES parameter;
- if the file being deleted has its DESTROY attribute set to *YES (even if the DESTROY parameter isn't used in the DELETE-FILE command).

I think the defragmentation tool (spaceopt) respects the DESTROY attribute as well. If my memory serves me well, the default value for the DESTROY file attribute can be set per volume set (= catalog = filesystem).

--
( Kees Nuyt ) c[_]
Cyril Payet wrote:
>> Is there a way to transform these "unreferenced blocks" into "zero
>> blocks" - either a posteriori or a priori - so that the HDS frame can
>> reclaim them?

There are some mainframe filesystems that do such things. I think there was also an STK array - Iceberg[?] - that had similar functionality.

However, why would you use ZFS on top of HDP? If the filesystem lets you grow dynamically, and the OS lets you add storage dynamically or grow the LUNs when the array does... what does HDP get you?

Serious question, as I get asked it all the time and I can't come up with a good answer outside of procedural things such as, "We don't like to bother the storage guys" or, "We thin provision everything no matter the app/fs/os" or <choose your own adventure>.
On Mon, Dec 29, 2008 at 6:09 PM, Torrey McMahon <tmcmahon2 at yahoo.com> wrote:
> Serious question, as I get asked it all the time and I can't come up with
> a good answer outside of procedural things such as, "We don't like to
> bother the storage guys" or, "We thin provision everything no matter the
> app/fs/os" or <choose your own adventure>.

Assign your database admin who swears he needs 2TB on day one a 2TB LUN. Six months from now, when he's really only using 200GB, you aren't wasting 1.8TB of disk on him. I see it on a weekly basis.

--Tim
On 12/29/2008 8:20 PM, Tim wrote:
> Assign your database admin who swears he needs 2TB on day one a 2TB LUN.
> Six months from now, when he's really only using 200GB, you aren't
> wasting 1.8TB of disk on him.

I run into the same thing, but once I say "I can add more space without downtime" they tend to smarten up.

Also, ZFS will not reuse blocks in, for lack of a better word, an economical fashion. If you give them a 2TB LUN, ZFS will allocate blocks all over the LUN even when they're only using a small fraction of it. Unless you have, as the original poster mentioned, an "empty block reclaim", you'll have problems. UFS can show the same results, btw.
Torrey McMahon wrote:
> Also, ZFS will not reuse blocks in, for lack of a better word, an
> economical fashion. If you give them a 2TB LUN, ZFS will allocate blocks
> all over the LUN even when they're only using a small fraction of it.

Absolutely agree.

Note: if you enable ZFS compression, zero-filled blocks will not exist as part of the data set :-)
 -- richard
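(To illustrate: the zfs commands below are standard, but the pool/dataset name tank/thin is made up and this obviously needs a live pool to run. With compression on, ZFS recognizes all-zero blocks and allocates no space for them, so a host-side zero-fill pass costs the array essentially nothing:)

```shell
# Hypothetical dataset 'tank/thin'; requires an existing ZFS pool.
zfs set compression=on tank/thin
# Write 100MB of zeros; with compression on, these become holes, not blocks.
dd if=/dev/zero of=/tank/thin/zeros bs=1M count=100
sync
# 'used' stays tiny despite the 100MB of zeros just written.
zfs get -H -o value used tank/thin
```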
On Mon, Dec 29, 2008 at 8:52 PM, Torrey McMahon <tmcmahon2 at yahoo.com> wrote:
> Unless you have, as the original poster mentioned, an "empty block
> reclaim", you'll have problems. UFS can show the same results, btw.

I'm not arguing anything towards his specific scenario. You said you couldn't imagine why anyone would ever want thin provisioning, so I told you why. Some admins do not have the luxury of debating with the other teams they work with about why they should do things differently ;) That says nothing of the change control needed just to get a LUN grown in some shops.

It's out there, it's being used, and it isn't a good fit for ZFS.

--Tim
On 12/29/2008 10:36 PM, Tim wrote:
> It's out there, it's being used, and it isn't a good fit for ZFS.

Right... I called those process issues. Perhaps "organizational issues" would have been better?