Tomas Ă–gren
2010-Feb-21 16:24 UTC
[zfs-discuss] Observations about compressibility of metadata L2ARC
Hello.

I got an idea.. How about creating a ramdisk, making a pool out of it,
then making compressed zvols and adding those as L2ARC.. Instant
compressed ARC ;)

So I did some tests with secondarycache=metadata...

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
ftp         5.07T  1.78T    198     17  11.3M  1.51M
  raidz2    1.72T   571G     58      5  3.78M   514K
  ...
  raidz2    1.64T   656G     75      6  3.78M   524K
  ...
  raidz2    1.70T   592G     64      5  3.74M   512K
  ...
cache           -      -      -      -      -      -
  /dev/zvol/dsk/ramcache/ramvol   84.4M  7.62M      4     17  45.4K   233K
  /dev/zvol/dsk/ramcache/ramvol2  84.3M  7.71M      4     17  41.5K   233K
  /dev/zvol/dsk/ramcache/ramvol3    84M     8M      4     18  42.0K   236K
  /dev/zvol/dsk/ramcache/ramvol4  84.8M  7.25M      3     17  39.1K   225K
  /dev/zvol/dsk/ramcache/ramvol5  84.9M  7.08M      3     14  38.0K   193K

NAME              RATIO  COMPRESS
ramcache/ramvol   1.00x       off
ramcache/ramvol2  4.27x      lzjb
ramcache/ramvol3  6.12x    gzip-1
ramcache/ramvol4  6.77x      gzip
ramcache/ramvol5  6.82x    gzip-9

This was after 'find /ftp' had been running for about 1h, along with all
the background noise of the machine's regular NFS serving tasks.

I took an image of the uncompressed one (ramvol) and ran that through
regular gzip and got 12-14x compression, probably due to the smaller
block size (default 8k) of the zvols.. So I tried with both 8k and 64k..

After running for a shorter time (but with the cache devices at least
filled), I got:

NAME              RATIO   COMPRESS  VOLBLOCK
ramcache/ramvol    1.00x       off        8K
ramcache/ramvol2   5.57x      lzjb        8K
ramcache/ramvol3   7.56x      lzjb       64K
ramcache/ramvol4   7.35x    gzip-1        8K
ramcache/ramvol5  11.68x    gzip-1       64K

Not sure how to measure the CPU usage of the various compression levels
for (de)compressing this data.. It does show that keeping metadata in
RAM compressed could be a big win, though, if you have CPU cycles to
spare..

Thoughts?

/Tomas
--
Tomas Ögren, stric at acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se - 070-5858487
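
For reference, the setup described above boils down to roughly the
following commands on OpenSolaris; the ramdisk size, zvol sizes and
names below are illustrative, not the exact ones used in the test:

  # create a ramdisk and build a pool on top of it
  ramdiskadm -a ramcache 512m
  zpool create ramcache /dev/ramdisk/ramcache

  # carve out zvols with different compression settings / block sizes
  zfs create -V 90m -o compression=off                          ramcache/ramvol
  zfs create -V 90m -o compression=lzjb                         ramcache/ramvol2
  zfs create -V 90m -o compression=gzip-1 -o volblocksize=64k   ramcache/ramvol3

  # add the zvols as L2ARC cache devices of the main pool
  zpool add ftp cache /dev/zvol/dsk/ramcache/ramvol \
                      /dev/zvol/dsk/ramcache/ramvol2 \
                      /dev/zvol/dsk/ramcache/ramvol3

  # cache only metadata (not file data) in the L2ARC
  zfs set secondarycache=metadata ftp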
Andrey Kuzmin
2010-Feb-21 17:40 UTC
[zfs-discuss] Observations about compressibility of metadata L2ARC
I don't see why this couldn't be extended beyond metadata (+1 for the idea): if a zvol is compressed, the ARC/L2ARC could store the compressed data. The gain is apparent: a user who has compression enabled for a volume expects its data to compress at a good ratio, which would yield a significant reduction in ARC memory footprint and a corresponding boost in usable L2ARC capacity.

Regards,
Andrey

On Sun, Feb 21, 2010 at 7:24 PM, Tomas Ögren <stric at acc.umu.se> wrote:

> Hello.
>
> I got an idea.. How about creating a ramdisk, making a pool out of it,
> then making compressed zvols and adding those as L2ARC.. Instant
> compressed ARC ;)
>
> [...]
>
> Not sure how to measure the CPU usage of the various compression levels
> for (de)compressing this data.. It does show that keeping metadata in
> RAM compressed could be a big win, though, if you have CPU cycles to
> spare..
>
> Thoughts?
>
> /Tomas
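
As a rough illustration of the gain being argued for (the dataset name
is hypothetical, and the ratio is just an example in line with the lzjb
numbers above):

  # how compressible is the volume's existing data?
  $ zfs get -H -o value compressratio tank/vol
  4.27x
  # if the ARC/L2ARC kept that data compressed, as proposed, the same
  # amount of cache could hold roughly 4x as much of the volume's
  # logical data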
Henrik Johansson
2010-Mar-02 00:08 UTC
[zfs-discuss] Observations about compressibility of metadata L2ARC
On Feb 21, 2010, at 6:40 PM, Andrey Kuzmin wrote:

> I don't see why this couldn't be extended beyond metadata (+1 for the
> idea): if a zvol is compressed, the ARC/L2ARC could store the
> compressed data. The gain is apparent: a user who has compression
> enabled for a volume expects its data to compress at a good ratio,
> which would yield a significant reduction in ARC memory footprint and
> a corresponding boost in usable L2ARC capacity.

I think something similar was discussed by Jeff and Bill in the ZFS keynote at KCA: just-in-time decompression, i.e. keeping prefetched data in memory without decompressing it. I'd guess you would want the data decompressed if it's going to be used, at least frequently. They also discussed that unused data in the ARC might be compressed in the future.

Henrik
http://sparcv9.blogspot.com