Hello zfs-discuss,

Is someone actually working on it? Or any other algorithms?
Any dates?

--
Best regards,
Robert
mailto:rmilkowski at task.gda.pl
http://milek.blogspot.com
On Thu, Aug 17, 2006 at 02:53:09PM +0200, Robert Milkowski wrote:
> Hello zfs-discuss,
>
> Is someone actually working on it? Or any other algorithms?
> Any dates?

Not that I know of.  Any volunteers? :-)

(Actually, I think that a RLE compression algorithm for metadata is a
higher priority, but if someone from the community wants to step up, we
won't turn your code away!)

--matt
On Thu, Aug 17, 2006 at 10:00:32AM -0700, Matthew Ahrens wrote:
> (Actually, I think that a RLE compression algorithm for metadata is a
> higher priority, but if someone from the community wants to step up, we
> won't turn your code away!)

Is RLE likely to be more efficient for metadata? Have you taken a stab
at estimating the comparative benefits?

Adam

--
Adam Leventhal, Solaris Kernel Development
http://blogs.sun.com/ahl
On Thu, Aug 17, 2006 at 10:28:10AM -0700, Adam Leventhal wrote:
> On Thu, Aug 17, 2006 at 10:00:32AM -0700, Matthew Ahrens wrote:
> > (Actually, I think that a RLE compression algorithm for metadata is a
> > higher priority, but if someone from the community wants to step up, we
> > won't turn your code away!)
>
> Is RLE likely to be more efficient for metadata?

No, it is not likely to achieve a higher compression ratio.  However, it
should use significantly less CPU time.  We've seen circumstances where
the CPU usage caused by compressing metadata is less trivial than we'd
like.

--matt
Matthew Ahrens wrote:
> On Thu, Aug 17, 2006 at 02:53:09PM +0200, Robert Milkowski wrote:
>> Hello zfs-discuss,
>>
>> Is someone actually working on it? Or any other algorithms?
>> Any dates?
>
> Not that I know of.  Any volunteers? :-)
>
> (Actually, I think that a RLE compression algorithm for metadata is a
> higher priority, but if someone from the community wants to step up, we
> won't turn your code away!)

maybe a stupid question: what do we use for compressing dump data on the
dump device?

Michael
On Sat, Aug 19, 2006 at 01:25:21PM +0200, michael schuster wrote:
> maybe a stupid question: what do we use for compressing dump data on the
> dump device?

We use a variant of Lempel-Ziv called lzjb (the jb is for Jeff Bonwick).
The algorithm was designed for very small code/memory footprint and to
be very fast (a single CPU can compress data at 150MB/s).  When running
a compression algorithm in panic context, you have a lot of constraints.

--Bill
Bill Moore wrote:
> On Sat, Aug 19, 2006 at 01:25:21PM +0200, michael schuster wrote:
>> maybe a stupid question: what do we use for compressing dump data on the
>> dump device?
>
> We use a variant of Lempel-Ziv called lzjb (the jb is for Jeff Bonwick).
> The algorithm was designed for very small code/memory footprint and to
> be very fast (single CPU can compress data at 150MB/s).  When running a
> compression algorithm in panic context, you have a lot of constraints.

would that help the original poster?

Michael
> Matthew Ahrens wrote:
> > On Thu, Aug 17, 2006 at 02:53:09PM +0200, Robert Milkowski wrote:
> >> Hello zfs-discuss,
> >>
> >> Is someone actually working on it? Or any other algorithms?
> >> Any dates?
> >
> > Not that I know of.  Any volunteers? :-)
> >
> > (Actually, I think that a RLE compression algorithm for metadata is a
> > higher priority, but if someone from the community wants to step up, we
> > won't turn your code away!)
>
> maybe a stupid question: what do we use for compressing dump data on the
> dump device?
>
> Michael

We use LZJB for that, too, which is Jeff's minimal Lempel-Ziv, found in
usr/src/uts/common/os/compress.c.  The major issue here is the small
history buffer, since it has to be declared on the stack for purposes of
the panic code path.

For use with in-kernel CTF, I ported zlib to a general-purpose kernel
misc module named "zmod" -- see usr/src/uts/common/zmod/.  However,
since we only needed decompression, I only included that part of the
library.

So if we want to work on more general-purpose, higher-quality
compression for the kernel, the first step should really be to add the
compress portion of zlib to zmod, and then we can interface that to
other subsystems that need it.  That said, zlib's CPU/time properties
may be unsuitable for some of these tasks: it tends to be on the slow
side for compression.

-Mike

--
Mike Shapiro, Solaris Kernel Development.
blogs.sun.com/mws/