I have read on the mailing list that there has been some interest in
implementing transparent compression on btrfs, and I too am thinking about
trying to implement it. Before I start from scratch, I am wondering if
anyone else has started to work on this and, if so, how far along they have
gotten. I would be happy to work on this alone or with someone else, but
currently I am doing some preliminary research.

Thanks,
Lee
On Mon, 2008-10-27 at 10:54 -0400, Lee Trager wrote:
> I have read on the mailing list that there has been some interest in
> implementing transparent compression on btrfs and I too am thinking about
> trying to implement it. Before I start from scratch I am wondering if
> anyone else has started to work on this and if so how far along have
> they gotten? I would be happy to work on this alone or with someone else
> but currently I am doing some preliminary research.

Compression is working on my machine; I'm just running some long tests
before I push it out to the unstable repo. The current code uses the
in-kernel zlib implementation.

-chris
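[For readers unfamiliar with what the zlib step involves: the btrfs code
calls the kernel's own zlib (linux/zlib.h), but the same round trip can be
sketched in userspace with ordinary zlib. This is only an illustration, not
the btrfs code; the page-sized buffer and compression level 3 are
assumptions made for the example.]

    /* Userspace sketch of the compress/decompress round trip a
     * transparently compressing filesystem performs on file data.
     * Not the btrfs kernel path. Build with: cc roundtrip.c -lz */
    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    int main(void)
    {
        unsigned char in[4096];          /* one page of file data */
        unsigned char packed[8192];      /* worst case slightly > input */
        unsigned char out[4096];
        uLongf packed_len = sizeof(packed);
        uLongf out_len = sizeof(out);

        memset(in, 0, sizeof(in));       /* highly compressible test data */

        /* Level 3 trades ratio for speed (an assumption here). */
        if (compress2(packed, &packed_len, in, sizeof(in), 3) != Z_OK)
            return 1;
        if (uncompress(out, &out_len, packed, packed_len) != Z_OK)
            return 1;

        /* Transparent compression must hand back byte-identical data. */
        printf("4096 -> %lu bytes, round trip %s\n",
               (unsigned long)packed_len,
               (out_len == sizeof(in) && memcmp(in, out, out_len) == 0)
                   ? "ok" : "FAILED");
        return 0;
    }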
On Tue, Oct 28, 2008 at 11:47:27AM -0400, Chris Mason wrote:
> Compression is working on my machine, I'm just running some long tests
> before I push it out to the unstable repo. The current code uses the
> in-kernel zlib implementation.

That's great; I am eager to try it. How long will these tests take? I
would love to look at the code. Is compression done for every file by
default, or does a user-space program have to set a compression flag on
the file?

Thanks,
Lee
On Tue, 2008-10-28 at 12:33 -0400, Lee Trager wrote:
> That's great; I am eager to try it. How long will these tests take? I
> would love to look at the code. Is compression done for every file by
> default, or does a user-space program have to set a compression flag on
> the file?

This is a fairly large change; I plan on running it overnight.

Compression is optional and off by default (mount -o compress to enable
it). When enabled, every file is compressed.

-chris
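[For anyone scripting this: filesystem-specific options like "compress"
can also be passed programmatically through the data argument of the
mount(2) system call. A minimal sketch, assuming a hypothetical device
/dev/sdb and mount point /mnt/btrfs; it is equivalent to
"mount -t btrfs -o compress /dev/sdb /mnt/btrfs" and needs root.]

    /* Equivalent of: mount -t btrfs -o compress /dev/sdb /mnt/btrfs
     * Device and mount point are hypothetical placeholders. */
    #include <stdio.h>
    #include <sys/mount.h>

    int main(void)
    {
        /* Filesystem-specific options such as "compress" travel in
         * the last (data) argument of mount(2). */
        if (mount("/dev/sdb", "/mnt/btrfs", "btrfs", 0, "compress") != 0) {
            perror("mount");
            return 1;
        }
        return 0;
    }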
> Compression is optional and off by default (mount -o compress to enable
> it). When enabled, every file is compressed.

Compression is attempted as files are written when the mount option is
enabled, right? There isn't a background scrubber that tries to compress
files which are already written?

- z
On Tue, 2008-10-28 at 10:40 -0700, Zach Brown wrote:
> Compression is attempted as files are written when the mount option is
> enabled, right?

Yes, and if the compression doesn't make a given set of pages smaller, it
quickly backs off and goes back to writing it straight through.

> There isn't a background scrubber that tries to compress files which are
> already written?

No, but if you mount with compression on and use the single-file defrag
ioctl (btrfsctl -d some_file), it'll compress it.

-chris
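[To make the back-off concrete, here is a userspace sketch of that
decision, using ordinary zlib rather than the in-kernel implementation
btrfs actually calls; the 4K chunk size and level 3 are assumptions for
illustration. The filesystem stores the compressed bytes only when they
are strictly smaller, and writes the raw pages otherwise.]

    /* Sketch of "compress, but back off if it doesn't help".
     * Userspace zlib stands in for the in-kernel implementation.
     * Build with: cc backoff.c -lz */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <zlib.h>

    /* Return 1 if 'packed' should be stored (it is smaller), 0 to
     * fall back to writing the raw chunk straight through. */
    static int worth_compressing(const unsigned char *chunk, uLong len,
                                 unsigned char *packed, uLongf *packed_len)
    {
        if (compress2(packed, packed_len, chunk, len, 3) != Z_OK)
            return 0;                 /* error: write uncompressed */
        return *packed_len < len;     /* only keep a real win */
    }

    int main(void)
    {
        unsigned char text[4096], noise[4096], packed[8192];
        uLongf plen;
        size_t i;

        memset(text, 'a', sizeof(text));       /* compressible */
        for (i = 0; i < sizeof(noise); i++)    /* incompressible-ish */
            noise[i] = (unsigned char)(rand() >> 7);

        plen = sizeof(packed);
        printf("text:  %s\n",
               worth_compressing(text, sizeof(text), packed, &plen)
                   ? "store compressed" : "back off");
        plen = sizeof(packed);
        printf("noise: %s\n",
               worth_compressing(noise, sizeof(noise), packed, &plen)
                   ? "store compressed" : "back off");
        return 0;
    }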
On Wed, 2008-10-29 at 12:14 -0600, Anthony Roberts wrote:
> Hi, I have a few questions about this:
>
> Do you know what the CPU load is like with this enabled?

Now that I've finally pushed the code out, you can try it ;) One part of
the implementation I need to revisit: the place in the code where I do
the compression means that most of the time the single-threaded pdflush
is the one compressing. This doesn't spread the load very well across
the CPUs. It can be fixed, but I wanted to get the code out there.

The decompression does spread across CPUs, and I've gotten about 800MB/s
doing decompression and checksumming on a zero-filled compressed file. At
the time, the disk was reading 14MB/s.

> Do you know whether data can be compressed at a sufficient rate to still
> saturate the disk on recent-ish AMD/Intel CPUs?

My recentish Intel CPU can compress and checksum at about 120MB/s.

> If no, is the effective pre-compression I/O rate still comparable to the
> disk without compression?

It depends on your disks...

> I'm pretty sure that won't even matter in many cases (eg you're seeking
> too much to care, or you're on a VM with lots of cores but congested
> disks, or you're dealing with media files that it doesn't bother
> compressing, etc), but I'm curious what sort of overhead this adds. :)
>
> Mostly it seems like a good tradeoff, it trades plentiful cores for scarce
> disk resources.

This varies quite a bit from workload to workload; in some places it'll
make a big difference, but many workloads are seek bound rather than
bandwidth bound.

-chris
On Thu, 30 Oct 2008 7:08:42 am Chris Mason wrote:
> The decompression does spread across cpus, and I've gotten about 800MB/s
> doing decompress and checksumming on a zero filled compressed file. At
> the time, the disk was reading 14MB/s.

FWIW, I've got a pretty ugly patch to Bonnie++ that makes it use data
from /dev/urandom for writes rather than just blocks of zeros, which
give, um, optimistic values for throughput on filesystems that do
compression. Still not particularly realistic in terms of an actual
workload, but maybe just a tad less unrealistic. :-)

Caveat emptor - I've not tried this since I sent it to Russell Coker in
January '07.

cheers,
Chris

--
Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC

This email may come with a PGP signature as a file. Do not panic.
For more info see: http://en.wikipedia.org/wiki/OpenPGP
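[The effect being described is easy to see directly. This sketch, which is
plain userspace zlib with an arbitrary 1MB buffer and nothing from the
Bonnie++ patch itself, compresses a zero-filled buffer and a
/dev/urandom-filled buffer and prints the ratios; the zero-filled case
shrinks by orders of magnitude, while the random case does not shrink at
all, which is why zero-filled benchmark data flatters a compressing
filesystem.]

    /* Compare zlib's ratio on zeros vs. /dev/urandom data.
     * Build with: cc ratios.c -lz */
    #include <stdio.h>
    #include <stdlib.h>
    #include <zlib.h>

    #define BUF (1024 * 1024)

    static void report(const char *name, const unsigned char *in)
    {
        unsigned char *packed = malloc(compressBound(BUF));
        uLongf plen = compressBound(BUF);

        if (packed && compress2(packed, &plen, in, BUF, 3) == Z_OK)
            printf("%-8s %7lu -> %7lu bytes (%.1f%%)\n", name,
                   (unsigned long)BUF, (unsigned long)plen,
                   100.0 * plen / BUF);
        free(packed);
    }

    int main(void)
    {
        static unsigned char zeros[BUF], random_data[BUF];
        FILE *ur = fopen("/dev/urandom", "rb");

        if (!ur || fread(random_data, 1, BUF, ur) != BUF)
            return 1;
        fclose(ur);

        report("zeros", zeros);          /* shrinks to almost nothing */
        report("urandom", random_data);  /* slightly grows: incompressible */
        return 0;
    }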