On Mon, Oct 11, 2010 at 6:46 PM, Calvin Walton <calvin.walton@gmail.com> wrote:
> On Mon, 2010-10-11 at 03:30 +0300, Felipe Contreras wrote:
>> I use btrfs on most of my volumes on my laptop, and I've always felt
>> booting was very slow, but what is definitely slow is starting up
>> Google Chrome:
>>
>> encrypted ext4: ~20s
>> btrfs: ~2:11s
>>
>> I have tried different things to find out exactly what the issue is,
>> but haven't quite found it yet.
>
> If you've been using this volume for a while, it could just have become
> badly fragmented. You could try btrfs's fancy online defragmentation
> abilities to see if that'll give you an improvement:
>
> # btrfs filesystem defragment /mountpoint/of/volume
>
> Let us know if that helps, of course :)

I finally managed to track down this issue. Indeed the fragmentation
is horrible, and 'btrfs filesystem defragment' doesn't help:

% cat History-old > History
% btrfs filesystem defragment /home
% echo 3 > /proc/sys/vm/drop_caches

% time dd if=History of=/dev/null && time dd if=History-old of=/dev/null
109664+0 records in
109664+0 records out
56147968 bytes (56 MB) copied, 1.90015 s, 29.5 MB/s
dd if=History of=/dev/null 0.08s user 0.29s system 15% cpu 2.458 total
109664+0 records in
109664+0 records out
56147968 bytes (56 MB) copied, 97.772 s, 574 kB/s
dd if=History-old of=/dev/null 0.07s user 0.80s system 0% cpu 1:37.79 total

I think this is a serious issue that *must* be fixed for 1.0. I filed
a bug for this:
https://bugzilla.kernel.org/show_bug.cgi?id=21562

--
Felipe Contreras
On Sun, Oct 31, 2010 at 1:58 PM, Felipe Contreras <felipe.contreras@gmail.com> wrote:
> On Mon, Oct 11, 2010 at 6:46 PM, Calvin Walton <calvin.walton@gmail.com> wrote:
>> On Mon, 2010-10-11 at 03:30 +0300, Felipe Contreras wrote:
>>> I use btrfs on most of my volumes on my laptop, and I've always felt
>>> booting was very slow, but what is definitely slow is starting up
>>> Google Chrome:
>>>
>>> encrypted ext4: ~20s
>>> btrfs: ~2:11s
>>>
>>> I have tried different things to find out exactly what the issue is,
>>> but haven't quite found it yet.
>>
>> If you've been using this volume for a while, it could just have become
>> badly fragmented. You could try btrfs's fancy online defragmentation
>> abilities to see if that'll give you an improvement:
>>
>> # btrfs filesystem defragment /mountpoint/of/volume
>>
>> Let us know if that helps, of course :)
>
> I finally managed to track down this issue. Indeed the fragmentation
> is horrible, and 'btrfs filesystem defragment' doesn't help:
>
> % cat History-old > History
> % btrfs filesystem defragment /home
> % echo 3 > /proc/sys/vm/drop_caches
>
> % time dd if=History of=/dev/null && time dd if=History-old of=/dev/null
> 109664+0 records in
> 109664+0 records out
> 56147968 bytes (56 MB) copied, 1.90015 s, 29.5 MB/s
> dd if=History of=/dev/null 0.08s user 0.29s system 15% cpu 2.458 total
> 109664+0 records in
> 109664+0 records out
> 56147968 bytes (56 MB) copied, 97.772 s, 574 kB/s
> dd if=History-old of=/dev/null 0.07s user 0.80s system 0% cpu 1:37.79 total
>
> I think this is a serious issue that *must* be fixed for 1.0. I filed
> a bug for this:
> https://bugzilla.kernel.org/show_bug.cgi?id=21562

btrfs fi defrag isn't recursive. "btrfs filesystem defrag /home" will
defragment the space used to store the folder, without touching the
space used to store files in that folder.
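Since the command only touches the paths it is given, defragmenting every file on
a volume at this point means walking the tree and handing each file to it. A
minimal sketch of that, assuming GNU find and the btrfs-progs of the time (the
/home mount point is the one from the thread); note that, per the COW discussion
later in the thread, defragmenting a file that shares blocks with a snapshot or
reflinked copy will unshare and duplicate its data:

  # Defragment every regular file on the /home volume, one file at a time.
  # -xdev keeps find from descending into other mounted filesystems.
  find /home -xdev -type f -exec btrfs filesystem defragment {} \;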
Felipe Contreras
2010-Oct-31 22:36 UTC
Re: Horrible btrfs performance due to fragmentation
On Mon, Nov 1, 2010 at 12:25 AM, cwillu <cwillu@cwillu.com> wrote:
> On Sun, Oct 31, 2010 at 1:58 PM, Felipe Contreras
> <felipe.contreras@gmail.com> wrote:
>> On Mon, Oct 11, 2010 at 6:46 PM, Calvin Walton <calvin.walton@gmail.com> wrote:
>>> On Mon, 2010-10-11 at 03:30 +0300, Felipe Contreras wrote:
>>>> I use btrfs on most of my volumes on my laptop, and I've always felt
>>>> booting was very slow, but what is definitely slow is starting up
>>>> Google Chrome:
>>>>
>>>> encrypted ext4: ~20s
>>>> btrfs: ~2:11s
>>>>
>>>> I have tried different things to find out exactly what the issue is,
>>>> but haven't quite found it yet.
>>>
>>> If you've been using this volume for a while, it could just have become
>>> badly fragmented. You could try btrfs's fancy online defragmentation
>>> abilities to see if that'll give you an improvement:
>>>
>>> # btrfs filesystem defragment /mountpoint/of/volume
>>>
>>> Let us know if that helps, of course :)
>>
>> I finally managed to track down this issue. Indeed the fragmentation
>> is horrible, and 'btrfs filesystem defragment' doesn't help:
>>
>> % cat History-old > History
>> % btrfs filesystem defragment /home
>> % echo 3 > /proc/sys/vm/drop_caches
>>
>> % time dd if=History of=/dev/null && time dd if=History-old of=/dev/null
>> 109664+0 records in
>> 109664+0 records out
>> 56147968 bytes (56 MB) copied, 1.90015 s, 29.5 MB/s
>> dd if=History of=/dev/null 0.08s user 0.29s system 15% cpu 2.458 total
>> 109664+0 records in
>> 109664+0 records out
>> 56147968 bytes (56 MB) copied, 97.772 s, 574 kB/s
>> dd if=History-old of=/dev/null 0.07s user 0.80s system 0% cpu 1:37.79 total
>>
>> I think this is a serious issue that *must* be fixed for 1.0. I filed
>> a bug for this:
>> https://bugzilla.kernel.org/show_bug.cgi?id=21562
>
> btrfs fi defrag isn't recursive. "btrfs filesystem defrag /home" will
> defragment the space used to store the folder, without touching the
> space used to store files in that folder.

Yes, that came up on the IRC, but:

1) It doesn't make sense: "btrfs filesystem" doesn't take a filesystem
as its argument? Why would anyone want it to be _non_ recursive?

2) The filesystem should not degrade performance so horribly no matter
how long it has been used. Even git has automatic garbage collection.

--
Felipe Contreras
On Mon, Nov 01, 2010 at 12:36:58AM +0200, Felipe Contreras wrote:
> On Mon, Nov 1, 2010 at 12:25 AM, cwillu <cwillu@cwillu.com> wrote:
> > btrfs fi defrag isn't recursive. "btrfs filesystem defrag /home" will
> > defragment the space used to store the folder, without touching the
> > space used to store files in that folder.
>
> Yes, that came up on the IRC, but:
>
> 1) It doesn't make sense: "btrfs filesystem" doesn't take a filesystem
> as its argument? Why would anyone want it to be _non_ recursive?

You missed the subsequent discussion on IRC about the interaction
of COW with defrag. Essentially, if you've got two files that are COW
copies of each other, and one has had something written to it since,
it's *impossible* for both files to be defragmented without making a
full copy of both.

Start with a file (A, etc. are data blocks on the disk):

file1 = ABCDEF

COW copy it:

file1 = ABCDEF
file2 = ABCDEF

Now write to one of them:

file1 = ABCDEF
file2 = ABCDxF

So, either file1 is contiguous and file2 is fragmented (with the
block x somewhere else on disk), or file2 is contiguous and file1 is
fragmented (with E somewhere else on disk). In fact, we've determined
by experiment that when you defrag a file that's sharing blocks with
another one, the file gets copied in its entirety, thus separating the
blocks of the file and its COW duplicate.

> 2) The filesystem should not degrade performance so horribly no matter
> how long it has been used. Even git has automatic garbage
> collection.

Since, I believe, btrfs uses COW very heavily internally for
ensuring consistency, you can end up with fragmenting files and
directories very easily. You probably need some kind of scrubber that
goes looking for non-COW files that are fragmented, and defrags them
in the background.

   Hugo.

--
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 515C238D from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
  --- "No! My collection of rare, incurable diseases! Violated!" ---
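The block-sharing situation described above can be reproduced by hand. This is
only a sketch, assuming a coreutils recent enough to support cp --reflink on
btrfs and the filefrag tool from e2fsprogs, which should report each file's
extent layout through the FIEMAP ioctl; file1 and file2 are just the placeholder
names from the example:

  # Make a COW copy: file2 initially shares all of file1's extents.
  cp --reflink=always file1 file2

  # Overwrite one 4 KiB block of file2 in place; only that block gets a
  # newly allocated extent, the rest stays shared with file1.
  dd if=/dev/zero of=file2 bs=4096 seek=1 count=1 conv=notrunc

  # Compare how many extents each file now has.
  filefrag file1 file2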
Felipe Contreras
2010-Nov-01 15:58 UTC
Re: Horrible btrfs performance due to fragmentation
On Mon, Nov 1, 2010 at 12:47 AM, Hugo Mills <hugo-lkml@carfax.org.uk> wrote:
> On Mon, Nov 01, 2010 at 12:36:58AM +0200, Felipe Contreras wrote:
>> On Mon, Nov 1, 2010 at 12:25 AM, cwillu <cwillu@cwillu.com> wrote:
>> > btrfs fi defrag isn't recursive. "btrfs filesystem defrag /home" will
>> > defragment the space used to store the folder, without touching the
>> > space used to store files in that folder.
>>
>> Yes, that came up on the IRC, but:
>>
>> 1) It doesn't make sense: "btrfs filesystem" doesn't take a filesystem
>> as its argument? Why would anyone want it to be _non_ recursive?
>
> You missed the subsequent discussion on IRC about the interaction
> of COW with defrag. Essentially, if you've got two files that are COW
> copies of each other, and one has had something written to it since,
> it's *impossible* for both files to be defragmented without making a
> full copy of both.
>
> Start with a file (A, etc. are data blocks on the disk):
>
> file1 = ABCDEF
>
> COW copy it:
>
> file1 = ABCDEF
> file2 = ABCDEF
>
> Now write to one of them:
>
> file1 = ABCDEF
> file2 = ABCDxF
>
> So, either file1 is contiguous and file2 is fragmented (with the
> block x somewhere else on disk), or file2 is contiguous and file1 is
> fragmented (with E somewhere else on disk). In fact, we've determined
> by experiment that when you defrag a file that's sharing blocks with
> another one, the file gets copied in its entirety, thus separating the
> blocks of the file and its COW duplicate.

Ok, but the fragmentation would not be an issue in this case.

>> 2) The filesystem should not degrade performance so horribly no matter
>> how long it has been used. Even git has automatic garbage
>> collection.
>
> Since, I believe, btrfs uses COW very heavily internally for
> ensuring consistency, you can end up with fragmenting files and
> directories very easily. You probably need some kind of scrubber that
> goes looking for non-COW files that are fragmented, and defrags them
> in the background.

Or, when going through all the fragments of a file, have a counter, and
if it exceeds a certain limit mark it somehow, so that it gets
defragmented at least to a certain extent.

--
Felipe Contreras
On Mon, Nov 1, 2010 at 11:58 AM, Felipe Contreras <felipe.contreras@gmail.com> wrote:
> Or, when going through all the fragments of a file, have a counter, and
> if it exceeds a certain limit mark it somehow, so that it gets
> defragmented at least to a certain extent.

That's elegant: resources are only spent on defragmenting files which
are actually in use and which actually need it... but I don't see how
it deals with the partial mutual exclusivity of defragmenting and
COWed files.
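In the absence of such a kernel-side counter, the scrubber Hugo suggests can be
approximated from userspace. The following is only an illustration under several
assumptions: the MOUNT and LIMIT values are made up, it relies on filefrag's
"N extents found" output format, it is naive about unusual filenames, and it
ignores the COW-sharing problem just discussed (defragmenting a shared file will
unshare its blocks):

  #!/bin/sh
  # Naive scrubber: find heavily fragmented files on a btrfs mount and
  # defragment them one by one.
  MOUNT=/home   # volume to scan (placeholder)
  LIMIT=50      # extent count above which a file counts as fragmented (placeholder)

  find "$MOUNT" -xdev -type f | while read -r f; do
      # filefrag prints e.g. "path: 123 extents found"; grab the number
      extents=$(filefrag "$f" | awk '{ print $(NF-2) }')
      [ "$extents" -gt "$LIMIT" ] 2>/dev/null || continue
      echo "defragmenting $f ($extents extents)"
      btrfs filesystem defragment "$f"
  done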
[ resend, sorry if anyone sees this twice ]

On Sun, Oct 31, 2010 at 09:58:18PM +0200, Felipe Contreras wrote:
> On Mon, Oct 11, 2010 at 6:46 PM, Calvin Walton <calvin.walton@gmail.com> wrote:
> > On Mon, 2010-10-11 at 03:30 +0300, Felipe Contreras wrote:
> >> I use btrfs on most of my volumes on my laptop, and I've always felt
> >> booting was very slow, but what is definitely slow is starting up
> >> Google Chrome:
> >>
> >> encrypted ext4: ~20s
> >> btrfs: ~2:11s
> >>
> >> I have tried different things to find out exactly what the issue is,
> >> but haven't quite found it yet.
> >
> > If you've been using this volume for a while, it could just have become
> > badly fragmented. You could try btrfs's fancy online defragmentation
> > abilities to see if that'll give you an improvement:
> >
> > # btrfs filesystem defragment /mountpoint/of/volume
> >
> > Let us know if that helps, of course :)
>
> I finally managed to track down this issue. Indeed the fragmentation
> is horrible, and 'btrfs filesystem defragment' doesn't help:

So there are two different issues, and you'll need sysrq-w to see which
one you're really hitting.

The first is fragmentation, where you'll want to defrag a given file with:

btrfs filesystem defrag <filename>

The second is that when we first mount a filesystem, we have to spend a
long time building up the index of which blocks are free. Josef has
fixed this for 2.6.37-rc1, and Linus' current tree has his changes. You
can also pull from the btrfs-unstable master branch and get his changes
against 2.6.36.

To use the new code, mount -o space_cache. You only need to do this
once. The first mount will be slow, but once we've read in the block
groups for the first mount, later mounts will be much faster.

-chris
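For completeness, enabling the free-space cache might look roughly like this; the
device and mount point are placeholders, and the option only needs to be given
once since the cache persists across later mounts:

  # First mount with the new free-space cache: slow this one time,
  # faster on subsequent mounts.
  mount -o space_cache /dev/sda2 /home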