Martin Steigerwald
2012-May-04 16:35 UTC
balancing metadata fails with no space left on device
Hi!

merkaba:~> btrfs balance start -m /
ERROR: error during balancing '/' - No space left on device
There may be more info in syslog - try dmesg | tail
merkaba:~#19> dmesg | tail -22
[ 62.918734] CPU0: Package power limit normal
[ 525.229976] btrfs: relocating block group 20422066176 flags 1
[ 526.940452] btrfs: found 3048 extents
[ 528.803778] btrfs: found 3048 extents
[ 528.988440] btrfs: relocating block group 17746100224 flags 34
[ 529.116424] btrfs: found 1 extents
[ 529.247866] btrfs: relocating block group 17611882496 flags 36
[ 536.003596] btrfs: found 14716 extents
[ 536.170073] btrfs: relocating block group 17477664768 flags 36
[ 542.230713] btrfs: found 13170 extents
[ 542.353089] btrfs: relocating block group 17343447040 flags 36
[ 547.446369] btrfs: found 9809 extents
[ 547.663141] btrfs: 1 enospc errors during balance
[ 629.238168] btrfs: relocating block group 21894266880 flags 34
[ 629.359284] btrfs: found 1 extents
[ 629.520614] btrfs: 1 enospc errors during balance
[ 630.715766] btrfs: relocating block group 21927821312 flags 34
[ 630.749973] btrfs: found 1 extents
[ 630.899621] btrfs: 1 enospc errors during balance
[ 635.872857] btrfs: relocating block group 21961375744 flags 34
[ 635.906517] btrfs: found 1 extents
[ 636.038096] btrfs: 1 enospc errors during balance

merkaba:~> btrfs filesystem show
failed to read /dev/sr0
Label: 'debian' uuid: […]
        Total devices 1 FS bytes used 7.89GB
        devid 1 size 18.62GB used 17.58GB path /dev/dm-0

Btrfs Btrfs v0.19
merkaba:~> btrfs filesystem df /
Data: total=15.52GB, used=7.31GB
System, DUP: total=32.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=1.00GB, used=587.83MB

This is repeatable.

martin@merkaba:~> cat /proc/version
Linux version 3.3.0-trunk-amd64 (Debian 3.3.4-1~experimental.1) (debian-kernel AT lists.debian.org) (gcc version 4.6.3 (Debian 4.6.3-1) ) #1 SMP Wed May 2 06:54:24 UTC 2012

This is Debian's variant of 3.3.4 and includes the following commit from 3.3.3:

commit bfe050c8857bbc0cd6832c8bf978422573c439f5
Author: Chris Mason <chris.mason AT oracle.com>
Date:   Thu Apr 12 13:46:48 2012 -0400

    Revert "Btrfs: increase the global block reserve estimates"

    commit 8e62c2de6e23e5c1fee04f59de51b54cc2868ca5 upstream.

    This reverts commit 5500cdbe14d7435e04f66ff3cfb8ecd8b8e44ebf.

    We've had a number of complaints of early enospc that bisect down
    to this patch. We'll have to fix the reservations differently.

    Signed-off-by: Chris Mason <chris.mason AT oracle.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh AT linuxfoundation.org>

Do I need to wait for a proper fix of the global block reserve for the balance to succeed, or am I seeing a different issue?

Since scrubbing still works, I take it that the balance was aborted gracefully and the filesystem is still intact. This is on a ThinkPad T520 with an Intel SSD 320. I only wanted to reorder the metadata trees; I do not think it makes much sense to relocate data blocks on an SSD. Maybe reordering metadata blocks does not make much sense either, but I thought I would report it anyway.

Thanks,
--
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7
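A note on reading the dmesg lines above: the "flags" value is the block group type bitmask. A minimal sketch for decoding it in the shell, assuming the flag bits from the btrfs on-disk format (DATA=0x1, SYSTEM=0x2, METADATA=0x4, RAID0=0x8, RAID1=0x10, DUP=0x20); the values fed in are the ones from this log:

for f in 1 34 36; do
  printf '%3d:' "$f"
  (( f & 0x01 )) && printf ' DATA'
  (( f & 0x02 )) && printf ' SYSTEM'
  (( f & 0x04 )) && printf ' METADATA'
  (( f & 0x08 )) && printf ' RAID0'
  (( f & 0x10 )) && printf ' RAID1'
  (( f & 0x20 )) && printf ' DUP'
  printf '\n'
done

So flags 1 is a plain data block group, 34 is system+DUP and 36 is metadata+DUP, which matches a metadata balance that, as the log shows, also touches the small system block groups.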
Martin Steigerwald
2012-May-04 16:52 UTC
Re: balancing metadata fails with no space left on device
On Friday 4 May 2012 Martin Steigerwald wrote:
> Hi!
>
> merkaba:~> btrfs balance start -m /
> ERROR: error during balancing '/' - No space left on device
> There may be more info in syslog - try dmesg | tail
> merkaba:~#19> dmesg | tail -22
> [ 62.918734] CPU0: Package power limit normal
> [ 525.229976] btrfs: relocating block group 20422066176 flags 1
> [ 526.940452] btrfs: found 3048 extents
[…]
> [ 635.906517] btrfs: found 1 extents
> [ 636.038096] btrfs: 1 enospc errors during balance
>
> merkaba:~> btrfs filesystem show
> failed to read /dev/sr0
> Label: 'debian' uuid: […]
> Total devices 1 FS bytes used 7.89GB
> devid 1 size 18.62GB used 17.58GB path /dev/dm-0
>
> Btrfs Btrfs v0.19
> merkaba:~> btrfs filesystem df /
> Data: total=15.52GB, used=7.31GB
> System, DUP: total=32.00MB, used=4.00KB
> System: total=4.00MB, used=0.00
> Metadata, DUP: total=1.00GB, used=587.83MB

I thought the data tree might have been too big, so out of curiosity I tried a full balance. It shrunk the data tree, but it failed as well:

merkaba:~> btrfs balance start /
ERROR: error during balancing '/' - No space left on device
There may be more info in syslog - try dmesg | tail
merkaba:~#19> dmesg | tail -63
[ 89.306718] postgres (2876): /proc/2876/oom_adj is deprecated, please use /proc/2876/oom_score_adj instead.
[ 159.939728] btrfs: relocating block group 21994930176 flags 34
[ 160.010427] btrfs: relocating block group 21860712448 flags 1
[ 161.188104] btrfs: found 6 extents
[ 161.507388] btrfs: found 6 extents
[ 161.692596] btrfs: relocating block group 21592276992 flags 1
[ 162.804544] btrfs: found 1930 extents
[ 164.615038] btrfs: found 1930 extents
[ 164.836342] btrfs: relocating block group 21323841536 flags 1
[ 165.261189] btrfs: found 714 extents
[ 166.405800] btrfs: found 714 extents
[ 166.599482] btrfs: relocating block group 21055406080 flags 1
[ 167.554796] btrfs: found 1933 extents
[ 168.984707] btrfs: found 1933 extents
[ 169.169526] btrfs: relocating block group 20786970624 flags 1
[ 170.829402] btrfs: found 2602 extents
[ 172.817614] btrfs: found 2602 extents
[ 173.020840] btrfs: relocating block group 19885195264 flags 1
[ 177.102572] btrfs: found 5924 extents
[ 179.853234] btrfs: found 5924 extents
[ 180.124753] btrfs: relocating block group 18828230656 flags 1
[ 185.524803] btrfs: found 15255 extents
[ 190.716666] btrfs: found 15255 extents
[ 190.968648] btrfs: relocating block group 17754488832 flags 1
[ 194.653684] btrfs: found 4975 extents
[ 197.213332] btrfs: found 4975 extents
[ 197.427145] btrfs: relocating block group 16269705216 flags 1
[ 203.988076] btrfs: found 9435 extents
[ 206.879416] btrfs: found 9435 extents
[ 207.094286] btrfs: relocating block group 15195963392 flags 1
[ 214.046474] btrfs: found 12789 extents
[ 218.398271] btrfs: found 12789 extents
[ 218.685567] btrfs: relocating block group 13048479744 flags 1
[ 226.665003] btrfs: found 10176 extents
[ 230.115369] btrfs: found 10176 extents
[ 230.418228] btrfs: relocating block group 11840520192 flags 1
[ 238.866773] btrfs: found 10862 extents
[ 241.769074] btrfs: found 10862 extents
[ 242.030420] btrfs: relocating block group 10364125184 flags 1
[ 253.602784] btrfs: found 15486 extents
[ 257.715518] btrfs: found 15486 extents
[ 257.982685] btrfs: relocating block group 8619294720 flags 1
[ 267.146921] btrfs: found 13806 extents
[ 271.022675] btrfs: found 13806 extents
[ 271.268562] btrfs: relocating block group 6471811072 flags 1
[ 278.922272] btrfs: found 14490 extents
[ 283.589668] btrfs: found 14490 extents
[ 283.838663] btrfs: relocating block group 5398069248 flags 1
[ 292.536548] btrfs: found 15367 extents
[ 296.030960] btrfs: found 15367 extents
[ 296.346493] btrfs: relocating block group 4324327424 flags 1
[ 304.276714] btrfs: found 10555 extents
[ 306.996284] btrfs: found 10555 extents
[ 307.285261] btrfs: relocating block group 3250585600 flags 1
[ 317.425150] btrfs: found 26305 extents
[ 322.227915] btrfs: found 26305 extents
[ 322.537047] btrfs: relocating block group 2176843776 flags 1
[ 331.945877] btrfs: found 17104 extents
[ 335.615238] btrfs: found 17079 extents
[ 335.897953] btrfs: relocating block group 1103101952 flags 1
[ 347.888295] btrfs: found 28458 extents
[ 352.736987] btrfs: found 28458 extents
[ 353.099659] btrfs: 1 enospc errors during balance

merkaba:~> btrfs filesystem df /
Data: total=10.00GB, used=7.31GB
System, DUP: total=64.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=1.12GB, used=587.20MB

merkaba:~> btrfs filesystem show
failed to read /dev/sr0
Label: 'debian' uuid: […]
        Total devices 1 FS bytes used 7.88GB
        devid 1 size 18.62GB used 12.38GB path /dev/dm-0

For the sake of it I tried another time. It failed again:

martin@merkaba:~> dmesg | tail -32
[ 353.099659] btrfs: 1 enospc errors during balance
[ 537.057375] btrfs: relocating block group 32833011712 flags 36
[ 537.141258] btrfs: relocating block group 31759269888 flags 1
[ 537.284200] btrfs: relocating block group 30685528064 flags 1
[ 537.427898] btrfs: relocating block group 29611786240 flags 1
[ 541.103070] btrfs: found 8018 extents
[ 543.033896] btrfs: found 8018 extents
[ 543.242608] btrfs: relocating block group 28538044416 flags 1
[ 554.136612] btrfs: found 28088 extents
[ 557.621779] btrfs: found 28088 extents
[ 557.964021] btrfs: relocating block group 27464302592 flags 1
[ 568.547296] btrfs: found 27207 extents
[ 572.411358] btrfs: found 27207 extents
[ 572.749233] btrfs: relocating block group 26390560768 flags 1
[ 583.383545] btrfs: found 23359 extents
[ 586.907206] btrfs: found 23359 extents
[ 587.280967] btrfs: relocating block group 25316818944 flags 1
[ 597.054363] btrfs: found 22546 extents
[ 600.206597] btrfs: found 22546 extents
[ 600.444821] btrfs: relocating block group 24243077120 flags 1
[ 610.921027] btrfs: found 17593 extents
[ 613.609900] btrfs: found 17593 extents
[ 613.900155] btrfs: relocating block group 23169335296 flags 1
[ 624.355734] btrfs: found 18764 extents
[ 627.252739] btrfs: found 18764 extents
[ 627.567920] btrfs: relocating block group 22095593472 flags 1
[ 637.448364] btrfs: found 21593 extents
[ 641.256887] btrfs: found 21593 extents
[ 641.479140] btrfs: relocating block group 22062039040 flags 34
[ 641.695614] btrfs: relocating block group 22028484608 flags 34
[ 641.840179] btrfs: found 1 extents
[ 641.965843] btrfs: 1 enospc errors during balance

merkaba:~#19> btrfs filesystem df /
Data: total=10.00GB, used=7.31GB
System, DUP: total=32.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=1.12GB, used=586.74MB
merkaba:~> btrfs filesystem show
failed to read /dev/sr0
Label: 'debian' uuid: […]
        Total devices 1 FS bytes used 7.88GB
        devid 1 size 18.62GB used 12.32GB path /dev/dm-0

Btrfs Btrfs v0.19

Well, in order to be gentle to the SSD, I will stop my experiments now ;).

Thanks,
--
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7
Martin Steigerwald
2012-May-06 11:19 UTC
Re: balancing metadata fails with no space left on device
On Friday 4 May 2012 Martin Steigerwald wrote:
> On Friday 4 May 2012 Martin Steigerwald wrote:
> > Hi!
> >
> > merkaba:~> btrfs balance start -m /
> > ERROR: error during balancing '/' - No space left on device
> > There may be more info in syslog - try dmesg | tail
> > merkaba:~#19> dmesg | tail -22
> > [ 62.918734] CPU0: Package power limit normal
> > [ 525.229976] btrfs: relocating block group 20422066176 flags 1
> > [ 526.940452] btrfs: found 3048 extents
> > [ 528.803778] btrfs: found 3048 extents
[…]
> > [ 635.906517] btrfs: found 1 extents
> > [ 636.038096] btrfs: 1 enospc errors during balance
> >
> > merkaba:~> btrfs filesystem show
> > failed to read /dev/sr0
> > Label: 'debian' uuid: […]
> > Total devices 1 FS bytes used 7.89GB
> > devid 1 size 18.62GB used 17.58GB path /dev/dm-0
> >
> > Btrfs Btrfs v0.19
> > merkaba:~> btrfs filesystem df /
> > Data: total=15.52GB, used=7.31GB
> > System, DUP: total=32.00MB, used=4.00KB
> > System: total=4.00MB, used=0.00
> > Metadata, DUP: total=1.00GB, used=587.83MB
>
> I thought the data tree might have been too big, so out of curiosity I
> tried a full balance. It shrunk the data tree, but it failed as well:
>
> merkaba:~> btrfs balance start /
> ERROR: error during balancing '/' - No space left on device
> There may be more info in syslog - try dmesg | tail
> merkaba:~#19> dmesg | tail -63
> [ 89.306718] postgres (2876): /proc/2876/oom_adj is deprecated,
> please use /proc/2876/oom_score_adj instead.
> [ 159.939728] btrfs: relocating block group 21994930176 flags 34
> [ 160.010427] btrfs: relocating block group 21860712448 flags 1
> [ 161.188104] btrfs: found 6 extents
> [ 161.507388] btrfs: found 6 extents
[…]
> [ 335.897953] btrfs: relocating block group 1103101952 flags 1
> [ 347.888295] btrfs: found 28458 extents
> [ 352.736987] btrfs: found 28458 extents
> [ 353.099659] btrfs: 1 enospc errors during balance
>
> merkaba:~> btrfs filesystem df /
> Data: total=10.00GB, used=7.31GB
> System, DUP: total=64.00MB, used=4.00KB
> System: total=4.00MB, used=0.00
> Metadata, DUP: total=1.12GB, used=587.20MB
>
> merkaba:~> btrfs filesystem show
> failed to read /dev/sr0
> Label: 'debian' uuid: […]
> Total devices 1 FS bytes used 7.88GB
> devid 1 size 18.62GB used 12.38GB path /dev/dm-0
>
> For the sake of it I tried another time. It failed again:
>
> martin@merkaba:~> dmesg | tail -32
> [ 353.099659] btrfs: 1 enospc errors during balance
> [ 537.057375] btrfs: relocating block group 32833011712 flags 36
[…]
> [ 641.479140] btrfs: relocating block group 22062039040 flags 34
> [ 641.695614] btrfs: relocating block group 22028484608 flags 34
> [ 641.840179] btrfs: found 1 extents
> [ 641.965843] btrfs: 1 enospc errors during balance
>
> merkaba:~#19> btrfs filesystem df /
> Data: total=10.00GB, used=7.31GB
> System, DUP: total=32.00MB, used=4.00KB
> System: total=4.00MB, used=0.00
> Metadata, DUP: total=1.12GB, used=586.74MB
> merkaba:~> btrfs filesystem show
> failed to read /dev/sr0
> Label: 'debian' uuid: […]
> Total devices 1 FS bytes used 7.88GB
> devid 1 size 18.62GB used 12.32GB path /dev/dm-0
>
> Btrfs Btrfs v0.19
>
> Well, in order to be gentle to the SSD, I will stop my experiments now ;).

I had the subjective impression that the speed of the BTRFS filesystem decreased after all these balance runs.

Anyway, after reading the -musage hint by Ilya in the thread

  Is it possible to reclaim block groups once they are allocated to data or metadata?

I tried:

merkaba:~> btrfs filesystem df /
Data: total=10.00GB, used=7.34GB
System, DUP: total=32.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=1.12GB, used=586.39MB

merkaba:~> btrfs balance start -musage=1 /
Done, had to relocate 2 out of 13 chunks

merkaba:~> btrfs filesystem df /
Data: total=10.00GB, used=7.34GB
System, DUP: total=32.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=1.00GB, used=586.39MB

So this worked. But I wasn't able to specify less than a gigabyte:

merkaba:~> btrfs balance start -musage=0.8 /
Invalid usage argument: 0.8
merkaba:~#1> btrfs balance start -musage=700M /
Invalid usage argument: 700M

When I try without usage I get the old behavior back:

merkaba:~#1> btrfs balance start -m /
ERROR: error during balancing '/' - No space left on device
There may be more info in syslog - try dmesg | tail

merkaba:~> btrfs balance start -musage=1 /
Done, had to relocate 2 out of 13 chunks
merkaba:~> btrfs balance start -musage=1 /
Done, had to relocate 1 out of 12 chunks
merkaba:~> btrfs balance start -musage=1 /
Done, had to relocate 1 out of 12 chunks
merkaba:~> btrfs balance start -musage=1 /
Done, had to relocate 1 out of 12 chunks
merkaba:~> btrfs filesystem df /
Data: total=10.00GB, used=7.34GB
System, DUP: total=32.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=1.00GB, used=586.41MB

Ciao,
--
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7
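The repeated invocations above can also be scripted. A minimal sketch, assuming a kernel and btrfs-progs with balance filter support; the threshold steps are an arbitrary choice for illustration, not something from this thread:

# Retry the metadata balance with an increasing usage threshold; each
# pass only relocates metadata block groups less than N percent used.
for n in 1 5 10 25 50; do
    echo "=== btrfs balance start -musage=$n / ==="
    btrfs balance start -musage=$n / || break
    btrfs filesystem df /
done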
Ilya Dryomov
2012-May-06 18:48 UTC
Re: balancing metadata fails with no space left on device
On Sun, May 06, 2012 at 01:19:38PM +0200, Martin Steigerwald wrote:
> On Friday 4 May 2012 Martin Steigerwald wrote:
> > On Friday 4 May 2012 Martin Steigerwald wrote:
> > > Hi!
> > >
> > > merkaba:~> btrfs balance start -m /
> > > ERROR: error during balancing '/' - No space left on device
> > > There may be more info in syslog - try dmesg | tail
> > > merkaba:~#19> dmesg | tail -22
> > > [ 62.918734] CPU0: Package power limit normal
> > > [ 525.229976] btrfs: relocating block group 20422066176 flags 1
> > > [ 526.940452] btrfs: found 3048 extents
> > > [ 528.803778] btrfs: found 3048 extents
> […]
> > > [ 635.906517] btrfs: found 1 extents
> > > [ 636.038096] btrfs: 1 enospc errors during balance
> > >
> > > merkaba:~> btrfs filesystem show
> > > failed to read /dev/sr0
> > > Label: 'debian' uuid: […]
> > > Total devices 1 FS bytes used 7.89GB
> > > devid 1 size 18.62GB used 17.58GB path /dev/dm-0
> > >
> > > Btrfs Btrfs v0.19
> > > merkaba:~> btrfs filesystem df /
> > > Data: total=15.52GB, used=7.31GB
> > > System, DUP: total=32.00MB, used=4.00KB
> > > System: total=4.00MB, used=0.00
> > > Metadata, DUP: total=1.00GB, used=587.83MB
> >
> > I thought the data tree might have been too big, so out of curiosity I
> > tried a full balance. It shrunk the data tree, but it failed as well:
> >
> > merkaba:~> btrfs balance start /
> > ERROR: error during balancing '/' - No space left on device
> > There may be more info in syslog - try dmesg | tail
> > merkaba:~#19> dmesg | tail -63
> > [ 89.306718] postgres (2876): /proc/2876/oom_adj is deprecated,
> > please use /proc/2876/oom_score_adj instead.
> > [ 159.939728] btrfs: relocating block group 21994930176 flags 34
> > [ 160.010427] btrfs: relocating block group 21860712448 flags 1
> > [ 161.188104] btrfs: found 6 extents
> > [ 161.507388] btrfs: found 6 extents
> […]
> > [ 335.897953] btrfs: relocating block group 1103101952 flags 1
> > [ 347.888295] btrfs: found 28458 extents
> > [ 352.736987] btrfs: found 28458 extents
> > [ 353.099659] btrfs: 1 enospc errors during balance
> >
> > merkaba:~> btrfs filesystem df /
> > Data: total=10.00GB, used=7.31GB
> > System, DUP: total=64.00MB, used=4.00KB
> > System: total=4.00MB, used=0.00
> > Metadata, DUP: total=1.12GB, used=587.20MB
> >
> > merkaba:~> btrfs filesystem show
> > failed to read /dev/sr0
> > Label: 'debian' uuid: […]
> > Total devices 1 FS bytes used 7.88GB
> > devid 1 size 18.62GB used 12.38GB path /dev/dm-0
> >
> > For the sake of it I tried another time. It failed again:
> >
> > martin@merkaba:~> dmesg | tail -32
> > [ 353.099659] btrfs: 1 enospc errors during balance
> > [ 537.057375] btrfs: relocating block group 32833011712 flags 36
> […]
> > [ 641.479140] btrfs: relocating block group 22062039040 flags 34
> > [ 641.695614] btrfs: relocating block group 22028484608 flags 34
> > [ 641.840179] btrfs: found 1 extents
> > [ 641.965843] btrfs: 1 enospc errors during balance
> >
> > merkaba:~#19> btrfs filesystem df /
> > Data: total=10.00GB, used=7.31GB
> > System, DUP: total=32.00MB, used=4.00KB
> > System: total=4.00MB, used=0.00
> > Metadata, DUP: total=1.12GB, used=586.74MB
> > merkaba:~> btrfs filesystem show
> > failed to read /dev/sr0
> > Label: 'debian' uuid: […]
> > Total devices 1 FS bytes used 7.88GB
> > devid 1 size 18.62GB used 12.32GB path /dev/dm-0
> >
> > Btrfs Btrfs v0.19
> >
> > Well, in order to be gentle to the SSD, I will stop my experiments now ;).
>
> I had the subjective impression that the speed of the BTRFS filesystem
> decreased after all these balance runs.
>
> Anyway, after reading the -musage hint by Ilya in the thread
>
>   Is it possible to reclaim block groups once they are allocated to data
>   or metadata?

Currently there is no way to reclaim block groups other than performing a balance. We will add a kernel thread for this in the future, but a couple of things have to be fixed before that can happen.

> I tried:
>
> merkaba:~> btrfs filesystem df /
> Data: total=10.00GB, used=7.34GB
> System, DUP: total=32.00MB, used=4.00KB
> System: total=4.00MB, used=0.00
> Metadata, DUP: total=1.12GB, used=586.39MB
>
> merkaba:~> btrfs balance start -musage=1 /
> Done, had to relocate 2 out of 13 chunks
>
> merkaba:~> btrfs filesystem df /
> Data: total=10.00GB, used=7.34GB
> System, DUP: total=32.00MB, used=4.00KB
> System: total=4.00MB, used=0.00
> Metadata, DUP: total=1.00GB, used=586.39MB
>
> So this worked. But I wasn't able to specify less than a gigabyte:

A follow-up to the -musage hint says that the argument it takes is a percentage. That is, -musage=X will balance out block groups that are less than X percent used.

> merkaba:~> btrfs balance start -musage=0.8 /
> Invalid usage argument: 0.8
> merkaba:~#1> btrfs balance start -musage=700M /
> Invalid usage argument: 700M
>
> When I try without usage I get the old behavior back:
>
> merkaba:~#1> btrfs balance start -m /
> ERROR: error during balancing '/' - No space left on device
> There may be more info in syslog - try dmesg | tail
>
> merkaba:~> btrfs balance start -musage=1 /
> Done, had to relocate 2 out of 13 chunks
> merkaba:~> btrfs balance start -musage=1 /
> Done, had to relocate 1 out of 12 chunks
> merkaba:~> btrfs balance start -musage=1 /
> Done, had to relocate 1 out of 12 chunks
> merkaba:~> btrfs balance start -musage=1 /
> Done, had to relocate 1 out of 12 chunks
> merkaba:~> btrfs filesystem df /
> Data: total=10.00GB, used=7.34GB
> System, DUP: total=32.00MB, used=4.00KB
> System: total=4.00MB, used=0.00
> Metadata, DUP: total=1.00GB, used=586.41MB

Btrfs allocates space in chunks; in your case metadata chunks are probably 512M in size. Naturally, with 586M in use you can't make that last chunk go away, be it with or without auto-reclaim or a usage filter that accepts a size as its input.

Thanks,

Ilya
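To put a rough number on that explanation, using the df figures quoted above (this is a whole-filesystem average; individual block groups can of course be fuller or emptier):

awk 'BEGIN { printf "metadata usage: %.0f%%\n", 586.41 / 1024 * 100 }'

That prints about 57%, so a -musage threshold as low as the ones tried above will find nothing left to relocate once the nearly empty metadata block groups are gone, and relocating the well-filled ones still needs somewhere for the data to go.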
Robin Nehls
2012-May-06 19:38 UTC
Re: balancing metadata fails with no space left on device
On Fri, 4 May 2012 18:35:39 +0200, Martin Steigerwald <Martin@lichtvoll.de> wrote:
> Hi!
>
> merkaba:~> btrfs balance start -m /
> ERROR: error during balancing '/' - No space left on device
> There may be more info in syslog - try dmesg | tail
> merkaba:~#19> dmesg | tail -22
> [ 62.918734] CPU0: Package power limit normal
> [ 525.229976] btrfs: relocating block group 20422066176 flags 1
[…]
> [ 635.906517] btrfs: found 1 extents
> [ 636.038096] btrfs: 1 enospc errors during balance
>
> merkaba:~> btrfs filesystem show
> failed to read /dev/sr0
> Label: 'debian' uuid: […]
> Total devices 1 FS bytes used 7.89GB
> devid 1 size 18.62GB used 17.58GB path /dev/dm-0
>
> Btrfs Btrfs v0.19
> merkaba:~> btrfs filesystem df /
> Data: total=15.52GB, used=7.31GB
> System, DUP: total=32.00MB, used=4.00KB
> System: total=4.00MB, used=0.00
> Metadata, DUP: total=1.00GB, used=587.83MB
>
> This is repeatable.
[…]
> Do I need to wait for a proper fix of the global block reserve for the
> balance to succeed, or am I seeing a different issue?
>
> Since scrubbing still works, I take it that the balance was aborted
> gracefully and the filesystem is still intact. This is on a ThinkPad
> T520 with an Intel SSD 320. I only wanted to reorder the metadata
> trees; I do not think it makes much sense to relocate data blocks on
> an SSD. Maybe reordering metadata blocks does not make much sense
> either, but I thought I would report it anyway.
>
> Thanks,

Hi,

I think I have a similar problem, but in my case there is lots of free space available, so this might also be a bug.

My problem: I wanted to convert the data of my btrfs from RAID0 to single. No matter whether I use soft or not, the progress always stops with 3GB of RAID0 remaining. The conversion is never completed, so new files are always written to the RAID0 part of the data. If I do a balance without special options, the data is converted back to RAID0. This enospc error can't be correct, because there is about 1 TB of space available.

What I do:

# ./btrfs balance start -dconvert=single,soft /mnt/btrfs/
ERROR: error during balancing '/mnt/btrfs/' - No space left on device
There may be more info in syslog - try dmesg | tail

Relevant dmesg:

[418912.485276] btrfs: relocating block group 11165392437248 flags 9
[418914.044328] btrfs: 1 enospc errors during balance

FS information:

# ./btrfs filesystem show
Label: none uuid: 0251aa44-4e39-4db5-b18d-ffc8e85042ab
        Total devices 3 FS bytes used 2.24TB
        devid 1 size 1.82TB used 1.59TB path /dev/sdc1
        devid 3 size 931.51GB used 696.06GB path /dev/sdd1
        devid 2 size 931.51GB used 696.00GB path /dev/sdb1

Btrfs Btrfs v0.19-dirty

# ./btrfs filesystem df /mnt/btrfs/
Data, RAID0: total=3.00GB, used=3.00GB
Data: total=2.80TB, used=2.24TB
System, RAID1: total=64.00MB, used=328.00KB
System: total=4.00MB, used=0.00
Metadata, RAID1: total=75.00GB, used=2.94GB

# cat /proc/version
Linux version 3.4.0-rc5-amd64 (root@hermes) (gcc version 4.6.3 (Debian 4.6.3-1) ) #1 SMP Tue May 1 23:52:34 CEST 2012

So long,
Robi
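For readers following along, a small sketch of how such a conversion can be watched and resumed; it assumes a restriper-capable kernel and btrfs-progs like the ones used above, and the balance status subcommand may not exist in older tool versions:

# Convert data block groups to the single profile; "soft" skips block
# groups that already have the target profile, so an interrupted run
# can be resumed without rewriting everything again.
btrfs balance start -dconvert=single,soft /mnt/btrfs/

# From a second shell: progress and per-profile totals; the
# "Data, RAID0" line shows what is still left to convert.
btrfs balance status /mnt/btrfs/
btrfs filesystem df /mnt/btrfs/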
Martin Steigerwald
2012-May-07 19:19 UTC
Re: balancing metadata fails with no space left on device
On Sunday 6 May 2012 Ilya Dryomov wrote:
> On Sun, May 06, 2012 at 01:19:38PM +0200, Martin Steigerwald wrote:
> > On Friday 4 May 2012 Martin Steigerwald wrote:
> > > On Friday 4 May 2012 Martin Steigerwald wrote:
> > > > Hi!
> > > >
> > > > merkaba:~> btrfs balance start -m /
> > > > ERROR: error during balancing '/' - No space left on device
> > > > There may be more info in syslog - try dmesg | tail
> > > > merkaba:~#19> dmesg | tail -22
> > > > [ 62.918734] CPU0: Package power limit normal
> > > > [ 525.229976] btrfs: relocating block group 20422066176 flags 1
> > > > [ 526.940452] btrfs: found 3048 extents
> > > > [ 528.803778] btrfs: found 3048 extents
> >
> > […]
> >
> > > > [ 635.906517] btrfs: found 1 extents
> > > > [ 636.038096] btrfs: 1 enospc errors during balance
> > > >
> > > > merkaba:~> btrfs filesystem show
> > > > failed to read /dev/sr0
> > > > Label: 'debian' uuid: […]
> > > > Total devices 1 FS bytes used 7.89GB
> > > > devid 1 size 18.62GB used 17.58GB path /dev/dm-0
> > > >
> > > > Btrfs Btrfs v0.19
> > > > merkaba:~> btrfs filesystem df /
> > > > Data: total=15.52GB, used=7.31GB
> > > > System, DUP: total=32.00MB, used=4.00KB
> > > > System: total=4.00MB, used=0.00
> > > > Metadata, DUP: total=1.00GB, used=587.83MB
> > >
> > > I thought the data tree might have been too big, so out of curiosity I
> > > tried a full balance. It shrunk the data tree, but it failed as well:
> > >
> > > merkaba:~> btrfs balance start /
> > > ERROR: error during balancing '/' - No space left on device
> > > There may be more info in syslog - try dmesg | tail
> > > merkaba:~#19> dmesg | tail -63
> > > [ 89.306718] postgres (2876): /proc/2876/oom_adj is deprecated,
> > > please use /proc/2876/oom_score_adj instead.
> > > [ 159.939728] btrfs: relocating block group 21994930176 flags 34
> > > [ 160.010427] btrfs: relocating block group 21860712448 flags 1
> > > [ 161.188104] btrfs: found 6 extents
> > > [ 161.507388] btrfs: found 6 extents
> >
> > […]
> >
> > > [ 335.897953] btrfs: relocating block group 1103101952 flags 1
> > > [ 347.888295] btrfs: found 28458 extents
> > > [ 352.736987] btrfs: found 28458 extents
> > > [ 353.099659] btrfs: 1 enospc errors during balance
> > >
> > > merkaba:~> btrfs filesystem df /
> > > Data: total=10.00GB, used=7.31GB
> > > System, DUP: total=64.00MB, used=4.00KB
> > > System: total=4.00MB, used=0.00
> > > Metadata, DUP: total=1.12GB, used=587.20MB
> > >
> > > merkaba:~> btrfs filesystem show
> > > failed to read /dev/sr0
> > > Label: 'debian' uuid: […]
> > > Total devices 1 FS bytes used 7.88GB
> > > devid 1 size 18.62GB used 12.38GB path /dev/dm-0
> > >
> > > For the sake of it I tried another time. It failed again:
> > >
> > > martin@merkaba:~> dmesg | tail -32
> > > [ 353.099659] btrfs: 1 enospc errors during balance
> > > [ 537.057375] btrfs: relocating block group 32833011712 flags 36
> >
> > […]
> >
> > > [ 641.479140] btrfs: relocating block group 22062039040 flags 34
> > > [ 641.695614] btrfs: relocating block group 22028484608 flags 34
> > > [ 641.840179] btrfs: found 1 extents
> > > [ 641.965843] btrfs: 1 enospc errors during balance
> > >
> > > merkaba:~#19> btrfs filesystem df /
> > > Data: total=10.00GB, used=7.31GB
> > > System, DUP: total=32.00MB, used=4.00KB
> > > System: total=4.00MB, used=0.00
> > > Metadata, DUP: total=1.12GB, used=586.74MB
> > > merkaba:~> btrfs filesystem show
> > > failed to read /dev/sr0
> > > Label: 'debian' uuid: […]
> > > Total devices 1 FS bytes used 7.88GB
> > > devid 1 size 18.62GB used 12.32GB path /dev/dm-0
> > >
> > > Btrfs Btrfs v0.19
> > >
> > > Well, in order to be gentle to the SSD, I will stop my experiments
> > > now ;).
> >
> > I had the subjective impression that the speed of the BTRFS filesystem
> > decreased after all these balance runs.
> >
> > Anyway, after reading the -musage hint by Ilya in the thread
> >
> >   Is it possible to reclaim block groups once they are allocated to data
> >   or metadata?
>
> Currently there is no way to reclaim block groups other than performing
> a balance. We will add a kernel thread for this in the future, but a
> couple of things have to be fixed before that can happen.

Thanks. Yes, I got that. I just referenced the other thread for other readers.

> > I tried:
> >
> > merkaba:~> btrfs filesystem df /
> > Data: total=10.00GB, used=7.34GB
> > System, DUP: total=32.00MB, used=4.00KB
> > System: total=4.00MB, used=0.00
> > Metadata, DUP: total=1.12GB, used=586.39MB
> >
> > merkaba:~> btrfs balance start -musage=1 /
> > Done, had to relocate 2 out of 13 chunks
> >
> > merkaba:~> btrfs filesystem df /
> > Data: total=10.00GB, used=7.34GB
> > System, DUP: total=32.00MB, used=4.00KB
> > System: total=4.00MB, used=0.00
> > Metadata, DUP: total=1.00GB, used=586.39MB
> >
> > So this worked.
> >
> > But I wasn't able to specify less than a gigabyte:
>
> A follow-up to the -musage hint says that the argument it takes is a
> percentage. That is, -musage=X will balance out block groups that are
> less than X percent used.

I missed that. Hmm, then the metadata ending up at total=1.00GB was just a coincidence?

> > merkaba:~> btrfs balance start -musage=0.8 /
> > Invalid usage argument: 0.8
> > merkaba:~#1> btrfs balance start -musage=700M /
> > Invalid usage argument: 700M
> >
> > When I try without usage I get the old behavior back:
> >
> > merkaba:~#1> btrfs balance start -m /
> > ERROR: error during balancing '/' - No space left on device
> > There may be more info in syslog - try dmesg | tail
> >
> > merkaba:~> btrfs balance start -musage=1 /
> > Done, had to relocate 2 out of 13 chunks
> > merkaba:~> btrfs balance start -musage=1 /
> > Done, had to relocate 1 out of 12 chunks
> > merkaba:~> btrfs balance start -musage=1 /
> > Done, had to relocate 1 out of 12 chunks
> > merkaba:~> btrfs balance start -musage=1 /
> > Done, had to relocate 1 out of 12 chunks
> > merkaba:~> btrfs filesystem df /
> > Data: total=10.00GB, used=7.34GB
> > System, DUP: total=32.00MB, used=4.00KB
> > System: total=4.00MB, used=0.00
> > Metadata, DUP: total=1.00GB, used=586.41MB
>
> Btrfs allocates space in chunks; in your case metadata chunks are
> probably 512M in size. Naturally, with 586M in use you can't make
> that last chunk go away, be it with or without auto-reclaim or a
> usage filter that accepts a size as its input.

Hmm, whatever it did though: I believe my playing around made the BTRFS performance go down by a big margin.

I didn't do any measurements yet, but apt-cache search is much slower now, as is starting Iceweasel. The SSD tends to feel quite a bit more like a harddisk. (It still feels faster, though.)

And the startup time has also risen:

martin@merkaba:~> systemd-analyze
Startup finished in 6058ms (kernel) + 9285ms (userspace) = 15344ms

This was about 8.5 seconds before.

I can't prove that this is due to a slower BTRFS, but I highly suspect it.

So I think I learned that there is no guarantee that a BTRFS balance improves the situation at all. It seems to have worsened it a lot here.

Well, it was just me experimenting around. I didn't have a real problem before, and now it seems I have created one.

Now I wonder whether there is a way to fix up this perceived performance regression other than creating a new logical volume with BTRFS, copying all the stuff over and switching / to use the new volume.

(I doubt that the Intel SSD 320 itself has regressed that much due to the balances. The SSD is only one year old and according to the data sheet can take 20 GB of writes a day for 5 years. Also, I use fstrim from time to time and have about 25 GB left free.)

--
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7
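Since no measurements were taken yet, here is a rough sketch of how the perceived slowdown could be quantified; it assumes root privileges, and the package search term is only an example:

# Flush the page cache so the first run below actually hits the SSD,
# then time a metadata-heavy operation cold and warm.
sync
echo 3 > /proc/sys/vm/drop_caches
time apt-cache search iceweasel   # cold cache
time apt-cache search iceweasel   # warm cache, for comparison

# Boot time, as already quoted above:
systemd-analyze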
Martin Steigerwald
2012-May-10 15:14 UTC
Re: balancing metadata fails with no space left on device
On Monday 7 May 2012 Martin Steigerwald wrote:
> On Sunday 6 May 2012 Ilya Dryomov wrote:
> > On Sun, May 06, 2012 at 01:19:38PM +0200, Martin Steigerwald wrote:
> > > On Friday 4 May 2012 Martin Steigerwald wrote:
> > > > On Friday 4 May 2012 Martin Steigerwald wrote:
[…]
> > > merkaba:~> btrfs balance start -musage=0.8 /
> > > Invalid usage argument: 0.8
> > > merkaba:~#1> btrfs balance start -musage=700M /
> > > Invalid usage argument: 700M
> > >
> > > When I try without usage I get the old behavior back:
> > >
> > > merkaba:~#1> btrfs balance start -m /
> > > ERROR: error during balancing '/' - No space left on device
> > > There may be more info in syslog - try dmesg | tail
> > >
> > > merkaba:~> btrfs balance start -musage=1 /
> > > Done, had to relocate 2 out of 13 chunks
> > > merkaba:~> btrfs balance start -musage=1 /
> > > Done, had to relocate 1 out of 12 chunks
> > > merkaba:~> btrfs balance start -musage=1 /
> > > Done, had to relocate 1 out of 12 chunks
> > > merkaba:~> btrfs balance start -musage=1 /
> > > Done, had to relocate 1 out of 12 chunks
> > > merkaba:~> btrfs filesystem df /
> > > Data: total=10.00GB, used=7.34GB
> > > System, DUP: total=32.00MB, used=4.00KB
> > > System: total=4.00MB, used=0.00
> > > Metadata, DUP: total=1.00GB, used=586.41MB
> >
> > Btrfs allocates space in chunks; in your case metadata chunks are
> > probably 512M in size. Naturally, with 586M in use you can't make
> > that last chunk go away, be it with or without auto-reclaim or a
> > usage filter that accepts a size as its input.
>
> Hmm, whatever it did though: I believe my playing around made the BTRFS
> performance go down by a big margin.
>
> I didn't do any measurements yet, but apt-cache search is much slower
> now, as is starting Iceweasel. The SSD tends to feel quite a bit more
> like a harddisk. (It still feels faster, though.)
>
> And the startup time has also risen:
>
> martin@merkaba:~> systemd-analyze
> Startup finished in 6058ms (kernel) + 9285ms (userspace) = 15344ms
>
> This was about 8.5 seconds before.
>
> I can't prove that this is due to a slower BTRFS, but I highly suspect
> it.
>
> So I think I learned that there is no guarantee that a BTRFS balance
> improves the situation at all. It seems to have worsened it a lot here.
>
> Well, it was just me experimenting around. I didn't have a real problem
> before, and now it seems I have created one.
>
> Now I wonder whether there is a way to fix up this perceived performance
> regression other than creating a new logical volume with BTRFS, copying
> all the stuff over and switching / to use the new volume.

I did it the redo-the-filesystem way:

merkaba:~> systemd-analyze
Startup finished in 6602ms (kernel) + 4246ms (userspace) = 10848ms

That is not quite the roughly 8.5 seconds I had seen before, but it was the first start after activating inode_cache + space_cache, as I did not want to activate them on the 3.1 kernel of GRML 2011.12. The filesystem has been mounted with compress=lzo from the beginning, which also made a difference in space usage.

Iceweasel and apt-cache are also way faster again. Almost instant.

Next time I will use a BTRFS filesystem other than my / one for experiments with balancing ;).

Thanks,
--
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7
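For anyone wanting to take the same redo-the-filesystem route, a rough sketch of the procedure under the assumption of an LVM setup like the one described; the volume group, volume name and size are made up for illustration, and the mount options are the ones mentioned above:

# Create a fresh logical volume and a new btrfs filesystem on it.
lvcreate -L 20G -n btrfsroot-new vg0
mkfs.btrfs -L debian /dev/vg0/btrfsroot-new

# Mount it with the desired options and copy the old root over,
# staying on one filesystem and preserving hard links, ACLs and xattrs.
mkdir -p /mnt/new
mount -o compress=lzo,space_cache,inode_cache /dev/vg0/btrfsroot-new /mnt/new
rsync -aHAXx --numeric-ids / /mnt/new/

# Then point /etc/fstab and the boot loader at the new volume and reboot.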