I've been using btrfs as spool space on our backup server to get familiar with it, and things work fine until the volume fills up. Our backup software (bacula) normally spools until the volume is full, then despools and respools, and so on. With btrfs, the volume fills up but bacula thinks there is still space, so it keeps trying and eventually errors out. I first assumed this was because I was using compression, but I've reproduced the problem without compression. Doing some tests, this is what I've found:

lsddomainsd:/spool# dd if=/dev/zero of=junk bs=1024000
dd: writing `junk': No space left on device
522163+0 records in
522162+0 records out
534693888000 bytes (535 GB) copied, 6026.85 s, 88.7 MB/s
lsddomainsd:/spool# ls
junk
lsddomainsd:/spool# df -h
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/lsddomain-root   6.5G  3.5G  2.7G  58% /
tmpfs                        2.0G     0  2.0G   0% /lib/init/rw
udev                          10M  172K  9.9M   2% /dev
tmpfs                        2.0G     0  2.0G   0% /dev/shm
/dev/sda1                    228M   92M  124M  43% /boot
/dev/mapper/lsddomain-home   4.6G  138M  4.5G   3% /home
192.168.58.2:/backup/bacula  2.5T  996G  1.5T  40% /backup
/dev/mapper/spool            500G  499G  1.6G 100% /spool
lsddomainsd:/spool# dd if=/dev/zero of=junk2 bs=1024000
dd: writing `junk2': No space left on device
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0704083 s, 0.0 kB/s
lsddomainsd:/spool# dd if=/dev/zero of=junk3 bs=1024000
dd: writing `junk3': No space left on device
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.108706 s, 0.0 kB/s
lsddomainsd:/spool# ls -lh
total 498G
-rw-r--r-- 1 root root 498G 2010-03-04 09:54 junk
-rw-r--r-- 1 root root    0 2010-03-04 13:45 junk2
-rw-r--r-- 1 root root    0 2010-03-04 13:45 junk3
lsddomainsd:/spool#

So even though the volume is full, df still shows space available. Is this supposed to happen? I don't remember seeing any space left on other file systems, and I've filled quite a few.

Thanks,
Robert LeBlanc
Life Sciences & Undergraduate Education Computer Support
Brigham Young University
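For illustration, here is a minimal sketch of the mismatch an application can hit in this situation. It is not bacula's actual logic (which I have not looked at), just an assumption about the general pattern: statvfs() still reports free blocks on the full volume while a write() fails with ENOSPC.

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/statvfs.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        const char *dir = argc > 1 ? argv[1] : "/spool";
        struct statvfs sv;
        char path[4096];
        char buf[4096];
        ssize_t n;
        int fd;

        if (statvfs(dir, &sv) != 0) {
            perror("statvfs");
            return 1;
        }
        /* This is the figure df's "Avail" column is based on. */
        printf("statvfs: %llu free blocks of %lu bytes\n",
               (unsigned long long)sv.f_bavail, (unsigned long)sv.f_frsize);

        snprintf(path, sizeof(path), "%s/enospc-test", dir);
        fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        memset(buf, 0, sizeof(buf));
        n = write(fd, buf, sizeof(buf));
        if (n < 0)
            /* On the full btrfs volume this write can fail with ENOSPC
             * even though f_bavail above was non-zero. */
            printf("write: %s\n", strerror(errno));
        else
            printf("wrote %zd bytes\n", n);
        close(fd);
        unlink(path);
        return 0;
    }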
On Thu, Mar 04, 2010 at 01:58:22PM -0700, Robert LeBlanc wrote:
> I've been using btrfs as spool space on our backup server to get
> familiar with it, and things work fine until the volume fills up. Our
> backup software (bacula) normally spools until the volume is full, then
> despools and respools, and so on. With btrfs, the volume fills up but
> bacula thinks there is still space, so it keeps trying and eventually
> errors out. I first assumed this was because I was using compression,
> but I've reproduced the problem without compression. Doing some tests,
> this is what I've found:
>
> [dd runs and df output trimmed]
>
> /dev/mapper/spool            500G  499G  1.6G 100% /spool
>
> So even though the volume is full, df still shows space available. Is
> this supposed to happen? I don't remember seeing any space left on
> other file systems, and I've filled quite a few.

Yeah, this is an unfortunate side effect of how we currently do df. We plan
on changing it, but at the moment the "Used" column only shows data usage, so
the 1.6G you see is what has been reserved for metadata space. IIRC the
consensus was to count the used amount from all spaces and then add the free,
unallocated space to that, but "Avail" will still likely end up including
space that is free for metadata but cannot actually be used for data.

Thanks,

Josef
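As a rough userspace sketch of the accounting described above (the struct and the sample figures are purely illustrative, not the kernel code or real numbers from the volume in question):

    #include <stdint.h>
    #include <stdio.h>

    /* One entry per btrfs "space": data, metadata, system. */
    struct space {
        uint64_t allocated;  /* bytes handed out to chunks of this type */
        uint64_t used;       /* bytes actually used inside those chunks */
    };

    int main(void)
    {
        /* Illustrative figures, loosely shaped like the 500G volume above. */
        struct space spaces[] = {
            { 498ULL << 30, 498ULL << 30 },  /* data: completely full */
            {   1ULL << 30, 600ULL << 20 },  /* metadata: ~0.6G used  */
            {   8ULL << 20,  64ULL << 10 },  /* system                */
        };
        uint64_t device_bytes = 500ULL << 30;
        uint64_t used = 0, allocated = 0;

        for (size_t i = 0; i < sizeof(spaces) / sizeof(spaces[0]); i++) {
            used += spaces[i].used;
            allocated += spaces[i].allocated;
        }

        /* Proposed "Used": count usage from every space, not just data. */
        printf("Used : %llu bytes\n", (unsigned long long)used);

        /* "Avail" is then everything not used: the unallocated region plus
         * whatever is free inside already-allocated chunks.  The slack in
         * the metadata chunk (~0.4G here) still shows up even though file
         * data can never be written there - the caveat mentioned above. */
        printf("Avail: %llu bytes (of which %llu unallocated)\n",
               (unsigned long long)(device_bytes - used),
               (unsigned long long)(device_bytes - allocated));
        return 0;
    }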
On Thu, Mar 4, 2010 at 2:10 PM, Josef Bacik <josef@redhat.com> wrote:
> On Thu, Mar 04, 2010 at 01:58:22PM -0700, Robert LeBlanc wrote:
>> [original report trimmed]
>
> Yeah, this is an unfortunate side effect of how we currently do df. We
> plan on changing it, but at the moment the "Used" column only shows data
> usage, so the 1.6G you see is what has been reserved for metadata space.
> IIRC the consensus was to count the used amount from all spaces and then
> add the free, unallocated space to that, but "Avail" will still likely
> end up including space that is free for metadata but cannot actually be
> used for data.

So would it be possible to prevent the creation of metadata once there are no
data blocks left? I wonder if that would solve this issue: if Bacula saw that
it couldn't write a file at all, it would go into despooling. I also wonder
whether other programs (cp, rsync, etc.) will run into the same problem when
the volume is full. I know it's a corner case, so I'm not going to press the
issue while btrfs is not yet close to stable; it's just something I came
across and thought I'd ask for some additional information about.

Robert LeBlanc
Life Sciences & Undergraduate Education Computer Support
Brigham Young University
>>>>> "J" == Josef Bacik <josef@redhat.com> writes:

J> Yeah, this is an unfortunate side effect of how we currently do df.

Speaking of that, it would be nice if df -i were supported as well.
Filesystems like xfs, AIUI, which have fully dynamic inode allocation,
simply report the number of blocks as the maximum possible number of
inodes. Reporting that, plus the number of inodes actually in use, would
be helpful.

-JimC
--
James Cloos <cloos@jhcloos.com>         OpenPGP: 1024D/ED7DAEA6
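For reference, df -i builds its columns from the inode fields of statfs/statvfs. Here is a minimal sketch of reading them; the comment about using the block count as the inode ceiling reflects the suggestion above, not current btrfs behaviour.

    #include <stdio.h>
    #include <sys/statvfs.h>

    /* df -i derives its columns from these statvfs fields:
     *   Inodes = f_files, IFree = f_ffree, IUsed = f_files - f_ffree.
     * A filesystem with fully dynamic inode allocation could, as suggested
     * above, report the block count as f_files (an upper bound) together
     * with the number of inodes actually in use. */
    int main(int argc, char **argv)
    {
        struct statvfs sv;

        if (statvfs(argc > 1 ? argv[1] : ".", &sv) != 0) {
            perror("statvfs");
            return 1;
        }
        printf("Inodes: %llu  IUsed: %llu  IFree: %llu\n",
               (unsigned long long)sv.f_files,
               (unsigned long long)(sv.f_files - sv.f_ffree),
               (unsigned long long)sv.f_ffree);
        return 0;
    }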