myLC@gmx.net
2008-Jan-21 02:19 UTC
[Btrfs-devel] 3 thoughts about important and outstanding features
'lo there, =)
1)
Having a DVB-S receiver running Linux (PPC), I found
myself wondering how to delete data from the middle of a
large file (stripping ads from a recording, for example -
or messing around in a virtual disk file, etc.).
Currently, the common way of doing this seems to be
copying the file (leaving a part behind) and then deleting
the original. Of course, on a large file (say 12 GB or
more) this can take an eternity; you can also run into
trouble if the filesystem is nearly full...
Is it just me, or does that make no sense?
On a block-oriented filesystem, operations like this
should take only an instant.
So basically I'm looking for functions to:
- insert a chunk into a file
- delete a chunk from a file
- move a chunk from one file into another
All of the above would be very useful when dealing with
large data, such as DVB-recordings (i.e.: video).
This is also interesting for large databases. Currently
they implement "their own" filesystems on top of other
filesystems - a layer that would then become superfluous.
Seeing a large file as a chain of blocks, performing such
an operation at block granularity should already be easy
to accomplish. For full support, however, there should be
the possibility of inserting "sparse blocks" (holding less
content than usual) anywhere within the chain (including
at the beginning).
Is this already possible?
Would it be difficult to implement?
Think about it: instead of copying gigabytes while the
drive's heads click around - taking minutes to hours -
such operations could be performed in (milli)seconds.
I think there should indeed be a standard (POSIX?) for
providing such functionality. (One call could determine
whether the filesystem supports these operations
efficiently - it could return a version number, for
instance, with 0 meaning that the operations are provided
but slow.)
What is needed first, however, is a filesystem supporting
these operations (via fcntl or the like) - which would
instantly make it the first choice for VDRs running Linux.
(The other filesystems would surely follow soon after, at
which point there would be a chance to establish a
standard.)
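No standard capability query of the kind suggested above exists today;
the usual idiom is to attempt the operation on a scratch file and
treat EOPNOTSUPP as "not supported". A hedged sketch (the function
name is invented for illustration; it probes the filesystem holding
the directory `dir`):

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Returns 1 if the filesystem containing `dir` supports fast in-place
 * range collapsing, 0 otherwise (including when probing fails). */
int supports_fast_collapse(const char *dir)
{
    char path[4096];
    snprintf(path, sizeof(path), "%s/.probeXXXXXX", dir);
    int fd = mkstemp(path);
    if (fd < 0)
        return 0;
    unlink(path);                 /* scratch file vanishes on close */

    int ok = 0;
    char block[4096] = {0};
    /* Write two blocks of real data, then try to collapse the first. */
    if (write(fd, block, sizeof(block)) == (ssize_t)sizeof(block) &&
        write(fd, block, sizeof(block)) == (ssize_t)sizeof(block) &&
        fallocate(fd, FALLOC_FL_COLLAPSE_RANGE, 0, 4096) == 0)
        ok = 1;
    close(fd);
    return ok;
}
```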
2)
There is a HUGE difference in performance between a hard
drive's outer and inner tracks.
A "self-optimizing" filesystem could make use of this by
allowing the user/administrator to specify preferences about
where to put certain files.
For instance, a particular group of files might best be
kept together; big files important to the system's
performance could state a preference for the outer zones,
while less critical stuff could be shoved towards the
drive's hub...
Currently this is mostly handled by partitioning. We all
know how inflexible and wasteful this can be.
Since you already have online fsck, online optimization
(including defragmentation) would be the next, and
certainly much appreciated, level.
3)
Not sure if you already have this: ZFS's outstanding
feature? Copy-on-write links. ;-) VERY powerful.
Looking forward to some enlightenment... :-)
LC (myLC@gmx.net)
Chris Mason
2008-Jan-21 08:47 UTC
[Btrfs-devel] 3 thoughts about important and outstanding features
On Monday 21 January 2008, myLC@gmx.net wrote:
> 'lo there, =)
>
> 1)
> having a DVB-S receiver running Linux (PPC) I found myself
> wondering how to delete data from the middle of a large file
> (stripping a recording of ads, for example - or messing
> around in a virtual disk file, etc.). Currently, the common
> way of doing this seems to be by copying the file (leaving a
> part behind) and then deleting the original. Of course, on a
> large file (say 12 GB or more) this can take an eternity;
> also you can run into trouble if the filesystem is nearly
> full...
>
> Is it me, or doesn't that make any sense?
> Having a block-oriented filesystem, operations like this
> should only take an instance.
>
> So basically I'm looking for functions to:
> - insert a chunk into a file
> - delete a chunk from a file
> - move a chunk from one file into another

Like most filesystems, btrfs stores metadata for extents keyed by the
offset into the file. So, in order to insert one byte in the middle of
the file, you have to change all of the extent pointers after that
byte in the file by one. This can be slow, although it is certainly
faster than copying all the data.

It is very hard to provide slicing operations in place due to races
with truncate. I'm not eager to dive into all of those corner cases.
But, what Btrfs can do is provide a few building blocks that
applications could use to do this much much faster than it is done
today. It relates to the cow one file ioctl as well.

On disk, btrfs file data extent pointers store 4 numbers:

[ start of the extent on disk, length of the extent on disk,
  offset into the extent, length ]

This is currently used when doing copy on write operations in the
middle of an extent. A new extent is created with the modified data,
and pointers are setup to the old extent for the bytes surrounding the
new data. The Btrfs disk format allows one file to reference extents
made by another file.
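The four numbers described above, and the middle-of-extent COW split
built on them, can be sketched as follows (field and function names
are illustrative, not btrfs' real on-disk names):

```c
/* Sketch of the four numbers a btrfs file extent pointer stores. */
struct extent_ptr {
    unsigned long long disk_start; /* start of the extent on disk  */
    unsigned long long disk_len;   /* length of the extent on disk */
    unsigned long long offset;     /* offset into the extent       */
    unsigned long long len;        /* bytes the file references    */
};

/* COW write of `n` bytes at position `pos` within one extent: the new
 * data gets a fresh extent (not shown), while the bytes on either
 * side keep pointing into the old extent via these two pointers. */
void cow_split(struct extent_ptr old, unsigned long long pos,
               unsigned long long n,
               struct extent_ptr *left, struct extent_ptr *right)
{
    *left = old;
    left->len = pos;                      /* bytes before the new data */

    *right = old;
    right->offset = old.offset + pos + n; /* skip the rewritten range  */
    right->len = old.len - pos - n;       /* bytes after the new data  */
}
```

Note how neither resulting pointer copies any data: both still name
the old extent's location on disk, just with narrower windows into it.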
So, we could easily write an ioctl that created a new file referencing
all of the extents of an existing file. That ioctl could also provide
the ability to reference only parts of an existing file.

Punching holes in an existing file is something I think XFS can do now
via an ioctl. Shifting bytes around is a different (and much more
complex) story unless you're willing to shift them to a new file.

> 2)
> There is a HUGE difference in performance when it comes to
> harddrives and their outer versus their inner tracks.
> A "self-optimizing" filesystem could make use of this by
> allowing the user/administrator to specify preferences about
> where to put certain files.

Different disks do different things (including inverting where block 0
lies). Btrfs will have flexible allocation policies that help admins
make their own decisions about these things, but inner/outer
optimizations probably won't be a focus of automatic tuning.

-chris