From: Josef Bacik <josef@redhat.com>

Miao pointed out there's a problem with mixing dio writes and buffered reads.
If the read happens between us invalidating the page range and actually
locking the extent, we can bring pages into the page cache.  Then once the
write finishes, if somebody tries to read again it will just find uptodate
pages and we'll read stale data.  So we need to lock the extent and check for
uptodate bits in the range.  If there are uptodate bits we need to unlock and
invalidate again.  This will keep this race from happening, since we will
hold the extent locked until we create the ordered extent, and then the read
side always waits for ordered extents.  Thanks,

Signed-off-by: Josef Bacik <josef@redhat.com>
---
V1->V2
-Use invalidate_inode_pages2_range since it will actually unmap existing pages
-Do a filemap_write_and_wait_range in case of mmap

 fs/btrfs/inode.c |   42 +++++++++++++++++++++++++++++++++++++++---
 1 files changed, 39 insertions(+), 3 deletions(-)

diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 9d8c45d..a430549 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -6360,12 +6360,48 @@ static ssize_t btrfs_direct_IO(int rw, struct kiocb *iocb,
 		 */
 		ordered = btrfs_lookup_ordered_range(inode, lockstart,
 						     lockend - lockstart + 1);
-		if (!ordered)
+
+		/*
+		 * We need to make sure there are no buffered pages in this
+		 * range either, we could have raced between the invalidate in
+		 * generic_file_direct_write and locking the extent.  The
+		 * invalidate needs to happen so that reads after a write do
+		 * not get stale data.
+		 */
+		if (!ordered && (!writing ||
+		    !test_range_bit(&BTRFS_I(inode)->io_tree,
+				    lockstart, lockend, EXTENT_UPTODATE, 0,
+				    cached_state)))
 			break;
+
 		unlock_extent_cached(&BTRFS_I(inode)->io_tree, lockstart, lockend,
 				     &cached_state, GFP_NOFS);
-		btrfs_start_ordered_extent(inode, ordered, 1);
-		btrfs_put_ordered_extent(ordered);
+
+		if (ordered) {
+			btrfs_start_ordered_extent(inode, ordered, 1);
+			btrfs_put_ordered_extent(ordered);
+		} else {
+			/* Screw you mmap */
+			ret = filemap_write_and_wait_range(file->f_mapping,
+							   lockstart,
+							   lockend);
+			if (ret)
+				goto out;
+
+			/*
+			 * If we found a page that couldn't be invalidated just
+			 * fall back to buffered.
+			 */
+			ret = invalidate_inode_pages2_range(file->f_mapping,
+					lockstart >> PAGE_CACHE_SHIFT,
+					lockend >> PAGE_CACHE_SHIFT);
+			if (ret) {
+				if (ret == -EBUSY)
+					ret = 0;
+				goto out;
+			}
+		}
+
 		cond_resched();
 	}
--
1.7.7.6

--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
On Tue, 26 Jun 2012 09:42:56 -0400, Josef Bacik wrote:
> From: Josef Bacik <josef@redhat.com>
>
> Miao pointed out there's a problem with mixing dio writes and buffered
> reads.  If the read happens between us invalidating the page range and
> actually locking the extent, we can bring pages into the page cache.  Then
> once the write finishes, if somebody tries to read again it will just find
> uptodate pages and we'll read stale data.  So we need to lock the extent
> and check for uptodate bits in the range.  If there are uptodate bits we
> need to unlock and invalidate again.  This will keep this race from
> happening, since we will hold the extent locked until we create the
> ordered extent, and then the read side always waits for ordered extents.
> Thanks,

This patch still can not work well, because we don't update i_size in time:

	Writer			Worker			Reader
	lock_extent
	do direct io
				end io
				finish io
				unlock_extent
							lock_extent
							check the pos is beyond EOF or not
							beyond EOF, zero the page and set it uptodate
							unlock_extent
				update i_size

So I think we must update the i_size in time, and I wrote a small patch to do it:

diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 77d4ae8..7f05f77 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -5992,6 +5992,7 @@ static void btrfs_endio_direct_write(struct bio *bio, int err)
 	struct btrfs_ordered_extent *ordered = NULL;
 	u64 ordered_offset = dip->logical_offset;
 	u64 ordered_bytes = dip->bytes;
+	u64 i_size;
 	int ret;
 
 	if (err)
@@ -6003,6 +6004,11 @@ again:
 	if (!ret)
 		goto out_test;
 
+	/* We don't worry the file truncation because we hold i_mutex now. */
+	i_size = ordered->file_offset + ordered->len;
+	if (i_size > i_size_read(inode))
+		i_size_write(inode, ordered->file_offset + ordered->len);
+
 	ordered->work.func = finish_ordered_fn;
 	ordered->work.flags = 0;
 	btrfs_queue_worker(&root->fs_info->endio_write_workers,

----
After applying your patch (the second version) and this patch, all my tests
passed.
But I still think updating the pages is a better way to fix this problem,
because it doesn't need to invalidate the pages again and again, and doesn't
waste lots of time.  Besides that, there is no rule saying direct io should
not touch the pages, so since we can not invalidate the pages at once, just
update them.  And the race problem between aio and dio can be fixed
completely.

Thanks
Miao

> Signed-off-by: Josef Bacik <josef@redhat.com>
> ---
> V1->V2
> -Use invalidate_inode_pages2_range since it will actually unmap existing pages
> -Do a filemap_write_and_wait_range in case of mmap
> [...]
Josef Bacik
2012-Jun-28 12:34 UTC
Re: [PATCH] Btrfs: fix dio write vs buffered read race V2
On Wed, Jun 27, 2012 at 09:35:08PM -0600, Miao Xie wrote:
> On Tue, 26 Jun 2012 09:42:56 -0400, Josef Bacik wrote:
> > From: Josef Bacik <josef@redhat.com>
> > [...]
>
> This patch still can not work well, because we don't update i_size in time:
>
> 	Writer			Worker			Reader
> 	lock_extent
> 	do direct io
> 				end io
> 				finish io
> 				unlock_extent
> 							lock_extent
> 							check the pos is beyond EOF or not
> 							beyond EOF, zero the page and set it uptodate
> 							unlock_extent
> 				update i_size
>
> So I think we must update the i_size in time, and I wrote a small patch to
> do it:

We should probably be updating i_size when we create an extent past EOF in
the write stuff, not during endio.  I will work this out and fold it into my
patch.  Good catch.

> [...]
>
> After applying your patch (the second version) and this patch, all my
> tests passed.
>
> But I still think updating the pages is a better way to fix this problem,
> because it doesn't need to invalidate the pages again and again, and
> doesn't waste lots of time. [...]

Except that your way makes us unconditionally search through the page cache
for every DIO write, whereas my patch only causes multiple invalidations if
somebody is mixing buffered reads with direct writes, and if they are doing
that they deserve to be punished ;).  Thanks,

Josef
On Thu, 28 Jun 2012 08:34:23 -0400, Josef Bacik wrote:
> On Wed, Jun 27, 2012 at 09:35:08PM -0600, Miao Xie wrote:
>> On Tue, 26 Jun 2012 09:42:56 -0400, Josef Bacik wrote:
>>> [...]
>>
>> This patch still can not work well, because we don't update i_size in
>> time. [...]
>>
>> So I think we must update the i_size in time, and I wrote a small patch
>> to do it:
>
> We should probably be updating i_size when we create an extent past EOF in
> the write stuff, not during endio.  I will work this out and fold it into
> my patch.  Good catch.

It is better to update i_size in endio, I think, because during endio we are
sure that the data has been flushed to disk successfully and can update
i_size at ease; and if an error happens while flushing the data to disk, we
also needn't reset i_size.

Thanks
Miao
Chris Mason
2012-Jun-29 13:05 UTC
Re: [PATCH] Btrfs: fix dio write vs buffered read race V2
On Thu, Jun 28, 2012 at 08:18:35PM -0600, Miao Xie wrote:
> On Thu, 28 Jun 2012 08:34:23 -0400, Josef Bacik wrote:
> > On Wed, Jun 27, 2012 at 09:35:08PM -0600, Miao Xie wrote:
> > > [...]
> > >
> > > So I think we must update the i_size in time, and I wrote a small
> > > patch to do it:
> >
> > We should probably be updating i_size when we create an extent past EOF
> > in the write stuff, not during endio.  I will work this out and fold it
> > into my patch.  Good catch.
>
> It is better to update i_size in endio, I think, because during endio we
> are sure that the data has been flushed to disk successfully and can
> update i_size at ease; and if an error happens while flushing the data to
> disk, we also needn't reset i_size.

I think the i_size update should happen sooner.  The rest of the filesystems
work that way, and it will have fewer interaction problems with the VM.

-chris
On Fri, 29 Jun 2012 09:05:10 -0400, Chris Mason wrote:
> On Thu, Jun 28, 2012 at 08:18:35PM -0600, Miao Xie wrote:
>> On Thu, 28 Jun 2012 08:34:23 -0400, Josef Bacik wrote:
>>> [...]
>>
>> It is better to update i_size in endio, I think, because during endio we
>> are sure that the data has been flushed to disk successfully and can
>> update i_size at ease; and if an error happens while flushing the data to
>> disk, we also needn't reset i_size.
>
> I think the i_size update should happen sooner.  The rest of the
> filesystems work that way, and it will have fewer interaction problems
> with the VM.

Thanks for your explanation.

Regards
Miao
Christoph Hellwig
2012-Jul-02 06:19 UTC
Re: [PATCH] Btrfs: fix dio write vs buffered read race V2
On Fri, Jun 29, 2012 at 09:05:10AM -0400, Chris Mason wrote:
> > It is better to update i_size in endio, I think, because during endio we
> > are sure that the data has been flushed to disk successfully and can
> > update i_size at ease; and if an error happens while flushing the data
> > to disk, we also needn't reset i_size.
>
> I think the i_size update should happen sooner.  The rest of the
> filesystems work that way, and it will have fewer interaction problems
> with the VM.

FYI: XFS only updates i_size in the end_io handler.