Christoph Hellwig
2023-Mar-07 14:31 UTC
[Ocfs2-devel] remove most callers of write_one_page v4
Hi all,

this series removes most users of the write_one_page API. These helpers
internally call ->writepage, which we are gradually removing from the
kernel.

Changes since v3:
 - drop all patches merged in v6.3-rc1
 - re-add the jfs patch

Changes since v2:
 - more minix error handling fixes

Changes since v1:
 - drop the btrfs changes (queued up in the btrfs tree)
 - drop the final move to jfs (can't be done without the btrfs patches)
 - fix the existing minix code to properly propagate errors

Diffstat:
 fs/jfs/jfs_metapage.c   | 39 ++++++++++++++++++++++++++++++++++-----
 fs/ocfs2/refcounttree.c |  9 +++++----
 fs/ufs/dir.c            | 29 +++++++++++++++++++----------
 include/linux/pagemap.h |  6 ------
 mm/page-writeback.c     | 40 ----------------------------------------
 5 files changed, 58 insertions(+), 65 deletions(-)
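[Editor's note] For reference, write_one_page() is a trivial wrapper around folio_write_one(), which pushes a single locked page through the mapping's ->writepage method and waits for the I/O (the full definition is removed in patch 3 below). The conversions in this series either open-code that logic in the filesystem or switch to ranged writeback on the mapping. A minimal sketch of the replacement pattern follows; example_flush_range(), pos and len are placeholder names for illustration only, not part of any patch:

	/* Old API (removed by this series): feed one locked page to ->writepage. */
	static inline int __must_check write_one_page(struct page *page)
	{
		return folio_write_one(page_folio(page));
	}

	/*
	 * Typical replacement (sketch): with the page lock already dropped,
	 * write back and wait on the byte range that was just dirtied.
	 * The end offset passed to filemap_write_and_wait_range() is inclusive.
	 */
	static int example_flush_range(struct address_space *mapping,
				       loff_t pos, unsigned int len)
	{
		return filemap_write_and_wait_range(mapping, pos, pos + len - 1);
	}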
Christoph Hellwig
2023-Mar-07 14:31 UTC
[Ocfs2-devel] [PATCH 1/3] ufs: don't flush page immediately for DIRSYNC directories
We do not need to write out modified directory blocks immediately when
modifying them while the page is locked. It is enough to do the flush
somewhat later, which has the added benefit that inode times can be
flushed as well. It also allows us to stop depending on the
write_one_page() function.

Ported from an ext2 patch by Jan Kara.

Signed-off-by: Christoph Hellwig <hch at lst.de>
---
 fs/ufs/dir.c | 29 +++++++++++++++++++----------
 1 file changed, 19 insertions(+), 10 deletions(-)

diff --git a/fs/ufs/dir.c b/fs/ufs/dir.c
index 391efaf1d52897..379d75796a5ce3 100644
--- a/fs/ufs/dir.c
+++ b/fs/ufs/dir.c
@@ -42,11 +42,10 @@ static inline int ufs_match(struct super_block *sb, int len,
 	return !memcmp(name, de->d_name, len);
 }
 
-static int ufs_commit_chunk(struct page *page, loff_t pos, unsigned len)
+static void ufs_commit_chunk(struct page *page, loff_t pos, unsigned len)
 {
 	struct address_space *mapping = page->mapping;
 	struct inode *dir = mapping->host;
-	int err = 0;
 
 	inode_inc_iversion(dir);
 	block_write_end(NULL, mapping, pos, len, len, page, NULL);
@@ -54,10 +53,16 @@ static int ufs_commit_chunk(struct page *page, loff_t pos, unsigned len)
 		i_size_write(dir, pos+len);
 		mark_inode_dirty(dir);
 	}
-	if (IS_DIRSYNC(dir))
-		err = write_one_page(page);
-	else
-		unlock_page(page);
+	unlock_page(page);
+}
+
+static int ufs_handle_dirsync(struct inode *dir)
+{
+	int err;
+
+	err = filemap_write_and_wait(dir->i_mapping);
+	if (!err)
+		err = sync_inode_metadata(dir, 1);
 	return err;
 }
 
@@ -99,11 +104,12 @@ void ufs_set_link(struct inode *dir, struct ufs_dir_entry *de,
 	de->d_ino = cpu_to_fs32(dir->i_sb, inode->i_ino);
 	ufs_set_de_type(dir->i_sb, de, inode->i_mode);
 
-	err = ufs_commit_chunk(page, pos, len);
+	ufs_commit_chunk(page, pos, len);
 	ufs_put_page(page);
 	if (update_times)
 		dir->i_mtime = dir->i_ctime = current_time(dir);
 	mark_inode_dirty(dir);
+	ufs_handle_dirsync(dir);
 }
 
@@ -390,10 +396,11 @@ int ufs_add_link(struct dentry *dentry, struct inode *inode)
 	de->d_ino = cpu_to_fs32(sb, inode->i_ino);
 	ufs_set_de_type(sb, de, inode->i_mode);
 
-	err = ufs_commit_chunk(page, pos, rec_len);
+	ufs_commit_chunk(page, pos, rec_len);
 	dir->i_mtime = dir->i_ctime = current_time(dir);
 
 	mark_inode_dirty(dir);
+	err = ufs_handle_dirsync(dir);
 	/* OFFSET_CACHE */
 out_put:
 	ufs_put_page(page);
@@ -531,9 +538,10 @@ int ufs_delete_entry(struct inode *inode, struct ufs_dir_entry *dir,
 	if (pde)
 		pde->d_reclen = cpu_to_fs16(sb, to - from);
 	dir->d_ino = 0;
-	err = ufs_commit_chunk(page, pos, to - from);
+	ufs_commit_chunk(page, pos, to - from);
 	inode->i_ctime = inode->i_mtime = current_time(inode);
 	mark_inode_dirty(inode);
+	err = ufs_handle_dirsync(inode);
 out:
 	ufs_put_page(page);
 	UFSD("EXIT\n");
@@ -579,7 +587,8 @@ int ufs_make_empty(struct inode * inode, struct inode *dir)
 	strcpy (de->d_name, "..");
 	kunmap(page);
 
-	err = ufs_commit_chunk(page, 0, chunk_size);
+	ufs_commit_chunk(page, 0, chunk_size);
+	err = ufs_handle_dirsync(inode);
 fail:
 	put_page(page);
 	return err;
-- 
2.39.1
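[Editor's note] Condensed from the converted callers above, the resulting sequence looks roughly like this (a sketch, not an additional hunk): the directory block is committed and the page unlocked first, and the data plus the just-updated inode timestamps are then flushed together, which is why the inode times can now reach disk as part of the same flush.

	ufs_commit_chunk(page, pos, len);	/* block_write_end() + unlock_page() */
	dir->i_mtime = dir->i_ctime = current_time(dir);
	mark_inode_dirty(dir);
	err = ufs_handle_dirsync(dir);		/* filemap_write_and_wait() + sync_inode_metadata() */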
Christoph Hellwig
2023-Mar-07 14:31 UTC
[Ocfs2-devel] [PATCH 2/3] ocfs2: don't use write_one_page in ocfs2_duplicate_clusters_by_page
Use filemap_write_and_wait_range to write back the range of the dirty
page instead of write_one_page, in preparation for removing
write_one_page and eventually ->writepage.

Signed-off-by: Christoph Hellwig <hch at lst.de>
Reviewed-by: Jan Kara <jack at suse.cz>
Reviewed-by: Joseph Qi <joseph.qi at linux.alibaba.com>
---
 fs/ocfs2/refcounttree.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/fs/ocfs2/refcounttree.c b/fs/ocfs2/refcounttree.c
index 5a656dc683f108..564ab48d03effa 100644
--- a/fs/ocfs2/refcounttree.c
+++ b/fs/ocfs2/refcounttree.c
@@ -2952,10 +2952,11 @@ int ocfs2_duplicate_clusters_by_page(handle_t *handle,
 		 */
 		if (PAGE_SIZE <= OCFS2_SB(sb)->s_clustersize) {
 			if (PageDirty(page)) {
-				/*
-				 * write_on_page will unlock the page on return
-				 */
-				ret = write_one_page(page);
+				unlock_page(page);
+				put_page(page);
+
+				ret = filemap_write_and_wait_range(mapping,
+						offset, map_end - 1);
 				goto retry;
 			}
 		}
-- 
2.39.1
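[Editor's note] The ordering above matters: the writeback path behind filemap_write_and_wait_range() locks each dirty page it writes, so the caller has to drop the page lock (and its page reference, since the code jumps back to re-look the page up) before issuing the ranged flush. For reference, the prototype and range semantics of the replacement call, as used in the hunk above:

	/* lstart and lend are byte offsets into the mapping; lend is inclusive. */
	int filemap_write_and_wait_range(struct address_space *mapping,
					 loff_t lstart, loff_t lend);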
Christoph Hellwig
2023-Mar-07 14:31 UTC
[Ocfs2-devel] [PATCH 3/3] mm, jfs: move write_one_page/folio_write_one to jfs
The last remaining user of folio_write_one through the write_one_page
wrapper is jfs, so move the functionality there and hard code the call
to metapage_writepage.

Note that the use of the pagecache by the JFS 'metapage' buffer cache
is a bit odd, and we could probably do without VM-level dirty tracking
at all, but that's a change for another time.

Signed-off-by: Christoph Hellwig <hch at lst.de>
Acked-by: Dave Kleikamp <dave.kleikamp at oracle.com>
---
 fs/jfs/jfs_metapage.c   | 39 ++++++++++++++++++++++++++++++++++-----
 include/linux/pagemap.h |  6 ------
 mm/page-writeback.c     | 40 ----------------------------------------
 3 files changed, 34 insertions(+), 51 deletions(-)

diff --git a/fs/jfs/jfs_metapage.c b/fs/jfs/jfs_metapage.c
index 2e8461ce74de69..961569c1115901 100644
--- a/fs/jfs/jfs_metapage.c
+++ b/fs/jfs/jfs_metapage.c
@@ -691,6 +691,35 @@ void grab_metapage(struct metapage * mp)
 	unlock_page(mp->page);
 }
 
+static int metapage_write_one(struct page *page)
+{
+	struct folio *folio = page_folio(page);
+	struct address_space *mapping = folio->mapping;
+	struct writeback_control wbc = {
+		.sync_mode = WB_SYNC_ALL,
+		.nr_to_write = folio_nr_pages(folio),
+	};
+	int ret = 0;
+
+	BUG_ON(!folio_test_locked(folio));
+
+	folio_wait_writeback(folio);
+
+	if (folio_clear_dirty_for_io(folio)) {
+		folio_get(folio);
+		ret = metapage_writepage(page, &wbc);
+		if (ret == 0)
+			folio_wait_writeback(folio);
+		folio_put(folio);
+	} else {
+		folio_unlock(folio);
+	}
+
+	if (!ret)
+		ret = filemap_check_errors(mapping);
+	return ret;
+}
+
 void force_metapage(struct metapage *mp)
 {
 	struct page *page = mp->page;
@@ -700,8 +729,8 @@ void force_metapage(struct metapage *mp)
 	get_page(page);
 	lock_page(page);
 	set_page_dirty(page);
-	if (write_one_page(page))
-		jfs_error(mp->sb, "write_one_page() failed\n");
+	if (metapage_write_one(page))
+		jfs_error(mp->sb, "metapage_write_one() failed\n");
 	clear_bit(META_forcewrite, &mp->flag);
 	put_page(page);
 }
@@ -746,9 +775,9 @@ void release_metapage(struct metapage * mp)
 		set_page_dirty(page);
 		if (test_bit(META_sync, &mp->flag)) {
 			clear_bit(META_sync, &mp->flag);
-			if (write_one_page(page))
-				jfs_error(mp->sb, "write_one_page() failed\n");
-			lock_page(page); /* write_one_page unlocks the page */
+			if (metapage_write_one(page))
+				jfs_error(mp->sb, "metapage_write_one() failed\n");
+			lock_page(page);
 		}
 	} else if (mp->lsn)	/* discard_metapage doesn't remove it */
 		remove_from_logsync(mp);
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 0acb8e1fb7afdc..853184a46411f4 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -1066,12 +1066,6 @@ static inline void folio_cancel_dirty(struct folio *folio)
 bool folio_clear_dirty_for_io(struct folio *folio);
 bool clear_page_dirty_for_io(struct page *page);
 void folio_invalidate(struct folio *folio, size_t offset, size_t length);
-int __must_check folio_write_one(struct folio *folio);
-static inline int __must_check write_one_page(struct page *page)
-{
-	return folio_write_one(page_folio(page));
-}
-
 int __set_page_dirty_nobuffers(struct page *page);
 
 bool noop_dirty_folio(struct address_space *mapping, struct folio *folio);
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 516b1aa247e83f..db794399900734 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2583,46 +2583,6 @@ int do_writepages(struct address_space *mapping, struct writeback_control *wbc)
 	return ret;
 }
 
-/**
- * folio_write_one - write out a single folio and wait on I/O.
- * @folio: The folio to write.
- *
- * The folio must be locked by the caller and will be unlocked upon return.
- *
- * Note that the mapping's AS_EIO/AS_ENOSPC flags will be cleared when this
- * function returns.
- *
- * Return: %0 on success, negative error code otherwise
- */
-int folio_write_one(struct folio *folio)
-{
-	struct address_space *mapping = folio->mapping;
-	int ret = 0;
-	struct writeback_control wbc = {
-		.sync_mode = WB_SYNC_ALL,
-		.nr_to_write = folio_nr_pages(folio),
-	};
-
-	BUG_ON(!folio_test_locked(folio));
-
-	folio_wait_writeback(folio);
-
-	if (folio_clear_dirty_for_io(folio)) {
-		folio_get(folio);
-		ret = mapping->a_ops->writepage(&folio->page, &wbc);
-		if (ret == 0)
-			folio_wait_writeback(folio);
-		folio_put(folio);
-	} else {
-		folio_unlock(folio);
-	}
-
-	if (!ret)
-		ret = filemap_check_errors(mapping);
-	return ret;
-}
-EXPORT_SYMBOL(folio_write_one);
-
 /*
  * For address_spaces which do not use buffers nor write back.
  */
-- 
2.39.1
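[Editor's note] Side by side, the only substantive difference between the removed generic helper and the new jfs-private copy is the devirtualized writeback call; both lines appear in the hunks above. Keeping the rest of the lock/dirty/writeback dance identical means jfs behaviour is unchanged, while the last generic user of the ->writepage method lookup goes away.

	/* mm/page-writeback.c (removed): go through the address_space operation. */
	ret = mapping->a_ops->writepage(&folio->page, &wbc);

	/* fs/jfs/jfs_metapage.c (added): call the jfs writepage directly. */
	ret = metapage_writepage(page, &wbc);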