Displaying 11 results from an estimated 11 matches for "xfs_break_dax_layout".
2020 Sep 26
1
[PATCH 1/2] ext4/xfs: add page refcount helper
...alph Campbell wrote:
> error = ___wait_var_event(&page->_refcount,
> - atomic_read(&page->_refcount) == 1,
> + dax_layout_is_idle_page(page),
> TASK_INTERRUPTIBLE, 0, 0,
> ext4_wait_dax_page(ei));
> +++ b/fs/xfs/xfs_file.c
> @@ -750,7 +750,7 @@ xfs_break_dax_layouts(
>
> *retry = true;
> return ___wait_var_event(&page->_refcount,
> - atomic_read(&page->_refcount) == 1, TASK_INTERRUPTIBLE,
> + dax_layout_is_idle_page(page), TASK_INTERRUPTIBLE,
> 0, 0, xfs_wait_dax_page(inode));
> }
I still think a little helper...
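The dax_layout_is_idle_page() helper the hunks switch to is truncated out of this snippet. A minimal sketch of what it could look like, assuming it simply wraps the refcount == 1 test that the removed lines open-code (the actual definition in the patch may differ):

/* hypothetical reconstruction of the helper added in include/linux/dax.h */
static inline bool dax_layout_is_idle_page(struct page *page)
{
	/* same condition the ext4 and xfs call sites open-coded before */
	return page_ref_count(page) == 1;
}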
2020 Sep 16
1
[PATCH] mm: remove extra ZONE_DEVICE struct page refcount
On Mon, Sep 14, 2020 at 04:10:38PM -0700, Dan Williams wrote:
> You also need to fix up ext4_break_layouts() and
> xfs_break_dax_layouts() to expect ->_refcount is 0 instead of 1. This
> also needs some fstests exposure.
While we're at it, can we add a wait_fsdax_unref helper macro that hides
the _refcount access from the file systems?
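One possible shape for the helper being asked for here, keeping the ___wait_var_event() plumbing that ext4 and xfs already use; the macro name comes from this message and the idle predicate from the later refcount-helper patch, so treat it as a sketch rather than merged code:

/* sketch only: hides the ->_refcount access behind one macro */
#define wait_fsdax_unref(_inode, _page, _wait_cb)			\
	___wait_var_event(&(_page)->_refcount,				\
		dax_layout_is_idle_page(_page),				\
		TASK_INTERRUPTIBLE, 0, 0, _wait_cb(_inode))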
2020 Sep 25
1
[PATCH 1/2] ext4/xfs: add page refcount helper
...>_refcount) == 1,
+ dax_layout_is_idle_page(page),
TASK_INTERRUPTIBLE, 0, 0,
ext4_wait_dax_page(ei));
} while (error == 0);
diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index a29f78a663ca..29ab96541bc1 100644
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -750,7 +750,7 @@ xfs_break_dax_layouts(
*retry = true;
return ___wait_var_event(&page->_refcount,
- atomic_read(&page->_refcount) == 1, TASK_INTERRUPTIBLE,
+ dax_layout_is_idle_page(page), TASK_INTERRUPTIBLE,
0, 0, xfs_wait_dax_page(inode));
}
diff --git a/include/linux/dax.h b/include/linux/dax.h
index 4...
2020 Sep 17
0
[PATCH] mm: remove extra ZONE_DEVICE struct page refcount
On 9/15/20 11:10 PM, Christoph Hellwig wrote:
> On Mon, Sep 14, 2020 at 04:10:38PM -0700, Dan Williams wrote:
>> You also need to fix up ext4_break_layouts() and
>> xfs_break_dax_layouts() to expect ->_refcount is 0 instead of 1. This
>> also needs some fstests exposure.
>
> While we're at it, can we add a wait_fsdax_unref helper macro that hides
> the _refcount access from the file systems?
Sure. I'll add a separate patch for it in v2.
2020 Oct 01
0
[RFC PATCH v3 1/2] ext4/xfs: add page refcount helper
...TIBLE, 0, 0,
- ext4_wait_dax_page(ei));
+ error = dax_wait_page(ei, page, ext4_wait_dax_page);
} while (error == 0);
return error;
diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index 3d1b95124744..a5304aaeaa3a 100644
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -749,9 +749,7 @@ xfs_break_dax_layouts(
return 0;
*retry = true;
- return ___wait_var_event(&page->_refcount,
- atomic_read(&page->_refcount) == 1, TASK_INTERRUPTIBLE,
- 0, 0, xfs_wait_dax_page(inode));
+ return dax_wait_page(inode, page, xfs_wait_dax_page);
}
int
diff --git a/include/linux/dax.h b/include/...
2020 Sep 25
0
[PATCH 1/2] ext4/xfs: add page refcount helper
...TERRUPTIBLE, 0, 0,
> ext4_wait_dax_page(ei));
> } while (error == 0);
> diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
> index a29f78a663ca..29ab96541bc1 100644
> --- a/fs/xfs/xfs_file.c
> +++ b/fs/xfs/xfs_file.c
> @@ -750,7 +750,7 @@ xfs_break_dax_layouts(
>
> *retry = true;
> return ___wait_var_event(&page->_refcount,
> - atomic_read(&page->_refcount) == 1, TASK_INTERRUPTIBLE,
> + dax_layout_is_idle_page(page), TASK_INTERRUPTIBLE,
>...
2020 Sep 14
0
[PATCH] mm: remove extra ZONE_DEVICE struct page refcount
...with refcount==0 were on an lru list. Since
then, struct page has been reorganized to not collide the ->pgmap back
pointer with the ->lru list and there have been other cleanups for
page pinning that might make this incremental cleanup viable.
You also need to fix up ext4_break_layouts() and
xfs_break_dax_layouts() to expect ->_refcount is 0 instead of 1. This
also needs some fstests exposure.
> I have a modified THP migration patch series that applies on top of
> this one and is cleaner since I don't have to add code to handle the
> +1 reference count. The link below is for the earlier v2...
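A hypothetical illustration of the fix-up Dan asks for above: once the extra ZONE_DEVICE reference is removed, "idle" for the break-layouts wait becomes a refcount of zero rather than one (helper name invented here for illustration only):

/* hypothetical: the idle test ext4_break_layouts()/xfs_break_dax_layouts()
 * would wait on once the refcount bias is gone */
static inline bool dax_page_is_idle(struct page *page)
{
	return page_ref_count(page) == 0;
}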
2020 Sep 25
6
[RFC PATCH v2 0/2] mm: remove extra ZONE_DEVICE struct page refcount
Matthew Wilcox, Ira Weiny, and others have complained that ZONE_DEVICE
struct page reference counting is ugly because such pages are "free" when the
reference count is one instead of zero. This leads to explicit checks
for ZONE_DEVICE pages in places like put_page(), GUP, THP splitting, and
page migration which have to adjust the expected reference count when
determining if the page is
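As one example of the explicit checks the cover letter mentions, put_page() of that era had to divert ZONE_DEVICE pages before the normal refcount-to-zero path; this is roughly its shape, paraphrased from include/linux/mm.h rather than taken from the patch:

/* paraphrased sketch of the pre-patch special case in put_page() */
static inline void put_page(struct page *page)
{
	page = compound_head(page);

	/* devmap (ZONE_DEVICE) pages are "free" at refcount 1, not 0, so the
	 * 2 -> 1 transition is caught here and routed to the driver */
	if (page_is_devmap_managed(page)) {
		put_devmap_managed_page(page);
		return;
	}

	if (put_page_testzero(page))
		__put_page(page);
}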
2020 Sep 14
5
[PATCH] mm: remove extra ZONE_DEVICE struct page refcount
ZONE_DEVICE struct pages have an extra reference count that complicates the
code for put_page() and several places in the kernel that need to check the
reference count to see that a page is not being used (gup, compaction,
migration, etc.). Clean up the code so the reference count doesn't need to
be treated specially for ZONE_DEVICE.
Signed-off-by: Ralph Campbell <rcampbell at
2020 Sep 14
2
[PATCH] mm: remove extra ZONE_DEVICE struct page refcount
...u list. Since
> then, struct page has been reorganized to not collide the ->pgmap back
> pointer with the ->lru list and there have been other cleanups for
> page pinning that might make this incremental cleanup viable.
>
> You also need to fix up ext4_break_layouts() and
> xfs_break_dax_layouts() to expect ->_refcount is 0 instead of 1. This
> also needs some fstests exposure.
Got it. Thanks!
>> I have a modified THP migration patch series that applies on top of
>> this one and is cleaner since I don't have to add code to handle the
>> +1 reference count. Th...
2020 Oct 01
8
[RFC PATCH v3 0/2] mm: remove extra ZONE_DEVICE struct page refcount
This is still an RFC because after looking at the pmem/dax code some
more, I realized that the ZONE_DEVICE struct pages are being inserted
into the process' page tables with vmf_insert_mixed() and a zero
refcount on the ZONE_DEVICE struct page. This is sort of OK because
insert_pfn() increments the reference count on the pgmap which is what
prevents memunmap_pages() from freeing the struct
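For context on the pgmap reference the cover letter is describing, the usual pattern is that a caller pins the dev_pagemap while it uses the pages and memunmap_pages() waits for those references to drop; a sketch of that pattern using the existing memremap.h helpers (not code from this RFC):

/* illustrative pgmap pinning pattern; pfn is assumed to be a ZONE_DEVICE pfn */
struct dev_pagemap *pgmap = get_dev_pagemap(pfn, NULL);	/* percpu_ref_tryget_live() */
if (pgmap) {
	/* ... use the struct page for pfn ... */
	put_dev_pagemap(pgmap);	/* memunmap_pages() waits for the ref to drain */
}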