Displaying 20 results from an estimated 151 matches for "trylock_page".
2019 Apr 24
1
[PATCH v3 1/4] mm/balloon_compaction: list interfaces
...ing the 'page' when we get around to
> + * establishing additional references. We should be the only one
> + * holding a reference to the 'page' at this point. If we are not, then
> + * memory corruption is possible and we should stop execution.
> + */
> + BUG_ON(!trylock_page(page));
> + list_del(&page->lru);
> + balloon_page_insert(b_dev_info, page);
> + unlock_page(page);
> + __count_vm_event(BALLOON_INFLATE);
> +}
> +
> +/**
> + * balloon_page_list_enqueue() - inserts a list of pages into the balloon page
> + * list.
> + * @b_...
2019 Apr 23
0
[PATCH v3 1/4] mm/balloon_compaction: list interfaces
...Block others from accessing the 'page' when we get around to
+ * establishing additional references. We should be the only one
+ * holding a reference to the 'page' at this point. If we are not, then
+ * memory corruption is possible and we should stop execution.
+ */
+ BUG_ON(!trylock_page(page));
+ list_del(&page->lru);
+ balloon_page_insert(b_dev_info, page);
+ unlock_page(page);
+ __count_vm_event(BALLOON_INFLATE);
+}
+
+/**
+ * balloon_page_list_enqueue() - inserts a list of pages into the balloon page
+ * list.
+ * @b_dev_info: balloon device descriptor where we will...
2019 Apr 25
0
[PATCH v4 1/4] mm/balloon_compaction: List interfaces
...Block others from accessing the 'page' when we get around to
+ * establishing additional references. We should be the only one
+ * holding a reference to the 'page' at this point. If we are not, then
+ * memory corruption is possible and we should stop execution.
+ */
+ BUG_ON(!trylock_page(page));
+ list_del(&page->lru);
+ balloon_page_insert(b_dev_info, page);
+ unlock_page(page);
+ __count_vm_event(BALLOON_INFLATE);
+}
+
+/**
+ * balloon_page_list_enqueue() - inserts a list of pages into the balloon page
+ * list.
+ * @b_dev_info: balloon device descriptor where we will...
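The enqueue helper shown above (in its v3 and v4 forms) is the per-page half of the new list interface. For orientation, here is a hypothetical driver-side sketch of how the batched call announced in the kerneldoc might be consumed; balloon_page_alloc() and the assumed signature of balloon_page_list_enqueue() (device descriptor plus a list head, returning the number of pages enqueued) are inferred from this series, not quoted from it.

/*
 * Hypothetical inflate path: collect freshly allocated pages on a local
 * list, then enqueue the whole list in one call so pages_lock is taken
 * once per batch rather than once per page. A sketch only, not patch code.
 */
static size_t inflate_batch_sketch(struct balloon_dev_info *b_dev_info,
                                   unsigned int nr_pages)
{
        LIST_HEAD(pages);
        struct page *page;
        unsigned int i;

        for (i = 0; i < nr_pages; i++) {
                page = balloon_page_alloc();
                if (!page)
                        break;
                list_add(&page->lru, &pages);
        }

        /* Every page on 'pages' moves onto b_dev_info->pages here. */
        return balloon_page_list_enqueue(b_dev_info, &pages);
}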
2019 Feb 07
0
[PATCH 3/6] mm/balloon_compaction: list interfaces
..._info,
> + struct page *page)
> +{
> + /*
> + * Block others from accessing the 'page' when we get around to
> + * establishing additional references. We should be the only one
> + * holding a reference to the 'page' at this point.
> + */
> + if (!trylock_page(page)) {
> + WARN_ONCE(1, "balloon inflation failed to enqueue page\n");
> + return -EFAULT;
> + }
> + list_del(&page->lru);
> + balloon_page_insert(b_dev_info, page);
> + unlock_page(page);
> + __count_vm_event(BALLOON_INFLATE);
> + return 0;
> +}
>...
2019 Apr 19
0
[PATCH v2 1/4] mm/balloon_compaction: list interfaces
..._info,
> + struct page *page)
> +{
> + /*
> + * Block others from accessing the 'page' when we get around to
> + * establishing additional references. We should be the only one
> + * holding a reference to the 'page' at this point.
> + */
> + if (!trylock_page(page)) {
> + WARN_ONCE(1, "balloon inflation failed to enqueue page\n");
> + return -EFAULT;
Looks like all callers bug on a failure. So let's just do it here,
and then make this void?
> + }
> + list_del(&page->lru);
> + balloon_page_insert(b_dev_info, page);...
2016 Apr 04
1
[PATCH v3 02/16] mm/compaction: support non-lru movable page migration
...goto out;
>
> a_op = mapping->a_op;
> if (!a_op)
> goto out;
> if (a_op->isolate_page)
> ret = 1;
> out:
> return ret;
>
> }
>
> It works under PG_lock, but with this we need trylock_page to peek at
> whether it's a movable non-lru page or not when scanning pfns.
Hm I hoped that with READ_ONCE() we could do the peek safely without
trylock_page, if we use it only as a heuristic. But I guess it would
require at least RCU-level protection of the
page->mapping->a_op->isolate_pag...
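The fragment above is pseudocode; spelled out as a self-contained helper, the peek under discussion might look like the sketch below. The helper name is hypothetical, the a_ops/isolate_page names follow the quoted mail (which abbreviates the field as a_op), and the later postings in this series additionally tag page->mapping with a low bit (see __PageMovable() in the v6 excerpts further down), which this sketch ignores.

/*
 * Sketch of the 'is this a movable non-lru page?' peek done under the page
 * lock, as the quoted discussion describes. Illustrative only.
 */
static bool page_is_movable_nonlru(struct page *page)
{
        struct address_space *mapping;
        bool ret = false;

        /* PG_lock keeps page->mapping stable while we look at it. */
        if (!trylock_page(page))
                return false;

        mapping = page->mapping;
        if (mapping && mapping->a_ops && mapping->a_ops->isolate_page)
                ret = true;

        unlock_page(page);
        return ret;
}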
2016 Jun 13
2
[PATCH v6v3 02/12] mm: migrate: support non-lru movable page migration
On 05/31/2016 05:31 AM, Minchan Kim wrote:
> @@ -791,6 +921,7 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
> int rc = -EAGAIN;
> int page_was_mapped = 0;
> struct anon_vma *anon_vma = NULL;
> + bool is_lru = !__PageMovable(page);
>
> if (!trylock_page(page)) {
> if (!force || mode == MIGRATE_ASYNC)
> @@ -871,6 +1002,11 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
> goto out_unlock_both;
> }
>
> + if (unlikely(!is_lru)) {
> + rc = move_to_new_page(newpage, page, mode);
> + goto out_un...
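The two hunks are separated by elided context; a condensed, hypothetical reconstruction of the control flow they introduce (not the exact upstream __unmap_and_move()) is:

static int unmap_and_move_sketch(struct page *page, struct page *newpage,
                                 int force, enum migrate_mode mode)
{
        int rc = -EAGAIN;
        bool is_lru = !__PageMovable(page);

        if (!trylock_page(page)) {
                /* Opportunistic callers give up rather than block. */
                if (!force || mode == MIGRATE_ASYNC)
                        return rc;
                lock_page(page);
        }

        if (unlikely(!is_lru)) {
                /* Driver-owned movable page: skip the rmap/unmap path. */
                rc = move_to_new_page(newpage, page, mode);
                goto out_unlock;
        }

        /* ... the usual LRU path: try_to_unmap(), move_to_new_page() ... */
out_unlock:
        unlock_page(page);
        return rc;
}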
2016 Mar 21
0
[PATCH v2 17/18] zsmalloc: migrate tail pages in zspage
...ULL);
}
+int trylock_zspage(struct page *first_page, struct page *locked_page)
+{
+ struct page *cursor, *fail;
+
+ VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
+
+ for (cursor = first_page; cursor != NULL; cursor =
+ get_next_page(cursor)) {
+ if (cursor != locked_page) {
+ if (!trylock_page(cursor)) {
+ fail = cursor;
+ goto unlock;
+ }
+ }
+ }
+
+ return 1;
+unlock:
+ for (cursor = first_page; cursor != fail; cursor =
+ get_next_page(cursor)) {
+ if (cursor != locked_page)
+ unlock_page(cursor);
+ }
+
+ return 0;
+}
+
+void unlock_zspage(struct page *first_page, struct...
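A hypothetical caller of the pair above could look like the following; the name is illustrative, and the second argument to unlock_zspage() is an assumption since the excerpt truncates its signature.

static int lock_and_migrate_sketch(struct page *first_page,
                                   struct page *locked_page)
{
        /* Try to lock every subpage of the zspage except the one we hold. */
        if (!trylock_zspage(first_page, locked_page))
                return -EAGAIN;        /* a tail page is contended; retry later */

        /* ... relocate objects while every subpage is locked ... */

        unlock_zspage(first_page, locked_page);
        return 0;
}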
2019 Apr 19
0
[PATCH v2 1/4] mm/balloon_compaction: list interfaces
...>> + /*
> >> + * Block others from accessing the 'page' when we get around to
> >> + * establishing additional references. We should be the only one
> >> + * holding a reference to the 'page' at this point.
> >> + */
> >> + if (!trylock_page(page)) {
> >> + WARN_ONCE(1, "balloon inflation failed to enqueue page\n");
> >> + return -EFAULT;
> >
> > Looks like all callers bug on a failure. So let's just do it here,
> > and then make this void?
>
> As you noted below, actually bal...
2019 Apr 23
5
[PATCH v3 0/4] vmw_balloon: compaction and shrinker support
VMware balloon enhancements: adding support for memory compaction,
memory shrinker (to prevent OOM) and splitting of refused pages to
prevent recurring inflations.
Patches 1-2: Support for compaction
Patch 3: Support for memory shrinker - disabled by default
Patch 4: Split refused pages to improve performance
v2->v3:
* Fixing wrong argument type (int->size_t) [Michael]
* Fixing a comment
2016 Jan 01
5
[PATCH 2/2] virtio_balloon: fix race between migration and ballooning
...first_entry(&b_dev_info->pages, typeof(*page), lru);
+ /* move to processed list to avoid going over it another time */
+ list_move(&page->lru, &processed);
+
+ if (!get_page_unless_zero(page))
+ continue;
+ /*
+ * pages_lock nests within page lock,
+ * so drop it before trylock_page
+ */
+ spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);
+
/*
* Block others from accessing the 'page' while we get around
* establishing additional references and preparing the 'page'
@@ -72,6 +94,7 @@ struct page *balloon_page_dequeue(struct balloon_dev_...
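The comment about lock nesting is the heart of this fix: pages_lock nests within the page lock, so the spinlock must be released before trylock_page() is attempted, and the reference from get_page_unless_zero() is what keeps the page alive across that window. A condensed sketch of the dequeue loop being modified (variable names follow the quoted patch; this is not the exact upstream code):

static struct page *dequeue_one_sketch(struct balloon_dev_info *b_dev_info)
{
        struct page *page;
        unsigned long flags;
        LIST_HEAD(processed);

        spin_lock_irqsave(&b_dev_info->pages_lock, flags);
        while (!list_empty(&b_dev_info->pages)) {
                page = list_first_entry(&b_dev_info->pages, typeof(*page), lru);
                /* Park it on a private list so this pass never rescans it. */
                list_move(&page->lru, &processed);

                /* Skip a page that is already being torn down. */
                if (!get_page_unless_zero(page))
                        continue;

                /* pages_lock nests within the page lock: drop it first. */
                spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);

                if (trylock_page(page)) {
                        /* Take it off our lists for good; it leaves the balloon. */
                        spin_lock_irqsave(&b_dev_info->pages_lock, flags);
                        list_del(&page->lru);
                        list_splice(&processed, &b_dev_info->pages);
                        spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);
                        unlock_page(page);
                        return page;    /* ref from get_page_unless_zero() pins it */
                }

                /* Lost the race with migration; drop the ref and move on. */
                put_page(page);
                spin_lock_irqsave(&b_dev_info->pages_lock, flags);
        }
        list_splice(&processed, &b_dev_info->pages);
        spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);
        return NULL;
}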
2016 Mar 21
2
[PATCH v2 17/18] zsmalloc: migrate tail pages in zspage
Hi Minchan,
[auto build test WARNING on next-20160318]
[cannot apply to v4.5-rc7 v4.5-rc6 v4.5-rc5 v4.5]
[if your patch is applied to the wrong git tree, please drop us a note to help improving the system]
url: https://github.com/0day-ci/linux/commits/Minchan-Kim/Support-non-lru-page-migration/20160321-143339
coccinelle warnings: (new ones prefixed by >>)
>>
2016 Mar 23
1
[PATCH v2 13/18] mm/compaction: support non-lru movable page migration
...g the ref counter, we need to re-check PageMovable()
> > to ensure that we indeed handle PageMovable() type page. Without it,
> > the page we handle can be freed and re-allocated to someone else
> > that isn't related to PageMovable() before grabbing the page. Trying
> > trylock_page() in this case could cause a problem.
>
> I don't get it. Why do you think trylock_page could cause a problem?
> Could you elaborate it more?
Okay. Consider the following sequence.
CPU-A CPU-B
check PageMovable() in compaction
......
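The (truncated) two-CPU sequence is arguing for an ordering in which the reference is taken first and the movable check is repeated once the reference is held, so the page cannot be freed and handed to an unrelated owner before trylock_page() runs. A hypothetical sketch of that safe ordering (the function name is illustrative, and __PageMovable() is used as the lockless form the later postings adopt, where this thread still says PageMovable()):

static bool isolate_movable_sketch(struct page *page)
{
        /* 1) Pin the page first so it cannot be freed and recycled. */
        if (!get_page_unless_zero(page))
                return false;

        /* 2) Re-check the type now that the reference is held. */
        if (!__PageMovable(page))
                goto put;

        /* 3) Only now is it safe to try the page lock. */
        if (!trylock_page(page))
                goto put;

        /* ... call the driver's isolate callback under the page lock ... */

        unlock_page(page);
        return true;    /* the reference from step 1 keeps the page pinned */
put:
        put_page(page);
        return false;
}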
2016 Jun 15
2
[PATCH v6v3 02/12] mm: migrate: support non-lru movable page migration
...ge *page, struct page *newpage,
>>> > > int rc = -EAGAIN;
>>> > > int page_was_mapped = 0;
>>> > > struct anon_vma *anon_vma = NULL;
>>> > > + bool is_lru = !__PageMovable(page);
>>> > >
>>> > > if (!trylock_page(page)) {
>>> > > if (!force || mode == MIGRATE_ASYNC)
>>> > > @@ -871,6 +1002,11 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
>>> > > goto out_unlock_both;
>>> > > }
>>> > >
>>>...