Displaying 20 results from an estimated 21 matches for "alloc_zspage".
2016 Mar 12
1
[PATCH v1 13/19] zsmalloc: factor page chain functionality out
On 2016/3/11 15:30, Minchan Kim wrote:
> For migration, we need to create the sub-page chain of a zspage
> dynamically, so this patch factors that logic out of alloc_zspage.
>
> As a minor refactoring, it also makes the OBJ_ALLOCATED_TAG
> assignment in obj_malloc clearer (this could be a separate patch,
> but it's trivial, so I folded it into this one).
>
> Signed-off-by: Minchan Kim <minchan at kernel.org>
> ---
> mm/zsmalloc.c | 78 ++++...
2016 Mar 14
0
[PATCH v1 13/19] zsmalloc: factor page chain functionality out
On Sat, Mar 12, 2016 at 11:09:36AM +0800, xuyiping wrote:
>
>
> On 2016/3/11 15:30, Minchan Kim wrote:
> >For migration, we need to create the sub-page chain of a zspage
> >dynamically, so this patch factors that logic out of alloc_zspage.
> >
> >As a minor refactoring, it also makes the OBJ_ALLOCATED_TAG
> >assignment in obj_malloc clearer (this could be a separate patch,
> >but it's trivial, so I folded it into this one).
> >
> >Signed-off-by: Minchan Kim <minchan at kernel.org>
> >---...
2016 Mar 11
0
[PATCH v1 13/19] zsmalloc: factor page chain functionality out
For migration, we need to create the sub-page chain of a zspage
dynamically, so this patch factors that logic out of alloc_zspage.
As a minor refactoring, it also makes the OBJ_ALLOCATED_TAG
assignment in obj_malloc clearer (this could be a separate patch,
but it's trivial, so I folded it into this one).
Signed-off-by: Minchan Kim <minchan at kernel.org>
---
mm/zsmalloc.c | 78 ++++++++++++++++++++++++++++++++++---------...
2016 Mar 30
0
[PATCH v3 10/16] zsmalloc: factor page chain functionality out
For migration, we need to create the sub-page chain of a zspage
dynamically, so this patch factors that logic out of alloc_zspage.
As a minor refactoring, it also makes the OBJ_ALLOCATED_TAG
assignment in obj_malloc clearer (this could be a separate patch,
but it's trivial, so I folded it into this one).
Signed-off-by: Minchan Kim <minchan at kernel.org>
---
mm/zsmalloc.c | 80 ++++++++++++++++++++++++++++++++++---------...
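A minimal sketch of the refactoring these "factor page chain functionality out" threads describe: the loop in alloc_zspage that wires freshly allocated pages into a zspage moves into a standalone helper, which migration can later call with its own set of pages. The helper name create_page_chain and the body below are an illustration reconstructed from the chain layout visible in the other excerpts on this page, not the patch's exact code.

/*
 * Link already-allocated pages into a zspage chain: the first page
 * carries PG_private and heads the zspage, its page_private points
 * at the second page, every tail page's page_private points back at
 * the head, pages from the third onward chain via the lru list, and
 * the last page is marked with PG_private_2.
 */
static void create_page_chain(struct page *pages[], int nr_pages)
{
	int i;
	struct page *page;
	struct page *prev_page = NULL;
	struct page *first_page = NULL;

	for (i = 0; i < nr_pages; i++) {
		page = pages[i];

		INIT_LIST_HEAD(&page->lru);
		if (i == 0) {			/* head of the zspage */
			SetPagePrivate(page);
			set_page_private(page, 0);
			first_page = page;
		}
		if (i == 1)			/* head records its first tail */
			set_page_private(first_page, (unsigned long)page);
		if (i >= 1)			/* tails point back at the head */
			set_page_private(page, (unsigned long)first_page);
		if (i >= 2)			/* later tails chain via lru */
			list_add(&page->lru, &prev_page->lru);
		if (i == nr_pages - 1)		/* mark the last page */
			SetPagePrivate2(page);
		prev_page = page;
	}
}

With this split, alloc_zspage reduces to allocating pages_per_zspage pages into a local array and calling the helper, and a migration path can rebuild a chain around replacement pages without duplicating the linking rules.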
2016 Mar 12
1
[PATCH v1 09/19] zsmalloc: keep max_object in size_class
...ess_group currfg, newfg;
>
> get_zspage_mapping(first_page, &class_idx, &currfg);
> - newfg = get_fullness_group(first_page);
> + newfg = get_fullness_group(class, first_page);
> if (newfg == currfg)
> goto out;
>
> @@ -1003,9 +1003,6 @@ static struct page *alloc_zspage(struct size_class *class, gfp_t flags)
> init_zspage(class, first_page);
>
> first_page->freelist = location_to_obj(first_page, 0);
> - /* Maximum number of objects we can store in this zspage */
> - first_page->objects = class->pages_per_zspage * PAGE_SIZE / class->...
2016 Mar 11
31
[PATCH v1 00/19] Support non-lru page migration
Recently, I got many reports about performance degradation
in embedded systems (Android mobile phones, webOS TVs and so on),
with fork failing easily.
The problem was fragmentation caused by zram and GPU driver
pages. Those pages cannot be migrated, so compaction cannot
work well either, and the reclaimer ends up shrinking all of the
working-set pages. This made the system very slow and even caused
fork to fail.
2016 Mar 30
0
[PATCH v3 09/16] zsmalloc: move struct zs_meta from mapping to freelist
...lass_idx;
}
@@ -946,7 +946,6 @@ static void reset_page(struct page *page)
clear_bit(PG_private, &page->flags);
clear_bit(PG_private_2, &page->flags);
set_page_private(page, 0);
- page->mapping = NULL;
page->freelist = NULL;
}
@@ -1056,6 +1055,7 @@ static struct page *alloc_zspage(struct size_class *class, gfp_t flags)
INIT_LIST_HEAD(&page->lru);
if (i == 0) { /* first page */
+ page->freelist = NULL;
SetPagePrivate(page);
set_page_private(page, 0);
first_page = page;
@@ -2068,9 +2068,9 @@ static int __init zs_init(void)
/*
* A zspage...
2016 Mar 11
0
[PATCH v1 09/19] zsmalloc: keep max_object in size_class
...(struct size_class *class,
enum fullness_group currfg, newfg;
get_zspage_mapping(first_page, &class_idx, &currfg);
- newfg = get_fullness_group(first_page);
+ newfg = get_fullness_group(class, first_page);
if (newfg == currfg)
goto out;
@@ -1003,9 +1003,6 @@ static struct page *alloc_zspage(struct size_class *class, gfp_t flags)
init_zspage(class, first_page);
first_page->freelist = location_to_obj(first_page, 0);
- /* Maximum number of objects we can store in this zspage */
- first_page->objects = class->pages_per_zspage * PAGE_SIZE / class->size;
-
error = 0; /*...
2016 Mar 30
0
[PATCH v3 05/16] zsmalloc: keep max_object in size_class
...(struct size_class *class,
enum fullness_group currfg, newfg;
get_zspage_mapping(first_page, &class_idx, &currfg);
- newfg = get_fullness_group(first_page);
+ newfg = get_fullness_group(class, first_page);
if (newfg == currfg)
goto out;
@@ -1008,9 +1008,6 @@ static struct page *alloc_zspage(struct size_class *class, gfp_t flags)
init_zspage(class, first_page);
first_page->freelist = location_to_obj(first_page, 0);
- /* Maximum number of objects we can store in this zspage */
- first_page->objects = class->pages_per_zspage * PAGE_SIZE / class->size;
-
error = 0; /*...
2016 Mar 14
0
[PATCH v1 09/19] zsmalloc: keep max_object in size_class
...> > get_zspage_mapping(first_page, &class_idx, &currfg);
> >- newfg = get_fullness_group(first_page);
> >+ newfg = get_fullness_group(class, first_page);
> > if (newfg == currfg)
> > goto out;
> >
> >@@ -1003,9 +1003,6 @@ static struct page *alloc_zspage(struct size_class *class, gfp_t flags)
> > init_zspage(class, first_page);
> >
> > first_page->freelist = location_to_obj(first_page, 0);
> >- /* Maximum number of objects we can store in this zspage */
> >- first_page->objects = class->pages_per_zspage *...
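The "keep max_object in size_class" excerpts above all make the same point: the maximum number of objects a zspage can hold is a per-class constant, so it belongs in struct size_class rather than in every zspage's first page, and get_fullness_group gains a class parameter to reach it. A minimal sketch of that shape, assuming a member named objs_per_zspage (illustrative; computed with the same formula the removed per-page assignment used):

struct size_class {
	int size;		/* object size this class serves */
	int pages_per_zspage;	/* pages backing one zspage */
	int objs_per_zspage;	/* replaces first_page->objects */
	/* ... other members unchanged ... */
};

/* computed once at size-class creation, not per zspage */
static void init_class_max_objects(struct size_class *class)
{
	class->objs_per_zspage = class->pages_per_zspage * PAGE_SIZE /
				 class->size;
}

/* fullness grouping now consults the class for the maximum */
static enum fullness_group get_fullness_group(struct size_class *class,
					      struct page *first_page)
{
	int inuse = first_page->inuse;
	int objs_per_zspage = class->objs_per_zspage;

	if (inuse == 0)
		return ZS_EMPTY;
	if (inuse == objs_per_zspage)
		return ZS_FULL;
	if (inuse <= 3 * objs_per_zspage / fullness_threshold_frac)
		return ZS_ALMOST_EMPTY;
	return ZS_ALMOST_FULL;
}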
2016 Mar 21
22
[PATCH v2 00/18] Support non-lru page migration
Recently, I got many reports about performance degradation
in embedded systems (Android mobile phones, webOS TVs and so on),
with fork failing easily.
The problem was fragmentation caused by zram and GPU driver
pages. Those pages cannot be migrated, so compaction cannot
work well either, and the reclaimer ends up shrinking all of the
working-set pages. This made the system very slow and even caused
fork to fail.
2016 Mar 11
0
[PATCH v1 07/19] zsmalloc: reordering function parameter
.../* Initialize a newly allocated zspage */
-static void init_zspage(struct page *first_page, struct size_class *class)
+static void init_zspage(struct size_class *class, struct page *first_page)
{
unsigned long off = 0;
struct page *page = first_page;
@@ -998,7 +1000,7 @@ static struct page *alloc_zspage(struct size_class *class, gfp_t flags)
prev_page = page;
}
- init_zspage(first_page, class);
+ init_zspage(class, first_page);
first_page->freelist = location_to_obj(first_page, 0);
/* Maximum number of objects we can store in this zspage */
@@ -1345,8 +1347,8 @@ void zs_unmap_objec...
2016 Mar 30
33
[PATCH v3 00/16] Support non-lru page migration
Recently, I got many reports about performance degradation
in embedded systems (Android mobile phones, webOS TVs and so on),
with fork failing easily.
The problem was fragmentation caused by zram and GPU driver
pages. Those pages cannot be migrated, so compaction cannot
work well either, and the reclaimer ends up shrinking all of the
working-set pages. This made the system very slow and even caused
fork to fail.
2016 Mar 30
0
[PATCH v3 06/16] zsmalloc: squeeze inuse into page->mapping
...*nextp, *tmp, *head_extra;
VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
- VM_BUG_ON_PAGE(first_page->inuse, first_page);
+ VM_BUG_ON_PAGE(get_zspage_inuse(first_page), first_page);
head_extra = (struct page *)page_private(first_page);
@@ -992,7 +1026,7 @@ static struct page *alloc_zspage(struct size_class *class, gfp_t flags)
SetPagePrivate(page);
set_page_private(page, 0);
first_page = page;
- first_page->inuse = 0;
+ set_zspage_inuse(page, 0);
}
if (i == 1)
set_page_private(first_page, (unsigned long)page);
@@ -1237,9 +1271,7 @@ static bool can_merge(...
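The "squeeze inuse into page->mapping" excerpt shows direct first_page->inuse accesses being replaced by get_zspage_inuse/set_zspage_inuse. A minimal sketch of the pattern: first_page->mapping is reinterpreted as a small packed metadata struct and the encoding is hidden behind typed accessors. The struct layout, bit widths, and the get_meta helper below are assumptions for illustration; only the two accessor names come from the diff.

/*
 * Reinterpret the (otherwise unused) first_page->mapping word as
 * packed zspage metadata. Bit widths here are assumed values, not
 * the patch's exact layout.
 */
struct zs_meta {
	unsigned long class_idx:16;	/* size class index */
	unsigned long fullness:4;	/* fullness group */
	unsigned long inuse:20;		/* objects in use in this zspage */
};

static struct zs_meta *get_meta(struct page *first_page)
{
	return (struct zs_meta *)&first_page->mapping;
}

static int get_zspage_inuse(struct page *first_page)
{
	return get_meta(first_page)->inuse;
}

static void set_zspage_inuse(struct page *first_page, int val)
{
	get_meta(first_page)->inuse = val;
}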
2016 Mar 30
0
[PATCH v3 08/16] zsmalloc: squeeze freelist into page->mapping
..._TAG bit to last link for
+ * migration to know it is allocated object or not.
+ */
+ link->next = -1 << OBJ_ALLOCATED_TAG;
+ }
kunmap_atomic(vaddr);
page = next_page;
off %= PAGE_SIZE;
}
+
+ set_freeobj(first_page, 0);
}
/*
@@ -1040,7 +1074,6 @@ static struct page *alloc_zspage(struct size_class *class, gfp_t flags)
init_zspage(class, first_page);
- first_page->freelist = location_to_obj(first_page, 0);
error = 0; /* Success */
cleanup:
@@ -1320,7 +1353,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
obj_to_location(obj, &page, &...
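This last excerpt sketches the other half of the mapping squeeze: the free list loses its pointer in first_page->freelist and becomes an object index. Reconstructing the encoding from the diff: each free object's first word stores the index of the next free object shifted left by OBJ_ALLOCATED_TAG, -1 << OBJ_ALLOCATED_TAG marks the end, an allocated object stores its handle with the OBJ_ALLOCATED_TAG bit set, and the head index lives in the zspage metadata behind set_freeobj. Below, get_freeobj is an assumed counterpart to the set_freeobj seen in the diff, and obj_idx_to_link is a hypothetical stand-in for the real code's kmap_atomic-based page/offset lookup.

/* first word of every object slot, as in zsmalloc's link_free */
struct link_free {
	union {
		unsigned long next;	/* free slot: next free index, tagged */
		unsigned long handle;	/* allocated slot: tagged handle */
	};
};

/* from the diff / assumed counterpart: head index in zspage metadata */
static unsigned long get_freeobj(struct page *first_page);
static void set_freeobj(struct page *first_page, unsigned long idx);
/* hypothetical: map an object index to its first word in its sub-page */
static struct link_free *obj_idx_to_link(struct size_class *class,
					 struct page *first_page,
					 unsigned long idx);

/* pop the head of the encoded free list (sketch) */
static unsigned long freelist_pop(struct size_class *class,
				  struct page *first_page)
{
	unsigned long obj_idx = get_freeobj(first_page);
	struct link_free *link = obj_idx_to_link(class, first_page, obj_idx);

	/* advance the head to the next free index recorded in the slot */
	set_freeobj(first_page, link->next >> OBJ_ALLOCATED_TAG);
	return obj_idx;
}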