Displaying 20 results from an estimated 20 matches for "obj_allocated_tag".
2016 Apr 18
1
[PATCH v3 10/16] zsmalloc: factor page chain functionality out
Hello,
On (03/30/16 16:12), Minchan Kim wrote:
> @@ -1421,7 +1434,6 @@ static unsigned long obj_malloc(struct size_class *class,
> unsigned long m_offset;
> void *vaddr;
>
> - handle |= OBJ_ALLOCATED_TAG;
a nitpick: why did you replace this OBJ_ALLOCATED_TAG assignment
with two separate 'handle | OBJ_ALLOCATED_TAG' expressions?
-ss
> obj = get_freeobj(first_page);
> objidx_to_page_and_offset(class, first_page, obj,
> &m_page, &m_offset);
> @@ -1431,10 +1443,10 @@ static unsigned long...
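(For context, a minimal sketch of the tagging scheme behind the question above; the helper names are made up for illustration and this is not code from the patch. Assuming OBJ_ALLOCATED_TAG is bit 0 (value 1), as the shift in the later hunks suggests, the word stored at the start of an object is a handle when the bit is set and a freelist link when it is clear, so a handle is OR-ed with the tag when stored and masked off again when read back.)

#define OBJ_ALLOCATED_TAG 1UL

/* Illustrative only: tag a handle before storing it in the object's header word. */
static inline unsigned long tag_handle(unsigned long handle)
{
	return handle | OBJ_ALLOCATED_TAG;
}

/* Recover the handle from a stored header word. */
static inline unsigned long untag_handle(unsigned long word)
{
	return word & ~OBJ_ALLOCATED_TAG;
}

/* True if the header word describes an allocated object. */
static inline int obj_allocated(unsigned long word)
{
	return (word & OBJ_ALLOCATED_TAG) != 0;
}

The nitpick is then simply about where that OR happens: once up front ('handle |= OBJ_ALLOCATED_TAG;') versus at each of the two places the tagged value is used.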
2016 Mar 11
0
[PATCH v1 13/19] zsmalloc: factor page chain functionality out
For migration, we need to create the sub-page chain of a zspage
dynamically, so this patch factors that logic out of alloc_zspage.
As a minor refactoring, it makes the OBJ_ALLOCATED_TAG assignment
clearer in obj_malloc (this could be a separate patch, but it's
trivial, so I want to put it in this one).
Signed-off-by: Minchan Kim <minchan at kernel.org>
---
mm/zsmalloc.c | 78 ++++++++++++++++++++++++++++++++++-------------------------
1 file changed, 45 insertions(+), 3...
2016 Mar 30
0
[PATCH v3 10/16] zsmalloc: factor page chain functionality out
For migration, we need to create the sub-page chain of a zspage
dynamically, so this patch factors that logic out of alloc_zspage.
As a minor refactoring, it makes the OBJ_ALLOCATED_TAG assignment
clearer in obj_malloc (this could be a separate patch, but it's
trivial, so I want to put it in this one).
Signed-off-by: Minchan Kim <minchan at kernel.org>
---
mm/zsmalloc.c | 80 ++++++++++++++++++++++++++++++++++-------------------------
1 file changed, 46 insertions(+), 3...
2016 Mar 12
1
[PATCH v1 13/19] zsmalloc: factor page chain functionality out
On 2016/3/11 15:30, Minchan Kim wrote:
> For migration, we need to create the sub-page chain of a zspage
> dynamically, so this patch factors that logic out of alloc_zspage.
>
> As a minor refactoring, it makes the OBJ_ALLOCATED_TAG assignment
> clearer in obj_malloc (this could be a separate patch, but it's
> trivial, so I want to put it in this one).
>
> Signed-off-by: Minchan Kim <minchan at kernel.org>
> ---
> mm/zsmalloc.c | 78 ++++++++++++++++++++++++++++++++++-------------------------
>...
2016 Mar 30
0
[PATCH v3 08/16] zsmalloc: squeeze freelist into page->mapping
...ff;
@@ -976,7 +1000,7 @@ static void init_zspage(struct size_class *class, struct page *first_page)
link = (struct link_free *)vaddr + off / sizeof(*link);
while ((off += class->size) < PAGE_SIZE) {
- link->next = location_to_obj(page, i++);
+ link->next = freeobj++ << OBJ_ALLOCATED_TAG;
link += class->size / sizeof(*link);
}
@@ -986,11 +1010,21 @@ static void init_zspage(struct size_class *class, struct page *first_page)
* page (if present)
*/
next_page = get_next_page(page);
- link->next = location_to_obj(next_page, 0);
+ if (next_page) {
+ link->...
2016 Mar 11
0
[PATCH v1 11/19] zsmalloc: squeeze freelist into page->mapping
...off;
@@ -972,7 +995,7 @@ static void init_zspage(struct size_class *class, struct page *first_page)
link = (struct link_free *)vaddr + off / sizeof(*link);
while ((off += class->size) < PAGE_SIZE) {
- link->next = location_to_obj(page, i++);
+ link->next = freeobj++ << OBJ_ALLOCATED_TAG;
link += class->size / sizeof(*link);
}
@@ -982,11 +1005,21 @@ static void init_zspage(struct size_class *class, struct page *first_page)
* page (if present)
*/
next_page = get_next_page(page);
- link->next = location_to_obj(next_page, 0);
+ if (next_page) {
+ link->...
2016 Mar 17
1
[PATCH v1 11/19] zsmalloc: squeeze freelist into page->mapping
...~PAGE_MASK);
}
all functions that call "objidx_to_page_and_ofs" could use it like this,
for example:

static unsigned long handle_from_obj(struct size_class *class,
				struct page *first_page, int obj_idx)
{
	unsigned long handle = 0;
	unsigned long *head = map_handle(class, first_page, obj_idx);

	if (*head & OBJ_ALLOCATED_TAG)
		handle = *head & ~OBJ_ALLOCATED_TAG;
	unmap_handle(head);
	return handle;
}

'freeze_zspage' and 'unfreeze_zspage' use it in the same way.
but in 'obj_malloc', we still have to get the page to get obj.
obj = location_to_obj(m_page, obj);
> Indeed. I will cha...
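(The example above assumes map_handle()/unmap_handle() helpers that are not shown in the thread; a rough, purely hypothetical sketch of what they could look like on top of the objidx_to_page_and_ofs() helper from this series, built on kmap_atomic:)

/* Hypothetical helpers assumed by the example above; not part of the patch. */
static unsigned long *map_handle(struct size_class *class,
				 struct page *first_page, int obj_idx)
{
	struct page *obj_page;
	unsigned long ofs;

	/* locate the page and in-page offset of the object's header word */
	objidx_to_page_and_ofs(class, first_page, obj_idx, &obj_page, &ofs);
	return (unsigned long *)((char *)kmap_atomic(obj_page) + ofs);
}

static void unmap_handle(unsigned long *head)
{
	kunmap_atomic(head);
}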
2016 Mar 11
31
[PATCH v1 00/19] Support non-lru page migration
Recently, I got many reports about performance degradation
in embedded systems (Android mobile phones, webOS TVs and so on)
and about fork failing easily.
The problem was fragmentation caused by zram and GPU driver
pages. Those pages cannot be migrated, so compaction cannot
work well either, and the reclaimer ends up shrinking all of the
working-set pages. That made systems very slow and even made
fork fail easily.
2016 Mar 15
2
[PATCH v1 11/19] zsmalloc: squeeze freelist into page->mapping
On (03/11/16 16:30), Minchan Kim wrote:
> -static void *location_to_obj(struct page *page, unsigned long obj_idx)
> +static void objidx_to_page_and_ofs(struct size_class *class,
> + struct page *first_page,
> + unsigned long obj_idx,
> + struct page **obj_page,
> + unsigned long *ofs_in_page)
this looks big; 5 params, function "returning" both page and
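(To illustrate the concern, not a proposal from the thread: one way to shrink the parameter list is to return the (page, offset) pair by value from a wrapper; the names below are hypothetical.)

struct obj_location {
	struct page *page;	/* page containing the object */
	unsigned long ofs;	/* offset of the object within that page */
};

static struct obj_location objidx_to_location(struct size_class *class,
					      struct page *first_page,
					      unsigned long obj_idx)
{
	struct obj_location loc;

	/* reuse the helper from the patch to fill both fields at once */
	objidx_to_page_and_ofs(class, first_page, obj_idx,
			       &loc.page, &loc.ofs);
	return loc;
}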
2016 Mar 30
33
[PATCH v3 00/16] Support non-lru page migration
Recently, I got many reports about performance degradation
in embedded systems (Android mobile phones, webOS TVs and so on)
and about fork failing easily.
The problem was fragmentation caused by zram and GPU driver
pages. Those pages cannot be migrated, so compaction cannot
work well either, and the reclaimer ends up shrinking all of the
working-set pages. That made systems very slow and even made
fork fail easily.
2016 Mar 21
22
[PATCH v2 00/18] Support non-lru page migration
Recently, I got many reports about performance degradation
in embedded systems (Android mobile phones, webOS TVs and so on)
and about fork failing easily.
The problem was fragmentation caused by zram and GPU driver
pages. Those pages cannot be migrated, so compaction cannot
work well either, and the reclaimer ends up shrinking all of the
working-set pages. That made systems very slow and even made
fork fail easily.
2016 Mar 14
0
[PATCH v1 13/19] zsmalloc: factor page chain functionality out
...Sat, Mar 12, 2016 at 11:09:36AM +0800, xuyiping wrote:
>
>
> On 2016/3/11 15:30, Minchan Kim wrote:
> >For migration, we need to create the sub-page chain of a zspage
> >dynamically, so this patch factors that logic out of alloc_zspage.
> >
> >As a minor refactoring, it makes the OBJ_ALLOCATED_TAG assignment
> >clearer in obj_malloc (this could be a separate patch, but it's
> >trivial, so I want to put it in this one).
> >
> >Signed-off-by: Minchan Kim <minchan at kernel.org>
> >---
> > mm/zsmalloc.c | 78 ++++++++++++++++++++++++++++++++++-------...
2016 Mar 11
0
[PATCH v1 06/19] zsmalloc: clean up many BUG_ON
...dle);
obj_to_location(obj, &page, &obj_idx);
get_zspage_mapping(get_first_page(page), &class_idx, &fg);
@@ -1445,8 +1441,6 @@ static void obj_free(struct zs_pool *pool, struct size_class *class,
unsigned long f_objidx, f_offset;
void *vaddr;
- BUG_ON(!obj);
-
obj &= ~OBJ_ALLOCATED_TAG;
obj_to_location(obj, &f_page, &f_objidx);
first_page = get_first_page(f_page);
@@ -1546,7 +1540,6 @@ static void zs_object_copy(unsigned long dst, unsigned long src,
kunmap_atomic(d_addr);
kunmap_atomic(s_addr);
s_page = get_next_page(s_page);
- BUG_ON(!s_page);
s_add...