Displaying 15 results from an estimated 15 matches for "obj_us".
2016 Apr 18
1
[PATCH v3 10/16] zsmalloc: factor page chain functionality out
...andle | OBJ_ALLOCATED_TAG;
> else
> /* record handle in first_page->private */
> - set_page_private(first_page, handle);
> + set_page_private(first_page, handle | OBJ_ALLOCATED_TAG);
> kunmap_atomic(vaddr);
> mod_zspage_inuse(first_page, 1);
> zs_stat_inc(class, OBJ_USED, 1);
2016 Mar 30
0
[PATCH v3 06/16] zsmalloc: squeeze inuse into page->mapping
...s(struct zs_pool *pool)
@@ -1372,7 +1404,7 @@ static unsigned long obj_malloc(struct size_class *class,
/* record handle in first_page->private */
set_page_private(first_page, handle);
kunmap_atomic(vaddr);
- first_page->inuse++;
+ mod_zspage_inuse(first_page, 1);
zs_stat_inc(class, OBJ_USED, 1);
return obj;
@@ -1457,7 +1489,7 @@ static void obj_free(struct size_class *class, unsigned long obj)
set_page_private(first_page, 0);
kunmap_atomic(vaddr);
first_page->freelist = (void *)obj;
- first_page->inuse--;
+ mod_zspage_inuse(first_page, -1);
zs_stat_dec(class, OBJ_...
2016 Mar 30
0
[PATCH v3 08/16] zsmalloc: squeeze freelist into page->mapping
...(first_page, link->next >> OBJ_ALLOCATED_TAG);
if (!class->huge)
/* record handle in the header of allocated chunk */
link->handle = handle;
@@ -1406,6 +1439,8 @@ static unsigned long obj_malloc(struct size_class *class,
mod_zspage_inuse(first_page, 1);
zs_stat_inc(class, OBJ_USED, 1);
+ obj = location_to_obj(m_page, obj);
+
return obj;
}
@@ -1475,19 +1510,17 @@ static void obj_free(struct size_class *class, unsigned long obj)
obj &= ~OBJ_ALLOCATED_TAG;
obj_to_location(obj, &f_page, &f_objidx);
+ f_offset = (class->size * f_objidx) & ~PAGE_M...
2016 Mar 11
0
[PATCH v1 11/19] zsmalloc: squeeze freelist into page->mapping
...(first_page, link->next >> OBJ_ALLOCATED_TAG);
if (!class->huge)
/* record handle in the header of allocated chunk */
link->handle = handle;
@@ -1404,6 +1436,8 @@ static unsigned long obj_malloc(struct size_class *class,
mod_zspage_inuse(first_page, 1);
zs_stat_inc(class, OBJ_USED, 1);
+ obj = location_to_obj(m_page, obj);
+
return obj;
}
@@ -1473,19 +1507,17 @@ static void obj_free(struct size_class *class, unsigned long obj)
obj &= ~OBJ_ALLOCATED_TAG;
obj_to_location(obj, &f_page, &f_objidx);
+ f_offset = (class->size * f_objidx) & ~PAGE_M...
2016 Mar 11
31
[PATCH v1 00/19] Support non-lru page migration
Recently, I got many reports about performance degradation
in embedded systems (Android mobile phones, webOS TVs and so on)
and fork failing easily.
The problem was fragmentation caused by zram and GPU driver
pages. Those pages cannot be migrated, so compaction cannot
work well either, and the reclaimer ends up shrinking all of the
working set pages. It made the system very slow and even made
fork fail easily.
2016 Mar 11
0
[PATCH v1 13/19] zsmalloc: factor page chain functionality out
...le = handle;
+ link->handle = handle | OBJ_ALLOCATED_TAG;
else
/* record handle in first_page->private */
- set_page_private(first_page, handle);
+ set_page_private(first_page, handle | OBJ_ALLOCATED_TAG);
kunmap_atomic(vaddr);
mod_zspage_inuse(first_page, 1);
zs_stat_inc(class, OBJ_USED, 1);
--
1.9.1
2016 Mar 30
0
[PATCH v3 10/16] zsmalloc: factor page chain functionality out
...le = handle;
+ link->handle = handle | OBJ_ALLOCATED_TAG;
else
/* record handle in first_page->private */
- set_page_private(first_page, handle);
+ set_page_private(first_page, handle | OBJ_ALLOCATED_TAG);
kunmap_atomic(vaddr);
mod_zspage_inuse(first_page, 1);
zs_stat_inc(class, OBJ_USED, 1);
--
1.9.1
2016 Mar 21
22
[PATCH v2 00/18] Support non-lru page migration
Recently, I got many reports about performance degradation
in embedded systems (Android mobile phones, webOS TVs and so on)
and fork failing easily.
The problem was fragmentation caused by zram and GPU driver
pages. Those pages cannot be migrated, so compaction cannot
work well either, and the reclaimer ends up shrinking all of the
working set pages. It made the system very slow and even made
fork fail easily.
2016 Mar 30
33
[PATCH v3 00/16] Support non-lru page migration
Recently, I got many reports about performance degradation
in embedded systems (Android mobile phones, webOS TVs and so on)
and fork failing easily.
The problem was fragmentation caused by zram and GPU driver
pages. Those pages cannot be migrated, so compaction cannot
work well either, and the reclaimer ends up shrinking all of the
working set pages. It made the system very slow and even made
fork fail easily.
2016 Mar 12
1
[PATCH v1 13/19] zsmalloc: factor page chain functionality out
...| OBJ_ALLOCATED_TAG;
> else
> /* record handle in first_page->private */
> - set_page_private(first_page, handle);
> + set_page_private(first_page, handle | OBJ_ALLOCATED_TAG);
> kunmap_atomic(vaddr);
> mod_zspage_inuse(first_page, 1);
> zs_stat_inc(class, OBJ_USED, 1);
>