Displaying 20 results from an estimated 35 matches for "free_zspag".
2016 Mar 30
0
[PATCH v3 11/16] zsmalloc: separate free_zspage from putback_zspage
Currently, putback_zspage frees the zspage under class->lock
if its fullness becomes ZS_EMPTY, but that makes it hard to
implement the locking scheme for new zspage migration.
So, this patch separates free_zspage from putback_zspage
and frees the zspage outside class->lock, in preparation for
zspage migration.
Signed-off-by: Minchan Kim <minchan at kernel.org>
---
mm/zsmalloc.c | 46 +++++++++++++++++++++++-----------------------
1 file changed, 23 insertions(+), 23 deletions(-)
diff --git a/mm/...
2016 Apr 18
2
[PATCH v3 11/16] zsmalloc: separate free_zspage from putback_zspage
...compaction, is there any specific reason these macros
were added?
> + if (putback_zspage(pool, class, src_page) == ZS_EMPTY) {
> pool->stats.pages_compacted += class->pages_per_zspage;
> - spin_unlock(&class->lock);
> + spin_unlock(&class->lock);
> + free_zspage(pool, class, src_page);
do we really need to call free_zspage() outside of class->lock?
wouldn't something like this work:
if (putback_zspage(pool, class, src_page) == ZS_EMPTY) {
pool->stats.pages_compacted += class->pages_per_zspage;
free_zspage(pool, class, src_page);
}
spin_unlock(...
2016 Apr 19
0
[PATCH v3 11/16] zsmalloc: separate free_zspage from putback_zspage
...se macros
> were added?
>
>
>
> > + if (putback_zspage(pool, class, src_page) == ZS_EMPTY) {
> > pool->stats.pages_compacted += class->pages_per_zspage;
> > - spin_unlock(&class->lock);
> > + spin_unlock(&class->lock);
> > + free_zspage(pool, class, src_page);
>
> do we really need to free_zspage() out of class->lock?
> wouldn't something like this
>
> if (putback_zspage(pool, class, src_page) == ZS_EMPTY) {
> pool->stats.pages_compacted += class->pages_per_zspage;
> free_zspage(pool, c...
2016 Mar 21
2
[PATCH v2 17/18] zsmalloc: migrate tail pages in zspage
Hi Minchan,
[auto build test WARNING on next-20160318]
[cannot apply to v4.5-rc7 v4.5-rc6 v4.5-rc5 v4.5]
[if your patch is applied to the wrong git tree, please drop us a note to help improving the system]
url: https://github.com/0day-ci/linux/commits/Minchan-Kim/Support-non-lru-page-migration/20160321-143339
coccinelle warnings: (new ones prefixed by >>)
>>
2016 Mar 11
31
[PATCH v1 00/19] Support non-lru page migration
...ve unused pool param in obj_free
zsmalloc: keep max_object in size_class
zsmalloc: squeeze inuse into page->mapping
zsmalloc: squeeze freelist into page->mapping
zsmalloc: move struct zs_meta from mapping to freelist
zsmalloc: factor page chain functionality out
zsmalloc: separate free_zspage from putback_zspage
zsmalloc: zs_compact refactoring
zsmalloc: migrate head page of zspage
zsmalloc: use single linked list for page chain
zsmalloc: migrate tail pages in zspage
zram: use __GFP_MOVABLE for memory allocation
Gioh Kim (1):
fs/anon_inodes: new interface to create new ino...
2016 Mar 21
22
[PATCH v2 00/18] Support non-lru page migration
...ve unused pool param in obj_free
zsmalloc: keep max_object in size_class
zsmalloc: squeeze inuse into page->mapping
zsmalloc: squeeze freelist into page->mapping
zsmalloc: move struct zs_meta from mapping to freelist
zsmalloc: factor page chain functionality out
zsmalloc: separate free_zspage from putback_zspage
zsmalloc: zs_compact refactoring
3. add non-lru page migration feature
mm/compaction: support non-lru movable page migration
4. rework KVM memory-ballooning
mm/balloon: use general movable page feature into balloon
5. add zsmalloc page migration
zsmalloc: migrate hea...
2016 Mar 30
33
[PATCH v3 00/16] Support non-lru page migration
...lloc: keep max_object in size_class
zsmalloc: squeeze inuse into page->mapping
zsmalloc: remove page_mapcount_reset
zsmalloc: squeeze freelist into page->mapping
zsmalloc: move struct zs_meta from mapping to freelist
zsmalloc: factor page chain functionality out
zsmalloc: separate free_zspage from putback_zspage
zsmalloc: zs_compact refactoring
5. add zsmalloc page migration
zsmalloc: migrate head page of zspage
zsmalloc: use single linked list for page chain
zsmalloc: migrate tail pages in zspage
zram: use __GFP_MOVABLE for memory allocation
* From v2
* rebase on mmotm-2...
2016 Mar 21
0
[PATCH v2 17/18] zsmalloc: migrate tail pages in zspage
...+
+void unlock_zspage(struct page *first_page, struct page *locked_page)
+{
+ struct page *cursor = first_page;
+
+ for (; cursor != NULL; cursor = get_next_page(cursor)) {
+ VM_BUG_ON_PAGE(!PageLocked(cursor), cursor);
+ if (cursor != locked_page)
+ unlock_page(cursor);
+ };
+}
+
static void free_zspage(struct zs_pool *pool, struct page *first_page)
{
struct page *nextp, *tmp;
@@ -1090,16 +1141,17 @@ static void init_zspage(struct size_class *class, struct page *first_page,
first_page->freelist = NULL;
INIT_LIST_HEAD(&first_page->lru);
set_zspage_inuse(first_page, 0);
- BUG_ON(...
2016 Mar 21
0
[PATCH] zsmalloc: fix semicolon.cocci warnings
...malloc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1100,7 +1100,7 @@ void unlock_zspage(struct page *first_pa
VM_BUG_ON_PAGE(!PageLocked(cursor), cursor);
if (cursor != locked_page)
unlock_page(cursor);
- };
+ }
}
static void free_zspage(struct zs_pool *pool, struct page *first_page)
2016 Mar 30
0
[PATCH v3 07/16] zsmalloc: remove page_mapcount_reset
...zsmalloc.c b/mm/zsmalloc.c
index 4dd72a803568..0f6cce9b9119 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -922,7 +922,6 @@ static void reset_page(struct page *page)
set_page_private(page, 0);
page->mapping = NULL;
page->freelist = NULL;
- page_mapcount_reset(page);
}
static void free_zspage(struct page *first_page)
--
1.9.1
2016 Mar 11
0
[PATCH v1 13/19] zsmalloc: factor page chain functionality out
...) /* last page */
- SetPagePrivate2(page);
- prev_page = page;
+
+ pages[i] = page;
}
+ create_page_chain(pages, class->pages_per_zspage);
+ first_page = pages[0];
init_zspage(class, first_page);
- error = 0; /* Success */
-
-cleanup:
- if (unlikely(error) && first_page) {
- free_zspage(first_page);
- first_page = NULL;
- }
-
return first_page;
}
@@ -1419,7 +1432,6 @@ static unsigned long obj_malloc(struct size_class *class,
unsigned long m_offset;
void *vaddr;
- handle |= OBJ_ALLOCATED_TAG;
obj = get_freeobj(first_page);
objidx_to_page_and_ofs(class, first_page,...
2016 Mar 30
0
[PATCH v3 10/16] zsmalloc: factor page chain functionality out
...) /* last page */
- SetPagePrivate2(page);
- prev_page = page;
+
+ pages[i] = page;
}
+ create_page_chain(pages, class->pages_per_zspage);
+ first_page = pages[0];
init_zspage(class, first_page);
- error = 0; /* Success */
-
-cleanup:
- if (unlikely(error) && first_page) {
- free_zspage(first_page);
- first_page = NULL;
- }
-
return first_page;
}
@@ -1421,7 +1434,6 @@ static unsigned long obj_malloc(struct size_class *class,
unsigned long m_offset;
void *vaddr;
- handle |= OBJ_ALLOCATED_TAG;
obj = get_freeobj(first_page);
objidx_to_page_and_offset(class, first_pag...
2016 Mar 11
0
[PATCH v1 06/19] zsmalloc: clean up many BUG_ON
...unsigned long obj_to_head(struct size_class *class, struct page *page,
void *obj)
{
if (class->huge) {
- VM_BUG_ON(!is_first_page(page));
+ VM_BUG_ON_PAGE(!is_first_page(page), page);
return page_private(page);
} else
return *(unsigned long *)obj;
@@ -889,8 +888,8 @@ static void free_zspage(struct page *first_page)
{
struct page *nextp, *tmp, *head_extra;
- BUG_ON(!is_first_page(first_page));
- BUG_ON(first_page->inuse);
+ VM_BUG_ON_PAGE(!is_first_page(first_page), first_page);
+ VM_BUG_ON_PAGE(first_page->inuse, first_page);
head_extra = (struct page *)page_private(fi...
2016 Mar 11
0
[PATCH v1 07/19] zsmalloc: reordering function parameter
...(newfg == currfg)
goto out;
- remove_zspage(first_page, class, currfg);
- insert_zspage(first_page, class, newfg);
+ remove_zspage(class, currfg, first_page);
+ insert_zspage(class, newfg, first_page);
set_zspage_mapping(first_page, class_idx, newfg);
out:
@@ -910,7 +912,7 @@ static void free_zspage(struct page *first_page)
}
/* Initialize a newly allocated zspage */
-static void init_zspage(struct page *first_page, struct size_class *class)
+static void init_zspage(struct size_class *class, struct page *first_page)
{
unsigned long off = 0;
struct page *page = first_page;
@@ -998,7 +...
2016 Mar 12
1
[PATCH v1 13/19] zsmalloc: factor page chain functionality out
...e;
> +
> + pages[i] = page;
> }
>
> + create_page_chain(pages, class->pages_per_zspage);
> + first_page = pages[0];
> init_zspage(class, first_page);
>
> - error = 0; /* Success */
> -
> -cleanup:
> - if (unlikely(error) && first_page) {
> - free_zspage(first_page);
> - first_page = NULL;
> - }
> -
> return first_page;
> }
>
> @@ -1419,7 +1432,6 @@ static unsigned long obj_malloc(struct size_class *class,
> unsigned long m_offset;
> void *vaddr;
>
> - handle |= OBJ_ALLOCATED_TAG;
> obj = get_free...