Displaying 20 results from an estimated 28 matches for "putback_zspage".
2016 Apr 18
2
[PATCH v3 11/16] zsmalloc: separate free_zspage from putback_zspage
Hello Minchan,
On (03/30/16 16:12), Minchan Kim wrote:
[..]
> @@ -1835,23 +1827,31 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
> if (!migrate_zspage(pool, class, &cc))
> break;
>
> - putback_zspage(pool, class, dst_page);
> + VM_BUG_ON_PAGE(putback_zspage(pool, class,
> + dst_page) == ZS_EMPTY, dst_page);
can this VM_BUG_ON_PAGE() condition ever be true?
> }
> /* Stop if we couldn't find slot */
> if (dst_page == NULL)
> break;
> - putback_zspage(...
2016 Mar 30
0
[PATCH v3 11/16] zsmalloc: separate free_zspage from putback_zspage
Currently, putback_zspage frees the zspage under class->lock
if its fullness becomes ZS_EMPTY, but that makes it hard to
implement the locking scheme for the new zspage migration.
So this patch separates free_zspage from putback_zspage and
frees the zspage outside class->lock, as preparation for
zspage migration.
Signed-off-by:...
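To make the description above concrete, here is a minimal sketch of the resulting shape (helper names follow the diffs quoted in this thread; the free_zspage() signature and the caller-side details are illustrative assumptions, not the literal v3 code):
/*
 * Sketch only: putback_zspage() now just does the fullness
 * bookkeeping under class->lock and reports the fullness group;
 * it no longer frees an empty zspage itself.
 */
static enum fullness_group putback_zspage(struct zs_pool *pool,
			struct size_class *class, struct page *first_page)
{
	enum fullness_group fullness;

	fullness = get_fullness_group(class, first_page);
	insert_zspage(class, fullness, first_page);
	set_zspage_mapping(first_page, class->index, fullness);

	return fullness;
}

/*
 * Caller side (e.g. __zs_compact): drop class->lock before freeing
 * an empty zspage, which is what the coming migration locking
 * scheme needs; stats updates are omitted from this sketch.
 */
	fullness = putback_zspage(pool, class, src_page);
	spin_unlock(&class->lock);
	if (fullness == ZS_EMPTY)
		free_zspage(pool, src_page);	/* assumed signature */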
2016 Apr 19
0
[PATCH v3 11/16] zsmalloc: separate free_zspage from putback_zspage
...wrote:
> Hello Minchan,
>
> On (03/30/16 16:12), Minchan Kim wrote:
> [..]
> > @@ -1835,23 +1827,31 @@ static void __zs_compact(struct zs_pool *pool, struct size_class *class)
> > if (!migrate_zspage(pool, class, &cc))
> > break;
> >
> > - putback_zspage(pool, class, dst_page);
> > + VM_BUG_ON_PAGE(putback_zspage(pool, class,
> > + dst_page) == ZS_EMPTY, dst_page);
>
> can this VM_BUG_ON_PAGE() condition ever be true?
I guess it is a leftover from the rebase, kept to catch any mistakes.
But I'm heavily changing this part...
2016 Mar 21
0
[PATCH v2 17/18] zsmalloc: migrate tail pages in zspage
...struct link_free *link;
void *vaddr;
+ BUG_ON(!trylock_page(page));
+ page->mapping = mapping;
+ __SetPageMovable(page);
+ unlock_page(page);
+
vaddr = kmap_atomic(page);
link = (struct link_free *)vaddr + off / sizeof(*link);
@@ -1850,6 +1902,7 @@ static enum fullness_group putback_zspage(struct size_class *class,
VM_BUG_ON_PAGE(!list_empty(&first_page->lru), first_page);
VM_BUG_ON_PAGE(ZsPageIsolate(first_page), first_page);
+ VM_BUG_ON_PAGE(check_isolated_page(first_page), first_page);
fullness = get_fullness_group(class, first_page);
insert_zspage(class, fullne...
2016 Mar 21
2
[PATCH v2 17/18] zsmalloc: migrate tail pages in zspage
Hi Minchan,
[auto build test WARNING on next-20160318]
[cannot apply to v4.5-rc7 v4.5-rc6 v4.5-rc5 v4.5]
[if your patch is applied to the wrong git tree, please drop us a note to help improving the system]
url: https://github.com/0day-ci/linux/commits/Minchan-Kim/Support-non-lru-page-migration/20160321-143339
coccinelle warnings: (new ones prefixed by >>)
>>
2016 Mar 30
33
[PATCH v3 00/16] Support non-lru page migration
...ject in size_class
zsmalloc: squeeze inuse into page->mapping
zsmalloc: remove page_mapcount_reset
zsmalloc: squeeze freelist into page->mapping
zsmalloc: move struct zs_meta from mapping to freelist
zsmalloc: factor page chain functionality out
zsmalloc: separate free_zspage from putback_zspage
zsmalloc: zs_compact refactoring
5. add zsmalloc page migration
zsmalloc: migrate head page of zspage
zsmalloc: use single linked list for page chain
zsmalloc: migrate tail pages in zspage
zram: use __GFP_MOVABLE for memory allocation
* From v2
* rebase on mmotm-2016-03-29-15-54-16...
2016 Mar 11
31
[PATCH v1 00/19] Support non-lru page migration
...ram in obj_free
zsmalloc: keep max_object in size_class
zsmalloc: squeeze inuse into page->mapping
zsmalloc: squeeze freelist into page->mapping
zsmalloc: move struct zs_meta from mapping to freelist
zsmalloc: factor page chain functionality out
zsmalloc: separate free_zspage from putback_zspage
zsmalloc: zs_compact refactoring
zsmalloc: migrate head page of zspage
zsmalloc: use single linked list for page chain
zsmalloc: migrate tail pages in zspage
zram: use __GFP_MOVABLE for memory allocation
Gioh Kim (1):
fs/anon_inodes: new interface to create new inode
Minchan Kim (18):...
2016 Mar 21
22
[PATCH v2 00/18] Support non-lru page migration
...ram in obj_free
zsmalloc: keep max_object in size_class
zsmalloc: squeeze inuse into page->mapping
zsmalloc: squeeze freelist into page->mapping
zsmalloc: move struct zs_meta from mapping to freelist
zsmalloc: factor page chain functionality out
zsmalloc: separate free_zspage from putback_zspage
zsmalloc: zs_compact refactoring
3. add non-lru page migration feature
mm/compaction: support non-lru movable page migration
4. rework KVM memory-ballooning
mm/balloon: use general movable page feature into balloon
5. add zsmalloc page migration
zsmalloc: migrate head page of zspage
zs...
2016 Mar 12
1
[PATCH v1 09/19] zsmalloc: keep max_object in size_class
...ol *pool, struct size_class *class,
> }
>
> /* Stop if there is no more space */
> - if (zspage_full(d_page)) {
> + if (zspage_full(class, d_page)) {
> unpin_tag(handle);
> ret = -ENOMEM;
> break;
> @@ -1684,7 +1681,7 @@ static enum fullness_group putback_zspage(struct zs_pool *pool,
> {
> enum fullness_group fullness;
>
> - fullness = get_fullness_group(first_page);
> + fullness = get_fullness_group(class, first_page);
> insert_zspage(class, fullness, first_page);
> set_zspage_mapping(first_page, class->index, fullness);...
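The reason class is threaded through these helpers in the snippet above is that the per-zspage object limit now lives in struct size_class. A minimal sketch of the idea follows; the inuse accessor and the max_objects field name are assumptions for illustration, not the literal patch:
/*
 * Sketch: with the object limit kept in struct size_class, the
 * helpers take class instead of reading a per-page limit.
 * get_zspage_inuse() and class->max_objects are assumed names;
 * the signatures match the quoted diff.
 */
static bool zspage_full(struct size_class *class, struct page *first_page)
{
	return get_zspage_inuse(first_page) == class->max_objects;
}

static enum fullness_group get_fullness_group(struct size_class *class,
				struct page *first_page)
{
	int inuse = get_zspage_inuse(first_page);
	int max_objects = class->max_objects;

	if (inuse == 0)
		return ZS_EMPTY;
	if (inuse == max_objects)
		return ZS_FULL;
	if (inuse <= 3 * max_objects / fullness_threshold_frac)
		return ZS_ALMOST_EMPTY;
	return ZS_ALMOST_FULL;
}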
2016 Mar 11
0
[PATCH v1 09/19] zsmalloc: keep max_object in size_class
...1622,7 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
}
/* Stop if there is no more space */
- if (zspage_full(d_page)) {
+ if (zspage_full(class, d_page)) {
unpin_tag(handle);
ret = -ENOMEM;
break;
@@ -1684,7 +1681,7 @@ static enum fullness_group putback_zspage(struct zs_pool *pool,
{
enum fullness_group fullness;
- fullness = get_fullness_group(first_page);
+ fullness = get_fullness_group(class, first_page);
insert_zspage(class, fullness, first_page);
set_zspage_mapping(first_page, class->index, fullness);
@@ -1933,6 +1930,8 @@ struct zs_po...
2016 Mar 30
0
[PATCH v3 05/16] zsmalloc: keep max_object in size_class
...1625,7 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
}
/* Stop if there is no more space */
- if (zspage_full(d_page)) {
+ if (zspage_full(class, d_page)) {
unpin_tag(handle);
ret = -ENOMEM;
break;
@@ -1687,7 +1684,7 @@ static enum fullness_group putback_zspage(struct zs_pool *pool,
{
enum fullness_group fullness;
- fullness = get_fullness_group(first_page);
+ fullness = get_fullness_group(class, first_page);
insert_zspage(class, fullness, first_page);
set_zspage_mapping(first_page, class->index, fullness);
@@ -1936,8 +1933,9 @@ struct zs_po...
2016 Mar 14
0
[PATCH v1 09/19] zsmalloc: keep max_object in size_class
...> > }
> >
> > /* Stop if there is no more space */
> >- if (zspage_full(d_page)) {
> >+ if (zspage_full(class, d_page)) {
> > unpin_tag(handle);
> > ret = -ENOMEM;
> > break;
> >@@ -1684,7 +1681,7 @@ static enum fullness_group putback_zspage(struct zs_pool *pool,
> > {
> > enum fullness_group fullness;
> >
> >- fullness = get_fullness_group(first_page);
> >+ fullness = get_fullness_group(class, first_page);
> > insert_zspage(class, fullness, first_page);
> > set_zspage_mapping(first_page...
2016 Mar 11
0
[PATCH v1 06/19] zsmalloc: clean up many BUG_ON
...ect_copy(unsigned long dst, unsigned long src,
if (d_off >= PAGE_SIZE) {
kunmap_atomic(d_addr);
d_page = get_next_page(d_page);
- BUG_ON(!d_page);
d_addr = kmap_atomic(d_page);
d_size = class->size - written;
d_off = 0;
@@ -1691,8 +1683,6 @@ static enum fullness_group putback_zspage(struct zs_pool *pool,
{
enum fullness_group fullness;
- BUG_ON(!is_first_page(first_page));
-
fullness = get_fullness_group(first_page);
insert_zspage(first_page, class, fullness);
set_zspage_mapping(first_page, class->index, fullness);
@@ -1753,8 +1743,6 @@ static void __zs_compact(s...
2016 Mar 11
0
[PATCH v1 07/19] zsmalloc: reordering function parameter
...uct page *isolate_target_page(struct size_class *class)
for (i = 0; i < _ZS_NR_FULLNESS_GROUPS; i++) {
page = class->fullness_list[i];
if (page) {
- remove_zspage(page, class, i);
+ remove_zspage(class, i, page);
break;
}
}
@@ -1684,7 +1686,7 @@ static enum fullness_group putback_zspage(struct zs_pool *pool,
enum fullness_group fullness;
fullness = get_fullness_group(first_page);
- insert_zspage(first_page, class, fullness);
+ insert_zspage(class, fullness, first_page);
set_zspage_mapping(first_page, class->index, fullness);
if (fullness == ZS_EMPTY) {
@@ -1709,7 +1...