Recently, I got many reports about performance degradation in embedded systems (Android mobile phones, webOS TVs and so on) and frequent fork failures.

The problem was fragmentation caused mainly by zram and GPU drivers. Under memory pressure, their pages were spread out over all pageblocks and could not be migrated by the current compaction algorithm, which supports only LRU pages. In the end, compaction could not work well, so the reclaimer shrank all of the working-set pages. It made the system very slow and even made fork, which requires order-2 or order-3 allocations, fail easily.

Another pain point is that these pages cannot use CMA memory space, so when an OOM kill happens, I can see many free pages in the CMA area, which is not memory efficient. In our product, which has a big CMA area, zones are reclaimed too excessively to allocate GPU and zram pages although there is lots of free space in CMA, so the system becomes very slow easily.

To solve these problems, this patchset adds a facility to migrate non-LRU pages by introducing new functions and page flags to help migration:

struct address_space_operations {
	..
	..
	bool (*isolate_page)(struct page *, isolate_mode_t);
	void (*putback_page)(struct page *);
	..
}

new page flags

	PG_movable
	PG_isolated

For details, please read the description in "mm: migrate: support non-lru movable page migration".

Originally, Gioh Kim had tried to support this feature, but he moved on, so I took over the work. I took much code from his work and changed it a little, and Konstantin Khlebnikov helped Gioh a lot, so he deserves a lot of the credit, too. And I should mention Chulmin, who tested this patchset heavily, so that I could find many bugs. :)

Thanks, Gioh, Konstantin and Chulmin!

This patchset consists of five parts.

1. clean up migration
   mm: use put_page to free page instead of putback_lru_page

2. add non-lru page migration feature
   mm: migrate: support non-lru movable page migration

3. rework KVM memory-ballooning
   mm: balloon: use general non-lru movable page feature

4. zsmalloc refactoring for preparing page migration
   zsmalloc: keep max_object in size_class
   zsmalloc: use bit_spin_lock
   zsmalloc: use accessor
   zsmalloc: factor page chain functionality out
   zsmalloc: introduce zspage structure
   zsmalloc: separate free_zspage from putback_zspage
   zsmalloc: use freeobj for index

5. zsmalloc page migration
   zsmalloc: page migration support
   zram: use __GFP_MOVABLE for memory allocation

* From v6
  * rebase on mmotm-2016-05-27-15-19
  * clean up zsmalloc - Sergey
  * clean up non-lru page migration - Vlastimil

* From v5
  * rebase on next-20160520
  * move utility functions to compaction.c and export - Sergey
  * zsmalloc double free fix - Sergey
  * add additional Reviewed-by for zsmalloc - Sergey

* From v4
  * rebase on mmotm-2016-05-05-17-19
  * fix huge object migration - Chulmin
  * !CONFIG_COMPACTION support for zsmalloc

* From v3
  * rebase on mmotm-2016-04-06-20-40
  * fix swap_info deadlock - Chulmin
  * race without page_lock - Vlastimil
  * no use page._mapcount for potential user-mapped page driver - Vlastimil
  * fix and enhance doc/description - Vlastimil
  * use page->mapping lower bits to represent PG_movable
  * make driver side's rule simple.
* From v2
  * rebase on mmotm-2016-03-29-15-54-16
  * check PageMovable before lock_page - Joonsoo
  * check PageMovable before PageIsolated checking - Joonsoo
  * add more description about rule

* From v1
  * rebase on v4.5-mmotm-2016-03-17-15-04
  * reordering patches to merge clean-up patches first
  * add Acked-by/Reviewed-by from Vlastimil and Sergey
  * use each own mount model instead of reusing anon_inode_fs - Al Viro
  * small changes - YiPing, Gioh

Cc: Vlastimil Babka <vbabka at suse.cz>
Cc: dri-devel at lists.freedesktop.org
Cc: Hugh Dickins <hughd at google.com>
Cc: John Einar Reitan <john.reitan at foss.arm.com>
Cc: Jonathan Corbet <corbet at lwn.net>
Cc: Joonsoo Kim <iamjoonsoo.kim at lge.com>
Cc: Konstantin Khlebnikov <koct9i at gmail.com>
Cc: Mel Gorman <mgorman at suse.de>
Cc: Naoya Horiguchi <n-horiguchi at ah.jp.nec.com>
Cc: Rafael Aquini <aquini at redhat.com>
Cc: Rik van Riel <riel at redhat.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky at gmail.com>
Cc: virtualization at lists.linux-foundation.org
Cc: Gioh Kim <gi-oh.kim at profitbricks.com>
Cc: Chan Gyun Jeong <chan.jeong at lge.com>
Cc: Sangseok Lee <sangseok.lee at lge.com>
Cc: Kyeongdon Kim <kyeongdon.kim at lge.com>
Cc: Chulmin Kim <cmlaika.kim at samsung.com>

Minchan Kim (12):
  mm: use put_page to free page instead of putback_lru_page
  mm: migrate: support non-lru movable page migration
  mm: balloon: use general non-lru movable page feature
  zsmalloc: keep max_object in size_class
  zsmalloc: use bit_spin_lock
  zsmalloc: use accessor
  zsmalloc: factor page chain functionality out
  zsmalloc: introduce zspage structure
  zsmalloc: separate free_zspage from putback_zspage
  zsmalloc: use freeobj for index
  zsmalloc: page migration support
  zram: use __GFP_MOVABLE for memory allocation

 Documentation/filesystems/Locking  |    4 +
 Documentation/filesystems/vfs.txt  |   11 +
 Documentation/vm/page_migration    |  107 ++-
 drivers/block/zram/zram_drv.c      |    6 +-
 drivers/virtio/virtio_balloon.c    |   54 +-
 include/linux/balloon_compaction.h |   53 +-
 include/linux/compaction.h         |   17 +
 include/linux/fs.h                 |    2 +
 include/linux/ksm.h                |    3 +-
 include/linux/migrate.h            |    2 +
 include/linux/mm.h                 |    1 +
 include/linux/page-flags.h         |   33 +-
 include/uapi/linux/magic.h         |    2 +
 mm/balloon_compaction.c            |   94 +--
 mm/compaction.c                    |   79 ++-
 mm/ksm.c                           |    4 +-
 mm/migrate.c                       |  257 +++++--
 mm/page_alloc.c                    |    2 +-
 mm/util.c                          |    6 +-
 mm/vmscan.c                        |    2 +-
 mm/zsmalloc.c                      | 1349 +++++++++++++++++++++++++-----------
 21 files changed, 1479 insertions(+), 609 deletions(-)

--
1.9.1
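To make the driver-facing API above concrete, here is a minimal sketch of the three callbacks for an imaginary driver. Everything prefixed mydrv_ is hypothetical, as is the use of page_private() to find the owning device; the sketch only assumes the callbacks, helpers and rules as this cover letter and patch 02/12 describe them.

	#include <linux/fs.h>
	#include <linux/highmem.h>
	#include <linux/migrate.h>
	#include <linux/mm.h>
	#include <linux/mmzone.h>

	/* Hypothetical per-device state; not part of this patchset. */
	struct mydrv_dev {
		spinlock_t lock;
		struct list_head pages;		/* pages the driver owns */
	};

	/* Called by the VM under lock_page(); detach the page from the
	 * driver's list so nothing else touches page->lru while it is
	 * isolated (page_private() was set at allocation time). */
	static bool mydrv_isolate_page(struct page *page, isolate_mode_t mode)
	{
		struct mydrv_dev *dev = (struct mydrv_dev *)page_private(page);

		spin_lock(&dev->lock);
		list_del_init(&page->lru);
		spin_unlock(&dev->lock);
		return true;
	}

	/* Called with both pages locked; move contents and driver state
	 * from oldpage to newpage, then tell the VM that oldpage is no
	 * longer movable. */
	static int mydrv_migratepage(struct address_space *mapping,
				     struct page *newpage, struct page *oldpage,
				     enum migrate_mode mode)
	{
		struct mydrv_dev *dev = (struct mydrv_dev *)page_private(oldpage);

		copy_highpage(newpage, oldpage);
		set_page_private(newpage, page_private(oldpage));
		__SetPageMovable(newpage, mapping);

		spin_lock(&dev->lock);
		list_add(&newpage->lru, &dev->pages);
		spin_unlock(&dev->lock);

		set_page_private(oldpage, 0);
		__ClearPageMovable(oldpage);	/* under page_lock, as required */
		return MIGRATEPAGE_SUCCESS;	/* == 0 */
	}

	/* Migration failed; hook the isolated page back into the driver. */
	static void mydrv_putback_page(struct page *page)
	{
		struct mydrv_dev *dev = (struct mydrv_dev *)page_private(page);

		spin_lock(&dev->lock);
		list_add(&page->lru, &dev->pages);
		spin_unlock(&dev->lock);
	}

	static const struct address_space_operations mydrv_aops = {
		.isolate_page	= mydrv_isolate_page,
		.migratepage	= mydrv_migratepage,
		.putback_page	= mydrv_putback_page,
	};

Returning -EAGAIN from migratepage instead of 0 would make the VM retry the page shortly, per the rules spelled out in patch 02/12.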
Minchan Kim
2016-May-31 23:21 UTC
[PATCH v7 02/12] mm: migrate: support non-lru movable page migration
Until now we have allowed migration for only LRU pages, and it was enough to make high-order pages. But recently, embedded systems (e.g., webOS, Android) use lots of non-movable pages (e.g., zram, GPU memory), so we have seen several reports about trouble with small high-order allocations. To fix the problem, there were several efforts (e.g., enhancing the compaction algorithm, SLUB fallback to order-0 pages, reserved memory, vmalloc and so on), but if there are lots of non-movable pages in the system, these solutions are void in the long run.

So, this patch adds a facility to turn non-movable pages into movable ones. For the feature, this patch introduces migration-related functions in address_space_operations as well as some page flags.

If a driver wants to make its own pages movable, it should define three functions, which are function pointers of struct address_space_operations:

1. bool (*isolate_page) (struct page *page, isolate_mode_t mode);

What the VM expects of a driver's isolate_page function is to return *true* if the driver isolates the page successfully. On returning true, the VM marks the page as PG_isolated, so concurrent isolation on several CPUs skips the page. If the driver cannot isolate the page, it should return *false*.

Once the page is successfully isolated, the VM uses the page.lru field, so the driver shouldn't expect the values in that field to be preserved.

2. int (*migratepage) (struct address_space *mapping,
		struct page *newpage, struct page *oldpage, enum migrate_mode);

After isolation, the VM calls the driver's migratepage with the isolated page. The job of migratepage is to move the contents of the old page to the new page and to set up the fields of struct page for newpage. Keep in mind that you should indicate to the VM that the oldpage is no longer movable via __ClearPageMovable() under page_lock if you migrated the oldpage successfully and return 0. If the driver cannot migrate the page at the moment, it can return -EAGAIN. On -EAGAIN, the VM will retry page migration in a short time because it interprets -EAGAIN as "temporary migration failure". On returning any error other than -EAGAIN, the VM will give up migrating the page without retrying this time.

The driver shouldn't touch the page.lru field while the VM is using it.

3. void (*putback_page)(struct page *);

If migration fails on the isolated page, the VM should return the isolated page to the driver, so the VM calls the driver's putback_page with the page whose migration failed. In this function, the driver should put the isolated page back into its own data structure.

4. non-lru movable page flags

There are two page flags for supporting non-lru movable pages.

* PG_movable

The driver should use the function below to make a page movable under page_lock:

	void __SetPageMovable(struct page *page, struct address_space *mapping)

It takes an address_space argument for registering the migration family of functions which will be called by the VM. Strictly speaking, PG_movable is not a real flag of struct page. Rather, the VM reuses the lower bits of page->mapping to represent it:

	#define PAGE_MAPPING_MOVABLE 0x2
	page->mapping = page->mapping | PAGE_MAPPING_MOVABLE;

so the driver shouldn't access page->mapping directly. Instead, the driver should use page_mapping, which masks off the low two bits of page->mapping, so it can get the right struct address_space.

For testing whether a page is non-lru movable, the VM provides the __PageMovable function. However, it doesn't guarantee identification of a non-lru movable page because the page->mapping field is unified with other variables in struct page.
Also, if the driver releases the page after isolation by the VM, page->mapping doesn't have a stable value although it has PAGE_MAPPING_MOVABLE set (look at __ClearPageMovable). But __PageMovable is cheap for telling whether a page is LRU or non-lru movable once the page has been isolated, because LRU pages can never have PAGE_MAPPING_MOVABLE in page->mapping. It is also good for just peeking at pages to test for non-lru movability before the more expensive check with lock_page during pfn scanning to select a victim.

For guaranteeing a non-lru movable page, the VM provides the PageMovable function. Unlike __PageMovable, PageMovable validates page->mapping and mapping->a_ops->isolate_page under lock_page. The lock_page prevents sudden destruction of page->mapping.

A driver using __SetPageMovable should clear the flag via __ClearPageMovable under page_lock before releasing the page.

* PG_isolated

To prevent concurrent isolation among several CPUs, the VM marks an isolated page as PG_isolated under lock_page. So if a CPU encounters a PG_isolated non-lru movable page, it can skip it. The driver doesn't need to manipulate the flag because the VM will set/clear it automatically. Keep in mind that if the driver sees a PG_isolated page, it means the page has been isolated by the VM, so it shouldn't touch the page.lru field. PG_isolated is an alias of the PG_reclaim flag, so the driver shouldn't use the flag for its own purpose.

Cc: Rik van Riel <riel at redhat.com>
Cc: Joonsoo Kim <iamjoonsoo.kim at lge.com>
Cc: Mel Gorman <mgorman at suse.de>
Cc: Hugh Dickins <hughd at google.com>
Cc: Rafael Aquini <aquini at redhat.com>
Cc: virtualization at lists.linux-foundation.org
Cc: Jonathan Corbet <corbet at lwn.net>
Cc: John Einar Reitan <john.reitan at foss.arm.com>
Cc: dri-devel at lists.freedesktop.org
Cc: Sergey Senozhatsky <sergey.senozhatsky at gmail.com>
Acked-by: Vlastimil Babka <vbabka at suse.cz>
Signed-off-by: Gioh Kim <gi-oh.kim at profitbricks.com>
Signed-off-by: Minchan Kim <minchan at kernel.org>
---
 Documentation/filesystems/Locking |   4 +
 Documentation/filesystems/vfs.txt |  11 +++
 Documentation/vm/page_migration   | 107 ++++++++++++++++++++-
 include/linux/compaction.h        |  17 ++++
 include/linux/fs.h                |   2 +
 include/linux/ksm.h               |   3 +-
 include/linux/migrate.h           |   2 +
 include/linux/mm.h                |   1 +
 include/linux/page-flags.h        |  33 +++++--
 mm/compaction.c                   |  85 +++++++++++++----
 mm/ksm.c                          |   4 +-
 mm/migrate.c                      | 192 ++++++++++++++++++++++++++++++++++----
 mm/page_alloc.c                   |   2 +-
 mm/util.c                         |   6 +-
 14 files changed, 417 insertions(+), 52 deletions(-)

diff --git a/Documentation/filesystems/Locking b/Documentation/filesystems/Locking
index af7c030a0368..3991a976cf43 100644
--- a/Documentation/filesystems/Locking
+++ b/Documentation/filesystems/Locking
@@ -195,7 +195,9 @@ unlocks and drops the reference.
int (*releasepage) (struct page *, int); void (*freepage)(struct page *); int (*direct_IO)(struct kiocb *, struct iov_iter *iter); + bool (*isolate_page) (struct page *, isolate_mode_t); int (*migratepage)(struct address_space *, struct page *, struct page *); + void (*putback_page) (struct page *); int (*launder_page)(struct page *); int (*is_partially_uptodate)(struct page *, unsigned long, unsigned long); int (*error_remove_page)(struct address_space *, struct page *); @@ -219,7 +221,9 @@ invalidatepage: yes releasepage: yes freepage: yes direct_IO: +isolate_page: yes migratepage: yes (both) +putback_page: yes launder_page: yes is_partially_uptodate: yes error_remove_page: yes diff --git a/Documentation/filesystems/vfs.txt b/Documentation/filesystems/vfs.txt index 19366fef2652..9d4ae317fdcb 100644 --- a/Documentation/filesystems/vfs.txt +++ b/Documentation/filesystems/vfs.txt @@ -591,9 +591,14 @@ struct address_space_operations { int (*releasepage) (struct page *, int); void (*freepage)(struct page *); ssize_t (*direct_IO)(struct kiocb *, struct iov_iter *iter); + /* isolate a page for migration */ + bool (*isolate_page) (struct page *, isolate_mode_t); /* migrate the contents of a page to the specified target */ int (*migratepage) (struct page *, struct page *); + /* put migration-failed page back to right list */ + void (*putback_page) (struct page *); int (*launder_page) (struct page *); + int (*is_partially_uptodate) (struct page *, unsigned long, unsigned long); void (*is_dirty_writeback) (struct page *, bool *, bool *); @@ -739,6 +744,10 @@ struct address_space_operations { and transfer data directly between the storage and the application's address space. + isolate_page: Called by the VM when isolating a movable non-lru page. + If page is successfully isolated, VM marks the page as PG_isolated + via __SetPageIsolated. + migrate_page: This is used to compact the physical memory usage. If the VM wants to relocate a page (maybe off a memory card that is signalling imminent failure) it will pass a new page @@ -746,6 +755,8 @@ struct address_space_operations { transfer any private data across and update any references that it has to the page. + putback_page: Called by the VM when isolated page's migration fails. + launder_page: Called before freeing a page - it writes back the dirty page. To prevent redirtying the page, it is kept locked during the whole operation. diff --git a/Documentation/vm/page_migration b/Documentation/vm/page_migration index fea5c0864170..18d37c7ac50b 100644 --- a/Documentation/vm/page_migration +++ b/Documentation/vm/page_migration @@ -142,5 +142,110 @@ is increased so that the page cannot be freed while page migration occurs. 20. The new page is moved to the LRU and can be scanned by the swapper etc again. -Christoph Lameter, May 8, 2006. +C. Non-LRU page migration +------------------------- + +Although original migration aimed for reducing the latency of memory access +for NUMA, compaction who want to create high-order page is also main customer. + +Current problem of the implementation is that it is designed to migrate only +*LRU* pages. However, there are potential non-lru pages which can be migrated +in drivers, for example, zsmalloc, virtio-balloon pages. + +For virtio-balloon pages, some parts of migration code path have been hooked +up and added virtio-balloon specific functions to intercept migration logics. +It's too specific to a driver so other drivers who want to make their pages +movable would have to add own specific hooks in migration path. 
+ +To overcome the problem, VM supports non-LRU page migration which provides +generic functions for non-LRU movable pages without driver specific hooks in the +migration path. + +If a driver wants to make its own pages movable, it should define three functions +which are function pointers of struct address_space_operations. + +1. bool (*isolate_page) (struct page *page, isolate_mode_t mode); + +What VM expects on isolate_page function of driver is to return *true* +if driver isolates page successfully. On returning true, VM marks the page +as PG_isolated so concurrent isolation in several CPUs skip the page +for isolation. If a driver cannot isolate the page, it should return *false*. + +Once page is successfully isolated, VM uses page.lru fields so driver +shouldn't expect to preserve values in that fields. + +2. int (*migratepage) (struct address_space *mapping, + struct page *newpage, struct page *oldpage, enum migrate_mode); + +After isolation, VM calls migratepage of driver with isolated page. +The function of migratepage is to move content of the old page to new page +and set up fields of struct page newpage. Keep in mind that you should +indicate to the VM the oldpage is no longer movable via __ClearPageMovable() +under page_lock if you migrated the oldpage successfully and return 0. +If driver cannot migrate the page at the moment, driver can return -EAGAIN. +On -EAGAIN, VM will retry page migration in a short time because VM interprets +-EAGAIN as "temporary migration failure". On returning any error except -EAGAIN, +VM will give up the page migration without retrying in this time. + +Driver shouldn't touch page.lru field VM using in the functions. + +3. void (*putback_page)(struct page *); + +If migration fails on isolated page, VM should return the isolated page +to the driver so VM calls driver's putback_page with migration failed page. +In this function, driver should put the isolated page back to the own data +structure. + +4. non-lru movable page flags + +There are two page flags for supporting non-lru movable page. + +* PG_movable + +Driver should use the below function to make page movable under page_lock. + + void __SetPageMovable(struct page *page, struct address_space *mapping) + +It needs argument of address_space for registering migration family functions +which will be called by VM. Exactly speaking, PG_movable is not a real flag of +struct page. Rather, VM reuses page->mapping's lower bits to represent it. + + #define PAGE_MAPPING_MOVABLE 0x2 + page->mapping = page->mapping | PAGE_MAPPING_MOVABLE; + +so driver shouldn't access page->mapping directly. Instead, driver should +use page_mapping which masks off the low two bits of page->mapping under +page lock so it can get right struct address_space. + +For testing of non-lru movable page, VM supports __PageMovable function. +However, it doesn't guarantee to identify non-lru movable page because +page->mapping field is unified with other variables in struct page. +As well, if driver releases the page after isolation by VM, page->mapping +doesn't have stable value although it has PAGE_MAPPING_MOVABLE +(Look at __ClearPageMovable). But __PageMovable is cheap to catch whether +page is LRU or non-lru movable once the page has been isolated. Because +LRU pages never can have PAGE_MAPPING_MOVABLE in page->mapping. It is also +good for just peeking to test non-lru movable pages before more expensive +checking with lock_page in pfn scanning to select victim. + +For guaranteeing non-lru movable page, VM provides PageMovable function.
+Unlike __PageMovable, the PageMovable function validates page->mapping and +mapping->a_ops->isolate_page under lock_page. The lock_page prevents sudden +destruction of page->mapping. + +Driver using __SetPageMovable should clear the flag via __ClearPageMovable +under page_lock before releasing the page. + +* PG_isolated + +To prevent concurrent isolation among several CPUs, VM marks isolated page +as PG_isolated under lock_page. So if a CPU encounters PG_isolated non-lru +movable page, it can skip it. Driver doesn't need to manipulate the flag +because VM will set/clear it automatically. Keep in mind that if driver +sees PG_isolated page, it means the page has been isolated by VM so it +shouldn't touch page.lru field. +PG_isolated is an alias of PG_reclaim flag so driver shouldn't use the flag +for its own purpose. + +Christoph Lameter, May 8, 2006. +Minchan Kim, Mar 28, 2016.
diff --git a/include/linux/compaction.h b/include/linux/compaction.h index a58c852a268f..c6b47c861cea 100644 --- a/include/linux/compaction.h +++ b/include/linux/compaction.h @@ -54,6 +54,9 @@ enum compact_result { struct alloc_context; /* in mm/internal.h */ #ifdef CONFIG_COMPACTION +extern int PageMovable(struct page *page); +extern void __SetPageMovable(struct page *page, struct address_space *mapping); +extern void __ClearPageMovable(struct page *page); extern int sysctl_compact_memory; extern int sysctl_compaction_handler(struct ctl_table *table, int write, void __user *buffer, size_t *length, loff_t *ppos); @@ -151,6 +154,19 @@ extern void kcompactd_stop(int nid); extern void wakeup_kcompactd(pg_data_t *pgdat, int order, int classzone_idx); #else +static inline int PageMovable(struct page *page) +{ + return 0; +} +static inline void __SetPageMovable(struct page *page, + struct address_space *mapping) +{ +} + +static inline void __ClearPageMovable(struct page *page) +{ +} + static inline enum compact_result try_to_compact_pages(gfp_t gfp_mask, unsigned int order, int alloc_flags, const struct alloc_context *ac, @@ -212,6 +228,7 @@ static inline void wakeup_kcompactd(pg_data_t *pgdat, int order, int classzone_i #endif /* CONFIG_COMPACTION */ #if defined(CONFIG_COMPACTION) && defined(CONFIG_SYSFS) && defined(CONFIG_NUMA) +struct node; extern int compaction_register_node(struct node *node); extern void compaction_unregister_node(struct node *node); diff --git a/include/linux/fs.h b/include/linux/fs.h index 0cfdf2aec8f7..39ef97414033 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -402,6 +402,8 @@ struct address_space_operations { */ int (*migratepage) (struct address_space *, struct page *, struct page *, enum migrate_mode); + bool (*isolate_page)(struct page *, isolate_mode_t); + void (*putback_page)(struct page *); int (*launder_page) (struct page *); int (*is_partially_uptodate) (struct page *, unsigned long, unsigned long); diff --git a/include/linux/ksm.h b/include/linux/ksm.h index 7ae216a39c9e..481c8c4627ca 100644 --- a/include/linux/ksm.h +++ b/include/linux/ksm.h @@ -43,8 +43,7 @@ static inline struct stable_node *page_stable_node(struct page *page) static inline void set_page_stable_node(struct page *page, struct stable_node *stable_node) { - page->mapping = (void *)stable_node + - (PAGE_MAPPING_ANON | PAGE_MAPPING_KSM); + page->mapping = (void *)((unsigned long)stable_node | PAGE_MAPPING_KSM); } /* diff --git a/include/linux/migrate.h b/include/linux/migrate.h index 9b50325e4ddf..404fbfefeb33 100644 --- a/include/linux/migrate.h +++ b/include/linux/migrate.h @@ -37,6 +37,8 @@ extern int
migrate_page(struct address_space *, struct page *, struct page *, enum migrate_mode); extern int migrate_pages(struct list_head *l, new_page_t new, free_page_t free, unsigned long private, enum migrate_mode mode, int reason); +extern bool isolate_movable_page(struct page *page, isolate_mode_t mode); +extern void putback_movable_page(struct page *page); extern int migrate_prep(void); extern int migrate_prep_local(void); diff --git a/include/linux/mm.h b/include/linux/mm.h index a00ec816233a..33eaec57e997 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1035,6 +1035,7 @@ static inline pgoff_t page_file_index(struct page *page) } bool page_mapped(struct page *page); +struct address_space *page_mapping(struct page *page); /* * Return true only if the page has been allocated with diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h index e5a32445f930..f36dbb3a3060 100644 --- a/include/linux/page-flags.h +++ b/include/linux/page-flags.h @@ -129,6 +129,9 @@ enum pageflags { /* Compound pages. Stored in first tail page's flags */ PG_double_map = PG_private_2, + + /* non-lru isolated movable page */ + PG_isolated = PG_reclaim, }; #ifndef __GENERATING_BOUNDS_H @@ -357,29 +360,37 @@ PAGEFLAG(Idle, idle, PF_ANY) * with the PAGE_MAPPING_ANON bit set to distinguish it. See rmap.h. * * On an anonymous page in a VM_MERGEABLE area, if CONFIG_KSM is enabled, - * the PAGE_MAPPING_KSM bit may be set along with the PAGE_MAPPING_ANON bit; - * and then page->mapping points, not to an anon_vma, but to a private + * the PAGE_MAPPING_MOVABLE bit may be set along with the PAGE_MAPPING_ANON + * bit; and then page->mapping points, not to an anon_vma, but to a private * structure which KSM associates with that merged page. See ksm.h. * - * PAGE_MAPPING_KSM without PAGE_MAPPING_ANON is currently never used. + * PAGE_MAPPING_KSM without PAGE_MAPPING_ANON is used for non-lru movable + * page and then page->mapping points a struct address_space. * * Please note that, confusingly, "page_mapping" refers to the inode * address_space which maps the page from disk; whereas "page_mapped" * refers to user virtual address space into which the page is mapped. 
*/ -#define PAGE_MAPPING_ANON 1 -#define PAGE_MAPPING_KSM 2 -#define PAGE_MAPPING_FLAGS (PAGE_MAPPING_ANON | PAGE_MAPPING_KSM) +#define PAGE_MAPPING_ANON 0x1 +#define PAGE_MAPPING_MOVABLE 0x2 +#define PAGE_MAPPING_KSM (PAGE_MAPPING_ANON | PAGE_MAPPING_MOVABLE) +#define PAGE_MAPPING_FLAGS (PAGE_MAPPING_ANON | PAGE_MAPPING_MOVABLE) -static __always_inline int PageAnonHead(struct page *page) +static __always_inline int PageMappingFlags(struct page *page) { - return ((unsigned long)page->mapping & PAGE_MAPPING_ANON) != 0; + return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) != 0; } static __always_inline int PageAnon(struct page *page) { page = compound_head(page); - return PageAnonHead(page); + return ((unsigned long)page->mapping & PAGE_MAPPING_ANON) != 0; +} + +static __always_inline int __PageMovable(struct page *page) +{ + return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) == + PAGE_MAPPING_MOVABLE; +} #ifdef CONFIG_KSM @@ -393,7 +404,7 @@ static __always_inline int PageKsm(struct page *page) { page = compound_head(page); return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) == - (PAGE_MAPPING_ANON | PAGE_MAPPING_KSM); + PAGE_MAPPING_KSM; } #else TESTPAGEFLAG_FALSE(Ksm) @@ -641,6 +652,8 @@ static inline void __ClearPageBalloon(struct page *page) atomic_set(&page->_mapcount, -1); } +__PAGEFLAG(Isolated, isolated, PF_ANY); + /* * If network-based swap is enabled, sl*b must keep track of whether pages * were allocated from pfmemalloc reserves. diff --git a/mm/compaction.c b/mm/compaction.c index 1427366ad673..a680b52e190b 100644 --- a/mm/compaction.c +++ b/mm/compaction.c @@ -81,6 +81,44 @@ static inline bool migrate_async_suitable(int migratetype) #ifdef CONFIG_COMPACTION +int PageMovable(struct page *page) +{ + struct address_space *mapping; + + VM_BUG_ON_PAGE(!PageLocked(page), page); + if (!__PageMovable(page)) + return 0; + + mapping = page_mapping(page); + if (mapping && mapping->a_ops && mapping->a_ops->isolate_page) + return 1; + + return 0; +} +EXPORT_SYMBOL(PageMovable); + +void __SetPageMovable(struct page *page, struct address_space *mapping) +{ + VM_BUG_ON_PAGE(!PageLocked(page), page); + VM_BUG_ON_PAGE((unsigned long)mapping & PAGE_MAPPING_MOVABLE, page); + page->mapping = (void *)((unsigned long)mapping | PAGE_MAPPING_MOVABLE); +} +EXPORT_SYMBOL(__SetPageMovable); + +void __ClearPageMovable(struct page *page) +{ + VM_BUG_ON_PAGE(!PageLocked(page), page); + VM_BUG_ON_PAGE(!PageMovable(page), page); + /* + * Clear registered address_space val with keeping PAGE_MAPPING_MOVABLE + * flag so that VM can catch up released page by driver after isolation. + * With it, VM migration doesn't try to put it back. + */ + page->mapping = (void *)((unsigned long)page->mapping & + PAGE_MAPPING_MOVABLE); +} +EXPORT_SYMBOL(__ClearPageMovable); + /* Do not skip compaction more than 64 times */ #define COMPACT_MAX_DEFER_SHIFT 6 @@ -735,21 +773,6 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn, } /* - * Check may be lockless but that's ok as we recheck later. - * It's possible to migrate LRU pages and balloon pages - * Skip any other type of page - */ - is_lru = PageLRU(page); - if (!is_lru) { - if (unlikely(balloon_page_movable(page))) { - if (balloon_page_isolate(page)) { - /* Successfully isolated */ - goto isolate_success; - } - } - } - - /* * Regardless of being on LRU, compound pages such as THP and * hugetlbfs are not to be compacted. We can potentially save * a lot of iterations if we skip them at once.
The check is @@ -765,8 +788,38 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn, goto isolate_fail; } - if (!is_lru) + /* + * Check may be lockless but that's ok as we recheck later. + * It's possible to migrate LRU and non-lru movable pages. + * Skip any other type of page + */ + is_lru = PageLRU(page); + if (!is_lru) { + if (unlikely(balloon_page_movable(page))) { + if (balloon_page_isolate(page)) { + /* Successfully isolated */ + goto isolate_success; + } + } + + /* + * __PageMovable can return false positive so we need + * to verify it under page_lock. + */ + if (unlikely(__PageMovable(page)) && + !PageIsolated(page)) { + if (locked) { + spin_unlock_irqrestore(&zone->lru_lock, + flags); + locked = false; + } + + if (isolate_movable_page(page, isolate_mode)) + goto isolate_success; + } + goto isolate_fail; + } /* * Migration will fail if an anonymous page is pinned in memory, diff --git a/mm/ksm.c b/mm/ksm.c index 4786b4150f62..35b8aef867a9 100644 --- a/mm/ksm.c +++ b/mm/ksm.c @@ -532,8 +532,8 @@ static struct page *get_ksm_page(struct stable_node *stable_node, bool lock_it) void *expected_mapping; unsigned long kpfn; - expected_mapping = (void *)stable_node + - (PAGE_MAPPING_ANON | PAGE_MAPPING_KSM); + expected_mapping = (void *)((unsigned long)stable_node | + PAGE_MAPPING_KSM); again: kpfn = READ_ONCE(stable_node->kpfn); page = pfn_to_page(kpfn); diff --git a/mm/migrate.c b/mm/migrate.c index 2666f28b5236..60abcf379b51 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -31,6 +31,7 @@ #include <linux/vmalloc.h> #include <linux/security.h> #include <linux/backing-dev.h> +#include <linux/compaction.h> #include <linux/syscalls.h> #include <linux/hugetlb.h> #include <linux/hugetlb_cgroup.h> @@ -73,6 +74,81 @@ int migrate_prep_local(void) return 0; } +bool isolate_movable_page(struct page *page, isolate_mode_t mode) +{ + struct address_space *mapping; + + /* + * Avoid burning cycles with pages that are yet under __free_pages(), + * or just got freed under us. + * + * In case we 'win' a race for a movable page being freed under us and + * raise its refcount preventing __free_pages() from doing its job + * the put_page() at the end of this block will take care of + * release this page, thus avoiding a nasty leakage. + */ + if (unlikely(!get_page_unless_zero(page))) + goto out; + + /* + * Check PageMovable before holding a PG_lock because page's owner + * assumes anybody doesn't touch PG_lock of newly allocated page + * so unconditionally grapping the lock ruins page's owner side. + */ + if (unlikely(!__PageMovable(page))) + goto out_putpage; + /* + * As movable pages are not isolated from LRU lists, concurrent + * compaction threads can race against page migration functions + * as well as race against the releasing a page. + * + * In order to avoid having an already isolated movable page + * being (wrongly) re-isolated while it is under migration, + * or to avoid attempting to isolate pages being released, + * lets be sure we have the page lock + * before proceeding with the movable page isolation steps. 
+ */ + if (unlikely(!trylock_page(page))) + goto out_putpage; + + if (!PageMovable(page) || PageIsolated(page)) + goto out_no_isolated; + + mapping = page_mapping(page); + VM_BUG_ON_PAGE(!mapping, page); + + if (!mapping->a_ops->isolate_page(page, mode)) + goto out_no_isolated; + + /* Driver shouldn't use PG_isolated bit of page->flags */ + WARN_ON_ONCE(PageIsolated(page)); + __SetPageIsolated(page); + unlock_page(page); + + return true; + +out_no_isolated: + unlock_page(page); +out_putpage: + put_page(page); +out: + return false; +} + +/* It should be called on page which is PG_movable */ +void putback_movable_page(struct page *page) +{ + struct address_space *mapping; + + VM_BUG_ON_PAGE(!PageLocked(page), page); + VM_BUG_ON_PAGE(!PageMovable(page), page); + VM_BUG_ON_PAGE(!PageIsolated(page), page); + + mapping = page_mapping(page); + mapping->a_ops->putback_page(page); + __ClearPageIsolated(page); +} + /* * Put previously isolated pages back onto the appropriate lists * from where they were once taken off for compaction/migration. @@ -94,10 +170,25 @@ void putback_movable_pages(struct list_head *l) list_del(&page->lru); dec_zone_page_state(page, NR_ISOLATED_ANON + page_is_file_cache(page)); - if (unlikely(isolated_balloon_page(page))) + if (unlikely(isolated_balloon_page(page))) { balloon_page_putback(page); - else + /* + * We isolated non-lru movable page so here we can use + * __PageMovable because LRU page's mapping cannot have + * PAGE_MAPPING_MOVABLE. + */ + } else if (unlikely(__PageMovable(page))) { + VM_BUG_ON_PAGE(!PageIsolated(page), page); + lock_page(page); + if (PageMovable(page)) + putback_movable_page(page); + else + __ClearPageIsolated(page); + unlock_page(page); + put_page(page); + } else { putback_lru_page(page); + } } } @@ -592,7 +683,7 @@ void migrate_page_copy(struct page *newpage, struct page *page) ***********************************************************/ /* - * Common logic to directly migrate a single page suitable for + * Common logic to directly migrate a single LRU page suitable for * pages that do not use PagePrivate/PagePrivate2. * * Pages are locked upon entry and exit. @@ -755,33 +846,72 @@ static int move_to_new_page(struct page *newpage, struct page *page, enum migrate_mode mode) { struct address_space *mapping; - int rc; + int rc = -EAGAIN; + bool is_lru = !__PageMovable(page); VM_BUG_ON_PAGE(!PageLocked(page), page); VM_BUG_ON_PAGE(!PageLocked(newpage), newpage); mapping = page_mapping(page); - if (!mapping) - rc = migrate_page(mapping, newpage, page, mode); - else if (mapping->a_ops->migratepage) + + if (likely(is_lru)) { + if (!mapping) + rc = migrate_page(mapping, newpage, page, mode); + else if (mapping->a_ops->migratepage) + /* + * Most pages have a mapping and most filesystems + * provide a migratepage callback. Anonymous pages + * are part of swap space which also has its own + * migratepage callback. This is the most common path + * for page migration. + */ + rc = mapping->a_ops->migratepage(mapping, newpage, + page, mode); + else + rc = fallback_migrate_page(mapping, newpage, + page, mode); + } else { /* - * Most pages have a mapping and most filesystems provide a - * migratepage callback. Anonymous pages are part of swap - * space which also has its own migratepage callback. This - * is the most common path for page migration. + * In case of non-lru page, it could be released after + * isolation step. In that case, we shouldn't try migration. 
*/ - rc = mapping->a_ops->migratepage(mapping, newpage, page, mode); - else - rc = fallback_migrate_page(mapping, newpage, page, mode); + VM_BUG_ON_PAGE(!PageIsolated(page), page); + if (!PageMovable(page)) { + rc = MIGRATEPAGE_SUCCESS; + __ClearPageIsolated(page); + goto out; + } + + rc = mapping->a_ops->migratepage(mapping, newpage, + page, mode); + WARN_ON_ONCE(rc == MIGRATEPAGE_SUCCESS && + !PageIsolated(page)); + } /* * When successful, old pagecache page->mapping must be cleared before * page is freed; but stats require that PageAnon be left as PageAnon. */ if (rc == MIGRATEPAGE_SUCCESS) { - if (!PageAnon(page)) + if (__PageMovable(page)) { + VM_BUG_ON_PAGE(!PageIsolated(page), page); + + /* + * We clear PG_movable under page_lock so any compactor + * cannot try to migrate this page. + */ + __ClearPageIsolated(page); + } + + /* + * Anonymous and movable page->mapping will be cleard by + * free_pages_prepare so don't reset it here for keeping + * the type to work PageAnon, for example. + */ + if (!PageMappingFlags(page)) page->mapping = NULL; } +out: return rc; } @@ -791,6 +921,7 @@ static int __unmap_and_move(struct page *page, struct page *newpage, int rc = -EAGAIN; int page_was_mapped = 0; struct anon_vma *anon_vma = NULL; + bool is_lru = !__PageMovable(page); if (!trylock_page(page)) { if (!force || mode == MIGRATE_ASYNC) @@ -871,6 +1002,11 @@ static int __unmap_and_move(struct page *page, struct page *newpage, goto out_unlock_both; } + if (unlikely(!is_lru)) { + rc = move_to_new_page(newpage, page, mode); + goto out_unlock_both; + } + /* * Corner case handling: * 1. When a new swap-cache page is read into, it is added to the LRU @@ -920,7 +1056,8 @@ static int __unmap_and_move(struct page *page, struct page *newpage, * list in here. */ if (rc == MIGRATEPAGE_SUCCESS) { - if (unlikely(__is_movable_balloon_page(newpage))) + if (unlikely(__is_movable_balloon_page(newpage) || + __PageMovable(newpage))) put_page(newpage); else putback_lru_page(newpage); @@ -961,6 +1098,12 @@ static ICE_noinline int unmap_and_move(new_page_t get_new_page, /* page was freed from under us. So we are done. 
*/ ClearPageActive(page); ClearPageUnevictable(page); + if (unlikely(__PageMovable(page))) { + lock_page(page); + if (!PageMovable(page)) + __ClearPageIsolated(page); + unlock_page(page); + } if (put_new_page) put_new_page(newpage, private); else @@ -1010,8 +1153,21 @@ static ICE_noinline int unmap_and_move(new_page_t get_new_page, num_poisoned_pages_inc(); } } else { - if (rc != -EAGAIN) - putback_lru_page(page); + if (rc != -EAGAIN) { + if (likely(!__PageMovable(page))) { + putback_lru_page(page); + goto put_new; + } + + lock_page(page); + if (PageMovable(page)) + putback_movable_page(page); + else + __ClearPageIsolated(page); + unlock_page(page); + put_page(page); + } +put_new: if (put_new_page) put_new_page(newpage, private); else diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 7da8310b86e9..4b3a07ce824d 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -1014,7 +1014,7 @@ static __always_inline bool free_pages_prepare(struct page *page, (page + i)->flags &= ~PAGE_FLAGS_CHECK_AT_PREP; } } - if (PageAnonHead(page)) + if (PageMappingFlags(page)) page->mapping = NULL; if (check_free) bad += free_pages_check(page); diff --git a/mm/util.c b/mm/util.c index 917e0e3d0f8e..b756ee36f7f0 100644 --- a/mm/util.c +++ b/mm/util.c @@ -399,10 +399,12 @@ struct address_space *page_mapping(struct page *page) } mapping = page->mapping; - if ((unsigned long)mapping & PAGE_MAPPING_FLAGS) + if ((unsigned long)mapping & PAGE_MAPPING_ANON) return NULL; - return mapping; + + return (void *)((unsigned long)mapping & ~PAGE_MAPPING_FLAGS); } +EXPORT_SYMBOL(page_mapping); /* Slow path of page_mapcount() for compound pages */ int __page_mapcount(struct page *page) -- 1.9.1
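The driver-side rules in the description above (mark pages movable under the page lock, never touch page->lru while PG_isolated is set, clear the mark before release) can be illustrated with a hedged allocation/free pair for the same imaginary mydrv driver from the sketch after the cover letter. The mapping argument is assumed to come from a driver-private inode (see the sketch after patch 03/12); none of this is part of the patch itself.

	/* Allocation side: tag a freshly allocated page as movable. Since
	 * such pages are now migratable, they may also be placed in
	 * movable pageblocks via __GFP_MOVABLE. */
	static struct page *mydrv_alloc_page(struct mydrv_dev *dev,
					     struct address_space *mapping,
					     gfp_t gfp)
	{
		struct page *page = alloc_page(gfp | __GFP_MOVABLE);

		if (!page)
			return NULL;

		/* __SetPageMovable must be called under the page lock. */
		lock_page(page);
		set_page_private(page, (unsigned long)dev);
		__SetPageMovable(page, mapping);
		unlock_page(page);

		spin_lock(&dev->lock);
		list_add(&page->lru, &dev->pages);
		spin_unlock(&dev->lock);
		return page;
	}

	static void mydrv_free_page(struct mydrv_dev *dev, struct page *page)
	{
		spin_lock(&dev->lock);
		/* The VM owns page->lru while the page is isolated;
		 * mirror what balloon_page_delete() does in patch 03/12. */
		if (!PageIsolated(page))
			list_del(&page->lru);
		spin_unlock(&dev->lock);

		/* Clear the movable mark under the page lock before
		 * release; __ClearPageMovable keeps PAGE_MAPPING_MOVABLE
		 * so the VM can detect a page freed after isolation. */
		lock_page(page);
		__ClearPageMovable(page);
		set_page_private(page, 0);
		unlock_page(page);

		put_page(page);
	}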
Minchan Kim
2016-May-31 23:21 UTC
[PATCH v7 03/12] mm: balloon: use general non-lru movable page feature
Now, VM has a feature to migrate non-lru movable pages so balloon doesn't need custom migration hooks in migrate.c and compaction.c. Instead, this patch implements page->mapping->a_ops->{isolate|migrate|putback} functions. With that, we could remove hooks for ballooning in general migration functions and make balloon compaction simple. Cc: virtualization at lists.linux-foundation.org Cc: Rafael Aquini <aquini at redhat.com> Cc: Konstantin Khlebnikov <koct9i at gmail.com> Acked-by: Vlastimil Babka <vbabka at suse.cz> Signed-off-by: Gioh Kim <gi-oh.kim at profitbricks.com> Signed-off-by: Minchan Kim <minchan at kernel.org> --- drivers/virtio/virtio_balloon.c | 54 +++++++++++++++++++--- include/linux/balloon_compaction.h | 53 +++++++-------------- include/uapi/linux/magic.h | 1 + mm/balloon_compaction.c | 94 +++++++------------------------------- mm/compaction.c | 7 --- mm/migrate.c | 19 +------- mm/vmscan.c | 2 +- 7 files changed, 85 insertions(+), 145 deletions(-) diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c index 476c0e3a7150..88d5609375de 100644 --- a/drivers/virtio/virtio_balloon.c +++ b/drivers/virtio/virtio_balloon.c @@ -30,6 +30,7 @@ #include <linux/oom.h> #include <linux/wait.h> #include <linux/mm.h> +#include <linux/mount.h> /* * Balloon device works in 4K page units. So each page is pointed to by @@ -45,6 +46,10 @@ static int oom_pages = OOM_VBALLOON_DEFAULT_PAGES; module_param(oom_pages, int, S_IRUSR | S_IWUSR); MODULE_PARM_DESC(oom_pages, "pages to free on OOM"); +#ifdef CONFIG_BALLOON_COMPACTION +static struct vfsmount *balloon_mnt; +#endif + struct virtio_balloon { struct virtio_device *vdev; struct virtqueue *inflate_vq, *deflate_vq, *stats_vq; @@ -488,8 +493,26 @@ static int virtballoon_migratepage(struct balloon_dev_info *vb_dev_info, put_page(page); /* balloon reference */ - return MIGRATEPAGE_SUCCESS; + return 0; } + +static struct dentry *balloon_mount(struct file_system_type *fs_type, + int flags, const char *dev_name, void *data) +{ + static const struct dentry_operations ops = { + .d_dname = simple_dname, + }; + + return mount_pseudo(fs_type, "balloon-kvm:", NULL, &ops, + BALLOON_KVM_MAGIC); +} + +static struct file_system_type balloon_fs = { + .name = "balloon-kvm", + .mount = balloon_mount, + .kill_sb = kill_anon_super, +}; + #endif /* CONFIG_BALLOON_COMPACTION */ static int virtballoon_probe(struct virtio_device *vdev) @@ -519,9 +542,6 @@ static int virtballoon_probe(struct virtio_device *vdev) vb->vdev = vdev; balloon_devinfo_init(&vb->vb_dev_info); -#ifdef CONFIG_BALLOON_COMPACTION - vb->vb_dev_info.migratepage = virtballoon_migratepage; -#endif err = init_vqs(vb); if (err) @@ -531,13 +551,33 @@ static int virtballoon_probe(struct virtio_device *vdev) vb->nb.priority = VIRTBALLOON_OOM_NOTIFY_PRIORITY; err = register_oom_notifier(&vb->nb); if (err < 0) - goto out_oom_notify; + goto out_del_vqs; + +#ifdef CONFIG_BALLOON_COMPACTION + balloon_mnt = kern_mount(&balloon_fs); + if (IS_ERR(balloon_mnt)) { + err = PTR_ERR(balloon_mnt); + unregister_oom_notifier(&vb->nb); + goto out_del_vqs; + } + + vb->vb_dev_info.migratepage = virtballoon_migratepage; + vb->vb_dev_info.inode = alloc_anon_inode(balloon_mnt->mnt_sb); + if (IS_ERR(vb->vb_dev_info.inode)) { + err = PTR_ERR(vb->vb_dev_info.inode); + kern_unmount(balloon_mnt); + unregister_oom_notifier(&vb->nb); + vb->vb_dev_info.inode = NULL; + goto out_del_vqs; + } + vb->vb_dev_info.inode->i_mapping->a_ops = &balloon_aops; +#endif virtio_device_ready(vdev); return 0; -out_oom_notify: 
+out_del_vqs: vdev->config->del_vqs(vdev); out_free_vb: kfree(vb); @@ -571,6 +611,8 @@ static void virtballoon_remove(struct virtio_device *vdev) cancel_work_sync(&vb->update_balloon_stats_work); remove_common(vb); + if (vb->vb_dev_info.inode) + iput(vb->vb_dev_info.inode); kfree(vb); } diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h index 9b0a15d06a4f..c0c430d06a9b 100644 --- a/include/linux/balloon_compaction.h +++ b/include/linux/balloon_compaction.h @@ -45,9 +45,10 @@ #define _LINUX_BALLOON_COMPACTION_H #include <linux/pagemap.h> #include <linux/page-flags.h> -#include <linux/migrate.h> +#include <linux/compaction.h> #include <linux/gfp.h> #include <linux/err.h> +#include <linux/fs.h> /* * Balloon device information descriptor. @@ -62,6 +63,7 @@ struct balloon_dev_info { struct list_head pages; /* Pages enqueued & handled to Host */ int (*migratepage)(struct balloon_dev_info *, struct page *newpage, struct page *page, enum migrate_mode mode); + struct inode *inode; }; extern struct page *balloon_page_enqueue(struct balloon_dev_info *b_dev_info); @@ -73,45 +75,19 @@ static inline void balloon_devinfo_init(struct balloon_dev_info *balloon) spin_lock_init(&balloon->pages_lock); INIT_LIST_HEAD(&balloon->pages); balloon->migratepage = NULL; + balloon->inode = NULL; } #ifdef CONFIG_BALLOON_COMPACTION -extern bool balloon_page_isolate(struct page *page); +extern const struct address_space_operations balloon_aops; +extern bool balloon_page_isolate(struct page *page, + isolate_mode_t mode); extern void balloon_page_putback(struct page *page); -extern int balloon_page_migrate(struct page *newpage, +extern int balloon_page_migrate(struct address_space *mapping, + struct page *newpage, struct page *page, enum migrate_mode mode); /* - * __is_movable_balloon_page - helper to perform @page PageBalloon tests - */ -static inline bool __is_movable_balloon_page(struct page *page) -{ - return PageBalloon(page); -} - -/* - * balloon_page_movable - test PageBalloon to identify balloon pages - * and PagePrivate to check that the page is not - * isolated and can be moved by compaction/migration. - * - * As we might return false positives in the case of a balloon page being just - * released under us, this need to be re-tested later, under the page lock. - */ -static inline bool balloon_page_movable(struct page *page) -{ - return PageBalloon(page) && PagePrivate(page); -} - -/* - * isolated_balloon_page - identify an isolated balloon page on private - * compaction/migration page lists. - */ -static inline bool isolated_balloon_page(struct page *page) -{ - return PageBalloon(page); -} - -/* * balloon_page_insert - insert a page into the balloon's page list and make * the page->private assignment accordingly. * @balloon : pointer to balloon device @@ -124,7 +100,7 @@ static inline void balloon_page_insert(struct balloon_dev_info *balloon, struct page *page) { __SetPageBalloon(page); - SetPagePrivate(page); + __SetPageMovable(page, balloon->inode->i_mapping); set_page_private(page, (unsigned long)balloon); list_add(&page->lru, &balloon->pages); } @@ -140,11 +116,14 @@ static inline void balloon_page_insert(struct balloon_dev_info *balloon, static inline void balloon_page_delete(struct page *page) { __ClearPageBalloon(page); + __ClearPageMovable(page); set_page_private(page, 0); - if (PagePrivate(page)) { - ClearPagePrivate(page); + /* + * No touch page.lru field once @page has been isolated + * because VM is using the field. 
+ */ + if (!PageIsolated(page)) list_del(&page->lru); - } } /* diff --git a/include/uapi/linux/magic.h b/include/uapi/linux/magic.h index 546b38886e11..d829ce63529d 100644 --- a/include/uapi/linux/magic.h +++ b/include/uapi/linux/magic.h @@ -80,5 +80,6 @@ #define BPF_FS_MAGIC 0xcafe4a11 /* Since UDF 2.01 is ISO 13346 based... */ #define UDF_SUPER_MAGIC 0x15013346 +#define BALLOON_KVM_MAGIC 0x13661366 #endif /* __LINUX_MAGIC_H__ */ diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c index 57b3e9bd6bc5..da91df50ba31 100644 --- a/mm/balloon_compaction.c +++ b/mm/balloon_compaction.c @@ -70,7 +70,7 @@ struct page *balloon_page_dequeue(struct balloon_dev_info *b_dev_info) */ if (trylock_page(page)) { #ifdef CONFIG_BALLOON_COMPACTION - if (!PagePrivate(page)) { + if (PageIsolated(page)) { /* raced with isolation */ unlock_page(page); continue; @@ -106,110 +106,50 @@ EXPORT_SYMBOL_GPL(balloon_page_dequeue); #ifdef CONFIG_BALLOON_COMPACTION -static inline void __isolate_balloon_page(struct page *page) +bool balloon_page_isolate(struct page *page, isolate_mode_t mode) + { struct balloon_dev_info *b_dev_info = balloon_page_device(page); unsigned long flags; spin_lock_irqsave(&b_dev_info->pages_lock, flags); - ClearPagePrivate(page); list_del(&page->lru); b_dev_info->isolated_pages++; spin_unlock_irqrestore(&b_dev_info->pages_lock, flags); + + return true; } -static inline void __putback_balloon_page(struct page *page) +void balloon_page_putback(struct page *page) { struct balloon_dev_info *b_dev_info = balloon_page_device(page); unsigned long flags; spin_lock_irqsave(&b_dev_info->pages_lock, flags); - SetPagePrivate(page); list_add(&page->lru, &b_dev_info->pages); b_dev_info->isolated_pages--; spin_unlock_irqrestore(&b_dev_info->pages_lock, flags); } -/* __isolate_lru_page() counterpart for a ballooned page */ -bool balloon_page_isolate(struct page *page) -{ - /* - * Avoid burning cycles with pages that are yet under __free_pages(), - * or just got freed under us. - * - * In case we 'win' a race for a balloon page being freed under us and - * raise its refcount preventing __free_pages() from doing its job - * the put_page() at the end of this block will take care of - * release this page, thus avoiding a nasty leakage. - */ - if (likely(get_page_unless_zero(page))) { - /* - * As balloon pages are not isolated from LRU lists, concurrent - * compaction threads can race against page migration functions - * as well as race against the balloon driver releasing a page. - * - * In order to avoid having an already isolated balloon page - * being (wrongly) re-isolated while it is under migration, - * or to avoid attempting to isolate pages being released by - * the balloon driver, lets be sure we have the page lock - * before proceeding with the balloon page isolation steps. - */ - if (likely(trylock_page(page))) { - /* - * A ballooned page, by default, has PagePrivate set. - * Prevent concurrent compaction threads from isolating - * an already isolated balloon page by clearing it. - */ - if (balloon_page_movable(page)) { - __isolate_balloon_page(page); - unlock_page(page); - return true; - } - unlock_page(page); - } - put_page(page); - } - return false; -} - -/* putback_lru_page() counterpart for a ballooned page */ -void balloon_page_putback(struct page *page) -{ - /* - * 'lock_page()' stabilizes the page and prevents races against - * concurrent isolation threads attempting to re-isolate it. 
- */ - lock_page(page); - - if (__is_movable_balloon_page(page)) { - __putback_balloon_page(page); - /* drop the extra ref count taken for page isolation */ - put_page(page); - } else { - WARN_ON(1); - dump_page(page, "not movable balloon page"); - } - unlock_page(page); -} /* move_to_new_page() counterpart for a ballooned page */ -int balloon_page_migrate(struct page *newpage, - struct page *page, enum migrate_mode mode) +int balloon_page_migrate(struct address_space *mapping, + struct page *newpage, struct page *page, + enum migrate_mode mode) { struct balloon_dev_info *balloon = balloon_page_device(page); - int rc = -EAGAIN; VM_BUG_ON_PAGE(!PageLocked(page), page); VM_BUG_ON_PAGE(!PageLocked(newpage), newpage); - if (WARN_ON(!__is_movable_balloon_page(page))) { - dump_page(page, "not movable balloon page"); - return rc; - } + return balloon->migratepage(balloon, newpage, page, mode); +} - if (balloon && balloon->migratepage) - rc = balloon->migratepage(balloon, newpage, page, mode); +const struct address_space_operations balloon_aops = { + .migratepage = balloon_page_migrate, + .isolate_page = balloon_page_isolate, + .putback_page = balloon_page_putback, +}; +EXPORT_SYMBOL_GPL(balloon_aops); - return rc; -} #endif /* CONFIG_BALLOON_COMPACTION */ diff --git a/mm/compaction.c b/mm/compaction.c index a680b52e190b..b7bfdf94b545 100644 --- a/mm/compaction.c +++ b/mm/compaction.c @@ -795,13 +795,6 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn, */ is_lru = PageLRU(page); if (!is_lru) { - if (unlikely(balloon_page_movable(page))) { - if (balloon_page_isolate(page)) { - /* Successfully isolated */ - goto isolate_success; - } - } - /* * __PageMovable can return false positive so we need * to verify it under page_lock. diff --git a/mm/migrate.c b/mm/migrate.c index 60abcf379b51..e6daf49e224f 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -170,14 +170,12 @@ void putback_movable_pages(struct list_head *l) list_del(&page->lru); dec_zone_page_state(page, NR_ISOLATED_ANON + page_is_file_cache(page)); - if (unlikely(isolated_balloon_page(page))) { - balloon_page_putback(page); /* * We isolated non-lru movable page so here we can use * __PageMovable because LRU page's mapping cannot have * PAGE_MAPPING_MOVABLE. */ - } else if (unlikely(__PageMovable(page))) { + if (unlikely(__PageMovable(page))) { VM_BUG_ON_PAGE(!PageIsolated(page), page); lock_page(page); if (PageMovable(page)) @@ -990,18 +988,6 @@ static int __unmap_and_move(struct page *page, struct page *newpage, if (unlikely(!trylock_page(newpage))) goto out_unlock; - if (unlikely(isolated_balloon_page(page))) { - /* - * A ballooned page does not need any special attention from - * physical to virtual reverse mapping procedures. - * Skip any attempt to unmap PTEs or to remap swap cache, - * in order to avoid burning cycles at rmap level, and perform - * the page migration right away (proteced by page lock). - */ - rc = balloon_page_migrate(newpage, page, mode); - goto out_unlock_both; - } - if (unlikely(!is_lru)) { rc = move_to_new_page(newpage, page, mode); goto out_unlock_both; @@ -1056,8 +1042,7 @@ static int __unmap_and_move(struct page *page, struct page *newpage, * list in here. 
*/ if (rc == MIGRATEPAGE_SUCCESS) { - if (unlikely(__is_movable_balloon_page(newpage) || - __PageMovable(newpage))) + if (unlikely(__PageMovable(newpage))) put_page(newpage); else putback_lru_page(newpage); diff --git a/mm/vmscan.c b/mm/vmscan.c index c4a2f4512fca..93ba33789ac6 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -1254,7 +1254,7 @@ unsigned long reclaim_clean_pages_from_list(struct zone *zone, list_for_each_entry_safe(page, next, page_list, lru) { if (page_is_file_cache(page) && !PageDirty(page) && - !isolated_balloon_page(page)) { + !__PageMovable(page)) { ClearPageActive(page); list_move(&page->lru, &clean_pages); } -- 1.9.1
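A driver other than the balloon would need the same plumbing to obtain an address_space to pass to __SetPageMovable(). Below is a sketch modeled directly on the virtio_balloon changes above; MYDRV_MAGIC and the mydrv_* names are made up (mydrv_aops is the hypothetical aops from the earlier sketch), and a real driver would add its magic constant to include/uapi/linux/magic.h as this patch does with BALLOON_KVM_MAGIC.

	#include <linux/err.h>
	#include <linux/fs.h>
	#include <linux/mount.h>

	#define MYDRV_MAGIC	0x4d594456	/* hypothetical */

	static struct dentry *mydrv_mount(struct file_system_type *fs_type,
					  int flags, const char *dev_name,
					  void *data)
	{
		static const struct dentry_operations ops = {
			.d_dname = simple_dname,
		};

		return mount_pseudo(fs_type, "mydrv:", NULL, &ops, MYDRV_MAGIC);
	}

	static struct file_system_type mydrv_fs = {
		.name		= "mydrv",
		.mount		= mydrv_mount,
		.kill_sb	= kill_anon_super,
	};

	/* Mount the pseudo-fs and hand out a mapping that carries the
	 * driver's migration operations. */
	static struct address_space *mydrv_init_mapping(void)
	{
		struct vfsmount *mnt = kern_mount(&mydrv_fs);
		struct inode *inode;

		if (IS_ERR(mnt))
			return ERR_CAST(mnt);

		inode = alloc_anon_inode(mnt->mnt_sb);
		if (IS_ERR(inode)) {
			kern_unmount(mnt);
			return ERR_CAST(inode);
		}

		inode->i_mapping->a_ops = &mydrv_aops;
		return inode->i_mapping;
	}

Using a driver-private mount rather than reusing anon_inode_fs follows Al Viro's feedback recorded in the v1 changelog.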
On Wed, 1 Jun 2016 08:21:09 +0900 Minchan Kim <minchan at kernel.org> wrote:

> Recently, I got many reports about performance degradation in embedded
> systems (Android mobile phones, webOS TVs and so on) and frequent fork
> failures.
>
> The problem was fragmentation caused mainly by zram and GPU drivers.
> Under memory pressure, their pages were spread out over all pageblocks
> and could not be migrated by the current compaction algorithm, which
> supports only LRU pages. In the end, compaction could not work well, so
> the reclaimer shrank all of the working-set pages. It made the system
> very slow and even made fork, which requires order-2 or order-3
> allocations, fail easily.
>
> Another pain point is that these pages cannot use CMA memory space, so
> when an OOM kill happens, I can see many free pages in the CMA area,
> which is not memory efficient. In our product, which has a big CMA
> area, zones are reclaimed too excessively to allocate GPU and zram
> pages although there is lots of free space in CMA, so the system
> becomes very slow easily.

But this isn't presently implemented for GPU drivers or for CMA, yes?

What's the story there?
On Wed, Jun 01, 2016 at 02:41:51PM -0700, Andrew Morton wrote:
> On Wed, 1 Jun 2016 08:21:09 +0900 Minchan Kim <minchan at kernel.org> wrote:
>
> > Recently, I got many reports about performance degradation in
> > embedded systems (Android mobile phones, webOS TVs and so on) and
> > frequent fork failures.
> >
> > The problem was fragmentation caused mainly by zram and GPU drivers.
> > Under memory pressure, their pages were spread out over all
> > pageblocks and could not be migrated by the current compaction
> > algorithm, which supports only LRU pages. In the end, compaction
> > could not work well, so the reclaimer shrank all of the working-set
> > pages. It made the system very slow and even made fork, which
> > requires order-2 or order-3 allocations, fail easily.
> >
> > Another pain point is that these pages cannot use CMA memory space,
> > so when an OOM kill happens, I can see many free pages in the CMA
> > area, which is not memory efficient. In our product, which has a big
> > CMA area, zones are reclaimed too excessively to allocate GPU and
> > zram pages although there is lots of free space in CMA, so the
> > system becomes very slow easily.
>
> But this isn't presently implemented for GPU drivers or for CMA, yes?
>
> What's the story there?

Broken (out-of-tree) drivers that don't allocate their gpu stuff
correctly. There's piles of drivers that get_user_page all over the
place but then fail to get off these pages again in a timely fashion.

The fix is to get off those pages again (either by unpinning in time,
or by registering an mmu_notifier if the driver wants to keep the pages
pinned indefinitely, as a caching optimization). At least that's my
guess, and iirc it was confirmed the first time around this series
showed up.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
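For reference, the mmu_notifier approach Daniel mentions would look roughly like the sketch below in a 4.7-era tree. The mydrv_* cache structure is invented; the point is only that the callback drops the get_user_pages() pin as soon as the mapping is invalidated, instead of holding it forever and blocking migration.

	#include <linux/mm.h>
	#include <linux/mmu_notifier.h>
	#include <linux/slab.h>

	struct mydrv_pin {
		struct list_head node;
		struct page *page;
		unsigned long addr;		/* user address it came from */
	};

	struct mydrv_gup_cache {
		struct mmu_notifier mn;
		spinlock_t lock;
		struct list_head pins;		/* struct mydrv_pin entries */
	};

	/* Drop cached pins covering [start, end) so a long-lived pin
	 * cannot block migration/compaction of those pages indefinitely. */
	static void mydrv_invalidate_range_start(struct mmu_notifier *mn,
						 struct mm_struct *mm,
						 unsigned long start,
						 unsigned long end)
	{
		struct mydrv_gup_cache *cache =
			container_of(mn, struct mydrv_gup_cache, mn);
		struct mydrv_pin *p, *tmp;

		spin_lock(&cache->lock);
		list_for_each_entry_safe(p, tmp, &cache->pins, node) {
			if (p->addr >= start && p->addr < end) {
				list_del(&p->node);
				put_page(p->page);	/* release the gup pin */
				kfree(p);
			}
		}
		spin_unlock(&cache->lock);
	}

	static const struct mmu_notifier_ops mydrv_mn_ops = {
		.invalidate_range_start	= mydrv_invalidate_range_start,
	};

	static int mydrv_track_mm(struct mydrv_gup_cache *cache,
				  struct mm_struct *mm)
	{
		cache->mn.ops = &mydrv_mn_ops;
		return mmu_notifier_register(&cache->mn, mm);
	}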
On Wed, Jun 01, 2016 at 02:41:51PM -0700, Andrew Morton wrote:
> On Wed, 1 Jun 2016 08:21:09 +0900 Minchan Kim <minchan at kernel.org> wrote:
>
> > Recently, I got many reports about performance degradation in
> > embedded systems (Android mobile phones, webOS TVs and so on) and
> > frequent fork failures.
> >
> > The problem was fragmentation caused mainly by zram and GPU drivers.
> > Under memory pressure, their pages were spread out over all
> > pageblocks and could not be migrated by the current compaction
> > algorithm, which supports only LRU pages. In the end, compaction
> > could not work well, so the reclaimer shrank all of the working-set
> > pages. It made the system very slow and even made fork, which
> > requires order-2 or order-3 allocations, fail easily.
> >
> > Another pain point is that these pages cannot use CMA memory space,
> > so when an OOM kill happens, I can see many free pages in the CMA
> > area, which is not memory efficient. In our product, which has a big
> > CMA area, zones are reclaimed too excessively to allocate GPU and
> > zram pages although there is lots of free space in CMA, so the
> > system becomes very slow easily.
>
> But this isn't presently implemented for GPU drivers or for CMA, yes?

For GPU drivers, Gioh implemented one, but it was proprietary so it
could not be contributed.

For CMA, [zram: use __GFP_MOVABLE for memory allocation] added
__GFP_MOVABLE for zsmalloc page allocation, so it can use the CMA area
automatically now.

>
> What's the story there?
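The CMA side of that answer is small enough to show inline. Below is a sketch of what the final patch in the series does, assuming the three-argument zs_malloc() present in this tree; zram_alloc_handle is a hypothetical wrapper and the exact flags are illustrative.

	/* With zsmalloc pages now migratable, zram can pass __GFP_MOVABLE
	 * so the backing pages may come from movable (including CMA)
	 * pageblocks instead of fragmenting unmovable ones. */
	static unsigned long zram_alloc_handle(struct zram_meta *meta,
					       size_t clen)
	{
		return zs_malloc(meta->mem_pool, clen,
				 GFP_NOIO | __GFP_HIGHMEM | __GFP_MOVABLE);
	}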
Hello Minchan,

-next 4.7.0-rc3-next-20160614

[ 315.146533] kasan: CONFIG_KASAN_INLINE enabled
[ 315.146538] kasan: GPF could be caused by NULL-ptr deref or user memory access
[ 315.146546] general protection fault: 0000 [#1] PREEMPT SMP KASAN
[ 315.146576] Modules linked in: lzo zram zsmalloc mousedev coretemp hwmon crc32c_intel r8169 i2c_i801 mii snd_hda_codec_realtek snd_hda_codec_generic snd_hda_intel snd_hda_codec snd_hda_core acpi_cpufreq snd_pcm snd_timer snd soundcore lpc_ich mfd_core processor sch_fq_codel sd_mod hid_generic usbhid hid ahci libahci libata ehci_pci ehci_hcd scsi_mod usbcore usb_common
[ 315.146785] CPU: 3 PID: 38 Comm: khugepaged Not tainted 4.7.0-rc3-next-20160614-dbg-00004-ga1c2cbc-dirty #488
[ 315.146841] task: ffff8800bfaf2900 ti: ffff880112468000 task.ti: ffff880112468000
[ 315.146859] RIP: 0010:[<ffffffffa02c413d>] [<ffffffffa02c413d>] zs_page_migrate+0x355/0xaa0 [zsmalloc]
[ 315.146892] RSP: 0000:ffff88011246f138 EFLAGS: 00010293
[ 315.146906] RAX: 736761742d6f6e2c RBX: ffff880017ad9a80 RCX: 0000000000000000
[ 315.146924] RDX: 1ffffffff064d704 RSI: ffff88000511469a RDI: ffffffff8326ba20
[ 315.146942] RBP: ffff88011246f328 R08: 0000000000000001 R09: 0000000000000000
[ 315.146959] R10: ffff88011246f0a8 R11: ffff8800bfc07fff R12: ffff88011246f300
[ 315.146977] R13: ffffed0015523e6f R14: ffff8800aa91f378 R15: ffffea0000144500
[ 315.146995] FS: 0000000000000000(0000) GS:ffff880113780000(0000) knlGS:0000000000000000
[ 315.147015] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 315.147030] CR2: 00007f3f97911000 CR3: 0000000002209000 CR4: 00000000000006e0
[ 315.147046] Stack:
[ 315.147052] 1ffff10015523e0f ffff88011246f240 ffff880005116800 00017f80e0000000
[ 315.147083] ffff880017ad9aa8 736761742d6f6e2c 1ffff1002248de34 ffff880017ad9a90
[ 315.147113] 0000069a1246f660 000000000000069a ffff880005114000 ffffea0002ff0180
[ 315.147143] Call Trace:
[ 315.147154] [<ffffffffa02c3de8>] ? obj_to_head+0x9d/0x9d [zsmalloc]
[ 315.147175] [<ffffffff81d31dbc>] ? _raw_spin_unlock_irqrestore+0x47/0x5c
[ 315.147195] [<ffffffff812275b1>] ? isolate_freepages_block+0x2f9/0x5a6
[ 315.147213] [<ffffffff8127f15c>] ? kasan_poison_shadow+0x2f/0x31
[ 315.147230] [<ffffffff8127f66a>] ? kasan_alloc_pages+0x39/0x3b
[ 315.147246] [<ffffffff812267e6>] ? map_pages+0x1f3/0x3ad
[ 315.147262] [<ffffffff812265f3>] ? update_pageblock_skip+0x18d/0x18d
[ 315.147280] [<ffffffff81115972>] ? up_read+0x1a/0x30
[ 315.147296] [<ffffffff8111ec7e>] ? debug_check_no_locks_freed+0x150/0x22b
[ 315.147315] [<ffffffff812842d1>] move_to_new_page+0x4dd/0x615
[ 315.147332] [<ffffffff81283df4>] ? migrate_page+0x75/0x75
[ 315.147347] [<ffffffff8122785e>] ? isolate_freepages_block+0x5a6/0x5a6
[ 315.147366] [<ffffffff812851c1>] migrate_pages+0xadd/0x131a
[ 315.147382] [<ffffffff8122785e>] ? isolate_freepages_block+0x5a6/0x5a6
[ 315.147399] [<ffffffff81226375>] ? kzfree+0x2b/0x2b
[ 315.147414] [<ffffffff812846e4>] ? buffer_migrate_page+0x2db/0x2db
[ 315.147431] [<ffffffff8122a6cf>] compact_zone+0xcdb/0x1155
[ 315.147448] [<ffffffff812299f4>] ? compaction_suitable+0x76/0x76
[ 315.147465] [<ffffffff8122ac29>] compact_zone_order+0xe0/0x167
[ 315.147481] [<ffffffff8111f0ac>] ? debug_show_all_locks+0x226/0x226
[ 315.147499] [<ffffffff8122ab49>] ? compact_zone+0x1155/0x1155
[ 315.147515] [<ffffffff810d58d1>] ? finish_task_switch+0x3de/0x484
[ 315.147533] [<ffffffff8122bcff>] try_to_compact_pages+0x2f1/0x648
[ 315.147550] [<ffffffff8122bcff>] ? try_to_compact_pages+0x2f1/0x648
[ 315.147568] [<ffffffff8122ba0e>] ? compaction_zonelist_suitable+0x3a6/0x3a6
[ 315.147589] [<ffffffff811ee129>] ? get_page_from_freelist+0x2c0/0x129a
[ 315.147608] [<ffffffff811ef1ed>] __alloc_pages_direct_compact+0xea/0x30d
[ 315.147626] [<ffffffff811ef103>] ? get_page_from_freelist+0x129a/0x129a
[ 315.147645] [<ffffffff811f0422>] __alloc_pages_nodemask+0x840/0x16b6
[ 315.147663] [<ffffffff810dba27>] ? try_to_wake_up+0x696/0x6c8
[ 315.149147] [<ffffffff811efbe2>] ? warn_alloc_failed+0x226/0x226
[ 315.150615] [<ffffffff810dba69>] ? wake_up_process+0x10/0x12
[ 315.152078] [<ffffffff810dbaf4>] ? wake_up_q+0x89/0xa7
[ 315.153539] [<ffffffff81128b6f>] ? rwsem_wake+0x131/0x15c
[ 315.155007] [<ffffffff812922e7>] ? khugepaged+0x4072/0x484f
[ 315.156471] [<ffffffff8128e449>] khugepaged+0x1d4/0x484f
[ 315.157940] [<ffffffff8128e275>] ? hugepage_vma_revalidate+0xef/0xef
[ 315.159402] [<ffffffff810d58d1>] ? finish_task_switch+0x3de/0x484
[ 315.160870] [<ffffffff81d31df8>] ? _raw_spin_unlock_irq+0x27/0x45
[ 315.162341] [<ffffffff8111cde6>] ? trace_hardirqs_on_caller+0x3d2/0x492
[ 315.163814] [<ffffffff8111112e>] ? prepare_to_wait_event+0x3f7/0x3f7
[ 315.165295] [<ffffffff81d27ad5>] ? __schedule+0xa4d/0xd16
[ 315.166763] [<ffffffff810ccde3>] kthread+0x252/0x261
[ 315.168214] [<ffffffff8128e275>] ? hugepage_vma_revalidate+0xef/0xef
[ 315.169646] [<ffffffff810ccb91>] ? kthread_create_on_node+0x377/0x377
[ 315.171056] [<ffffffff81d3277f>] ret_from_fork+0x1f/0x40
[ 315.172462] [<ffffffff810ccb91>] ? kthread_create_on_node+0x377/0x377
[ 315.173869] Code: 03 b5 60 fe ff ff e8 2e fc ff ff a8 01 74 4c 48 83 e0 fe bf 01 00 00 00 48 89 85 38 fe ff ff e8 41 18 e1 e0 48 8b 85 38 fe ff ff <f0> 0f ba 28 00 73 29 bf 01 00 00 00 41 bc f5 ff ff ff e8 ea 27
[ 315.175573] RIP [<ffffffffa02c413d>] zs_page_migrate+0x355/0xaa0 [zsmalloc]
[ 315.177084] RSP <ffff88011246f138>
[ 315.186572] ---[ end trace 0962b8ee48c98bbc ]---

[ 315.186577] BUG: sleeping function called from invalid context at include/linux/sched.h:2960
[ 315.186580] in_atomic(): 1, irqs_disabled(): 0, pid: 38, name: khugepaged
[ 315.186581] INFO: lockdep is turned off.
[ 315.186583] Preemption disabled at:
[<ffffffffa02c3f1d>] zs_page_migrate+0x135/0xaa0 [zsmalloc]
[ 315.186594] CPU: 3 PID: 38 Comm: khugepaged Tainted: G D 4.7.0-rc3-next-20160614-dbg-00004-ga1c2cbc-dirty #488
[ 315.186599] 0000000000000000 ffff88011246ed58 ffffffff814d56bf ffff8800bfaf2900
[ 315.186604] 0000000000000004 ffff88011246ed98 ffffffff810d5e6a 0000000000000000
[ 315.186609] ffff8800bfaf2900 ffffffff81e39820 0000000000000b90 0000000000000000
[ 315.186614] Call Trace:
[ 315.186618] [<ffffffff814d56bf>] dump_stack+0x68/0x92
[ 315.186622] [<ffffffff810d5e6a>] ___might_sleep+0x3bd/0x3c9
[ 315.186625] [<ffffffff810d5fd1>] __might_sleep+0x15b/0x167
[ 315.186630] [<ffffffff810ac4c1>] exit_signals+0x7a/0x34f
[ 315.186633] [<ffffffff810ac447>] ? get_signal+0xd9b/0xd9b
[ 315.186636] [<ffffffff811aee21>] ? irq_work_queue+0x101/0x11c
[ 315.186640] [<ffffffff8111f0ac>] ? debug_show_all_locks+0x226/0x226
[ 315.186645] [<ffffffff81096357>] do_exit+0x34d/0x1b4e
[ 315.186648] [<ffffffff81130e16>] ? vprintk_emit+0x4b1/0x4d3
[ 315.186652] [<ffffffff8109600a>] ? is_current_pgrp_orphaned+0x8c/0x8c
[ 315.186655] [<ffffffff81122c56>] ? lock_acquire+0xec/0x147
[ 315.186658] [<ffffffff811321ef>] ? kmsg_dump+0x12/0x27a
[ 315.186662] [<ffffffff81132448>] ? kmsg_dump+0x26b/0x27a
[ 315.186666] [<ffffffff81036507>] oops_end+0x9d/0xa4
[ 315.186669] [<ffffffff8103662c>] die+0x55/0x5e
[ 315.186672] [<ffffffff81032aa0>] do_general_protection+0x16c/0x337
[ 315.186676] [<ffffffff81d33abf>] general_protection+0x1f/0x30
[ 315.186681] [<ffffffffa02c413d>] ? zs_page_migrate+0x355/0xaa0 [zsmalloc]
[ 315.186686] [<ffffffffa02c4136>] ? zs_page_migrate+0x34e/0xaa0 [zsmalloc]
[ 315.186691] [<ffffffffa02c3de8>] ? obj_to_head+0x9d/0x9d [zsmalloc]
[ 315.186695] [<ffffffff81d31dbc>] ? _raw_spin_unlock_irqrestore+0x47/0x5c
[ 315.186699] [<ffffffff812275b1>] ? isolate_freepages_block+0x2f9/0x5a6
[ 315.186702] [<ffffffff8127f15c>] ? kasan_poison_shadow+0x2f/0x31
[ 315.186706] [<ffffffff8127f66a>] ? kasan_alloc_pages+0x39/0x3b
[ 315.186709] [<ffffffff812267e6>] ? map_pages+0x1f3/0x3ad
[ 315.186712] [<ffffffff812265f3>] ? update_pageblock_skip+0x18d/0x18d
[ 315.186716] [<ffffffff81115972>] ? up_read+0x1a/0x30
[ 315.186719] [<ffffffff8111ec7e>] ? debug_check_no_locks_freed+0x150/0x22b
[ 315.186723] [<ffffffff812842d1>] move_to_new_page+0x4dd/0x615
[ 315.186726] [<ffffffff81283df4>] ? migrate_page+0x75/0x75
[ 315.186730] [<ffffffff8122785e>] ? isolate_freepages_block+0x5a6/0x5a6
[ 315.186733] [<ffffffff812851c1>] migrate_pages+0xadd/0x131a
[ 315.186737] [<ffffffff8122785e>] ? isolate_freepages_block+0x5a6/0x5a6
[ 315.186740] [<ffffffff81226375>] ? kzfree+0x2b/0x2b
[ 315.186743] [<ffffffff812846e4>] ? buffer_migrate_page+0x2db/0x2db
[ 315.186747] [<ffffffff8122a6cf>] compact_zone+0xcdb/0x1155
[ 315.186751] [<ffffffff812299f4>] ? compaction_suitable+0x76/0x76
[ 315.186754] [<ffffffff8122ac29>] compact_zone_order+0xe0/0x167
[ 315.186757] [<ffffffff8111f0ac>] ? debug_show_all_locks+0x226/0x226
[ 315.186761] [<ffffffff8122ab49>] ? compact_zone+0x1155/0x1155
[ 315.186764] [<ffffffff810d58d1>] ? finish_task_switch+0x3de/0x484
[ 315.186768] [<ffffffff8122bcff>] try_to_compact_pages+0x2f1/0x648
[ 315.186771] [<ffffffff8122bcff>] ? try_to_compact_pages+0x2f1/0x648
[ 315.186775] [<ffffffff8122ba0e>] ? compaction_zonelist_suitable+0x3a6/0x3a6
[ 315.186780] [<ffffffff811ee129>] ? get_page_from_freelist+0x2c0/0x129a
[ 315.186783] [<ffffffff811ef1ed>] __alloc_pages_direct_compact+0xea/0x30d
[ 315.186787] [<ffffffff811ef103>] ? get_page_from_freelist+0x129a/0x129a
[ 315.186791] [<ffffffff811f0422>] __alloc_pages_nodemask+0x840/0x16b6
[ 315.186794] [<ffffffff810dba27>] ? try_to_wake_up+0x696/0x6c8
[ 315.186798] [<ffffffff811efbe2>] ? warn_alloc_failed+0x226/0x226
[ 315.186801] [<ffffffff810dba69>] ? wake_up_process+0x10/0x12
[ 315.186804] [<ffffffff810dbaf4>] ? wake_up_q+0x89/0xa7
[ 315.186807] [<ffffffff81128b6f>] ? rwsem_wake+0x131/0x15c
[ 315.186811] [<ffffffff812922e7>] ? khugepaged+0x4072/0x484f
[ 315.186815] [<ffffffff8128e449>] khugepaged+0x1d4/0x484f
[ 315.186819] [<ffffffff8128e275>] ? hugepage_vma_revalidate+0xef/0xef
[ 315.186822] [<ffffffff810d58d1>] ? finish_task_switch+0x3de/0x484
[ 315.186826] [<ffffffff81d31df8>] ? _raw_spin_unlock_irq+0x27/0x45
[ 315.186829] [<ffffffff8111cde6>] ? trace_hardirqs_on_caller+0x3d2/0x492
[ 315.186832] [<ffffffff8111112e>] ? prepare_to_wait_event+0x3f7/0x3f7
[ 315.186836] [<ffffffff81d27ad5>] ? __schedule+0xa4d/0xd16
[ 315.186840] [<ffffffff810ccde3>] kthread+0x252/0x261
[ 315.186843] [<ffffffff8128e275>] ? hugepage_vma_revalidate+0xef/0xef
[ 315.186846] [<ffffffff810ccb91>] ? kthread_create_on_node+0x377/0x377
[ 315.186851] [<ffffffff81d3277f>] ret_from_fork+0x1f/0x40
[ 315.186854] [<ffffffff810ccb91>] ? kthread_create_on_node+0x377/0x377
[ 315.186869] note: khugepaged[38] exited with preempt_count 4

[ 340.319852] NMI watchdog: BUG: soft lockup - CPU#2 stuck for 22s! [jbd2/zram0-8:405]
[ 340.319856] Modules linked in: lzo zram zsmalloc mousedev coretemp hwmon crc32c_intel r8169 i2c_i801 mii snd_hda_codec_realtek snd_hda_codec_generic snd_hda_intel snd_hda_codec snd_hda_core acpi_cpufreq snd_pcm snd_timer snd soundcore lpc_ich mfd_core processor sch_fq_codel sd_mod hid_generic usbhid hid ahci libahci libata ehci_pci ehci_hcd scsi_mod usbcore usb_common
[ 340.319900] irq event stamp: 834296
[ 340.319902] hardirqs last enabled at (834295): [<ffffffff81280b07>] quarantine_put+0xa1/0xe6
[ 340.319911] hardirqs last disabled at (834296): [<ffffffff81d31e68>] _raw_write_lock_irqsave+0x13/0x4c
[ 340.319917] softirqs last enabled at (833836): [<ffffffff81d3455e>] __do_softirq+0x406/0x48f
[ 340.319922] softirqs last disabled at (833831): [<ffffffff8109914a>] irq_exit+0x6a/0x113
[ 340.319929] CPU: 2 PID: 405 Comm: jbd2/zram0-8 Tainted: G D 4.7.0-rc3-next-20160614-dbg-00004-ga1c2cbc-dirty #488
[ 340.319935] task: ffff8800bb512900 ti: ffff8800a69c0000 task.ti: ffff8800a69c0000
[ 340.319937] RIP: 0010:[<ffffffff814ed772>] [<ffffffff814ed772>] delay_tsc+0x0/0xa4
[ 340.319943] RSP: 0018:ffff8800a69c70f8 EFLAGS: 00000206
[ 340.319945] RAX: 0000000000000001 RBX: ffff8800aa91f300 RCX: 0000000000000000
[ 340.319947] RDX: 0000000000000003 RSI: ffffffff81ed2840 RDI: 0000000000000001
[ 340.319949] RBP: ffff8800a69c7100 R08: 0000000000000001 R09: 0000000000000000
[ 340.319951] R10: ffff8800a69c70e8 R11: 000000007e7516b9 R12: ffff8800aa91f310
[ 340.319954] R13: ffff8800aa91f308 R14: 000000001f3306fa R15: 0000000000000000
[ 340.319956] FS: 0000000000000000(0000) GS:ffff880113700000(0000) knlGS:0000000000000000
[ 340.319959] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 340.319961] CR2: 00007fc99caba080 CR3: 00000000b9796000 CR4: 00000000000006e0
[ 340.319963] Stack:
[ 340.319964] ffffffff814ed89c ffff8800a69c7148 ffffffff8112795d ffffed0015523e60
[ 340.319970] 000000009e857390 ffff8800aa91f300 ffff8800bbe21cc0 ffff8800047d6f80
[ 340.319975] ffff8800a69c72b0 ffff8800aa91f300 ffff8800a69c7168 ffffffff81d31bed
[ 340.319980] Call Trace:
[ 340.319983] [<ffffffff814ed89c>] ? __delay+0xa/0xc
[ 340.319988] [<ffffffff8112795d>] do_raw_spin_lock+0x197/0x257
[ 340.319991] [<ffffffff81d31bed>] _raw_spin_lock+0x35/0x3c
[ 340.319998] [<ffffffffa02c6062>] ? zs_free+0x191/0x27a [zsmalloc]
[ 340.320003] [<ffffffffa02c6062>] zs_free+0x191/0x27a [zsmalloc]
[ 340.320008] [<ffffffffa02c5ed1>] ? free_zspage+0xe8/0xe8 [zsmalloc]
[ 340.320012] [<ffffffff810d58d1>] ? finish_task_switch+0x3de/0x484
[ 340.320015] [<ffffffff810d58a6>] ? finish_task_switch+0x3b3/0x484
[ 340.320021] [<ffffffff81d27ad5>] ? __schedule+0xa4d/0xd16
[ 340.320024] [<ffffffff81d28086>] ? preempt_schedule+0x1f/0x21
[ 340.320028] [<ffffffff81d27ff9>] ? preempt_schedule_common+0xb7/0xe8
[ 340.320034] [<ffffffffa02d3f0e>] zram_free_page+0x112/0x1f6 [zram]
[ 340.320039] [<ffffffffa02d5e6c>] zram_make_request+0x45d/0x89f [zram]
[ 340.320045] [<ffffffffa02d5a0f>] ? zram_rw_page+0x21d/0x21d [zram]
[ 340.320048] [<ffffffff81493657>] ? blk_exit_rl+0x39/0x39
[ 340.320053] [<ffffffff8148fe3f>] ? handle_bad_sector+0x192/0x192
[ 340.320056] [<ffffffff8127f83e>] ? kasan_slab_alloc+0x12/0x14
[ 340.320059] [<ffffffff8127ca68>] ? kmem_cache_alloc+0xf3/0x101
[ 340.320062] [<ffffffff81494e37>] generic_make_request+0x2bc/0x496
[ 340.320066] [<ffffffff81494b7b>] ? blk_plug_queued_count+0x103/0x103
[ 340.320069] [<ffffffff8111ec7e>] ? debug_check_no_locks_freed+0x150/0x22b
[ 340.320072] [<ffffffff81495309>] submit_bio+0x2f8/0x324
[ 340.320075] [<ffffffff81495011>] ? generic_make_request+0x496/0x496
[ 340.320078] [<ffffffff811190fc>] ? lockdep_init_map+0x1ef/0x4b0
[ 340.320082] [<ffffffff814880a4>] submit_bio_wait+0xff/0x138
[ 340.320085] [<ffffffff81487fa5>] ? bio_add_page+0x292/0x292
[ 340.320090] [<ffffffff814ab82c>] blkdev_issue_discard+0xee/0x148
[ 340.320093] [<ffffffff814ab73e>] ? __blkdev_issue_discard+0x399/0x399
[ 340.320097] [<ffffffff8111f0ac>] ? debug_show_all_locks+0x226/0x226
[ 340.320101] [<ffffffff81404de8>] ext4_free_data_callback+0x2cc/0x8bc
[ 340.320104] [<ffffffff81404de8>] ? ext4_free_data_callback+0x2cc/0x8bc
[ 340.320107] [<ffffffff81404b1c>] ? ext4_mb_release_context+0x10aa/0x10aa
[ 340.320111] [<ffffffff81122c56>] ? lock_acquire+0xec/0x147
[ 340.320115] [<ffffffff813c8a6a>] ? ext4_journal_commit_callback+0x203/0x220
[ 340.320119] [<ffffffff813c8a61>] ext4_journal_commit_callback+0x1fa/0x220
[ 340.320124] [<ffffffff81438bf5>] jbd2_journal_commit_transaction+0x3753/0x3c20
[ 340.320128] [<ffffffff814354a2>] ? journal_submit_commit_record+0x777/0x777
[ 340.320132] [<ffffffff8111f0ac>] ? debug_show_all_locks+0x226/0x226
[ 340.320135] [<ffffffff811205a5>] ? __lock_acquire+0x14f9/0x33b8
[ 340.320139] [<ffffffff81d31db0>] ? _raw_spin_unlock_irqrestore+0x3b/0x5c
[ 340.320143] [<ffffffff8111cde6>] ? trace_hardirqs_on_caller+0x3d2/0x492
[ 340.320146] [<ffffffff81d31dbc>] ? _raw_spin_unlock_irqrestore+0x47/0x5c
[ 340.320151] [<ffffffff81156945>] ? try_to_del_timer_sync+0xa5/0xce
[ 340.320154] [<ffffffff8111cde6>] ? trace_hardirqs_on_caller+0x3d2/0x492
[ 340.320157] [<ffffffff8143febd>] kjournald2+0x246/0x6e1
[ 340.320160] [<ffffffff8143febd>] ? kjournald2+0x246/0x6e1
[ 340.320163] [<ffffffff8143fc77>] ? commit_timeout+0xb/0xb
[ 340.320167] [<ffffffff8111112e>] ? prepare_to_wait_event+0x3f7/0x3f7
[ 340.320171] [<ffffffff810ccde3>] kthread+0x252/0x261
[ 340.320174] [<ffffffff8143fc77>] ? commit_timeout+0xb/0xb
[ 340.320177] [<ffffffff810ccb91>] ? kthread_create_on_node+0x377/0x377
[ 340.320181] [<ffffffff81d3277f>] ret_from_fork+0x1f/0x40
[ 340.320185] [<ffffffff810ccb91>] ? kthread_create_on_node+0x377/0x377
[ 340.320186] Code: 5c 5d c3 55 48 8d 04 bd 00 00 00 00 65 48 8b 15 8d 59 b2 7e 48 69 d2 fa 00 00 00 48 89 e5 f7 e2 48 8d 7a 01 e8 22 01 00 00 5d c3 <55> 48 89 e5 41 56 41 55 41 54 53 49 89 fd bf 01 00 00 00 e8 ed

-ss
Hi Sergey,

On Wed, Jun 15, 2016 at 04:59:09PM +0900, Sergey Senozhatsky wrote:
> Hello Minchan,
>
> -next 4.7.0-rc3-next-20160614
>
> [ 315.146533] kasan: CONFIG_KASAN_INLINE enabled
> [ 315.146538] kasan: GPF could be caused by NULL-ptr deref or user memory access
> [ 315.146546] general protection fault: 0000 [#1] PREEMPT SMP KASAN
[..]
> [ 315.146859] RIP: 0010:[<ffffffffa02c413d>] [<ffffffffa02c413d>] zs_page_migrate+0x355/0xaa0 [zsmalloc]

Thanks for the report!

zs_page_migrate+0x355? Could you tell me which source line that is? It
seems to be related to obj_to_head.

Could you test with [zsmalloc: keep first object offset in struct page]
in mmotm?

[..]