Rafael Aquini
2012-Aug-10 17:55 UTC
[PATCH v7 0/4] make balloon pages movable by compaction
Memory fragmentation introduced by ballooning might significantly reduce the
number of 2MB contiguous memory blocks that can be used within a guest, thus
imposing performance penalties associated with the reduced number of
transparent huge pages that could be used by the guest workload.

This patch set follows the main idea discussed at the 2012 LSFMMS session
"Ballooning for transparent huge pages" -- http://lwn.net/Articles/490114/ --
and introduces the required changes to the virtio_balloon driver, as well as
to the core compaction & migration bits, in order to make those subsystems
aware of ballooned pages and to allow memory balloon pages to become movable
within a guest, thus avoiding the aforementioned fragmentation issue.

Rafael Aquini (4):
  mm: introduce compaction and migration for virtio ballooned pages
  virtio_balloon: introduce migration primitives to balloon pages
  mm: introduce putback_movable_pages()
  mm: add vm event counters for balloon pages compaction

 drivers/virtio/virtio_balloon.c | 139 +++++++++++++++++++++++++++++++++++++---
 include/linux/migrate.h         |   2 +
 include/linux/mm.h              |  17 +++++
 include/linux/virtio_balloon.h  |   4 ++
 include/linux/vm_event_item.h   |   8 ++-
 mm/compaction.c                 | 131 +++++++++++++++++++++++++++++++------
 mm/migrate.c                    |  51 ++++++++++++++-
 mm/page_alloc.c                 |   2 +-
 mm/vmstat.c                     |  10 ++-
 9 files changed, 331 insertions(+), 33 deletions(-)

Change log:

v7:
 * fix a potential page leak case at 'putback_balloon_page' (Mel);
 * adjust the vm-events-counter patch and remove its drop-on-merge message (Rik);
 * add 'putback_movable_pages' to avoid hacks on 'putback_lru_pages' (Minchan);
v6:
 * rename 'is_balloon_page()' to 'movable_balloon_page()' (Rik);
v5:
 * address Andrew Morton's review comments on the patch series;
 * address a couple of extra nitpick suggestions on PATCH 01 (Minchan);
v4:
 * address Rusty Russell's review comments on PATCH 02;
 * re-base the virtio_balloon patch on 9c378abc5c0c6fc8e3acf5968924d274503819b3;
v3:
 * address reviewers' nitpick suggestions on PATCH 01 (Mel, Minchan);
v2:
 * address Mel Gorman's review comments on PATCH 01;

Preliminary test results:
(2 VCPU, 2048 MB RAM KVM guest running 3.6.0_rc1+ -- after a reboot)

* 64 MB balloon:
[root@localhost ~]# awk '/compact/ {print}' /proc/vmstat
compact_blocks_moved 0
compact_pages_moved 0
compact_pagemigrate_failed 0
compact_stall 0
compact_fail 0
compact_success 0
compact_balloon_isolated 0
compact_balloon_migrated 0
compact_balloon_returned 0
compact_balloon_released 0
[root@localhost ~]#
[root@localhost ~]# for i in $(seq 1 6); do echo 1 > /proc/sys/vm/compact_memory & done &>/dev/null
[1]   Done                    echo 1 > /proc/sys/vm/compact_memory
[2]   Done                    echo 1 > /proc/sys/vm/compact_memory
[3]   Done                    echo 1 > /proc/sys/vm/compact_memory
[4]   Done                    echo 1 > /proc/sys/vm/compact_memory
[5]-  Done                    echo 1 > /proc/sys/vm/compact_memory
[6]+  Done                    echo 1 > /proc/sys/vm/compact_memory
[root@localhost ~]#
[root@localhost ~]# awk '/compact/ {print}' /proc/vmstat
compact_blocks_moved 6579
compact_pages_moved 50114
compact_pagemigrate_failed 111
compact_stall 0
compact_fail 0
compact_success 0
compact_balloon_isolated 18361
compact_balloon_migrated 18306
compact_balloon_returned 55
compact_balloon_released 18306

* 128 MB balloon:
[root@localhost ~]# awk '/compact/ {print}' /proc/vmstat
compact_blocks_moved 0
compact_pages_moved 0
compact_pagemigrate_failed 0
compact_stall 0
compact_fail 0
compact_success 0
compact_balloon_isolated 0
compact_balloon_migrated 0
compact_balloon_returned 0
compact_balloon_released 0
[root@localhost ~]#
[root@localhost ~]# for i in $(seq 1 6); do echo 1 > /proc/sys/vm/compact_memory & done &>/dev/null
[1]   Done                    echo 1 > /proc/sys/vm/compact_memory
[2]   Done                    echo 1 > /proc/sys/vm/compact_memory
[3]   Done                    echo 1 > /proc/sys/vm/compact_memory
[4]   Done                    echo 1 > /proc/sys/vm/compact_memory
[5]-  Done                    echo 1 > /proc/sys/vm/compact_memory
[6]+  Done                    echo 1 > /proc/sys/vm/compact_memory
[root@localhost ~]#
[root@localhost ~]# awk '/compact/ {print}' /proc/vmstat
compact_blocks_moved 6789
compact_pages_moved 64479
compact_pagemigrate_failed 127
compact_stall 0
compact_fail 0
compact_success 0
compact_balloon_isolated 33937
compact_balloon_migrated 33869
compact_balloon_returned 68
compact_balloon_released 33869

-- 
1.7.11.2
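A note on reproducing the numbers above: the stand-alone program below is a
minimal sketch (not part of the series) that reads /proc/vmstat and summarizes
the four balloon counters added by PATCH 4/4. It assumes a kernel carrying this
series, with the counter names exactly as shown in the output above.

/*
 * vmstat-balloon.c -- minimal sketch, not part of the posted series.
 * Summarizes the balloon compaction counters added by PATCH 4/4.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/vmstat", "r");
	char line[128];
	unsigned long long isolated = 0, migrated = 0, returned = 0, released = 0;

	if (!f) {
		perror("/proc/vmstat");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		char name[64];
		unsigned long long val;

		if (sscanf(line, "%63s %llu", name, &val) != 2)
			continue;
		if (!strcmp(name, "compact_balloon_isolated"))
			isolated = val;
		else if (!strcmp(name, "compact_balloon_migrated"))
			migrated = val;
		else if (!strcmp(name, "compact_balloon_returned"))
			returned = val;
		else if (!strcmp(name, "compact_balloon_released"))
			released = val;
	}
	fclose(f);

	printf("isolated=%llu migrated=%llu returned=%llu released=%llu\n",
	       isolated, migrated, returned, released);
	/* In the runs quoted above, isolated == migrated + returned and
	 * released == migrated held exactly. */
	printf("isolated - (migrated + returned) = %lld\n",
	       (long long)isolated - (long long)(migrated + returned));
	return 0;
}

Compile with gcc and run it before and after forcing compaction through
/proc/sys/vm/compact_memory, as done in the test above.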
Rafael Aquini
2012-Aug-10 17:55 UTC
[PATCH v7 1/4] mm: introduce compaction and migration for virtio ballooned pages
Memory fragmentation introduced by ballooning might significantly reduce the
number of 2MB contiguous memory blocks that can be used within a guest, thus
imposing performance penalties associated with the reduced number of
transparent huge pages that could be used by the guest workload.

This patch introduces the helper functions as well as the necessary changes
to teach compaction and migration bits how to cope with pages which are
part of a guest memory balloon, in order to make them movable by memory
compaction procedures.

Signed-off-by: Rafael Aquini <aquini@redhat.com>
---
 include/linux/mm.h |  17 ++++++++
 mm/compaction.c    | 125 +++++++++++++++++++++++++++++++++++++++++++++--------
 mm/migrate.c       |  30 ++++++++++++++-
 3 files changed, 152 insertions(+), 20 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 311be90..56cc553 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1662,5 +1662,22 @@ static inline unsigned int debug_guardpage_minorder(void) { return 0; }
 static inline bool page_is_guard(struct page *page) { return false; }
 #endif /* CONFIG_DEBUG_PAGEALLOC */
 
+#if (defined(CONFIG_VIRTIO_BALLOON) || \
+	defined(CONFIG_VIRTIO_BALLOON_MODULE)) && defined(CONFIG_COMPACTION)
+extern bool isolate_balloon_page(struct page *);
+extern void putback_balloon_page(struct page *);
+extern struct address_space *balloon_mapping;
+
+static inline bool movable_balloon_page(struct page *page)
+{
+	return (page->mapping && page->mapping == balloon_mapping);
+}
+
+#else
+static inline bool isolate_balloon_page(struct page *page) { return false; }
+static inline void putback_balloon_page(struct page *page) { }
+static inline bool movable_balloon_page(struct page *page) { return false; }
+#endif /* (VIRTIO_BALLOON || VIRTIO_BALLOON_MODULE) && CONFIG_COMPACTION */
+
 #endif /* __KERNEL__ */
 #endif /* _LINUX_MM_H */
diff --git a/mm/compaction.c b/mm/compaction.c
index e78cb96..e4e871b 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -14,6 +14,7 @@
 #include <linux/backing-dev.h>
 #include <linux/sysctl.h>
 #include <linux/sysfs.h>
+#include <linux/export.h>
 #include "internal.h"
 
 #if defined CONFIG_COMPACTION || defined CONFIG_CMA
@@ -21,6 +22,84 @@
 #define CREATE_TRACE_POINTS
 #include <trace/events/compaction.h>
 
+#if defined(CONFIG_VIRTIO_BALLOON) || defined(CONFIG_VIRTIO_BALLOON_MODULE)
+/*
+ * Balloon pages special page->mapping.
+ * Users must properly allocate and initialize an instance of balloon_mapping,
+ * and set it as the page->mapping for balloon enlisted page instances.
+ * There is no need to utilize struct address_space locking schemes for
+ * balloon_mapping as, once it gets initialized at balloon driver, it will
+ * remain just like a static reference that helps us on identifying a guest
+ * ballooned page by its mapping, as well as it will keep the 'a_ops' callback
+ * pointers to the functions that will execute the balloon page mobility tasks.
+ *
+ * address_space_operations necessary methods for ballooned pages:
+ *   .migratepage    - used to perform balloon's page migration (as is)
+ *   .invalidatepage - used to isolate a page from balloon's page list
+ *   .freepage       - used to reinsert an isolated page to balloon's page list
+ */
+struct address_space *balloon_mapping;
+EXPORT_SYMBOL_GPL(balloon_mapping);
+
+static inline void __isolate_balloon_page(struct page *page)
+{
+	page->mapping->a_ops->invalidatepage(page, 0);
+}
+
+static inline void __putback_balloon_page(struct page *page)
+{
+	page->mapping->a_ops->freepage(page);
+}
+
+/* __isolate_lru_page() counterpart for a ballooned page */
+bool isolate_balloon_page(struct page *page)
+{
+	if (WARN_ON(!movable_balloon_page(page)))
+		return false;
+
+	if (likely(get_page_unless_zero(page))) {
+		/*
+		 * As balloon pages are not isolated from LRU lists, concurrent
+		 * compaction threads can race against page migration functions
+		 * move_to_new_page() & __unmap_and_move().
+		 * In order to avoid having an already isolated balloon page
+		 * being (wrongly) re-isolated while it is under migration,
+		 * let's be sure we have the page lock before proceeding with
+		 * the balloon page isolation steps.
+		 */
+		if (likely(trylock_page(page))) {
+			/*
+			 * A ballooned page, by default, has just one refcount.
+			 * Prevent concurrent compaction threads from isolating
+			 * an already isolated balloon page.
+			 */
+			if (movable_balloon_page(page) &&
+			    (page_count(page) == 2)) {
+				__isolate_balloon_page(page);
+				unlock_page(page);
+				return true;
+			}
+			unlock_page(page);
+		}
+		/* Drop refcount taken for this already isolated page */
+		put_page(page);
+	}
+	return false;
+}
+
+/* putback_lru_page() counterpart for a ballooned page */
+void putback_balloon_page(struct page *page)
+{
+	if (WARN_ON(!movable_balloon_page(page)))
+		return;
+
+	lock_page(page);
+	__putback_balloon_page(page);
+	put_page(page);
+	unlock_page(page);
+}
+#endif /* CONFIG_VIRTIO_BALLOON || CONFIG_VIRTIO_BALLOON_MODULE */
+
 static unsigned long release_freepages(struct list_head *freelist)
 {
 	struct page *page, *next;
@@ -312,32 +391,40 @@ isolate_migratepages_range(struct zone *zone, struct compact_control *cc,
 			continue;
 		}
 
-		if (!PageLRU(page))
-			continue;
-
 		/*
-		 * PageLRU is set, and lru_lock excludes isolation,
-		 * splitting and collapsing (collapsing has already
-		 * happened if PageLRU is set).
+		 * It is possible to migrate LRU pages and balloon pages.
+		 * Skip any other type of page.
 		 */
-		if (PageTransHuge(page)) {
-			low_pfn += (1 << compound_order(page)) - 1;
-			continue;
-		}
+		if (PageLRU(page)) {
+			/*
+			 * PageLRU is set, and lru_lock excludes isolation,
+			 * splitting and collapsing (collapsing has already
+			 * happened if PageLRU is set).
+			 */
+			if (PageTransHuge(page)) {
+				low_pfn += (1 << compound_order(page)) - 1;
+				continue;
+			}
 
-		if (!cc->sync)
-			mode |= ISOLATE_ASYNC_MIGRATE;
+			if (!cc->sync)
+				mode |= ISOLATE_ASYNC_MIGRATE;
 
-		lruvec = mem_cgroup_page_lruvec(page, zone);
+			lruvec = mem_cgroup_page_lruvec(page, zone);
 
-		/* Try isolate the page */
-		if (__isolate_lru_page(page, mode) != 0)
-			continue;
+			/* Try isolate the page */
+			if (__isolate_lru_page(page, mode) != 0)
+				continue;
+
+			VM_BUG_ON(PageTransCompound(page));
 
-		VM_BUG_ON(PageTransCompound(page));
+			/* Successfully isolated */
+			del_page_from_lru_list(page, lruvec, page_lru(page));
+		} else if (unlikely(movable_balloon_page(page))) {
+			if (!isolate_balloon_page(page))
+				continue;
+		} else
+			continue;
 
-		/* Successfully isolated */
-		del_page_from_lru_list(page, lruvec, page_lru(page));
 		list_add(&page->lru, migratelist);
 		cc->nr_migratepages++;
 		nr_isolated++;
diff --git a/mm/migrate.c b/mm/migrate.c
index 77ed2d7..80f22bb 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -79,7 +79,10 @@ void putback_lru_pages(struct list_head *l)
 		list_del(&page->lru);
 		dec_zone_page_state(page, NR_ISOLATED_ANON +
 				page_is_file_cache(page));
-		putback_lru_page(page);
+		if (unlikely(movable_balloon_page(page)))
+			putback_balloon_page(page);
+		else
+			putback_lru_page(page);
 	}
 }
 
@@ -778,6 +781,17 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 		}
 	}
 
+	if (unlikely(movable_balloon_page(page))) {
+		/*
+		 * A ballooned page does not need any special attention from
+		 * physical to virtual reverse mapping procedures.
+		 * Skip any attempt to unmap PTEs or to remap swap cache,
+		 * in order to avoid burning cycles at rmap level.
+		 */
+		remap_swapcache = 0;
+		goto skip_unmap;
+	}
+
 	/*
 	 * Corner case handling:
 	 * 1. When a new swap-cache page is read into, it is added to the LRU
@@ -846,6 +860,20 @@ static int unmap_and_move(new_page_t get_new_page, unsigned long private,
 		goto out;
 
 	rc = __unmap_and_move(page, newpage, force, offlining, mode);
+
+	if (unlikely(movable_balloon_page(newpage))) {
+		/*
+		 * A ballooned page has been migrated already. Now, it is the
+		 * time to wrap-up counters, handle the old page back to Buddy
+		 * and return.
+		 */
+		list_del(&page->lru);
+		dec_zone_page_state(page, NR_ISOLATED_ANON +
+				    page_is_file_cache(page));
+		put_page(page);
+		__free_page(page);
+		return rc;
+	}
 out:
 	if (rc != -EAGAIN) {
 		/*
-- 
1.7.11.2
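A note for balloon driver writers: the helpers above define a contract rather
than a call chain -- compaction treats as a balloon page anything whose
page->mapping equals the published balloon_mapping, and drives it through that
mapping's a_ops. The page_count(page) == 2 check works because an enlisted page
is expected to hold exactly one driver reference besides the one taken by
get_page_unless_zero(). The fragment below is a hedged sketch of the driver-side
half of that contract; the dummy_* names are invented here, and the real
virtio_balloon wiring follows in PATCH 2/4.

/* Hedged sketch, not from this series: how a driver is expected to tag and
 * untag its pages so that movable_balloon_page() and the a_ops callbacks
 * above can find them.  See PATCH 2/4 for the real virtio_balloon code. */
static LIST_HEAD(dummy_balloon_pages);
static DEFINE_SPINLOCK(dummy_pages_lock);

static void dummy_balloon_enlist_page(struct page *page)
{
	spin_lock(&dummy_pages_lock);
	list_add(&page->lru, &dummy_balloon_pages);
	page->mapping = balloon_mapping;	/* makes the page "movable" */
	spin_unlock(&dummy_pages_lock);
}

static void dummy_balloon_delist_page(struct page *page)
{
	spin_lock(&dummy_pages_lock);
	page->mapping = NULL;			/* page is leaving the balloon */
	list_del(&page->lru);
	spin_unlock(&dummy_pages_lock);
}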
Rafael Aquini
2012-Aug-10 17:55 UTC
[PATCH v7 2/4] virtio_balloon: introduce migration primitives to balloon pages
Memory fragmentation introduced by ballooning might significantly reduce the
number of 2MB contiguous memory blocks that can be used within a guest, thus
imposing performance penalties associated with the reduced number of
transparent huge pages that could be used by the guest workload.

Besides making balloon pages movable at allocation time and introducing the
necessary primitives to perform balloon page migration/compaction, this patch
also introduces the following locking scheme to provide the proper
synchronization and protection for struct virtio_balloon elements against
concurrent accesses due to parallel operations introduced by memory
compaction / page migration:

 - balloon_lock (mutex)  : synchronizes the access demand to elements of
                           struct virtio_balloon and its queue operations;
 - pages_lock  (spinlock): special protection to the balloon pages list
                           against concurrent list handling operations;

Signed-off-by: Rafael Aquini <aquini@redhat.com>
---
 drivers/virtio/virtio_balloon.c | 138 +++++++++++++++++++++++++++++++++++++---
 include/linux/virtio_balloon.h  |   4 ++
 2 files changed, 134 insertions(+), 8 deletions(-)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index 0908e60..7c937a0 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -27,6 +27,7 @@
 #include <linux/delay.h>
 #include <linux/slab.h>
 #include <linux/module.h>
+#include <linux/fs.h>
 
 /*
  * Balloon device works in 4K page units.  So each page is pointed to by
@@ -35,6 +36,12 @@
  */
 #define VIRTIO_BALLOON_PAGES_PER_PAGE (PAGE_SIZE >> VIRTIO_BALLOON_PFN_SHIFT)
 
+/* Synchronizes accesses/updates to the struct virtio_balloon elements */
+DEFINE_MUTEX(balloon_lock);
+
+/* Protects 'virtio_balloon->pages' list against concurrent handling */
+DEFINE_SPINLOCK(pages_lock);
+
 struct virtio_balloon
 {
 	struct virtio_device *vdev;
@@ -51,6 +58,7 @@ struct virtio_balloon
 
 	/* Number of balloon pages we've told the Host we're not using. */
 	unsigned int num_pages;
+
 	/*
 	 * The pages we've told the Host we're not using.
 	 * Each page on this list adds VIRTIO_BALLOON_PAGES_PER_PAGE
@@ -125,10 +133,12 @@ static void fill_balloon(struct virtio_balloon *vb, size_t num)
 	/* We can only do one array worth at a time. */
 	num = min(num, ARRAY_SIZE(vb->pfns));
 
+	mutex_lock(&balloon_lock);
 	for (vb->num_pfns = 0; vb->num_pfns < num;
 	     vb->num_pfns += VIRTIO_BALLOON_PAGES_PER_PAGE) {
-		struct page *page = alloc_page(GFP_HIGHUSER | __GFP_NORETRY |
-					__GFP_NOMEMALLOC | __GFP_NOWARN);
+		struct page *page = alloc_page(GFP_HIGHUSER_MOVABLE |
+					__GFP_NORETRY | __GFP_NOWARN |
+					__GFP_NOMEMALLOC);
 		if (!page) {
 			if (printk_ratelimit())
 				dev_printk(KERN_INFO, &vb->vdev->dev,
@@ -141,7 +151,10 @@ static void fill_balloon(struct virtio_balloon *vb, size_t num)
 		set_page_pfns(vb->pfns + vb->num_pfns, page);
 		vb->num_pages += VIRTIO_BALLOON_PAGES_PER_PAGE;
 		totalram_pages--;
+		spin_lock(&pages_lock);
 		list_add(&page->lru, &vb->pages);
+		page->mapping = balloon_mapping;
+		spin_unlock(&pages_lock);
 	}
 
 	/* Didn't get any?  Oh well. */
@@ -149,6 +162,7 @@ static void fill_balloon(struct virtio_balloon *vb, size_t num)
 		return;
 
 	tell_host(vb, vb->inflate_vq);
+	mutex_unlock(&balloon_lock);
 }
 
 static void release_pages_by_pfn(const u32 pfns[], unsigned int num)
@@ -169,10 +183,22 @@ static void leak_balloon(struct virtio_balloon *vb, size_t num)
 	/* We can only do one array worth at a time. */
 	num = min(num, ARRAY_SIZE(vb->pfns));
 
+	mutex_lock(&balloon_lock);
 	for (vb->num_pfns = 0; vb->num_pfns < num;
 	     vb->num_pfns += VIRTIO_BALLOON_PAGES_PER_PAGE) {
+		/*
+		 * We can race against virtballoon_isolatepage() and end up
+		 * stumbling across a _temporarily_ empty 'pages' list.
+		 */
+		spin_lock(&pages_lock);
+		if (unlikely(list_empty(&vb->pages))) {
+			spin_unlock(&pages_lock);
+			break;
+		}
 		page = list_first_entry(&vb->pages, struct page, lru);
+		page->mapping = NULL;
 		list_del(&page->lru);
+		spin_unlock(&pages_lock);
 		set_page_pfns(vb->pfns + vb->num_pfns, page);
 		vb->num_pages -= VIRTIO_BALLOON_PAGES_PER_PAGE;
 	}
@@ -182,8 +208,11 @@ static void leak_balloon(struct virtio_balloon *vb, size_t num)
 	 * virtio_has_feature(vdev, VIRTIO_BALLOON_F_MUST_TELL_HOST);
 	 * is true, we *have* to do it in this order
 	 */
-	tell_host(vb, vb->deflate_vq);
-	release_pages_by_pfn(vb->pfns, vb->num_pfns);
+	if (vb->num_pfns > 0) {
+		tell_host(vb, vb->deflate_vq);
+		release_pages_by_pfn(vb->pfns, vb->num_pfns);
+	}
+	mutex_unlock(&balloon_lock);
 }
 
 static inline void update_stat(struct virtio_balloon *vb, int idx,
@@ -239,6 +268,7 @@ static void stats_handle_request(struct virtio_balloon *vb)
 	struct scatterlist sg;
 	unsigned int len;
 
+	mutex_lock(&balloon_lock);
 	vb->need_stats_update = 0;
 	update_balloon_stats(vb);
 
@@ -249,6 +279,7 @@ static void stats_handle_request(struct virtio_balloon *vb)
 	if (virtqueue_add_buf(vq, &sg, 1, 0, vb, GFP_KERNEL) < 0)
 		BUG();
 	virtqueue_kick(vq);
+	mutex_unlock(&balloon_lock);
 }
 
 static void virtballoon_changed(struct virtio_device *vdev)
@@ -261,22 +292,27 @@ static void virtballoon_changed(struct virtio_device *vdev)
 static inline s64 towards_target(struct virtio_balloon *vb)
 {
 	__le32 v;
-	s64 target;
+	s64 target, actual;
 
+	mutex_lock(&balloon_lock);
+	actual = vb->num_pages;
 	vb->vdev->config->get(vb->vdev,
 			      offsetof(struct virtio_balloon_config, num_pages),
 			      &v, sizeof(v));
 	target = le32_to_cpu(v);
-	return target - vb->num_pages;
+	mutex_unlock(&balloon_lock);
+	return target - actual;
 }
 
 static void update_balloon_size(struct virtio_balloon *vb)
 {
-	__le32 actual = cpu_to_le32(vb->num_pages);
-
+	__le32 actual;
+	mutex_lock(&balloon_lock);
+	actual = cpu_to_le32(vb->num_pages);
 	vb->vdev->config->set(vb->vdev,
 			      offsetof(struct virtio_balloon_config, actual),
 			      &actual, sizeof(actual));
+	mutex_unlock(&balloon_lock);
 }
 
 static int balloon(void *_vballoon)
@@ -339,6 +375,76 @@ static int init_vqs(struct virtio_balloon *vb)
 	return 0;
 }
 
+/*
+ * '*vb_ptr' allows virtballoon_migratepage() & virtballoon_putbackpage() to
+ * access pertinent elements from struct virtio_balloon
+ */
+struct virtio_balloon *vb_ptr;
+
+/*
+ * Populate balloon_mapping->a_ops->migratepage method to perform the balloon
+ * page migration task.
+ *
+ * After a ballooned page gets isolated by compaction procedures, this is the
+ * function that performs the page migration on behalf of move_to_new_page(),
+ * when the latter calls (page)->mapping->a_ops->migratepage.
+ *
+ * Page migration for virtio balloon is done in a simple swap fashion which
+ * follows these two steps:
+ *  1) insert newpage into vb->pages list and update the host about it;
+ *  2) update the host about the removed old page from vb->pages list;
+ */
+int virtballoon_migratepage(struct address_space *mapping,
+		struct page *newpage, struct page *page, enum migrate_mode mode)
+{
+	mutex_lock(&balloon_lock);
+
+	/* balloon's page migration 1st step */
+	vb_ptr->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE;
+	spin_lock(&pages_lock);
+	list_add(&newpage->lru, &vb_ptr->pages);
+	spin_unlock(&pages_lock);
+	set_page_pfns(vb_ptr->pfns, newpage);
+	tell_host(vb_ptr, vb_ptr->inflate_vq);
+
+	/* balloon's page migration 2nd step */
+	vb_ptr->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE;
+	set_page_pfns(vb_ptr->pfns, page);
+	tell_host(vb_ptr, vb_ptr->deflate_vq);
+
+	mutex_unlock(&balloon_lock);
+
+	return 0;
+}
+
+/*
+ * Populate balloon_mapping->a_ops->invalidatepage method to help compaction on
+ * isolating a page from the balloon page list.
+ */
+void virtballoon_isolatepage(struct page *page, unsigned long mode)
+{
+	spin_lock(&pages_lock);
+	list_del(&page->lru);
+	spin_unlock(&pages_lock);
+}
+
+/*
+ * Populate balloon_mapping->a_ops->freepage method to help compaction on
+ * re-inserting an isolated page into the balloon page list.
+ */
+void virtballoon_putbackpage(struct page *page)
+{
+	spin_lock(&pages_lock);
+	list_add(&page->lru, &vb_ptr->pages);
+	spin_unlock(&pages_lock);
+}
+
+static const struct address_space_operations virtio_balloon_aops = {
+	.migratepage	= virtballoon_migratepage,
+	.invalidatepage	= virtballoon_isolatepage,
+	.freepage	= virtballoon_putbackpage,
+};
+
 static int virtballoon_probe(struct virtio_device *vdev)
 {
 	struct virtio_balloon *vb;
@@ -351,11 +457,25 @@ static int virtballoon_probe(struct virtio_device *vdev)
 	}
 
 	INIT_LIST_HEAD(&vb->pages);
+	vb->num_pages = 0;
 	init_waitqueue_head(&vb->config_change);
 	init_waitqueue_head(&vb->acked);
 	vb->vdev = vdev;
 	vb->need_stats_update = 0;
+	vb_ptr = vb;
+
+	/* Init the ballooned page->mapping special balloon_mapping */
+	balloon_mapping = kmalloc(sizeof(*balloon_mapping), GFP_KERNEL);
+	if (!balloon_mapping) {
+		err = -ENOMEM;
+		goto out_free_vb;
+	}
+
+	INIT_RADIX_TREE(&balloon_mapping->page_tree, GFP_ATOMIC | __GFP_NOWARN);
+	INIT_LIST_HEAD(&balloon_mapping->i_mmap_nonlinear);
+	spin_lock_init(&balloon_mapping->tree_lock);
+	balloon_mapping->a_ops = &virtio_balloon_aops;
 
 	err = init_vqs(vb);
 	if (err)
@@ -373,6 +493,7 @@ out_del_vqs:
 	vdev->config->del_vqs(vdev);
 out_free_vb:
 	kfree(vb);
+	kfree(balloon_mapping);
 out:
 	return err;
 }
@@ -397,6 +518,7 @@ static void __devexit virtballoon_remove(struct virtio_device *vdev)
 	kthread_stop(vb->thread);
 	remove_common(vb);
 	kfree(vb);
+	kfree(balloon_mapping);
 }
 
 #ifdef CONFIG_PM
diff --git a/include/linux/virtio_balloon.h b/include/linux/virtio_balloon.h
index 652dc8b..930f1b7 100644
--- a/include/linux/virtio_balloon.h
+++ b/include/linux/virtio_balloon.h
@@ -56,4 +56,8 @@ struct virtio_balloon_stat {
 	u64 val;
 } __attribute__((packed));
 
+#if !defined(CONFIG_COMPACTION)
+struct address_space *balloon_mapping;
+#endif
+
 #endif /* _LINUX_VIRTIO_BALLOON_H */
-- 
1.7.11.2
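Taking the hunks above together, the lock nesting they establish can be
condensed as follows. The two helpers below are purely illustrative (the names
are invented and the bodies elide the real work): whole balloon operations take
balloon_lock first and pages_lock only around list manipulation, while the
compaction callbacks take pages_lock alone, so the two sides cannot deadlock.

/* Illustrative condensation of the locking rules in this patch; helper
 * names are invented here and the bodies elide the real work. */
static void balloon_whole_operation(struct virtio_balloon *vb)
{
	mutex_lock(&balloon_lock);	/* outer: pfns[], num_pages, virtqueues */
	spin_lock(&pages_lock);		/* inner: only while touching vb->pages */
	/* list_add()/list_del() and page->mapping updates happen here */
	spin_unlock(&pages_lock);
	/* tell_host() runs under balloon_lock but never under pages_lock */
	mutex_unlock(&balloon_lock);
}

static void balloon_compaction_callback(struct page *page)
{
	/* .invalidatepage/.freepage take only pages_lock, never balloon_lock,
	 * so they cannot deadlock against a whole-balloon operation. */
	spin_lock(&pages_lock);
	/* unlink or relink a single page on vb->pages */
	spin_unlock(&pages_lock);
}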
The PATCH "mm: introduce compaction and migration for virtio ballooned pages" hacks around putback_lru_pages() in order to allow ballooned pages to be re-inserted on balloon page list as if a ballooned page was like a LRU page. As ballooned pages are not legitimate LRU pages, this patch introduces putback_movable_pages() to properly cope with cases where the isolated pageset contains ballooned pages and LRU pages, thus fixing the mentioned inelegant hack around putback_lru_pages(). Signed-off-by: Rafael Aquini <aquini at redhat.com> --- include/linux/migrate.h | 2 ++ mm/compaction.c | 4 ++-- mm/migrate.c | 20 ++++++++++++++++++++ mm/page_alloc.c | 2 +- 4 files changed, 25 insertions(+), 3 deletions(-) diff --git a/include/linux/migrate.h b/include/linux/migrate.h index ce7e667..ff103a1 100644 --- a/include/linux/migrate.h +++ b/include/linux/migrate.h @@ -10,6 +10,7 @@ typedef struct page *new_page_t(struct page *, unsigned long private, int **); #ifdef CONFIG_MIGRATION extern void putback_lru_pages(struct list_head *l); +extern void putback_movable_pages(struct list_head *l); extern int migrate_page(struct address_space *, struct page *, struct page *, enum migrate_mode); extern int migrate_pages(struct list_head *l, new_page_t x, @@ -33,6 +34,7 @@ extern int migrate_huge_page_move_mapping(struct address_space *mapping, #else static inline void putback_lru_pages(struct list_head *l) {} +static inline void putback_movable_pages(struct list_head *l) {} static inline int migrate_pages(struct list_head *l, new_page_t x, unsigned long private, bool offlining, enum migrate_mode mode) { return -ENOSYS; } diff --git a/mm/compaction.c b/mm/compaction.c index e4e871b..8567bb8 100644 --- a/mm/compaction.c +++ b/mm/compaction.c @@ -837,9 +837,9 @@ static int compact_zone(struct zone *zone, struct compact_control *cc) trace_mm_compaction_migratepages(nr_migrate - nr_remaining, nr_remaining); - /* Release LRU pages not migrated */ + /* Release isolated pages not migrated */ if (err) { - putback_lru_pages(&cc->migratepages); + putback_movable_pages(&cc->migratepages); cc->nr_migratepages = 0; if (err == -ENOMEM) { ret = COMPACT_PARTIAL; diff --git a/mm/migrate.c b/mm/migrate.c index 80f22bb..1165134 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -79,6 +79,26 @@ void putback_lru_pages(struct list_head *l) list_del(&page->lru); dec_zone_page_state(page, NR_ISOLATED_ANON + page_is_file_cache(page)); + putback_lru_page(page); + } +} + +/* + * Put previously isolated pages back onto the appropriated lists + * from where they were once taken off for compaction/migration. + * + * This function shall be used instead of putback_lru_pages(), + * whenever the isolated pageset has been built by isolate_migratepages_range() + */ +void putback_movable_pages(struct list_head *l) +{ + struct page *page; + struct page *page2; + + list_for_each_entry_safe(page, page2, l, lru) { + list_del(&page->lru); + dec_zone_page_state(page, NR_ISOLATED_ANON + + page_is_file_cache(page)); if (unlikely(movable_balloon_page(page))) putback_balloon_page(page); else diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 009ac28..78b7663 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -5669,7 +5669,7 @@ static int __alloc_contig_migrate_range(unsigned long start, unsigned long end) 0, false, MIGRATE_SYNC); } - putback_lru_pages(&cc.migratepages); + putback_movable_pages(&cc.migratepages); return ret > 0 ? 0 : ret; } -- 1.7.11.2
Rafael Aquini
2012-Aug-10 17:55 UTC
[PATCH v7 4/4] mm: add vm event counters for balloon pages compaction
This patch introduces a new set of vm event counters to keep track of
ballooned pages compaction activity.

Signed-off-by: Rafael Aquini <aquini@redhat.com>
---
 drivers/virtio/virtio_balloon.c |  1 +
 include/linux/vm_event_item.h   |  8 +++++++-
 mm/compaction.c                 |  2 ++
 mm/migrate.c                    |  1 +
 mm/vmstat.c                     | 10 +++++++++-
 5 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index 7c937a0..b8f7ea5 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -414,6 +414,7 @@ int virtballoon_migratepage(struct address_space *mapping,
 
 	mutex_unlock(&balloon_lock);
 
+	count_vm_event(COMPACTBALLOONMIGRATED);
 	return 0;
 }
 
diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index 57f7b10..b1841a2 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -41,7 +41,13 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
 #ifdef CONFIG_COMPACTION
 		COMPACTBLOCKS, COMPACTPAGES, COMPACTPAGEFAILED,
 		COMPACTSTALL, COMPACTFAIL, COMPACTSUCCESS,
-#endif
+#if defined(CONFIG_VIRTIO_BALLOON) || defined(CONFIG_VIRTIO_BALLOON_MODULE)
+		COMPACTBALLOONISOLATED, /* isolated from balloon pagelist */
+		COMPACTBALLOONMIGRATED, /* balloon page successfully migrated */
+		COMPACTBALLOONRETURNED, /* putback to pagelist, not-migrated */
+		COMPACTBALLOONRELEASED, /* old-page released after migration */
+#endif /* CONFIG_VIRTIO_BALLOON || CONFIG_VIRTIO_BALLOON_MODULE */
+#endif /* CONFIG_COMPACTION */
 #ifdef CONFIG_HUGETLB_PAGE
 		HTLB_BUDDY_PGALLOC, HTLB_BUDDY_PGALLOC_FAIL,
 #endif
diff --git a/mm/compaction.c b/mm/compaction.c
index 8567bb8..ff0f9ac 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -77,6 +77,7 @@ bool isolate_balloon_page(struct page *page)
 			    (page_count(page) == 2)) {
 				__isolate_balloon_page(page);
 				unlock_page(page);
+				count_vm_event(COMPACTBALLOONISOLATED);
 				return true;
 			}
 			unlock_page(page);
@@ -97,6 +98,7 @@ void putback_balloon_page(struct page *page)
 	__putback_balloon_page(page);
 	put_page(page);
 	unlock_page(page);
+	count_vm_event(COMPACTBALLOONRETURNED);
 }
 #endif /* CONFIG_VIRTIO_BALLOON || CONFIG_VIRTIO_BALLOON_MODULE */
 
diff --git a/mm/migrate.c b/mm/migrate.c
index 1165134..024566f 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -892,6 +892,7 @@ static int unmap_and_move(new_page_t get_new_page, unsigned long private,
 				    page_is_file_cache(page));
 		put_page(page);
 		__free_page(page);
+		count_vm_event(COMPACTBALLOONRELEASED);
 		return rc;
 	}
 out:
diff --git a/mm/vmstat.c b/mm/vmstat.c
index df7a674..ad5c4f1 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -768,7 +768,15 @@ const char * const vmstat_text[] = {
 	"compact_stall",
 	"compact_fail",
 	"compact_success",
-#endif
+
+#if defined(CONFIG_VIRTIO_BALLOON) || defined(CONFIG_VIRTIO_BALLOON_MODULE)
+	"compact_balloon_isolated",
+	"compact_balloon_migrated",
+	"compact_balloon_returned",
+	"compact_balloon_released",
+#endif /* CONFIG_VIRTIO_BALLOON || CONFIG_VIRTIO_BALLOON_MODULE */
+
+#endif /* CONFIG_COMPACTION */
 
 #ifdef CONFIG_HUGETLB_PAGE
 	"htlb_buddy_alloc_success",
-- 
1.7.11.2
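Given where the counters are bumped, the numbers in the cover letter cross-check
cleanly: in those runs every isolated balloon page was either migrated or
returned to the balloon list, and every migrated page had its old copy released
back to the buddy allocator.

  64 MB run:  compact_balloon_migrated + compact_balloon_returned
              = 18306 + 55 = 18361 = compact_balloon_isolated
              compact_balloon_released = 18306 = compact_balloon_migrated

  128 MB run: 33869 + 68 = 33937 = compact_balloon_isolated
              compact_balloon_released = 33869 = compact_balloon_migrated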
Minchan Kim
2012-Aug-12 23:14 UTC
[PATCH v7 1/4] mm: introduce compaction and migration for virtio ballooned pages
On Fri, Aug 10, 2012 at 02:55:14PM -0300, Rafael Aquini wrote:
> Memory fragmentation introduced by ballooning might significantly reduce
> the number of 2MB contiguous memory blocks that can be used within a guest,
> thus imposing performance penalties associated with the reduced number of
> transparent huge pages that could be used by the guest workload.
> 
> This patch introduces the helper functions as well as the necessary changes
> to teach compaction and migration bits how to cope with pages which are
> part of a guest memory balloon, in order to make them movable by memory
> compaction procedures.
> 
> Signed-off-by: Rafael Aquini <aquini@redhat.com>

Reviewed-by: Minchan Kim <minchan@kernel.org>

-- 
Kind regards,
Minchan Kim
On Fri, Aug 10, 2012 at 02:55:16PM -0300, Rafael Aquini wrote:
> The patch "mm: introduce compaction and migration for virtio ballooned pages"
> hacks around putback_lru_pages() in order to allow ballooned pages to be
> re-inserted into the balloon page list as if a ballooned page were an LRU page.
> 
> As ballooned pages are not legitimate LRU pages, this patch introduces
> putback_movable_pages() to properly cope with the cases where the isolated
> pageset contains both ballooned pages and LRU pages, thus fixing the
> mentioned inelegant hack around putback_lru_pages().
> 
> Signed-off-by: Rafael Aquini <aquini@redhat.com>

Reviewed-by: Minchan Kim <minchan@kernel.org>

Thanks for your good work, Rafael.

-- 
Kind regards,
Minchan Kim
Michael S. Tsirkin
2012-Aug-14 20:31 UTC
[PATCH v7 2/4] virtio_balloon: introduce migration primitives to balloon pages
On Tue, Aug 14, 2012 at 05:11:13PM -0300, Rafael Aquini wrote:
> On Tue, Aug 14, 2012 at 10:51:39PM +0300, Michael S. Tsirkin wrote:
> > What I think you should do is use rcu for access.
> > And here sync rcu before freeing.
> > Maybe an overkill but at least a documented synchronization
> > primitive, and it is very light weight.
> >
> 
> I liked your suggestion on barriers, as well.
> 
> Rik, Mel?

Further, instead of a simple assignment I would add an API in mm to set/clear
the balloon mapping, with proper locking. This could fail if already set, and
thus fix the crash with many balloons.
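For concreteness, one possible reading of the suggested mm-side API is sketched
below; the balloon_mapping_set()/balloon_mapping_clear() names, the -EBUSY
semantics and the internal mutex are assumptions made here for illustration and
are not part of the posted series.

/* Hypothetical sketch of the suggestion above, not part of this series:
 * an mm-owned setter/clearer for balloon_mapping with proper locking,
 * failing if a mapping is already registered (e.g. a second balloon). */
static DEFINE_MUTEX(balloon_mapping_mutex);

int balloon_mapping_set(struct address_space *mapping)
{
	int ret = 0;

	mutex_lock(&balloon_mapping_mutex);
	if (balloon_mapping)
		ret = -EBUSY;		/* already claimed by another balloon */
	else
		balloon_mapping = mapping;
	mutex_unlock(&balloon_mapping_mutex);
	return ret;
}

void balloon_mapping_clear(struct address_space *mapping)
{
	mutex_lock(&balloon_mapping_mutex);
	if (balloon_mapping == mapping)
		balloon_mapping = NULL;
	mutex_unlock(&balloon_mapping_mutex);
	/* A driver would still need to wait for in-flight users, e.g. via
	 * synchronize_rcu(), before freeing the mapping -- see the RCU
	 * discussion above. */
}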
Mel Gorman
2012-Aug-15 09:05 UTC
[PATCH v7 2/4] virtio_balloon: introduce migration primitives to balloon pages
On Tue, Aug 14, 2012 at 05:11:13PM -0300, Rafael Aquini wrote:
> On Tue, Aug 14, 2012 at 10:51:39PM +0300, Michael S. Tsirkin wrote:
> > What I think you should do is use rcu for access.
> > And here sync rcu before freeing.
> > Maybe an overkill but at least a documented synchronization
> > primitive, and it is very light weight.
> >
> 
> I liked your suggestion on barriers, as well.
> 

I have not thought about this as deeply as I should, but is simply rechecking
the mapping under the pages_lock, to make sure the page is still a balloon
page, an option? i.e. use pages_lock to stabilise page->mapping.

-- 
Mel Gorman
SUSE Labs
Michael S. Tsirkin
2012-Aug-15 09:25 UTC
[PATCH v7 2/4] virtio_balloon: introduce migration primitives to balloon pages
On Wed, Aug 15, 2012 at 10:05:28AM +0100, Mel Gorman wrote:
> On Tue, Aug 14, 2012 at 05:11:13PM -0300, Rafael Aquini wrote:
> > On Tue, Aug 14, 2012 at 10:51:39PM +0300, Michael S. Tsirkin wrote:
> > > What I think you should do is use rcu for access.
> > > And here sync rcu before freeing.
> > > Maybe an overkill but at least a documented synchronization
> > > primitive, and it is very light weight.
> > >
> > 
> > I liked your suggestion on barriers, as well.
> > 
> 
> I have not thought about this as deeply as I should, but is simply rechecking
> the mapping under the pages_lock, to make sure the page is still a balloon
> page, an option? i.e. use pages_lock to stabilise page->mapping.

To clarify, are you concerned about the cost of rcu_read_lock for non-balloon
pages?

> -- 
> Mel Gorman
> SUSE Labs
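For readers following the thread, the RCU scheme being weighed would look
roughly like the sketch below. This is illustrative only: the posted series
does not do this, and the __rcu-annotated pointer and helper names are
assumptions made here.

/* Illustrative RCU variant, not from the series: readers pin the mapping
 * with rcu_read_lock(), the driver unpublishes it and waits a grace period
 * before kfree(), so a racing compaction thread never dereferences a freed
 * balloon_mapping. */
static struct address_space __rcu *balloon_mapping_rcu;

static bool movable_balloon_page_rcu(struct page *page)
{
	struct address_space *mapping;
	bool ret;

	rcu_read_lock();
	mapping = rcu_dereference(balloon_mapping_rcu);
	ret = mapping && page->mapping == mapping;
	rcu_read_unlock();
	return ret;
}

static void balloon_mapping_teardown(struct address_space *mapping)
{
	rcu_assign_pointer(balloon_mapping_rcu, NULL);
	synchronize_rcu();	/* wait for all rcu_read_lock() sections */
	kfree(mapping);
}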
Mel Gorman
2012-Aug-15 09:48 UTC
[PATCH v7 2/4] virtio_balloon: introduce migration primitives to balloon pages
On Wed, Aug 15, 2012 at 12:25:28PM +0300, Michael S. Tsirkin wrote:
> On Wed, Aug 15, 2012 at 10:05:28AM +0100, Mel Gorman wrote:
> > On Tue, Aug 14, 2012 at 05:11:13PM -0300, Rafael Aquini wrote:
> > > On Tue, Aug 14, 2012 at 10:51:39PM +0300, Michael S. Tsirkin wrote:
> > > > What I think you should do is use rcu for access.
> > > > And here sync rcu before freeing.
> > > > Maybe an overkill but at least a documented synchronization
> > > > primitive, and it is very light weight.
> > > >
> > > 
> > > I liked your suggestion on barriers, as well.
> > > 
> > 
> > I have not thought about this as deeply as I should, but is simply rechecking
> > the mapping under the pages_lock, to make sure the page is still a balloon
> > page, an option? i.e. use pages_lock to stabilise page->mapping.
> 
> To clarify, are you concerned about the cost of rcu_read_lock
> for non-balloon pages?
> 

Not as such, but given the choice between introducing RCU locking and
rechecking page->mapping under a spinlock, I would choose the latter as it is
more straightforward.

-- 
Mel Gorman
SUSE Labs
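The alternative Mel prefers amounts to revalidating page->mapping under
pages_lock inside the driver callback, roughly as sketched below; the function
name and structure are assumptions for illustration, not code from the series,
and a complete version would also need a way to tell the caller that isolation
was refused.

/* Sketch of the "recheck under pages_lock" idea; illustrative only. */
static void virtballoon_isolatepage_checked(struct page *page,
					    unsigned long offset)
{
	spin_lock(&pages_lock);
	/* Revalidate: leak_balloon() clears page->mapping under pages_lock
	 * before unlinking, so a stale isolation request is detected here
	 * and the page is left alone. */
	if (page->mapping == balloon_mapping)
		list_del(&page->lru);
	spin_unlock(&pages_lock);
}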