Liang Li
2016-Oct-21 06:24 UTC
[RESEND PATCH v3 kernel 0/7] Extend virtio-balloon for fast (de)inflating & fast live migration
This patch set contains two sets of changes to virtio-balloon.

The first speeds up the inflating & deflating process. The main idea of
this optimization is to use a bitmap instead of an array of PFNs to send
the page information to the host, which reduces the overhead of virtio
data transmission, address translation and madvise(). This improves
performance by about 85%.

The second change speeds up live migration. By skipping the guest's free
pages in the first round of data copy, needless data processing is
avoided, which saves quite a lot of CPU cycles and network bandwidth. We
put the guest's free page information in a bitmap and send it to the host
through a virtio-balloon virtqueue. For an idle 8GB guest, this shortens
the total live migration time from 2s to about 500ms on a 10Gbps network.

Dave Hansen suggested a new scheme to encode the data structure; because
of the additional complexity, it's not implemented in v3.

Changes from v2 to v3:
  * Change the name of 'free page' to 'unused page'.
  * Use a scatter & gather bitmap instead of a 1MB page bitmap.
  * Fix overwriting the page bitmap after kicking.
  * Address some of MST's comments for v2.

Changes from v1 to v2:
  * Abandon the patch for dropping page cache.
  * Put some structures into a uapi header file.
  * Use a new way to determine the page bitmap size.
  * Use a unified way to send the free page information with the bitmap.
  * Address the issues referred to in MST's comments.

Liang Li (7):
  virtio-balloon: rework deflate to add page to a list
  virtio-balloon: define new feature bit and page bitmap head
  mm: add a function to get the max pfn
  virtio-balloon: speed up inflate/deflate process
  mm: add the related functions to get unused page
  virtio-balloon: define feature bit and head for misc virt queue
  virtio-balloon: tell host vm's unused page info

 drivers/virtio/virtio_balloon.c     | 390 ++++++++++++++++++++++++++++++++----
 include/linux/mm.h                  |   3 +
 include/uapi/linux/virtio_balloon.h |  41 ++++
 mm/page_alloc.c                     |  94 +++++++++
 4 files changed, 485 insertions(+), 43 deletions(-)

--
1.8.3.1
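For illustration, here is a minimal userspace sketch of the encoding the
cover letter describes: instead of one PFN entry per page, the guest sends
a small header carrying the start pfn and bitmap length, followed by one
bit per page. The structure and names below are made up for this example
and are not the actual virtio-balloon ABI added by the series (patch 2/7
defines the real header).

/* Illustrative sketch only -- not part of the series. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define DEMO_BITS_PER_LONG (8 * sizeof(unsigned long))

struct demo_bmap_hdr {
	uint64_t start_pfn;	/* pfn represented by bit 0 of the bitmap */
	uint64_t bmap_len;	/* bitmap length, in bytes */
};

/* Encode a sorted pfn list as a header plus a bitmap relative to pfns[0]. */
static void demo_encode(const uint64_t *pfns, int n, struct demo_bmap_hdr *hdr,
			unsigned long *bitmap, size_t bitmap_bytes)
{
	int i;

	memset(bitmap, 0, bitmap_bytes);
	hdr->start_pfn = pfns[0];
	hdr->bmap_len = bitmap_bytes;
	for (i = 0; i < n; i++) {
		uint64_t pos = pfns[i] - hdr->start_pfn;

		bitmap[pos / DEMO_BITS_PER_LONG] |=
			1UL << (pos % DEMO_BITS_PER_LONG);
	}
}

int main(void)
{
	uint64_t pfns[] = { 0x1000, 0x1001, 0x1002, 0x1010 };
	unsigned long bitmap[1];
	struct demo_bmap_hdr hdr;

	demo_encode(pfns, 4, &hdr, bitmap, sizeof(bitmap));
	printf("start_pfn=%#llx bmap_len=%llu first_word=%#lx\n",
	       (unsigned long long)hdr.start_pfn,
	       (unsigned long long)hdr.bmap_len, bitmap[0]);
	return 0;
}

One virtio buffer then carries the header plus the bitmap, instead of one
32-bit PFN per page, which is where the reduction in transmission,
translation and madvise() overhead comes from.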
Liang Li
2016-Oct-21 06:24 UTC
[RESEND PATCH v3 kernel 1/7] virtio-balloon: rework deflate to add page to a list
Will allow faster notifications using a bitmap down the road. balloon_pfn_to_page() can be removed because it's useless. Signed-off-by: Liang Li <liang.z.li at intel.com> Signed-off-by: Michael S. Tsirkin <mst at redhat.com> Cc: Paolo Bonzini <pbonzini at redhat.com> Cc: Cornelia Huck <cornelia.huck at de.ibm.com> Cc: Amit Shah <amit.shah at redhat.com> --- drivers/virtio/virtio_balloon.c | 22 ++++++++-------------- 1 file changed, 8 insertions(+), 14 deletions(-) diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c index 4e7003d..59ffe5a 100644 --- a/drivers/virtio/virtio_balloon.c +++ b/drivers/virtio/virtio_balloon.c @@ -103,12 +103,6 @@ static u32 page_to_balloon_pfn(struct page *page) return pfn * VIRTIO_BALLOON_PAGES_PER_PAGE; } -static struct page *balloon_pfn_to_page(u32 pfn) -{ - BUG_ON(pfn % VIRTIO_BALLOON_PAGES_PER_PAGE); - return pfn_to_page(pfn / VIRTIO_BALLOON_PAGES_PER_PAGE); -} - static void balloon_ack(struct virtqueue *vq) { struct virtio_balloon *vb = vq->vdev->priv; @@ -181,18 +175,16 @@ static unsigned fill_balloon(struct virtio_balloon *vb, size_t num) return num_allocated_pages; } -static void release_pages_balloon(struct virtio_balloon *vb) +static void release_pages_balloon(struct virtio_balloon *vb, + struct list_head *pages) { - unsigned int i; - struct page *page; + struct page *page, *next; - /* Find pfns pointing at start of each page, get pages and free them. */ - for (i = 0; i < vb->num_pfns; i += VIRTIO_BALLOON_PAGES_PER_PAGE) { - page = balloon_pfn_to_page(virtio32_to_cpu(vb->vdev, - vb->pfns[i])); + list_for_each_entry_safe(page, next, pages, lru) { if (!virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM)) adjust_managed_page_count(page, 1); + list_del(&page->lru); put_page(page); /* balloon reference */ } } @@ -202,6 +194,7 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num) unsigned num_freed_pages; struct page *page; struct balloon_dev_info *vb_dev_info = &vb->vb_dev_info; + LIST_HEAD(pages); /* We can only do one array worth at a time. */ num = min(num, ARRAY_SIZE(vb->pfns)); @@ -215,6 +208,7 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num) if (!page) break; set_page_pfns(vb, vb->pfns + vb->num_pfns, page); + list_add(&page->lru, &pages); vb->num_pages -= VIRTIO_BALLOON_PAGES_PER_PAGE; } @@ -226,7 +220,7 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num) */ if (vb->num_pfns != 0) tell_host(vb, vb->deflate_vq); - release_pages_balloon(vb); + release_pages_balloon(vb, &pages); mutex_unlock(&vb->balloon_lock); return num_freed_pages; } -- 1.8.3.1
Liang Li
2016-Oct-21 06:24 UTC
[RESEND PATCH v3 kernel 2/7] virtio-balloon: define new feature bit and page bitmap head
Add a new feature which supports sending the page information with a
bitmap. The current implementation uses a PFN array, which is not very
efficient. Using a bitmap can improve the performance of
inflating/deflating significantly.

The page bitmap header will be used to tell the host some information
about the page bitmap, e.g. the page size, the page bitmap length and
the start pfn.

Signed-off-by: Liang Li <liang.z.li at intel.com>
Cc: Michael S. Tsirkin <mst at redhat.com>
Cc: Paolo Bonzini <pbonzini at redhat.com>
Cc: Cornelia Huck <cornelia.huck at de.ibm.com>
Cc: Amit Shah <amit.shah at redhat.com>
---
 include/uapi/linux/virtio_balloon.h | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/include/uapi/linux/virtio_balloon.h b/include/uapi/linux/virtio_balloon.h
index 343d7dd..d3b182a 100644
--- a/include/uapi/linux/virtio_balloon.h
+++ b/include/uapi/linux/virtio_balloon.h
@@ -34,6 +34,7 @@
 #define VIRTIO_BALLOON_F_MUST_TELL_HOST	0 /* Tell before reclaiming pages */
 #define VIRTIO_BALLOON_F_STATS_VQ	1 /* Memory Stats virtqueue */
 #define VIRTIO_BALLOON_F_DEFLATE_ON_OOM	2 /* Deflate balloon on OOM */
+#define VIRTIO_BALLOON_F_PAGE_BITMAP	3 /* Send page info with bitmap */
 
 /* Size of a PFN in the balloon interface. */
 #define VIRTIO_BALLOON_PFN_SHIFT 12
@@ -82,4 +83,22 @@ struct virtio_balloon_stat {
 	__virtio64 val;
 } __attribute__((packed));
 
+/* Page bitmap header structure */
+struct balloon_bmap_hdr {
+	/* Used to distinguish different request */
+	__virtio16 cmd;
+	/* Shift width of page in the bitmap */
+	__virtio16 page_shift;
+	/* flag used to identify different status */
+	__virtio16 flag;
+	/* Reserved */
+	__virtio16 reserved;
+	/* ID of the request */
+	__virtio64 req_id;
+	/* The pfn of 0 bit in the bitmap */
+	__virtio64 start_pfn;
+	/* The length of the bitmap, in bytes */
+	__virtio64 bmap_len;
+};
+
 #endif /* _LINUX_VIRTIO_BALLOON_H */
--
1.8.3.1
Liang Li
2016-Oct-21 06:24 UTC
[RESEND PATCH v3 kernel 3/7] mm: add a function to get the max pfn
Expose the function to get the max pfn, so it can be used in the
virtio-balloon device driver. Simply including 'linux/bootmem.h' is
not enough: if the device driver is built as a module, referring to
max_pfn directly leads to a build failure.

Signed-off-by: Liang Li <liang.z.li at intel.com>
Cc: Andrew Morton <akpm at linux-foundation.org>
Cc: Mel Gorman <mgorman at techsingularity.net>
Cc: Michael S. Tsirkin <mst at redhat.com>
Cc: Paolo Bonzini <pbonzini at redhat.com>
Cc: Cornelia Huck <cornelia.huck at de.ibm.com>
Cc: Amit Shah <amit.shah at redhat.com>
---
 include/linux/mm.h |  1 +
 mm/page_alloc.c    | 10 ++++++++++
 2 files changed, 11 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ffbd729..2a89da0e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1776,6 +1776,7 @@ static inline spinlock_t *pmd_lock(struct mm_struct *mm, pmd_t *pmd)
 extern void free_area_init_node(int nid, unsigned long * zones_size,
 		unsigned long zone_start_pfn, unsigned long *zholes_size);
 extern void free_initmem(void);
+extern unsigned long get_max_pfn(void);
 
 /*
  * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2b3bf67..e5f63a9 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4426,6 +4426,16 @@ void show_free_areas(unsigned int filter)
 	show_swap_cache_info();
 }
 
+/*
+ * The max_pfn can change because of memory hot plug, so it's only good
+ * as a hint. e.g. for sizing data structures.
+ */
+unsigned long get_max_pfn(void)
+{
+	return max_pfn;
+}
+EXPORT_SYMBOL(get_max_pfn);
+
 static void zoneref_set_zone(struct zone *zone, struct zoneref *zoneref)
 {
 	zoneref->zone = zone;
--
1.8.3.1
Liang Li
2016-Oct-21 06:24 UTC
[RESEND PATCH v3 kernel 4/7] virtio-balloon: speed up inflate/deflate process
The implementation of the current virtio-balloon is not very efficient, the time spends on different stages of inflating the balloon to 7GB of a 8GB idle guest: a. allocating pages (6.5%) b. sending PFNs to host (68.3%) c. address translation (6.1%) d. madvise (19%) It takes about 4126ms for the inflating process to complete. Debugging shows that the bottle neck are the stage b and stage d. If using a bitmap to send the page info instead of the PFNs, we can reduce the overhead in stage b quite a lot. Furthermore, we can do the address translation and call madvise() with a bulk of RAM pages, instead of the current page per page way, the overhead of stage c and stage d can also be reduced a lot. This patch is the kernel side implementation which is intended to speed up the inflating & deflating process by adding a new feature to the virtio-balloon device. With this new feature, inflating the balloon to 7GB of a 8GB idle guest only takes 590ms, the performance improvement is about 85%. TODO: optimize stage a by allocating/freeing a chunk of pages instead of a single page at a time. Signed-off-by: Liang Li <liang.z.li at intel.com> Suggested-by: Michael S. Tsirkin <mst at redhat.com> Cc: Michael S. Tsirkin <mst at redhat.com> Cc: Paolo Bonzini <pbonzini at redhat.com> Cc: Cornelia Huck <cornelia.huck at de.ibm.com> Cc: Amit Shah <amit.shah at redhat.com> --- drivers/virtio/virtio_balloon.c | 233 +++++++++++++++++++++++++++++++++++----- 1 file changed, 209 insertions(+), 24 deletions(-) diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c index 59ffe5a..c31839c 100644 --- a/drivers/virtio/virtio_balloon.c +++ b/drivers/virtio/virtio_balloon.c @@ -42,6 +42,10 @@ #define OOM_VBALLOON_DEFAULT_PAGES 256 #define VIRTBALLOON_OOM_NOTIFY_PRIORITY 80 +#define BALLOON_BMAP_SIZE (8 * PAGE_SIZE) +#define PFNS_PER_BMAP (BALLOON_BMAP_SIZE * BITS_PER_BYTE) +#define BALLOON_BMAP_COUNT 32 + static int oom_pages = OOM_VBALLOON_DEFAULT_PAGES; module_param(oom_pages, int, S_IRUSR | S_IWUSR); MODULE_PARM_DESC(oom_pages, "pages to free on OOM"); @@ -67,6 +71,13 @@ struct virtio_balloon { /* Number of balloon pages we've told the Host we're not using. */ unsigned int num_pages; + /* Pointer of the bitmap header. */ + void *bmap_hdr; + /* Bitmap and bitmap count used to tell the host the pages */ + unsigned long *page_bitmap[BALLOON_BMAP_COUNT]; + unsigned int nr_page_bmap; + /* Used to record the processed pfn range */ + unsigned long min_pfn, max_pfn, start_pfn, end_pfn; /* * The pages we've told the Host we're not using are enqueued * at vb_dev_info->pages list. 
@@ -110,16 +121,66 @@ static void balloon_ack(struct virtqueue *vq) wake_up(&vb->acked); } +static inline void init_pfn_range(struct virtio_balloon *vb) +{ + vb->min_pfn = ULONG_MAX; + vb->max_pfn = 0; +} + +static inline void update_pfn_range(struct virtio_balloon *vb, + struct page *page) +{ + unsigned long balloon_pfn = page_to_balloon_pfn(page); + + if (balloon_pfn < vb->min_pfn) + vb->min_pfn = balloon_pfn; + if (balloon_pfn > vb->max_pfn) + vb->max_pfn = balloon_pfn; +} + static void tell_host(struct virtio_balloon *vb, struct virtqueue *vq) { - struct scatterlist sg; - unsigned int len; + struct scatterlist sg, sg2[BALLOON_BMAP_COUNT + 1]; + unsigned int len, i; + + if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_PAGE_BITMAP)) { + struct balloon_bmap_hdr *hdr = vb->bmap_hdr; + unsigned long bmap_len; + int nr_pfn, nr_used_bmap, nr_buf; + + nr_pfn = vb->end_pfn - vb->start_pfn + 1; + nr_pfn = roundup(nr_pfn, BITS_PER_LONG); + nr_used_bmap = nr_pfn / PFNS_PER_BMAP; + bmap_len = nr_pfn / BITS_PER_BYTE; + nr_buf = nr_used_bmap + 1; + + /* cmd, reserved and req_id are init to 0, unused here */ + hdr->page_shift = cpu_to_virtio16(vb->vdev, PAGE_SHIFT); + hdr->start_pfn = cpu_to_virtio64(vb->vdev, vb->start_pfn); + hdr->bmap_len = cpu_to_virtio64(vb->vdev, bmap_len); + sg_init_table(sg2, nr_buf); + sg_set_buf(&sg2[0], hdr, sizeof(struct balloon_bmap_hdr)); + for (i = 0; i < nr_used_bmap; i++) { + unsigned int buf_len = BALLOON_BMAP_SIZE; + + if (i + 1 == nr_used_bmap) + buf_len = bmap_len - BALLOON_BMAP_SIZE * i; + sg_set_buf(&sg2[i + 1], vb->page_bitmap[i], buf_len); + } - sg_init_one(&sg, vb->pfns, sizeof(vb->pfns[0]) * vb->num_pfns); + while (vq->num_free < nr_buf) + msleep(2); + if (virtqueue_add_outbuf(vq, sg2, nr_buf, vb, GFP_KERNEL) == 0) + virtqueue_kick(vq); - /* We should always be able to add one buffer to an empty queue. */ - virtqueue_add_outbuf(vq, &sg, 1, vb, GFP_KERNEL); - virtqueue_kick(vq); + } else { + sg_init_one(&sg, vb->pfns, sizeof(vb->pfns[0]) * vb->num_pfns); + + /* We should always be able to add one buffer to an empty + * queue. 
*/ + virtqueue_add_outbuf(vq, &sg, 1, vb, GFP_KERNEL); + virtqueue_kick(vq); + } /* When host has read buffer, this completes via balloon_ack */ wait_event(vb->acked, virtqueue_get_buf(vq, &len)); @@ -138,13 +199,93 @@ static void set_page_pfns(struct virtio_balloon *vb, page_to_balloon_pfn(page) + i); } -static unsigned fill_balloon(struct virtio_balloon *vb, size_t num) +static void extend_page_bitmap(struct virtio_balloon *vb) +{ + int i; + unsigned long bmap_len, bmap_count; + + bmap_len = ALIGN(get_max_pfn(), BITS_PER_LONG) / BITS_PER_BYTE; + bmap_count = bmap_len / BALLOON_BMAP_SIZE; + if (bmap_len % BALLOON_BMAP_SIZE) + bmap_count++; + if (bmap_count > BALLOON_BMAP_COUNT) + bmap_count = BALLOON_BMAP_COUNT; + + for (i = 1; i < bmap_count; i++) { + vb->page_bitmap[i] = kmalloc(BALLOON_BMAP_SIZE, GFP_ATOMIC); + if (vb->page_bitmap[i]) + vb->nr_page_bmap++; + else + break; + } +} + +static void kfree_page_bitmap(struct virtio_balloon *vb) +{ + int i; + + for (i = 0; i < vb->nr_page_bmap; i++) + kfree(vb->page_bitmap[i]); +} + +static void clear_page_bitmap(struct virtio_balloon *vb) +{ + int i; + + for (i = 0; i < vb->nr_page_bmap; i++) + memset(vb->page_bitmap[i], 0, BALLOON_BMAP_SIZE); +} + +static void set_page_bitmap(struct virtio_balloon *vb, + struct list_head *pages, struct virtqueue *vq) +{ + unsigned long pfn, pfn_limit; + struct page *page; + bool found; + int bmap_idx; + + vb->min_pfn = rounddown(vb->min_pfn, BITS_PER_LONG); + vb->max_pfn = roundup(vb->max_pfn, BITS_PER_LONG); + pfn_limit = PFNS_PER_BMAP * vb->nr_page_bmap; + + for (pfn = vb->min_pfn; pfn < vb->max_pfn; pfn += pfn_limit) { + unsigned long end_pfn; + + clear_page_bitmap(vb); + vb->start_pfn = pfn; + end_pfn = pfn; + found = false; + list_for_each_entry(page, pages, lru) { + unsigned long pos, balloon_pfn; + + balloon_pfn = page_to_balloon_pfn(page); + if (balloon_pfn < pfn || balloon_pfn >= pfn + pfn_limit) + continue; + bmap_idx = (balloon_pfn - pfn) / PFNS_PER_BMAP; + pos = (balloon_pfn - pfn) % PFNS_PER_BMAP; + set_bit(pos, vb->page_bitmap[bmap_idx]); + if (balloon_pfn > end_pfn) + end_pfn = balloon_pfn; + found = true; + } + if (found) { + vb->end_pfn = end_pfn; + tell_host(vb, vq); + } + } +} + +static unsigned int fill_balloon(struct virtio_balloon *vb, size_t num, + bool use_bmap) { struct balloon_dev_info *vb_dev_info = &vb->vb_dev_info; - unsigned num_allocated_pages; + unsigned int num_allocated_pages; - /* We can only do one array worth at a time. */ - num = min(num, ARRAY_SIZE(vb->pfns)); + if (use_bmap) + init_pfn_range(vb); + else + /* We can only do one array worth at a time. */ + num = min(num, ARRAY_SIZE(vb->pfns)); mutex_lock(&vb->balloon_lock); for (vb->num_pfns = 0; vb->num_pfns < num; @@ -159,7 +300,10 @@ static unsigned fill_balloon(struct virtio_balloon *vb, size_t num) msleep(200); break; } - set_page_pfns(vb, vb->pfns + vb->num_pfns, page); + if (use_bmap) + update_pfn_range(vb, page); + else + set_page_pfns(vb, vb->pfns + vb->num_pfns, page); vb->num_pages += VIRTIO_BALLOON_PAGES_PER_PAGE; if (!virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM)) @@ -168,8 +312,13 @@ static unsigned fill_balloon(struct virtio_balloon *vb, size_t num) num_allocated_pages = vb->num_pfns; /* Did we get any? 
*/ - if (vb->num_pfns != 0) - tell_host(vb, vb->inflate_vq); + if (vb->num_pfns != 0) { + if (use_bmap) + set_page_bitmap(vb, &vb_dev_info->pages, + vb->inflate_vq); + else + tell_host(vb, vb->inflate_vq); + } mutex_unlock(&vb->balloon_lock); return num_allocated_pages; @@ -189,15 +338,19 @@ static void release_pages_balloon(struct virtio_balloon *vb, } } -static unsigned leak_balloon(struct virtio_balloon *vb, size_t num) +static unsigned int leak_balloon(struct virtio_balloon *vb, size_t num, + bool use_bmap) { - unsigned num_freed_pages; + unsigned int num_freed_pages; struct page *page; struct balloon_dev_info *vb_dev_info = &vb->vb_dev_info; LIST_HEAD(pages); - /* We can only do one array worth at a time. */ - num = min(num, ARRAY_SIZE(vb->pfns)); + if (use_bmap) + init_pfn_range(vb); + else + /* We can only do one array worth at a time. */ + num = min(num, ARRAY_SIZE(vb->pfns)); mutex_lock(&vb->balloon_lock); /* We can't release more pages than taken */ @@ -207,7 +360,10 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num) page = balloon_page_dequeue(vb_dev_info); if (!page) break; - set_page_pfns(vb, vb->pfns + vb->num_pfns, page); + if (use_bmap) + update_pfn_range(vb, page); + else + set_page_pfns(vb, vb->pfns + vb->num_pfns, page); list_add(&page->lru, &pages); vb->num_pages -= VIRTIO_BALLOON_PAGES_PER_PAGE; } @@ -218,8 +374,14 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num) * virtio_has_feature(vdev, VIRTIO_BALLOON_F_MUST_TELL_HOST); * is true, we *have* to do it in this order */ - if (vb->num_pfns != 0) - tell_host(vb, vb->deflate_vq); + if (vb->num_pfns != 0) { + if (use_bmap) + set_page_bitmap(vb, &pages, vb->deflate_vq); + else + tell_host(vb, vb->deflate_vq); + + release_pages_balloon(vb, &pages); + } release_pages_balloon(vb, &pages); mutex_unlock(&vb->balloon_lock); return num_freed_pages; @@ -354,13 +516,15 @@ static int virtballoon_oom_notify(struct notifier_block *self, struct virtio_balloon *vb; unsigned long *freed; unsigned num_freed_pages; + bool use_bmap; vb = container_of(self, struct virtio_balloon, nb); if (!virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM)) return NOTIFY_OK; freed = parm; - num_freed_pages = leak_balloon(vb, oom_pages); + use_bmap = virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_PAGE_BITMAP); + num_freed_pages = leak_balloon(vb, oom_pages, use_bmap); update_balloon_size(vb); *freed += num_freed_pages; @@ -380,15 +544,19 @@ static void update_balloon_size_func(struct work_struct *work) { struct virtio_balloon *vb; s64 diff; + bool use_bmap; vb = container_of(work, struct virtio_balloon, update_balloon_size_work); diff = towards_target(vb); + use_bmap = virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_PAGE_BITMAP); + if (use_bmap && diff && vb->nr_page_bmap == 1) + extend_page_bitmap(vb); if (diff > 0) - diff -= fill_balloon(vb, diff); + diff -= fill_balloon(vb, diff, use_bmap); else if (diff < 0) - diff += leak_balloon(vb, -diff); + diff += leak_balloon(vb, -diff, use_bmap); update_balloon_size(vb); if (diff) @@ -533,6 +701,17 @@ static int virtballoon_probe(struct virtio_device *vdev) spin_lock_init(&vb->stop_update_lock); vb->stop_update = false; vb->num_pages = 0; + vb->bmap_hdr = kzalloc(sizeof(struct balloon_bmap_hdr), GFP_KERNEL); + /* Clear the feature bit if memory allocation fails */ + if (!vb->bmap_hdr) + __virtio_clear_bit(vdev, VIRTIO_BALLOON_F_PAGE_BITMAP); + else { + vb->page_bitmap[0] = kmalloc(BALLOON_BMAP_SIZE, GFP_KERNEL); + if (!vb->page_bitmap[0]) + __virtio_clear_bit(vdev, 
VIRTIO_BALLOON_F_PAGE_BITMAP); + else + vb->nr_page_bmap = 1; + } mutex_init(&vb->balloon_lock); init_waitqueue_head(&vb->acked); vb->vdev = vdev; @@ -583,9 +762,12 @@ static int virtballoon_probe(struct virtio_device *vdev) static void remove_common(struct virtio_balloon *vb) { + bool use_bmap; + + use_bmap = virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_PAGE_BITMAP); /* There might be pages left in the balloon: free them. */ while (vb->num_pages) - leak_balloon(vb, vb->num_pages); + leak_balloon(vb, vb->num_pages, use_bmap); update_balloon_size(vb); /* Now we reset the device so we can clean up the queues. */ @@ -609,6 +791,8 @@ static void virtballoon_remove(struct virtio_device *vdev) remove_common(vb); if (vb->vb_dev_info.inode) iput(vb->vb_dev_info.inode); + kfree_page_bitmap(vb); + kfree(vb->bmap_hdr); kfree(vb); } @@ -647,6 +831,7 @@ static int virtballoon_restore(struct virtio_device *vdev) VIRTIO_BALLOON_F_MUST_TELL_HOST, VIRTIO_BALLOON_F_STATS_VQ, VIRTIO_BALLOON_F_DEFLATE_ON_OOM, + VIRTIO_BALLOON_F_PAGE_BITMAP, }; static struct virtio_driver virtio_balloon_driver = { -- 1.8.3.1
Liang Li
2016-Oct-21 06:24 UTC
[RESEND PATCH v3 kernel 5/7] mm: add the related functions to get unused page
Save the unused page info into page bitmap. The virtio balloon driver call this new API to get the unused page bitmap and send the bitmap to hypervisor(QEMU) for speeding up live migration. During sending the bitmap, some the pages may be modified and are no free anymore, this inaccuracy can be corrected by the dirty page logging mechanism. Signed-off-by: Liang Li <liang.z.li at intel.com> Cc: Andrew Morton <akpm at linux-foundation.org> Cc: Mel Gorman <mgorman at techsingularity.net> Cc: Michael S. Tsirkin <mst at redhat.com> Cc: Paolo Bonzini <pbonzini at redhat.com> Cc: Cornelia Huck <cornelia.huck at de.ibm.com> Cc: Amit Shah <amit.shah at redhat.com> --- include/linux/mm.h | 2 ++ mm/page_alloc.c | 84 ++++++++++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 86 insertions(+) diff --git a/include/linux/mm.h b/include/linux/mm.h index 2a89da0e..84f56ec 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1777,6 +1777,8 @@ extern void free_area_init_node(int nid, unsigned long * zones_size, unsigned long zone_start_pfn, unsigned long *zholes_size); extern void free_initmem(void); extern unsigned long get_max_pfn(void); +extern int get_unused_pages(unsigned long start_pfn, unsigned long end_pfn, + unsigned long *bitmap[], unsigned long len, unsigned int nr_bmap); /* * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK) diff --git a/mm/page_alloc.c b/mm/page_alloc.c index e5f63a9..848bb85 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -4436,6 +4436,90 @@ unsigned long get_max_pfn(void) } EXPORT_SYMBOL(get_max_pfn); +static void mark_unused_pages_bitmap(struct zone *zone, + unsigned long start_pfn, unsigned long end_pfn, + unsigned long *bitmap[], unsigned long bits, + unsigned int nr_bmap) +{ + unsigned long pfn, flags, nr_pg, pos, *bmap; + unsigned int order, i, t, bmap_idx; + struct list_head *curr; + + if (zone_is_empty(zone)) + return; + + end_pfn = min(start_pfn + nr_bmap * bits, end_pfn); + spin_lock_irqsave(&zone->lock, flags); + + for_each_migratetype_order(order, t) { + list_for_each(curr, &zone->free_area[order].free_list[t]) { + pfn = page_to_pfn(list_entry(curr, struct page, lru)); + if (pfn < start_pfn || pfn >= end_pfn) + continue; + nr_pg = 1UL << order; + if (pfn + nr_pg > end_pfn) + nr_pg = end_pfn - pfn; + bmap_idx = (pfn - start_pfn) / bits; + if (bmap_idx == (pfn + nr_pg - start_pfn) / bits) { + bmap = bitmap[bmap_idx]; + pos = (pfn - start_pfn) % bits; + bitmap_set(bmap, pos, nr_pg); + } else + for (i = 0; i < nr_pg; i++) { + bmap_idx = pos / bits; + bmap = bitmap[bmap_idx]; + pos = pos % bits; + bitmap_set(bmap, pos, 1); + } + } + } + + spin_unlock_irqrestore(&zone->lock, flags); +} + +/* + * During live migration, page is always discardable unless it's + * content is needed by the system. + * get_unused_pages provides an API to get the unused pages, these + * unused pages can be discarded if there is no modification since + * the request. Some other mechanism, like the dirty page logging + * can be used to track the modification. + * + * This function scans the free page list to get the unused pages + * whose pfn are range from start_pfn to end_pfn, and set the + * corresponding bit in the bitmap if an unused page is found. + * + * Allocating a large bitmap may fail because of fragmentation, + * instead of using a single bitmap, we use a scatter/gather bitmap. + * The 'bitmap' is the start address of an array which contains + * 'nr_bmap' separate small bitmaps, each bitmap contains 'bits' bits. 
+ * + * return -1 if parameters are invalid + * return 0 when end_pfn >= max_pfn + * return 1 when end_pfn < max_pfn + */ +int get_unused_pages(unsigned long start_pfn, unsigned long end_pfn, + unsigned long *bitmap[], unsigned long bits, unsigned int nr_bmap) +{ + struct zone *zone; + int ret = 0; + + if (bitmap == NULL || *bitmap == NULL || nr_bmap == 0 || + bits == 0 || start_pfn > end_pfn) + return -1; + if (end_pfn < max_pfn) + ret = 1; + if (end_pfn >= max_pfn) + ret = 0; + + for_each_populated_zone(zone) + mark_unused_pages_bitmap(zone, start_pfn, end_pfn, bitmap, + bits, nr_bmap); + + return ret; +} +EXPORT_SYMBOL(get_unused_pages); + static void zoneref_set_zone(struct zone *zone, struct zoneref *zoneref) { zoneref->zone = zone; -- 1.8.3.1
Liang Li
2016-Oct-21 06:24 UTC
[RESEND PATCH v3 kernel 6/7] virtio-balloon: define feature bit and head for misc virt queue
Define a new feature bit which supports a new virtual queue. This new
virtual queue is for information exchange between the hypervisor and
the guest. The hypervisor can make use of this virtual queue to request
that the guest perform some operations, e.g. drop page cache,
synchronize the file system, etc. The hypervisor can also get some of
the guest's runtime information through this virtual queue, e.g. the
guest's unused page information, which can be used for live migration
optimization.

Signed-off-by: Liang Li <liang.z.li at intel.com>
Cc: Michael S. Tsirkin <mst at redhat.com>
Cc: Paolo Bonzini <pbonzini at redhat.com>
Cc: Cornelia Huck <cornelia.huck at de.ibm.com>
Cc: Amit Shah <amit.shah at redhat.com>
---
 include/uapi/linux/virtio_balloon.h | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/include/uapi/linux/virtio_balloon.h b/include/uapi/linux/virtio_balloon.h
index d3b182a..3a9d633 100644
--- a/include/uapi/linux/virtio_balloon.h
+++ b/include/uapi/linux/virtio_balloon.h
@@ -35,6 +35,7 @@
 #define VIRTIO_BALLOON_F_STATS_VQ	1 /* Memory Stats virtqueue */
 #define VIRTIO_BALLOON_F_DEFLATE_ON_OOM	2 /* Deflate balloon on OOM */
 #define VIRTIO_BALLOON_F_PAGE_BITMAP	3 /* Send page info with bitmap */
+#define VIRTIO_BALLOON_F_MISC_VQ	4 /* Misc info virtqueue */
 
 /* Size of a PFN in the balloon interface. */
 #define VIRTIO_BALLOON_PFN_SHIFT 12
@@ -101,4 +102,25 @@ struct balloon_bmap_hdr {
 	__virtio64 bmap_len;
 };
 
+enum balloon_req_id {
+	/* Get unused pages information */
+	BALLOON_GET_UNUSED_PAGES,
+};
+
+enum balloon_flag {
+	/* Have more data for a request */
+	BALLOON_FLAG_CONT,
+	/* No more data for a request */
+	BALLOON_FLAG_DONE,
+};
+
+struct balloon_req_hdr {
+	/* Used to distinguish different request */
+	__virtio16 cmd;
+	/* Reserved */
+	__virtio16 reserved[3];
+	/* Request parameter */
+	__virtio64 param;
+};
+
 #endif /* _LINUX_VIRTIO_BALLOON_H */
--
1.8.3.1
Liang Li
2016-Oct-21 06:24 UTC
[RESEND PATCH v3 kernel 7/7] virtio-balloon: tell host vm's unused page info
Support the request for vm's unused page information, response with a page bitmap. QEMU can make use of this bitmap and the dirty page logging mechanism to skip the transportation of these unused pages, this is very helpful to speed up the live migration process. Signed-off-by: Liang Li <liang.z.li at intel.com> Cc: Michael S. Tsirkin <mst at redhat.com> Cc: Paolo Bonzini <pbonzini at redhat.com> Cc: Cornelia Huck <cornelia.huck at de.ibm.com> Cc: Amit Shah <amit.shah at redhat.com> --- drivers/virtio/virtio_balloon.c | 143 +++++++++++++++++++++++++++++++++++++--- 1 file changed, 134 insertions(+), 9 deletions(-) diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c index c31839c..f10bb8b 100644 --- a/drivers/virtio/virtio_balloon.c +++ b/drivers/virtio/virtio_balloon.c @@ -56,7 +56,7 @@ struct virtio_balloon { struct virtio_device *vdev; - struct virtqueue *inflate_vq, *deflate_vq, *stats_vq; + struct virtqueue *inflate_vq, *deflate_vq, *stats_vq, *misc_vq; /* The balloon servicing is delegated to a freezable workqueue. */ struct work_struct update_balloon_stats_work; @@ -78,6 +78,8 @@ struct virtio_balloon { unsigned int nr_page_bmap; /* Used to record the processed pfn range */ unsigned long min_pfn, max_pfn, start_pfn, end_pfn; + /* Request header */ + struct balloon_req_hdr req_hdr; /* * The pages we've told the Host we're not using are enqueued * at vb_dev_info->pages list. @@ -423,6 +425,78 @@ static void update_balloon_stats(struct virtio_balloon *vb) pages_to_bytes(available)); } +static void send_unused_pages_info(struct virtio_balloon *vb, + unsigned long req_id) +{ + struct scatterlist sg_in, sg_out[BALLOON_BMAP_COUNT + 1]; + unsigned long pfn = 0, bmap_len, pfn_limit, last_pfn, nr_pfn; + struct virtqueue *vq = vb->misc_vq; + struct balloon_bmap_hdr *hdr = vb->bmap_hdr; + int ret = 1, nr_buf, used_nr_bmap = 0, i; + + if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_PAGE_BITMAP) && + vb->nr_page_bmap == 1) + extend_page_bitmap(vb); + + pfn_limit = PFNS_PER_BMAP * vb->nr_page_bmap; + mutex_lock(&vb->balloon_lock); + last_pfn = get_max_pfn(); + + while (ret) { + clear_page_bitmap(vb); + ret = get_unused_pages(pfn, pfn + pfn_limit, vb->page_bitmap, + PFNS_PER_BMAP, vb->nr_page_bmap); + if (ret < 0) + break; + hdr->cmd = cpu_to_virtio16(vb->vdev, BALLOON_GET_UNUSED_PAGES); + hdr->page_shift = cpu_to_virtio16(vb->vdev, PAGE_SHIFT); + hdr->req_id = cpu_to_virtio64(vb->vdev, req_id); + hdr->start_pfn = cpu_to_virtio64(vb->vdev, pfn); + bmap_len = BALLOON_BMAP_SIZE * vb->nr_page_bmap; + + if (!ret) { + hdr->flag = cpu_to_virtio16(vb->vdev, + BALLOON_FLAG_DONE); + nr_pfn = last_pfn - pfn; + used_nr_bmap = nr_pfn / PFNS_PER_BMAP; + if (nr_pfn % PFNS_PER_BMAP) + used_nr_bmap++; + bmap_len = nr_pfn / BITS_PER_BYTE; + } else { + hdr->flag = cpu_to_virtio16(vb->vdev, + BALLOON_FLAG_CONT); + used_nr_bmap = vb->nr_page_bmap; + } + hdr->bmap_len = cpu_to_virtio64(vb->vdev, bmap_len); + nr_buf = used_nr_bmap + 1; + sg_init_table(sg_out, nr_buf); + sg_set_buf(&sg_out[0], hdr, sizeof(struct balloon_bmap_hdr)); + for (i = 0; i < used_nr_bmap; i++) { + unsigned int buf_len = BALLOON_BMAP_SIZE; + + if (i + 1 == used_nr_bmap) + buf_len = bmap_len - BALLOON_BMAP_SIZE * i; + sg_set_buf(&sg_out[i + 1], vb->page_bitmap[i], buf_len); + } + + while (vq->num_free < nr_buf) + msleep(2); + if (virtqueue_add_outbuf(vq, sg_out, nr_buf, vb, + GFP_KERNEL) == 0) { + virtqueue_kick(vq); + while (!virtqueue_get_buf(vq, &i) + && !virtqueue_is_broken(vq)) + cpu_relax(); + } + pfn += pfn_limit; 
+ } + + mutex_unlock(&vb->balloon_lock); + sg_init_one(&sg_in, &vb->req_hdr, sizeof(vb->req_hdr)); + virtqueue_add_inbuf(vq, &sg_in, 1, &vb->req_hdr, GFP_KERNEL); + virtqueue_kick(vq); +} + /* * While most virtqueues communicate guest-initiated requests to the hypervisor, * the stats queue operates in reverse. The driver initializes the virtqueue @@ -563,18 +637,56 @@ static void update_balloon_size_func(struct work_struct *work) queue_work(system_freezable_wq, work); } +static void misc_handle_rq(struct virtio_balloon *vb) +{ + struct balloon_req_hdr *ptr_hdr; + unsigned int len; + + ptr_hdr = virtqueue_get_buf(vb->misc_vq, &len); + if (!ptr_hdr || len != sizeof(vb->req_hdr)) + return; + + switch (ptr_hdr->cmd) { + case BALLOON_GET_UNUSED_PAGES: + send_unused_pages_info(vb, ptr_hdr->param); + break; + default: + break; + } +} + +static void misc_request(struct virtqueue *vq) +{ + struct virtio_balloon *vb = vq->vdev->priv; + + misc_handle_rq(vb); +} + static int init_vqs(struct virtio_balloon *vb) { - struct virtqueue *vqs[3]; - vq_callback_t *callbacks[] = { balloon_ack, balloon_ack, stats_request }; - static const char * const names[] = { "inflate", "deflate", "stats" }; + struct virtqueue *vqs[4]; + vq_callback_t *callbacks[] = { balloon_ack, balloon_ack, + stats_request, misc_request }; + static const char * const names[] = { "inflate", "deflate", "stats", + "misc" }; int err, nvqs; /* * We expect two virtqueues: inflate and deflate, and * optionally stat. */ - nvqs = virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ) ? 3 : 2; + if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_MISC_VQ)) + nvqs = 4; + else if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) + nvqs = 3; + else + nvqs = 2; + + if (!virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) { + __virtio_clear_bit(vb->vdev, VIRTIO_BALLOON_F_PAGE_BITMAP); + __virtio_clear_bit(vb->vdev, VIRTIO_BALLOON_F_MISC_VQ); + } + err = vb->vdev->config->find_vqs(vb->vdev, nvqs, vqs, callbacks, names); if (err) return err; @@ -595,6 +707,16 @@ static int init_vqs(struct virtio_balloon *vb) BUG(); virtqueue_kick(vb->stats_vq); } + if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_MISC_VQ)) { + struct scatterlist sg_in; + + vb->misc_vq = vqs[3]; + sg_init_one(&sg_in, &vb->req_hdr, sizeof(vb->req_hdr)); + if (virtqueue_add_inbuf(vb->misc_vq, &sg_in, 1, + &vb->req_hdr, GFP_KERNEL) < 0) + BUG(); + virtqueue_kick(vb->misc_vq); + } return 0; } @@ -703,13 +825,15 @@ static int virtballoon_probe(struct virtio_device *vdev) vb->num_pages = 0; vb->bmap_hdr = kzalloc(sizeof(struct balloon_bmap_hdr), GFP_KERNEL); /* Clear the feature bit if memory allocation fails */ - if (!vb->bmap_hdr) + if (!vb->bmap_hdr) { __virtio_clear_bit(vdev, VIRTIO_BALLOON_F_PAGE_BITMAP); - else { + __virtio_clear_bit(vdev, VIRTIO_BALLOON_F_MISC_VQ); + } else { vb->page_bitmap[0] = kmalloc(BALLOON_BMAP_SIZE, GFP_KERNEL); - if (!vb->page_bitmap[0]) + if (!vb->page_bitmap[0]) { __virtio_clear_bit(vdev, VIRTIO_BALLOON_F_PAGE_BITMAP); - else + __virtio_clear_bit(vdev, VIRTIO_BALLOON_F_MISC_VQ); + } else vb->nr_page_bmap = 1; } mutex_init(&vb->balloon_lock); @@ -832,6 +956,7 @@ static int virtballoon_restore(struct virtio_device *vdev) VIRTIO_BALLOON_F_STATS_VQ, VIRTIO_BALLOON_F_DEFLATE_ON_OOM, VIRTIO_BALLOON_F_PAGE_BITMAP, + VIRTIO_BALLOON_F_MISC_VQ, }; static struct virtio_driver virtio_balloon_driver = { -- 1.8.3.1
Dave Hansen
2016-Oct-21 17:25 UTC
[RESEND PATCH v3 kernel 0/7] Extend virtio-balloon for fast (de)inflating & fast live migration
On 10/20/2016 11:24 PM, Liang Li wrote:
> Dave Hansen suggested a new scheme to encode the data structure; because
> of the additional complexity, it's not implemented in v3.

So, what do you want done with this patch set?  Do you want it applied
as-is so that we can introduce a new host/guest ABI that we must support
until the end of time?  Then, we go back in a year or two and add the
newer format that addresses the deficiencies that this ABI has with a
third version?
Michael S. Tsirkin
2016-Oct-21 19:44 UTC
[RESEND PATCH v3 kernel 0/7] Extend virtio-balloon for fast (de)inflating & fast live migration
On Fri, Oct 21, 2016 at 10:25:21AM -0700, Dave Hansen wrote:
> On 10/20/2016 11:24 PM, Liang Li wrote:
> > Dave Hansen suggested a new scheme to encode the data structure; because
> > of the additional complexity, it's not implemented in v3.
>
> So, what do you want done with this patch set?  Do you want it applied
> as-is so that we can introduce a new host/guest ABI that we must support
> until the end of time?  Then, we go back in a year or two and add the
> newer format that addresses the deficiencies that this ABI has with a
> third version?

Exactly my questions.
Dave Hansen
2016-Oct-24 16:46 UTC
[RESEND PATCH v3 kernel 1/7] virtio-balloon: rework deflate to add page to a list
On 10/20/2016 11:24 PM, Liang Li wrote:
> Will allow faster notifications using a bitmap down the road.
> balloon_pfn_to_page() can be removed because it's useless.

This is a pretty terse description of what's going on here.  Could you
try to elaborate a bit?  What *is* the current approach?  Why does it
not work going forward?  What do you propose instead?  Why is it better?
Dave Hansen
2016-Oct-24 16:51 UTC
[RESEND PATCH v3 kernel 2/7] virtio-balloon: define new feature bit and page bitmap head
On 10/20/2016 11:24 PM, Liang Li wrote:
> Add a new feature which supports sending the page information with a
> bitmap. The current implementation uses a PFN array, which is not very
> efficient. Using a bitmap can improve the performance of
> inflating/deflating significantly.

Why is it not efficient?  How is using a bitmap more efficient?  In what
kinds of cases is the bitmap inefficient?

> The page bitmap header will be used to tell the host some information
> about the page bitmap, e.g. the page size, the page bitmap length and
> the start pfn.

Why did you choose to add these fields to the structure?  What benefits
do they add?

Could you describe your solution a bit here, and describe its strengths
and weaknesses?  The same comments apply, even if (especially if) you
change the data structure.

> Signed-off-by: Liang Li <liang.z.li at intel.com>
> Cc: Michael S. Tsirkin <mst at redhat.com>
> Cc: Paolo Bonzini <pbonzini at redhat.com>
> Cc: Cornelia Huck <cornelia.huck at de.ibm.com>
> Cc: Amit Shah <amit.shah at redhat.com>
> ---
>  include/uapi/linux/virtio_balloon.h | 19 +++++++++++++++++++
>  1 file changed, 19 insertions(+)
>
> diff --git a/include/uapi/linux/virtio_balloon.h b/include/uapi/linux/virtio_balloon.h
> index 343d7dd..d3b182a 100644
> --- a/include/uapi/linux/virtio_balloon.h
> +++ b/include/uapi/linux/virtio_balloon.h
> @@ -34,6 +34,7 @@
>  #define VIRTIO_BALLOON_F_MUST_TELL_HOST	0 /* Tell before reclaiming pages */
>  #define VIRTIO_BALLOON_F_STATS_VQ	1 /* Memory Stats virtqueue */
>  #define VIRTIO_BALLOON_F_DEFLATE_ON_OOM	2 /* Deflate balloon on OOM */
> +#define VIRTIO_BALLOON_F_PAGE_BITMAP	3 /* Send page info with bitmap */
>
>  /* Size of a PFN in the balloon interface. */
>  #define VIRTIO_BALLOON_PFN_SHIFT 12
> @@ -82,4 +83,22 @@ struct virtio_balloon_stat {
>  	__virtio64 val;
>  } __attribute__((packed));
>
> +/* Page bitmap header structure */
> +struct balloon_bmap_hdr {
> +	/* Used to distinguish different request */
> +	__virtio16 cmd;
> +	/* Shift width of page in the bitmap */
> +	__virtio16 page_shift;
> +	/* flag used to identify different status */
> +	__virtio16 flag;
> +	/* Reserved */
> +	__virtio16 reserved;
> +	/* ID of the request */
> +	__virtio64 req_id;
> +	/* The pfn of 0 bit in the bitmap */
> +	__virtio64 start_pfn;
> +	/* The length of the bitmap, in bytes */
> +	__virtio64 bmap_len;
> +};

FWIW this is totally unreadable.  Please do something like this:

> +struct balloon_bmap_hdr {
> +	__virtio16 cmd;		/* Used to distinguish different ...
> +	__virtio16 page_shift;	/* Shift width of page in the bitmap */
> +	__virtio16 flag;	/* flag used to identify different...
> +	__virtio16 reserved;	/* Reserved */
> +	__virtio64 req_id;	/* ID of the request */
> +	__virtio64 start_pfn;	/* The pfn of 0 bit in the bitmap */
> +	__virtio64 bmap_len;	/* The length of the bitmap, in bytes */
> +};

and please make an effort to add useful comments.  "/* Reserved */"
seems like a waste of bytes to me.
Dave Hansen
2016-Oct-24 16:53 UTC
[RESEND PATCH v3 kernel 3/7] mm: add a function to get the max pfn
On 10/20/2016 11:24 PM, Liang Li wrote:
> Expose the function to get the max pfn, so it can be used in the
> virtio-balloon device driver. Simply including 'linux/bootmem.h' is
> not enough: if the device driver is built as a module, referring to
> max_pfn directly leads to a build failure.

I'm not sure the rest of the set is worth reviewing.  I think a lot of
it will change pretty fundamentally once you have those improved data
structures in place.
Michael S. Tsirkin
2016-Oct-25 06:36 UTC
[RESEND PATCH v3 kernel 4/7] virtio-balloon: speed up inflate/deflate process
On Fri, Oct 21, 2016 at 02:24:37PM +0800, Liang Li wrote:> The implementation of the current virtio-balloon is not very > efficient, the time spends on different stages of inflating > the balloon to 7GB of a 8GB idle guest: > > a. allocating pages (6.5%) > b. sending PFNs to host (68.3%) > c. address translation (6.1%) > d. madvise (19%) > > It takes about 4126ms for the inflating process to complete. > Debugging shows that the bottle neck are the stage b and stage d. > > If using a bitmap to send the page info instead of the PFNs, we > can reduce the overhead in stage b quite a lot. Furthermore, we > can do the address translation and call madvise() with a bulk of > RAM pages, instead of the current page per page way, the overhead > of stage c and stage d can also be reduced a lot. > > This patch is the kernel side implementation which is intended to > speed up the inflating & deflating process by adding a new feature > to the virtio-balloon device. With this new feature, inflating the > balloon to 7GB of a 8GB idle guest only takes 590ms, the > performance improvement is about 85%. > > TODO: optimize stage a by allocating/freeing a chunk of pages > instead of a single page at a time. > > Signed-off-by: Liang Li <liang.z.li at intel.com> > Suggested-by: Michael S. Tsirkin <mst at redhat.com> > Cc: Michael S. Tsirkin <mst at redhat.com> > Cc: Paolo Bonzini <pbonzini at redhat.com> > Cc: Cornelia Huck <cornelia.huck at de.ibm.com> > Cc: Amit Shah <amit.shah at redhat.com> > --- > drivers/virtio/virtio_balloon.c | 233 +++++++++++++++++++++++++++++++++++----- > 1 file changed, 209 insertions(+), 24 deletions(-) > > diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c > index 59ffe5a..c31839c 100644 > --- a/drivers/virtio/virtio_balloon.c > +++ b/drivers/virtio/virtio_balloon.c > @@ -42,6 +42,10 @@ > #define OOM_VBALLOON_DEFAULT_PAGES 256 > #define VIRTBALLOON_OOM_NOTIFY_PRIORITY 80 > > +#define BALLOON_BMAP_SIZE (8 * PAGE_SIZE) > +#define PFNS_PER_BMAP (BALLOON_BMAP_SIZE * BITS_PER_BYTE) > +#define BALLOON_BMAP_COUNT 32 > + > static int oom_pages = OOM_VBALLOON_DEFAULT_PAGES; > module_param(oom_pages, int, S_IRUSR | S_IWUSR); > MODULE_PARM_DESC(oom_pages, "pages to free on OOM"); > @@ -67,6 +71,13 @@ struct virtio_balloon { > > /* Number of balloon pages we've told the Host we're not using. */ > unsigned int num_pages; > + /* Pointer of the bitmap header. */ > + void *bmap_hdr; > + /* Bitmap and bitmap count used to tell the host the pages */ > + unsigned long *page_bitmap[BALLOON_BMAP_COUNT]; > + unsigned int nr_page_bmap; > + /* Used to record the processed pfn range */ > + unsigned long min_pfn, max_pfn, start_pfn, end_pfn; > /* > * The pages we've told the Host we're not using are enqueued > * at vb_dev_info->pages list. 
> @@ -110,16 +121,66 @@ static void balloon_ack(struct virtqueue *vq) > wake_up(&vb->acked); > } > > +static inline void init_pfn_range(struct virtio_balloon *vb) > +{ > + vb->min_pfn = ULONG_MAX; > + vb->max_pfn = 0; > +} > + > +static inline void update_pfn_range(struct virtio_balloon *vb, > + struct page *page) > +{ > + unsigned long balloon_pfn = page_to_balloon_pfn(page); > + > + if (balloon_pfn < vb->min_pfn) > + vb->min_pfn = balloon_pfn; > + if (balloon_pfn > vb->max_pfn) > + vb->max_pfn = balloon_pfn; > +} > +rename to hint these are all bitmap related.> static void tell_host(struct virtio_balloon *vb, struct virtqueue *vq) > { > - struct scatterlist sg; > - unsigned int len; > + struct scatterlist sg, sg2[BALLOON_BMAP_COUNT + 1]; > + unsigned int len, i; > + > + if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_PAGE_BITMAP)) { > + struct balloon_bmap_hdr *hdr = vb->bmap_hdr; > + unsigned long bmap_len; > + int nr_pfn, nr_used_bmap, nr_buf; > + > + nr_pfn = vb->end_pfn - vb->start_pfn + 1; > + nr_pfn = roundup(nr_pfn, BITS_PER_LONG); > + nr_used_bmap = nr_pfn / PFNS_PER_BMAP; > + bmap_len = nr_pfn / BITS_PER_BYTE; > + nr_buf = nr_used_bmap + 1; > + > + /* cmd, reserved and req_id are init to 0, unused here */ > + hdr->page_shift = cpu_to_virtio16(vb->vdev, PAGE_SHIFT); > + hdr->start_pfn = cpu_to_virtio64(vb->vdev, vb->start_pfn); > + hdr->bmap_len = cpu_to_virtio64(vb->vdev, bmap_len); > + sg_init_table(sg2, nr_buf); > + sg_set_buf(&sg2[0], hdr, sizeof(struct balloon_bmap_hdr)); > + for (i = 0; i < nr_used_bmap; i++) { > + unsigned int buf_len = BALLOON_BMAP_SIZE; > + > + if (i + 1 == nr_used_bmap) > + buf_len = bmap_len - BALLOON_BMAP_SIZE * i; > + sg_set_buf(&sg2[i + 1], vb->page_bitmap[i], buf_len); > + } > > - sg_init_one(&sg, vb->pfns, sizeof(vb->pfns[0]) * vb->num_pfns); > + while (vq->num_free < nr_buf) > + msleep(2);What's going on here? Who is expected to update num_free?> + if (virtqueue_add_outbuf(vq, sg2, nr_buf, vb, GFP_KERNEL) == 0) > + virtqueue_kick(vq); > > - /* We should always be able to add one buffer to an empty queue. */ > - virtqueue_add_outbuf(vq, &sg, 1, vb, GFP_KERNEL); > - virtqueue_kick(vq); > + } else { > + sg_init_one(&sg, vb->pfns, sizeof(vb->pfns[0]) * vb->num_pfns); > + > + /* We should always be able to add one buffer to an empty > + * queue. */Pls use a multiple comment style consistent with kernel coding style.> + virtqueue_add_outbuf(vq, &sg, 1, vb, GFP_KERNEL); > + virtqueue_kick(vq); > + } > > /* When host has read buffer, this completes via balloon_ack */ > wait_event(vb->acked, virtqueue_get_buf(vq, &len)); > @@ -138,13 +199,93 @@ static void set_page_pfns(struct virtio_balloon *vb, > page_to_balloon_pfn(page) + i); > } > > -static unsigned fill_balloon(struct virtio_balloon *vb, size_t num) > +static void extend_page_bitmap(struct virtio_balloon *vb) > +{ > + int i; > + unsigned long bmap_len, bmap_count; > + > + bmap_len = ALIGN(get_max_pfn(), BITS_PER_LONG) / BITS_PER_BYTE; > + bmap_count = bmap_len / BALLOON_BMAP_SIZE; > + if (bmap_len % BALLOON_BMAP_SIZE) > + bmap_count++; > + if (bmap_count > BALLOON_BMAP_COUNT) > + bmap_count = BALLOON_BMAP_COUNT; > +This is doing simple things in tricky ways. Please use macros such as ALIGN and max instead of if.> + for (i = 1; i < bmap_count; i++) {why 1?> + vb->page_bitmap[i] = kmalloc(BALLOON_BMAP_SIZE, GFP_ATOMIC);why GFP_ATOMIC? 
and what will free the previous buffer?> + if (vb->page_bitmap[i]) > + vb->nr_page_bmap++; > + else > + break;and what will happen then?> + } > +} > + > +static void kfree_page_bitmap(struct virtio_balloon *vb) > +{ > + int i; > + > + for (i = 0; i < vb->nr_page_bmap; i++) > + kfree(vb->page_bitmap[i]); > +} > + > +static void clear_page_bitmap(struct virtio_balloon *vb) > +{ > + int i; > + > + for (i = 0; i < vb->nr_page_bmap; i++) > + memset(vb->page_bitmap[i], 0, BALLOON_BMAP_SIZE); > +} > + > +static void set_page_bitmap(struct virtio_balloon *vb, > + struct list_head *pages, struct virtqueue *vq) > +{ > + unsigned long pfn, pfn_limit; > + struct page *page; > + bool found; > + int bmap_idx; > + > + vb->min_pfn = rounddown(vb->min_pfn, BITS_PER_LONG); > + vb->max_pfn = roundup(vb->max_pfn, BITS_PER_LONG); > + pfn_limit = PFNS_PER_BMAP * vb->nr_page_bmap; > + > + for (pfn = vb->min_pfn; pfn < vb->max_pfn; pfn += pfn_limit) { > + unsigned long end_pfn; > + > + clear_page_bitmap(vb); > + vb->start_pfn = pfn; > + end_pfn = pfn; > + found = false; > + list_for_each_entry(page, pages, lru) { > + unsigned long pos, balloon_pfn; > + > + balloon_pfn = page_to_balloon_pfn(page); > + if (balloon_pfn < pfn || balloon_pfn >= pfn + pfn_limit) > + continue; > + bmap_idx = (balloon_pfn - pfn) / PFNS_PER_BMAP; > + pos = (balloon_pfn - pfn) % PFNS_PER_BMAP; > + set_bit(pos, vb->page_bitmap[bmap_idx]); > + if (balloon_pfn > end_pfn) > + end_pfn = balloon_pfn; > + found = true; > + } > + if (found) { > + vb->end_pfn = end_pfn; > + tell_host(vb, vq); > + } > + } > +} > + > +static unsigned int fill_balloon(struct virtio_balloon *vb, size_t num, > + bool use_bmap) > { > struct balloon_dev_info *vb_dev_info = &vb->vb_dev_info; > - unsigned num_allocated_pages; > + unsigned int num_allocated_pages; > > - /* We can only do one array worth at a time. */ > - num = min(num, ARRAY_SIZE(vb->pfns)); > + if (use_bmap) > + init_pfn_range(vb); > + else > + /* We can only do one array worth at a time. */ > + num = min(num, ARRAY_SIZE(vb->pfns)); > > mutex_lock(&vb->balloon_lock); > for (vb->num_pfns = 0; vb->num_pfns < num; > @@ -159,7 +300,10 @@ static unsigned fill_balloon(struct virtio_balloon *vb, size_t num) > msleep(200); > break; > } > - set_page_pfns(vb, vb->pfns + vb->num_pfns, page); > + if (use_bmap) > + update_pfn_range(vb, page); > + else > + set_page_pfns(vb, vb->pfns + vb->num_pfns, page); > vb->num_pages += VIRTIO_BALLOON_PAGES_PER_PAGE; > if (!virtio_has_feature(vb->vdev, > VIRTIO_BALLOON_F_DEFLATE_ON_OOM)) > @@ -168,8 +312,13 @@ static unsigned fill_balloon(struct virtio_balloon *vb, size_t num) > > num_allocated_pages = vb->num_pfns; > /* Did we get any? */ > - if (vb->num_pfns != 0) > - tell_host(vb, vb->inflate_vq); > + if (vb->num_pfns != 0) { > + if (use_bmap) > + set_page_bitmap(vb, &vb_dev_info->pages, > + vb->inflate_vq); > + else > + tell_host(vb, vb->inflate_vq); > + } > mutex_unlock(&vb->balloon_lock); > > return num_allocated_pages; > @@ -189,15 +338,19 @@ static void release_pages_balloon(struct virtio_balloon *vb, > } > } > > -static unsigned leak_balloon(struct virtio_balloon *vb, size_t num) > +static unsigned int leak_balloon(struct virtio_balloon *vb, size_t num, > + bool use_bmap)this is just a feature bit - why not get it internally?> { > - unsigned num_freed_pages; > + unsigned int num_freed_pages; > struct page *page; > struct balloon_dev_info *vb_dev_info = &vb->vb_dev_info; > LIST_HEAD(pages); > > - /* We can only do one array worth at a time. 
*/ > - num = min(num, ARRAY_SIZE(vb->pfns)); > + if (use_bmap) > + init_pfn_range(vb); > + else > + /* We can only do one array worth at a time. */ > + num = min(num, ARRAY_SIZE(vb->pfns)); > > mutex_lock(&vb->balloon_lock); > /* We can't release more pages than taken */ > @@ -207,7 +360,10 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num) > page = balloon_page_dequeue(vb_dev_info); > if (!page) > break; > - set_page_pfns(vb, vb->pfns + vb->num_pfns, page); > + if (use_bmap) > + update_pfn_range(vb, page); > + else > + set_page_pfns(vb, vb->pfns + vb->num_pfns, page); > list_add(&page->lru, &pages); > vb->num_pages -= VIRTIO_BALLOON_PAGES_PER_PAGE; > } > @@ -218,8 +374,14 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num) > * virtio_has_feature(vdev, VIRTIO_BALLOON_F_MUST_TELL_HOST); > * is true, we *have* to do it in this order > */ > - if (vb->num_pfns != 0) > - tell_host(vb, vb->deflate_vq); > + if (vb->num_pfns != 0) { > + if (use_bmap) > + set_page_bitmap(vb, &pages, vb->deflate_vq); > + else > + tell_host(vb, vb->deflate_vq); > + > + release_pages_balloon(vb, &pages); > + } > release_pages_balloon(vb, &pages); > mutex_unlock(&vb->balloon_lock); > return num_freed_pages; > @@ -354,13 +516,15 @@ static int virtballoon_oom_notify(struct notifier_block *self, > struct virtio_balloon *vb; > unsigned long *freed; > unsigned num_freed_pages; > + bool use_bmap; > > vb = container_of(self, struct virtio_balloon, nb); > if (!virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM)) > return NOTIFY_OK; > > freed = parm; > - num_freed_pages = leak_balloon(vb, oom_pages); > + use_bmap = virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_PAGE_BITMAP); > + num_freed_pages = leak_balloon(vb, oom_pages, use_bmap); > update_balloon_size(vb); > *freed += num_freed_pages; > > @@ -380,15 +544,19 @@ static void update_balloon_size_func(struct work_struct *work) > { > struct virtio_balloon *vb; > s64 diff; > + bool use_bmap; > > vb = container_of(work, struct virtio_balloon, > update_balloon_size_work); > diff = towards_target(vb); > + use_bmap = virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_PAGE_BITMAP); > + if (use_bmap && diff && vb->nr_page_bmap == 1) > + extend_page_bitmap(vb);So you allocate it on first use, then keep it around until device remove? Seems ugly. Needs comments explaining the motivation for this. 
Can't we free it immediately when it becomes unused?> > if (diff > 0) > - diff -= fill_balloon(vb, diff); > + diff -= fill_balloon(vb, diff, use_bmap); > else if (diff < 0) > - diff += leak_balloon(vb, -diff); > + diff += leak_balloon(vb, -diff, use_bmap); > update_balloon_size(vb); > > if (diff) > @@ -533,6 +701,17 @@ static int virtballoon_probe(struct virtio_device *vdev) > spin_lock_init(&vb->stop_update_lock); > vb->stop_update = false; > vb->num_pages = 0; > + vb->bmap_hdr = kzalloc(sizeof(struct balloon_bmap_hdr), GFP_KERNEL); > + /* Clear the feature bit if memory allocation fails */ > + if (!vb->bmap_hdr) > + __virtio_clear_bit(vdev, VIRTIO_BALLOON_F_PAGE_BITMAP); > + else { > + vb->page_bitmap[0] = kmalloc(BALLOON_BMAP_SIZE, GFP_KERNEL); > + if (!vb->page_bitmap[0]) > + __virtio_clear_bit(vdev, VIRTIO_BALLOON_F_PAGE_BITMAP); > + else > + vb->nr_page_bmap = 1; > + } > mutex_init(&vb->balloon_lock); > init_waitqueue_head(&vb->acked); > vb->vdev = vdev; > @@ -583,9 +762,12 @@ static int virtballoon_probe(struct virtio_device *vdev) > > static void remove_common(struct virtio_balloon *vb) > { > + bool use_bmap; > + > + use_bmap = virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_PAGE_BITMAP); > /* There might be pages left in the balloon: free them. */ > while (vb->num_pages) > - leak_balloon(vb, vb->num_pages); > + leak_balloon(vb, vb->num_pages, use_bmap); > update_balloon_size(vb); > > /* Now we reset the device so we can clean up the queues. */ > @@ -609,6 +791,8 @@ static void virtballoon_remove(struct virtio_device *vdev) > remove_common(vb); > if (vb->vb_dev_info.inode) > iput(vb->vb_dev_info.inode); > + kfree_page_bitmap(vb); > + kfree(vb->bmap_hdr); > kfree(vb); > } > > @@ -647,6 +831,7 @@ static int virtballoon_restore(struct virtio_device *vdev) > VIRTIO_BALLOON_F_MUST_TELL_HOST, > VIRTIO_BALLOON_F_STATS_VQ, > VIRTIO_BALLOON_F_DEFLATE_ON_OOM, > + VIRTIO_BALLOON_F_PAGE_BITMAP, > }; > > static struct virtio_driver virtio_balloon_driver = { > -- > 1.8.3.1
Li, Liang Z
2016-Oct-26 10:06 UTC
[RESEND PATCH v3 kernel 0/7] Extend virtio-balloon for fast (de)inflating & fast live migration
> On 10/20/2016 11:24 PM, Liang Li wrote:
> > Dave Hansen suggested a new scheme to encode the data structure; because
> > of the additional complexity, it's not implemented in v3.
>
> So, what do you want done with this patch set?  Do you want it applied
> as-is so that we can introduce a new host/guest ABI that we must support
> until the end of time?  Then, we go back in a year or two and add the
> newer format that addresses the deficiencies that this ABI has with a
> third version?

Hi Dave & Michael,

I am working on Dave's new bitmap scheme. I have finished the part that
builds the 'hybrid scheme bitmap' and found the complexity was more than
I expected. The main issue is that extra memory is required to hold the
'hybrid scheme bitmap' besides the memory used for the raw page bitmap;
in the worst case, the memory required is three times that of the
previous implementation.

I am wondering if I should continue. As an alternative solution, how
about using a PFN array when inflating/deflating only a few pages?
Things would be much simpler.

Thanks!
Liang
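To make that alternative concrete, here is a rough sketch of the kind of
selection logic being proposed: keep the existing PFN array for small
inflate/deflate requests and fall back to the bitmap encoding only when
it is the smaller representation. This is purely illustrative; the names
and threshold are made up and are not code from this series.

/* Illustrative sketch only -- not part of the series. */
#include <stdbool.h>
#include <stddef.h>

#define DEMO_PFN_ARRAY_MAX	256	/* pages per PFN-array message */

/*
 * A 4-byte PFN entry per page vs. one bit per page of the covered range:
 * the bitmap only wins when the pages are numerous (or dense) enough.
 */
static bool demo_use_bitmap(size_t nr_pages, unsigned long min_pfn,
			    unsigned long max_pfn)
{
	size_t pfn_array_bytes = nr_pages * sizeof(unsigned int);
	size_t bitmap_bytes = (max_pfn - min_pfn + 8) / 8;

	if (nr_pages <= DEMO_PFN_ARRAY_MAX)
		return false;
	return bitmap_bytes < pfn_array_bytes;
}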
Michael S. Tsirkin
2016-Oct-26 15:35 UTC
[RESEND PATCH v3 kernel 2/7] virtio-balloon: define new feature bit and page bitmap head
On Fri, Oct 21, 2016 at 02:24:35PM +0800, Liang Li wrote:> Add a new feature which supports sending the page information with > a bitmap. The current implementation uses PFNs array, which is not > very efficient. Using bitmap can improve the performance of > inflating/deflating significantly > > The page bitmap header will used to tell the host some information > about the page bitmap. e.g. the page size, page bitmap length and > start pfn. > > Signed-off-by: Liang Li <liang.z.li at intel.com> > Cc: Michael S. Tsirkin <mst at redhat.com> > Cc: Paolo Bonzini <pbonzini at redhat.com> > Cc: Cornelia Huck <cornelia.huck at de.ibm.com> > Cc: Amit Shah <amit.shah at redhat.com> > --- > include/uapi/linux/virtio_balloon.h | 19 +++++++++++++++++++ > 1 file changed, 19 insertions(+) > > diff --git a/include/uapi/linux/virtio_balloon.h b/include/uapi/linux/virtio_balloon.h > index 343d7dd..d3b182a 100644 > --- a/include/uapi/linux/virtio_balloon.h > +++ b/include/uapi/linux/virtio_balloon.h > @@ -34,6 +34,7 @@ > #define VIRTIO_BALLOON_F_MUST_TELL_HOST 0 /* Tell before reclaiming pages */ > #define VIRTIO_BALLOON_F_STATS_VQ 1 /* Memory Stats virtqueue */ > #define VIRTIO_BALLOON_F_DEFLATE_ON_OOM 2 /* Deflate balloon on OOM */ > +#define VIRTIO_BALLOON_F_PAGE_BITMAP 3 /* Send page info with bitmap */ > > /* Size of a PFN in the balloon interface. */ > #define VIRTIO_BALLOON_PFN_SHIFT 12 > @@ -82,4 +83,22 @@ struct virtio_balloon_stat { > __virtio64 val; > } __attribute__((packed)); > > +/* Page bitmap header structure */ > +struct balloon_bmap_hdr {Should be virtio_balloon.> + /* Used to distinguish different request */different requests? what are the legal values?> + __virtio16 cmd; > + /* Shift width of page in the bitmap */In which units?> + __virtio16 page_shift; > + /* flag used to identify different status */this comment does not seem to add any value.> + __virtio16 flag; > + /* Reserved */this too> + __virtio16 reserved; > + /* ID of the request */ > + __virtio64 req_id; > + /* The pfn of 0 bit in the bitmap */ > + __virtio64 start_pfn; > + /* The length of the bitmap, in bytes */Why not in bits?> + __virtio64 bmap_len; > +}; > + > #endif /* _LINUX_VIRTIO_BALLOON_H */ > -- > 1.8.3.1
Michael S. Tsirkin
2016-Oct-27 18:29 UTC
[RESEND PATCH v3 kernel 6/7] virtio-balloon: define feature bit and head for misc virt queue
On Fri, Oct 21, 2016 at 02:24:39PM +0800, Liang Li wrote:> Define a new feature bit which supports a new virtual queue. This > new virtual qeuque is for information exchange between hypervisor > and guest. The VMM hypervisor can make use of this virtual queue > to request the guest do some operations, e.g. drop page cache, > synchronize file system, etc.Can we call this something more informative pls? host request vq?> And the VMM hypervisor can get some > of guest's runtime information through this virtual queue, e.g. the > guest's unused page information, which can be used for live migration > optimization.I guess the idea is that guest gets requests from host and then responds to them on this vq. Pls document.> > Signed-off-by: Liang Li <liang.z.li at intel.com> > Cc: Michael S. Tsirkin <mst at redhat.com> > Cc: Paolo Bonzini <pbonzini at redhat.com> > Cc: Cornelia Huck <cornelia.huck at de.ibm.com> > Cc: Amit Shah <amit.shah at redhat.com> > --- > include/uapi/linux/virtio_balloon.h | 22 ++++++++++++++++++++++ > 1 file changed, 22 insertions(+) > > diff --git a/include/uapi/linux/virtio_balloon.h b/include/uapi/linux/virtio_balloon.h > index d3b182a..3a9d633 100644 > --- a/include/uapi/linux/virtio_balloon.h > +++ b/include/uapi/linux/virtio_balloon.h > @@ -35,6 +35,7 @@ > #define VIRTIO_BALLOON_F_STATS_VQ 1 /* Memory Stats virtqueue */ > #define VIRTIO_BALLOON_F_DEFLATE_ON_OOM 2 /* Deflate balloon on OOM */ > #define VIRTIO_BALLOON_F_PAGE_BITMAP 3 /* Send page info with bitmap */ > +#define VIRTIO_BALLOON_F_MISC_VQ 4 /* Misc info virtqueue */ > > /* Size of a PFN in the balloon interface. */ > #define VIRTIO_BALLOON_PFN_SHIFT 12 > @@ -101,4 +102,25 @@ struct balloon_bmap_hdr { > __virtio64 bmap_len; > }; > > +enum balloon_req_id { > + /* Get unused pages information */unused page information> + BALLOON_GET_UNUSED_PAGES, > +}; > + > +enum balloon_flag { > + /* Have more data for a request */ > + BALLOON_FLAG_CONT, > + /* No more data for a request */ > + BALLOON_FLAG_DONE, > +};is this a bit number or a value? Pls name consistently.> + > +struct balloon_req_hdr { > + /* Used to distinguish different request */requests> + __virtio16 cmd; > + /* Reserved */ > + __virtio16 reserved[3]; > + /* Request parameter */ > + __virtio64 param; > +}; > + > #endif /* _LINUX_VIRTIO_BALLOON_H */Prefix structs with virtio_ as well pls. Also, wouldn't it simplify code if we use __le for new structs?> -- > 1.8.3.1