Liang Li
2016-Aug-08 06:35 UTC
[PATCH v3 kernel 0/7] Extend virtio-balloon for fast (de)inflating & fast live migration
This patch set contains two sets of changes to virtio-balloon.

The first speeds up the inflating and deflating process. The main idea of this optimization is to use a bitmap instead of an array of PFNs to send the page information to the host, which reduces the overhead of virtio data transmission, address translation and madvise(). This improves performance by about 85%.

The second speeds up live migration. By skipping the guest's free pages in the first round of data copy, needless data processing is avoided, which saves a lot of CPU cycles and network bandwidth. The guest's free page information is put into a bitmap and sent to the host through a virtio-balloon virt queue. For an idle 8GB guest, this shortens the total live migration time from about 2 seconds to about 500ms in a 10Gbps network environment.

Dave Hansen suggested a new scheme to encode the data structure; because of the additional complexity, it is not implemented in v3.

Changes from v2 to v3:
 * Change the name 'free page' to 'unused page'.
 * Use a scatter & gather bitmap instead of a 1MB page bitmap.
 * Fix overwriting the page bitmap after kicking.
 * Address some of MST's comments on v2.

Changes from v1 to v2:
 * Abandon the patch for dropping page cache.
 * Move some structures to the uapi header file.
 * Use a new way to determine the page bitmap size.
 * Use a unified way to send the free page information with the bitmap.
 * Address the issues referred to in MST's comments.

Liang Li (7):
  virtio-balloon: rework deflate to add page to a list
  virtio-balloon: define new feature bit and page bitmap head
  mm: add a function to get the max pfn
  virtio-balloon: speed up inflate/deflate process
  mm: add the related functions to get unused page
  virtio-balloon: define feature bit and head for misc virt queue
  virtio-balloon: tell host vm's unused page info

 drivers/virtio/virtio_balloon.c     | 390 ++++++++++++++++++++++++++++++++----
 include/linux/mm.h                  |   3 +
 include/uapi/linux/virtio_balloon.h |  41 ++++
 mm/page_alloc.c                     |  94 +++++++++
 4 files changed, 485 insertions(+), 43 deletions(-)

-- 
1.8.3.1
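To get a feel for the transmission saving the cover letter claims, here is a back-of-the-envelope model (not code from the series; both function names are made up): sending N pages as 32-bit PFNs costs 4*N bytes, while a bitmap covering a contiguous PFN span of S pages costs S/8 bytes regardless of how many of them are in the balloon.

```c
#include <stddef.h>
#include <stdint.h>

/* Bytes needed to send n_pages as an array of 32-bit PFNs. */
static size_t pfn_array_bytes(size_t n_pages)
{
	return n_pages * sizeof(uint32_t);
}

/* Bytes needed for a bitmap covering a contiguous span of pfn_span
 * pages, rounded up to whole bytes. */
static size_t bitmap_bytes(size_t pfn_span)
{
	return (pfn_span + 7) / 8;
}
```

For 1GB of 4KB pages (262144 pages), the PFN array is 1MB while the bitmap is 32KB, a 32x reduction in transferred data.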
Liang Li
2016-Aug-08 06:35 UTC
[PATCH v3 kernel 1/7] virtio-balloon: rework deflate to add page to a list
This will allow faster notifications using a bitmap down the road. balloon_pfn_to_page() can be removed because it no longer has any users.

Signed-off-by: Liang Li <liang.z.li at intel.com>
Signed-off-by: Michael S. Tsirkin <mst at redhat.com>
Cc: Paolo Bonzini <pbonzini at redhat.com>
Cc: Cornelia Huck <cornelia.huck at de.ibm.com>
Cc: Amit Shah <amit.shah at redhat.com>
Cc: Dave Hansen <dave.hansen at intel.com>
---
 drivers/virtio/virtio_balloon.c | 22 ++++++++--------------
 1 file changed, 8 insertions(+), 14 deletions(-)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index 4e7003d..59ffe5a 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -103,12 +103,6 @@ static u32 page_to_balloon_pfn(struct page *page)
 	return pfn * VIRTIO_BALLOON_PAGES_PER_PAGE;
 }
 
-static struct page *balloon_pfn_to_page(u32 pfn)
-{
-	BUG_ON(pfn % VIRTIO_BALLOON_PAGES_PER_PAGE);
-	return pfn_to_page(pfn / VIRTIO_BALLOON_PAGES_PER_PAGE);
-}
-
 static void balloon_ack(struct virtqueue *vq)
 {
 	struct virtio_balloon *vb = vq->vdev->priv;
@@ -181,18 +175,16 @@ static unsigned fill_balloon(struct virtio_balloon *vb, size_t num)
 	return num_allocated_pages;
 }
 
-static void release_pages_balloon(struct virtio_balloon *vb)
+static void release_pages_balloon(struct virtio_balloon *vb,
+				 struct list_head *pages)
 {
-	unsigned int i;
-	struct page *page;
+	struct page *page, *next;
 
-	/* Find pfns pointing at start of each page, get pages and free them. */
-	for (i = 0; i < vb->num_pfns; i += VIRTIO_BALLOON_PAGES_PER_PAGE) {
-		page = balloon_pfn_to_page(virtio32_to_cpu(vb->vdev,
-							   vb->pfns[i]));
+	list_for_each_entry_safe(page, next, pages, lru) {
 		if (!virtio_has_feature(vb->vdev,
 					VIRTIO_BALLOON_F_DEFLATE_ON_OOM))
 			adjust_managed_page_count(page, 1);
+		list_del(&page->lru);
 		put_page(page); /* balloon reference */
 	}
 }
@@ -202,6 +194,7 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num)
 	unsigned num_freed_pages;
 	struct page *page;
 	struct balloon_dev_info *vb_dev_info = &vb->vb_dev_info;
+	LIST_HEAD(pages);
 
 	/* We can only do one array worth at a time. */
 	num = min(num, ARRAY_SIZE(vb->pfns));
@@ -215,6 +208,7 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num)
 		if (!page)
 			break;
 		set_page_pfns(vb, vb->pfns + vb->num_pfns, page);
+		list_add(&page->lru, &pages);
 		vb->num_pages -= VIRTIO_BALLOON_PAGES_PER_PAGE;
 	}
 
@@ -226,7 +220,7 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num)
 	 */
 	if (vb->num_pfns != 0)
 		tell_host(vb, vb->deflate_vq);
-	release_pages_balloon(vb);
+	release_pages_balloon(vb, &pages);
 	mutex_unlock(&vb->balloon_lock);
 	return num_freed_pages;
 }
-- 
1.8.3.1
Liang Li
2016-Aug-08 06:35 UTC
[PATCH v3 kernel 2/7] virtio-balloon: define new feature bit and page bitmap head
Add a new feature which supports sending the page information with a bitmap. The current implementation uses a PFN array, which is not very efficient; using a bitmap can significantly improve the performance of inflating/deflating. The page bitmap header is used to tell the host some information about the page bitmap, e.g. the page size, the page bitmap length and the start pfn.

Signed-off-by: Liang Li <liang.z.li at intel.com>
Cc: Michael S. Tsirkin <mst at redhat.com>
Cc: Paolo Bonzini <pbonzini at redhat.com>
Cc: Cornelia Huck <cornelia.huck at de.ibm.com>
Cc: Amit Shah <amit.shah at redhat.com>
Cc: Dave Hansen <dave.hansen at intel.com>
---
 include/uapi/linux/virtio_balloon.h | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/include/uapi/linux/virtio_balloon.h b/include/uapi/linux/virtio_balloon.h
index 343d7dd..d3b182a 100644
--- a/include/uapi/linux/virtio_balloon.h
+++ b/include/uapi/linux/virtio_balloon.h
@@ -34,6 +34,7 @@
 #define VIRTIO_BALLOON_F_MUST_TELL_HOST	0 /* Tell before reclaiming pages */
 #define VIRTIO_BALLOON_F_STATS_VQ	1 /* Memory Stats virtqueue */
 #define VIRTIO_BALLOON_F_DEFLATE_ON_OOM	2 /* Deflate balloon on OOM */
+#define VIRTIO_BALLOON_F_PAGE_BITMAP	3 /* Send page info with bitmap */
 
 /* Size of a PFN in the balloon interface. */
 #define VIRTIO_BALLOON_PFN_SHIFT 12
@@ -82,4 +83,22 @@ struct virtio_balloon_stat {
 	__virtio64 val;
 } __attribute__((packed));
 
+/* Page bitmap header structure */
+struct balloon_bmap_hdr {
+	/* Used to distinguish different request */
+	__virtio16 cmd;
+	/* Shift width of page in the bitmap */
+	__virtio16 page_shift;
+	/* flag used to identify different status */
+	__virtio16 flag;
+	/* Reserved */
+	__virtio16 reserved;
+	/* ID of the request */
+	__virtio64 req_id;
+	/* The pfn of 0 bit in the bitmap */
+	__virtio64 start_pfn;
+	/* The length of the bitmap, in bytes */
+	__virtio64 bmap_len;
+};
+
 #endif /* _LINUX_VIRTIO_BALLOON_H */
-- 
1.8.3.1
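To make the header's semantics concrete, here is a sketch of how a consumer of this layout could walk the bitmap: bit 0 describes start_pfn, and each set bit maps to a guest address via page_shift. This is not host code from the series; the function name is made up and endianness conversion is omitted for simplicity.

```c
#include <stddef.h>
#include <stdint.h>

/* Walk a balloon page bitmap: for each set bit, compute the guest
 * address it describes. start_pfn is the pfn of bit 0 and page_shift
 * the page size shift, as carried in struct balloon_bmap_hdr.
 * Returns the number of set bits; the first max_addrs addresses are
 * stored into addrs. */
static size_t bitmap_to_addrs(uint64_t start_pfn, uint16_t page_shift,
			      const uint8_t *bmap, uint64_t bmap_len,
			      uint64_t *addrs, size_t max_addrs)
{
	size_t n = 0;

	for (uint64_t byte = 0; byte < bmap_len; byte++) {
		for (int bit = 0; bit < 8; bit++) {
			if (!(bmap[byte] & (1u << bit)))
				continue;
			if (n < max_addrs)
				addrs[n] = (start_pfn + byte * 8 + bit)
					   << page_shift;
			n++;
		}
	}
	return n;
}
```

For example, with start_pfn = 100, page_shift = 12 and the two bytes {0x03, 0x80}, the set bits describe pfns 100, 101 and 115.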
Liang Li
2016-Aug-08 06:35 UTC
[PATCH v3 kernel 3/7] mm: add a function to get the max pfn
Expose a function to get the max pfn, so it can be used by the virtio-balloon device driver. Simply including 'linux/bootmem.h' is not enough: if the device driver is built as a module, referring to max_pfn directly leads to a build failure.

Signed-off-by: Liang Li <liang.z.li at intel.com>
Cc: Andrew Morton <akpm at linux-foundation.org>
Cc: Mel Gorman <mgorman at techsingularity.net>
Cc: Michael S. Tsirkin <mst at redhat.com>
Cc: Paolo Bonzini <pbonzini at redhat.com>
Cc: Cornelia Huck <cornelia.huck at de.ibm.com>
Cc: Amit Shah <amit.shah at redhat.com>
Cc: Dave Hansen <dave.hansen at intel.com>
---
 include/linux/mm.h |  1 +
 mm/page_alloc.c    | 10 ++++++++++
 2 files changed, 11 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 08ed53e..5873057 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1788,6 +1788,7 @@ extern void free_area_init(unsigned long * zones_size);
 extern void free_area_init_node(int nid, unsigned long * zones_size,
 		unsigned long zone_start_pfn, unsigned long *zholes_size);
 extern void free_initmem(void);
+extern unsigned long get_max_pfn(void);
 
 /*
  * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index fb975ce..3373704 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4391,6 +4391,16 @@ void show_free_areas(unsigned int filter)
 	show_swap_cache_info();
 }
 
+/*
+ * The max_pfn can change because of memory hot plug, so it's only good
+ * as a hint. e.g. for sizing data structures.
+ */
+unsigned long get_max_pfn(void)
+{
+	return max_pfn;
+}
+EXPORT_SYMBOL(get_max_pfn);
+
 static void zoneref_set_zone(struct zone *zone, struct zoneref *zoneref)
 {
 	zoneref->zone = zone;
-- 
1.8.3.1
Liang Li
2016-Aug-08 06:35 UTC
[PATCH v3 kernel 4/7] virtio-balloon: speed up inflate/deflate process
The current virtio-balloon implementation is not very efficient. The time spent on the different stages of inflating the balloon to 7GB of an 8GB idle guest breaks down as:

a. allocating pages (6.5%)
b. sending PFNs to host (68.3%)
c. address translation (6.1%)
d. madvise (19%)

It takes about 4126ms for the inflating process to complete. Profiling shows that the bottlenecks are stages b and d. Using a bitmap to send the page info instead of the PFNs reduces the overhead of stage b quite a lot. Furthermore, the address translation and madvise() can then be done on a bulk of RAM pages instead of page by page, so the overhead of stages c and d is also reduced a lot.

This patch is the kernel-side implementation, which speeds up the inflating & deflating process by adding a new feature to the virtio-balloon device. With this new feature, inflating the balloon to 7GB of an 8GB idle guest only takes 590ms, a performance improvement of about 85%.

TODO: optimize stage a by allocating/freeing a chunk of pages instead of a single page at a time.

Signed-off-by: Liang Li <liang.z.li at intel.com>
Suggested-by: Michael S. Tsirkin <mst at redhat.com>
Cc: Michael S. Tsirkin <mst at redhat.com>
Cc: Paolo Bonzini <pbonzini at redhat.com>
Cc: Cornelia Huck <cornelia.huck at de.ibm.com>
Cc: Amit Shah <amit.shah at redhat.com>
Cc: Dave Hansen <dave.hansen at intel.com>
---
 drivers/virtio/virtio_balloon.c | 233 +++++++++++++++++++++++++++++++++++-----
 1 file changed, 209 insertions(+), 24 deletions(-)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index 59ffe5a..c31839c 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -42,6 +42,10 @@
 #define OOM_VBALLOON_DEFAULT_PAGES 256
 #define VIRTBALLOON_OOM_NOTIFY_PRIORITY 80
 
+#define BALLOON_BMAP_SIZE	(8 * PAGE_SIZE)
+#define PFNS_PER_BMAP		(BALLOON_BMAP_SIZE * BITS_PER_BYTE)
+#define BALLOON_BMAP_COUNT	32
+
 static int oom_pages = OOM_VBALLOON_DEFAULT_PAGES;
 module_param(oom_pages, int, S_IRUSR | S_IWUSR);
 MODULE_PARM_DESC(oom_pages, "pages to free on OOM");
@@ -67,6 +71,13 @@ struct virtio_balloon {
 	/* Number of balloon pages we've told the Host we're not using. */
 	unsigned int num_pages;
+	/* Pointer of the bitmap header. */
+	void *bmap_hdr;
+	/* Bitmap and bitmap count used to tell the host the pages */
+	unsigned long *page_bitmap[BALLOON_BMAP_COUNT];
+	unsigned int nr_page_bmap;
+	/* Used to record the processed pfn range */
+	unsigned long min_pfn, max_pfn, start_pfn, end_pfn;
 	/*
 	 * The pages we've told the Host we're not using are enqueued
 	 * at vb_dev_info->pages list.
@@ -110,16 +121,66 @@ static void balloon_ack(struct virtqueue *vq)
 	wake_up(&vb->acked);
 }
 
+static inline void init_pfn_range(struct virtio_balloon *vb)
+{
+	vb->min_pfn = ULONG_MAX;
+	vb->max_pfn = 0;
+}
+
+static inline void update_pfn_range(struct virtio_balloon *vb,
+				    struct page *page)
+{
+	unsigned long balloon_pfn = page_to_balloon_pfn(page);
+
+	if (balloon_pfn < vb->min_pfn)
+		vb->min_pfn = balloon_pfn;
+	if (balloon_pfn > vb->max_pfn)
+		vb->max_pfn = balloon_pfn;
+}
+
 static void tell_host(struct virtio_balloon *vb, struct virtqueue *vq)
 {
-	struct scatterlist sg;
-	unsigned int len;
+	struct scatterlist sg, sg2[BALLOON_BMAP_COUNT + 1];
+	unsigned int len, i;
+
+	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_PAGE_BITMAP)) {
+		struct balloon_bmap_hdr *hdr = vb->bmap_hdr;
+		unsigned long bmap_len;
+		int nr_pfn, nr_used_bmap, nr_buf;
+
+		nr_pfn = vb->end_pfn - vb->start_pfn + 1;
+		nr_pfn = roundup(nr_pfn, BITS_PER_LONG);
+		nr_used_bmap = nr_pfn / PFNS_PER_BMAP;
+		bmap_len = nr_pfn / BITS_PER_BYTE;
+		nr_buf = nr_used_bmap + 1;
+
+		/* cmd, reserved and req_id are init to 0, unused here */
+		hdr->page_shift = cpu_to_virtio16(vb->vdev, PAGE_SHIFT);
+		hdr->start_pfn = cpu_to_virtio64(vb->vdev, vb->start_pfn);
+		hdr->bmap_len = cpu_to_virtio64(vb->vdev, bmap_len);
+		sg_init_table(sg2, nr_buf);
+		sg_set_buf(&sg2[0], hdr, sizeof(struct balloon_bmap_hdr));
+		for (i = 0; i < nr_used_bmap; i++) {
+			unsigned int buf_len = BALLOON_BMAP_SIZE;
+
+			if (i + 1 == nr_used_bmap)
+				buf_len = bmap_len - BALLOON_BMAP_SIZE * i;
+			sg_set_buf(&sg2[i + 1], vb->page_bitmap[i], buf_len);
+		}
 
-	sg_init_one(&sg, vb->pfns, sizeof(vb->pfns[0]) * vb->num_pfns);
+		while (vq->num_free < nr_buf)
+			msleep(2);
+		if (virtqueue_add_outbuf(vq, sg2, nr_buf, vb, GFP_KERNEL) == 0)
+			virtqueue_kick(vq);
 
-	/* We should always be able to add one buffer to an empty queue. */
-	virtqueue_add_outbuf(vq, &sg, 1, vb, GFP_KERNEL);
-	virtqueue_kick(vq);
+	} else {
+		sg_init_one(&sg, vb->pfns, sizeof(vb->pfns[0]) * vb->num_pfns);
+
+		/* We should always be able to add one buffer to an empty
+		 * queue. */
+		virtqueue_add_outbuf(vq, &sg, 1, vb, GFP_KERNEL);
+		virtqueue_kick(vq);
+	}
 
 	/* When host has read buffer, this completes via balloon_ack */
 	wait_event(vb->acked, virtqueue_get_buf(vq, &len));
@@ -138,13 +199,93 @@ static void set_page_pfns(struct virtio_balloon *vb,
 		page_to_balloon_pfn(page) + i);
 }
 
-static unsigned fill_balloon(struct virtio_balloon *vb, size_t num)
+static void extend_page_bitmap(struct virtio_balloon *vb)
+{
+	int i;
+	unsigned long bmap_len, bmap_count;
+
+	bmap_len = ALIGN(get_max_pfn(), BITS_PER_LONG) / BITS_PER_BYTE;
+	bmap_count = bmap_len / BALLOON_BMAP_SIZE;
+	if (bmap_len % BALLOON_BMAP_SIZE)
+		bmap_count++;
+	if (bmap_count > BALLOON_BMAP_COUNT)
+		bmap_count = BALLOON_BMAP_COUNT;
+
+	for (i = 1; i < bmap_count; i++) {
+		vb->page_bitmap[i] = kmalloc(BALLOON_BMAP_SIZE, GFP_ATOMIC);
+		if (vb->page_bitmap[i])
+			vb->nr_page_bmap++;
+		else
+			break;
+	}
+}
+
+static void kfree_page_bitmap(struct virtio_balloon *vb)
+{
+	int i;
+
+	for (i = 0; i < vb->nr_page_bmap; i++)
+		kfree(vb->page_bitmap[i]);
+}
+
+static void clear_page_bitmap(struct virtio_balloon *vb)
+{
+	int i;
+
+	for (i = 0; i < vb->nr_page_bmap; i++)
+		memset(vb->page_bitmap[i], 0, BALLOON_BMAP_SIZE);
+}
+
+static void set_page_bitmap(struct virtio_balloon *vb,
+			    struct list_head *pages, struct virtqueue *vq)
+{
+	unsigned long pfn, pfn_limit;
+	struct page *page;
+	bool found;
+	int bmap_idx;
+
+	vb->min_pfn = rounddown(vb->min_pfn, BITS_PER_LONG);
+	vb->max_pfn = roundup(vb->max_pfn, BITS_PER_LONG);
+	pfn_limit = PFNS_PER_BMAP * vb->nr_page_bmap;
+
+	for (pfn = vb->min_pfn; pfn < vb->max_pfn; pfn += pfn_limit) {
+		unsigned long end_pfn;
+
+		clear_page_bitmap(vb);
+		vb->start_pfn = pfn;
+		end_pfn = pfn;
+		found = false;
+		list_for_each_entry(page, pages, lru) {
+			unsigned long pos, balloon_pfn;
+
+			balloon_pfn = page_to_balloon_pfn(page);
+			if (balloon_pfn < pfn || balloon_pfn >= pfn + pfn_limit)
+				continue;
+			bmap_idx = (balloon_pfn - pfn) / PFNS_PER_BMAP;
+			pos = (balloon_pfn - pfn) % PFNS_PER_BMAP;
+			set_bit(pos, vb->page_bitmap[bmap_idx]);
+			if (balloon_pfn > end_pfn)
+				end_pfn = balloon_pfn;
+			found = true;
+		}
+		if (found) {
+			vb->end_pfn = end_pfn;
+			tell_host(vb, vq);
+		}
+	}
+}
+
+static unsigned int fill_balloon(struct virtio_balloon *vb, size_t num,
+				 bool use_bmap)
 {
 	struct balloon_dev_info *vb_dev_info = &vb->vb_dev_info;
-	unsigned num_allocated_pages;
+	unsigned int num_allocated_pages;
 
-	/* We can only do one array worth at a time. */
-	num = min(num, ARRAY_SIZE(vb->pfns));
+	if (use_bmap)
+		init_pfn_range(vb);
+	else
+		/* We can only do one array worth at a time. */
+		num = min(num, ARRAY_SIZE(vb->pfns));
 
 	mutex_lock(&vb->balloon_lock);
 	for (vb->num_pfns = 0; vb->num_pfns < num;
@@ -159,7 +300,10 @@ static unsigned fill_balloon(struct virtio_balloon *vb, size_t num)
 			msleep(200);
 			break;
 		}
-		set_page_pfns(vb, vb->pfns + vb->num_pfns, page);
+		if (use_bmap)
+			update_pfn_range(vb, page);
+		else
+			set_page_pfns(vb, vb->pfns + vb->num_pfns, page);
 		vb->num_pages += VIRTIO_BALLOON_PAGES_PER_PAGE;
 		if (!virtio_has_feature(vb->vdev,
@@ -168,8 +312,13 @@ static unsigned fill_balloon(struct virtio_balloon *vb, size_t num)
 
 	num_allocated_pages = vb->num_pfns;
 	/* Did we get any? */
-	if (vb->num_pfns != 0)
-		tell_host(vb, vb->inflate_vq);
+	if (vb->num_pfns != 0) {
+		if (use_bmap)
+			set_page_bitmap(vb, &vb_dev_info->pages,
+					vb->inflate_vq);
+		else
+			tell_host(vb, vb->inflate_vq);
+	}
 	mutex_unlock(&vb->balloon_lock);
 
 	return num_allocated_pages;
@@ -189,15 +338,19 @@ static void release_pages_balloon(struct virtio_balloon *vb,
 	}
 }
 
-static unsigned leak_balloon(struct virtio_balloon *vb, size_t num)
+static unsigned int leak_balloon(struct virtio_balloon *vb, size_t num,
+				 bool use_bmap)
 {
-	unsigned num_freed_pages;
+	unsigned int num_freed_pages;
 	struct page *page;
 	struct balloon_dev_info *vb_dev_info = &vb->vb_dev_info;
 	LIST_HEAD(pages);
 
-	/* We can only do one array worth at a time. */
-	num = min(num, ARRAY_SIZE(vb->pfns));
+	if (use_bmap)
+		init_pfn_range(vb);
+	else
+		/* We can only do one array worth at a time. */
+		num = min(num, ARRAY_SIZE(vb->pfns));
 
 	mutex_lock(&vb->balloon_lock);
 	/* We can't release more pages than taken */
@@ -207,7 +360,10 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num)
 		page = balloon_page_dequeue(vb_dev_info);
 		if (!page)
 			break;
-		set_page_pfns(vb, vb->pfns + vb->num_pfns, page);
+		if (use_bmap)
+			update_pfn_range(vb, page);
+		else
+			set_page_pfns(vb, vb->pfns + vb->num_pfns, page);
 		list_add(&page->lru, &pages);
 		vb->num_pages -= VIRTIO_BALLOON_PAGES_PER_PAGE;
 	}
@@ -218,8 +374,14 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num)
 	 * virtio_has_feature(vdev, VIRTIO_BALLOON_F_MUST_TELL_HOST);
 	 * is true, we *have* to do it in this order
 	 */
-	if (vb->num_pfns != 0)
-		tell_host(vb, vb->deflate_vq);
+	if (vb->num_pfns != 0) {
+		if (use_bmap)
+			set_page_bitmap(vb, &pages, vb->deflate_vq);
+		else
+			tell_host(vb, vb->deflate_vq);
+
+		release_pages_balloon(vb, &pages);
+	}
 	release_pages_balloon(vb, &pages);
 	mutex_unlock(&vb->balloon_lock);
 	return num_freed_pages;
@@ -354,13 +516,15 @@ static int virtballoon_oom_notify(struct notifier_block *self,
 	struct virtio_balloon *vb;
 	unsigned long *freed;
 	unsigned num_freed_pages;
+	bool use_bmap;
 
 	vb = container_of(self, struct virtio_balloon, nb);
 	if (!virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM))
 		return NOTIFY_OK;
 
 	freed = parm;
-	num_freed_pages = leak_balloon(vb, oom_pages);
+	use_bmap = virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_PAGE_BITMAP);
+	num_freed_pages = leak_balloon(vb, oom_pages, use_bmap);
 	update_balloon_size(vb);
 	*freed += num_freed_pages;
 
@@ -380,15 +544,19 @@ static void update_balloon_size_func(struct work_struct *work)
 {
 	struct virtio_balloon *vb;
 	s64 diff;
+	bool use_bmap;
 
 	vb = container_of(work, struct virtio_balloon,
 			  update_balloon_size_work);
 	diff = towards_target(vb);
 
+	use_bmap = virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_PAGE_BITMAP);
+	if (use_bmap && diff && vb->nr_page_bmap == 1)
+		extend_page_bitmap(vb);
 	if (diff > 0)
-		diff -= fill_balloon(vb, diff);
+		diff -= fill_balloon(vb, diff, use_bmap);
 	else if (diff < 0)
-		diff += leak_balloon(vb, -diff);
+		diff += leak_balloon(vb, -diff, use_bmap);
 	update_balloon_size(vb);
 
 	if (diff)
@@ -533,6 +701,17 @@ static int virtballoon_probe(struct virtio_device *vdev)
 	spin_lock_init(&vb->stop_update_lock);
 	vb->stop_update = false;
 	vb->num_pages = 0;
+	vb->bmap_hdr = kzalloc(sizeof(struct balloon_bmap_hdr), GFP_KERNEL);
+	/* Clear the feature bit if memory allocation fails */
+	if (!vb->bmap_hdr)
+		__virtio_clear_bit(vdev, VIRTIO_BALLOON_F_PAGE_BITMAP);
+	else {
+		vb->page_bitmap[0] = kmalloc(BALLOON_BMAP_SIZE, GFP_KERNEL);
+		if (!vb->page_bitmap[0])
+			__virtio_clear_bit(vdev, VIRTIO_BALLOON_F_PAGE_BITMAP);
+		else
+			vb->nr_page_bmap = 1;
+	}
 	mutex_init(&vb->balloon_lock);
 	init_waitqueue_head(&vb->acked);
 	vb->vdev = vdev;
@@ -583,9 +762,12 @@ out:
 
 static void remove_common(struct virtio_balloon *vb)
 {
+	bool use_bmap;
+
+	use_bmap = virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_PAGE_BITMAP);
 	/* There might be pages left in the balloon: free them. */
 	while (vb->num_pages)
-		leak_balloon(vb, vb->num_pages);
+		leak_balloon(vb, vb->num_pages, use_bmap);
 	update_balloon_size(vb);
 
 	/* Now we reset the device so we can clean up the queues. */
@@ -609,6 +791,8 @@ static void virtballoon_remove(struct virtio_device *vdev)
 	remove_common(vb);
 	if (vb->vb_dev_info.inode)
 		iput(vb->vb_dev_info.inode);
+	kfree_page_bitmap(vb);
+	kfree(vb->bmap_hdr);
 	kfree(vb);
 }
 
@@ -647,6 +831,7 @@ static unsigned int features[] = {
 	VIRTIO_BALLOON_F_MUST_TELL_HOST,
 	VIRTIO_BALLOON_F_STATS_VQ,
 	VIRTIO_BALLOON_F_DEFLATE_ON_OOM,
+	VIRTIO_BALLOON_F_PAGE_BITMAP,
 };
 
 static struct virtio_driver virtio_balloon_driver = {
-- 
1.8.3.1
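The scatter/gather indexing in set_page_bitmap() above can be modeled in user space: within a window of nr_page_bmap small bitmaps starting at some window base, a balloon pfn maps to a bitmap number and a bit position. This sketch is not part of the series; the struct and function names are made up, and PFNS_PER_BMAP (8 * PAGE_SIZE * BITS_PER_BYTE = 262144 with 4KB pages) is passed as a parameter so the math is visible.

```c
/* One slot in the scatter/gather page bitmap. */
struct bmap_slot {
	unsigned long idx;	/* which small bitmap */
	unsigned long pos;	/* bit position inside it */
};

/* Mirror of the index math in set_page_bitmap():
 *   bmap_idx = (balloon_pfn - window_start) / PFNS_PER_BMAP
 *   pos      = (balloon_pfn - window_start) % PFNS_PER_BMAP */
static struct bmap_slot pfn_to_slot(unsigned long balloon_pfn,
				    unsigned long window_start,
				    unsigned long pfns_per_bmap)
{
	struct bmap_slot s;

	s.idx = (balloon_pfn - window_start) / pfns_per_bmap;
	s.pos = (balloon_pfn - window_start) % pfns_per_bmap;
	return s;
}
```

With the patch's constants, pfn 262149 in a window starting at 0 lands in the second small bitmap (index 1) at bit 5, which is why a run of pages spanning more than PFNS_PER_BMAP pfns needs multiple sg entries.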
Liang Li
2016-Aug-08 06:35 UTC
[PATCH v3 kernel 5/7] mm: add the related functions to get unused page
Save the unused page info into a page bitmap. The virtio-balloon driver calls this new API to get the unused page bitmap and sends the bitmap to the hypervisor (QEMU) to speed up live migration. While the bitmap is being sent, some of the pages may be modified and are no longer free; this inaccuracy can be corrected by the dirty page logging mechanism.

Signed-off-by: Liang Li <liang.z.li at intel.com>
Cc: Andrew Morton <akpm at linux-foundation.org>
Cc: Mel Gorman <mgorman at techsingularity.net>
Cc: Michael S. Tsirkin <mst at redhat.com>
Cc: Paolo Bonzini <pbonzini at redhat.com>
Cc: Cornelia Huck <cornelia.huck at de.ibm.com>
Cc: Amit Shah <amit.shah at redhat.com>
Cc: Dave Hansen <dave.hansen at intel.com>
---
 include/linux/mm.h |  2 ++
 mm/page_alloc.c    | 84 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 86 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 5873057..d181864 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1789,6 +1789,8 @@ extern void free_area_init_node(int nid, unsigned long * zones_size,
 		unsigned long zone_start_pfn, unsigned long *zholes_size);
 extern void free_initmem(void);
 extern unsigned long get_max_pfn(void);
+extern int get_unused_pages(unsigned long start_pfn, unsigned long end_pfn,
+	unsigned long *bitmap[], unsigned long len, unsigned int nr_bmap);
 
 /*
  * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3373704..1b5419d 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4401,6 +4401,90 @@ unsigned long get_max_pfn(void)
 }
 EXPORT_SYMBOL(get_max_pfn);
 
+static void mark_unused_pages_bitmap(struct zone *zone,
+		unsigned long start_pfn, unsigned long end_pfn,
+		unsigned long *bitmap[], unsigned long bits,
+		unsigned int nr_bmap)
+{
+	unsigned long pfn, flags, nr_pg, pos, *bmap;
+	unsigned int order, i, t, bmap_idx;
+	struct list_head *curr;
+
+	if (zone_is_empty(zone))
+		return;
+
+	end_pfn = min(start_pfn + nr_bmap * bits, end_pfn);
+	spin_lock_irqsave(&zone->lock, flags);
+
+	for_each_migratetype_order(order, t) {
+		list_for_each(curr, &zone->free_area[order].free_list[t]) {
+			pfn = page_to_pfn(list_entry(curr, struct page, lru));
+			if (pfn < start_pfn || pfn >= end_pfn)
+				continue;
+			nr_pg = 1UL << order;
+			if (pfn + nr_pg > end_pfn)
+				nr_pg = end_pfn - pfn;
+			bmap_idx = (pfn - start_pfn) / bits;
+			if (bmap_idx == (pfn + nr_pg - start_pfn) / bits) {
+				bmap = bitmap[bmap_idx];
+				pos = (pfn - start_pfn) % bits;
+				bitmap_set(bmap, pos, nr_pg);
+			} else
+				for (i = 0; i < nr_pg; i++) {
+					bmap_idx = pos / bits;
+					bmap = bitmap[bmap_idx];
+					pos = pos % bits;
+					bitmap_set(bmap, pos, 1);
+				}
+		}
+	}
+
+	spin_unlock_irqrestore(&zone->lock, flags);
+}
+
+/*
+ * During live migration, a page is always discardable unless its
+ * content is needed by the system.
+ * get_unused_pages provides an API to get the unused pages; these
+ * unused pages can be discarded if there is no modification since
+ * the request. Some other mechanism, like dirty page logging,
+ * can be used to track the modifications.
+ *
+ * This function scans the free page list to get the unused pages
+ * whose pfns range from start_pfn to end_pfn, and sets the
+ * corresponding bit in the bitmap if an unused page is found.
+ *
+ * Allocating a large bitmap may fail because of fragmentation;
+ * instead of using a single bitmap, we use a scatter/gather bitmap.
+ * The 'bitmap' is the start address of an array which contains
+ * 'nr_bmap' separate small bitmaps, each bitmap contains 'bits' bits.
+ *
+ * return -1 if parameters are invalid
+ * return 0 when end_pfn >= max_pfn
+ * return 1 when end_pfn < max_pfn
+ */
+int get_unused_pages(unsigned long start_pfn, unsigned long end_pfn,
+	unsigned long *bitmap[], unsigned long bits, unsigned int nr_bmap)
+{
+	struct zone *zone;
+	int ret = 0;
+
+	if (bitmap == NULL || *bitmap == NULL || nr_bmap == 0 ||
+	    bits == 0 || start_pfn > end_pfn)
+		return -1;
+	if (end_pfn < max_pfn)
+		ret = 1;
+	if (end_pfn >= max_pfn)
+		ret = 0;
+
+	for_each_populated_zone(zone)
+		mark_unused_pages_bitmap(zone, start_pfn, end_pfn, bitmap,
+					 bits, nr_bmap);
+
+	return ret;
+}
+EXPORT_SYMBOL(get_unused_pages);
+
 static void zoneref_set_zone(struct zone *zone, struct zoneref *zoneref)
 {
 	zoneref->zone = zone;
-- 
1.8.3.1
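A free-list block of order k covers 1&lt;&lt;k consecutive pfns, and such a block may straddle the boundary between two small bitmaps in the scatter/gather array. Here is a simplified user-space model of that marking step (not the kernel code; names are made up, and for clarity every page bit is set individually, so boundary-crossing blocks are handled uniformly):

```c
#define BITS 64	/* bits per small bitmap; the 'bits' parameter in the patch */

/* Mark a free block of nr_pg pages starting at pfn into an array of
 * small bitmaps (each BITS bits) that together cover the pfn range
 * starting at start_pfn. Page i of the block lands in bitmap
 * off / BITS at bit off % BITS, where off = pfn + i - start_pfn. */
static void mark_block(unsigned long bitmap[], unsigned long start_pfn,
		       unsigned long pfn, unsigned long nr_pg)
{
	unsigned long i, off;

	for (i = 0; i < nr_pg; i++) {
		off = pfn + i - start_pfn;
		bitmap[off / BITS] |= 1UL << (off % BITS);
	}
}
```

For example, a 4-page block starting at pfn 62 (with start_pfn 0 and 64-bit small bitmaps) sets the top two bits of the first bitmap and the bottom two bits of the second.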
Liang Li
2016-Aug-08 06:35 UTC
[PATCH v3 kernel 6/7] virtio-balloon: define feature bit and head for misc virt queue
Define a new feature bit which supports a new virtual queue. This new virtual queue is for information exchange between the hypervisor and the guest. The hypervisor can use this virtual queue to request that the guest perform some operations, e.g. dropping the page cache, synchronizing the file system, etc. The hypervisor can also get some of the guest's runtime information through this virtual queue, e.g. the guest's unused page information, which can be used for live migration optimization.

Signed-off-by: Liang Li <liang.z.li at intel.com>
Cc: Michael S. Tsirkin <mst at redhat.com>
Cc: Paolo Bonzini <pbonzini at redhat.com>
Cc: Cornelia Huck <cornelia.huck at de.ibm.com>
Cc: Amit Shah <amit.shah at redhat.com>
Cc: Dave Hansen <dave.hansen at intel.com>
---
 include/uapi/linux/virtio_balloon.h | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/include/uapi/linux/virtio_balloon.h b/include/uapi/linux/virtio_balloon.h
index d3b182a..3a9d633 100644
--- a/include/uapi/linux/virtio_balloon.h
+++ b/include/uapi/linux/virtio_balloon.h
@@ -35,6 +35,7 @@
 #define VIRTIO_BALLOON_F_STATS_VQ	1 /* Memory Stats virtqueue */
 #define VIRTIO_BALLOON_F_DEFLATE_ON_OOM	2 /* Deflate balloon on OOM */
 #define VIRTIO_BALLOON_F_PAGE_BITMAP	3 /* Send page info with bitmap */
+#define VIRTIO_BALLOON_F_MISC_VQ	4 /* Misc info virtqueue */
 
 /* Size of a PFN in the balloon interface. */
 #define VIRTIO_BALLOON_PFN_SHIFT 12
@@ -101,4 +102,25 @@ struct balloon_bmap_hdr {
 	__virtio64 bmap_len;
 };
 
+enum balloon_req_id {
+	/* Get unused pages information */
+	BALLOON_GET_UNUSED_PAGES,
+};
+
+enum balloon_flag {
+	/* Have more data for a request */
+	BALLOON_FLAG_CONT,
+	/* No more data for a request */
+	BALLOON_FLAG_DONE,
+};
+
+struct balloon_req_hdr {
+	/* Used to distinguish different request */
+	__virtio16 cmd;
+	/* Reserved */
+	__virtio16 reserved[3];
+	/* Request parameter */
+	__virtio64 param;
+};
+
 #endif /* _LINUX_VIRTIO_BALLOON_H */
-- 
1.8.3.1
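The CONT/DONE flags imply a simple chunked response protocol: the guest answers one request with several bitmap chunks, every chunk but the last flagged CONT. A rough user-space model of that flow (not code from the series; names and the pass-size parameter are illustrative):

```c
/* Stand-ins for the balloon_flag enum defined in the patch. */
enum { FLAG_CONT, FLAG_DONE };

/* Model of the misc-vq response chunking: the guest walks the pfn
 * space in windows of pfns_per_pass; every response except the last
 * carries FLAG_CONT, the last carries FLAG_DONE. Returns the number
 * of responses and stores the flag of the final one. */
static unsigned int count_chunks(unsigned long max_pfn,
				 unsigned long pfns_per_pass,
				 int *last_flag)
{
	unsigned long pfn = 0;
	unsigned int n = 0;

	do {
		*last_flag = (pfn + pfns_per_pass >= max_pfn) ?
			     FLAG_DONE : FLAG_CONT;
		n++;
		pfn += pfns_per_pass;
	} while (pfn < max_pfn);
	return n;
}
```

A guest with max_pfn = 1000 and a 400-pfn window would send three responses: CONT, CONT, DONE.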
Liang Li
2016-Aug-08 06:35 UTC
[PATCH v3 kernel 7/7] virtio-balloon: tell host vm's unused page info
Support the request for the VM's unused page information; respond with a page bitmap. QEMU can use this bitmap together with the dirty page logging mechanism to skip the transmission of these unused pages, which is very helpful for speeding up the live migration process.

Signed-off-by: Liang Li <liang.z.li at intel.com>
Cc: Michael S. Tsirkin <mst at redhat.com>
Cc: Paolo Bonzini <pbonzini at redhat.com>
Cc: Cornelia Huck <cornelia.huck at de.ibm.com>
Cc: Amit Shah <amit.shah at redhat.com>
Cc: Dave Hansen <dave.hansen at intel.com>
---
 drivers/virtio/virtio_balloon.c | 143 +++++++++++++++++++++++++++++++++++++---
 1 file changed, 134 insertions(+), 9 deletions(-)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index c31839c..f10bb8b 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -56,7 +56,7 @@ static struct vfsmount *balloon_mnt;
 
 struct virtio_balloon {
 	struct virtio_device *vdev;
-	struct virtqueue *inflate_vq, *deflate_vq, *stats_vq;
+	struct virtqueue *inflate_vq, *deflate_vq, *stats_vq, *misc_vq;
 
 	/* The balloon servicing is delegated to a freezable workqueue. */
 	struct work_struct update_balloon_stats_work;
@@ -78,6 +78,8 @@ struct virtio_balloon {
 	unsigned int nr_page_bmap;
 	/* Used to record the processed pfn range */
 	unsigned long min_pfn, max_pfn, start_pfn, end_pfn;
+	/* Request header */
+	struct balloon_req_hdr req_hdr;
 	/*
 	 * The pages we've told the Host we're not using are enqueued
 	 * at vb_dev_info->pages list.
@@ -423,6 +425,78 @@ static void update_balloon_stats(struct virtio_balloon *vb)
 		pages_to_bytes(available));
 }
 
+static void send_unused_pages_info(struct virtio_balloon *vb,
+				   unsigned long req_id)
+{
+	struct scatterlist sg_in, sg_out[BALLOON_BMAP_COUNT + 1];
+	unsigned long pfn = 0, bmap_len, pfn_limit, last_pfn, nr_pfn;
+	struct virtqueue *vq = vb->misc_vq;
+	struct balloon_bmap_hdr *hdr = vb->bmap_hdr;
+	int ret = 1, nr_buf, used_nr_bmap = 0, i;
+
+	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_PAGE_BITMAP) &&
+	    vb->nr_page_bmap == 1)
+		extend_page_bitmap(vb);
+
+	pfn_limit = PFNS_PER_BMAP * vb->nr_page_bmap;
+	mutex_lock(&vb->balloon_lock);
+	last_pfn = get_max_pfn();
+
+	while (ret) {
+		clear_page_bitmap(vb);
+		ret = get_unused_pages(pfn, pfn + pfn_limit, vb->page_bitmap,
+				       PFNS_PER_BMAP, vb->nr_page_bmap);
+		if (ret < 0)
+			break;
+		hdr->cmd = cpu_to_virtio16(vb->vdev, BALLOON_GET_UNUSED_PAGES);
+		hdr->page_shift = cpu_to_virtio16(vb->vdev, PAGE_SHIFT);
+		hdr->req_id = cpu_to_virtio64(vb->vdev, req_id);
+		hdr->start_pfn = cpu_to_virtio64(vb->vdev, pfn);
+		bmap_len = BALLOON_BMAP_SIZE * vb->nr_page_bmap;
+
+		if (!ret) {
+			hdr->flag = cpu_to_virtio16(vb->vdev,
+						    BALLOON_FLAG_DONE);
+			nr_pfn = last_pfn - pfn;
+			used_nr_bmap = nr_pfn / PFNS_PER_BMAP;
+			if (nr_pfn % PFNS_PER_BMAP)
+				used_nr_bmap++;
+			bmap_len = nr_pfn / BITS_PER_BYTE;
+		} else {
+			hdr->flag = cpu_to_virtio16(vb->vdev,
+						    BALLOON_FLAG_CONT);
+			used_nr_bmap = vb->nr_page_bmap;
+		}
+		hdr->bmap_len = cpu_to_virtio64(vb->vdev, bmap_len);
+		nr_buf = used_nr_bmap + 1;
+		sg_init_table(sg_out, nr_buf);
+		sg_set_buf(&sg_out[0], hdr, sizeof(struct balloon_bmap_hdr));
+		for (i = 0; i < used_nr_bmap; i++) {
+			unsigned int buf_len = BALLOON_BMAP_SIZE;
+
+			if (i + 1 == used_nr_bmap)
+				buf_len = bmap_len - BALLOON_BMAP_SIZE * i;
+			sg_set_buf(&sg_out[i + 1], vb->page_bitmap[i], buf_len);
+		}
+
+		while (vq->num_free < nr_buf)
+			msleep(2);
+		if (virtqueue_add_outbuf(vq, sg_out, nr_buf, vb,
+					 GFP_KERNEL) == 0) {
+			virtqueue_kick(vq);
+			while (!virtqueue_get_buf(vq, &i)
+			       && !virtqueue_is_broken(vq))
+				cpu_relax();
+		}
+		pfn += pfn_limit;
+	}
+
+	mutex_unlock(&vb->balloon_lock);
+	sg_init_one(&sg_in, &vb->req_hdr, sizeof(vb->req_hdr));
+	virtqueue_add_inbuf(vq, &sg_in, 1, &vb->req_hdr, GFP_KERNEL);
+	virtqueue_kick(vq);
+}
+
 /*
  * While most virtqueues communicate guest-initiated requests to the hypervisor,
  * the stats queue operates in reverse. The driver initializes the virtqueue
@@ -563,18 +637,56 @@ static void update_balloon_size_func(struct work_struct *work)
 	queue_work(system_freezable_wq, work);
 }
 
+static void misc_handle_rq(struct virtio_balloon *vb)
+{
+	struct balloon_req_hdr *ptr_hdr;
+	unsigned int len;
+
+	ptr_hdr = virtqueue_get_buf(vb->misc_vq, &len);
+	if (!ptr_hdr || len != sizeof(vb->req_hdr))
+		return;
+
+	switch (ptr_hdr->cmd) {
+	case BALLOON_GET_UNUSED_PAGES:
+		send_unused_pages_info(vb, ptr_hdr->param);
+		break;
+	default:
+		break;
+	}
+}
+
+static void misc_request(struct virtqueue *vq)
+{
+	struct virtio_balloon *vb = vq->vdev->priv;
+
+	misc_handle_rq(vb);
+}
+
 static int init_vqs(struct virtio_balloon *vb)
 {
-	struct virtqueue *vqs[3];
-	vq_callback_t *callbacks[] = { balloon_ack, balloon_ack, stats_request };
-	static const char * const names[] = { "inflate", "deflate", "stats" };
+	struct virtqueue *vqs[4];
+	vq_callback_t *callbacks[] = { balloon_ack, balloon_ack,
+				       stats_request, misc_request };
+	static const char * const names[] = { "inflate", "deflate", "stats",
+					      "misc" };
 	int err, nvqs;
 
 	/*
	 * We expect two virtqueues: inflate and deflate, and
	 * optionally stat.
	 */
-	nvqs = virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ) ? 3 : 2;
+	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_MISC_VQ))
+		nvqs = 4;
+	else if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ))
+		nvqs = 3;
+	else
+		nvqs = 2;
+
+	if (!virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) {
+		__virtio_clear_bit(vb->vdev, VIRTIO_BALLOON_F_PAGE_BITMAP);
+		__virtio_clear_bit(vb->vdev, VIRTIO_BALLOON_F_MISC_VQ);
+	}
+
 	err = vb->vdev->config->find_vqs(vb->vdev, nvqs, vqs, callbacks, names);
 	if (err)
 		return err;
@@ -595,6 +707,16 @@ static int init_vqs(struct virtio_balloon *vb)
 			BUG();
 		virtqueue_kick(vb->stats_vq);
 	}
+	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_MISC_VQ)) {
+		struct scatterlist sg_in;
+
+		vb->misc_vq = vqs[3];
+		sg_init_one(&sg_in, &vb->req_hdr, sizeof(vb->req_hdr));
+		if (virtqueue_add_inbuf(vb->misc_vq, &sg_in, 1,
+					&vb->req_hdr, GFP_KERNEL) < 0)
+			BUG();
+		virtqueue_kick(vb->misc_vq);
+	}
 	return 0;
 }
 
@@ -703,13 +825,15 @@ static int virtballoon_probe(struct virtio_device *vdev)
 	vb->num_pages = 0;
 	vb->bmap_hdr = kzalloc(sizeof(struct balloon_bmap_hdr), GFP_KERNEL);
 	/* Clear the feature bit if memory allocation fails */
-	if (!vb->bmap_hdr)
+	if (!vb->bmap_hdr) {
 		__virtio_clear_bit(vdev, VIRTIO_BALLOON_F_PAGE_BITMAP);
-	else {
+		__virtio_clear_bit(vdev, VIRTIO_BALLOON_F_MISC_VQ);
+	} else {
 		vb->page_bitmap[0] = kmalloc(BALLOON_BMAP_SIZE, GFP_KERNEL);
-		if (!vb->page_bitmap[0])
+		if (!vb->page_bitmap[0]) {
 			__virtio_clear_bit(vdev, VIRTIO_BALLOON_F_PAGE_BITMAP);
-		else
+			__virtio_clear_bit(vdev, VIRTIO_BALLOON_F_MISC_VQ);
+		} else
 			vb->nr_page_bmap = 1;
 	}
 	mutex_init(&vb->balloon_lock);
@@ -832,6 +956,7 @@ static unsigned int features[] = {
 	VIRTIO_BALLOON_F_STATS_VQ,
 	VIRTIO_BALLOON_F_DEFLATE_ON_OOM,
 	VIRTIO_BALLOON_F_PAGE_BITMAP,
+	VIRTIO_BALLOON_F_MISC_VQ,
 };
 
 static struct virtio_driver virtio_balloon_driver = {
-- 
1.8.3.1
kbuild test robot
2016-Aug-08 08:17 UTC
[PATCH v3 kernel 4/7] virtio-balloon: speed up inflate/deflate process
Hi Liang,

[auto build test WARNING on linus/master]
[also build test WARNING on v4.8-rc1 next-20160805]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Liang-Li/Extend-virtio-balloon-for-fast-de-inflating-fast-live-migration/20160808-144551
config: s390-default_defconfig (attached as .config)
compiler: s390x-linux-gnu-gcc (Debian 5.4.0-6) 5.4.0 20160609
reproduce:
        wget https://git.kernel.org/cgit/linux/kernel/git/wfg/lkp-tests.git/plain/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        make.cross ARCH=s390

All warnings (new ones prefixed by >>):

   drivers/virtio/virtio_balloon.c: In function 'tell_host':
>> drivers/virtio/virtio_balloon.c:188:1: warning: the frame size of 1456 bytes is larger than 1024 bytes [-Wframe-larger-than=]
    }
    ^

vim +188 drivers/virtio/virtio_balloon.c

112d1263 Liang Li           2016-08-08  172  			msleep(2);
112d1263 Liang Li           2016-08-08  173  		if (virtqueue_add_outbuf(vq, sg2, nr_buf, vb, GFP_KERNEL) == 0)
112d1263 Liang Li           2016-08-08  174  			virtqueue_kick(vq);
6b35e407 Rusty Russell      2008-02-04  175  
112d1263 Liang Li           2016-08-08  176  	} else {
6b35e407 Rusty Russell      2008-02-04  177  		sg_init_one(&sg, vb->pfns, sizeof(vb->pfns[0]) * vb->num_pfns);
6b35e407 Rusty Russell      2008-02-04  178  
112d1263 Liang Li           2016-08-08  179  		/* We should always be able to add one buffer to an empty
112d1263 Liang Li           2016-08-08  180  		 * queue. */
4951cc90 Rusty Russell      2014-03-13  181  		virtqueue_add_outbuf(vq, &sg, 1, vb, GFP_KERNEL);
946cfe0e Michael S. Tsirkin 2010-04-12  182  		virtqueue_kick(vq);
112d1263 Liang Li           2016-08-08  183  	}
6b35e407 Rusty Russell      2008-02-04  184  
6b35e407 Rusty Russell      2008-02-04  185  	/* When host has read buffer, this completes via balloon_ack */
9c378abc Michael S. Tsirkin 2012-07-02  186  	wait_event(vb->acked, virtqueue_get_buf(vq, &len));
fd0e21c3 Petr Mladek        2016-01-25  187  
6b35e407 Rusty Russell      2008-02-04 @188  }
6b35e407 Rusty Russell      2008-02-04  189  
87c9403b Michael S. Tsirkin 2016-05-17  190  static void set_page_pfns(struct virtio_balloon *vb,
87c9403b Michael S. Tsirkin 2016-05-17  191  			  __virtio32 pfns[], struct page *page)
3ccc9372 Michael S. Tsirkin 2012-04-12  192  {
3ccc9372 Michael S. Tsirkin 2012-04-12  193  	unsigned int i;
3ccc9372 Michael S. Tsirkin 2012-04-12  194  
3ccc9372 Michael S. Tsirkin 2012-04-12  195  	/* Set balloon pfns pointing at this page.
3ccc9372 Michael S. Tsirkin 2012-04-12  196  	 * Note that the first pfn points at start of the page. */

:::::: The code at line 188 was first introduced by commit
:::::: 6b35e40767c6c1ac783330109ae8e0c09ea6bc82 virtio: balloon driver

:::::: TO: Rusty Russell <rusty at rustcorp.com.au>
:::::: CC: Rusty Russell <rusty at rustcorp.com.au>

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation
Dave Hansen
2016-Aug-08 16:15 UTC
[PATCH v3 kernel 0/7] Extend virtio-balloon for fast (de)inflating & fast live migration
On 08/07/2016 11:35 PM, Liang Li wrote:
> Dave Hansen suggested a new scheme to encode the data structure,
> because of additional complexity, it's not implemented in v3.

FWIW, I don't think it takes any additional complexity here, at least on
the guest implementation side. The thing I suggested would just mean
explicitly calling out that there was a single bitmap instead of implying
it in the ABI.

Do you think the scheme I suggested is the way to go?
Li, Liang Z
2016-Aug-09 02:52 UTC
[PATCH v3 kernel 0/7] Extend virtio-balloon for fast (de)inflating & fast live migration
> Subject: Re: [PATCH v3 kernel 0/7] Extend virtio-balloon for fast (de)inflating
> & fast live migration
>
> On 08/07/2016 11:35 PM, Liang Li wrote:
> > Dave Hansen suggested a new scheme to encode the data structure,
> > because of additional complexity, it's not implemented in v3.
>
> FWIW, I don't think it takes any additional complexity here, at least on
> the guest implementation side. The thing I suggested would just mean
> explicitly calling out that there was a single bitmap instead of implying
> it in the ABI.
>
> Do you think the scheme I suggested is the way to go?

Yes, I think so, and I will do that in a later version. In this v3, I just
wanted to solve the issue caused by the large page bitmap in v2.

Liang
Li, Liang Z
2016-Aug-18 01:05 UTC
[PATCH v3 kernel 0/7] Extend virtio-balloon for fast (de)inflating & fast live migration
Hi Michael,

Could you help to review this version when you have time?

Thanks!
Liang

> -----Original Message-----
> From: Li, Liang Z
> Sent: Monday, August 08, 2016 2:35 PM
> To: linux-kernel at vger.kernel.org
> Cc: virtualization at lists.linux-foundation.org; linux-mm at kvack.org;
> virtio-dev at lists.oasis-open.org; kvm at vger.kernel.org;
> qemu-devel at nongnu.org; quintela at redhat.com; dgilbert at redhat.com;
> Hansen, Dave; Li, Liang Z
> Subject: [PATCH v3 kernel 0/7] Extend virtio-balloon for fast (de)inflating &
> fast live migration
Li, Liang Z
2016-Aug-31 06:28 UTC
[PATCH v3 kernel 0/7] Extend virtio-balloon for fast (de)inflating & fast live migration
Hi Michael,

I know you are very busy. If you have time, could you help to take a look
at this patch set?

Thanks!
Liang

> -----Original Message-----
> From: Li, Liang Z
> Sent: Thursday, August 18, 2016 9:06 AM
> To: Michael S. Tsirkin
> Cc: virtualization at lists.linux-foundation.org; linux-mm at kvack.org;
> virtio-dev at lists.oasis-open.org; kvm at vger.kernel.org;
> qemu-devel at nongnu.org; quintela at redhat.com; dgilbert at redhat.com;
> Hansen, Dave; linux-kernel at vger.kernel.org
> Subject: RE: [PATCH v3 kernel 0/7] Extend virtio-balloon for fast (de)inflating
> & fast live migration
>
> Hi Michael,
>
> Could you help to review this version when you have time?
>
> Thanks!
> Liang
Wanpeng Li
2016-Sep-01 04:30 UTC
[PATCH v3 kernel 0/7] Extend virtio-balloon for fast (de)inflating & fast live migration
2016-08-08 14:35 GMT+08:00 Liang Li <liang.z.li at intel.com>:
> This patch set contains two parts of changes to the virtio-balloon.
>
> One is the change for speeding up the inflating & deflating process,
> the main idea of this optimization is to use bitmap to send the page
> information to host instead of the PFNs, to reduce the overhead of
> virtio data transmission, address translation and madvise(). This can
> help to improve the performance by about 85%.
>
> Another change is for speeding up live migration. By skipping process
> guest's free pages in the first round of data copy, to reduce needless
> data processing, this can help to save quite a lot of CPU cycles and
> network bandwidth. We put guest's free page information in bitmap and
> send it to host with the virt queue of virtio-balloon. For an idle 8GB
> guest, this can help to shorten the total live migration time from 2Sec
> to about 500ms in the 10Gbps network environment.

I just read the slides on this feature from the recent KVM Forum. Cloud
providers care more about live migration downtime, to avoid customers'
perception, than about total time; however, this feature increases
downtime while gaining the reduction in total time. It would be more
acceptable if there were no downside for downtime.

Regards,
Wanpeng Li
Li, Liang Z
2016-Sep-01 05:46 UTC
[PATCH v3 kernel 0/7] Extend virtio-balloon for fast (de)inflating & fast live migration
> Subject: Re: [PATCH v3 kernel 0/7] Extend virtio-balloon for fast (de)inflating
> & fast live migration
>
> I just read the slides on this feature from the recent KVM Forum. Cloud
> providers care more about live migration downtime, to avoid customers'
> perception, than about total time; however, this feature increases
> downtime while gaining the reduction in total time. It would be more
> acceptable if there were no downside for downtime.
>
> Regards,
> Wanpeng Li

In theory, there is no factor that should increase the downtime: there is
no additional operation and no extra data copy during the stop-and-copy
stage. But in testing, the downtime does increase, and this can be
reproduced. I think a busy network link may be the reason. With this
optimization, a huge amount of data is written to the socket in a shorter
time, so some of the write operations may have to wait. Without this
optimization, zero-page checking takes more time and the network is not
so busy.

If the guest is not an idle one, I think the gap in downtime will not be
so obvious. In any case, the downtime stays below the max_down_time set
by the user.

Thanks!
Liang