Wei Wang
2017-Apr-13 09:35 UTC
[PATCH v9 0/5] Extend virtio-balloon for fast (de)inflating & fast live migration
This patch series implements two optimizations: 1) transfer pages in chunks between the guest and host; 2) transfer the guest's unused pages to the host so that they can be skipped during live migration.

Changes:
v8->v9:
1) Split the two new features, VIRTIO_BALLOON_F_BALLOON_CHUNKS and VIRTIO_BALLOON_F_MISC_VQ, which were mixed together in the previous implementation;
2) Simpler function to get the free page block.

v7->v8:
1) Use only one chunk format, instead of two;
2) Re-write the virtio-balloon implementation patch;
3) Commit changes;
4) Patch re-org.

Liang Li (1):
  virtio-balloon: deflate via a page list

Wei Wang (4):
  virtio-balloon: VIRTIO_BALLOON_F_BALLOON_CHUNKS
  mm: function to offer a page block on the free list
  mm: export symbol of next_zone and first_online_pgdat
  virtio-balloon: VIRTIO_BALLOON_F_MISC_VQ

 drivers/virtio/virtio_balloon.c     | 615 +++++++++++++++++++++++++++++++++---
 include/linux/mm.h                  |   3 +
 include/uapi/linux/virtio_balloon.h |  21 ++
 mm/mmzone.c                         |   2 +
 mm/page_alloc.c                     |  87 +++++
 5 files changed, 678 insertions(+), 50 deletions(-)

-- 
2.7.4
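Optimization 1) amounts to run-length encoding: instead of one entry per PFN, contiguous PFNs collapse into (base, size) pairs. A minimal userspace sketch of that idea — names are illustrative, not from the patches — assuming a sorted PFN array:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative sketch (not from the patch): coalesce a sorted PFN array
 * into (base, size) runs, the idea behind transferring pages in chunks
 * instead of one PFN per virtqueue entry. */
struct pfn_chunk {
	uint64_t base;	/* first PFN of the run */
	uint64_t size;	/* number of contiguous PFNs */
};

/* Returns the number of chunks written to out (caller sizes out >= n). */
size_t coalesce_pfns(const uint64_t *pfns, size_t n, struct pfn_chunk *out)
{
	size_t nchunks = 0;

	for (size_t i = 0; i < n; i++) {
		if (nchunks &&
		    pfns[i] == out[nchunks - 1].base + out[nchunks - 1].size) {
			out[nchunks - 1].size++;	/* extends the run */
		} else {
			out[nchunks].base = pfns[i];	/* starts a new run */
			out[nchunks].size = 1;
			nchunks++;
		}
	}
	return nchunks;
}
```

With a mostly-idle guest the ballooned PFNs are largely contiguous, so a handful of chunks can describe millions of pages — which is where the ~85% inflate-time reduction reported in patch 2 comes from.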
From: Liang Li <liang.z.li at intel.com>

This patch saves the deflated pages to a list, instead of the PFN array. Accordingly, the balloon_pfn_to_page() function is removed.

Signed-off-by: Liang Li <liang.z.li at intel.com>
Signed-off-by: Michael S. Tsirkin <mst at redhat.com>
Signed-off-by: Wei Wang <wei.w.wang at intel.com>
---
 drivers/virtio/virtio_balloon.c | 22 ++++++++--------------
 1 file changed, 8 insertions(+), 14 deletions(-)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index 181793f..f59cb4f 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -103,12 +103,6 @@ static u32 page_to_balloon_pfn(struct page *page)
 	return pfn * VIRTIO_BALLOON_PAGES_PER_PAGE;
 }
 
-static struct page *balloon_pfn_to_page(u32 pfn)
-{
-	BUG_ON(pfn % VIRTIO_BALLOON_PAGES_PER_PAGE);
-	return pfn_to_page(pfn / VIRTIO_BALLOON_PAGES_PER_PAGE);
-}
-
 static void balloon_ack(struct virtqueue *vq)
 {
 	struct virtio_balloon *vb = vq->vdev->priv;
@@ -181,18 +175,16 @@ static unsigned fill_balloon(struct virtio_balloon *vb, size_t num)
 	return num_allocated_pages;
 }
 
-static void release_pages_balloon(struct virtio_balloon *vb)
+static void release_pages_balloon(struct virtio_balloon *vb,
+				  struct list_head *pages)
 {
-	unsigned int i;
-	struct page *page;
+	struct page *page, *next;
 
-	/* Find pfns pointing at start of each page, get pages and free them. */
-	for (i = 0; i < vb->num_pfns; i += VIRTIO_BALLOON_PAGES_PER_PAGE) {
-		page = balloon_pfn_to_page(virtio32_to_cpu(vb->vdev,
-							   vb->pfns[i]));
+	list_for_each_entry_safe(page, next, pages, lru) {
 		if (!virtio_has_feature(vb->vdev,
 					VIRTIO_BALLOON_F_DEFLATE_ON_OOM))
 			adjust_managed_page_count(page, 1);
+		list_del(&page->lru);
 		put_page(page); /* balloon reference */
 	}
 }
@@ -202,6 +194,7 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num)
 	unsigned num_freed_pages;
 	struct page *page;
 	struct balloon_dev_info *vb_dev_info = &vb->vb_dev_info;
+	LIST_HEAD(pages);
 
 	/* We can only do one array worth at a time. */
 	num = min(num, ARRAY_SIZE(vb->pfns));
@@ -215,6 +208,7 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num)
 		if (!page)
 			break;
 		set_page_pfns(vb, vb->pfns + vb->num_pfns, page);
+		list_add(&page->lru, &pages);
 		vb->num_pages -= VIRTIO_BALLOON_PAGES_PER_PAGE;
 	}
 
@@ -226,7 +220,7 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num)
 	 */
 	if (vb->num_pfns != 0)
 		tell_host(vb, vb->deflate_vq);
-	release_pages_balloon(vb);
+	release_pages_balloon(vb, &pages);
 	mutex_unlock(&vb->balloon_lock);
 	return num_freed_pages;
 }
-- 
2.7.4
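The new release_pages_balloon() uses list_for_each_entry_safe() because each entry is unlinked (list_del) and freed inside the loop body, so the next pointer must be cached before the current node disappears. A minimal userspace analogue of that pattern (not the kernel API itself):

```c
#include <stdlib.h>

/* Userspace analogue of release_pages_balloon(): walk a list and free
 * every node while iterating. Caching the next pointer before freeing
 * plays the role of list_for_each_entry_safe(). */
struct node {
	struct node *next;
	int pfn;
};

/* Frees every node on the list; returns how many were released. */
int release_all(struct node **head)
{
	int freed = 0;
	struct node *n = *head, *next;

	while (n) {
		next = n->next;	/* cache before freeing, as _safe does */
		free(n);
		freed++;
		n = next;
	}
	*head = NULL;
	return freed;
}

/* Helper to build a test list. */
struct node *push(struct node *head, int pfn)
{
	struct node *n = malloc(sizeof(*n));

	n->next = head;
	n->pfn = pfn;
	return n;
}
```

Iterating with the plain (non-safe) variant after freeing the current node would be a use-after-free, which is why the patch's loop pairs list_for_each_entry_safe() with list_del() and put_page().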
Wei Wang
2017-Apr-13 09:35 UTC
[PATCH v9 2/5] virtio-balloon: VIRTIO_BALLOON_F_BALLOON_CHUNKS
Add a new feature, VIRTIO_BALLOON_F_BALLOON_CHUNKS, which enables the transfer of the ballooned (i.e. inflated/deflated) pages to the host in chunks.

The previous virtio-balloon implementation is not very efficient, because the ballooned pages are transferred to the host one by one. Here is the breakdown of the time, in percentage, spent on each step of the balloon inflating process (inflating 7GB of an 8GB idle guest):

1) allocating pages (6.5%)
2) sending PFNs to host (68.3%)
3) address translation (6.1%)
4) madvise (19%)

It takes about 4126ms for the inflating process to complete. The above profiling shows that the bottlenecks are stage 2) and stage 4).

This patch optimizes step 2) by transferring pages to the host in chunks. A chunk consists of guest physically contiguous pages, and it is offered to the host via a base PFN (i.e. the start PFN of those physically contiguous pages) and the size (i.e. the total number of the pages). A chunk is formatted as below:

--------------------------------------------------------
|              Base (52 bit)           | Rsvd (12 bit) |
--------------------------------------------------------
--------------------------------------------------------
|              Size (52 bit)           | Rsvd (12 bit) |
--------------------------------------------------------

By doing so, step 4) can also be optimized by doing address translation and madvise() in chunks rather than page by page.

With this new feature, the above ballooning process takes ~590ms, an improvement of ~85%.

TODO: optimize stage 1) by allocating/freeing a chunk of pages instead of a single page each time.

Signed-off-by: Wei Wang <wei.w.wang at intel.com>
Signed-off-by: Liang Li <liang.z.li at intel.com>
Suggested-by: Michael S.
Tsirkin <mst at redhat.com> --- drivers/virtio/virtio_balloon.c | 384 +++++++++++++++++++++++++++++++++--- include/uapi/linux/virtio_balloon.h | 13 ++ 2 files changed, 374 insertions(+), 23 deletions(-) diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c index f59cb4f..5e2e7cc 100644 --- a/drivers/virtio/virtio_balloon.c +++ b/drivers/virtio/virtio_balloon.c @@ -42,6 +42,10 @@ #define OOM_VBALLOON_DEFAULT_PAGES 256 #define VIRTBALLOON_OOM_NOTIFY_PRIORITY 80 +#define PAGE_BMAP_SIZE (8 * PAGE_SIZE) +#define PFNS_PER_PAGE_BMAP (PAGE_BMAP_SIZE * BITS_PER_BYTE) +#define PAGE_BMAP_COUNT_MAX 32 + static int oom_pages = OOM_VBALLOON_DEFAULT_PAGES; module_param(oom_pages, int, S_IRUSR | S_IWUSR); MODULE_PARM_DESC(oom_pages, "pages to free on OOM"); @@ -50,6 +54,10 @@ MODULE_PARM_DESC(oom_pages, "pages to free on OOM"); static struct vfsmount *balloon_mnt; #endif +/* Types of pages to chunk */ +#define PAGE_CHUNK_TYPE_BALLOON 0 + +#define MAX_PAGE_CHUNKS 4096 struct virtio_balloon { struct virtio_device *vdev; struct virtqueue *inflate_vq, *deflate_vq, *stats_vq; @@ -78,6 +86,32 @@ struct virtio_balloon { /* Synchronize access/update to this struct virtio_balloon elements */ struct mutex balloon_lock; + /* + * Buffer for PAGE_CHUNK_TYPE_BALLOON: + * virtio_balloon_page_chunk_hdr + + * virtio_balloon_page_chunk * MAX_PAGE_CHUNKS + */ + struct virtio_balloon_page_chunk_hdr *balloon_page_chunk_hdr; + struct virtio_balloon_page_chunk *balloon_page_chunk; + + /* Bitmap used to record pages */ + unsigned long *page_bmap[PAGE_BMAP_COUNT_MAX]; + /* Number of the allocated page_bmap */ + unsigned int page_bmaps; + + /* + * The allocated page_bmap size may be smaller than the pfn range of + * the ballooned pages. In this case, we need to use the page_bmap + * multiple times to cover the entire pfn range. It's like using a + * short ruler several times to finish measuring a long object. 
+ * The start location of the ruler in the next measurement is the end + * location of the ruler in the previous measurement. + * + * pfn_max & pfn_min: forms the pfn range of the ballooned pages + * pfn_start & pfn_stop: records the start and stop pfn in each cover + */ + unsigned long pfn_min, pfn_max, pfn_start, pfn_stop; + /* The array of pfns we tell the Host about. */ unsigned int num_pfns; __virtio32 pfns[VIRTIO_BALLOON_ARRAY_PFNS_MAX]; @@ -110,20 +144,201 @@ static void balloon_ack(struct virtqueue *vq) wake_up(&vb->acked); } -static void tell_host(struct virtio_balloon *vb, struct virtqueue *vq) +static inline void init_page_bmap_range(struct virtio_balloon *vb) +{ + vb->pfn_min = ULONG_MAX; + vb->pfn_max = 0; +} + +static inline void update_page_bmap_range(struct virtio_balloon *vb, + struct page *page) +{ + unsigned long balloon_pfn = page_to_balloon_pfn(page); + + vb->pfn_min = min(balloon_pfn, vb->pfn_min); + vb->pfn_max = max(balloon_pfn, vb->pfn_max); +} + +/* The page_bmap size is extended by adding more number of page_bmap */ +static void extend_page_bmap_size(struct virtio_balloon *vb, + unsigned long pfns) +{ + int i, bmaps; + unsigned long bmap_len; + + bmap_len = ALIGN(pfns, BITS_PER_LONG) / BITS_PER_BYTE; + bmap_len = ALIGN(bmap_len, PAGE_BMAP_SIZE); + bmaps = min((int)(bmap_len / PAGE_BMAP_SIZE), + PAGE_BMAP_COUNT_MAX); + + for (i = 1; i < bmaps; i++) { + vb->page_bmap[i] = kmalloc(PAGE_BMAP_SIZE, GFP_KERNEL); + if (vb->page_bmap[i]) + vb->page_bmaps++; + else + break; + } +} + +static void free_extended_page_bmap(struct virtio_balloon *vb) +{ + int i, bmaps = vb->page_bmaps; + + for (i = 1; i < bmaps; i++) { + kfree(vb->page_bmap[i]); + vb->page_bmap[i] = NULL; + vb->page_bmaps--; + } +} + +static void free_page_bmap(struct virtio_balloon *vb) +{ + int i; + + for (i = 0; i < vb->page_bmaps; i++) + kfree(vb->page_bmap[i]); +} + +static void clear_page_bmap(struct virtio_balloon *vb) +{ + int i; + + for (i = 0; i < vb->page_bmaps; i++) + 
memset(vb->page_bmap[i], 0, PAGE_BMAP_SIZE); +} + +static void send_page_chunks(struct virtio_balloon *vb, struct virtqueue *vq, + int type, bool busy_wait) { struct scatterlist sg; + struct virtio_balloon_page_chunk_hdr *hdr; + void *buf; unsigned int len; - sg_init_one(&sg, vb->pfns, sizeof(vb->pfns[0]) * vb->num_pfns); + switch (type) { + case PAGE_CHUNK_TYPE_BALLOON: + hdr = vb->balloon_page_chunk_hdr; + len = 0; + break; + default: + dev_warn(&vb->vdev->dev, "%s: chunk %d of unknown pages\n", + __func__, type); + return; + } - /* We should always be able to add one buffer to an empty queue. */ - virtqueue_add_outbuf(vq, &sg, 1, vb, GFP_KERNEL); - virtqueue_kick(vq); + buf = (void *)hdr - len; + len += sizeof(struct virtio_balloon_page_chunk_hdr); + len += hdr->chunks * sizeof(struct virtio_balloon_page_chunk); + sg_init_table(&sg, 1); + sg_set_buf(&sg, buf, len); + if (!virtqueue_add_outbuf(vq, &sg, 1, vb, GFP_KERNEL)) { + virtqueue_kick(vq); + if (busy_wait) + while (!virtqueue_get_buf(vq, &len) && + !virtqueue_is_broken(vq)) + cpu_relax(); + else + wait_event(vb->acked, virtqueue_get_buf(vq, &len)); + hdr->chunks = 0; + } +} + +static void add_one_chunk(struct virtio_balloon *vb, struct virtqueue *vq, + int type, u64 base, u64 size) +{ + struct virtio_balloon_page_chunk_hdr *hdr; + struct virtio_balloon_page_chunk *chunk; + + switch (type) { + case PAGE_CHUNK_TYPE_BALLOON: + hdr = vb->balloon_page_chunk_hdr; + chunk = vb->balloon_page_chunk; + break; + default: + dev_warn(&vb->vdev->dev, "%s: chunk %d of unknown pages\n", + __func__, type); + return; + } + chunk = chunk + hdr->chunks; + chunk->base = cpu_to_le64(base << VIRTIO_BALLOON_CHUNK_BASE_SHIFT); + chunk->size = cpu_to_le64(size << VIRTIO_BALLOON_CHUNK_SIZE_SHIFT); + hdr->chunks++; + if (hdr->chunks == MAX_PAGE_CHUNKS) + send_page_chunks(vb, vq, type, false); +} + +static void chunking_pages_from_bmap(struct virtio_balloon *vb, + struct virtqueue *vq, + unsigned long pfn_start, + unsigned long *bmap, 
+ unsigned long len) +{ + unsigned long pos = 0, end = len * BITS_PER_BYTE; + + while (pos < end) { + unsigned long one = find_next_bit(bmap, end, pos); + + if (one < end) { + unsigned long chunk_size, zero; + + zero = find_next_zero_bit(bmap, end, one + 1); + if (zero >= end) + chunk_size = end - one; + else + chunk_size = zero - one; + + if (chunk_size) + add_one_chunk(vb, vq, PAGE_CHUNK_TYPE_BALLOON, + pfn_start + one, chunk_size); + pos = one + chunk_size; + } else + break; + } +} + +static void tell_host(struct virtio_balloon *vb, struct virtqueue *vq) +{ + if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_BALLOON_CHUNKS)) { + int pfns, page_bmaps, i; + unsigned long pfn_start, pfns_len; + + pfn_start = vb->pfn_start; + pfns = vb->pfn_stop - pfn_start + 1; + pfns = roundup(roundup(pfns, BITS_PER_LONG), + PFNS_PER_PAGE_BMAP); + page_bmaps = pfns / PFNS_PER_PAGE_BMAP; + pfns_len = pfns / BITS_PER_BYTE; + + for (i = 0; i < page_bmaps; i++) { + unsigned int bmap_len = PAGE_BMAP_SIZE; + + /* The last one takes the leftover only */ + if (i + 1 == page_bmaps) + bmap_len = pfns_len - PAGE_BMAP_SIZE * i; + + chunking_pages_from_bmap(vb, vq, pfn_start + + i * PFNS_PER_PAGE_BMAP, + vb->page_bmap[i], bmap_len); + } + if (vb->balloon_page_chunk_hdr->chunks > 0) + send_page_chunks(vb, vq, PAGE_CHUNK_TYPE_BALLOON, + false); + } else { + struct scatterlist sg; + unsigned int len; - /* When host has read buffer, this completes via balloon_ack */ - wait_event(vb->acked, virtqueue_get_buf(vq, &len)); + sg_init_one(&sg, vb->pfns, sizeof(vb->pfns[0]) * vb->num_pfns); + /* + * We should always be able to add one buffer to an empty + * queue. 
+ */ + virtqueue_add_outbuf(vq, &sg, 1, vb, GFP_KERNEL); + virtqueue_kick(vq); + + /* When host has read buffer, this completes via balloon_ack */ + wait_event(vb->acked, virtqueue_get_buf(vq, &len)); + } } static void set_page_pfns(struct virtio_balloon *vb, @@ -131,20 +346,73 @@ static void set_page_pfns(struct virtio_balloon *vb, { unsigned int i; - /* Set balloon pfns pointing at this page. - * Note that the first pfn points at start of the page. */ + /* + * Set balloon pfns pointing at this page. + * Note that the first pfn points at start of the page. + */ for (i = 0; i < VIRTIO_BALLOON_PAGES_PER_PAGE; i++) pfns[i] = cpu_to_virtio32(vb->vdev, page_to_balloon_pfn(page) + i); } +static void set_page_bmap(struct virtio_balloon *vb, + struct list_head *pages, struct virtqueue *vq) +{ + unsigned long pfn_start, pfn_stop; + struct page *page; + bool found; + + vb->pfn_min = rounddown(vb->pfn_min, BITS_PER_LONG); + vb->pfn_max = roundup(vb->pfn_max, BITS_PER_LONG); + + extend_page_bmap_size(vb, vb->pfn_max - vb->pfn_min + 1); + pfn_start = vb->pfn_min; + + while (pfn_start < vb->pfn_max) { + pfn_stop = pfn_start + PFNS_PER_PAGE_BMAP * vb->page_bmaps; + pfn_stop = pfn_stop < vb->pfn_max ? 
pfn_stop : vb->pfn_max; + + vb->pfn_start = pfn_start; + clear_page_bmap(vb); + found = false; + + list_for_each_entry(page, pages, lru) { + unsigned long bmap_idx, bmap_pos, balloon_pfn; + + balloon_pfn = page_to_balloon_pfn(page); + if (balloon_pfn < pfn_start || balloon_pfn > pfn_stop) + continue; + bmap_idx = (balloon_pfn - pfn_start) / + PFNS_PER_PAGE_BMAP; + bmap_pos = (balloon_pfn - pfn_start) % + PFNS_PER_PAGE_BMAP; + set_bit(bmap_pos, vb->page_bmap[bmap_idx]); + + found = true; + } + if (found) { + vb->pfn_stop = pfn_stop; + tell_host(vb, vq); + } + pfn_start = pfn_stop; + } + free_extended_page_bmap(vb); +} + static unsigned fill_balloon(struct virtio_balloon *vb, size_t num) { struct balloon_dev_info *vb_dev_info = &vb->vb_dev_info; unsigned num_allocated_pages; + bool chunking = virtio_has_feature(vb->vdev, + VIRTIO_BALLOON_F_BALLOON_CHUNKS); /* We can only do one array worth at a time. */ - num = min(num, ARRAY_SIZE(vb->pfns)); + if (chunking) { + init_page_bmap_range(vb); + } else { + /* We can only do one array worth at a time. */ + num = min(num, ARRAY_SIZE(vb->pfns)); + } mutex_lock(&vb->balloon_lock); for (vb->num_pfns = 0; vb->num_pfns < num; @@ -159,7 +427,10 @@ static unsigned fill_balloon(struct virtio_balloon *vb, size_t num) msleep(200); break; } - set_page_pfns(vb, vb->pfns + vb->num_pfns, page); + if (chunking) + update_page_bmap_range(vb, page); + else + set_page_pfns(vb, vb->pfns + vb->num_pfns, page); vb->num_pages += VIRTIO_BALLOON_PAGES_PER_PAGE; if (!virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM)) @@ -168,8 +439,13 @@ static unsigned fill_balloon(struct virtio_balloon *vb, size_t num) num_allocated_pages = vb->num_pfns; /* Did we get any? 
*/ - if (vb->num_pfns != 0) - tell_host(vb, vb->inflate_vq); + if (vb->num_pfns != 0) { + if (chunking) + set_page_bmap(vb, &vb_dev_info->pages, + vb->inflate_vq); + else + tell_host(vb, vb->inflate_vq); + } mutex_unlock(&vb->balloon_lock); return num_allocated_pages; @@ -195,6 +471,13 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num) struct page *page; struct balloon_dev_info *vb_dev_info = &vb->vb_dev_info; LIST_HEAD(pages); + bool chunking = virtio_has_feature(vb->vdev, + VIRTIO_BALLOON_F_BALLOON_CHUNKS); + if (chunking) + init_page_bmap_range(vb); + else + /* We can only do one array worth at a time. */ + num = min(num, ARRAY_SIZE(vb->pfns)); /* We can only do one array worth at a time. */ num = min(num, ARRAY_SIZE(vb->pfns)); @@ -208,6 +491,10 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num) if (!page) break; set_page_pfns(vb, vb->pfns + vb->num_pfns, page); + if (chunking) + update_page_bmap_range(vb, page); + else + set_page_pfns(vb, vb->pfns + vb->num_pfns, page); list_add(&page->lru, &pages); vb->num_pages -= VIRTIO_BALLOON_PAGES_PER_PAGE; } @@ -218,8 +505,12 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num) * virtio_has_feature(vdev, VIRTIO_BALLOON_F_MUST_TELL_HOST); * is true, we *have* to do it in this order */ - if (vb->num_pfns != 0) - tell_host(vb, vb->deflate_vq); + if (vb->num_pfns != 0) { + if (chunking) + set_page_bmap(vb, &pages, vb->deflate_vq); + else + tell_host(vb, vb->deflate_vq); + } release_pages_balloon(vb, &pages); mutex_unlock(&vb->balloon_lock); return num_freed_pages; @@ -431,6 +722,13 @@ static int init_vqs(struct virtio_balloon *vb) } #ifdef CONFIG_BALLOON_COMPACTION + +static void tell_host_one_page(struct virtio_balloon *vb, + struct virtqueue *vq, struct page *page) +{ + add_one_chunk(vb, vq, PAGE_CHUNK_TYPE_BALLOON, page_to_pfn(page), 1); +} + /* * virtballoon_migratepage - perform the balloon page migration on behalf of * a compation thread. 
(called under page lock) @@ -454,6 +752,8 @@ static int virtballoon_migratepage(struct balloon_dev_info *vb_dev_info, { struct virtio_balloon *vb = container_of(vb_dev_info, struct virtio_balloon, vb_dev_info); + bool chunking = virtio_has_feature(vb->vdev, + VIRTIO_BALLOON_F_BALLOON_CHUNKS); unsigned long flags; /* @@ -475,16 +775,22 @@ static int virtballoon_migratepage(struct balloon_dev_info *vb_dev_info, vb_dev_info->isolated_pages--; __count_vm_event(BALLOON_MIGRATE); spin_unlock_irqrestore(&vb_dev_info->pages_lock, flags); - vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE; - set_page_pfns(vb, vb->pfns, newpage); - tell_host(vb, vb->inflate_vq); - + if (chunking) { + tell_host_one_page(vb, vb->inflate_vq, newpage); + } else { + vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE; + set_page_pfns(vb, vb->pfns, newpage); + tell_host(vb, vb->inflate_vq); + } /* balloon's page migration 2nd step -- deflate "page" */ balloon_page_delete(page); - vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE; - set_page_pfns(vb, vb->pfns, page); - tell_host(vb, vb->deflate_vq); - + if (chunking) { + tell_host_one_page(vb, vb->deflate_vq, page); + } else { + vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE; + set_page_pfns(vb, vb->pfns, page); + tell_host(vb, vb->deflate_vq); + } mutex_unlock(&vb->balloon_lock); put_page(page); /* balloon reference */ @@ -511,6 +817,32 @@ static struct file_system_type balloon_fs = { #endif /* CONFIG_BALLOON_COMPACTION */ +static void balloon_page_chunk_init(struct virtio_balloon *vb) +{ + void *buf; + + /* + * By default, we allocate page_bmap[0] only. More page_bmap will be + * allocated on demand. 
+ */ + vb->page_bmap[0] = kmalloc(PAGE_BMAP_SIZE, GFP_KERNEL); + buf = kmalloc(sizeof(struct virtio_balloon_page_chunk_hdr) + + sizeof(struct virtio_balloon_page_chunk) * + MAX_PAGE_CHUNKS, GFP_KERNEL); + if (!vb->page_bmap[0] || !buf) { + __virtio_clear_bit(vb->vdev, VIRTIO_BALLOON_F_BALLOON_CHUNKS); + kfree(vb->page_bmap[0]); + kfree(vb->balloon_page_chunk_hdr); + dev_warn(&vb->vdev->dev, "%s: failed\n", __func__); + } else { + vb->page_bmaps = 1; + vb->balloon_page_chunk_hdr = buf; + vb->balloon_page_chunk_hdr->chunks = 0; + vb->balloon_page_chunk = buf + + sizeof(struct virtio_balloon_page_chunk_hdr); + } +} + static int virtballoon_probe(struct virtio_device *vdev) { struct virtio_balloon *vb; @@ -533,6 +865,10 @@ static int virtballoon_probe(struct virtio_device *vdev) spin_lock_init(&vb->stop_update_lock); vb->stop_update = false; vb->num_pages = 0; + + if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_BALLOON_CHUNKS)) + balloon_page_chunk_init(vb); + mutex_init(&vb->balloon_lock); init_waitqueue_head(&vb->acked); vb->vdev = vdev; @@ -609,6 +945,7 @@ static void virtballoon_remove(struct virtio_device *vdev) cancel_work_sync(&vb->update_balloon_stats_work); remove_common(vb); + free_page_bmap(vb); if (vb->vb_dev_info.inode) iput(vb->vb_dev_info.inode); kfree(vb); @@ -649,6 +986,7 @@ static unsigned int features[] = { VIRTIO_BALLOON_F_MUST_TELL_HOST, VIRTIO_BALLOON_F_STATS_VQ, VIRTIO_BALLOON_F_DEFLATE_ON_OOM, + VIRTIO_BALLOON_F_BALLOON_CHUNKS, }; static struct virtio_driver virtio_balloon_driver = { diff --git a/include/uapi/linux/virtio_balloon.h b/include/uapi/linux/virtio_balloon.h index 343d7dd..be317b7 100644 --- a/include/uapi/linux/virtio_balloon.h +++ b/include/uapi/linux/virtio_balloon.h @@ -34,6 +34,7 @@ #define VIRTIO_BALLOON_F_MUST_TELL_HOST 0 /* Tell before reclaiming pages */ #define VIRTIO_BALLOON_F_STATS_VQ 1 /* Memory Stats virtqueue */ #define VIRTIO_BALLOON_F_DEFLATE_ON_OOM 2 /* Deflate balloon on OOM */ +#define VIRTIO_BALLOON_F_BALLOON_CHUNKS 
3 /* Inflate/Deflate pages in chunks */ /* Size of a PFN in the balloon interface. */ #define VIRTIO_BALLOON_PFN_SHIFT 12 @@ -82,4 +83,16 @@ struct virtio_balloon_stat { __virtio64 val; } __attribute__((packed)); +struct virtio_balloon_page_chunk_hdr { + /* Number of chunks in the payload */ + __le32 chunks; +}; + +#define VIRTIO_BALLOON_CHUNK_BASE_SHIFT 12 +#define VIRTIO_BALLOON_CHUNK_SIZE_SHIFT 12 +struct virtio_balloon_page_chunk { + __le64 base; + __le64 size; +}; + #endif /* _LINUX_VIRTIO_BALLOON_H */ -- 2.7.4
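The core of chunking_pages_from_bmap() above is a run scanner: find the next set bit, measure the run of set bits, and emit that run as one (base, size) chunk. A userspace sketch of the same logic, using a plain byte bitmap in place of the kernel's find_next_bit()/find_next_zero_bit() (function names here are illustrative):

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of the chunking_pages_from_bmap() idea: scan a bitmap for runs
 * of set bits and report each run as (pfn_start + first_bit, run_len). */
static int bmap_test_bit(const uint8_t *bmap, size_t bit)
{
	return (bmap[bit / 8] >> (bit % 8)) & 1;
}

/* Writes up to max {base, size} pairs into chunks; returns count found. */
size_t bmap_to_chunks(const uint8_t *bmap, size_t nbits, uint64_t pfn_start,
		      uint64_t (*chunks)[2], size_t max)
{
	size_t n = 0, pos = 0;

	while (pos < nbits && n < max) {
		while (pos < nbits && !bmap_test_bit(bmap, pos))
			pos++;			/* skip the zero gap */
		if (pos == nbits)
			break;
		size_t one = pos;		/* first set bit of the run */
		while (pos < nbits && bmap_test_bit(bmap, pos))
			pos++;			/* measure the run */
		chunks[n][0] = pfn_start + one;	/* base pfn */
		chunks[n][1] = pos - one;	/* run length */
		n++;
	}
	return n;
}
```

In the driver, each emitted run is then packed into a virtio_balloon_page_chunk with the value shifted left by VIRTIO_BALLOON_CHUNK_BASE_SHIFT / VIRTIO_BALLOON_CHUNK_SIZE_SHIFT (12), leaving the low 12 reserved bits zero.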
Wei Wang
2017-Apr-13 09:35 UTC
[PATCH v9 3/5] mm: function to offer a page block on the free list
Add a function to find a page block on the free list specified by the caller. Pages from the page block may be used immediately after the function returns. The caller is responsible for detecting or preventing the use of such pages. Signed-off-by: Wei Wang <wei.w.wang at intel.com> Signed-off-by: Liang Li <liang.z.li at intel.com> --- include/linux/mm.h | 3 ++ mm/page_alloc.c | 87 ++++++++++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 90 insertions(+) diff --git a/include/linux/mm.h b/include/linux/mm.h index b84615b..096705e 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1764,6 +1764,9 @@ extern void free_area_init(unsigned long * zones_size); extern void free_area_init_node(int nid, unsigned long * zones_size, unsigned long zone_start_pfn, unsigned long *zholes_size); extern void free_initmem(void); +extern int inquire_unused_page_block(struct zone *zone, unsigned int order, + unsigned int migratetype, + struct page **page); /* * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK) diff --git a/mm/page_alloc.c b/mm/page_alloc.c index f3e0c69..fa8203f 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -4498,6 +4498,93 @@ void show_free_areas(unsigned int filter) show_swap_cache_info(); } +/** + * Heuristically get a page block in the system that is unused. + * It is possible that pages from the page block are used immediately after + * inquire_unused_page_block() returns. It is the caller's responsibility + * to either detect or prevent the use of such pages. + * + * The free list to check: zone->free_area[order].free_list[migratetype]. + * + * If the caller supplied page block (i.e. **page) is on the free list, offer + * the next page block on the list to the caller. Otherwise, offer the first + * page block on the list. + * + * Return 0 when a page block is found on the caller specified free list. 
+ */ +int inquire_unused_page_block(struct zone *zone, unsigned int order, + unsigned int migratetype, struct page **page) +{ + struct zone *this_zone; + struct list_head *this_list; + int ret = 0; + unsigned long flags; + + /* Sanity check */ + if (zone == NULL || page == NULL || order >= MAX_ORDER || + migratetype >= MIGRATE_TYPES) + return -EINVAL; + + /* Zone validity check */ + for_each_populated_zone(this_zone) { + if (zone == this_zone) + break; + } + + /* Got a non-existent zone from the caller? */ + if (zone != this_zone) + return -EINVAL; + + spin_lock_irqsave(&this_zone->lock, flags); + + this_list = &zone->free_area[order].free_list[migratetype]; + if (list_empty(this_list)) { + *page = NULL; + ret = 1; + goto out; + } + + /* The caller is asking for the first free page block on the list */ + if ((*page) == NULL) { + *page = list_first_entry(this_list, struct page, lru); + ret = 0; + goto out; + } + + /** + * The page block passed from the caller is not on this free list + * anymore (e.g. a 1MB free page block has been split). In this case, + * offer the first page block on the free list that the caller is + * asking for. + */ + if (PageBuddy(*page) && order != page_order(*page)) { + *page = list_first_entry(this_list, struct page, lru); + ret = 0; + goto out; + } + + /** + * The page block passed from the caller has been the last page block + * on the list. + */ + if ((*page)->lru.next == this_list) { + *page = NULL; + ret = 1; + goto out; + } + + /** + * Finally, fall into the regular case: the page block passed from the + * caller is still on the free list. Offer the next one. + */ + *page = list_next_entry((*page), lru); + ret = 0; +out: + spin_unlock_irqrestore(&this_zone->lock, flags); + return ret; +} +EXPORT_SYMBOL(inquire_unused_page_block); + static void zoneref_set_zone(struct zone *zone, struct zoneref *zoneref) { zoneref->zone = zone; -- 2.7.4
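The calling convention of inquire_unused_page_block() is a cursor-style iterator: pass *page == NULL to get the first free page block, pass the previously returned block to get the next one, and take a return value of 1 as end-of-list. A minimal userspace sketch of that protocol over an array (the zone locking and PageBuddy() revalidation from the patch are omitted):

```c
#include <stddef.h>

/* Sketch of the inquire_unused_page_block() iterator protocol:
 * *cursor == NULL asks for the first entry; otherwise advance past the
 * previous entry. Returns 0 on success, 1 when the list is exhausted. */
int inquire_next(int *list, size_t len, int **cursor)
{
	if (!*cursor) {			/* caller asks for the first entry */
		if (len == 0)
			return 1;	/* empty list */
		*cursor = &list[0];
		return 0;
	}
	if (*cursor == &list[len - 1]) {
		*cursor = NULL;		/* previous entry was the last one */
		return 1;
	}
	(*cursor)++;			/* regular case: offer the next one */
	return 0;
}
```

This is exactly the do/while shape the driver uses in patch 5: per zone, order, and migratetype, it keeps calling with the previous page until the function returns non-zero.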
Wei Wang
2017-Apr-13 09:35 UTC
[PATCH v9 4/5] mm: export symbol of next_zone and first_online_pgdat
This patch enables for_each_zone()/for_each_populated_zone() to be invoked by a kernel module.

Signed-off-by: Wei Wang <wei.w.wang at intel.com>
---
 mm/mmzone.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/mm/mmzone.c b/mm/mmzone.c
index 5652be8..e14b7ec 100644
--- a/mm/mmzone.c
+++ b/mm/mmzone.c
@@ -13,6 +13,7 @@ struct pglist_data *first_online_pgdat(void)
 {
 	return NODE_DATA(first_online_node);
 }
+EXPORT_SYMBOL_GPL(first_online_pgdat);
 
 struct pglist_data *next_online_pgdat(struct pglist_data *pgdat)
 {
@@ -41,6 +42,7 @@ struct zone *next_zone(struct zone *zone)
 	}
 	return zone;
 }
+EXPORT_SYMBOL_GPL(next_zone);
 
 static inline int zref_in_nodemask(struct zoneref *zref, nodemask_t *nodes)
 {
-- 
2.7.4
Add a new vq, miscq, to handle miscellaneous requests between the device and the driver.

This patch implements the VIRTIO_BALLOON_MISCQ_INQUIRE_UNUSED_PAGES request sent from the device. Upon receiving this request from the miscq, the driver offers the guest's unused pages to the device. Tests have shown that, for a 32G guest, skipping the transfer of unused pages can reduce the live migration time to about 1/8 of the original.

Signed-off-by: Wei Wang <wei.w.wang at intel.com>
Signed-off-by: Liang Li <liang.z.li at intel.com>
---
 drivers/virtio/virtio_balloon.c     | 209 +++++++++++++++++++++++++++++++++---
 include/uapi/linux/virtio_balloon.h |   8 ++
 2 files changed, 204 insertions(+), 13 deletions(-)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index 5e2e7cc..95c703e 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -56,11 +56,12 @@ static struct vfsmount *balloon_mnt;
 
 /* Types of pages to chunk */
 #define PAGE_CHUNK_TYPE_BALLOON 0
+#define PAGE_CHUNK_TYPE_UNUSED 1
 
 #define MAX_PAGE_CHUNKS 4096
 struct virtio_balloon {
 	struct virtio_device *vdev;
-	struct virtqueue *inflate_vq, *deflate_vq, *stats_vq;
+	struct virtqueue *inflate_vq, *deflate_vq, *stats_vq, *miscq;
 
 	/* The balloon servicing is delegated to a freezable workqueue.
*/ struct work_struct update_balloon_stats_work; @@ -94,6 +95,19 @@ struct virtio_balloon { struct virtio_balloon_page_chunk_hdr *balloon_page_chunk_hdr; struct virtio_balloon_page_chunk *balloon_page_chunk; + /* + * Buffer for PAGE_CHUNK_TYPE_UNUSED: + * virtio_balloon_miscq_hdr + + * virtio_balloon_page_chunk_hdr + + * virtio_balloon_page_chunk * MAX_PAGE_CHUNKS + */ + struct virtio_balloon_miscq_hdr *miscq_out_hdr; + struct virtio_balloon_page_chunk_hdr *unused_page_chunk_hdr; + struct virtio_balloon_page_chunk *unused_page_chunk; + + /* Buffer for host to send cmd to miscq */ + struct virtio_balloon_miscq_hdr *miscq_in_hdr; + /* Bitmap used to record pages */ unsigned long *page_bmap[PAGE_BMAP_COUNT_MAX]; /* Number of the allocated page_bmap */ @@ -220,6 +234,10 @@ static void send_page_chunks(struct virtio_balloon *vb, struct virtqueue *vq, hdr = vb->balloon_page_chunk_hdr; len = 0; break; + case PAGE_CHUNK_TYPE_UNUSED: + hdr = vb->unused_page_chunk_hdr; + len = sizeof(struct virtio_balloon_miscq_hdr); + break; default: dev_warn(&vb->vdev->dev, "%s: chunk %d of unknown pages\n", __func__, type); @@ -254,6 +272,10 @@ static void add_one_chunk(struct virtio_balloon *vb, struct virtqueue *vq, hdr = vb->balloon_page_chunk_hdr; chunk = vb->balloon_page_chunk; break; + case PAGE_CHUNK_TYPE_UNUSED: + hdr = vb->unused_page_chunk_hdr; + chunk = vb->unused_page_chunk; + break; default: dev_warn(&vb->vdev->dev, "%s: chunk %d of unknown pages\n", __func__, type); @@ -686,28 +708,139 @@ static void update_balloon_size_func(struct work_struct *work) queue_work(system_freezable_wq, work); } +static void miscq_in_hdr_add(struct virtio_balloon *vb) +{ + struct scatterlist sg_in; + + sg_init_one(&sg_in, vb->miscq_in_hdr, + sizeof(struct virtio_balloon_miscq_hdr)); + if (virtqueue_add_inbuf(vb->miscq, &sg_in, 1, vb->miscq_in_hdr, + GFP_KERNEL) < 0) { + __virtio_clear_bit(vb->vdev, + VIRTIO_BALLOON_F_MISC_VQ); + dev_warn(&vb->vdev->dev, "%s: add miscq_in_hdr err\n", + __func__); 
+ return; + } + virtqueue_kick(vb->miscq); +} + +static void miscq_send_unused_pages(struct virtio_balloon *vb) +{ + struct virtio_balloon_miscq_hdr *miscq_out_hdr = vb->miscq_out_hdr; + struct virtqueue *vq = vb->miscq; + int ret = 0; + unsigned int order = 0, migratetype = 0; + struct zone *zone = NULL; + struct page *page = NULL; + u64 pfn; + + miscq_out_hdr->cmd = VIRTIO_BALLOON_MISCQ_INQUIRE_UNUSED_PAGES; + miscq_out_hdr->flags = 0; + + for_each_populated_zone(zone) { + for (order = MAX_ORDER - 1; order > 0; order--) { + for (migratetype = 0; migratetype < MIGRATE_TYPES; + migratetype++) { + do { + ret = inquire_unused_page_block(zone, + order, migratetype, &page); + if (!ret) { + pfn = (u64)page_to_pfn(page); + add_one_chunk(vb, vq, + PAGE_CHUNK_TYPE_UNUSED, + pfn, + (u64)(1 << order)); + } + } while (!ret); + } + } + } + miscq_out_hdr->flags |= VIRTIO_BALLOON_MISCQ_F_COMPLETE; + send_page_chunks(vb, vq, PAGE_CHUNK_TYPE_UNUSED, true); +} + +static void miscq_handle(struct virtqueue *vq) +{ + struct virtio_balloon *vb = vq->vdev->priv; + struct virtio_balloon_miscq_hdr *hdr; + unsigned int len; + + hdr = virtqueue_get_buf(vb->miscq, &len); + if (!hdr || len != sizeof(struct virtio_balloon_miscq_hdr)) { + dev_warn(&vb->vdev->dev, "%s: invalid miscq hdr len\n", + __func__); + miscq_in_hdr_add(vb); + return; + } + switch (hdr->cmd) { + case VIRTIO_BALLOON_MISCQ_INQUIRE_UNUSED_PAGES: + miscq_send_unused_pages(vb); + break; + default: + dev_warn(&vb->vdev->dev, "%s: miscq cmd %d not supported\n", + __func__, hdr->cmd); + } + miscq_in_hdr_add(vb); +} + static int init_vqs(struct virtio_balloon *vb) { - struct virtqueue *vqs[3]; - vq_callback_t *callbacks[] = { balloon_ack, balloon_ack, stats_request }; - static const char * const names[] = { "inflate", "deflate", "stats" }; - int err, nvqs; + struct virtqueue **vqs; + vq_callback_t **callbacks; + const char **names; + int err = -ENOMEM; + int i, nvqs; + + /* Inflateq and deflateq are used unconditionally */ + nvqs = 
2; + + if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) + nvqs++; + if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_MISC_VQ)) + nvqs++; + + /* Allocate space for find_vqs parameters */ + vqs = kcalloc(nvqs, sizeof(*vqs), GFP_KERNEL); + if (!vqs) + goto err_vq; + callbacks = kmalloc_array(nvqs, sizeof(*callbacks), GFP_KERNEL); + if (!callbacks) + goto err_callback; + names = kmalloc_array(nvqs, sizeof(*names), GFP_KERNEL); + if (!names) + goto err_names; + + callbacks[0] = balloon_ack; + names[0] = "inflate"; + callbacks[1] = balloon_ack; + names[1] = "deflate"; + + i = 2; + if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) { + callbacks[i] = stats_request; + names[i] = "stats"; + i++; + } - /* - * We expect two virtqueues: inflate and deflate, and - * optionally stat. - */ - nvqs = virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ) ? 3 : 2; - err = vb->vdev->config->find_vqs(vb->vdev, nvqs, vqs, callbacks, names); + if (virtio_has_feature(vb->vdev, + VIRTIO_BALLOON_F_MISC_VQ)) { + callbacks[i] = miscq_handle; + names[i] = "miscq"; + } + + err = vb->vdev->config->find_vqs(vb->vdev, nvqs, vqs, callbacks, + names); if (err) - return err; + goto err_find; vb->inflate_vq = vqs[0]; vb->deflate_vq = vqs[1]; + i = 2; if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) { struct scatterlist sg; - vb->stats_vq = vqs[2]; + vb->stats_vq = vqs[i++]; /* * Prime this virtqueue with one buffer so the hypervisor can * use it to signal us later (it can't be broken yet!). 
@@ -718,7 +851,25 @@ static int init_vqs(struct virtio_balloon *vb) BUG(); virtqueue_kick(vb->stats_vq); } + + if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_MISC_VQ)) { + vb->miscq = vqs[i]; + miscq_in_hdr_add(vb); + } + + kfree(names); + kfree(callbacks); + kfree(vqs); return 0; + +err_find: + kfree(names); +err_names: + kfree(callbacks); +err_callback: + kfree(vqs); +err_vq: + return err; } #ifdef CONFIG_BALLOON_COMPACTION @@ -843,6 +994,32 @@ static void balloon_page_chunk_init(struct virtio_balloon *vb) } } +static void miscq_init(struct virtio_balloon *vb) +{ + void *buf; + + vb->miscq_in_hdr = kmalloc(sizeof(struct virtio_balloon_miscq_hdr), + GFP_KERNEL); + buf = kmalloc(sizeof(struct virtio_balloon_miscq_hdr) + + sizeof(struct virtio_balloon_page_chunk_hdr) + + sizeof(struct virtio_balloon_page_chunk) * + MAX_PAGE_CHUNKS, GFP_KERNEL); + if (!vb->miscq_in_hdr || !buf) { + kfree(buf); + kfree(vb->miscq_in_hdr); + __virtio_clear_bit(vb->vdev, VIRTIO_BALLOON_F_MISC_VQ); + dev_warn(&vb->vdev->dev, "%s: failed\n", __func__); + } else { + vb->miscq_out_hdr = buf; + vb->unused_page_chunk_hdr = buf + + sizeof(struct virtio_balloon_miscq_hdr); + vb->unused_page_chunk_hdr->chunks = 0; + vb->unused_page_chunk = buf + + sizeof(struct virtio_balloon_miscq_hdr) + + sizeof(struct virtio_balloon_page_chunk_hdr); + } +} + static int virtballoon_probe(struct virtio_device *vdev) { struct virtio_balloon *vb; @@ -869,6 +1046,9 @@ static int virtballoon_probe(struct virtio_device *vdev) if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_BALLOON_CHUNKS)) balloon_page_chunk_init(vb); + if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_MISC_VQ)) + miscq_init(vb); + mutex_init(&vb->balloon_lock); init_waitqueue_head(&vb->acked); vb->vdev = vdev; @@ -946,6 +1126,8 @@ static void virtballoon_remove(struct virtio_device *vdev) remove_common(vb); free_page_bmap(vb); + kfree(vb->miscq_out_hdr); + kfree(vb->miscq_in_hdr); if (vb->vb_dev_info.inode) iput(vb->vb_dev_info.inode); kfree(vb); @@ 
-987,6 +1169,7 @@ static unsigned int features[] = { VIRTIO_BALLOON_F_STATS_VQ, VIRTIO_BALLOON_F_DEFLATE_ON_OOM, VIRTIO_BALLOON_F_BALLOON_CHUNKS, + VIRTIO_BALLOON_F_MISC_VQ, }; static struct virtio_driver virtio_balloon_driver = { diff --git a/include/uapi/linux/virtio_balloon.h b/include/uapi/linux/virtio_balloon.h index be317b7..96bdc86 100644 --- a/include/uapi/linux/virtio_balloon.h +++ b/include/uapi/linux/virtio_balloon.h @@ -35,6 +35,7 @@ #define VIRTIO_BALLOON_F_STATS_VQ 1 /* Memory Stats virtqueue */ #define VIRTIO_BALLOON_F_DEFLATE_ON_OOM 2 /* Deflate balloon on OOM */ #define VIRTIO_BALLOON_F_BALLOON_CHUNKS 3 /* Inflate/Deflate pages in chunks */ +#define VIRTIO_BALLOON_F_MISC_VQ 4 /* Virtqueue for misc. requests */ /* Size of a PFN in the balloon interface. */ #define VIRTIO_BALLOON_PFN_SHIFT 12 @@ -95,4 +96,11 @@ struct virtio_balloon_page_chunk { __le64 size; }; +#define VIRTIO_BALLOON_MISCQ_INQUIRE_UNUSED_PAGES 0 +#define VIRTIO_BALLOON_MISCQ_F_COMPLETE 0x1 +struct virtio_balloon_miscq_hdr { + __le16 cmd; + __le16 flags; +}; + #endif /* _LINUX_VIRTIO_BALLOON_H */ -- 2.7.4
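The new `struct virtio_balloon_miscq_hdr` added to the UAPI header above is two little-endian 16-bit fields on the wire. A minimal userspace sketch of that encoding follows; the pack/unpack helper names are mine, not part of the patch, and stand in for the kernel's `cpu_to_le16()`/`le16_to_cpu()` conversions:

```c
#include <assert.h>
#include <stdint.h>

/* Wire format of the miscq header from the patch above:
 *   __le16 cmd; __le16 flags;
 * Illustrative helpers only -- not from the patch. */
#define MISCQ_INQUIRE_UNUSED_PAGES 0
#define MISCQ_F_COMPLETE 0x1

static void miscq_hdr_pack(uint8_t buf[4], uint16_t cmd, uint16_t flags)
{
	buf[0] = cmd & 0xff;	/* le16: least significant byte first */
	buf[1] = cmd >> 8;
	buf[2] = flags & 0xff;
	buf[3] = flags >> 8;
}

static void miscq_hdr_unpack(const uint8_t buf[4], uint16_t *cmd,
			     uint16_t *flags)
{
	*cmd = (uint16_t)buf[0] | ((uint16_t)buf[1] << 8);
	*flags = (uint16_t)buf[2] | ((uint16_t)buf[3] << 8);
}
```

Explicit byte-wise packing keeps the layout identical on little- and big-endian hosts, which is the point of the `__le16` annotations in the header.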
Michael S. Tsirkin
2017-Apr-13 16:34 UTC
[PATCH v9 2/5] virtio-balloon: VIRTIO_BALLOON_F_BALLOON_CHUNKS
On Thu, Apr 13, 2017 at 05:35:05PM +0800, Wei Wang wrote:
> Add a new feature, VIRTIO_BALLOON_F_BALLOON_CHUNKS, which enables

Let's find a better name here. VIRTIO_BALLOON_F_PAGE_CHUNK

> the transfer of the ballooned (i.e. inflated/deflated) pages in
> chunks to the host.
>
> The implementation of the previous virtio-balloon is not very
> efficient, because the ballooned pages are transferred to the
> host one by one. Here is the breakdown of the time in percentage
> spent on each step of the balloon inflating process (inflating
> 7GB of an 8GB idle guest).
>
> 1) allocating pages (6.5%)
> 2) sending PFNs to host (68.3%)
> 3) address translation (6.1%)
> 4) madvise (19%)
>
> It takes about 4126ms for the inflating process to complete.
> The above profiling shows that the bottlenecks are stage 2)
> and stage 4).
>
> This patch optimizes step 2) by transferring pages to the host in
> chunks. A chunk consists of guest physically continuous pages, and
> it is offered to the host via a base PFN (i.e. the start PFN of
> those physically continuous pages) and the size (i.e. the total
> number of the pages). A chunk is formated as below:

formatted

> --------------------------------------------------------
> | Base (52 bit)                        | Rsvd (12 bit) |
> --------------------------------------------------------
> --------------------------------------------------------
> | Size (52 bit)                        | Rsvd (12 bit) |
> --------------------------------------------------------
>
> By doing so, step 4) can also be optimized by doing address
> translation and madvise() in chunks rather than page by page.
>
> With this new feature, the above ballooning process takes ~590ms
> resulting in an improvement of ~85%.
>
> TODO: optimize stage 1) by allocating/freeing a chunk of pages
> instead of a single page each time.
>
> Signed-off-by: Wei Wang <wei.w.wang at intel.com>
> Signed-off-by: Liang Li <liang.z.li at intel.com>
> Suggested-by: Michael S.
Tsirkin <mst at redhat.com>

So we don't need the bitmap to talk to host, it is just a data structure we chose to maintain lists of pages, right? OK as far as it goes but you need much better isolation for it. Build a data structure with APIs such as _init, _cleanup, _add, _clear, _find_first, _find_next. Completely unrelated to pages, it just maintains bits. Then use it here.

> ---
> drivers/virtio/virtio_balloon.c | 384 +++++++++++++++++++++++++++++++++---
> include/uapi/linux/virtio_balloon.h | 13 ++
> 2 files changed, 374 insertions(+), 23 deletions(-)
>
> diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
> index f59cb4f..5e2e7cc 100644
> --- a/drivers/virtio/virtio_balloon.c
> +++ b/drivers/virtio/virtio_balloon.c
> @@ -42,6 +42,10 @@
> #define OOM_VBALLOON_DEFAULT_PAGES 256
> #define VIRTBALLOON_OOM_NOTIFY_PRIORITY 80
>
> +#define PAGE_BMAP_SIZE (8 * PAGE_SIZE)
> +#define PFNS_PER_PAGE_BMAP (PAGE_BMAP_SIZE * BITS_PER_BYTE)
> +#define PAGE_BMAP_COUNT_MAX 32
> +

Please prefix with VIRTIO_BALLOON_ and add comments.

> static int oom_pages = OOM_VBALLOON_DEFAULT_PAGES;
> module_param(oom_pages, int, S_IRUSR | S_IWUSR);
> MODULE_PARM_DESC(oom_pages, "pages to free on OOM");
> @@ -50,6 +54,10 @@ MODULE_PARM_DESC(oom_pages, "pages to free on OOM");
> static struct vfsmount *balloon_mnt;
> #endif
>
> +/* Types of pages to chunk */
> +#define PAGE_CHUNK_TYPE_BALLOON 0
> +

Doesn't look like you are ever adding more types in this patchset. Pls keep code simple, generalize it later.

> +#define MAX_PAGE_CHUNKS 4096

This is an order-4 allocation.
I'd make it 4095 and then it's an order-3 one.> struct virtio_balloon { > struct virtio_device *vdev; > struct virtqueue *inflate_vq, *deflate_vq, *stats_vq; > @@ -78,6 +86,32 @@ struct virtio_balloon { > /* Synchronize access/update to this struct virtio_balloon elements */ > struct mutex balloon_lock; > > + /* > + * Buffer for PAGE_CHUNK_TYPE_BALLOON: > + * virtio_balloon_page_chunk_hdr + > + * virtio_balloon_page_chunk * MAX_PAGE_CHUNKS > + */ > + struct virtio_balloon_page_chunk_hdr *balloon_page_chunk_hdr; > + struct virtio_balloon_page_chunk *balloon_page_chunk; > + > + /* Bitmap used to record pages */ > + unsigned long *page_bmap[PAGE_BMAP_COUNT_MAX]; > + /* Number of the allocated page_bmap */ > + unsigned int page_bmaps; > + > + /* > + * The allocated page_bmap size may be smaller than the pfn range of > + * the ballooned pages. In this case, we need to use the page_bmap > + * multiple times to cover the entire pfn range. It's like using a > + * short ruler several times to finish measuring a long object. > + * The start location of the ruler in the next measurement is the end > + * location of the ruler in the previous measurement. > + * > + * pfn_max & pfn_min: forms the pfn range of the ballooned pages > + * pfn_start & pfn_stop: records the start and stop pfn in each covercover? what does this mean? looks like you only use these to pass data to tell_host. so pass these as parameters and you won't need to keep them in this structure. And then you can move this comment to set_page_bmap where it belongs.> + */ > + unsigned long pfn_min, pfn_max, pfn_start, pfn_stop; > + > /* The array of pfns we tell the Host about. 
*/ > unsigned int num_pfns; > __virtio32 pfns[VIRTIO_BALLOON_ARRAY_PFNS_MAX]; > @@ -110,20 +144,201 @@ static void balloon_ack(struct virtqueue *vq) > wake_up(&vb->acked); > } > > -static void tell_host(struct virtio_balloon *vb, struct virtqueue *vq) > +static inline void init_page_bmap_range(struct virtio_balloon *vb) > +{ > + vb->pfn_min = ULONG_MAX; > + vb->pfn_max = 0; > +} > + > +static inline void update_page_bmap_range(struct virtio_balloon *vb, > + struct page *page) > +{ > + unsigned long balloon_pfn = page_to_balloon_pfn(page); > + > + vb->pfn_min = min(balloon_pfn, vb->pfn_min); > + vb->pfn_max = max(balloon_pfn, vb->pfn_max); > +} > + > +/* The page_bmap size is extended by adding more number of page_bmap */did you mean Allocate more bitmaps to cover the given number of pfns and add them to page_bmap ? This isn't what this function does. It blindly assumes 1 bitmap is allocated and allocates more, up to PAGE_BMAP_COUNT_MAX.> +static void extend_page_bmap_size(struct virtio_balloon *vb, > + unsigned long pfns) > +{ > + int i, bmaps; > + unsigned long bmap_len; > + > + bmap_len = ALIGN(pfns, BITS_PER_LONG) / BITS_PER_BYTE; > + bmap_len = ALIGN(bmap_len, PAGE_BMAP_SIZE);Align? PAGE_BMAP_SIZE doesn't even have to be a power of 2 ...> + bmaps = min((int)(bmap_len / PAGE_BMAP_SIZE), > + PAGE_BMAP_COUNT_MAX);I got lost here. Please use things like ARRAY_SIZE instead of macros.> + > + for (i = 1; i < bmaps; i++) { > + vb->page_bmap[i] = kmalloc(PAGE_BMAP_SIZE, GFP_KERNEL); > + if (vb->page_bmap[i]) > + vb->page_bmaps++; > + else > + break; > + } > +} > + > +static void free_extended_page_bmap(struct virtio_balloon *vb) > +{ > + int i, bmaps = vb->page_bmaps; > + > + for (i = 1; i < bmaps; i++) { > + kfree(vb->page_bmap[i]); > + vb->page_bmap[i] = NULL; > + vb->page_bmaps--; > + } > +} > +What's the magic number 1 here? Maybe you want to document what is going on. Here's a guess: We keep a single bmap around at all times. 
If memory does not fit there, we allocate up to PAGE_BMAP_COUNT_MAX of chunks.> +static void free_page_bmap(struct virtio_balloon *vb) > +{ > + int i; > + > + for (i = 0; i < vb->page_bmaps; i++) > + kfree(vb->page_bmap[i]); > +} > + > +static void clear_page_bmap(struct virtio_balloon *vb) > +{ > + int i; > + > + for (i = 0; i < vb->page_bmaps; i++) > + memset(vb->page_bmap[i], 0, PAGE_BMAP_SIZE); > +} > + > +static void send_page_chunks(struct virtio_balloon *vb, struct virtqueue *vq, > + int type, bool busy_wait)busy_wait seems unused. pls drop.> { > struct scatterlist sg; > + struct virtio_balloon_page_chunk_hdr *hdr; > + void *buf; > unsigned int len; > > - sg_init_one(&sg, vb->pfns, sizeof(vb->pfns[0]) * vb->num_pfns); > + switch (type) { > + case PAGE_CHUNK_TYPE_BALLOON: > + hdr = vb->balloon_page_chunk_hdr; > + len = 0; > + break; > + default: > + dev_warn(&vb->vdev->dev, "%s: chunk %d of unknown pages\n", > + __func__, type); > + return; > + } > > - /* We should always be able to add one buffer to an empty queue. */ > - virtqueue_add_outbuf(vq, &sg, 1, vb, GFP_KERNEL); > - virtqueue_kick(vq); > + buf = (void *)hdr - len;Moving back to before the header? How can this make sense? It works fine since len is 0, so just buf = hdr.> + len += sizeof(struct virtio_balloon_page_chunk_hdr); > + len += hdr->chunks * sizeof(struct virtio_balloon_page_chunk); > + sg_init_table(&sg, 1); > + sg_set_buf(&sg, buf, len); > + if (!virtqueue_add_outbuf(vq, &sg, 1, vb, GFP_KERNEL)) { > + virtqueue_kick(vq); > + if (busy_wait) > + while (!virtqueue_get_buf(vq, &len) && > + !virtqueue_is_broken(vq)) > + cpu_relax(); > + else > + wait_event(vb->acked, virtqueue_get_buf(vq, &len)); > + hdr->chunks = 0;Why zero it here after device used it? Better to zero before use.> + } > +} > + > +static void add_one_chunk(struct virtio_balloon *vb, struct virtqueue *vq, > + int type, u64 base, u64 size)what are the units here? 
Looks like it's in 4kbyte units?> +{ > + struct virtio_balloon_page_chunk_hdr *hdr; > + struct virtio_balloon_page_chunk *chunk; > + > + switch (type) { > + case PAGE_CHUNK_TYPE_BALLOON: > + hdr = vb->balloon_page_chunk_hdr; > + chunk = vb->balloon_page_chunk; > + break; > + default: > + dev_warn(&vb->vdev->dev, "%s: chunk %d of unknown pages\n", > + __func__, type); > + return; > + } > + chunk = chunk + hdr->chunks; > + chunk->base = cpu_to_le64(base << VIRTIO_BALLOON_CHUNK_BASE_SHIFT); > + chunk->size = cpu_to_le64(size << VIRTIO_BALLOON_CHUNK_SIZE_SHIFT); > + hdr->chunks++;Isn't this LE? You should keep it somewhere else.> + if (hdr->chunks == MAX_PAGE_CHUNKS) > + send_page_chunks(vb, vq, type, false);and zero chunks here?> +} > + > +static void chunking_pages_from_bmap(struct virtio_balloon *vb,Does this mean "convert_bmap_to_chunks"?> + struct virtqueue *vq, > + unsigned long pfn_start, > + unsigned long *bmap, > + unsigned long len) > +{ > + unsigned long pos = 0, end = len * BITS_PER_BYTE; > + > + while (pos < end) { > + unsigned long one = find_next_bit(bmap, end, pos); > + > + if (one < end) { > + unsigned long chunk_size, zero; > + > + zero = find_next_zero_bit(bmap, end, one + 1);zero and one are unhelpful names unless they equal 0 and 1. current/next?> + if (zero >= end) > + chunk_size = end - one; > + else > + chunk_size = zero - one; > + > + if (chunk_size) > + add_one_chunk(vb, vq, PAGE_CHUNK_TYPE_BALLOON, > + pfn_start + one, chunk_size);Still not so what does a bit refer to? page or 4kbytes? 
I think it should be a page.> + pos = one + chunk_size; > + } else > + break; > + } > +} > +> +static void tell_host(struct virtio_balloon *vb, struct virtqueue *vq) > +{ > + if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_BALLOON_CHUNKS)) { > + int pfns, page_bmaps, i; > + unsigned long pfn_start, pfns_len; > + > + pfn_start = vb->pfn_start; > + pfns = vb->pfn_stop - pfn_start + 1; > + pfns = roundup(roundup(pfns, BITS_PER_LONG), > + PFNS_PER_PAGE_BMAP); > + page_bmaps = pfns / PFNS_PER_PAGE_BMAP; > + pfns_len = pfns / BITS_PER_BYTE; > + > + for (i = 0; i < page_bmaps; i++) { > + unsigned int bmap_len = PAGE_BMAP_SIZE; > + > + /* The last one takes the leftover only */I don't understand what does this mean.> + if (i + 1 == page_bmaps) > + bmap_len = pfns_len - PAGE_BMAP_SIZE * i; > + > + chunking_pages_from_bmap(vb, vq, pfn_start + > + i * PFNS_PER_PAGE_BMAP, > + vb->page_bmap[i], bmap_len); > + } > + if (vb->balloon_page_chunk_hdr->chunks > 0) > + send_page_chunks(vb, vq, PAGE_CHUNK_TYPE_BALLOON, > + false); > + } else { > + struct scatterlist sg; > + unsigned int len; > > - /* When host has read buffer, this completes via balloon_ack */ > - wait_event(vb->acked, virtqueue_get_buf(vq, &len)); > + sg_init_one(&sg, vb->pfns, sizeof(vb->pfns[0]) * vb->num_pfns); > > + /* > + * We should always be able to add one buffer to an empty > + * queue. > + */ > + virtqueue_add_outbuf(vq, &sg, 1, vb, GFP_KERNEL); > + virtqueue_kick(vq); > + > + /* When host has read buffer, this completes via balloon_ack */ > + wait_event(vb->acked, virtqueue_get_buf(vq, &len)); > + } > } > > static void set_page_pfns(struct virtio_balloon *vb, > @@ -131,20 +346,73 @@ static void set_page_pfns(struct virtio_balloon *vb, > { > unsigned int i; > > - /* Set balloon pfns pointing at this page. > - * Note that the first pfn points at start of the page. */ > + /* > + * Set balloon pfns pointing at this page. > + * Note that the first pfn points at start of the page. 
> + */ > for (i = 0; i < VIRTIO_BALLOON_PAGES_PER_PAGE; i++) > pfns[i] = cpu_to_virtio32(vb->vdev, > page_to_balloon_pfn(page) + i); > } >Nice cleanup but pls split this out. This patch is big enough as it is.> +static void set_page_bmap(struct virtio_balloon *vb, > + struct list_head *pages, struct virtqueue *vq) > +{ > + unsigned long pfn_start, pfn_stop; > + struct page *page; > + bool found; > + > + vb->pfn_min = rounddown(vb->pfn_min, BITS_PER_LONG); > + vb->pfn_max = roundup(vb->pfn_max, BITS_PER_LONG); > + > + extend_page_bmap_size(vb, vb->pfn_max - vb->pfn_min + 1);This might not do anything in particular might not cover the given pfn range. Do we care? Why not?> + pfn_start = vb->pfn_min; > + > + while (pfn_start < vb->pfn_max) { > + pfn_stop = pfn_start + PFNS_PER_PAGE_BMAP * vb->page_bmaps; > + pfn_stop = pfn_stop < vb->pfn_max ? pfn_stop : vb->pfn_max; > + > + vb->pfn_start = pfn_start; > + clear_page_bmap(vb); > + found = false; > + > + list_for_each_entry(page, pages, lru) { > + unsigned long bmap_idx, bmap_pos, balloon_pfn; > + > + balloon_pfn = page_to_balloon_pfn(page); > + if (balloon_pfn < pfn_start || balloon_pfn > pfn_stop) > + continue; > + bmap_idx = (balloon_pfn - pfn_start) / > + PFNS_PER_PAGE_BMAP; > + bmap_pos = (balloon_pfn - pfn_start) % > + PFNS_PER_PAGE_BMAP; > + set_bit(bmap_pos, vb->page_bmap[bmap_idx]);Looks like this will crash if bmap_idx is out of range or if page_bmap allocation failed.> + > + found = true; > + } > + if (found) { > + vb->pfn_stop = pfn_stop; > + tell_host(vb, vq); > + } > + pfn_start = pfn_stop; > + } > + free_extended_page_bmap(vb); > +} > + > static unsigned fill_balloon(struct virtio_balloon *vb, size_t num) > { > struct balloon_dev_info *vb_dev_info = &vb->vb_dev_info; > unsigned num_allocated_pages; > + bool chunking = virtio_has_feature(vb->vdev, > + VIRTIO_BALLOON_F_BALLOON_CHUNKS); > > /* We can only do one array worth at a time. 
*/ > - num = min(num, ARRAY_SIZE(vb->pfns)); > + if (chunking) { > + init_page_bmap_range(vb); > + } else { > + /* We can only do one array worth at a time. */ > + num = min(num, ARRAY_SIZE(vb->pfns)); > + } > > mutex_lock(&vb->balloon_lock); > for (vb->num_pfns = 0; vb->num_pfns < num; > @@ -159,7 +427,10 @@ static unsigned fill_balloon(struct virtio_balloon *vb, size_t num) > msleep(200); > break; > } > - set_page_pfns(vb, vb->pfns + vb->num_pfns, page); > + if (chunking) > + update_page_bmap_range(vb, page); > + else > + set_page_pfns(vb, vb->pfns + vb->num_pfns, page); > vb->num_pages += VIRTIO_BALLOON_PAGES_PER_PAGE; > if (!virtio_has_feature(vb->vdev, > VIRTIO_BALLOON_F_DEFLATE_ON_OOM)) > @@ -168,8 +439,13 @@ static unsigned fill_balloon(struct virtio_balloon *vb, size_t num) > > num_allocated_pages = vb->num_pfns; > /* Did we get any? */ > - if (vb->num_pfns != 0) > - tell_host(vb, vb->inflate_vq); > + if (vb->num_pfns != 0) { > + if (chunking) > + set_page_bmap(vb, &vb_dev_info->pages, > + vb->inflate_vq); > + else > + tell_host(vb, vb->inflate_vq); > + } > mutex_unlock(&vb->balloon_lock); > > return num_allocated_pages; > @@ -195,6 +471,13 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num) > struct page *page; > struct balloon_dev_info *vb_dev_info = &vb->vb_dev_info; > LIST_HEAD(pages); > + bool chunking = virtio_has_feature(vb->vdev, > + VIRTIO_BALLOON_F_BALLOON_CHUNKS); > + if (chunking) > + init_page_bmap_range(vb); > + else > + /* We can only do one array worth at a time. */ > + num = min(num, ARRAY_SIZE(vb->pfns)); > > /* We can only do one array worth at a time. 
*/ > num = min(num, ARRAY_SIZE(vb->pfns)); > @@ -208,6 +491,10 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num) > if (!page) > break; > set_page_pfns(vb, vb->pfns + vb->num_pfns, page); > + if (chunking) > + update_page_bmap_range(vb, page); > + else > + set_page_pfns(vb, vb->pfns + vb->num_pfns, page); > list_add(&page->lru, &pages); > vb->num_pages -= VIRTIO_BALLOON_PAGES_PER_PAGE; > } > @@ -218,8 +505,12 @@ static unsigned leak_balloon(struct virtio_balloon *vb, size_t num) > * virtio_has_feature(vdev, VIRTIO_BALLOON_F_MUST_TELL_HOST); > * is true, we *have* to do it in this order > */ > - if (vb->num_pfns != 0) > - tell_host(vb, vb->deflate_vq); > + if (vb->num_pfns != 0) { > + if (chunking) > + set_page_bmap(vb, &pages, vb->deflate_vq); > + else > + tell_host(vb, vb->deflate_vq); > + } > release_pages_balloon(vb, &pages); > mutex_unlock(&vb->balloon_lock); > return num_freed_pages; > @@ -431,6 +722,13 @@ static int init_vqs(struct virtio_balloon *vb) > } > > #ifdef CONFIG_BALLOON_COMPACTION > + > +static void tell_host_one_page(struct virtio_balloon *vb, > + struct virtqueue *vq, struct page *page) > +{ > + add_one_chunk(vb, vq, PAGE_CHUNK_TYPE_BALLOON, page_to_pfn(page), 1);This passes 4kbytes to host which seems wrong - I think you want a full page.> +} > + > /* > * virtballoon_migratepage - perform the balloon page migration on behalf of > * a compation thread. 
(called under page lock) > @@ -454,6 +752,8 @@ static int virtballoon_migratepage(struct balloon_dev_info *vb_dev_info, > { > struct virtio_balloon *vb = container_of(vb_dev_info, > struct virtio_balloon, vb_dev_info); > + bool chunking = virtio_has_feature(vb->vdev, > + VIRTIO_BALLOON_F_BALLOON_CHUNKS); > unsigned long flags; > > /* > @@ -475,16 +775,22 @@ static int virtballoon_migratepage(struct balloon_dev_info *vb_dev_info, > vb_dev_info->isolated_pages--; > __count_vm_event(BALLOON_MIGRATE); > spin_unlock_irqrestore(&vb_dev_info->pages_lock, flags); > - vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE; > - set_page_pfns(vb, vb->pfns, newpage); > - tell_host(vb, vb->inflate_vq); > - > + if (chunking) { > + tell_host_one_page(vb, vb->inflate_vq, newpage); > + } else { > + vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE; > + set_page_pfns(vb, vb->pfns, newpage); > + tell_host(vb, vb->inflate_vq); > + } > /* balloon's page migration 2nd step -- deflate "page" */ > balloon_page_delete(page); > - vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE; > - set_page_pfns(vb, vb->pfns, page); > - tell_host(vb, vb->deflate_vq); > - > + if (chunking) { > + tell_host_one_page(vb, vb->deflate_vq, page); > + } else { > + vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE; > + set_page_pfns(vb, vb->pfns, page); > + tell_host(vb, vb->deflate_vq); > + } > mutex_unlock(&vb->balloon_lock); > > put_page(page); /* balloon reference */ > @@ -511,6 +817,32 @@ static struct file_system_type balloon_fs = { > > #endif /* CONFIG_BALLOON_COMPACTION */ > > +static void balloon_page_chunk_init(struct virtio_balloon *vb) > +{ > + void *buf; > + > + /* > + * By default, we allocate page_bmap[0] only. More page_bmap will be > + * allocated on demand. 
> + */ > + vb->page_bmap[0] = kmalloc(PAGE_BMAP_SIZE, GFP_KERNEL); > + buf = kmalloc(sizeof(struct virtio_balloon_page_chunk_hdr) + > + sizeof(struct virtio_balloon_page_chunk) * > + MAX_PAGE_CHUNKS, GFP_KERNEL); > + if (!vb->page_bmap[0] || !buf) { > + __virtio_clear_bit(vb->vdev, VIRTIO_BALLOON_F_BALLOON_CHUNKS);this doesn't work as expected as features has been OK'd by then. You want something like validate_features that I posted. See "virtio: allow drivers to validate features".> + kfree(vb->page_bmap[0]);Looks like this will double free. you want to zero them I think.> + kfree(vb->balloon_page_chunk_hdr); > + dev_warn(&vb->vdev->dev, "%s: failed\n", __func__); > + } else { > + vb->page_bmaps = 1; > + vb->balloon_page_chunk_hdr = buf; > + vb->balloon_page_chunk_hdr->chunks = 0; > + vb->balloon_page_chunk = buf + > + sizeof(struct virtio_balloon_page_chunk_hdr); > + } > +} > + > static int virtballoon_probe(struct virtio_device *vdev) > { > struct virtio_balloon *vb; > @@ -533,6 +865,10 @@ static int virtballoon_probe(struct virtio_device *vdev) > spin_lock_init(&vb->stop_update_lock); > vb->stop_update = false; > vb->num_pages = 0; > + > + if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_BALLOON_CHUNKS)) > + balloon_page_chunk_init(vb); > + > mutex_init(&vb->balloon_lock); > init_waitqueue_head(&vb->acked); > vb->vdev = vdev; > @@ -609,6 +945,7 @@ static void virtballoon_remove(struct virtio_device *vdev) > cancel_work_sync(&vb->update_balloon_stats_work); > > remove_common(vb); > + free_page_bmap(vb); > if (vb->vb_dev_info.inode) > iput(vb->vb_dev_info.inode); > kfree(vb); > @@ -649,6 +986,7 @@ static unsigned int features[] = { > VIRTIO_BALLOON_F_MUST_TELL_HOST, > VIRTIO_BALLOON_F_STATS_VQ, > VIRTIO_BALLOON_F_DEFLATE_ON_OOM, > + VIRTIO_BALLOON_F_BALLOON_CHUNKS, > }; > > static struct virtio_driver virtio_balloon_driver = { > diff --git a/include/uapi/linux/virtio_balloon.h b/include/uapi/linux/virtio_balloon.h > index 343d7dd..be317b7 100644 > --- 
a/include/uapi/linux/virtio_balloon.h > +++ b/include/uapi/linux/virtio_balloon.h > @@ -34,6 +34,7 @@ > #define VIRTIO_BALLOON_F_MUST_TELL_HOST 0 /* Tell before reclaiming pages */ > #define VIRTIO_BALLOON_F_STATS_VQ 1 /* Memory Stats virtqueue */ > #define VIRTIO_BALLOON_F_DEFLATE_ON_OOM 2 /* Deflate balloon on OOM */ > +#define VIRTIO_BALLOON_F_BALLOON_CHUNKS 3 /* Inflate/Deflate pages in chunks */ > > /* Size of a PFN in the balloon interface. */ > #define VIRTIO_BALLOON_PFN_SHIFT 12 > @@ -82,4 +83,16 @@ struct virtio_balloon_stat { > __virtio64 val; > } __attribute__((packed)); > > +struct virtio_balloon_page_chunk_hdr { > + /* Number of chunks in the payload */ > + __le32 chunks;You want to make this __le64 to align everything to 64 bit.> +}; > + > +#define VIRTIO_BALLOON_CHUNK_BASE_SHIFT 12 > +#define VIRTIO_BALLOON_CHUNK_SIZE_SHIFT 12 > +struct virtio_balloon_page_chunk {so rename this virtio_balloon_page_chunk_entry> + __le64 base; > + __le64 size; > +}; > +And then: struct virtio_balloon_page_chunk { struct virtio_balloon_page_chunk_hdr hdr; struct virtio_balloon_page_chunk_entry entries[]; };> #endif /* _LINUX_VIRTIO_BALLOON_H */ > -- > 2.7.4
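The layout the review converges on above — a chunk header followed by an array of entries, each a 64-bit base and size with the low 12 bits reserved — can be sketched in plain C. This only illustrates the packing arithmetic from the commit message's diagram; the struct and helper names are assumptions, not the interface that was eventually merged, and the `cpu_to_le64()` conversion the driver performs is omitted here:

```c
#include <assert.h>
#include <stdint.h>

#define CHUNK_BASE_SHIFT 12	/* cf. VIRTIO_BALLOON_CHUNK_BASE_SHIFT */
#define CHUNK_SIZE_SHIFT 12	/* cf. VIRTIO_BALLOON_CHUNK_SIZE_SHIFT */

/* One entry: base PFN and page count, each shifted into the high 52 bits
 * so the low 12 bits stay reserved. Illustrative only. */
struct page_chunk_entry {
	uint64_t base;
	uint64_t size;
};

static void chunk_entry_set(struct page_chunk_entry *e,
			    uint64_t pfn, uint64_t npages)
{
	e->base = pfn << CHUNK_BASE_SHIFT;
	e->size = npages << CHUNK_SIZE_SHIFT;
}

static uint64_t chunk_entry_pfn(const struct page_chunk_entry *e)
{
	return e->base >> CHUNK_BASE_SHIFT;
}

static uint64_t chunk_entry_npages(const struct page_chunk_entry *e)
{
	return e->size >> CHUNK_SIZE_SHIFT;
}
```

With this packing, a free page block of 2^order pages at PFN p becomes one `chunk_entry_set(&e, p, 1ULL << order)`, matching the `add_one_chunk()` call in the unused-pages loop of patch 5/5.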
Michael S. Tsirkin
2017-Apr-13 17:08 UTC
[PATCH v9 5/5] virtio-balloon: VIRTIO_BALLOON_F_MISC_VQ
On Thu, Apr 13, 2017 at 05:35:08PM +0800, Wei Wang wrote:
> Add a new vq, miscq, to handle miscellaneous requests between the device
> and the driver.
>
> This patch implemnts the VIRTIO_BALLOON_MISCQ_INQUIRE_UNUSED_PAGES

implements

> request sent from the device.

Commands are sent from host and handled on guest. In fact how is this so different from stats? How about reusing the stats vq then? You can use one buffer for stats and one buffer for commands.

> Upon receiving this request from the
> miscq, the driver offers to the device the guest unused pages.
>
> Tests have shown that skipping the transfer of unused pages of a 32G
> guest can get the live migration time reduced to 1/8.
>
> Signed-off-by: Wei Wang <wei.w.wang at intel.com>
> Signed-off-by: Liang Li <liang.z.li at intel.com>
> ---
> drivers/virtio/virtio_balloon.c | 209 +++++++++++++++++++++++++++++++++---
> include/uapi/linux/virtio_balloon.h | 8 ++
> 2 files changed, 204 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
> index 5e2e7cc..95c703e 100644
> --- a/drivers/virtio/virtio_balloon.c
> +++ b/drivers/virtio/virtio_balloon.c
> @@ -56,11 +56,12 @@ static struct vfsmount *balloon_mnt;
>
> /* Types of pages to chunk */
> #define PAGE_CHUNK_TYPE_BALLOON 0
> +#define PAGE_CHUNK_TYPE_UNUSED 1
>
> #define MAX_PAGE_CHUNKS 4096
> struct virtio_balloon {
> 	struct virtio_device *vdev;
> -	struct virtqueue *inflate_vq, *deflate_vq, *stats_vq;
> +	struct virtqueue *inflate_vq, *deflate_vq, *stats_vq, *miscq;
>
> 	/* The balloon servicing is delegated to a freezable workqueue.
*/

> struct work_struct update_balloon_stats_work;
> @@ -94,6 +95,19 @@ struct virtio_balloon {
> struct virtio_balloon_page_chunk_hdr *balloon_page_chunk_hdr;
> struct virtio_balloon_page_chunk *balloon_page_chunk;
>
> + /*
> + * Buffer for PAGE_CHUNK_TYPE_UNUSED:
> + * virtio_balloon_miscq_hdr +
> + * virtio_balloon_page_chunk_hdr +
> + * virtio_balloon_page_chunk * MAX_PAGE_CHUNKS
> + */
> + struct virtio_balloon_miscq_hdr *miscq_out_hdr;
> + struct virtio_balloon_page_chunk_hdr *unused_page_chunk_hdr;
> + struct virtio_balloon_page_chunk *unused_page_chunk;
> +
> + /* Buffer for host to send cmd to miscq */
> + struct virtio_balloon_miscq_hdr *miscq_in_hdr;
> +
> /* Bitmap used to record pages */
> unsigned long *page_bmap[PAGE_BMAP_COUNT_MAX];
> /* Number of the allocated page_bmap */
> @@ -220,6 +234,10 @@ static void send_page_chunks(struct virtio_balloon *vb, struct virtqueue *vq,
> hdr = vb->balloon_page_chunk_hdr;
> len = 0;
> break;
> + case PAGE_CHUNK_TYPE_UNUSED:
> + hdr = vb->unused_page_chunk_hdr;
> + len = sizeof(struct virtio_balloon_miscq_hdr);
> + break;
> default:
> dev_warn(&vb->vdev->dev, "%s: chunk %d of unknown pages\n",
> __func__, type);
> @@ -254,6 +272,10 @@ static void add_one_chunk(struct virtio_balloon *vb, struct virtqueue *vq,
> hdr = vb->balloon_page_chunk_hdr;
> chunk = vb->balloon_page_chunk;
> break;
> + case PAGE_CHUNK_TYPE_UNUSED:
> + hdr = vb->unused_page_chunk_hdr;
> + chunk = vb->unused_page_chunk;
> + break;
> default:
> dev_warn(&vb->vdev->dev, "%s: chunk %d of unknown pages\n",
> __func__, type);
> @@ -686,28 +708,139 @@ static void update_balloon_size_func(struct work_struct *work)
> queue_work(system_freezable_wq, work);
> }
>
> +static void miscq_in_hdr_add(struct virtio_balloon *vb)
> +{
> + struct scatterlist sg_in;
> +
> + sg_init_one(&sg_in, vb->miscq_in_hdr,
> + sizeof(struct virtio_balloon_miscq_hdr));
> + if (virtqueue_add_inbuf(vb->miscq, &sg_in, 1, vb->miscq_in_hdr,
> + GFP_KERNEL) < 0) {
> + __virtio_clear_bit(vb->vdev,
> + VIRTIO_BALLOON_F_MISC_VQ);
> + dev_warn(&vb->vdev->dev, "%s: add miscq_in_hdr err\n",
> + __func__);
> + return;
> + }
> + virtqueue_kick(vb->miscq);
> +}
> +
> +static void miscq_send_unused_pages(struct virtio_balloon *vb)
> +{
> + struct virtio_balloon_miscq_hdr *miscq_out_hdr = vb->miscq_out_hdr;
> + struct virtqueue *vq = vb->miscq;
> + int ret = 0;
> + unsigned int order = 0, migratetype = 0;
> + struct zone *zone = NULL;
> + struct page *page = NULL;
> + u64 pfn;
> +
> + miscq_out_hdr->cmd = VIRTIO_BALLOON_MISCQ_INQUIRE_UNUSED_PAGES;

Gets endianness and whitespace wrong. Please use static checkers to catch this type of error.

> + miscq_out_hdr->flags = 0;
> +
> + for_each_populated_zone(zone) {
> + for (order = MAX_ORDER - 1; order > 0; order--) {
> + for (migratetype = 0; migratetype < MIGRATE_TYPES;
> + migratetype++) {
> + do {
> + ret = inquire_unused_page_block(zone,
> + order, migratetype, &page);
> + if (!ret) {
> + pfn = (u64)page_to_pfn(page);
> + add_one_chunk(vb, vq,
> + PAGE_CHUNK_TYPE_UNUSED,
> + pfn,
> + (u64)(1 << order));
> + }
> + } while (!ret);
> + }
> + }
> + }
> + miscq_out_hdr->flags |= VIRTIO_BALLOON_MISCQ_F_COMPLETE;

And where is miscq_out_hdr used? I see no add_outbuf anywhere. Things like this should be passed through function parameters and not stuffed into the device structure; fields should be initialized before use, not where we happen to have the data handy.
Also, _F_ is normally a bit number; you use it as a value here.

> + send_page_chunks(vb, vq, PAGE_CHUNK_TYPE_UNUSED, true);
> +}
> +
> +static void miscq_handle(struct virtqueue *vq)
> +{
> + struct virtio_balloon *vb = vq->vdev->priv;
> + struct virtio_balloon_miscq_hdr *hdr;
> + unsigned int len;
> +
> + hdr = virtqueue_get_buf(vb->miscq, &len);
> + if (!hdr || len != sizeof(struct virtio_balloon_miscq_hdr)) {
> + dev_warn(&vb->vdev->dev, "%s: invalid miscq hdr len\n",
> + __func__);
> + miscq_in_hdr_add(vb);
> + return;
> + }
> + switch (hdr->cmd) {
> + case VIRTIO_BALLOON_MISCQ_INQUIRE_UNUSED_PAGES:
> + miscq_send_unused_pages(vb);
> + break;
> + default:
> + dev_warn(&vb->vdev->dev, "%s: miscq cmd %d not supported\n",
> + __func__, hdr->cmd);
> + }
> + miscq_in_hdr_add(vb);
> +}
> +
> static int init_vqs(struct virtio_balloon *vb)
> {
> - struct virtqueue *vqs[3];
> - vq_callback_t *callbacks[] = { balloon_ack, balloon_ack, stats_request };
> - static const char * const names[] = { "inflate", "deflate", "stats" };
> - int err, nvqs;
> + struct virtqueue **vqs;
> + vq_callback_t **callbacks;
> + const char **names;
> + int err = -ENOMEM;
> + int i, nvqs;
> +
> + /* Inflateq and deflateq are used unconditionally */
> + nvqs = 2;
> +
> + if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ))
> + nvqs++;
> + if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_MISC_VQ))
> + nvqs++;
> +
> + /* Allocate space for find_vqs parameters */
> + vqs = kcalloc(nvqs, sizeof(*vqs), GFP_KERNEL);
> + if (!vqs)
> + goto err_vq;
> + callbacks = kmalloc_array(nvqs, sizeof(*callbacks), GFP_KERNEL);
> + if (!callbacks)
> + goto err_callback;
> + names = kmalloc_array(nvqs, sizeof(*names), GFP_KERNEL);
> + if (!names)
> + goto err_names;
> +

All of 4 VQs, why are dynamic allocations called for?

> + callbacks[0] = balloon_ack;
> + names[0] = "inflate";
> + callbacks[1] = balloon_ack;
> + names[1] = "deflate";
> +
> + i = 2;
> + if (virtio_has_feature(vb->vdev,
> + VIRTIO_BALLOON_F_STATS_VQ)) {
> + callbacks[i] = stats_request;
> + names[i] = "stats";
> + i++;
> + }
>
> - /*
> - * We expect two virtqueues: inflate and deflate, and
> - * optionally stat.
> - */
> - nvqs = virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ) ? 3 : 2;
> - err = vb->vdev->config->find_vqs(vb->vdev, nvqs, vqs, callbacks, names);
> + if (virtio_has_feature(vb->vdev,
> + VIRTIO_BALLOON_F_MISC_VQ)) {
> + callbacks[i] = miscq_handle;
> + names[i] = "miscq";
> + }
> +
> + err = vb->vdev->config->find_vqs(vb->vdev, nvqs, vqs, callbacks,
> + names);
> if (err)
> - return err;
> + goto err_find;
>
> vb->inflate_vq = vqs[0];
> vb->deflate_vq = vqs[1];
> + i = 2;
> if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) {
> struct scatterlist sg;
> - vb->stats_vq = vqs[2];
>
> + vb->stats_vq = vqs[i++];
> /*
> * Prime this virtqueue with one buffer so the hypervisor can
> * use it to signal us later (it can't be broken yet!).
> @@ -718,7 +851,25 @@ static int init_vqs(struct virtio_balloon *vb)
> BUG();
> virtqueue_kick(vb->stats_vq);
> }
> +
> + if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_MISC_VQ)) {
> + vb->miscq = vqs[i];
> + miscq_in_hdr_add(vb);
> + }
> +
> + kfree(names);
> + kfree(callbacks);
> + kfree(vqs);
> return 0;
> +
> +err_find:
> + kfree(names);
> +err_names:
> + kfree(callbacks);
> +err_callback:
> + kfree(vqs);
> +err_vq:
> + return err;
> }
>
> #ifdef CONFIG_BALLOON_COMPACTION
> @@ -843,6 +994,32 @@ static void balloon_page_chunk_init(struct virtio_balloon *vb)
> }
> }
>
> +static void miscq_init(struct virtio_balloon *vb)
> +{
> + void *buf;
> +
> + vb->miscq_in_hdr = kmalloc(sizeof(struct virtio_balloon_miscq_hdr),
> + GFP_KERNEL);
> + buf = kmalloc(sizeof(struct virtio_balloon_miscq_hdr) +
> + sizeof(struct virtio_balloon_page_chunk_hdr) +
> + sizeof(struct virtio_balloon_page_chunk) *
> + MAX_PAGE_CHUNKS, GFP_KERNEL);

Maybe reduce MAX_PAGE_CHUNKS even further to fit in an order-3 allocation.

> + if (!vb->miscq_in_hdr || !buf) {
> + kfree(buf);
> + kfree(vb->miscq_in_hdr);
> + __virtio_clear_bit(vb->vdev, VIRTIO_BALLOON_F_MISC_VQ);

Again, this does not really work here. In this case it might be best to just fail probe.

> + dev_warn(&vb->vdev->dev, "%s: failed\n", __func__);
> + } else {
> + vb->miscq_out_hdr = buf;
> + vb->unused_page_chunk_hdr = buf +
> + sizeof(struct virtio_balloon_miscq_hdr);
> + vb->unused_page_chunk_hdr->chunks = 0;
> + vb->unused_page_chunk = buf +
> + sizeof(struct virtio_balloon_miscq_hdr) +
> + sizeof(struct virtio_balloon_page_chunk_hdr);
> + }
> +}
> +
> static int virtballoon_probe(struct virtio_device *vdev)
> {
> struct virtio_balloon *vb;
> @@ -869,6 +1046,9 @@ static int virtballoon_probe(struct virtio_device *vdev)
> if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_BALLOON_CHUNKS))
> balloon_page_chunk_init(vb);
>
> + if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_MISC_VQ))
> + miscq_init(vb);
> +
> mutex_init(&vb->balloon_lock);
> init_waitqueue_head(&vb->acked);
> vb->vdev = vdev;
> @@ -946,6 +1126,8 @@ static void virtballoon_remove(struct virtio_device *vdev)
>
> remove_common(vb);
> free_page_bmap(vb);
> + kfree(vb->miscq_out_hdr);
> + kfree(vb->miscq_in_hdr);
> if (vb->vb_dev_info.inode)
> iput(vb->vb_dev_info.inode);
> kfree(vb);
> @@ -987,6 +1169,7 @@ static unsigned int features[] = {
> VIRTIO_BALLOON_F_STATS_VQ,
> VIRTIO_BALLOON_F_DEFLATE_ON_OOM,
> VIRTIO_BALLOON_F_BALLOON_CHUNKS,
> + VIRTIO_BALLOON_F_MISC_VQ,
> };
>
> static struct virtio_driver virtio_balloon_driver = {
> diff --git a/include/uapi/linux/virtio_balloon.h b/include/uapi/linux/virtio_balloon.h
> index be317b7..96bdc86 100644
> --- a/include/uapi/linux/virtio_balloon.h
> +++ b/include/uapi/linux/virtio_balloon.h
> @@ -35,6 +35,7 @@
> #define VIRTIO_BALLOON_F_STATS_VQ 1 /* Memory Stats virtqueue */
> #define VIRTIO_BALLOON_F_DEFLATE_ON_OOM 2 /* Deflate balloon on OOM */
> #define VIRTIO_BALLOON_F_BALLOON_CHUNKS 3 /* Inflate/Deflate pages in chunks */
> +#define VIRTIO_BALLOON_F_MISC_VQ 4 /* Virtqueue for misc. requests */

Is "misc" the best we can do? I think these are actually host commands - aren't they?

>
> /* Size of a PFN in the balloon interface. */
> #define VIRTIO_BALLOON_PFN_SHIFT 12
> @@ -95,4 +96,11 @@ struct virtio_balloon_page_chunk {
> __le64 size;
> };
>
> +#define VIRTIO_BALLOON_MISCQ_INQUIRE_UNUSED_PAGES 0

Meaning what? Is this a command value? Is this a command to report unused memory then? Let's call it that, then.

> +#define VIRTIO_BALLOON_MISCQ_F_COMPLETE 0x1

Meaning what?

> +struct virtio_balloon_miscq_hdr {
> + __le16 cmd;
> + __le16 flags;

Add padding to make it a full 64 bits.

> +};
> +
> #endif /* _LINUX_VIRTIO_BALLOON_H */
> --
> 2.7.4
Andrew Morton
2017-Apr-13 20:02 UTC
[PATCH v9 3/5] mm: function to offer a page block on the free list
On Thu, 13 Apr 2017 17:35:06 +0800 Wei Wang <wei.w.wang at intel.com> wrote:

> Add a function to find a page block on the free list specified by the
> caller. Pages from the page block may be used immediately after the
> function returns. The caller is responsible for detecting or preventing
> the use of such pages.
>
> ...
>
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4498,6 +4498,93 @@ void show_free_areas(unsigned int filter)
> show_swap_cache_info();
> }
>
> +/**
> + * Heuristically get a page block in the system that is unused.
> + * It is possible that pages from the page block are used immediately after
> + * inquire_unused_page_block() returns. It is the caller's responsibility
> + * to either detect or prevent the use of such pages.
> + *
> + * The free list to check: zone->free_area[order].free_list[migratetype].
> + *
> + * If the caller supplied page block (i.e. **page) is on the free list, offer
> + * the next page block on the list to the caller. Otherwise, offer the first
> + * page block on the list.
> + *
> + * Return 0 when a page block is found on the caller specified free list.
> + */
> +int inquire_unused_page_block(struct zone *zone, unsigned int order,
> + unsigned int migratetype, struct page **page)
> +{

Perhaps we can wrap this in the appropriate ifdef so the kernels which won't be using virtio-balloon don't carry the added overhead.
Matthew Wilcox
2017-Apr-13 20:44 UTC
[PATCH v9 0/5] Extend virtio-balloon for fast (de)inflating & fast live migration
On Thu, Apr 13, 2017 at 05:35:03PM +0800, Wei Wang wrote:

> 2) transfer the guest unused pages to the host so that they
> can be skipped to migrate in live migration.

I don't understand this second bit. You leave the pages on the free list, and tell the host they're free. What's preventing somebody else from allocating them and using them for something? Is the guest semi-frozen at this point with just enough of it running to ask the balloon driver to do things?
Michael S. Tsirkin
2017-Apr-14 01:50 UTC
[PATCH v9 0/5] Extend virtio-balloon for fast (de)inflating & fast live migration
On Thu, Apr 13, 2017 at 01:44:11PM -0700, Matthew Wilcox wrote:

> On Thu, Apr 13, 2017 at 05:35:03PM +0800, Wei Wang wrote:
> > 2) transfer the guest unused pages to the host so that they
> > can be skipped to migrate in live migration.
>
> I don't understand this second bit. You leave the pages on the free list,
> and tell the host they're free. What's preventing somebody else from
> allocating them and using them for something? Is the guest semi-frozen
> at this point with just enough of it running to ask the balloon driver
> to do things?

There's missing documentation here. The way things actually work is that the host sends the guest a request for unused pages and then write-protects all memory. So the guest isn't frozen, but any changes will be detected by the host.