Wei Wang
2018-Jun-25 12:05 UTC
[PATCH v34 0/4] Virtio-balloon: support free page reporting
This patch series is separated from the previous "Virtio-balloon
Enhancement" series. The new feature, VIRTIO_BALLOON_F_FREE_PAGE_HINT,
implemented by this series enables the virtio-balloon driver to report
hints of guest free pages to the host. It can be used to accelerate live
migration of VMs. Here is an introduction of this usage:

Live migration needs to transfer the VM's memory from the source machine
to the destination round by round. For the 1st round, all the VM's memory
is transferred. From the 2nd round, only the pieces of memory that were
written by the guest (after the 1st round) are transferred. One method
that is popularly used by the hypervisor to track which part of memory is
written is to write-protect all the guest memory.

This feature enables the optimization of skipping the transfer of guest
free pages during VM live migration. It is not a problem if the memory
pages are used after they are given to the hypervisor as free page hints,
because they will be tracked by the hypervisor and transferred in the
subsequent round if they are used and written.

* Tests
- Test Environment
    Host: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
    Guest: 8G RAM, 4 vCPU
    Migration setup: migrate_set_speed 100G, migrate_set_downtime 2 second

- Test Results
    - Idle Guest Live Migration Time (results are averaged over 10 runs):
        - Optimization vs. Legacy = 284ms vs. 1757ms --> ~84% reduction
    - Guest with Linux Compilation Workload (make bzImage -j4):
        - Live Migration Time (average)
          Optimization vs. Legacy = 1402ms vs. 2528ms --> ~44% reduction
        - Linux Compilation Time
          Optimization vs. Legacy = 5min6s vs. 5min12s
          --> no obvious difference

ChangeLog:
v33->v34:
- mm:
    - add a new API, max_free_page_blocks, which estimates the max
      number of free page blocks that a free page list may have
    - get_from_free_page_list: store addresses to multiple arrays,
      instead of just one array. This removes the limitation of being
      able to report only 2TB of free memory (the largest array that
      can be allocated on x86 is 4MB, which can store 2^19 addresses
      of 4MB free page blocks).
- virtio-balloon:
    - allocate multiple arrays to load free page hints;
    - use the same guest/host interaction method as in v32; the
      differences are
        - the hints are transferred array by array, instead of
          one by one;
        - the free page block size of a hint is sent along with the
          cmd id to host, so that host knows each address represents,
          e.g., a 4MB memory block in our case.
v32->v33:
- mm/get_from_free_page_list: the new implementation to get free page
  hints based on the suggestions from Linus:
  https://lkml.org/lkml/2018/6/11/764
  This avoids the complex call chain, and looks more prudent.
- virtio-balloon:
    - use a fixed-size buffer to get free page hints;
    - remove the cmd id related interface. Now host can just send a free
      page hint command to the guest (via the host_cmd config register)
      to start the reporting. Currently the guest reports only the max
      order free page hints to host, which has generated similarly good
      results as before. But the interface used by virtio-balloon to
      report can support reporting more orders in the future when there
      is a need.
v31->v32:
- virtio-balloon:
    - rename cmd_id_use to cmd_id_active;
    - report_free_page_func: detach used buffers after host sends a vq
      interrupt, instead of busy waiting for used buffers.
v30->v31:
- virtio-balloon:
    - virtio_balloon_send_free_pages: return -EINTR rather than 1 to
      indicate an active stop requested by host; add more comments to
      explain the lockless access to cmd_id_received;
    - add_one_sg: add a TODO comment about a possible improvement.
v29->v30:
- mm/walk_free_mem_block: add cond_resched() for each order
v28->v29:
- mm/page_poison: only expose page_poisoning_enabled(), rather than the
  broader changes done in v28, as we are not 100% confident about those
  for now.
- virtio-balloon: use a separate buffer for the stop cmd, instead of
  having the start and stop cmd use the same buffer. This avoids the
  corner case that the start cmd is overridden by the stop cmd when the
  host has a delay in reading the start cmd.
v27->v28:
- mm/page_poison: move PAGE_POISON to page_poison.c and add a function
  to expose the page poison value to kernel modules.
v26->v27:
- add a new patch to expose page_poisoning_enabled to kernel modules
- virtio-balloon: set poison_val to 0xaaaaaaaa, instead of 0xaa
v25->v26: virtio-balloon changes only
- remove kicking the free page vq since the host now polls the vq after
  initiating the reporting
- report_free_page_func: detach all the used buffers after sending the
  stop cmd id. This avoids leaving the detaching burden (i.e. overhead)
  to the next cmd id. Detaching here isn't considered overhead since the
  stop cmd id has been sent, and host has already moved forward.
v24->v25:
- mm: change walk_free_mem_block to return 0 (instead of true) on
  completing the report, and return a non-zero value from the callback,
  which stops the reporting.
- virtio-balloon:
    - use enum instead of define for VIRTIO_BALLOON_VQ_INFLATE etc.
    - avoid __virtio_clear_bit when bailing out;
    - a new method to avoid reporting the same cmd id to host twice
    - destroy_workqueue can cancel free page work when the feature is
      negotiated;
    - fail probe when the free page vq size is less than 2.
v23->v24:
- change feature name VIRTIO_BALLOON_F_FREE_PAGE_VQ to
  VIRTIO_BALLOON_F_FREE_PAGE_HINT
- kick when vq->num_free < half full, instead of "= half full"
- replace BUG_ON with bailing out
- check vb->balloon_wq in probe(); if null, bail out
- add a new feature bit for page poisoning
- solve the corner case of one cmd id being sent to host twice
v22->v23:
- change to kick the device when the vq is half-way full;
- open-code batch_free_page_sg into add_one_sg;
- change cmd_id from "uint32_t" to "__virtio32";
- reserve one entry in the vq for the driver to send cmd_id, instead of
  busywaiting for an available entry;
- add a "stop_update" check before queue_work for prudence for now;
  will have a separate patch to discuss this flag check later;
- init_vqs: change to put some variables on the stack to have a simpler
  implementation;
- add destroy_workqueue(vb->balloon_wq);
v21->v22:
- add_one_sg: some code and comment re-arrangement
- send_cmd_id: handle a corner case

For previous ChangeLog, please reference
https://lwn.net/Articles/743660/

Wei Wang (4):
  mm: support to get hints of free page blocks
  virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT
  mm/page_poison: expose page_poisoning_enabled to kernel modules
  virtio-balloon: VIRTIO_BALLOON_F_PAGE_POISON

 drivers/virtio/virtio_balloon.c     | 357 ++++++++++++++++++++++++++++++++----
 include/linux/mm.h                  |   3 +
 include/uapi/linux/virtio_balloon.h |  14 ++
 mm/page_alloc.c                     |  82 +++++++++
 mm/page_poison.c                    |   6 +
 5 files changed, 426 insertions(+), 36 deletions(-)

--
2.7.4
Wei Wang
2018-Jun-25 12:05 UTC
[PATCH v34 1/4] mm: support to get hints of free page blocks
This patch adds support to get free page blocks from a free page list. The physical addresses of the blocks are stored to the arrays passed from the caller. The obtained free page blocks are hints about free pages, because there is no guarantee that they are still on the free page list after the function returns. One use example of this patch is to accelerate live migration by skipping the transfer of free pages reported from the guest. A popular method used by the hypervisor to track which part of memory is written during live migration is to write-protect all the guest memory. So, those pages that are hinted as free pages but are written after this function returns will be captured by the hypervisor, and they will be added to the next round of memory transfer. Suggested-by: Linus Torvalds <torvalds at linux-foundation.org> Signed-off-by: Wei Wang <wei.w.wang at intel.com> Signed-off-by: Liang Li <liang.z.li at intel.com> Cc: Michal Hocko <mhocko at kernel.org> Cc: Andrew Morton <akpm at linux-foundation.org> Cc: Michael S. Tsirkin <mst at redhat.com> Cc: Linus Torvalds <torvalds at linux-foundation.org> --- include/linux/mm.h | 3 ++ mm/page_alloc.c | 82 ++++++++++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 85 insertions(+) diff --git a/include/linux/mm.h b/include/linux/mm.h index a0fbb9f..1b51d43 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2007,6 +2007,9 @@ extern void free_area_init(unsigned long * zones_size); extern void free_area_init_node(int nid, unsigned long * zones_size, unsigned long zone_start_pfn, unsigned long *zholes_size); extern void free_initmem(void); +uint32_t max_free_page_blocks(int order); +uint32_t get_from_free_page_list(int order, uint32_t num, __le64 *buf[], + uint32_t size); /* * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK) diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 1521100..2e462ab 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -5042,6 +5042,88 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask) show_swap_cache_info(); } +/** + * max_free_page_blocks - estimate the max number of free page blocks + * @order: the order of the free page blocks to estimate + * + * This function gives a rough estimation of the possible maximum number of + * free page blocks a free list may have. The estimation works on an assumption + * that all the system pages are on that list. + * + * Context: Any context. + * + * Return: The largest number of free page blocks that the free list can have. + */ +uint32_t max_free_page_blocks(int order) +{ + return totalram_pages / (1 << order); +} +EXPORT_SYMBOL_GPL(max_free_page_blocks); + +/** + * get_from_free_page_list - get hints of free pages from a free page list + * @order: the order of the free page list to check + * @num: the number of arrays + * @bufs: the arrays to store the physical addresses of the free page blocks + * @size: the number of entries each array has + * + * This function offers hints about free pages. The addresses of free page + * blocks are stored to the arrays passed from the caller. There is no + * guarantee that the obtained free pages are still on the free page list + * after the function returns. pfn_to_page on the obtained free pages is + * strongly discouraged and if there is an absolute need for that, make sure + * to contact MM people to discuss potential problems. + * + * The addresses are currently stored to an array in little endian. 
This + * avoids the overhead of converting endianness by the caller who needs data + * in the little endian format. Big endian support can be added on demand in + * the future. The maximum number of free page blocks that can be obtained is + * limited to the size of arrays. + * + * Context: Process context. + * + * Return: The number of free page blocks obtained from the free page list. + */ +uint32_t get_from_free_page_list(int order, uint32_t num, __le64 *bufs[], + uint32_t size) +{ + struct zone *zone; + enum migratetype mt; + struct page *page; + struct list_head *list; + unsigned long addr; + uint32_t array_index = 0, entry_index = 0; + __le64 *array = bufs[array_index]; + + /* Validity check */ + if (order < 0 || order >= MAX_ORDER) + return 0; + + for_each_populated_zone(zone) { + spin_lock_irq(&zone->lock); + for (mt = 0; mt < MIGRATE_TYPES; mt++) { + list = &zone->free_area[order].free_list[mt]; + list_for_each_entry(page, list, lru) { + addr = page_to_pfn(page) << PAGE_SHIFT; + /* This array is full, so use the next one */ + if (entry_index == size) { + /* All the arrays are consumed */ + if (++array_index == num) { + spin_unlock_irq(&zone->lock); + return array_index * size; + } + array = bufs[array_index]; + entry_index = 0; + } + array[entry_index++] = cpu_to_le64(addr); + } + } + spin_unlock_irq(&zone->lock); + } + + return array_index * size + entry_index; +} +EXPORT_SYMBOL_GPL(get_from_free_page_list); static void zoneref_set_zone(struct zone *zone, struct zoneref *zoneref) { -- 2.7.4
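[Editor's note] To make the calling convention of the two new APIs above concrete, here is a minimal caller sketch. It is not part of the patch; the function name, the GFP flags, and the caller-provided pointer array are illustrative assumptions, and the real consumer is the virtio-balloon change in patch 2/4.

/*
 * Illustrative only: size the hint buffers with max_free_page_blocks()
 * and fill them with get_from_free_page_list(). The caller provides a
 * pointer array 'bufs' with room for 'n' entries and later frees each
 * allocated buffer with free_pages().
 */
static uint32_t example_collect_hints(__le64 **bufs, uint32_t n)
{
	int order = MAX_ORDER - 1;
	/* Each buffer is one (MAX_ORDER - 1) block, holding this many addresses. */
	uint32_t entries = (PAGE_SIZE << order) / sizeof(__le64);
	uint32_t needed = DIV_ROUND_UP(max_free_page_blocks(order), entries);
	uint32_t i;

	if (needed > n)
		needed = n;
	for (i = 0; i < needed; i++) {
		bufs[i] = (__le64 *)__get_free_pages(GFP_KERNEL, order);
		if (!bufs[i])
			break;
	}
	if (!i)
		return 0;

	/* Returns the total number of block addresses stored across the buffers. */
	return get_from_free_page_list(order, i, bufs, entries);
}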
Wei Wang
2018-Jun-25 12:05 UTC
[PATCH v34 2/4] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT
Negotiation of the VIRTIO_BALLOON_F_FREE_PAGE_HINT feature indicates the support of reporting hints of guest free pages to host via virtio-balloon. Host requests the guest to report free page hints by sending a new cmd id to the guest via the free_page_report_cmd_id configuration register. As the first step here, virtio-balloon only reports free page hints from the max order (i.e. 10) free page list to host. This has generated similar good results as reporting all free page hints during our tests. When the guest starts to report, it first sends a start cmd to host via the free page vq, which acks to host the cmd id received, and tells it the hint size (e.g. 4MB each on x86). When the guest finishes the reporting, a stop cmd is sent to host via the vq. TODO: - support reporting free page hints from smaller order free page lists when there is a need/request from users. Signed-off-by: Wei Wang <wei.w.wang at intel.com> Signed-off-by: Liang Li <liang.z.li at intel.com> Cc: Michael S. Tsirkin <mst at redhat.com> Cc: Michal Hocko <mhocko at kernel.org> Cc: Andrew Morton <akpm at linux-foundation.org> --- drivers/virtio/virtio_balloon.c | 347 ++++++++++++++++++++++++++++++++---- include/uapi/linux/virtio_balloon.h | 11 ++ 2 files changed, 322 insertions(+), 36 deletions(-) diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c index 6b237e3..d05f0ba 100644 --- a/drivers/virtio/virtio_balloon.c +++ b/drivers/virtio/virtio_balloon.c @@ -43,6 +43,11 @@ #define OOM_VBALLOON_DEFAULT_PAGES 256 #define VIRTBALLOON_OOM_NOTIFY_PRIORITY 80 +/* The order used to allocate an array to load free page hints */ +#define ARRAY_ALLOC_ORDER (MAX_ORDER - 1) +/* The size of an array in bytes */ +#define ARRAY_ALLOC_SIZE ((1 << ARRAY_ALLOC_ORDER) << PAGE_SHIFT) + static int oom_pages = OOM_VBALLOON_DEFAULT_PAGES; module_param(oom_pages, int, S_IRUSR | S_IWUSR); MODULE_PARM_DESC(oom_pages, "pages to free on OOM"); @@ -51,9 +56,22 @@ MODULE_PARM_DESC(oom_pages, "pages to free on OOM"); static struct vfsmount *balloon_mnt; #endif +enum virtio_balloon_vq { + VIRTIO_BALLOON_VQ_INFLATE, + VIRTIO_BALLOON_VQ_DEFLATE, + VIRTIO_BALLOON_VQ_STATS, + VIRTIO_BALLOON_VQ_FREE_PAGE, + VIRTIO_BALLOON_VQ_MAX +}; + struct virtio_balloon { struct virtio_device *vdev; - struct virtqueue *inflate_vq, *deflate_vq, *stats_vq; + struct virtqueue *inflate_vq, *deflate_vq, *stats_vq, *free_page_vq; + + /* Balloon's own wq for cpu-intensive work items */ + struct workqueue_struct *balloon_wq; + /* The free page reporting work item submitted to the balloon wq */ + struct work_struct report_free_page_work; /* The balloon servicing is delegated to a freezable workqueue. */ struct work_struct update_balloon_stats_work; @@ -63,6 +81,15 @@ struct virtio_balloon { spinlock_t stop_update_lock; bool stop_update; + /* Command buffers to start and stop the reporting of hints to host */ + struct virtio_balloon_free_page_hints_cmd cmd_start; + struct virtio_balloon_free_page_hints_cmd cmd_stop; + + /* The cmd id received from host */ + uint32_t cmd_id_received; + /* The cmd id that is actively in use */ + uint32_t cmd_id_active; + /* Waiting for host to ack the pages we released. 
*/ wait_queue_head_t acked; @@ -326,17 +353,6 @@ static void stats_handle_request(struct virtio_balloon *vb) virtqueue_kick(vq); } -static void virtballoon_changed(struct virtio_device *vdev) -{ - struct virtio_balloon *vb = vdev->priv; - unsigned long flags; - - spin_lock_irqsave(&vb->stop_update_lock, flags); - if (!vb->stop_update) - queue_work(system_freezable_wq, &vb->update_balloon_size_work); - spin_unlock_irqrestore(&vb->stop_update_lock, flags); -} - static inline s64 towards_target(struct virtio_balloon *vb) { s64 target; @@ -353,6 +369,35 @@ static inline s64 towards_target(struct virtio_balloon *vb) return target - vb->num_pages; } +static void virtballoon_changed(struct virtio_device *vdev) +{ + struct virtio_balloon *vb = vdev->priv; + unsigned long flags; + s64 diff = towards_target(vb); + + if (diff) { + spin_lock_irqsave(&vb->stop_update_lock, flags); + if (!vb->stop_update) + queue_work(system_freezable_wq, + &vb->update_balloon_size_work); + spin_unlock_irqrestore(&vb->stop_update_lock, flags); + } + + if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT)) { + virtio_cread(vdev, struct virtio_balloon_config, + free_page_report_cmd_id, &vb->cmd_id_received); + if (vb->cmd_id_received !+ VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID && + vb->cmd_id_received != vb->cmd_id_active) { + spin_lock_irqsave(&vb->stop_update_lock, flags); + if (!vb->stop_update) + queue_work(vb->balloon_wq, + &vb->report_free_page_work); + spin_unlock_irqrestore(&vb->stop_update_lock, flags); + } + } +} + static void update_balloon_size(struct virtio_balloon *vb) { u32 actual = vb->num_pages; @@ -425,44 +470,253 @@ static void update_balloon_size_func(struct work_struct *work) queue_work(system_freezable_wq, work); } +static void free_page_vq_cb(struct virtqueue *vq) +{ + unsigned int len; + void *buf; + struct virtio_balloon *vb = vq->vdev->priv; + + while (1) { + buf = virtqueue_get_buf(vq, &len); + + if (!buf || buf == &vb->cmd_start || buf == &vb->cmd_stop) + break; + free_pages((unsigned long)buf, ARRAY_ALLOC_ORDER); + } +} + static int init_vqs(struct virtio_balloon *vb) { - struct virtqueue *vqs[3]; - vq_callback_t *callbacks[] = { balloon_ack, balloon_ack, stats_request }; - static const char * const names[] = { "inflate", "deflate", "stats" }; - int err, nvqs; + struct virtqueue *vqs[VIRTIO_BALLOON_VQ_MAX]; + vq_callback_t *callbacks[VIRTIO_BALLOON_VQ_MAX]; + const char *names[VIRTIO_BALLOON_VQ_MAX]; + struct scatterlist sg; + int ret; /* - * We expect two virtqueues: inflate and deflate, and - * optionally stat. + * Inflateq and deflateq are used unconditionally. The names[] + * will be NULL if the related feature is not enabled, which will + * cause no allocation for the corresponding virtqueue in find_vqs. */ - nvqs = virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ) ? 
3 : 2; - err = virtio_find_vqs(vb->vdev, nvqs, vqs, callbacks, names, NULL); - if (err) - return err; + callbacks[VIRTIO_BALLOON_VQ_INFLATE] = balloon_ack; + names[VIRTIO_BALLOON_VQ_INFLATE] = "inflate"; + callbacks[VIRTIO_BALLOON_VQ_DEFLATE] = balloon_ack; + names[VIRTIO_BALLOON_VQ_DEFLATE] = "deflate"; + names[VIRTIO_BALLOON_VQ_STATS] = NULL; + names[VIRTIO_BALLOON_VQ_FREE_PAGE] = NULL; - vb->inflate_vq = vqs[0]; - vb->deflate_vq = vqs[1]; if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) { - struct scatterlist sg; - unsigned int num_stats; - vb->stats_vq = vqs[2]; + names[VIRTIO_BALLOON_VQ_STATS] = "stats"; + callbacks[VIRTIO_BALLOON_VQ_STATS] = stats_request; + } + if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT)) { + names[VIRTIO_BALLOON_VQ_FREE_PAGE] = "free_page_vq"; + callbacks[VIRTIO_BALLOON_VQ_FREE_PAGE] = free_page_vq_cb; + } + + ret = vb->vdev->config->find_vqs(vb->vdev, VIRTIO_BALLOON_VQ_MAX, + vqs, callbacks, names, NULL, NULL); + if (ret) + return ret; + + vb->inflate_vq = vqs[VIRTIO_BALLOON_VQ_INFLATE]; + vb->deflate_vq = vqs[VIRTIO_BALLOON_VQ_DEFLATE]; + if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) { + vb->stats_vq = vqs[VIRTIO_BALLOON_VQ_STATS]; /* * Prime this virtqueue with one buffer so the hypervisor can * use it to signal us later (it can't be broken yet!). */ - num_stats = update_balloon_stats(vb); - - sg_init_one(&sg, vb->stats, sizeof(vb->stats[0]) * num_stats); - if (virtqueue_add_outbuf(vb->stats_vq, &sg, 1, vb, GFP_KERNEL) - < 0) - BUG(); + sg_init_one(&sg, vb->stats, sizeof(vb->stats)); + ret = virtqueue_add_outbuf(vb->stats_vq, &sg, 1, vb, + GFP_KERNEL); + if (ret) { + dev_warn(&vb->vdev->dev, "%s: add stat_vq failed\n", + __func__); + return ret; + } virtqueue_kick(vb->stats_vq); } + + if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT)) + vb->free_page_vq = vqs[VIRTIO_BALLOON_VQ_FREE_PAGE]; + return 0; } +static int send_start_cmd_id(struct virtio_balloon *vb) +{ + struct scatterlist sg; + struct virtqueue *vq = vb->free_page_vq; + + vb->cmd_start.id = cpu_to_virtio32(vb->vdev, vb->cmd_id_active); + vb->cmd_start.size = cpu_to_virtio32(vb->vdev, + MAX_ORDER_NR_PAGES * PAGE_SIZE); + sg_init_one(&sg, &vb->cmd_start, + sizeof(struct virtio_balloon_free_page_hints_cmd)); + return virtqueue_add_outbuf(vq, &sg, 1, &vb->cmd_start, GFP_KERNEL); +} + +static int send_stop_cmd_id(struct virtio_balloon *vb) +{ + struct scatterlist sg; + struct virtqueue *vq = vb->free_page_vq; + + vb->cmd_stop.id = cpu_to_virtio32(vb->vdev, + VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID); + vb->cmd_stop.size = 0; + sg_init_one(&sg, &vb->cmd_stop, + sizeof(struct virtio_balloon_free_page_hints_cmd)); + return virtqueue_add_outbuf(vq, &sg, 1, &vb->cmd_stop, GFP_KERNEL); +} + +/* + * virtio_balloon_send_hints - send arrays of hints to host + * @vb: the virtio_balloon struct + * @arrays: the arrays of hints + * @array_num: the number of arrays give by the caller + * @last_array_hints: the number of hints in the last array + * + * Send hints to host array by array. This begins by sending a start cmd, + * which contains a cmd id received from host and the free page block size in + * bytes of each hint. At the end, a stop cmd is sent to host to indicate the + * end of this reporting. If host actively requests to stop the reporting, free + * the arrays that have not been sent. 
+ */ +static void virtio_balloon_send_hints(struct virtio_balloon *vb, + __le64 **arrays, + uint32_t array_num, + uint32_t last_array_hints) +{ + int err, i = 0; + struct scatterlist sg; + struct virtqueue *vq = vb->free_page_vq; + + /* Start by sending the received cmd id to host with an outbuf. */ + err = send_start_cmd_id(vb); + if (unlikely(err)) + goto out_err; + /* Kick host to start taking entries from the vq. */ + virtqueue_kick(vq); + + for (i = 0; i < array_num; i++) { + /* + * If a stop id or a new cmd id was just received from host, + * stop the reporting, and free the remaining arrays that + * haven't been sent to host. + */ + if (vb->cmd_id_received != vb->cmd_id_active) + goto out_free; + + if (i + 1 == array_num) + sg_init_one(&sg, (void *)arrays[i], + last_array_hints * sizeof(__le64)); + else + sg_init_one(&sg, (void *)arrays[i], ARRAY_ALLOC_SIZE); + err = virtqueue_add_inbuf(vq, &sg, 1, (void *)arrays[i], + GFP_KERNEL); + if (unlikely(err)) + goto out_err; + } + + /* End by sending a stop id to host with an outbuf. */ + err = send_stop_cmd_id(vb); + if (unlikely(err)) + goto out_err; + return; + +out_err: + dev_err(&vb->vdev->dev, "%s: err = %d\n", __func__, err); +out_free: + while (i < array_num) + free_pages((unsigned long)arrays[i++], ARRAY_ALLOC_ORDER); +} + +/* + * virtio_balloon_load_hints - load free page hints into arrays + * @vb: the virtio_balloon struct + * @array_num: the number of arrays allocated + * @last_array_hints: the number of hints loaded into the last array + * + * Only free pages blocks of MAX_ORDER - 1 are loaded into the arrays. + * Each array size is MAX_ORDER_NR_PAGES * PAGE_SIZE (e.g. 4MB on x86). Failing + * to allocate such an array essentially implies that no such free page blocks + * could be reported. Alloacte the number of arrays according to the free page + * blocks of MAX_ORDER - 1 that the system may have, and free the unused ones + * after loading the free page hints. The last array may be partially loaded, + * and @last_array_hints tells the caller about the number of hints there. + * + * Return the pointer to the memory that holds the addresses of the allocated + * arrays, or NULL if no arrays are allocated. + */ +static __le64 **virtio_balloon_load_hints(struct virtio_balloon *vb, + uint32_t *array_num, + uint32_t *last_array_hints) +{ + __le64 **arrays; + uint32_t max_entries, entries_per_page, entries_per_array, + max_array_num, loaded_hints; + int i; + + max_entries = max_free_page_blocks(ARRAY_ALLOC_ORDER); + entries_per_page = PAGE_SIZE / sizeof(__le64); + entries_per_array = entries_per_page * (1 << ARRAY_ALLOC_ORDER); + max_array_num = max_entries / entries_per_array + + !!(max_entries % entries_per_array); + arrays = kmalloc_array(max_array_num, sizeof(__le64 *), GFP_KERNEL); + if (!arrays) + return NULL; + + for (i = 0; i < max_array_num; i++) { + arrays[i] + (__le64 *)__get_free_pages(__GFP_ATOMIC | __GFP_NOMEMALLOC, + ARRAY_ALLOC_ORDER); + if (!arrays[i]) { + /* + * If any one of the arrays fails to be allocated, it + * implies that the free list that we are interested + * in is empty, and there is no need to continue the + * reporting. So just free what's allocated and return + * NULL. 
+ */ + while (i > 0) + free_pages((unsigned long)arrays[i--], + ARRAY_ALLOC_ORDER); + kfree(arrays); + return NULL; + } + } + loaded_hints = get_from_free_page_list(ARRAY_ALLOC_ORDER, + max_array_num, arrays, + entries_per_array); + *array_num = loaded_hints / entries_per_array + + !!(max_entries % entries_per_array); + *last_array_hints = loaded_hints - + (*array_num - 1) * entries_per_array; + for (i = *array_num; i < max_array_num; i++) + free_pages((unsigned long)arrays[i], ARRAY_ALLOC_ORDER); + + return arrays; +} + +static void report_free_page_func(struct work_struct *work) +{ + struct virtio_balloon *vb; + uint32_t array_num = 0, last_array_hints = 0; + __le64 **arrays; + + vb = container_of(work, struct virtio_balloon, report_free_page_work); + vb->cmd_id_active = vb->cmd_id_received; + + arrays = virtio_balloon_load_hints(vb, &array_num, &last_array_hints); + if (arrays) { + virtio_balloon_send_hints(vb, arrays, array_num, + last_array_hints); + kfree(arrays); + } +} + #ifdef CONFIG_BALLOON_COMPACTION /* * virtballoon_migratepage - perform the balloon page migration on behalf of @@ -576,18 +830,30 @@ static int virtballoon_probe(struct virtio_device *vdev) if (err) goto out_free_vb; + if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT)) { + vb->balloon_wq = alloc_workqueue("balloon-wq", + WQ_FREEZABLE | WQ_CPU_INTENSIVE, 0); + if (!vb->balloon_wq) { + err = -ENOMEM; + goto out_del_vqs; + } + INIT_WORK(&vb->report_free_page_work, report_free_page_func); + vb->cmd_id_received = VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID; + vb->cmd_id_active = VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID; + } + vb->nb.notifier_call = virtballoon_oom_notify; vb->nb.priority = VIRTBALLOON_OOM_NOTIFY_PRIORITY; err = register_oom_notifier(&vb->nb); if (err < 0) - goto out_del_vqs; + goto out_del_balloon_wq; #ifdef CONFIG_BALLOON_COMPACTION balloon_mnt = kern_mount(&balloon_fs); if (IS_ERR(balloon_mnt)) { err = PTR_ERR(balloon_mnt); unregister_oom_notifier(&vb->nb); - goto out_del_vqs; + goto out_del_balloon_wq; } vb->vb_dev_info.migratepage = virtballoon_migratepage; @@ -597,7 +863,7 @@ static int virtballoon_probe(struct virtio_device *vdev) kern_unmount(balloon_mnt); unregister_oom_notifier(&vb->nb); vb->vb_dev_info.inode = NULL; - goto out_del_vqs; + goto out_del_balloon_wq; } vb->vb_dev_info.inode->i_mapping->a_ops = &balloon_aops; #endif @@ -608,6 +874,9 @@ static int virtballoon_probe(struct virtio_device *vdev) virtballoon_changed(vdev); return 0; +out_del_balloon_wq: + if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT)) + destroy_workqueue(vb->balloon_wq); out_del_vqs: vdev->config->del_vqs(vdev); out_free_vb: @@ -641,6 +910,11 @@ static void virtballoon_remove(struct virtio_device *vdev) cancel_work_sync(&vb->update_balloon_size_work); cancel_work_sync(&vb->update_balloon_stats_work); + if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT)) { + cancel_work_sync(&vb->report_free_page_work); + destroy_workqueue(vb->balloon_wq); + } + remove_common(vb); #ifdef CONFIG_BALLOON_COMPACTION if (vb->vb_dev_info.inode) @@ -692,6 +966,7 @@ static unsigned int features[] = { VIRTIO_BALLOON_F_MUST_TELL_HOST, VIRTIO_BALLOON_F_STATS_VQ, VIRTIO_BALLOON_F_DEFLATE_ON_OOM, + VIRTIO_BALLOON_F_FREE_PAGE_HINT, }; static struct virtio_driver virtio_balloon_driver = { diff --git a/include/uapi/linux/virtio_balloon.h b/include/uapi/linux/virtio_balloon.h index 13b8cb5..860456f 100644 --- a/include/uapi/linux/virtio_balloon.h +++ b/include/uapi/linux/virtio_balloon.h @@ -34,15 +34,26 @@ #define 
VIRTIO_BALLOON_F_MUST_TELL_HOST 0 /* Tell before reclaiming pages */ #define VIRTIO_BALLOON_F_STATS_VQ 1 /* Memory Stats virtqueue */ #define VIRTIO_BALLOON_F_DEFLATE_ON_OOM 2 /* Deflate balloon on OOM */ +#define VIRTIO_BALLOON_F_FREE_PAGE_HINT 3 /* VQ to report free pages */ /* Size of a PFN in the balloon interface. */ #define VIRTIO_BALLOON_PFN_SHIFT 12 +#define VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID 0 struct virtio_balloon_config { /* Number of pages host wants Guest to give up. */ __u32 num_pages; /* Number of pages we've actually got in balloon. */ __u32 actual; + /* Free page report command id, readonly by guest */ + __u32 free_page_report_cmd_id; +}; + +struct virtio_balloon_free_page_hints_cmd { + /* The command id received from host */ + __le32 id; + /* The free page block size in bytes */ + __le32 size; }; #define VIRTIO_BALLOON_S_SWAP_IN 0 /* Amount of memory swapped in */ -- 2.7.4
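[Editor's note] On the device side, each __le64 entry in a hint inbuf is the guest-physical address of one free block whose length is the size field from the start cmd. The helper below is a purely hypothetical, userspace-style sketch of how a hypervisor could drop those pages from its dirty bitmap; it is not QEMU code, and none of these names come from the patch.

#include <endian.h>
#include <stddef.h>
#include <stdint.h>

#define GUEST_PAGE_SIZE 4096ULL	/* assumption: 4KiB guest pages */

/* Clear the dirty bits covering one array of free page hints. */
static void drop_hinted_pages(uint64_t *dirty_bitmap, const uint64_t *hints,
			      size_t nhints, uint32_t block_size)
{
	size_t i, j, pages_per_block = block_size / GUEST_PAGE_SIZE;

	for (i = 0; i < nhints; i++) {
		/* Entries are stored in little endian by the guest. */
		uint64_t pfn = le64toh(hints[i]) / GUEST_PAGE_SIZE;

		for (j = 0; j < pages_per_block; j++) {
			uint64_t p = pfn + j;

			dirty_bitmap[p / 64] &= ~(1ULL << (p % 64));
		}
	}
}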
Wei Wang
2018-Jun-25 12:05 UTC
[PATCH v34 3/4] mm/page_poison: expose page_poisoning_enabled to kernel modules
In some usages, e.g. virtio-balloon, a kernel module needs to know if page poisoning is in use. This patch exposes the page_poisoning_enabled function to kernel modules. Signed-off-by: Wei Wang <wei.w.wang at intel.com> Cc: Andrew Morton <akpm at linux-foundation.org> Cc: Michal Hocko <mhocko at kernel.org> Cc: Michael S. Tsirkin <mst at redhat.com> Acked-by: Andrew Morton <akpm at linux-foundation.org> --- mm/page_poison.c | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/mm/page_poison.c b/mm/page_poison.c index aa2b3d3..830f604 100644 --- a/mm/page_poison.c +++ b/mm/page_poison.c @@ -17,6 +17,11 @@ static int __init early_page_poison_param(char *buf) } early_param("page_poison", early_page_poison_param); +/** + * page_poisoning_enabled - check if page poisoning is enabled + * + * Return true if page poisoning is enabled, or false if not. + */ bool page_poisoning_enabled(void) { /* @@ -29,6 +34,7 @@ bool page_poisoning_enabled(void) (!IS_ENABLED(CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC) && debug_pagealloc_enabled())); } +EXPORT_SYMBOL_GPL(page_poisoning_enabled); static void poison_page(struct page *page) { -- 2.7.4
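[Editor's note] As a hypothetical illustration of the module-side usage this export enables (patch 4/4 does essentially this from virtballoon_validate()):

/* Sketch only: a module gating a negotiated feature on page poisoning. */
static int example_validate(struct virtio_device *vdev)
{
	/* Without page poisoning there is no poison value to tell the host. */
	if (!page_poisoning_enabled())
		__virtio_clear_bit(vdev, VIRTIO_BALLOON_F_PAGE_POISON);
	return 0;
}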
Wei Wang
2018-Jun-25 12:05 UTC
[PATCH v34 4/4] virtio-balloon: VIRTIO_BALLOON_F_PAGE_POISON
The VIRTIO_BALLOON_F_PAGE_POISON feature bit is used to indicate if the guest is using page poisoning. Guest writes to the poison_val config field to tell host about the page poisoning value that is in use. Suggested-by: Michael S. Tsirkin <mst at redhat.com> Signed-off-by: Wei Wang <wei.w.wang at intel.com> Cc: Michael S. Tsirkin <mst at redhat.com> Cc: Michal Hocko <mhocko at suse.com> Cc: Andrew Morton <akpm at linux-foundation.org> --- drivers/virtio/virtio_balloon.c | 10 ++++++++++ include/uapi/linux/virtio_balloon.h | 3 +++ 2 files changed, 13 insertions(+) diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c index d05f0ba..c834ef1 100644 --- a/drivers/virtio/virtio_balloon.c +++ b/drivers/virtio/virtio_balloon.c @@ -801,6 +801,7 @@ static struct file_system_type balloon_fs = { static int virtballoon_probe(struct virtio_device *vdev) { struct virtio_balloon *vb; + __u32 poison_val; int err; if (!vdev->config->get) { @@ -840,6 +841,11 @@ static int virtballoon_probe(struct virtio_device *vdev) INIT_WORK(&vb->report_free_page_work, report_free_page_func); vb->cmd_id_received = VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID; vb->cmd_id_active = VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID; + if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_PAGE_POISON)) { + memset(&poison_val, PAGE_POISON, sizeof(poison_val)); + virtio_cwrite(vb->vdev, struct virtio_balloon_config, + poison_val, &poison_val); + } } vb->nb.notifier_call = virtballoon_oom_notify; @@ -958,6 +964,9 @@ static int virtballoon_restore(struct virtio_device *vdev) static int virtballoon_validate(struct virtio_device *vdev) { + if (!page_poisoning_enabled()) + __virtio_clear_bit(vdev, VIRTIO_BALLOON_F_PAGE_POISON); + __virtio_clear_bit(vdev, VIRTIO_F_IOMMU_PLATFORM); return 0; } @@ -967,6 +976,7 @@ static unsigned int features[] = { VIRTIO_BALLOON_F_STATS_VQ, VIRTIO_BALLOON_F_DEFLATE_ON_OOM, VIRTIO_BALLOON_F_FREE_PAGE_HINT, + VIRTIO_BALLOON_F_PAGE_POISON, }; static struct virtio_driver virtio_balloon_driver = { diff --git a/include/uapi/linux/virtio_balloon.h b/include/uapi/linux/virtio_balloon.h index 860456f..12b5a4f 100644 --- a/include/uapi/linux/virtio_balloon.h +++ b/include/uapi/linux/virtio_balloon.h @@ -35,6 +35,7 @@ #define VIRTIO_BALLOON_F_STATS_VQ 1 /* Memory Stats virtqueue */ #define VIRTIO_BALLOON_F_DEFLATE_ON_OOM 2 /* Deflate balloon on OOM */ #define VIRTIO_BALLOON_F_FREE_PAGE_HINT 3 /* VQ to report free pages */ +#define VIRTIO_BALLOON_F_PAGE_POISON 4 /* Guest is using page poisoning */ /* Size of a PFN in the balloon interface. */ #define VIRTIO_BALLOON_PFN_SHIFT 12 @@ -47,6 +48,8 @@ struct virtio_balloon_config { __u32 actual; /* Free page report command id, readonly by guest */ __u32 free_page_report_cmd_id; + /* Stores PAGE_POISON if page poisoning is in use */ + __u32 poison_val; }; struct virtio_balloon_free_page_hints_cmd { -- 2.7.4
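[Editor's note] A small aside on the poison_val write above (see the v26->v27 changelog entry in the cover letter): PAGE_POISON is a single byte, and the memset() replicates it across the 32-bit config field. A minimal sketch, assuming the common 0xaa pattern:

/* Illustration only: memset() fills every byte of the field. */
static __u32 example_poison_val(void)
{
	__u32 val;

	memset(&val, 0xaa, sizeof(val));	/* becomes 0xaaaaaaaa */
	return val;
}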
Michael S. Tsirkin
2018-Jun-26 01:37 UTC
[PATCH v34 2/4] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT
On Mon, Jun 25, 2018 at 08:05:10PM +0800, Wei Wang wrote:> Negotiation of the VIRTIO_BALLOON_F_FREE_PAGE_HINT feature indicates the > support of reporting hints of guest free pages to host via virtio-balloon. > > Host requests the guest to report free page hints by sending a new cmd id > to the guest via the free_page_report_cmd_id configuration register. > > As the first step here, virtio-balloon only reports free page hints from > the max order (i.e. 10) free page list to host. This has generated similar > good results as reporting all free page hints during our tests. > > When the guest starts to report, it first sends a start cmd to host via > the free page vq, which acks to host the cmd id received, and tells it the > hint size (e.g. 4MB each on x86). When the guest finishes the reporting, > a stop cmd is sent to host via the vq. > > TODO: > - support reporting free page hints from smaller order free page lists > when there is a need/request from users. > > Signed-off-by: Wei Wang <wei.w.wang at intel.com> > Signed-off-by: Liang Li <liang.z.li at intel.com> > Cc: Michael S. Tsirkin <mst at redhat.com> > Cc: Michal Hocko <mhocko at kernel.org> > Cc: Andrew Morton <akpm at linux-foundation.org> > --- > drivers/virtio/virtio_balloon.c | 347 ++++++++++++++++++++++++++++++++---- > include/uapi/linux/virtio_balloon.h | 11 ++ > 2 files changed, 322 insertions(+), 36 deletions(-) > > diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c > index 6b237e3..d05f0ba 100644 > --- a/drivers/virtio/virtio_balloon.c > +++ b/drivers/virtio/virtio_balloon.c > @@ -43,6 +43,11 @@ > #define OOM_VBALLOON_DEFAULT_PAGES 256 > #define VIRTBALLOON_OOM_NOTIFY_PRIORITY 80 > > +/* The order used to allocate an array to load free page hints */ > +#define ARRAY_ALLOC_ORDER (MAX_ORDER - 1) > +/* The size of an array in bytes */ > +#define ARRAY_ALLOC_SIZE ((1 << ARRAY_ALLOC_ORDER) << PAGE_SHIFT) > +Pls prefix macros so we can figure out they are local ones.> static int oom_pages = OOM_VBALLOON_DEFAULT_PAGES; > module_param(oom_pages, int, S_IRUSR | S_IWUSR); > MODULE_PARM_DESC(oom_pages, "pages to free on OOM"); > @@ -51,9 +56,22 @@ MODULE_PARM_DESC(oom_pages, "pages to free on OOM"); > static struct vfsmount *balloon_mnt; > #endif > > +enum virtio_balloon_vq { > + VIRTIO_BALLOON_VQ_INFLATE, > + VIRTIO_BALLOON_VQ_DEFLATE, > + VIRTIO_BALLOON_VQ_STATS, > + VIRTIO_BALLOON_VQ_FREE_PAGE, > + VIRTIO_BALLOON_VQ_MAX > +}; > + > struct virtio_balloon { > struct virtio_device *vdev; > - struct virtqueue *inflate_vq, *deflate_vq, *stats_vq; > + struct virtqueue *inflate_vq, *deflate_vq, *stats_vq, *free_page_vq; > + > + /* Balloon's own wq for cpu-intensive work items */ > + struct workqueue_struct *balloon_wq; > + /* The free page reporting work item submitted to the balloon wq */ > + struct work_struct report_free_page_work; > > /* The balloon servicing is delegated to a freezable workqueue. */ > struct work_struct update_balloon_stats_work; > @@ -63,6 +81,15 @@ struct virtio_balloon { > spinlock_t stop_update_lock; > bool stop_update; > > + /* Command buffers to start and stop the reporting of hints to host */ > + struct virtio_balloon_free_page_hints_cmd cmd_start; > + struct virtio_balloon_free_page_hints_cmd cmd_stop; > + > + /* The cmd id received from host */ > + uint32_t cmd_id_received; > + /* The cmd id that is actively in use */ > + uint32_t cmd_id_active; > + > /* Waiting for host to ack the pages we released. 
*/ > wait_queue_head_t acked; >You want u32 types.> @@ -326,17 +353,6 @@ static void stats_handle_request(struct virtio_balloon *vb) > virtqueue_kick(vq); > } > > -static void virtballoon_changed(struct virtio_device *vdev) > -{ > - struct virtio_balloon *vb = vdev->priv; > - unsigned long flags; > - > - spin_lock_irqsave(&vb->stop_update_lock, flags); > - if (!vb->stop_update) > - queue_work(system_freezable_wq, &vb->update_balloon_size_work); > - spin_unlock_irqrestore(&vb->stop_update_lock, flags); > -} > - > static inline s64 towards_target(struct virtio_balloon *vb) > { > s64 target; > @@ -353,6 +369,35 @@ static inline s64 towards_target(struct virtio_balloon *vb) > return target - vb->num_pages; > } > > +static void virtballoon_changed(struct virtio_device *vdev) > +{ > + struct virtio_balloon *vb = vdev->priv; > + unsigned long flags; > + s64 diff = towards_target(vb); > + > + if (diff) { > + spin_lock_irqsave(&vb->stop_update_lock, flags); > + if (!vb->stop_update) > + queue_work(system_freezable_wq, > + &vb->update_balloon_size_work); > + spin_unlock_irqrestore(&vb->stop_update_lock, flags); > + } > + > + if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT)) { > + virtio_cread(vdev, struct virtio_balloon_config, > + free_page_report_cmd_id, &vb->cmd_id_received); > + if (vb->cmd_id_received !> + VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID && > + vb->cmd_id_received != vb->cmd_id_active) { > + spin_lock_irqsave(&vb->stop_update_lock, flags); > + if (!vb->stop_update) > + queue_work(vb->balloon_wq, > + &vb->report_free_page_work); > + spin_unlock_irqrestore(&vb->stop_update_lock, flags); > + } > + } > +} > + > static void update_balloon_size(struct virtio_balloon *vb) > { > u32 actual = vb->num_pages; > @@ -425,44 +470,253 @@ static void update_balloon_size_func(struct work_struct *work) > queue_work(system_freezable_wq, work); > } > > +static void free_page_vq_cb(struct virtqueue *vq) > +{ > + unsigned int len; > + void *buf; > + struct virtio_balloon *vb = vq->vdev->priv; > + > + while (1) { > + buf = virtqueue_get_buf(vq, &len); > + > + if (!buf || buf == &vb->cmd_start || buf == &vb->cmd_stop) > + break;If there's any buffer after this one we might never get another callback.> + free_pages((unsigned long)buf, ARRAY_ALLOC_ORDER); > + } > +} > + > static int init_vqs(struct virtio_balloon *vb) > { > - struct virtqueue *vqs[3]; > - vq_callback_t *callbacks[] = { balloon_ack, balloon_ack, stats_request }; > - static const char * const names[] = { "inflate", "deflate", "stats" }; > - int err, nvqs; > + struct virtqueue *vqs[VIRTIO_BALLOON_VQ_MAX]; > + vq_callback_t *callbacks[VIRTIO_BALLOON_VQ_MAX]; > + const char *names[VIRTIO_BALLOON_VQ_MAX]; > + struct scatterlist sg; > + int ret; > > /* > - * We expect two virtqueues: inflate and deflate, and > - * optionally stat. > + * Inflateq and deflateq are used unconditionally. The names[] > + * will be NULL if the related feature is not enabled, which will > + * cause no allocation for the corresponding virtqueue in find_vqs. > */ > - nvqs = virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ) ? 
3 : 2; > - err = virtio_find_vqs(vb->vdev, nvqs, vqs, callbacks, names, NULL); > - if (err) > - return err; > + callbacks[VIRTIO_BALLOON_VQ_INFLATE] = balloon_ack; > + names[VIRTIO_BALLOON_VQ_INFLATE] = "inflate"; > + callbacks[VIRTIO_BALLOON_VQ_DEFLATE] = balloon_ack; > + names[VIRTIO_BALLOON_VQ_DEFLATE] = "deflate"; > + names[VIRTIO_BALLOON_VQ_STATS] = NULL; > + names[VIRTIO_BALLOON_VQ_FREE_PAGE] = NULL; > > - vb->inflate_vq = vqs[0]; > - vb->deflate_vq = vqs[1]; > if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) { > - struct scatterlist sg; > - unsigned int num_stats; > - vb->stats_vq = vqs[2]; > + names[VIRTIO_BALLOON_VQ_STATS] = "stats"; > + callbacks[VIRTIO_BALLOON_VQ_STATS] = stats_request; > + } > > + if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT)) { > + names[VIRTIO_BALLOON_VQ_FREE_PAGE] = "free_page_vq"; > + callbacks[VIRTIO_BALLOON_VQ_FREE_PAGE] = free_page_vq_cb; > + } > + > + ret = vb->vdev->config->find_vqs(vb->vdev, VIRTIO_BALLOON_VQ_MAX, > + vqs, callbacks, names, NULL, NULL); > + if (ret) > + return ret; > + > + vb->inflate_vq = vqs[VIRTIO_BALLOON_VQ_INFLATE]; > + vb->deflate_vq = vqs[VIRTIO_BALLOON_VQ_DEFLATE]; > + if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) { > + vb->stats_vq = vqs[VIRTIO_BALLOON_VQ_STATS]; > /* > * Prime this virtqueue with one buffer so the hypervisor can > * use it to signal us later (it can't be broken yet!). > */ > - num_stats = update_balloon_stats(vb); > - > - sg_init_one(&sg, vb->stats, sizeof(vb->stats[0]) * num_stats); > - if (virtqueue_add_outbuf(vb->stats_vq, &sg, 1, vb, GFP_KERNEL) > - < 0) > - BUG(); > + sg_init_one(&sg, vb->stats, sizeof(vb->stats)); > + ret = virtqueue_add_outbuf(vb->stats_vq, &sg, 1, vb, > + GFP_KERNEL); > + if (ret) { > + dev_warn(&vb->vdev->dev, "%s: add stat_vq failed\n", > + __func__); > + return ret; > + }Why the change? Is it more likely to happen now?> virtqueue_kick(vb->stats_vq); > } > + > + if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT)) > + vb->free_page_vq = vqs[VIRTIO_BALLOON_VQ_FREE_PAGE]; > + > return 0; > } > > +static int send_start_cmd_id(struct virtio_balloon *vb) > +{ > + struct scatterlist sg; > + struct virtqueue *vq = vb->free_page_vq; > + > + vb->cmd_start.id = cpu_to_virtio32(vb->vdev, vb->cmd_id_active); > + vb->cmd_start.size = cpu_to_virtio32(vb->vdev, > + MAX_ORDER_NR_PAGES * PAGE_SIZE); > + sg_init_one(&sg, &vb->cmd_start, > + sizeof(struct virtio_balloon_free_page_hints_cmd)); > + return virtqueue_add_outbuf(vq, &sg, 1, &vb->cmd_start, GFP_KERNEL); > +} > + > +static int send_stop_cmd_id(struct virtio_balloon *vb) > +{ > + struct scatterlist sg; > + struct virtqueue *vq = vb->free_page_vq; > + > + vb->cmd_stop.id = cpu_to_virtio32(vb->vdev, > + VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID); > + vb->cmd_stop.size = 0; > + sg_init_one(&sg, &vb->cmd_stop, > + sizeof(struct virtio_balloon_free_page_hints_cmd)); > + return virtqueue_add_outbuf(vq, &sg, 1, &vb->cmd_stop, GFP_KERNEL); > +} > + > +/* > + * virtio_balloon_send_hints - send arrays of hints to host > + * @vb: the virtio_balloon struct > + * @arrays: the arrays of hints > + * @array_num: the number of arrays give by the caller > + * @last_array_hints: the number of hints in the last array > + * > + * Send hints to host array by array. This begins by sending a start cmd, > + * which contains a cmd id received from host and the free page block size in > + * bytes of each hint. At the end, a stop cmd is sent to host to indicate the > + * end of this reporting. 
If host actively requests to stop the reporting, free > + * the arrays that have not been sent. > + */ > +static void virtio_balloon_send_hints(struct virtio_balloon *vb, > + __le64 **arrays, > + uint32_t array_num, > + uint32_t last_array_hints) > +{ > + int err, i = 0; > + struct scatterlist sg; > + struct virtqueue *vq = vb->free_page_vq; > + > + /* Start by sending the received cmd id to host with an outbuf. */ > + err = send_start_cmd_id(vb); > + if (unlikely(err)) > + goto out_err; > + /* Kick host to start taking entries from the vq. */ > + virtqueue_kick(vq); > + > + for (i = 0; i < array_num; i++) { > + /* > + * If a stop id or a new cmd id was just received from host, > + * stop the reporting, and free the remaining arrays that > + * haven't been sent to host. > + */ > + if (vb->cmd_id_received != vb->cmd_id_active) > + goto out_free; > + > + if (i + 1 == array_num) > + sg_init_one(&sg, (void *)arrays[i], > + last_array_hints * sizeof(__le64)); > + else > + sg_init_one(&sg, (void *)arrays[i], ARRAY_ALLOC_SIZE); > + err = virtqueue_add_inbuf(vq, &sg, 1, (void *)arrays[i], > + GFP_KERNEL); > + if (unlikely(err)) > + goto out_err; > + } > + > + /* End by sending a stop id to host with an outbuf. */ > + err = send_stop_cmd_id(vb); > + if (unlikely(err)) > + goto out_err;Don't we need to kick here?> + return; > + > +out_err: > + dev_err(&vb->vdev->dev, "%s: err = %d\n", __func__, err); > +out_free: > + while (i < array_num) > + free_pages((unsigned long)arrays[i++], ARRAY_ALLOC_ORDER); > +} > + > +/* > + * virtio_balloon_load_hints - load free page hints into arrays > + * @vb: the virtio_balloon struct > + * @array_num: the number of arrays allocated > + * @last_array_hints: the number of hints loaded into the last array > + * > + * Only free pages blocks of MAX_ORDER - 1 are loaded into the arrays. > + * Each array size is MAX_ORDER_NR_PAGES * PAGE_SIZE (e.g. 4MB on x86). Failing > + * to allocate such an array essentially implies that no such free page blocks > + * could be reported. Alloacte the number of arrays according to the free page > + * blocks of MAX_ORDER - 1 that the system may have, and free the unused ones > + * after loading the free page hints. The last array may be partially loaded, > + * and @last_array_hints tells the caller about the number of hints there. > + * > + * Return the pointer to the memory that holds the addresses of the allocated > + * arrays, or NULL if no arrays are allocated. > + */ > +static __le64 **virtio_balloon_load_hints(struct virtio_balloon *vb, > + uint32_t *array_num, > + uint32_t *last_array_hints) > +{ > + __le64 **arrays; > + uint32_t max_entries, entries_per_page, entries_per_array, > + max_array_num, loaded_hints;All above likely should be int.> + int i; > + > + max_entries = max_free_page_blocks(ARRAY_ALLOC_ORDER); > + entries_per_page = PAGE_SIZE / sizeof(__le64); > + entries_per_array = entries_per_page * (1 << ARRAY_ALLOC_ORDER); > + max_array_num = max_entries / entries_per_array + > + !!(max_entries % entries_per_array); > + arrays = kmalloc_array(max_array_num, sizeof(__le64 *), GFP_KERNEL);Instead of all this mess, how about get_free_pages here as well? Also why do we need GFP_KERNEL for this?> + if (!arrays) > + return NULL; > + > + for (i = 0; i < max_array_num; i++) {So we are getting a ton of memory here just to free it up a bit later. Why doesn't get_from_free_page_list get the pages from free list for us? 
We could also avoid the 1st allocation then - just build a list of these.> + arrays[i] > + (__le64 *)__get_free_pages(__GFP_ATOMIC | __GFP_NOMEMALLOC, > + ARRAY_ALLOC_ORDER);Coding style says: Descendants are always substantially shorter than the parent and are placed substantially to the right.> + if (!arrays[i]) {Also if it does fail (small guest), shall we try with less arrays?> + /* > + * If any one of the arrays fails to be allocated, it > + * implies that the free list that we are interested > + * in is empty, and there is no need to continue the > + * reporting. So just free what's allocated and return > + * NULL. > + */ > + while (i > 0) > + free_pages((unsigned long)arrays[i--], > + ARRAY_ALLOC_ORDER); > + kfree(arrays); > + return NULL; > + } > + } > + loaded_hints = get_from_free_page_list(ARRAY_ALLOC_ORDER, > + max_array_num, arrays, > + entries_per_array); > + *array_num = loaded_hints / entries_per_array + > + !!(max_entries % entries_per_array); > + *last_array_hints = loaded_hints - > + (*array_num - 1) * entries_per_array; > + for (i = *array_num; i < max_array_num; i++) > + free_pages((unsigned long)arrays[i], ARRAY_ALLOC_ORDER); > + > + return arrays; > +} > + > +static void report_free_page_func(struct work_struct *work) > +{ > + struct virtio_balloon *vb; > + uint32_t array_num = 0, last_array_hints = 0; > + __le64 **arrays; > + > + vb = container_of(work, struct virtio_balloon, report_free_page_work); > + vb->cmd_id_active = vb->cmd_id_received; > + > + arrays = virtio_balloon_load_hints(vb, &array_num, &last_array_hints); > + if (arrays) { > + virtio_balloon_send_hints(vb, arrays, array_num, > + last_array_hints); > + kfree(arrays); > + } > +} > + > #ifdef CONFIG_BALLOON_COMPACTION > /* > * virtballoon_migratepage - perform the balloon page migration on behalf of > @@ -576,18 +830,30 @@ static int virtballoon_probe(struct virtio_device *vdev) > if (err) > goto out_free_vb; > > + if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT)) { > + vb->balloon_wq = alloc_workqueue("balloon-wq", > + WQ_FREEZABLE | WQ_CPU_INTENSIVE, 0); > + if (!vb->balloon_wq) { > + err = -ENOMEM; > + goto out_del_vqs; > + } > + INIT_WORK(&vb->report_free_page_work, report_free_page_func); > + vb->cmd_id_received = VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID; > + vb->cmd_id_active = VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID; > + } > + > vb->nb.notifier_call = virtballoon_oom_notify; > vb->nb.priority = VIRTBALLOON_OOM_NOTIFY_PRIORITY; > err = register_oom_notifier(&vb->nb); > if (err < 0) > - goto out_del_vqs; > + goto out_del_balloon_wq; > > #ifdef CONFIG_BALLOON_COMPACTION > balloon_mnt = kern_mount(&balloon_fs); > if (IS_ERR(balloon_mnt)) { > err = PTR_ERR(balloon_mnt); > unregister_oom_notifier(&vb->nb); > - goto out_del_vqs; > + goto out_del_balloon_wq; > } > > vb->vb_dev_info.migratepage = virtballoon_migratepage; > @@ -597,7 +863,7 @@ static int virtballoon_probe(struct virtio_device *vdev) > kern_unmount(balloon_mnt); > unregister_oom_notifier(&vb->nb); > vb->vb_dev_info.inode = NULL; > - goto out_del_vqs; > + goto out_del_balloon_wq; > } > vb->vb_dev_info.inode->i_mapping->a_ops = &balloon_aops; > #endif > @@ -608,6 +874,9 @@ static int virtballoon_probe(struct virtio_device *vdev) > virtballoon_changed(vdev); > return 0; > > +out_del_balloon_wq: > + if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT)) > + destroy_workqueue(vb->balloon_wq); > out_del_vqs: > vdev->config->del_vqs(vdev); > out_free_vb: > @@ -641,6 +910,11 @@ static void virtballoon_remove(struct virtio_device 
*vdev) > cancel_work_sync(&vb->update_balloon_size_work); > cancel_work_sync(&vb->update_balloon_stats_work); > > + if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT)) { > + cancel_work_sync(&vb->report_free_page_work); > + destroy_workqueue(vb->balloon_wq); > + } > + > remove_common(vb); > #ifdef CONFIG_BALLOON_COMPACTION > if (vb->vb_dev_info.inode) > @@ -692,6 +966,7 @@ static unsigned int features[] = { > VIRTIO_BALLOON_F_MUST_TELL_HOST, > VIRTIO_BALLOON_F_STATS_VQ, > VIRTIO_BALLOON_F_DEFLATE_ON_OOM, > + VIRTIO_BALLOON_F_FREE_PAGE_HINT, > }; > > static struct virtio_driver virtio_balloon_driver = { > diff --git a/include/uapi/linux/virtio_balloon.h b/include/uapi/linux/virtio_balloon.h > index 13b8cb5..860456f 100644 > --- a/include/uapi/linux/virtio_balloon.h > +++ b/include/uapi/linux/virtio_balloon.h > @@ -34,15 +34,26 @@ > #define VIRTIO_BALLOON_F_MUST_TELL_HOST 0 /* Tell before reclaiming pages */ > #define VIRTIO_BALLOON_F_STATS_VQ 1 /* Memory Stats virtqueue */ > #define VIRTIO_BALLOON_F_DEFLATE_ON_OOM 2 /* Deflate balloon on OOM */ > +#define VIRTIO_BALLOON_F_FREE_PAGE_HINT 3 /* VQ to report free pages */ > > /* Size of a PFN in the balloon interface. */ > #define VIRTIO_BALLOON_PFN_SHIFT 12 > > +#define VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID 0 > struct virtio_balloon_config { > /* Number of pages host wants Guest to give up. */ > __u32 num_pages; > /* Number of pages we've actually got in balloon. */ > __u32 actual; > + /* Free page report command id, readonly by guest */ > + __u32 free_page_report_cmd_id; > +}; > + > +struct virtio_balloon_free_page_hints_cmd { > + /* The command id received from host */ > + __le32 id; > + /* The free page block size in bytes */ > + __le32 size; > }; > > #define VIRTIO_BALLOON_S_SWAP_IN 0 /* Amount of memory swapped in */ > -- > 2.7.4
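[Editor's note] For concreteness, one way to read the "how about get_free_pages here as well" remark above is to allocate the pointer array itself from the page allocator instead of kmalloc_array(). The sketch below is my interpretation only, not the reviewer's code, and the helper name is made up.

/*
 * One page holds PAGE_SIZE / sizeof(__le64 *) pointers (512 on x86-64),
 * which bounds how many hint arrays can be tracked this way.
 */
static __le64 **alloc_hint_array_page(unsigned int *max_arrays)
{
	__le64 **arrays = (__le64 **)__get_free_page(GFP_KERNEL);

	if (arrays)
		*max_arrays = PAGE_SIZE / sizeof(__le64 *);
	return arrays;
}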
David Hildenbrand
2018-Jun-27 11:06 UTC
[PATCH v34 0/4] Virtio-balloon: support free page reporting
On 25.06.2018 14:05, Wei Wang wrote:> This patch series is separated from the previous "Virtio-balloon > Enhancement" series. The new feature, VIRTIO_BALLOON_F_FREE_PAGE_HINT, > implemented by this series enables the virtio-balloon driver to report > hints of guest free pages to the host. It can be used to accelerate live > migration of VMs. Here is an introduction of this usage: > > Live migration needs to transfer the VM's memory from the source machine > to the destination round by round. For the 1st round, all the VM's memory > is transferred. From the 2nd round, only the pieces of memory that were > written by the guest (after the 1st round) are transferred. One method > that is popularly used by the hypervisor to track which part of memory is > written is to write-protect all the guest memory. > > This feature enables the optimization by skipping the transfer of guest > free pages during VM live migration. It is not concerned that the memory > pages are used after they are given to the hypervisor as a hint of the > free pages, because they will be tracked by the hypervisor and transferred > in the subsequent round if they are used and written. > > * Tests > - Test Environment > Host: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz > Guest: 8G RAM, 4 vCPU > Migration setup: migrate_set_speed 100G, migrate_set_downtime 2 second > > - Test Results > - Idle Guest Live Migration Time (results are averaged over 10 runs): > - Optimization v.s. Legacy = 284ms vs 1757ms --> ~84% reduction > - Guest with Linux Compilation Workload (make bzImage -j4): > - Live Migration Time (average) > Optimization v.s. Legacy = 1402ms v.s. 2528ms --> ~44% reduction > - Linux Compilation Time > Optimization v.s. Legacy = 5min6s v.s. 5min12s > --> no obvious difference >Being in version 34 already, this whole thing still looks and feels like a big hack to me. It might just be me, but especially if I read about assumptions like "QEMU will not hotplug memory during migration". This does not feel like a clean solution. I am still not sure if we really need this interface, especially as real free page hinting might be on its way. a) we perform free page hinting by setting all free pages (arch_free_page()) to zero. Migration will detect zero pages and minimize #pages to migrate. I don't think this is a good idea but Michel suggested to do a performance evaluation and Nitesh is looking into that right now. b) we perform free page hinting using something that Nitesh proposed. We get in QEMU blocks of free pages that we can MADV_FREE. In addition we could e.g. clear the dirty bit of these pages in the dirty bitmap, to hinder them from getting migrated. Right now the hinting mechanism is synchronous (called from arch_free_page()) but we might be able to convert it into something asynchronous. So we might be able to completely get rid of this interface. And looking at all the discussions and problems that already happened during the development of this series, I think we should rather look into how clean free page hinting might solve the same problem. If it can't be solved using free page hinting, fair enough.> ChangeLog: > v33->v34: > - mm: > - add a new API max_free_page_blocks, which estimates the max > number of free page blocks that a free page list may have > - get_from_free_page_list: store addresses to multiple arrays, > instead of just one array. 
--
Thanks,

David / dhildenb
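For concreteness, option (a) above amounts to something like the sketch
below. This is not code from this series or from Nitesh's patches, and the
hook name is hypothetical; the idea is simply that the freeing path clears
the pages it frees so that the hypervisor's zero-page detection (or KSM)
can skip them:

#include <linux/mm.h>
#include <linux/highmem.h>

/*
 * Sketch only: clear pages as they are freed so that the hypervisor's
 * existing zero-page detection can avoid transferring them during
 * migration. zero_on_free_hint() is a hypothetical hook that would be
 * wired into the page freeing path.
 */
static inline void zero_on_free_hint(struct page *page, unsigned int order)
{
	unsigned int i;

	/* The guest pays CPU time and memory bandwidth here. */
	for (i = 0; i < (1U << order); i++)
		clear_highpage(page + i);
}

Enabling page poisoning with a poison value of 0 gives roughly the same
effect, which is why that configuration is asked for as a comparison
baseline later in this thread.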
Wei Wang
2018-Jun-29 03:51 UTC
[PATCH v34 0/4] Virtio-balloon: support free page reporting
On 06/27/2018 07:06 PM, David Hildenbrand wrote:
> On 25.06.2018 14:05, Wei Wang wrote:
[...]
> Being in version 34 already, this whole thing still looks and feels like
> a big hack to me. It might just be me, but especially if I read about
> assumptions like "QEMU will not hotplug memory during migration", this
> does not feel like a clean solution.
>
> I am still not sure if we really need this interface, especially as real
> free page hinting might be on its way.
>
> a) we perform free page hinting by setting all free pages
> (arch_free_page()) to zero. Migration will detect zero pages and
> minimize #pages to migrate. I don't think this is a good idea but Michael
> suggested to do a performance evaluation and Nitesh is looking into that
> right now.

The hypervisor doesn't get the zero pages for free. It pays a lot of CPU
utilization and memory bandwidth (there are people complaining about this
on the QEMU mailing list). In the above results, the legacy case already
has the zero-page feature in use.

> b) we perform free page hinting using something that Nitesh proposed. We
> get blocks of free pages in QEMU that we can MADV_FREE. In addition we
> could e.g. clear the dirty bit of these pages in the dirty bitmap, to
> prevent them from getting migrated. Right now the hinting mechanism is
> synchronous (called from arch_free_page()) but we might be able to
> convert it into something asynchronous.
>
> So we might be able to completely get rid of this interface. And looking
> at all the discussions and problems that already happened during the
> development of this series, I think we should rather look into how clean
> free page hinting might solve the same problem.
>
> If it can't be solved using free page hinting, fair enough.

I'm afraid it can't.
For example, take a guest that has just booted and does not have much
memory activity. Assume the guest has 8GB of free memory. An
arch_free_page() hook won't be able to capture those 8GB of free pages,
because no free() is ever called on them, so no free pages would be
reported to the host.

Best,
Wei
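To illustrate the difference being described here (this is only a sketch,
not the get_from_free_page_list() implementation in patch 1/4): memory
that has been free since boot never passes through arch_free_page(), but
it does sit on the buddy free lists, so it can be found by walking them.
report_one_block() below is a hypothetical callback, and only max-order
blocks are visited:

#include <linux/mm.h>
#include <linux/mmzone.h>

/*
 * Rough illustration: walk the max-order buddy free lists of every
 * populated zone and hand each block's PFN to a hypothetical reporting
 * callback. Hints can go stale the moment the zone lock is dropped.
 */
static void walk_max_order_free_blocks(void (*report_one_block)(unsigned long pfn,
								 unsigned int order))
{
	struct zone *zone;

	for_each_populated_zone(zone) {
		struct free_area *area = &zone->free_area[MAX_ORDER - 1];
		unsigned long flags;
		struct page *page;
		unsigned int mt;

		spin_lock_irqsave(&zone->lock, flags);
		for (mt = 0; mt < MIGRATE_TYPES; mt++) {
			list_for_each_entry(page, &area->free_list[mt], lru)
				report_one_block(page_to_pfn(page),
						 MAX_ORDER - 1);
		}
		spin_unlock_irqrestore(&zone->lock, flags);
	}
}

Stale hints are acceptable in this scheme because, as the cover letter
notes, the hypervisor keeps tracking writes and re-transfers any page that
is used afterwards.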
Michael S. Tsirkin
2018-Jun-29 14:45 UTC
[PATCH v34 0/4] Virtio-balloon: support free page reporting
On Wed, Jun 27, 2018 at 01:06:32PM +0200, David Hildenbrand wrote:
> On 25.06.2018 14:05, Wei Wang wrote:
[...]
> Being in version 34 already, this whole thing still looks and feels like
> a big hack to me. It might just be me, but especially if I read about
> assumptions like "QEMU will not hotplug memory during migration", this
> does not feel like a clean solution.
>
> I am still not sure if we really need this interface, especially as real
> free page hinting might be on its way.
>
> a) we perform free page hinting by setting all free pages
> (arch_free_page()) to zero. Migration will detect zero pages and
> minimize #pages to migrate. I don't think this is a good idea but Michael
> suggested to do a performance evaluation and Nitesh is looking into that
> right now.

Yes, this test is needed, I think. If we can get most of the benefit
without PV interfaces, that's nice.

Wei, I think you need this as part of your performance comparison too:
set the page poisoning value to 0 and enable KSM, then compare with your
patches.

> b) we perform free page hinting using something that Nitesh proposed. We
> get blocks of free pages in QEMU that we can MADV_FREE. In addition we
> could e.g. clear the dirty bit of these pages in the dirty bitmap, to
> prevent them from getting migrated. Right now the hinting mechanism is
> synchronous (called from arch_free_page()) but we might be able to
> convert it into something asynchronous.
>
> So we might be able to completely get rid of this interface.

The way I see it, hinting during alloc/free will always add overhead,
which might be unacceptable to some people. So even with Nitesh's patches
there's value in being able to enable/disable hinting dynamically. And
Wei's patches would then be useful to set the stage where we know the
initial page state.

> And looking at all the discussions and problems that already happened
> during the development of this series, I think we should rather look
> into how clean free page hinting might solve the same problem.

I'm not sure I follow the logic. We found some neat tricks along the way,
especially re-using max-order free pages for reporting.

> If it can't be solved using free page hinting, fair enough.

I suspect Nitesh will need to find a way not to have mm code call out to
random drivers or subsystems before that code is acceptable.

--
MST
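On the point about enabling/disabling hinting dynamically, a common way to
keep the disabled case essentially free is to guard the hook with a static
key. The sketch below is only an illustration with hypothetical names
(queue_free_page_hint() is made up), not code from either series:

#include <linux/jump_label.h>
#include <linux/mm.h>

static DEFINE_STATIC_KEY_FALSE(free_page_hinting_enabled);

/* Hypothetical: hand the block off to whatever reports hints to the host. */
static void queue_free_page_hint(struct page *page, unsigned int order);

/* Called from the page freeing path; nearly a no-op while disabled. */
static inline void maybe_hint_freed_page(struct page *page, unsigned int order)
{
	if (static_branch_unlikely(&free_page_hinting_enabled))
		queue_free_page_hint(page, order);
}

/* The host (e.g. via a virtio-balloon config write) could toggle it. */
static void set_free_page_hinting(bool enable)
{
	if (enable)
		static_branch_enable(&free_page_hinting_enabled);
	else
		static_branch_disable(&free_page_hinting_enabled);
}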
Wei Wang
2018-Jun-30 04:31 UTC
[PATCH v34 0/4] Virtio-balloon: support free page reporting
On 06/25/2018 08:05 PM, Wei Wang wrote:
[...]
> * Tests
> - Test Environment
> Host: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
> Guest: 8G RAM, 4 vCPU
> Migration setup: migrate_set_speed 100G, migrate_set_downtime 2 second
>
> - Test Results
> - Idle Guest Live Migration Time (results are averaged over 10 runs):
> - Optimization v.s. Legacy = 284ms vs 1757ms --> ~84% reduction

Following Michael's comments, here is one more set of data: page poisoning
enabled with value=0, and KSM enabled. The legacy live migration time is
then 1806ms (averaged across 10 runs); compared with the case where this
optimization feature is in use (i.e. 284ms), that is still roughly an 84%
reduction.

Best,
Wei