Wei Wang
2018-Jul-10 09:31 UTC
[PATCH v35 0/5] Virtio-balloon: support free page reporting
This patch series is separated from the previous "Virtio-balloon Enhancement" series. The new feature, VIRTIO_BALLOON_F_FREE_PAGE_HINT, implemented by this series enables the virtio-balloon driver to report hints of guest free pages to the host. It can be used to accelerate live migration of VMs. Here is an introduction of this usage:

Live migration needs to transfer the VM's memory from the source machine to the destination round by round. For the 1st round, all the VM's memory is transferred. From the 2nd round, only the pieces of memory that were written by the guest (after the 1st round) are transferred. One method that is popularly used by the hypervisor to track which part of memory is written is to write-protect all the guest memory.

This feature enables the optimization by skipping the transfer of guest free pages during VM live migration. It is not concerned that the memory pages are used after they are given to the hypervisor as a hint of the free pages, because they will be tracked by the hypervisor and transferred in the subsequent round if they are used and written.

* Tests
- Test Environment
    Host: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
    Guest: 8G RAM, 4 vCPU
    Migration setup: migrate_set_speed 100G, migrate_set_downtime 2 second

- Test Results
    - Idle Guest Live Migration Time (results are averaged over 10 runs):
        - Optimization vs. Legacy = 291ms vs. 1757ms --> ~84% reduction
          (setting page poisoning zero and enabling ksm don't affect the
          comparison result)
    - Guest with Linux Compilation Workload (make bzImage -j4):
        - Live Migration Time (average):
          Optimization vs. Legacy = 1420ms vs. 2528ms --> ~44% reduction
        - Linux Compilation Time:
          Optimization vs. Legacy = 5min8s vs. 5min12s --> no obvious difference

ChangeLog:
v34->v35:
    - mm:
        - get_from_free_page_list: use a list of page blocks as buffers to
          store addresses, instead of an array of buffers.
    - virtio-balloon:
        - Allocate a list of buffers, instead of an array of buffers.
        - Used buffers are freed after the host puts the buffer to the used
          ring; unused buffers are freed immediately when the guest finishes
          reporting.
        - Change uint32_t to u32.
        - Patch 2 is split out as an independent patch, as it's unrelated to
          the free page hinting feature.
v33->v34:
    - mm:
        - Add a new API max_free_page_blocks, which estimates the max number
          of free page blocks that a free page list may have.
        - get_from_free_page_list: store addresses to multiple arrays,
          instead of just one array. This removes the limitation of being
          able to report only 2TB free memory (the largest array memory that
          can be allocated on x86 is 4MB, which can store 2^19 addresses of
          4MB free page blocks).
    - virtio-balloon:
        - Allocate multiple arrays to load free page hints.
        - Use the same method as in v32 to do guest/host interaction; the
          differences are:
            - the hints are transferred array by array, instead of one by
              one;
            - send the free page block size of a hint along with the cmd id
              to host, so that host knows each address represents e.g. a 4MB
              memory in our case.
v32->v33:
    - mm/get_from_free_page_list: the new implementation to get free page
      hints based on the suggestions from Linus:
      https://lkml.org/lkml/2018/6/11/764
      This avoids the complex call chain, and looks more prudent.
    - virtio-balloon:
        - Use a fixed-size buffer to get free page hints.
        - Remove the cmd id related interface. Now host can just send a free
          page hint command to the guest (via the host_cmd config register)
          to start the reporting. Currently the guest reports only the max
          order free page hints to host, which has generated similar good
          results as before. But the interface used by virtio-balloon to
          report can support reporting more orders in the future when there
          is a need.
v31->v32:
    - virtio-balloon:
        - Rename cmd_id_use to cmd_id_active.
        - report_free_page_func: detach used buffers after host sends a vq
          interrupt, instead of busy waiting for used buffers.
v30->v31:
    - virtio-balloon:
        - virtio_balloon_send_free_pages: return -EINTR rather than 1 to
          indicate an active stop requested by host; and add more comments
          to explain about access to cmd_id_received without locks.
        - add_one_sg: add TODO to comments about possible improvement.
v29->v30:
    - mm/walk_free_mem_block: add cond_resched() for each order.
v28->v29:
    - mm/page_poison: only expose page_poisoning_enabled(), rather than the
      more extensive changes done in v28, as we are not 100% confident about
      those for now.
    - virtio-balloon: use a separate buffer for the stop cmd, instead of
      having the start and stop cmd use the same buffer. This avoids the
      corner case that the start cmd is overridden by the stop cmd when the
      host has a delay in reading the start cmd.
v27->v28:
    - mm/page_poison: move PAGE_POISON to page_poison.c and add a function
      to expose the page poison val to kernel modules.
v26->v27:
    - Add a new patch to expose page_poisoning_enabled to kernel modules.
    - virtio-balloon: set poison_val to 0xaaaaaaaa, instead of 0xaa.
v25->v26: virtio-balloon changes only:
    - Remove kicking the free page vq since the host now polls the vq after
      initiating the reporting.
    - report_free_page_func: detach all the used buffers after sending the
      stop cmd id. This avoids leaving the detaching burden (i.e. overhead)
      to the next cmd id. Detaching here isn't considered overhead since the
      stop cmd id has been sent, and host has already moved forward.
v24->v25:
    - mm: change walk_free_mem_block to return 0 (instead of true) on
      completing the report, and return a non-zero value from the callback,
      which stops the reporting.
    - virtio-balloon:
        - Use enum instead of define for VIRTIO_BALLOON_VQ_INFLATE etc.
        - Avoid __virtio_clear_bit when bailing out.
        - A new method to avoid reporting the same cmd id to host twice.
        - destroy_workqueue can cancel free page work when the feature is
          negotiated.
        - Fail probe when the free page vq size is less than 2.
v23->v24:
    - Change feature name VIRTIO_BALLOON_F_FREE_PAGE_VQ to
      VIRTIO_BALLOON_F_FREE_PAGE_HINT.
    - Kick when vq->num_free < half full, instead of "= half full".
    - Replace BUG_ON with bailing out.
    - Check vb->balloon_wq in probe(); if null, bail out.
    - Add a new feature bit for page poisoning.
    - Solve the corner case of one cmd id being sent to host twice.
v22->v23:
    - Change to kick the device when the vq is half-way full.
    - Open-code batch_free_page_sg into add_one_sg.
    - Change cmd_id from "uint32_t" to "__virtio32".
    - Reserve one entry in the vq for the driver to send cmd_id, instead of
      busy waiting for an available entry.
    - Add a "stop_update" check before queue_work for prudence purposes for
      now; will have a separate patch to discuss this flag check later.
    - init_vqs: change to put some variables on the stack to have a simpler
      implementation.
    - Add destroy_workqueue(vb->balloon_wq).
v21->v22:
    - add_one_sg: some code and comment re-arrangement.
    - send_cmd_id: handle a corner case.

For previous ChangeLog, please reference https://lwn.net/Articles/743660/

Wei Wang (5):
  mm: support to get hints of free page blocks
  virtio-balloon: remove BUG() in init_vqs
  virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT
  mm/page_poison: expose page_poisoning_enabled to kernel modules
  virtio-balloon: VIRTIO_BALLOON_F_PAGE_POISON

 drivers/virtio/virtio_balloon.c     | 419 +++++++++++++++++++++++++++++++++---
 include/linux/mm.h                  |   3 +
 include/uapi/linux/virtio_balloon.h |  14 ++
 mm/page_alloc.c                     |  98 +++++++++
 mm/page_poison.c                    |   6 +
 5 files changed, 511 insertions(+), 29 deletions(-)

-- 
2.7.4
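The 2TB-per-array limit mentioned in the v33->v34 changelog follows from simple arithmetic, which can be checked with a few lines of userspace C. This is a sketch, not kernel code; the constants assume x86 with 4KB pages and MAX_ORDER == 11 (so the hint buffer order is 10, i.e. 4MB blocks), and the helper names are illustrative:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE_BYTES 4096ULL
#define HINT_BUF_ORDER 10	/* MAX_ORDER - 1 when MAX_ORDER == 11 (x86) */

/* size of one hint buffer: an order-10 page block, i.e. 4MB */
uint64_t hint_buf_bytes(void)
{
	return PAGE_SIZE_BYTES << HINT_BUF_ORDER;
}

/* how many 8-byte (__le64) addresses one 4MB buffer holds: 2^19 */
uint64_t entries_per_buf(void)
{
	return hint_buf_bytes() / sizeof(uint64_t);
}

/* memory covered when each entry names a 4MB free page block: 2^41 = 2TB */
uint64_t memory_covered_per_buf(void)
{
	return entries_per_buf() * hint_buf_bytes();
}
```

So a single 4MB array caps the reportable free memory at 2TB, which is why v34 moved to multiple buffers.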
Wei Wang
2018-Jul-10 09:31 UTC
[PATCH v35 1/5] mm: support to get hints of free page blocks
This patch adds support to get free page blocks from a free page list. The physical addresses of the blocks are stored to a list of buffers passed from the caller. The obtained free page blocks are hints about free pages, because there is no guarantee that they are still on the free page list after the function returns.

One use example of this patch is to accelerate live migration by skipping the transfer of free pages reported from the guest. A popular method used by the hypervisor to track which part of memory is written during live migration is to write-protect all the guest memory. So, those pages that are hinted as free pages but are written after this function returns will be captured by the hypervisor, and they will be added to the next round of memory transfer.

Suggested-by: Linus Torvalds <torvalds at linux-foundation.org>
Signed-off-by: Wei Wang <wei.w.wang at intel.com>
Signed-off-by: Liang Li <liang.z.li at intel.com>
Cc: Michal Hocko <mhocko at kernel.org>
Cc: Andrew Morton <akpm at linux-foundation.org>
Cc: Michael S. Tsirkin <mst at redhat.com>
Cc: Linus Torvalds <torvalds at linux-foundation.org>
---
 include/linux/mm.h |  3 ++
 mm/page_alloc.c    | 98 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 101 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a0fbb9f..5ce654f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2007,6 +2007,9 @@ extern void free_area_init(unsigned long * zones_size);
 extern void free_area_init_node(int nid, unsigned long * zones_size,
 		unsigned long zone_start_pfn, unsigned long *zholes_size);
 extern void free_initmem(void);
+unsigned long max_free_page_blocks(int order);
+int get_from_free_page_list(int order, struct list_head *pages,
+			    unsigned int size, unsigned long *loaded_num);
 
 /*
  * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1521100..b67839b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5043,6 +5043,104 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
 	show_swap_cache_info();
 }
 
+/**
+ * max_free_page_blocks - estimate the max number of free page blocks
+ * @order: the order of the free page blocks to estimate
+ *
+ * This function gives a rough estimation of the possible maximum number of
+ * free page blocks a free list may have. The estimation works on an
+ * assumption that all the system pages are on that list.
+ *
+ * Context: Any context.
+ *
+ * Return: The largest number of free page blocks that the free list can have.
+ */
+unsigned long max_free_page_blocks(int order)
+{
+	return totalram_pages / (1 << order);
+}
+EXPORT_SYMBOL_GPL(max_free_page_blocks);
+
+/**
+ * get_from_free_page_list - get hints of free pages from a free page list
+ * @order: the order of the free page list to check
+ * @pages: the list of page blocks used as buffers to load the addresses
+ * @size: the size of each buffer in bytes
+ * @loaded_num: the number of addresses loaded to the buffers
+ *
+ * This function offers hints about free pages. The addresses of free page
+ * blocks are stored to the list of buffers passed from the caller. There is
+ * no guarantee that the obtained free pages are still on the free page list
+ * after the function returns. pfn_to_page on the obtained free pages is
+ * strongly discouraged and if there is an absolute need for that, make sure
+ * to contact MM people to discuss potential problems.
+ *
+ * The addresses are currently stored to a buffer in little endian. This
+ * avoids the overhead of converting endianness by the caller who needs data
+ * in the little endian format. Big endian support can be added on demand in
+ * the future.
+ *
+ * Context: Process context.
+ *
+ * Return: 0 if all the free page block addresses are stored to the buffers;
+ *         -ENOSPC if the buffers are not sufficient to store all the
+ *         addresses; or -EINVAL if an unexpected argument is received (e.g.
+ *         incorrect @order, empty buffer list).
+ */
+int get_from_free_page_list(int order, struct list_head *pages,
+			    unsigned int size, unsigned long *loaded_num)
+{
+	struct zone *zone;
+	enum migratetype mt;
+	struct list_head *free_list;
+	struct page *free_page, *buf_page;
+	unsigned long addr;
+	__le64 *buf;
+	unsigned int used_buf_num = 0, entry_index = 0,
+		     entries = size / sizeof(__le64);
+	*loaded_num = 0;
+
+	/* Validity check */
+	if (order < 0 || order >= MAX_ORDER)
+		return -EINVAL;
+
+	buf_page = list_first_entry_or_null(pages, struct page, lru);
+	if (!buf_page)
+		return -EINVAL;
+	buf = (__le64 *)page_address(buf_page);
+
+	for_each_populated_zone(zone) {
+		spin_lock_irq(&zone->lock);
+		for (mt = 0; mt < MIGRATE_TYPES; mt++) {
+			free_list = &zone->free_area[order].free_list[mt];
+			list_for_each_entry(free_page, free_list, lru) {
+				addr = page_to_pfn(free_page) << PAGE_SHIFT;
+				/* This buffer is full, so use the next one */
+				if (entry_index == entries) {
+					buf_page = list_next_entry(buf_page,
+								   lru);
+					/* All the buffers are consumed */
+					if (!buf_page) {
+						spin_unlock_irq(&zone->lock);
+						*loaded_num = used_buf_num *
+							      entries;
+						return -ENOSPC;
+					}
+					buf = (__le64 *)page_address(buf_page);
+					entry_index = 0;
+					used_buf_num++;
+				}
+				buf[entry_index++] = cpu_to_le64(addr);
+			}
+		}
+		spin_unlock_irq(&zone->lock);
+	}
+
+	*loaded_num = used_buf_num * entries + entry_index;
+	return 0;
+}
+EXPORT_SYMBOL_GPL(get_from_free_page_list);
+
 static void zoneref_set_zone(struct zone *zone, struct zoneref *zoneref)
 {
 	zoneref->zone = zone;
-- 
2.7.4
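The buffer-filling logic of get_from_free_page_list() above (fill fixed-size buffers one after another, return -ENOSPC with a truncated count when they run out, otherwise the exact count) can be modeled in userspace. This is a simplified sketch with hypothetical names and a deliberately tiny buffer size, not the kernel API:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

#define ENTRIES_PER_BUF 4	/* tiny for illustration; the driver uses 2^19 */

/*
 * Model of the filling loop: store each address into the current buffer,
 * moving to the next buffer when one fills up. Returns 0 on success or
 * -ENOSPC when the buffers are exhausted; *loaded_num is the number of
 * addresses actually stored, mirroring the kernel function's contract.
 */
int fill_hint_bufs(const uint64_t *addrs, size_t addr_num,
		   uint64_t bufs[][ENTRIES_PER_BUF], size_t buf_num,
		   size_t *loaded_num)
{
	size_t used_buf = 0, idx = 0, i;

	*loaded_num = 0;
	for (i = 0; i < addr_num; i++) {
		if (idx == ENTRIES_PER_BUF) {
			/* This buffer is full, so use the next one */
			if (++used_buf == buf_num) {
				/* All the buffers are consumed */
				*loaded_num = used_buf * ENTRIES_PER_BUF;
				return -ENOSPC;
			}
			idx = 0;
		}
		bufs[used_buf][idx++] = addrs[i];
	}
	*loaded_num = used_buf * ENTRIES_PER_BUF + idx;
	return 0;
}
```

With two 4-entry buffers, 6 addresses fit (return 0, loaded 6), while 9 addresses overflow after 8 (return -ENOSPC, loaded 8), matching the -ENOSPC semantics described in the kernel-doc comment.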
Wei Wang
2018-Jul-10 09:31 UTC
[PATCH v35 2/5] virtio-balloon: remove BUG() in init_vqs

It's a bit overkill to use BUG when failing to add an entry to the stats_vq in init_vqs. So remove it and just return the error to the caller to bail out nicely.

Signed-off-by: Wei Wang <wei.w.wang at intel.com>
Cc: Michael S. Tsirkin <mst at redhat.com>
---
 drivers/virtio/virtio_balloon.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index 6b237e3..9356a1a 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -455,9 +455,13 @@ static int init_vqs(struct virtio_balloon *vb)
 		num_stats = update_balloon_stats(vb);
 
 		sg_init_one(&sg, vb->stats, sizeof(vb->stats[0]) * num_stats);
-		if (virtqueue_add_outbuf(vb->stats_vq, &sg, 1, vb, GFP_KERNEL)
-		    < 0)
-			BUG();
+		err = virtqueue_add_outbuf(vb->stats_vq, &sg, 1, vb,
+					   GFP_KERNEL);
+		if (err) {
+			dev_warn(&vb->vdev->dev, "%s: add stat_vq failed\n",
+				 __func__);
+			return err;
+		}
 		virtqueue_kick(vb->stats_vq);
 	}
 	return 0;
-- 
2.7.4
Wei Wang
2018-Jul-10 09:31 UTC
[PATCH v35 3/5] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT
Negotiation of the VIRTIO_BALLOON_F_FREE_PAGE_HINT feature indicates the support of reporting hints of guest free pages to host via virtio-balloon. Host requests the guest to report free page hints by sending a new cmd id to the guest via the free_page_report_cmd_id configuration register.

As the first step here, virtio-balloon only reports free page hints from the max order (i.e. 10) free page list to host. This has generated similar good results as reporting all free page hints during our tests.

When the guest starts to report, it first sends a start cmd to host via the free page vq, which acks to host the cmd id received, and tells it the hint size (e.g. 4MB each on x86). When the guest finishes the reporting, a stop cmd is sent to host via the vq.

TODO:
- support reporting free page hints from smaller order free page lists
  when there is a need/request from users.

Signed-off-by: Wei Wang <wei.w.wang at intel.com>
Signed-off-by: Liang Li <liang.z.li at intel.com>
Cc: Michael S. Tsirkin <mst at redhat.com>
Cc: Michal Hocko <mhocko at kernel.org>
Cc: Andrew Morton <akpm at linux-foundation.org>
---
 drivers/virtio/virtio_balloon.c     | 399 +++++++++++++++++++++++++++++++++---
 include/uapi/linux/virtio_balloon.h |  11 +
 2 files changed, 384 insertions(+), 26 deletions(-)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index 9356a1a..8754154 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -43,6 +43,14 @@
 #define OOM_VBALLOON_DEFAULT_PAGES 256
 #define VIRTBALLOON_OOM_NOTIFY_PRIORITY 80
 
+/* The order used to allocate a buffer to load free page hints */
+#define VIRTIO_BALLOON_HINT_BUF_ORDER (MAX_ORDER - 1)
+/* The number of pages a hint buffer has */
+#define VIRTIO_BALLOON_HINT_BUF_PAGES (1 << VIRTIO_BALLOON_HINT_BUF_ORDER)
+/* The size of a hint buffer in bytes */
+#define VIRTIO_BALLOON_HINT_BUF_SIZE (VIRTIO_BALLOON_HINT_BUF_PAGES << \
+				      PAGE_SHIFT)
+
 static int oom_pages = OOM_VBALLOON_DEFAULT_PAGES;
 module_param(oom_pages, int, S_IRUSR | S_IWUSR);
 MODULE_PARM_DESC(oom_pages, "pages to free on OOM");
@@ -51,9 +59,22 @@ MODULE_PARM_DESC(oom_pages, "pages to free on OOM");
 static struct vfsmount *balloon_mnt;
 #endif
 
+enum virtio_balloon_vq {
+	VIRTIO_BALLOON_VQ_INFLATE,
+	VIRTIO_BALLOON_VQ_DEFLATE,
+	VIRTIO_BALLOON_VQ_STATS,
+	VIRTIO_BALLOON_VQ_FREE_PAGE,
+	VIRTIO_BALLOON_VQ_MAX
+};
+
 struct virtio_balloon {
 	struct virtio_device *vdev;
-	struct virtqueue *inflate_vq, *deflate_vq, *stats_vq;
+	struct virtqueue *inflate_vq, *deflate_vq, *stats_vq, *free_page_vq;
+
+	/* Balloon's own wq for cpu-intensive work items */
+	struct workqueue_struct *balloon_wq;
+	/* The free page reporting work item submitted to the balloon wq */
+	struct work_struct report_free_page_work;
 
 	/* The balloon servicing is delegated to a freezable workqueue. */
 	struct work_struct update_balloon_stats_work;
@@ -63,6 +84,15 @@ struct virtio_balloon {
 	spinlock_t stop_update_lock;
 	bool stop_update;
 
+	/* Command buffers to start and stop the reporting of hints to host */
+	struct virtio_balloon_free_page_hints_cmd cmd_start;
+	struct virtio_balloon_free_page_hints_cmd cmd_stop;
+
+	/* The cmd id received from host */
+	u32 cmd_id_received;
+	/* The cmd id that is actively in use */
+	u32 cmd_id_active;
+
 	/* Waiting for host to ack the pages we released. */
 	wait_queue_head_t acked;
 
@@ -326,17 +356,6 @@ static void stats_handle_request(struct virtio_balloon *vb)
 	virtqueue_kick(vq);
 }
 
-static void virtballoon_changed(struct virtio_device *vdev)
-{
-	struct virtio_balloon *vb = vdev->priv;
-	unsigned long flags;
-
-	spin_lock_irqsave(&vb->stop_update_lock, flags);
-	if (!vb->stop_update)
-		queue_work(system_freezable_wq, &vb->update_balloon_size_work);
-	spin_unlock_irqrestore(&vb->stop_update_lock, flags);
-}
-
 static inline s64 towards_target(struct virtio_balloon *vb)
 {
 	s64 target;
@@ -353,6 +372,35 @@ static inline s64 towards_target(struct virtio_balloon *vb)
 	return target - vb->num_pages;
 }
 
+static void virtballoon_changed(struct virtio_device *vdev)
+{
+	struct virtio_balloon *vb = vdev->priv;
+	unsigned long flags;
+	s64 diff = towards_target(vb);
+
+	if (diff) {
+		spin_lock_irqsave(&vb->stop_update_lock, flags);
+		if (!vb->stop_update)
+			queue_work(system_freezable_wq,
+				   &vb->update_balloon_size_work);
+		spin_unlock_irqrestore(&vb->stop_update_lock, flags);
+	}
+
+	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT)) {
+		virtio_cread(vdev, struct virtio_balloon_config,
+			     free_page_report_cmd_id, &vb->cmd_id_received);
+		if (vb->cmd_id_received !=
+		    VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID &&
+		    vb->cmd_id_received != vb->cmd_id_active) {
+			spin_lock_irqsave(&vb->stop_update_lock, flags);
+			if (!vb->stop_update)
+				queue_work(vb->balloon_wq,
+					   &vb->report_free_page_work);
+			spin_unlock_irqrestore(&vb->stop_update_lock, flags);
+		}
+	}
+}
+
 static void update_balloon_size(struct virtio_balloon *vb)
 {
 	u32 actual = vb->num_pages;
@@ -425,28 +473,61 @@ static void update_balloon_size_func(struct work_struct *work)
 		queue_work(system_freezable_wq, work);
 }
 
+static void virtio_balloon_free_used_hint_buf(struct virtqueue *vq)
+{
+	unsigned int len;
+	void *buf;
+	struct virtio_balloon *vb = vq->vdev->priv;
+
+	do {
+		buf = virtqueue_get_buf(vq, &len);
+		if (buf == &vb->cmd_start || buf == &vb->cmd_stop)
+			continue;
+		free_pages((unsigned long)buf, VIRTIO_BALLOON_HINT_BUF_ORDER);
+	} while (buf);
+}
+
 static int init_vqs(struct virtio_balloon *vb)
 {
-	struct virtqueue *vqs[3];
-	vq_callback_t *callbacks[] = { balloon_ack, balloon_ack, stats_request };
-	static const char * const names[] = { "inflate", "deflate", "stats" };
-	int err, nvqs;
+	struct virtqueue *vqs[VIRTIO_BALLOON_VQ_MAX];
+	vq_callback_t *callbacks[VIRTIO_BALLOON_VQ_MAX];
+	const char *names[VIRTIO_BALLOON_VQ_MAX];
+	int err;
 
 	/*
-	 * We expect two virtqueues: inflate and deflate, and
-	 * optionally stat.
+	 * Inflateq and deflateq are used unconditionally. The names[]
+	 * will be NULL if the related feature is not enabled, which will
+	 * cause no allocation for the corresponding virtqueue in find_vqs.
 	 */
-	nvqs = virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ) ? 3 : 2;
-	err = virtio_find_vqs(vb->vdev, nvqs, vqs, callbacks, names, NULL);
+	callbacks[VIRTIO_BALLOON_VQ_INFLATE] = balloon_ack;
+	names[VIRTIO_BALLOON_VQ_INFLATE] = "inflate";
+	callbacks[VIRTIO_BALLOON_VQ_DEFLATE] = balloon_ack;
+	names[VIRTIO_BALLOON_VQ_DEFLATE] = "deflate";
+	names[VIRTIO_BALLOON_VQ_STATS] = NULL;
+	names[VIRTIO_BALLOON_VQ_FREE_PAGE] = NULL;
+
+	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) {
+		names[VIRTIO_BALLOON_VQ_STATS] = "stats";
+		callbacks[VIRTIO_BALLOON_VQ_STATS] = stats_request;
+	}
+
+	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT)) {
+		names[VIRTIO_BALLOON_VQ_FREE_PAGE] = "free_page_vq";
+		callbacks[VIRTIO_BALLOON_VQ_FREE_PAGE] =
+					virtio_balloon_free_used_hint_buf;
+	}
+
+	err = vb->vdev->config->find_vqs(vb->vdev, VIRTIO_BALLOON_VQ_MAX,
+					 vqs, callbacks, names, NULL, NULL);
 	if (err)
 		return err;
 
-	vb->inflate_vq = vqs[0];
-	vb->deflate_vq = vqs[1];
+	vb->inflate_vq = vqs[VIRTIO_BALLOON_VQ_INFLATE];
+	vb->deflate_vq = vqs[VIRTIO_BALLOON_VQ_DEFLATE];
 	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_STATS_VQ)) {
 		struct scatterlist sg;
 		unsigned int num_stats;
-		vb->stats_vq = vqs[2];
+		vb->stats_vq = vqs[VIRTIO_BALLOON_VQ_STATS];
 
 		/*
 		 * Prime this virtqueue with one buffer so the hypervisor can
@@ -464,9 +545,246 @@ static int init_vqs(struct virtio_balloon *vb)
 		}
 		virtqueue_kick(vb->stats_vq);
 	}
+
+	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
+		vb->free_page_vq = vqs[VIRTIO_BALLOON_VQ_FREE_PAGE];
+
 	return 0;
 }
 
+static int send_start_cmd_id(struct virtio_balloon *vb)
+{
+	struct scatterlist sg;
+	struct virtqueue *vq = vb->free_page_vq;
+	int err;
+
+	virtio_balloon_free_used_hint_buf(vq);
+
+	vb->cmd_start.id = cpu_to_virtio32(vb->vdev, vb->cmd_id_active);
+	vb->cmd_start.size = cpu_to_virtio32(vb->vdev,
+					     MAX_ORDER_NR_PAGES * PAGE_SIZE);
+	sg_init_one(&sg, &vb->cmd_start,
+		    sizeof(struct virtio_balloon_free_page_hints_cmd));
+
+	err = virtqueue_add_outbuf(vq, &sg, 1, &vb->cmd_start, GFP_KERNEL);
+	if (!err)
+		virtqueue_kick(vq);
+	return err;
+}
+
+static int send_stop_cmd_id(struct virtio_balloon *vb)
+{
+	struct scatterlist sg;
+	struct virtqueue *vq = vb->free_page_vq;
+	int err;
+
+	virtio_balloon_free_used_hint_buf(vq);
+
+	vb->cmd_stop.id = cpu_to_virtio32(vb->vdev,
+				VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID);
+	vb->cmd_stop.size = 0;
+	sg_init_one(&sg, &vb->cmd_stop,
+		    sizeof(struct virtio_balloon_free_page_hints_cmd));
+	err = virtqueue_add_outbuf(vq, &sg, 1, &vb->cmd_stop, GFP_KERNEL);
+	if (!err)
+		virtqueue_kick(vq);
+	return err;
+}
+
+static int send_hint_buf(struct virtio_balloon *vb, void *buf,
+			 unsigned int size)
+{
+	int err;
+	struct scatterlist sg;
+	struct virtqueue *vq = vb->free_page_vq;
+
+	virtio_balloon_free_used_hint_buf(vq);
+
+	/*
+	 * If a stop id or a new cmd id was just received from host,
+	 * stop the reporting, return -EINTR to indicate an active stop.
+	 */
+	if (vb->cmd_id_received != vb->cmd_id_active)
+		return -EINTR;
+
+	/* There is always one entry reserved for the cmd id to use. */
+	if (vq->num_free < 2)
+		return -ENOSPC;
+
+	sg_init_one(&sg, buf, size);
+	err = virtqueue_add_inbuf(vq, &sg, 1, buf, GFP_KERNEL);
+	if (!err)
+		virtqueue_kick(vq);
+	return err;
+}
+
+static void virtio_balloon_free_hint_bufs(struct list_head *pages)
+{
+	struct page *page, *next;
+
+	list_for_each_entry_safe(page, next, pages, lru) {
+		__free_pages(page, VIRTIO_BALLOON_HINT_BUF_ORDER);
+		list_del(&page->lru);
+	}
+}
+
+/*
+ * virtio_balloon_send_hints - send buffers of hints to host
+ * @vb: the virtio_balloon struct
+ * @pages: the list of page blocks used as buffers
+ * @hint_num: the total number of hints
+ *
+ * Send buffers of hints to host. This begins by sending a start cmd, which
+ * contains a cmd id received from host and the free page block size in bytes
+ * of each hint. At the end, a stop cmd is sent to host to indicate the end
+ * of this reporting. If host actively requests to stop the reporting, free
+ * the buffers that have not been sent.
+ */
+static void virtio_balloon_send_hints(struct virtio_balloon *vb,
+				      struct list_head *pages,
+				      unsigned long hint_num)
+{
+	struct page *page, *next;
+	void *buf;
+	unsigned int buf_size, hint_size;
+	int err;
+
+	/* Start by sending the received cmd id to host with an outbuf. */
+	err = send_start_cmd_id(vb);
+	if (unlikely(err))
+		goto out_free;
+
+	list_for_each_entry_safe(page, next, pages, lru) {
+		/* We've sent all the hints. */
+		if (!hint_num)
+			break;
+		hint_size = hint_num * sizeof(__le64);
+		buf = page_address(page);
+		buf_size = hint_size > VIRTIO_BALLOON_HINT_BUF_SIZE ?
+				VIRTIO_BALLOON_HINT_BUF_SIZE : hint_size;
+		hint_num -= buf_size / sizeof(__le64);
+		err = send_hint_buf(vb, buf, buf_size);
+		/*
+		 * If host actively stops the reporting or no space to add
+		 * more hint bufs, just stop adding hints and continue to add
+		 * the stop cmd. Other device errors need to bail out with an
+		 * error message.
+		 */
+		if (unlikely(err == -EINTR || err == -ENOSPC))
+			break;
+		else if (unlikely(err))
+			goto out_free;
+		/*
+		 * Remove the buffer from the list only when it has been given
+		 * to host. Otherwise, it will stay on the list and will be
+		 * freed via virtio_balloon_free_hint_bufs.
+		 */
+		list_del(&page->lru);
+	}
+
+	/* End by sending a stop id to host with an outbuf. */
+	err = send_stop_cmd_id(vb);
+out_free:
+	if (err)
+		dev_err(&vb->vdev->dev, "%s: err = %d\n", __func__, err);
+	/* Free all the buffers that are not sent to host. */
+	virtio_balloon_free_hint_bufs(pages);
+}
+
+/*
+ * Allocate a list of buffers to load free page hints. Those buffers are
+ * allocated based on the estimation of the max number of free page blocks
+ * that the system may have, so that they are sufficient to store all the
+ * free page addresses.
+ *
+ * Return 0 on success, or -ENOMEM on failure.
+ */
+static int virtio_balloon_alloc_hint_bufs(struct list_head *pages)
+{
+	struct page *page;
+	unsigned long max_entries, entries_per_page, entries_per_buf,
+		      max_buf_num;
+	int i;
+
+	max_entries = max_free_page_blocks(VIRTIO_BALLOON_HINT_BUF_ORDER);
+	entries_per_page = PAGE_SIZE / sizeof(__le64);
+	entries_per_buf = entries_per_page * VIRTIO_BALLOON_HINT_BUF_PAGES;
+	max_buf_num = max_entries / entries_per_buf +
+		      !!(max_entries % entries_per_buf);
+
+	for (i = 0; i < max_buf_num; i++) {
+		page = alloc_pages(__GFP_ATOMIC | __GFP_NOMEMALLOC,
+				   VIRTIO_BALLOON_HINT_BUF_ORDER);
+		if (!page) {
+			/*
+			 * If any one of the buffers fails to be allocated, it
+			 * implies that the free list that we are interested
+			 * in is empty, and there is no need to continue the
+			 * reporting. So just free what's allocated and return
+			 * -ENOMEM.
+			 */
+			virtio_balloon_free_hint_bufs(pages);
+			return -ENOMEM;
+		}
+		list_add(&page->lru, pages);
+	}
+
+	return 0;
+}
+
+/*
+ * virtio_balloon_load_hints - load free page hints into buffers
+ * @vb: the virtio_balloon struct
+ * @pages: the list of page blocks used as buffers
+ *
+ * Only free page blocks of MAX_ORDER - 1 are loaded into the buffers.
+ * Each buffer size is MAX_ORDER_NR_PAGES * PAGE_SIZE (e.g. 4MB on x86).
+ * Failing to allocate such a buffer essentially implies that no such free
+ * page blocks could be reported.
+ *
+ * Return the total number of hints loaded into the buffers.
+ */
+static unsigned long virtio_balloon_load_hints(struct virtio_balloon *vb,
+					       struct list_head *pages)
+{
+	unsigned long loaded_hints = 0;
+	int ret;
+
+	do {
+		ret = virtio_balloon_alloc_hint_bufs(pages);
+		if (ret)
+			return 0;
+
+		ret = get_from_free_page_list(VIRTIO_BALLOON_HINT_BUF_ORDER,
+					      pages,
+					      VIRTIO_BALLOON_HINT_BUF_SIZE,
+					      &loaded_hints);
+		/*
+		 * Retry in the case that memory is onlined quickly, which
+		 * causes the allocated buffers to be insufficient to store
+		 * all the free page addresses. Free the hint buffers before
+		 * retry.
+		 */
+		if (unlikely(ret == -ENOSPC))
+			virtio_balloon_free_hint_bufs(pages);
+	} while (ret == -ENOSPC);
+
+	return loaded_hints;
+}
+
+static void report_free_page_func(struct work_struct *work)
+{
+	struct virtio_balloon *vb;
+	unsigned long loaded_hints = 0;
+	LIST_HEAD(pages);
+
+	vb = container_of(work, struct virtio_balloon, report_free_page_work);
+	vb->cmd_id_active = vb->cmd_id_received;
+
+	loaded_hints = virtio_balloon_load_hints(vb, &pages);
+	if (loaded_hints)
+		virtio_balloon_send_hints(vb, &pages, loaded_hints);
+}
+
 #ifdef CONFIG_BALLOON_COMPACTION
 /*
  * virtballoon_migratepage - perform the balloon page migration on behalf of
@@ -580,18 +898,38 @@ static int virtballoon_probe(struct virtio_device *vdev)
 	if (err)
 		goto out_free_vb;
 
+	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT)) {
+		/*
+		 * There is always one entry reserved for cmd id, so the ring
+		 * size needs to be at least two to report free page hints.
+		 */
+		if (virtqueue_get_vring_size(vb->free_page_vq) < 2) {
+			err = -ENOSPC;
+			goto out_del_vqs;
+		}
+		vb->balloon_wq = alloc_workqueue("balloon-wq",
+					WQ_FREEZABLE | WQ_CPU_INTENSIVE, 0);
+		if (!vb->balloon_wq) {
+			err = -ENOMEM;
+			goto out_del_vqs;
+		}
+		INIT_WORK(&vb->report_free_page_work, report_free_page_func);
+		vb->cmd_id_received = VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID;
+		vb->cmd_id_active = VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID;
+	}
+
 	vb->nb.notifier_call = virtballoon_oom_notify;
 	vb->nb.priority = VIRTBALLOON_OOM_NOTIFY_PRIORITY;
 	err = register_oom_notifier(&vb->nb);
 	if (err < 0)
-		goto out_del_vqs;
+		goto out_del_balloon_wq;
 
 #ifdef CONFIG_BALLOON_COMPACTION
 	balloon_mnt = kern_mount(&balloon_fs);
 	if (IS_ERR(balloon_mnt)) {
 		err = PTR_ERR(balloon_mnt);
 		unregister_oom_notifier(&vb->nb);
-		goto out_del_vqs;
+		goto out_del_balloon_wq;
 	}
 
 	vb->vb_dev_info.migratepage = virtballoon_migratepage;
@@ -601,7 +939,7 @@ static int virtballoon_probe(struct virtio_device *vdev)
 		kern_unmount(balloon_mnt);
 		unregister_oom_notifier(&vb->nb);
 		vb->vb_dev_info.inode = NULL;
-		goto out_del_vqs;
+		goto out_del_balloon_wq;
 	}
 	vb->vb_dev_info.inode->i_mapping->a_ops = &balloon_aops;
 #endif
@@ -612,6 +950,9 @@ static int virtballoon_probe(struct virtio_device *vdev)
 	virtballoon_changed(vdev);
 	return 0;
 
+out_del_balloon_wq:
+	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
+		destroy_workqueue(vb->balloon_wq);
 out_del_vqs:
 	vdev->config->del_vqs(vdev);
 out_free_vb:
@@ -645,6 +986,11 @@ static void virtballoon_remove(struct virtio_device *vdev)
 	cancel_work_sync(&vb->update_balloon_size_work);
 	cancel_work_sync(&vb->update_balloon_stats_work);
 
+	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT)) {
+		cancel_work_sync(&vb->report_free_page_work);
+		destroy_workqueue(vb->balloon_wq);
+	}
+
 	remove_common(vb);
 #ifdef CONFIG_BALLOON_COMPACTION
 	if (vb->vb_dev_info.inode)
@@ -696,6 +1042,7 @@ static unsigned int features[] = {
 	VIRTIO_BALLOON_F_MUST_TELL_HOST,
 	VIRTIO_BALLOON_F_STATS_VQ,
 	VIRTIO_BALLOON_F_DEFLATE_ON_OOM,
+	VIRTIO_BALLOON_F_FREE_PAGE_HINT,
 };
 
 static struct virtio_driver virtio_balloon_driver = {
diff --git a/include/uapi/linux/virtio_balloon.h b/include/uapi/linux/virtio_balloon.h
index 13b8cb5..b77919b 100644
--- a/include/uapi/linux/virtio_balloon.h
+++ b/include/uapi/linux/virtio_balloon.h
@@ -34,15 +34,26 @@
 #define VIRTIO_BALLOON_F_MUST_TELL_HOST	0 /* Tell before reclaiming pages */
 #define VIRTIO_BALLOON_F_STATS_VQ	1 /* Memory Stats virtqueue */
 #define VIRTIO_BALLOON_F_DEFLATE_ON_OOM	2 /* Deflate balloon on OOM */
+#define VIRTIO_BALLOON_F_FREE_PAGE_HINT	3 /* VQ to report free pages */
 
 /* Size of a PFN in the balloon interface. */
 #define VIRTIO_BALLOON_PFN_SHIFT 12
 
+#define VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID	0
 struct virtio_balloon_config {
 	/* Number of pages host wants Guest to give up. */
 	__u32 num_pages;
 	/* Number of pages we've actually got in balloon. */
 	__u32 actual;
+	/* Free page report command id, readonly by guest */
+	__u32 free_page_report_cmd_id;
+};
+
+struct virtio_balloon_free_page_hints_cmd {
+	/* The command id received from host */
+	__virtio32 id;
+	/* The free page block size in bytes */
+	__virtio32 size;
 };
 
 #define VIRTIO_BALLOON_S_SWAP_IN  0   /* Amount of memory swapped in */
-- 
2.7.4
Wei Wang
2018-Jul-10 09:31 UTC
[PATCH v35 4/5] mm/page_poison: expose page_poisoning_enabled to kernel modules
In some usages, e.g. virtio-balloon, a kernel module needs to know if
page poisoning is in use. This patch exposes the page_poisoning_enabled
function to kernel modules.

Signed-off-by: Wei Wang <wei.w.wang at intel.com>
Cc: Andrew Morton <akpm at linux-foundation.org>
Cc: Michal Hocko <mhocko at kernel.org>
Cc: Michael S. Tsirkin <mst at redhat.com>
Acked-by: Andrew Morton <akpm at linux-foundation.org>
---
 mm/page_poison.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/mm/page_poison.c b/mm/page_poison.c
index aa2b3d3..830f604 100644
--- a/mm/page_poison.c
+++ b/mm/page_poison.c
@@ -17,6 +17,11 @@ static int __init early_page_poison_param(char *buf)
 }
 early_param("page_poison", early_page_poison_param);
 
+/**
+ * page_poisoning_enabled - check if page poisoning is enabled
+ *
+ * Return true if page poisoning is enabled, or false if not.
+ */
 bool page_poisoning_enabled(void)
 {
 	/*
@@ -29,6 +34,7 @@ bool page_poisoning_enabled(void)
 		(!IS_ENABLED(CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC) &&
 		debug_pagealloc_enabled()));
 }
+EXPORT_SYMBOL_GPL(page_poisoning_enabled);
 
 static void poison_page(struct page *page)
 {
--
2.7.4
Wei Wang
2018-Jul-10 09:31 UTC
[PATCH v35 5/5] virtio-balloon: VIRTIO_BALLOON_F_PAGE_POISON
The VIRTIO_BALLOON_F_PAGE_POISON feature bit is used to indicate if the
guest is using page poisoning. Guest writes to the poison_val config
field to tell host about the page poisoning value that is in use.

Suggested-by: Michael S. Tsirkin <mst at redhat.com>
Signed-off-by: Wei Wang <wei.w.wang at intel.com>
Cc: Michael S. Tsirkin <mst at redhat.com>
Cc: Michal Hocko <mhocko at suse.com>
Cc: Andrew Morton <akpm at linux-foundation.org>
---
 drivers/virtio/virtio_balloon.c     | 10 ++++++++++
 include/uapi/linux/virtio_balloon.h |  3 +++
 2 files changed, 13 insertions(+)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index 8754154..dd61660 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -869,6 +869,7 @@ static struct file_system_type balloon_fs = {
 static int virtballoon_probe(struct virtio_device *vdev)
 {
 	struct virtio_balloon *vb;
+	__u32 poison_val;
 	int err;
 
 	if (!vdev->config->get) {
@@ -916,6 +917,11 @@ static int virtballoon_probe(struct virtio_device *vdev)
 		INIT_WORK(&vb->report_free_page_work, report_free_page_func);
 		vb->cmd_id_received = VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID;
 		vb->cmd_id_active = VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID;
+		if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_PAGE_POISON)) {
+			memset(&poison_val, PAGE_POISON, sizeof(poison_val));
+			virtio_cwrite(vb->vdev, struct virtio_balloon_config,
+				      poison_val, &poison_val);
+		}
 	}
 
 	vb->nb.notifier_call = virtballoon_oom_notify;
@@ -1034,6 +1040,9 @@ static int virtballoon_restore(struct virtio_device *vdev)
 
 static int virtballoon_validate(struct virtio_device *vdev)
 {
+	if (!page_poisoning_enabled())
+		__virtio_clear_bit(vdev, VIRTIO_BALLOON_F_PAGE_POISON);
+
 	__virtio_clear_bit(vdev, VIRTIO_F_IOMMU_PLATFORM);
 	return 0;
 }
@@ -1043,6 +1052,7 @@ static unsigned int features[] = {
 	VIRTIO_BALLOON_F_STATS_VQ,
 	VIRTIO_BALLOON_F_DEFLATE_ON_OOM,
 	VIRTIO_BALLOON_F_FREE_PAGE_HINT,
+	VIRTIO_BALLOON_F_PAGE_POISON,
 };
 
 static struct virtio_driver virtio_balloon_driver = {
diff --git a/include/uapi/linux/virtio_balloon.h b/include/uapi/linux/virtio_balloon.h
index b77919b..97415ba 100644
--- a/include/uapi/linux/virtio_balloon.h
+++ b/include/uapi/linux/virtio_balloon.h
@@ -35,6 +35,7 @@
 #define VIRTIO_BALLOON_F_STATS_VQ	1 /* Memory Stats virtqueue */
 #define VIRTIO_BALLOON_F_DEFLATE_ON_OOM	2 /* Deflate balloon on OOM */
 #define VIRTIO_BALLOON_F_FREE_PAGE_HINT	3 /* VQ to report free pages */
+#define VIRTIO_BALLOON_F_PAGE_POISON	4 /* Guest is using page poisoning */
 
 /* Size of a PFN in the balloon interface. */
 #define VIRTIO_BALLOON_PFN_SHIFT 12
@@ -47,6 +48,8 @@ struct virtio_balloon_config {
 	__u32 actual;
 	/* Free page report command id, readonly by guest */
 	__u32 free_page_report_cmd_id;
+	/* Stores PAGE_POISON if page poisoning is in use */
+	__u32 poison_val;
 };
 
 struct virtio_balloon_free_page_hints_cmd {
--
2.7.4
Wang, Wei W
2018-Jul-10 10:16 UTC
[PATCH v35 1/5] mm: support to get hints of free page blocks
On Tuesday, July 10, 2018 5:31 PM, Wang, Wei W wrote:
> Subject: [PATCH v35 1/5] mm: support to get hints of free page blocks
>
> This patch adds support to get free page blocks from a free page list.
> The physical addresses of the blocks are stored to a list of buffers
> passed from the caller. The obtained free page blocks are hints about
> free pages, because there is no guarantee that they are still on the
> free page list after the function returns.
>
> One use example of this patch is to accelerate live migration by
> skipping the transfer of free pages reported from the guest. A popular
> method used by the hypervisor to track which part of memory is written
> during live migration is to write-protect all the guest memory. So,
> those pages that are hinted as free pages but are written after this
> function returns will be captured by the hypervisor, and they will be
> added to the next round of memory transfer.
>
> Suggested-by: Linus Torvalds <torvalds at linux-foundation.org>
> Signed-off-by: Wei Wang <wei.w.wang at intel.com>
> Signed-off-by: Liang Li <liang.z.li at intel.com>
> Cc: Michal Hocko <mhocko at kernel.org>
> Cc: Andrew Morton <akpm at linux-foundation.org>
> Cc: Michael S. Tsirkin <mst at redhat.com>
> Cc: Linus Torvalds <torvalds at linux-foundation.org>
> ---
>  include/linux/mm.h |  3 ++
>  mm/page_alloc.c    | 98 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 101 insertions(+)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index a0fbb9f..5ce654f 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2007,6 +2007,9 @@ extern void free_area_init(unsigned long * zones_size);
>  extern void free_area_init_node(int nid, unsigned long * zones_size,
>  		unsigned long zone_start_pfn, unsigned long *zholes_size);
>  extern void free_initmem(void);
> +unsigned long max_free_page_blocks(int order);
> +int get_from_free_page_list(int order, struct list_head *pages,
> +			    unsigned int size, unsigned long *loaded_num);
>
>  /*
>   * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 1521100..b67839b 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -5043,6 +5043,104 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
>  	show_swap_cache_info();
>  }
>
> +/**
> + * max_free_page_blocks - estimate the max number of free page blocks
> + * @order: the order of the free page blocks to estimate
> + *
> + * This function gives a rough estimation of the possible maximum number of
> + * free page blocks a free list may have. The estimation works on an
> + * assumption that all the system pages are on that list.
> + *
> + * Context: Any context.
> + *
> + * Return: The largest number of free page blocks that the free list can have.
> + */
> +unsigned long max_free_page_blocks(int order)
> +{
> +	return totalram_pages / (1 << order);
> +}
> +EXPORT_SYMBOL_GPL(max_free_page_blocks);
> +
> +/**
> + * get_from_free_page_list - get hints of free pages from a free page list
> + * @order: the order of the free page list to check
> + * @pages: the list of page blocks used as buffers to load the addresses
> + * @size: the size of each buffer in bytes
> + * @loaded_num: the number of addresses loaded to the buffers
> + *
> + * This function offers hints about free pages. The addresses of free page
> + * blocks are stored to the list of buffers passed from the caller. There is
> + * no guarantee that the obtained free pages are still on the free page list
> + * after the function returns. pfn_to_page on the obtained free pages is
> + * strongly discouraged and if there is an absolute need for that, make sure
> + * to contact MM people to discuss potential problems.
> + *
> + * The addresses are currently stored to a buffer in little endian. This
> + * avoids the overhead of converting endianness by the caller who needs data
> + * in the little endian format. Big endian support can be added on demand in
> + * the future.
> + *
> + * Context: Process context.
> + *
> + * Return: 0 if all the free page block addresses are stored to the buffers;
> + *         -ENOSPC if the buffers are not sufficient to store all the
> + *         addresses; or -EINVAL if an unexpected argument is received (e.g.
> + *         incorrect @order, empty buffer list).
> + */
> +int get_from_free_page_list(int order, struct list_head *pages,
> +			    unsigned int size, unsigned long *loaded_num)
> +{

Hi Linus,

We took your original suggestion - pass in pre-allocated buffers to load
the addresses (now we use a list of pre-allocated page blocks as buffers).
Hope that suggestion is still acceptable (the advantage of this method
was explained here: https://lkml.org/lkml/2018/6/28/184).

Look forward to getting your feedback. Thanks.
Best,
Wei
Linus Torvalds
2018-Jul-10 17:33 UTC
[PATCH v35 1/5] mm: support to get hints of free page blocks
NAK.

On Tue, Jul 10, 2018 at 2:56 AM Wei Wang <wei.w.wang at intel.com> wrote:
>
> +
> +	buf_page = list_first_entry_or_null(pages, struct page, lru);
> +	if (!buf_page)
> +		return -EINVAL;
> +	buf = (__le64 *)page_address(buf_page);

Stop this garbage.

Why the hell would you pass in some crazy "list of pages" that uses that
lru list?

That's just insane shit.

Just pass in an array to fill in. No idiotic games like this with odd
list entries (what's the locking?) and crazy casting.

So if you want an array of page addresses, pass that in as such. If you
want to do it in a page, do it with

	u64 *array = page_address(page);
	int nr = PAGE_SIZE / sizeof(u64);

and now you pass that array in to the thing. None of this completely
insane crazy crap interfaces.

Plus, I still haven't heard an explanation for why you want so many
pages in the first place, and why you want anything but MAX_ORDER-1.

So no. This kind of unnecessarily complex code with completely insane
calling interfaces does not make it into the VM layer.

Maybe that crazy "let's pass a chain of pages that uses the lru list"
makes sense to the virtio-balloon code. But you need to understand that
it makes ZERO conceptual sense to anybody else. And the core VM code is
about a million times more important than the balloon code in this case,
so you had better make the interface make sense to *it*.

	Linus