Displaying 15 results from an estimated 15 matches for "balloon_get_free_pag".
2016 Mar 03
2
[RFC qemu 4/4] migration: filter out guest's free pages in ram bulk stage
...cu->unsentmap = bitmap_new(ram_bitmap_pages);
@@ -1945,6 +1971,20 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
DIRTY_MEMORY_MIGRATION);
}
memory_global_dirty_log_start();
+
+ if (balloon_free_pages_support() &&
+ balloon_get_free_pages(migration_bitmap_rcu->free_pages_bmap,
+ &free_pages_count) == 0) {
+ qemu_mutex_unlock_iothread();
+ while (balloon_get_free_pages(migration_bitmap_rcu->free_pages_bmap,
+ &free_pages_count) == 0) {
+...
2016 Jul 27
2
[PATCH v2 repost 7/7] virtio-balloon: tell host vm's free page info
...mutex_lock(&vb->balloon_lock);
> + while (pfn < max_pfn) {
> + memset(vb->page_bitmap, 0, vb->bmap_len);
> + ret = get_free_pages(pfn, pfn + vb->pfn_limit,
> + vb->page_bitmap, vb->bmap_len * BITS_PER_BYTE);
> + hdr->cmd = cpu_to_virtio16(vb->vdev, BALLOON_GET_FREE_PAGES);
> + hdr->page_shift = cpu_to_virtio16(vb->vdev, PAGE_SHIFT);
> + hdr->req_id = cpu_to_virtio64(vb->vdev, req_id);
> + hdr->start_pfn = cpu_to_virtio64(vb->vdev, pfn);
> + bmap_len = vb->pfn_limit / BITS_PER_BYTE;
> + if (!ret) {
> + hdr->flag = c...
2016 Mar 03
0
[RFC qemu 4/4] migration: filter out guest's free pages in ram bulk stage
...ons(-)
>
> @@ -1945,6 +1971,20 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
> DIRTY_MEMORY_MIGRATION);
> }
> memory_global_dirty_log_start();
> +
> + if (balloon_free_pages_support() &&
> + balloon_get_free_pages(migration_bitmap_rcu->free_pages_bmap,
> + &free_pages_count) == 0) {
> + qemu_mutex_unlock_iothread();
> + while (balloon_get_free_pages(migration_bitmap_rcu->free_pages_bmap,
> + &free_pag...
2016 Mar 03
0
[Qemu-devel] [RFC qemu 4/4] migration: filter out guest's free pages in ram bulk stage
...deletions(-)
> @@ -1945,6 +1971,20 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
> DIRTY_MEMORY_MIGRATION);
> }
> memory_global_dirty_log_start();
> +
> + if (balloon_free_pages_support() &&
> + balloon_get_free_pages(migration_bitmap_rcu->free_pages_bmap,
> + &free_pages_count) == 0) {
> + qemu_mutex_unlock_iothread();
> + while (balloon_get_free_pages(migration_bitmap_rcu->free_pages_bmap,
> + &free_pag...
2016 Mar 03
16
[RFC qemu 0/4] A PV solution for live migration optimization
The current QEMU live migration implementation marks all of the
guest's RAM pages as dirtied in the ram bulk stage; all these pages
will be processed, and that takes quite a lot of CPU cycles.
From the guest's point of view, the content of free pages does not
matter. We can make use of this fact and skip processing the free
pages in the ram bulk stage, which can save a lot of CPU cycles
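
The idea behind the series can be sketched in a few lines. This is only an
illustration of the technique, not code from the patches: the function and
variable names (filter_out_guest_free_pages, mig_bitmap, free_bitmap, nbits)
are made up, and the real code works on the RCU-protected migration bitmap
shown in the [RFC qemu 4/4] snippets above.

    /* Sketch: every page the guest's balloon driver reports as free is
     * cleared from the migration dirty bitmap, so the bulk stage never
     * reads or sends it. */
    #include <stddef.h>
    #include <limits.h>

    #define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

    static void filter_out_guest_free_pages(unsigned long *mig_bitmap,
                                            const unsigned long *free_bitmap,
                                            size_t nbits)
    {
        size_t i, nwords = (nbits + BITS_PER_LONG - 1) / BITS_PER_LONG;

        for (i = 0; i < nwords; i++) {
            /* dirty &= ~free: a page is migrated only if it is dirty and
             * not reported free by the guest. */
            mig_bitmap[i] &= ~free_bitmap[i];
        }
    }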
2016 Jul 27
0
[PATCH v2 repost 7/7] virtio-balloon: tell host vm's free page info
...x_pfn = get_max_pfn();
+ mutex_lock(&vb->balloon_lock);
+ while (pfn < max_pfn) {
+ memset(vb->page_bitmap, 0, vb->bmap_len);
+ ret = get_free_pages(pfn, pfn + vb->pfn_limit,
+ vb->page_bitmap, vb->bmap_len * BITS_PER_BYTE);
+ hdr->cmd = cpu_to_virtio16(vb->vdev, BALLOON_GET_FREE_PAGES);
+ hdr->page_shift = cpu_to_virtio16(vb->vdev, PAGE_SHIFT);
+ hdr->req_id = cpu_to_virtio64(vb->vdev, req_id);
+ hdr->start_pfn = cpu_to_virtio64(vb->vdev, pfn);
+ bmap_len = vb->pfn_limit / BITS_PER_BYTE;
+ if (!ret) {
+ hdr->flag = cpu_to_virtio16(vb->vdev,
+...
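
Pieced together from the hdr-> assignments in this snippet, the request header
the guest hands to the host looks roughly like the struct below. Only the
field names come from the snippet; the struct name, field order, the flag
semantics and the bmap_len field (which the snippet shows only as a local
variable) are assumptions, not the definition from the patch.

    /* Hypothetical reconstruction of the free-page request header. */
    #include <linux/virtio_types.h>

    struct balloon_bmap_hdr_sketch {
        __virtio16 cmd;        /* e.g. BALLOON_GET_FREE_PAGES */
        __virtio16 page_shift; /* guest PAGE_SHIFT, so the host knows the page size */
        __virtio16 flag;       /* set according to the get_free_pages() result */
        __virtio64 req_id;     /* request id echoed back to the host */
        __virtio64 start_pfn;  /* first pfn covered by this bitmap chunk */
        __virtio64 bmap_len;   /* assumed: length of the bitmap chunk in bytes */
    };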
2016 Mar 03
0
[RFC qemu 2/4] virtio-balloon: Add a new feature to balloon device
...oid qmp_balloon(int64_t target, Error **errp)
> trace_balloon_event(balloon_opaque, target);
> balloon_event_fn(balloon_opaque, target);
> }
> +
> +bool balloon_free_pages_support(void)
> +{
> + return balloon_free_pages_fn ? true : false;
> +}
> +
> +int balloon_get_free_pages(unsigned long *free_pages_bitmap,
> + unsigned long *free_pages_count)
> +{
> + if (!balloon_free_pages_fn) {
> + return -1;
> + }
> +
> + if (!free_pages_bitmap || !free_pages_count) {
> + return -1;
> + }
> +
>...
2016 Mar 03
2
[RFC qemu 2/4] virtio-balloon: Add a new feature to balloon device
...balloon_opaque = NULL;
}
@@ -116,3 +122,23 @@ void qmp_balloon(int64_t target, Error **errp)
trace_balloon_event(balloon_opaque, target);
balloon_event_fn(balloon_opaque, target);
}
+
+bool balloon_free_pages_support(void)
+{
+ return balloon_free_pages_fn ? true : false;
+}
+
+int balloon_get_free_pages(unsigned long *free_pages_bitmap,
+ unsigned long *free_pages_count)
+{
+ if (!balloon_free_pages_fn) {
+ return -1;
+ }
+
+ if (!free_pages_bitmap || !free_pages_count) {
+ return -1;
+ }
+
+ return balloon_free_pages_fn(balloon_opaque,
+...
2016 Jul 28
0
[PATCH v2 repost 7/7] virtio-balloon: tell host vm's free page info
..._lock);
> > + while (pfn < max_pfn) {
> > + memset(vb->page_bitmap, 0, vb->bmap_len);
> > + ret = get_free_pages(pfn, pfn + vb->pfn_limit,
> > + vb->page_bitmap, vb->bmap_len * BITS_PER_BYTE);
> > + hdr->cmd = cpu_to_virtio16(vb->vdev,
> BALLOON_GET_FREE_PAGES);
> > + hdr->page_shift = cpu_to_virtio16(vb->vdev, PAGE_SHIFT);
> > + hdr->req_id = cpu_to_virtio64(vb->vdev, req_id);
> > + hdr->start_pfn = cpu_to_virtio64(vb->vdev, pfn);
> > + bmap_len = vb->pfn_limit / BITS_PER_BYTE;
> > + if (!ret) {...
2016 Jul 27
14
[PATCH v2 repost 0/7] Extend virtio-balloon for fast (de)inflating & fast live migration
This patchset is for the kernel and contains two parts of changes to
the virtio-balloon.
One is a change for speeding up the inflating & deflating process;
the main idea of this optimization is to use a bitmap to send the page
information to the host instead of the PFNs, to reduce the overhead of
virtio data transmission, address translation and madvise(). This can
help to improve the performance by
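
To make the transmission argument concrete (illustrative arithmetic only,
assuming 4 KB pages and the legacy 4-byte-per-PFN encoding): describing 1 GB
of guest memory covers 262,144 pages, i.e. 262,144 * 4 bytes = 1 MB of PFN
data, whereas a bitmap with one bit per page needs 262,144 / 8 bytes = 32 KB,
a 32x reduction before any savings from fewer virtio transactions or batched
madvise() calls are counted.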
2016 Jun 29
11
[PATCH v2 kernel 0/7] Extend virtio-balloon for fast (de)inflating & fast live migration
This patch set contains two parts of changes to the virtio-balloon.
One is a change for speeding up the inflating & deflating process;
the main idea of this optimization is to use a bitmap to send the page
information to the host instead of the PFNs, to reduce the overhead of
virtio data transmission, address translation and madvise(). This can
help to improve the performance by about 85%.
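
The madvise() part of the overhead is the easiest to picture. None of the
host-side handling appears in these results, so the following is only a
sketch of the general idea under assumed names (discard_free_runs, host_base,
page_size): walk the guest-provided bitmap, collapse consecutive set bits
into runs, and issue one madvise(MADV_DONTNEED) per run instead of one call
per page.

    /* Sketch: batch madvise() over contiguous runs of free pages. */
    #include <stddef.h>
    #include <limits.h>
    #include <sys/mman.h>

    #define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

    static int test_bit_sketch(const unsigned long *bm, size_t i)
    {
        return (bm[i / BITS_PER_LONG] >> (i % BITS_PER_LONG)) & 1;
    }

    static void discard_free_runs(void *host_base, size_t page_size,
                                  const unsigned long *bitmap, size_t nbits)
    {
        size_t i = 0;

        while (i < nbits) {
            if (!test_bit_sketch(bitmap, i)) {
                i++;
                continue;
            }
            size_t start = i;
            while (i < nbits && test_bit_sketch(bitmap, i)) {
                i++;
            }
            /* one call covers the whole run of free pages */
            madvise((char *)host_base + start * page_size,
                    (i - start) * page_size, MADV_DONTNEED);
        }
    }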