Displaying 5 results from an estimated 5 matches for "virtio_balloon_free_pag".
2016 Mar 03 · 2 · [RFC qemu 2/4] virtio-balloon: Add a new feature to balloon device
...e(&f, VIRTIO_BALLOON_F_STATS_VQ);
+    virtio_add_feature(&f, VIRTIO_BALLOON_F_GET_FREE_PAGES);
     return f;
 }
@@ -372,6 +410,45 @@ static void virtio_balloon_stat(void *opaque, BalloonInfo *info)
                                    VIRTIO_BALLOON_PFN_SHIFT);
 }
+static int virtio_balloon_free_pages(void *opaque,
+                                     unsigned long *free_pages_bitmap,
+                                     unsigned long *free_pages_count)
+{
+    VirtIOBalloon *s = opaque;
+    VirtIODevice *vdev = VIRTIO_DEVICE(s);
+    VirtQueueElement *elem = s->free_pages_vq_elem;
+...
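The snippet above only shows the opening of the new host-side callback. As a rough, standalone illustration of the data structure it appears to fill (one bit per guest page frame number, with PFN = guest physical address >> VIRTIO_BALLOON_PFN_SHIFT), here is a minimal sketch in plain C. This is not the RFC patch's code: mark_page_free, count_free_pages, and the 1 MiB toy guest size are made up for illustration; only VIRTIO_BALLOON_PFN_SHIFT (12, i.e. 4 KiB pages) comes from the virtio-balloon headers.

/*
 * Illustrative sketch only -- not the code from the RFC patch above.
 * It models the free-page bitmap that a callback like
 * virtio_balloon_free_pages() could fill: one bit per guest page frame
 * number (PFN), where PFN = gpa >> VIRTIO_BALLOON_PFN_SHIFT.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define VIRTIO_BALLOON_PFN_SHIFT 12               /* 4 KiB guest pages */
#define BITS_PER_LONG            (8 * sizeof(unsigned long))
#define TOY_GUEST_RAM            (1u << 20)       /* 1 MiB toy guest */
#define TOY_NR_PFNS              (TOY_GUEST_RAM >> VIRTIO_BALLOON_PFN_SHIFT)

/* Mark the page containing guest physical address 'gpa' as free. */
static void mark_page_free(unsigned long *bitmap, uint64_t gpa)
{
    uint64_t pfn = gpa >> VIRTIO_BALLOON_PFN_SHIFT;
    bitmap[pfn / BITS_PER_LONG] |= 1UL << (pfn % BITS_PER_LONG);
}

/* Count how many pages the bitmap reports as free. */
static unsigned long count_free_pages(const unsigned long *bitmap,
                                      unsigned long nr_pfns)
{
    unsigned long count = 0;
    for (unsigned long pfn = 0; pfn < nr_pfns; pfn++) {
        if (bitmap[pfn / BITS_PER_LONG] & (1UL << (pfn % BITS_PER_LONG))) {
            count++;
        }
    }
    return count;
}

int main(void)
{
    unsigned long bitmap[(TOY_NR_PFNS + BITS_PER_LONG - 1) / BITS_PER_LONG];

    memset(bitmap, 0, sizeof(bitmap));
    mark_page_free(bitmap, 0x10000);   /* pretend pages 16 and 17 are free */
    mark_page_free(bitmap, 0x11000);

    printf("free pages reported: %lu\n", count_free_pages(bitmap, TOY_NR_PFNS));
    return 0;
}

Built with any C99 compiler, the example prints "free pages reported: 2"; the bitmap stays compact (one bit per page), which is why a whole guest's free-page state can be handed to QEMU in a single buffer.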
2016 Mar 03 · 0 · [RFC qemu 2/4] virtio-balloon: Add a new feature to balloon device
...virtio_add_feature(&f, VIRTIO_BALLOON_F_GET_FREE_PAGES);
>      return f;
>  }
>
> @@ -372,6 +410,45 @@ static void virtio_balloon_stat(void *opaque, BalloonInfo *info)
>                                     VIRTIO_BALLOON_PFN_SHIFT);
>  }
>
> +static int virtio_balloon_free_pages(void *opaque,
> +                                     unsigned long *free_pages_bitmap,
> +                                     unsigned long *free_pages_count)
> +{
> +    VirtIOBalloon *s = opaque;
> +    VirtIODevice *vdev = VIRTIO_DEVICE(s);
> +    VirtQueueElement *elem = s...
2016 Mar 03 · 0 · [RFC qemu 2/4] virtio-balloon: Add a new feature to balloon device
...                                              |  81 ++++++++++++++++++++++++-
>  include/hw/virtio/virtio-balloon.h              |  17 +++++-
>  include/standard-headers/linux/virtio_balloon.h |   1 +
>  include/sysemu/balloon.h                        |  10 ++-
>  5 files changed, 134 insertions(+), 5 deletions(-)
> +static int virtio_balloon_free_pages(void *opaque,
> +                                     unsigned long *free_pages_bitmap,
> +                                     unsigned long *free_pages_count)
> +{
> +    VirtIOBalloon *s = opaque;
> +    VirtIODevice *vdev = VIRTIO_DEVICE(s);
> +    VirtQueueElement *elem = s...
2016 Mar 03 · 16 · [RFC qemu 0/4] A PV solution for live migration optimization
The current QEMU live migration implementation marks all of the guest's RAM pages
as dirty in the RAM bulk stage; all of these pages are then processed, which takes
quite a lot of CPU cycles.
From the guest's point of view, the content of free pages does not matter. We can
make use of this fact and skip processing the free pages in the RAM bulk stage,
which can save a lot of CPU cycles.
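As a rough sketch of that idea (not QEMU's actual migration code; skip_free_pages, test_bit, clear_bit, and the toy bitmaps below are illustrative names, and the patch set itself may plumb this differently), the bulk stage could start from an all-dirty migration bitmap and clear the bits for every page the guest reported as free, so those pages are never read, zero-checked, or sent:

/*
 * Illustrative sketch only -- not QEMU's migration/ram.c.
 * migration_bitmap: one bit per guest page, all set at the start of the
 *                   bulk stage ("everything is dirty").
 * free_bitmap:      one bit per guest page, set where the guest reported
 *                   the page as free via the balloon device.
 */
#include <stdio.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))

static int test_bit(const unsigned long *map, unsigned long bit)
{
    return (map[bit / BITS_PER_LONG] >> (bit % BITS_PER_LONG)) & 1;
}

static void clear_bit(unsigned long *map, unsigned long bit)
{
    map[bit / BITS_PER_LONG] &= ~(1UL << (bit % BITS_PER_LONG));
}

/* Drop guest-reported free pages from the set of pages to migrate. */
static unsigned long skip_free_pages(unsigned long *migration_bitmap,
                                     const unsigned long *free_bitmap,
                                     unsigned long nr_pages)
{
    unsigned long skipped = 0;

    for (unsigned long pfn = 0; pfn < nr_pages; pfn++) {
        if (test_bit(free_bitmap, pfn) && test_bit(migration_bitmap, pfn)) {
            clear_bit(migration_bitmap, pfn);   /* don't send this page */
            skipped++;
        }
    }
    return skipped;
}

int main(void)
{
    unsigned long migration_bitmap[1] = { ~0UL };   /* one word of pages, all dirty */
    unsigned long free_bitmap[1]      = { 0xF0UL }; /* pages 4..7 reported free */

    unsigned long skipped = skip_free_pages(migration_bitmap, free_bitmap,
                                            BITS_PER_LONG);
    printf("pages skipped in bulk stage: %lu\n", skipped);
    return 0;
}

The appeal of the scheme is the asymmetry in cost: skipping a free page is a couple of bitmap tests, whereas sending it would cost at least a zero-page scan of the full 4 KiB, plus any compression and network work.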