Displaying 6 results from an estimated 6 matches for "pc_machine".
2016 Mar 03 · 2 · [RFC qemu 2/4] virtio-balloon: Add a new feature to balloon device
...->free_pages_vq_elem;
+    int len;
+
+    if (!balloon_free_pages_supported(s)) {
+        return -1;
+    }
+
+    if (s->req_status == NOT_STARTED) {
+        s->free_pages_bitmap = free_pages_bitmap;
+        s->req_status = STARTED;
+        s->mem_layout.low_mem = pc_get_lowmem(PC_MACHINE(current_machine));
+        if (!elem->in_num) {
+            elem = virtqueue_pop(s->fvq, sizeof(VirtQueueElement));
+            if (!elem) {
+                return 0;
+            }
+            s->free_pages_vq_elem = elem;
+        }
+        len = iov_from_buf(elem->in_sg, elem->...
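The excerpt cuts off mid-call, but the pattern is a plain scatter-gather copy: iov_from_buf() flattens a host buffer into the iovec array of the popped virtqueue element. A minimal sketch of that copy, with copy_to_iov as a hypothetical simplified stand-in rather than QEMU's actual implementation:

#include <string.h>
#include <sys/uio.h>

/* Hypothetical simplified stand-in for QEMU's iov_from_buf(): copy a
 * flat buffer into a guest-supplied scatter-gather list, returning the
 * number of bytes actually copied. */
static size_t copy_to_iov(const struct iovec *iov, unsigned int iov_cnt,
                          const void *buf, size_t bytes)
{
    size_t done = 0;
    for (unsigned int i = 0; i < iov_cnt && done < bytes; i++) {
        size_t n = bytes - done < iov[i].iov_len ? bytes - done
                                                 : iov[i].iov_len;
        memcpy(iov[i].iov_base, (const char *)buf + done, n);
        done += n;
    }
    return done;
}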
2016 Mar 03 · 0 · [RFC qemu 2/4] virtio-balloon: Add a new feature to balloon device
...> +    if (!balloon_free_pages_supported(s)) {
> +        return -1;
> +    }
> +
> +    if (s->req_status == NOT_STARTED) {
> +        s->free_pages_bitmap = free_pages_bitmap;
> +        s->req_status = STARTED;
> +        s->mem_layout.low_mem = pc_get_lowmem(PC_MACHINE(current_machine));
Please don't leak pc-specific information into generic code.
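One way to address this review comment, sketched under the assumption of a board-provided callback; get_low_mem_size is invented for illustration and is not a real QEMU MachineClass member:

#include <stdint.h>

/* Hypothetical illustration: instead of generic virtio-balloon code
 * calling pc_get_lowmem(PC_MACHINE(...)) directly, the board could
 * publish its memory layout through an optional hook. */
struct machine_ops {
    uint64_t (*get_low_mem_size)(void *machine); /* hypothetical */
};

static uint64_t query_low_mem(const struct machine_ops *ops, void *machine)
{
    /* Boards with no low-memory split simply leave the hook NULL. */
    return ops->get_low_mem_size ? ops->get_low_mem_size(machine) : 0;
}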
2016 Mar 03 · 0 · [RFC qemu 2/4] virtio-balloon: Add a new feature to balloon device
...> +    if (!balloon_free_pages_supported(s)) {
> +        return -1;
> +    }
> +
> +    if (s->req_status == NOT_STARTED) {
> +        s->free_pages_bitmap = free_pages_bitmap;
> +        s->req_status = STARTED;
> +        s->mem_layout.low_mem = pc_get_lowmem(PC_MACHINE(current_machine));
> +        if (!elem->in_num) {
> +            elem = virtqueue_pop(s->fvq, sizeof(VirtQueueElement));
> +            if (!elem) {
> +                return 0;
> +            }
> +            s->free_pages_vq_elem = elem;
> +        }
> +        l...
2016 Mar 03 · 16 · [RFC qemu 0/4] A PV solution for live migration optimization
The current QEMU live migration implementation marks all of the
guest's RAM pages as dirty in the RAM bulk stage; all of these pages
will be processed, and that takes quite a lot of CPU cycles.
From the guest's point of view, it doesn't care about the content of free
pages. We can make use of this fact and skip processing the free
pages in the RAM bulk stage, which can save a lot of CPU cycles
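A minimal sketch of the idea, assuming the guest hands back one bit per page frame with set bits meaning "free"; pfn_is_free and the bitmap layout are assumptions for illustration, not the patch's actual interface:

#include <limits.h>
#include <stdbool.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

/* Hypothetical helper: the guest-reported bitmap holds one bit per
 * page frame number; a set bit means the page was free when the
 * request was made. */
static inline bool pfn_is_free(const unsigned long *free_pages_bitmap,
                               unsigned long pfn)
{
    return free_pages_bitmap[pfn / BITS_PER_LONG] &
           (1UL << (pfn % BITS_PER_LONG));
}

/* In the bulk stage, a page flagged free can be dropped from the
 * dirty set instead of being read and transmitted. */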
2007 Oct 24 · 16 · PATCH 0/10: Merge PV framebuffer & console into QEMU
The following series of 10 patches is a merge of the xenfb and xenconsoled
functionality into the qemu-dm code. The general approach taken is to have
qemu-dm provide two machine types: one for Xen paravirt, the other for
fullyvirt. For compatibility the latter is the default. The overall goals
are to kill LibVNCServer, remove a lot of code duplication and/or parallel
implementations of the same concepts, and