search for: pfn_gap

Displaying 7 results from an estimated 7 matches for "pfn_gap".

2016 Mar 03
2
[RFC kernel 0/2]A PV solution for KVM live migration optimization
The current QEMU live migration implementation marks all the guest's RAM pages as dirtied in the ram bulk stage; all these pages will be processed, and that takes quite a lot of CPU cycles. From the guest's point of view, it doesn't care about the content of free pages. We can make use of this fact and skip processing the free pages in the ram bulk stage; it can save a lot of CPU cycles
2016 Mar 03
2
[RFC kernel 0/2]A PV solution for KVM live migration optimization
The current QEMU live migration implementation marks all the guest's RAM pages as dirtied in the ram bulk stage; all these pages will be processed, and that takes quite a lot of CPU cycles. From the guest's point of view, it doesn't care about the content of free pages. We can make use of this fact and skip processing the free pages in the ram bulk stage; it can save a lot of CPU cycles
2016 Mar 04
2
[Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization
...' back to QEMU. That's all the new interface does and there are no 'alloc_page' related affairs, so it's faster. Some code snippet: ---------------------------------------------- +static void mark_free_pages_bitmap(struct zone *zone, + unsigned long *free_page_bitmap, unsigned long pfn_gap) { + unsigned long pfn, flags, i; + unsigned int order, t; + struct list_head *curr; + + if (zone_is_empty(zone)) + return; + + spin_lock_irqsave(&zone->lock, flags); + + for_each_migratetype_order(order, t) { + list_for_each(curr, &zone->free_area[order].free_list[t]) { + + pfn =...
2016 Mar 04
2
[Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization
...' back to QEMU. That's all the new interface does and there are no 'alloc_page' related affairs, so it's faster. Some code snippet: ---------------------------------------------- +static void mark_free_pages_bitmap(struct zone *zone, + unsigned long *free_page_bitmap, unsigned long pfn_gap) { + unsigned long pfn, flags, i; + unsigned int order, t; + struct list_head *curr; + + if (zone_is_empty(zone)) + return; + + spin_lock_irqsave(&zone->lock, flags); + + for_each_migratetype_order(order, t) { + list_for_each(curr, &zone->free_area[order].free_list[t]) { + + pfn =...
2016 Mar 08
0
[Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization
...interface does and > there are no 'alloc_page' related affairs, so it's faster. > > > Some code snippet: > ---------------------------------------------- > +static void mark_free_pages_bitmap(struct zone *zone, > + unsigned long *free_page_bitmap, unsigned long pfn_gap) { > + unsigned long pfn, flags, i; > + unsigned int order, t; > + struct list_head *curr; > + > + if (zone_is_empty(zone)) > + return; > + > + spin_lock_irqsave(&zone->lock, flags); > + > + for_each_migratetype_order(order, t) { > + list_for_each(curr, &...
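The snippet quoted in the results above is cut off mid-loop. As a minimal sketch only, here is one way the loop might complete: the use of page_to_pfn/list_entry, the per-order inner loop, and the bit-index math (pfn minus pfn_gap) are my assumptions for illustration, not text taken from the actual patch.

----------------------------------------------
static void mark_free_pages_bitmap(struct zone *zone,
		unsigned long *free_page_bitmap, unsigned long pfn_gap)
{
	unsigned long pfn, flags, i;
	unsigned int order, t;
	struct list_head *curr;

	if (zone_is_empty(zone))
		return;

	spin_lock_irqsave(&zone->lock, flags);

	for_each_migratetype_order(order, t) {
		list_for_each(curr, &zone->free_area[order].free_list[t]) {
			pfn = page_to_pfn(list_entry(curr, struct page, lru));
			/*
			 * A free block of this order covers 1 << order
			 * consecutive pfns; mark one bit per page, shifted
			 * down by pfn_gap (assumed here to compensate for
			 * holes in the guest physical address space).
			 */
			for (i = 0; i < (1UL << order); i++)
				set_bit(pfn + i - pfn_gap, free_page_bitmap);
		}
	}

	spin_unlock_irqrestore(&zone->lock, flags);
}
----------------------------------------------

The point made in the thread is that walking the zone free lists under zone->lock and recording free pfns in a bitmap involves no 'alloc_page' traffic at all, which is why the poster argues this interface is faster than a balloon-based approach.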
2016 Mar 04
2
[Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization
> On Fri, Mar 04, 2016 at 09:12:12AM +0000, Li, Liang Z wrote: > > > Although I wonder which is cheaper; that would be fairly expensive > > > for the guest, wouldn't it? And you'd somehow have to kick the guest > > > before migration to do the ballooning - and how long would you wait for > it to finish? > > > > About 5 seconds for an 8G guest,
2016 Mar 04
2
[Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization
> On Fri, Mar 04, 2016 at 09:12:12AM +0000, Li, Liang Z wrote: > > > Although I wonder which is cheaper; that would be fairly expensive > > > for the guest, wouldn't it? And you'd somehow have to kick the guest > > > before migration to do the ballooning - and how long would you wait for > it to finish? > > > > About 5 seconds for an 8G guest,