search for: ram_list

Displaying 6 results from an estimated 6 matches for "ram_list".

2016 Mar 03
2
[RFC qemu 4/4] migration: filter out guest's free pages in ram bulk stage
...g_free(bmap->unsentmap);
     g_free(bmap);
 }
@@ -1873,6 +1873,28 @@ err:
     return ret;
 }
+static void filter_out_guest_free_pages(unsigned long *free_pages_bmap)
+{
+    RAMBlock *block;
+    DirtyMemoryBlocks *blocks;
+    unsigned long end, page;
+
+    blocks = atomic_rcu_read(&ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION]);
+    block = QLIST_FIRST_RCU(&ram_list.blocks);
+    end = TARGET_PAGE_ALIGN(block->offset +
+                            block->used_length) >> TARGET_PAGE_BITS;
+    page = block->offset >> TARGET_PAGE_BITS;
+
+    while (page < e...
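The diff above is cut off by the search excerpt; the sketch below is a self-contained illustration of the same idea with simplified types, not the actual QEMU code. It assumes a flat migration dirty bitmap and a guest-supplied free-page bitmap with the same one-bit-per-page layout:

#include <stddef.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))

/* Clear the dirty bit of every page the guest has reported as free, so
 * the ram bulk stage never sends it.  Both bitmaps are indexed by guest
 * page frame number; npages is the number of guest pages covered. */
static void filter_free_pages(unsigned long *dirty_bitmap,
                              const unsigned long *free_pages_bmap,
                              size_t npages)
{
    size_t nlongs = (npages + BITS_PER_LONG - 1) / BITS_PER_LONG;
    size_t i;

    for (i = 0; i < nlongs; i++) {
        /* dirty &= ~free: a page that is both unsent and free in the
         * guest carries no content worth transferring. */
        dirty_bitmap[i] &= ~free_pages_bmap[i];
    }
}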
2020 Feb 05
2
Balloon pressuring page cache
On 05.02.20 10:49, Wang, Wei W wrote:
> On Wednesday, February 5, 2020 5:37 PM, David Hildenbrand wrote:
>>>
>>> Not sure how TCG tracks the dirty bits. But In whatever
>>> implementation, the hypervisor should have
>>
>> There is only a single bitmap for that purpose. (well, the one where KVM
>> syncs to)
>>> already dealt with the
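For context on the "single bitmap" mentioned in this exchange, here is a hedged sketch of how a userspace VMM can pull KVM's per-slot dirty log via the KVM_GET_DIRTY_LOG ioctl; slot selection, buffer sizing, and error handling are simplified for illustration:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Fetch the dirty bitmap for one memory slot into a caller-provided
 * buffer (one bit per page in the slot).  KVM returns the pages dirtied
 * since the previous call and clears its internal log, so each sync
 * yields only new dirtying.  Returns the ioctl result (0 on success,
 * -1 on failure). */
static int sync_dirty_log(int vm_fd, unsigned int slot,
                          void *bitmap, size_t bitmap_bytes)
{
    struct kvm_dirty_log log;

    memset(bitmap, 0, bitmap_bytes);
    memset(&log, 0, sizeof(log));
    log.slot = slot;
    log.dirty_bitmap = bitmap;

    return ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);
}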
2016 Mar 03
16
[RFC qemu 0/4] A PV solution for live migration optimization
The current QEMU live migration implementation marks all of the guest's RAM pages as dirtied in the ram bulk stage; all these pages will be processed, and that takes quite a lot of CPU cycles. From the guest's point of view, it doesn't care about the content of free pages. We can make use of this fact and skip processing the free pages in the ram bulk stage, which can save a lot of CPU cycles
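As a rough illustration of what skipping free pages in the ram bulk stage buys, the loop below visits only pages whose dirty bits survived the free-page filtering; the bitmap layout and the send_page callback are illustrative stand-ins, not the real migration code:

#include <stddef.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))

/* Walk the dirty bitmap and hand only still-set pages to send_page().
 * Pages whose bits were cleared because the guest reported them free
 * are never read or compressed, which is where the claimed CPU-cycle
 * saving comes from. */
static void bulk_stage_send_dirty(const unsigned long *dirty_bitmap,
                                  size_t npages,
                                  void (*send_page)(size_t page))
{
    size_t page;

    for (page = 0; page < npages; page++) {
        if (dirty_bitmap[page / BITS_PER_LONG] &
            (1UL << (page % BITS_PER_LONG))) {
            send_page(page);
        }
    }
}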
2010 Aug 12
59
[PATCH 00/15] RFC xen device model support
Hi all, this is the long-awaited patch series to add xen device model support in qemu; the main author is Anthony Perard. While developing this series we tried to come up with the cleanest possible solution from the qemu point of view, limiting the amount of changes to common code as much as possible. The end result still requires a couple of hooks in piix_pci, but overall the impact should be very