Displaying 6 results from an estimated 6 matches for "dirty_memory".
2020 Feb 05
2
Balloon pressuring page cache
On 05.02.20 10:49, Wang, Wei W wrote:
> On Wednesday, February 5, 2020 5:37 PM, David Hildenbrand wrote:
>>>
>>> Not sure how TCG tracks the dirty bits. But in whatever
>>> implementation, the hypervisor should have
>>
>> There is only a single bitmap for that purpose. (well, the one where KVM
>> syncs to)
>>
>>> already dealt with the
2020 Feb 05
0
Balloon pressuring page cache
...> Yes, an optimization that might easily lead to data corruption when the
> two bitmaps are either not in place or don't play along in that specific
> way (and I suspect this is the case under TCG).
So I checked and TCG has two copies too.
Each block has a bmap used for migration, and there is also dirty_memory
where pages are marked dirty. See cpu_physical_memory_sync_dirty_bitmap.
So from QEMU's POV, there is a callback that tells the balloon when it's safe
to request hints. As that affects the bitmap, it must not happen in
parallel with dirty bitmap handling. Sounds like a reasonable
limitation.
The...
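As a minimal standalone sketch (not QEMU code) of the two-level scheme described above, a dirty_memory bitmap that writes land in and a per-block bmap that migration consumes, synced in the spirit of cpu_physical_memory_sync_dirty_bitmap, something like the following; all names and sizes here are illustrative assumptions:

/*
 * Standalone sketch of the two-level dirty tracking discussed above.
 * Writes are recorded in dirty_memory; the sync step ORs those bits into
 * the per-block bmap and clears dirty_memory.  Balloon hints must not run
 * concurrently with the sync, matching the limitation mentioned above.
 */
#include <stdio.h>
#include <stdint.h>

#define PAGES 64  /* one 64-bit word of pages, for brevity */

static uint64_t dirty_memory; /* written by the CPU/device side */
static uint64_t bmap;         /* consumed by the migration thread */

static void mark_dirty(unsigned page)
{
    dirty_memory |= 1ULL << page;
}

/* Move accumulated dirty bits into the migration bitmap. */
static void sync_dirty_bitmap(void)
{
    bmap |= dirty_memory;
    dirty_memory = 0;
}

/* Balloon hint: the guest says this page is free, so don't migrate it. */
static void drop_free_page(unsigned page)
{
    bmap &= ~(1ULL << page);
}

int main(void)
{
    mark_dirty(3);
    mark_dirty(7);
    sync_dirty_bitmap();
    drop_free_page(7);          /* guest reported page 7 as free */
    printf("bmap = %#llx\n", (unsigned long long)bmap); /* bit 3 only */
    return 0;
}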
2016 Mar 03
2
[RFC qemu 4/4] migration: filter out guest's free pages in ram bulk stage
...(bmap->unsentmap);
g_free(bmap);
}
@@ -1873,6 +1873,28 @@ err:
return ret;
}
+static void filter_out_guest_free_pages(unsigned long *free_pages_bmap)
+{
+ RAMBlock *block;
+ DirtyMemoryBlocks *blocks;
+ unsigned long end, page;
+
+ blocks = atomic_rcu_read(&ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION]);
+ block = QLIST_FIRST_RCU(&ram_list.blocks);
+ end = TARGET_PAGE_ALIGN(block->offset +
+ block->used_length) >> TARGET_PAGE_BITS;
+ page = block->offset >> TARGET_PAGE_BITS;
+
+ while (page < end) {
+...
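The loop in the excerpt is cut off, so here is a minimal standalone model, under the assumption that its job is to clear migration-dirty bits for pages the guest reported as free. DirtyMemoryBlocks, RCU and the real RAMBlock walk are deliberately omitted; plain arrays and names like NPAGES and migration_dirty are stand-ins, not QEMU identifiers:

#include <stdio.h>
#include <string.h>
#include <limits.h>

#define NPAGES        256
#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)
#define BITMAP_WORDS  ((NPAGES + BITS_PER_LONG - 1) / BITS_PER_LONG)

/* Stand-in for ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION]. */
static unsigned long migration_dirty[BITMAP_WORDS];

static void filter_out_guest_free_pages(const unsigned long *free_pages_bmap,
                                        unsigned long page,
                                        unsigned long end)
{
    for (; page < end; page++) {
        unsigned long word = page / BITS_PER_LONG;
        unsigned long bit  = 1UL << (page % BITS_PER_LONG);

        if (free_pages_bmap[word] & bit) {
            migration_dirty[word] &= ~bit;   /* free page: skip sending it */
        }
    }
}

int main(void)
{
    unsigned long free_pages[BITMAP_WORDS] = {0};

    memset(migration_dirty, 0xff, sizeof(migration_dirty)); /* bulk stage: all dirty */
    free_pages[0] |= 1UL << 5;                               /* guest says page 5 is free */

    filter_out_guest_free_pages(free_pages, 0, NPAGES);
    printf("page 5 dirty? %s\n",
           (migration_dirty[0] & (1UL << 5)) ? "yes" : "no");
    return 0;
}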
2016 Mar 03
16
[RFC qemu 0/4] A PV solution for live migration optimization
The current QEMU live migration implementation marks all of the
guest's RAM pages as dirty in the ram bulk stage; all these pages
will be processed, and that takes quite a lot of CPU cycles.
From the guest's point of view, it doesn't care about the content of free
pages. We can make use of this fact and skip processing the free
pages in the ram bulk stage, which can save a lot of CPU cycles.
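A conceptual sketch of the guest-side half of that idea, building a free-page bitmap the host can use to skip pages in the bulk stage, might look like this; the free list, page numbers and the guest/host transport are all illustrative assumptions and not taken from the actual patches:

#include <stdio.h>
#include <limits.h>

#define GUEST_PAGES   128
#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)
#define WORDS         ((GUEST_PAGES + BITS_PER_LONG - 1) / BITS_PER_LONG)

static void set_free(unsigned long *bmap, unsigned long pfn)
{
    bmap[pfn / BITS_PER_LONG] |= 1UL << (pfn % BITS_PER_LONG);
}

int main(void)
{
    /* Pretend these page frame numbers came from the guest's free list. */
    unsigned long free_list[] = { 4, 5, 6, 42, 100 };
    unsigned long free_pages_bmap[WORDS] = {0};

    for (size_t i = 0; i < sizeof(free_list) / sizeof(free_list[0]); i++) {
        set_free(free_pages_bmap, free_list[i]);
    }

    /* This bitmap would then be handed to the host, which clears the
     * corresponding bits in its migration dirty bitmap (see the 4/4 patch
     * excerpt above). */
    printf("first word: %#lx\n", free_pages_bmap[0]);
    return 0;
}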