Dave Hansen
2016-Dec-07 15:45 UTC
[PATCH kernel v5 0/5] Extend virtio-balloon for fast (de)inflating & fast live migration
On 12/07/2016 07:42 AM, David Hildenbrand wrote:
> On 07.12.2016 at 14:35, Li, Liang Z wrote:
>>> On 30.11.2016 at 09:43, Liang Li wrote:
>>>> This patch set contains two parts of changes to the virtio-balloon.
>>>>
>>>> One is the change for speeding up the inflating & deflating process.
>>>> The main idea of this optimization is to use a bitmap to send the page
>>>> information to the host instead of the PFNs, to reduce the overhead of
>>>> virtio data transmission, address translation and madvise(). This can
>>>> help to improve the performance by about 85%.
>>>
>>> Do you have some statistics/some rough feeling of how many consecutive
>>> bits are usually set in the bitmaps? Is it really just purely random,
>>> or is there some granularity that is usually consecutive?
>>
>> I did something similar: filled the balloon with 15GB for a 16GB idle
>> guest. Using the bitmap, the madvise count was reduced to 605; when
>> using the PFNs, the madvise count was 3932160. It means there are quite
>> a lot of consecutive bits in the bitmap. I didn't test with a guest
>> under a heavy memory workload.
>
> Would it then even make sense to go one step further and report {pfn,
> length} combinations?
>
> So simply send over an array of {pfn, length}?

Li's current patches do that. Well, maybe not pfn/length, but they do
take a pfn and page-order, which fits perfectly with the kernel's
concept of high-order pages.

> And it makes sense if you think about:
>
> a) hugetlb backing: The host may only be able to free huge pages (we
> might want to communicate that to the guest later; that's another
> story). Still, we would have to send bitmaps full of 4k frames (512 bits
> for 2MB frames). Of course, we could add a way to communicate that we
> are using a different bitmap granularity.

Yeah, please read the patches. If they're not clear, then the
descriptions need work, but this is done already.
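To make those numbers concrete, here is a rough userspace sketch (the
function names and bitmap layout are illustrative, not Li's actual
code) of what the host side effectively does with such a bitmap: each
run of consecutive set bits collapses into a single madvise() call,
which is why a mostly-free 15GB guest needed ~605 calls instead of one
per 4k page (3932160):

#include <stddef.h>
#include <sys/mman.h>

#define PAGE_SHIFT 12

static inline int test_bit(const unsigned long *bm, size_t i)
{
	return (bm[i / (8 * sizeof(long))] >>
		(i % (8 * sizeof(long)))) & 1;
}

/* base: host virtual address backing guest pfn 0 of this chunk */
static void release_bitmap(void *base, const unsigned long *bm,
			   size_t npages)
{
	size_t i = 0;

	while (i < npages) {
		size_t start, len;

		if (!test_bit(bm, i)) {
			i++;
			continue;
		}
		/* extend the run over all consecutive set bits */
		start = i;
		while (i < npages && test_bit(bm, i))
			i++;
		len = i - start;
		/* one madvise() per run, not per page */
		madvise((char *)base + (start << PAGE_SHIFT),
			len << PAGE_SHIFT, MADV_DONTNEED);
	}
}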
David Hildenbrand
2016-Dec-07 16:21 UTC
[PATCH kernel v5 0/5] Extend virtio-balloon for fast (de)inflating & fast live migration
>>> I did something similar: filled the balloon with 15GB for a 16GB idle
>>> guest. Using the bitmap, the madvise count was reduced to 605; when
>>> using the PFNs, the madvise count was 3932160. It means there are
>>> quite a lot of consecutive bits in the bitmap. I didn't test with a
>>> guest under a heavy memory workload.
>>
>> Would it then even make sense to go one step further and report {pfn,
>> length} combinations?
>>
>> So simply send over an array of {pfn, length}?
>
> Li's current patches do that. Well, maybe not pfn/length, but they do
> take a pfn and page-order, which fits perfectly with the kernel's
> concept of high-order pages.

So we can send the length in powers of two. Still, I don't see any
benefit over a simple pfn/len schema. But I'll have a more detailed look
at the implementation first; maybe that will enlighten me :)

>> And it makes sense if you think about:
>>
>> a) hugetlb backing: The host may only be able to free huge pages (we
>> might want to communicate that to the guest later; that's another
>> story). Still, we would have to send bitmaps full of 4k frames (512 bits
>> for 2MB frames). Of course, we could add a way to communicate that we
>> are using a different bitmap granularity.
>
> Yeah, please read the patches. If they're not clear, then the
> descriptions need work, but this is done already.

I missed the page_shift, thanks for the hint.

-- 
David
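To make the "length in powers of two" point above concrete, here is a
hedged sketch (illustrative names, not the patch code) of how an
arbitrary run of free pages decomposes into aligned {pfn, order}
blocks, the same shape the buddy allocator hands out. The real kernel
additionally caps the order at MAX_ORDER-1, omitted here for brevity:

#include <stdint.h>
#include <stdio.h>

static void emit(uint64_t pfn, unsigned order)
{
	printf("{ pfn = %llu, order = %u }  /* %llu pages */\n",
	       (unsigned long long)pfn, order,
	       (unsigned long long)(1ULL << order));
}

static void report_run(uint64_t pfn, uint64_t len)
{
	while (len) {
		/* largest order allowed by the pfn's alignment... */
		unsigned order = pfn ? __builtin_ctzll(pfn) : 63;

		/* ...and by the remaining length */
		while ((1ULL << order) > len)
			order--;

		emit(pfn, order);
		pfn += 1ULL << order;
		len -= 1ULL << order;
	}
}

int main(void)
{
	/* a 15GB run of 4k pages starting at pfn 262144 (1GB) */
	report_run(262144, 3932160);
	return 0;
}

For that 15GB run, the decomposition needs only four entries (orders
18, 19, 20 and 21), versus nearly four million individual PFNs.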
Dave Hansen
2016-Dec-07 16:57 UTC
[PATCH kernel v5 0/5] Extend virtio-balloon for fast (de)inflating & fast live migration
Removing silly virtio-dev@ list because it's bouncing mail...

On 12/07/2016 08:21 AM, David Hildenbrand wrote:
>> Li's current patches do that. Well, maybe not pfn/length, but they do
>> take a pfn and page-order, which fits perfectly with the kernel's
>> concept of high-order pages.
>
> So we can send the length in powers of two. Still, I don't see any
> benefit over a simple pfn/len schema. But I'll have a more detailed look
> at the implementation first; maybe that will enlighten me :)

It is more space-efficient. We're fitting the order into 6 bits, which
allows the full 2^64 address space to be represented in one entry, and
leaves room for the bitmap size to be encoded as well, if we decide we
need a bitmap in the future. If that were purely a length, we'd be
limited to 64*4k pages per entry, which isn't even a full large page.
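As a sketch of the space argument (a hypothetical layout, not
necessarily the patches' actual wire format): with 4k pages a pfn needs
at most 52 bits to cover a 2^64-byte address space, so a pfn plus a
6-bit order fits in one 64-bit entry with bits to spare, whereas a
6-bit plain length would cap an entry at 64 pages (256KB), less than
one 2MB huge page:

#include <assert.h>
#include <stdint.h>

#define ENTRY_ORDER_BITS 6
#define ENTRY_ORDER_MASK ((1ULL << ENTRY_ORDER_BITS) - 1)

/* pack {pfn, order} into a single 64-bit entry: pfn in the high
 * bits, order in the low 6 bits */
static inline uint64_t pack_entry(uint64_t pfn, unsigned order)
{
	assert(order <= ENTRY_ORDER_MASK);
	return (pfn << ENTRY_ORDER_BITS) | order;
}

static inline uint64_t entry_pfn(uint64_t e)
{
	return e >> ENTRY_ORDER_BITS;
}

static inline unsigned entry_order(uint64_t e)
{
	return e & ENTRY_ORDER_MASK;
}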