Li, Liang Z
2016-Dec-07 13:35 UTC
[PATCH kernel v5 0/5] Extend virtio-balloon for fast (de)inflating & fast live migration
> Am 30.11.2016 um 09:43 schrieb Liang Li:
> > This patch set contains two parts of changes to the virtio-balloon.
> >
> > One is the change for speeding up the inflating & deflating process,
> > the main idea of this optimization is to use bitmap to send the page
> > information to host instead of the PFNs, to reduce the overhead of
> > virtio data transmission, address translation and madvise(). This can
> > help to improve the performance by about 85%.
>
> Do you have some statistics/some rough feeling how many consecutive bits are
> usually set in the bitmaps? Is it really just purely random or is there some
> granularity that is usually consecutive?
>

I did something similar. I filled the balloon with 15GB for a 16GB idle guest.
By using the bitmap, the madvise() count was reduced to 605; when using the
PFNs, the madvise() count was 3932160. It means there are quite a lot of
consecutive bits in the bitmap. I didn't test a guest with a heavy memory
workload.

> IOW in real examples, do we have really large consecutive areas or are all
> pages just completely distributed over our memory?
>

The buddy system of the Linux kernel's memory management suggests there should
be quite a lot of consecutive pages, as long as some portion of the guest's
memory is free. If all pages were completely scattered over memory, it would
mean memory fragmentation is very serious, and the kernel has mechanisms to
avoid that happening. On the other hand, inflating should not happen at that
point, because the guest is almost 'out of memory'.

Liang

> Thanks!
>
> --
>
> David
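As an illustration of why the bitmap cuts the madvise() count so dramatically: each maximal run of set bits needs only one madvise() call, while a raw PFN list needs one per page. A minimal sketch (Python, purely illustrative; the extent layout below is hypothetical, chosen only so the totals match the 15GB-of-4KB-pages figure above):

```python
def count_runs(bitmap):
    """Count maximal runs of set bits -- one madvise() call per run."""
    runs = 0
    prev = 0
    for bit in bitmap:
        if bit and not prev:
            runs += 1
        prev = bit
    return runs

# 16GB guest with 4KB pages -> 4194304 page frames; 15GB ballooned
# -> 3932160 free pages. If they form a few large extents, runs stay tiny.
free = [0] * 4194304
for start, length in [(0, 1000000), (1100000, 1500000), (2700000, 1432160)]:
    free[start:start + length] = [1] * length   # hypothetical free extents

print(count_runs(free))   # 3 runs -> 3 madvise() calls with a bitmap
print(sum(free))          # 3932160 pages -> 3932160 calls with raw PFNs
```

The real numbers in the test above (605 vs. 3932160) reflect the same effect with a more fragmented, but still mostly contiguous, free-page layout.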
Dave Hansen
2016-Dec-07 15:34 UTC
[PATCH kernel v5 0/5] Extend virtio-balloon for fast (de)inflating & fast live migration
On 12/07/2016 05:35 AM, Li, Liang Z wrote:
>> Am 30.11.2016 um 09:43 schrieb Liang Li:
>> IOW in real examples, do we have really large consecutive areas or are all
>> pages just completely distributed over our memory?
>
> The buddy system of Linux kernel memory management shows there should
> be quite a lot of consecutive pages as long as there are a portion of
> free memory in the guest.
...
> If all pages just completely distributed over our memory, it means
> the memory fragmentation is very serious, the kernel has the
> mechanism to avoid this happened.

While it is correct that the kernel has anti-fragmentation mechanisms, I don't
think that invalidates the question of whether a bitmap would be too sparse to
be effective.

> In the other hand, the inflating should not happen at this time because the
> guest is almost 'out of memory'.

I don't think this is correct. Most systems try to run with relatively little
free memory all the time, using the bulk of it as page cache. We have no
reason to expect that ballooning will only occur when there is lots of actual
free memory, and that it will not occur when that same memory is in use as
page cache.

In these patches, you're effectively still sending pfns. You're just sending
one pfn per high-order page, which is giving a really nice speedup. IMNHO,
you're avoiding doing a real bitmap because creating a bitmap means either
having a really big bitmap, or doing some sorting (or multiple passes) of the
free lists before populating a smaller bitmap.

Like David, I would still like to see some data on whether the choice between
bitmaps and pfn lists is ever clearly in favor of bitmaps. You haven't
convinced me, at least, that the data isn't even worth collecting.
David Hildenbrand
2016-Dec-07 15:42 UTC
[PATCH kernel v5 0/5] Extend virtio-balloon for fast (de)inflating & fast live migration
Am 07.12.2016 um 14:35 schrieb Li, Liang Z:
>> Am 30.11.2016 um 09:43 schrieb Liang Li:
>>> This patch set contains two parts of changes to the virtio-balloon.
>>>
>>> One is the change for speeding up the inflating & deflating process,
>>> the main idea of this optimization is to use bitmap to send the page
>>> information to host instead of the PFNs, to reduce the overhead of
>>> virtio data transmission, address translation and madvise(). This can
>>> help to improve the performance by about 85%.
>>
>> Do you have some statistics/some rough feeling how many consecutive bits are
>> usually set in the bitmaps? Is it really just purely random or is there some
>> granularity that is usually consecutive?
>>
>
> I did something similar. Filled the balloon with 15GB for a 16GB idle guest, by
> using bitmap, the madvise count was reduced to 605. when using the PFNs, the
> madvise count was 3932160. It means there are quite a lot consecutive bits in
> the bitmap. I didn't test for a guest with heavy memory workload.

Would it then even make sense to go one step further and report {pfn, length}
combinations? So simply send over an array of {pfn, length}?

This idea came up when talking to Andrea Arcangeli (put him on cc), and it
makes sense if you think about:

a) hugetlb backing: the host may only be able to free huge pages (we might
want to communicate that to the guest later, but that's another story). Still,
we would have to send bitmaps full of 4k frames (512 bits for 2MB frames). Of
course, we could add a way to communicate that we are using a different
bitmap granularity.

b) if we really inflate huge memory regions (and it sounds like that according
to your measurements), we can minimize the communication to the hypervisor and
therefore the madvise() calls.

c) we don't want to optimize for inflating guests with almost full memory (and
therefore little consecutive memory) - my opinion :)

Thanks for the explanation!

-- 
David
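The {pfn, length} proposal is essentially run-length encoding of the free-page set. A quick sketch of the idea (Python, illustrative only; a real implementation would walk the guest's buddy free lists in the kernel rather than a sorted list):

```python
def pfns_to_ranges(pfns):
    """Collapse a sorted list of free PFNs into (pfn, length) pairs."""
    ranges = []
    for pfn in pfns:
        if ranges and pfn == ranges[-1][0] + ranges[-1][1]:
            ranges[-1][1] += 1          # pfn extends the current run
        else:
            ranges.append([pfn, 1])     # start a new run
    return [tuple(r) for r in ranges]

free_pfns = [100, 101, 102, 103, 500, 501, 9000]
print(pfns_to_ranges(free_pfns))
# [(100, 4), (500, 2), (9000, 1)] -- 3 entries instead of 7 PFNs
```

With largely contiguous free memory (as in the 16GB-idle-guest measurement), the array stays tiny; with fully scattered pages it degenerates to one entry per PFN, which is point c) above.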
Dave Hansen
2016-Dec-07 15:45 UTC
[PATCH kernel v5 0/5] Extend virtio-balloon for fast (de)inflating & fast live migration
On 12/07/2016 07:42 AM, David Hildenbrand wrote:
> Am 07.12.2016 um 14:35 schrieb Li, Liang Z:
>>> Am 30.11.2016 um 09:43 schrieb Liang Li:
>>>> This patch set contains two parts of changes to the virtio-balloon.
>>>>
>>>> One is the change for speeding up the inflating & deflating process,
>>>> the main idea of this optimization is to use bitmap to send the page
>>>> information to host instead of the PFNs, to reduce the overhead of
>>>> virtio data transmission, address translation and madvise(). This can
>>>> help to improve the performance by about 85%.
>>>
>>> Do you have some statistics/some rough feeling how many consecutive
>>> bits are usually set in the bitmaps? Is it really just purely random or
>>> is there some granularity that is usually consecutive?
>>>
>>
>> I did something similar. Filled the balloon with 15GB for a 16GB idle
>> guest, by using bitmap, the madvise count was reduced to 605. when using
>> the PFNs, the madvise count was 3932160. It means there are quite a lot
>> consecutive bits in the bitmap.
>> I didn't test for a guest with heavy memory workload.
>
> Would it then even make sense to go one step further and report {pfn,
> length} combinations?
>
> So simply send over an array of {pfn, length}?

Li's current patches do that. Well, maybe not pfn/length, but they do take a
pfn and page-order, which fits perfectly with the kernel's concept of
high-order pages.

> And it makes sense if you think about:
>
> a) hugetlb backing: The host may only be able to free huge pages (we
> might want to communicate that to the guest later, that's another
> story). Still we would have to send bitmaps full of 4k frames (512 bits
> for 2mb frames). Of course, we could add a way to communicate that we
> are using a different bitmap-granularity.

Yeah, please read the patches. If they're not clear, then the descriptions
need work, but this is done already.
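The pfn/page-order encoding described here constrains each reported chunk to a naturally aligned power-of-two block, which is exactly what the buddy allocator hands out. A sketch of how an arbitrary free extent decomposes into (pfn, order) chunks (Python, illustrative only; MAX_ORDER matches the kernel's default of 11, everything else is a simplification, not the patches' actual code):

```python
MAX_ORDER = 11  # kernel default: largest buddy block is 2**(MAX_ORDER-1) pages

def range_to_order_chunks(pfn, count):
    """Split [pfn, pfn+count) into naturally aligned (pfn, order) chunks."""
    chunks = []
    while count > 0:
        # largest order permitted by the pfn's alignment...
        align = (pfn & -pfn).bit_length() - 1 if pfn else MAX_ORDER - 1
        # ...capped by the pages remaining and by MAX_ORDER
        order = min(align, count.bit_length() - 1, MAX_ORDER - 1)
        chunks.append((pfn, order))
        pfn += 1 << order
        count -= 1 << order
    return chunks

print(range_to_order_chunks(6, 10))
# [(6, 1), (8, 3)] -- a 2-page chunk at pfn 6, then an order-3 chunk at pfn 8
```

Compared with {pfn, length}, this can need a few more entries per extent (a misaligned range splits into several power-of-two pieces), but each entry maps directly onto a buddy free-list block.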
Li, Liang Z
2016-Dec-09 03:09 UTC
[PATCH kernel v5 0/5] Extend virtio-balloon for fast (de)inflating & fast live migration
> Subject: Re: [PATCH kernel v5 0/5] Extend virtio-balloon for fast
> (de)inflating & fast live migration
>
> On 12/07/2016 05:35 AM, Li, Liang Z wrote:
> >> Am 30.11.2016 um 09:43 schrieb Liang Li:
> >> IOW in real examples, do we have really large consecutive areas or
> >> are all pages just completely distributed over our memory?
> >
> > The buddy system of Linux kernel memory management shows there should
> > be quite a lot of consecutive pages as long as there are a portion of
> > free memory in the guest.
> ...
> > If all pages just completely distributed over our memory, it means the
> > memory fragmentation is very serious, the kernel has the mechanism to
> > avoid this happened.
>
> While it is correct that the kernel has anti-fragmentation mechanisms, I don't
> think it invalidates the question as to whether a bitmap would be too sparse
> to be effective.
>
> > In the other hand, the inflating should not happen at this time
> > because the guest is almost 'out of memory'.
>
> I don't think this is correct. Most systems try to run with relatively little
> free memory all the time, using the bulk of it as page cache. We have no
> reason to expect that ballooning will only occur when there is lots of actual
> free memory and that it will not occur when that same memory is in use as
> page cache.

Yes.

> In these patches, you're effectively still sending pfns. You're just sending
> one pfn per high-order page which is giving a really nice speedup. IMNHO,
> you're avoiding doing a real bitmap because creating a bitmap means either
> have a really big bitmap, or you would have to do some sorting (or multiple
> passes) of the free lists before populating a smaller bitmap.
>
> Like David, I would still like to see some data on whether the choice between
> bitmaps and pfn lists is ever clearly in favor of bitmaps. You haven't
> convinced me, at least, that the data isn't even worth collecting.

I will try to get some data with a real workload and share it with you guys.

Thanks!
Liang