Dr. David Alan Gilbert
2018-Jul-23 14:36 UTC
[PATCH v36 0/5] Virtio-balloon: support free page reporting
* Michael S. Tsirkin (mst at redhat.com) wrote:
> On Fri, Jul 20, 2018 at 04:33:00PM +0800, Wei Wang wrote:
> > This patch series is separated from the previous "Virtio-balloon
> > Enhancement" series. The new feature, VIRTIO_BALLOON_F_FREE_PAGE_HINT,
> > implemented by this series enables the virtio-balloon driver to report
> > hints of guest free pages to the host. It can be used to accelerate live
> > migration of VMs. Here is an introduction of this usage:
> >
> > Live migration needs to transfer the VM's memory from the source machine
> > to the destination round by round. For the 1st round, all the VM's memory
> > is transferred. From the 2nd round, only the pieces of memory that were
> > written by the guest (after the 1st round) are transferred. One method
> > that is popularly used by the hypervisor to track which part of memory is
> > written is to write-protect all the guest memory.
> >
> > This feature enables the optimization by skipping the transfer of guest
> > free pages during VM live migration. It is not concerned that the memory
> > pages are used after they are given to the hypervisor as a hint of the
> > free pages, because they will be tracked by the hypervisor and transferred
> > in the subsequent round if they are used and written.
> >
> > * Tests
> > - Test Environment
> >     Host: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
> >     Guest: 8G RAM, 4 vCPU
> >     Migration setup: migrate_set_speed 100G, migrate_set_downtime 2 second
> >
> > - Test Results
> >     - Idle Guest Live Migration Time (results are averaged over 10 runs):
> >         - Optimization v.s. Legacy = 409ms vs 1757ms --> ~77% reduction
> >           (setting page poisoning zero and enabling ksm don't affect the
> >           comparison result)
> >     - Guest with Linux Compilation Workload (make bzImage -j4):
> >         - Live Migration Time (average)
> >           Optimization v.s. Legacy = 1407ms v.s. 2528ms --> ~44% reduction
> >         - Linux Compilation Time
> >           Optimization v.s. Legacy = 5min4s v.s. 5min12s
> >           --> no obvious difference
>
> I'd like to see dgilbert's take on whether this kind of gain
> justifies adding a PV interfaces, and what kind of guest workload
> is appropriate.
>
> Cc'd.

Well, 44% is great ... although the measurement is a bit weird.

a) A 2 second downtime is very large; 300-500ms is more normal
b) I'm not sure what the 'average' is - is that just between a bunch of
   repeated migrations?
c) What load was running in the guest during the live migration?

An interesting measurement to add would be to do the same test but
with a VM with a lot more RAM but the same load; you'd hope the gain
would be even better.
It would be interesting, especially because the users who are interested
are people creating VMs allocated with lots of extra memory (for the
worst case) but most of the time migrating when it's fairly idle.

Dave

> > ChangeLog:
> > v35->v36:
> >     - remove the mm patch, as Linus has a suggestion to get free page
> >       addresses via allocation, instead of reading from the free page
> >       list.
> >     - virtio-balloon:
> >         - replace oom notifier with shrinker;
> >         - the guest to host communication interface remains the same as
> >           v32.
> >         - allocate free page blocks and send to host one by one, and free
> >           them after sending all the pages.
> >
> > For ChangeLogs from v22 to v35, please reference
> > https://lwn.net/Articles/759413/
> >
> > For ChangeLogs before v21, please reference
> > https://lwn.net/Articles/743660/
> >
> > Wei Wang (5):
> >   virtio-balloon: remove BUG() in init_vqs
> >   virtio_balloon: replace oom notifier with shrinker
> >   virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT
> >   mm/page_poison: expose page_poisoning_enabled to kernel modules
> >   virtio-balloon: VIRTIO_BALLOON_F_PAGE_POISON
> >
> >  drivers/virtio/virtio_balloon.c     | 456 ++++++++++++++++++++++++++++++------
> >  include/uapi/linux/virtio_balloon.h |   7 +
> >  mm/page_poison.c                    |   6 +
> >  3 files changed, 394 insertions(+), 75 deletions(-)
> >
> > --
> > 2.7.4

--
Dr. David Alan Gilbert / dgilbert at redhat.com / Manchester, UK
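[Editor's note: for readers following the v35->v36 changelog above ("get free page
addresses via allocation, instead of reading from the free page list"), the guest-side
idea can be sketched roughly as below. This is only an illustration of the approach,
not the actual virtio_balloon.c code from the series; the function name, the hint
block order, the GFP flags and the use of a plain virtqueue outbuf are assumptions
made for the example.]

#include <linux/gfp.h>
#include <linux/list.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <linux/virtio.h>

/* Illustrative choice: report free memory in max-order buddy blocks. */
#define HINT_BLOCK_ORDER        (MAX_ORDER - 1)

static void report_free_page_hints(struct virtqueue *vq)
{
        struct page *page, *next;
        struct scatterlist sg;
        LIST_HEAD(pages);

        /*
         * Pull free blocks out of the buddy allocator.  Once a block has
         * been allocated here it cannot be handed to anyone else, so it is
         * safe to tell the host that it is currently free.  Stop when the
         * allocator has no more blocks of this order.
         */
        while ((page = alloc_pages(__GFP_NOWARN | __GFP_NOMEMALLOC |
                                   __GFP_NORETRY, HINT_BLOCK_ORDER))) {
                list_add(&page->lru, &pages);

                /* Report the block's address range to the host. */
                sg_init_one(&sg, page_address(page),
                            PAGE_SIZE << HINT_BLOCK_ORDER);
                if (virtqueue_add_outbuf(vq, &sg, 1, page, GFP_KERNEL) < 0)
                        break;
                virtqueue_kick(vq);
        }

        /*
         * Give all blocks back to the buddy allocator only after every hint
         * has been sent, so a reported block is not reused (and re-dirtied)
         * while hinting is still in progress.
         */
        list_for_each_entry_safe(page, next, &pages, lru)
                __free_pages(page, HINT_BLOCK_ORDER);
}

The host side then only needs to drop the hinted ranges from the first-round copy;
any block that the guest reuses afterwards is caught by the normal dirty tracking
and sent in a later round, which is why hinting stale pages is harmless.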
Wei Wang
2018-Jul-24 08:12 UTC
[PATCH v36 0/5] Virtio-balloon: support free page reporting
On 07/23/2018 10:36 PM, Dr. David Alan Gilbert wrote:
> * Michael S. Tsirkin (mst at redhat.com) wrote:
>> On Fri, Jul 20, 2018 at 04:33:00PM +0800, Wei Wang wrote:
>>> This patch series is separated from the previous "Virtio-balloon
>>> Enhancement" series. The new feature, VIRTIO_BALLOON_F_FREE_PAGE_HINT,
>>> implemented by this series enables the virtio-balloon driver to report
>>> hints of guest free pages to the host. It can be used to accelerate live
>>> migration of VMs. Here is an introduction of this usage:
>>>
>>> Live migration needs to transfer the VM's memory from the source machine
>>> to the destination round by round. For the 1st round, all the VM's memory
>>> is transferred. From the 2nd round, only the pieces of memory that were
>>> written by the guest (after the 1st round) are transferred. One method
>>> that is popularly used by the hypervisor to track which part of memory is
>>> written is to write-protect all the guest memory.
>>>
>>> This feature enables the optimization by skipping the transfer of guest
>>> free pages during VM live migration. It is not concerned that the memory
>>> pages are used after they are given to the hypervisor as a hint of the
>>> free pages, because they will be tracked by the hypervisor and transferred
>>> in the subsequent round if they are used and written.
>>>
>>> * Tests
>>> - Test Environment
>>>     Host: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
>>>     Guest: 8G RAM, 4 vCPU
>>>     Migration setup: migrate_set_speed 100G, migrate_set_downtime 2 second
>>>
>>> - Test Results
>>>     - Idle Guest Live Migration Time (results are averaged over 10 runs):
>>>         - Optimization v.s. Legacy = 409ms vs 1757ms --> ~77% reduction
>>>           (setting page poisoning zero and enabling ksm don't affect the
>>>           comparison result)
>>>     - Guest with Linux Compilation Workload (make bzImage -j4):
>>>         - Live Migration Time (average)
>>>           Optimization v.s. Legacy = 1407ms v.s. 2528ms --> ~44% reduction
>>>         - Linux Compilation Time
>>>           Optimization v.s. Legacy = 5min4s v.s. 5min12s
>>>           --> no obvious difference
>> I'd like to see dgilbert's take on whether this kind of gain
>> justifies adding a PV interfaces, and what kind of guest workload
>> is appropriate.
>>
>> Cc'd.
> Well, 44% is great ... although the measurement is a bit weird.
>
> a) A 2 second downtime is very large; 300-500ms is more normal

No problem, I will set the downtime to 400ms for the tests.

> b) I'm not sure what the 'average' is - is that just between a bunch of
>    repeated migrations?

Yes, we just repeat the migration back and forth ("source <----> destination")
and average the results.

> c) What load was running in the guest during the live migration?

The first result above is for a guest that is not running any specific
workload (the "idle guest" case). The second is for a guest running the
Linux compilation workload.

>
> An interesting measurement to add would be to do the same test but
> with a VM with a lot more RAM but the same load; you'd hope the gain
> would be even better.
> It would be interesting, especially because the users who are interested
> are people creating VMs allocated with lots of extra memory (for the
> worst case) but most of the time migrating when it's fairly idle.

OK. I will add tests of a guest with larger memory.

Best,
Wei
Wei Wang
2018-Sep-06 12:18 UTC
[PATCH v36 0/5] Virtio-balloon: support free page reporting
On 07/23/2018 10:36 PM, Dr. David Alan Gilbert wrote:
> * Michael S. Tsirkin (mst at redhat.com) wrote:
>> On Fri, Jul 20, 2018 at 04:33:00PM +0800, Wei Wang wrote:
>>> This patch series is separated from the previous "Virtio-balloon
>>> Enhancement" series. The new feature, VIRTIO_BALLOON_F_FREE_PAGE_HINT,
>>> implemented by this series enables the virtio-balloon driver to report
>>> hints of guest free pages to the host. It can be used to accelerate live
>>> migration of VMs. Here is an introduction of this usage:
>>>
>>> Live migration needs to transfer the VM's memory from the source machine
>>> to the destination round by round. For the 1st round, all the VM's memory
>>> is transferred. From the 2nd round, only the pieces of memory that were
>>> written by the guest (after the 1st round) are transferred. One method
>>> that is popularly used by the hypervisor to track which part of memory is
>>> written is to write-protect all the guest memory.
>>>
>>> This feature enables the optimization by skipping the transfer of guest
>>> free pages during VM live migration. It is not concerned that the memory
>>> pages are used after they are given to the hypervisor as a hint of the
>>> free pages, because they will be tracked by the hypervisor and transferred
>>> in the subsequent round if they are used and written.
>>>
>>> * Tests
>>> - Test Environment
>>>     Host: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
>>>     Guest: 8G RAM, 4 vCPU
>>>     Migration setup: migrate_set_speed 100G, migrate_set_downtime 2 second
>>>
>>> - Test Results
>>>     - Idle Guest Live Migration Time (results are averaged over 10 runs):
>>>         - Optimization v.s. Legacy = 409ms vs 1757ms --> ~77% reduction
>>>           (setting page poisoning zero and enabling ksm don't affect the
>>>           comparison result)
>>>     - Guest with Linux Compilation Workload (make bzImage -j4):
>>>         - Live Migration Time (average)
>>>           Optimization v.s. Legacy = 1407ms v.s. 2528ms --> ~44% reduction
>>>         - Linux Compilation Time
>>>           Optimization v.s. Legacy = 5min4s v.s. 5min12s
>>>           --> no obvious difference
>> I'd like to see dgilbert's take on whether this kind of gain
>> justifies adding a PV interfaces, and what kind of guest workload
>> is appropriate.
>>
>> Cc'd.
> Well, 44% is great ... although the measurement is a bit weird.
>
> a) A 2 second downtime is very large; 300-500ms is more normal
> b) I'm not sure what the 'average' is - is that just between a bunch of
>    repeated migrations?
> c) What load was running in the guest during the live migration?
>
> An interesting measurement to add would be to do the same test but
> with a VM with a lot more RAM but the same load; you'd hope the gain
> would be even better.
> It would be interesting, especially because the users who are interested
> are people creating VMs allocated with lots of extra memory (for the
> worst case) but most of the time migrating when it's fairly idle.
>
> Dave
>

Hi Dave,

The results of the added experiments have been shown in the v37 cover
letter.
Could you have a look at https://lkml.org/lkml/2018/8/27/29 . Thanks.

Best,
Wei
Dr. David Alan Gilbert
2018-Sep-07 12:29 UTC
[PATCH v36 0/5] Virtio-balloon: support free page reporting
* Wei Wang (wei.w.wang at intel.com) wrote:
> On 07/23/2018 10:36 PM, Dr. David Alan Gilbert wrote:
> > * Michael S. Tsirkin (mst at redhat.com) wrote:
> > > On Fri, Jul 20, 2018 at 04:33:00PM +0800, Wei Wang wrote:
> > > > This patch series is separated from the previous "Virtio-balloon
> > > > Enhancement" series. The new feature, VIRTIO_BALLOON_F_FREE_PAGE_HINT,
> > > > implemented by this series enables the virtio-balloon driver to report
> > > > hints of guest free pages to the host. It can be used to accelerate live
> > > > migration of VMs. Here is an introduction of this usage:
> > > >
> > > > Live migration needs to transfer the VM's memory from the source machine
> > > > to the destination round by round. For the 1st round, all the VM's memory
> > > > is transferred. From the 2nd round, only the pieces of memory that were
> > > > written by the guest (after the 1st round) are transferred. One method
> > > > that is popularly used by the hypervisor to track which part of memory is
> > > > written is to write-protect all the guest memory.
> > > >
> > > > This feature enables the optimization by skipping the transfer of guest
> > > > free pages during VM live migration. It is not concerned that the memory
> > > > pages are used after they are given to the hypervisor as a hint of the
> > > > free pages, because they will be tracked by the hypervisor and transferred
> > > > in the subsequent round if they are used and written.
> > > >
> > > > * Tests
> > > > - Test Environment
> > > >     Host: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
> > > >     Guest: 8G RAM, 4 vCPU
> > > >     Migration setup: migrate_set_speed 100G, migrate_set_downtime 2 second
> > > >
> > > > - Test Results
> > > >     - Idle Guest Live Migration Time (results are averaged over 10 runs):
> > > >         - Optimization v.s. Legacy = 409ms vs 1757ms --> ~77% reduction
> > > >           (setting page poisoning zero and enabling ksm don't affect the
> > > >           comparison result)
> > > >     - Guest with Linux Compilation Workload (make bzImage -j4):
> > > >         - Live Migration Time (average)
> > > >           Optimization v.s. Legacy = 1407ms v.s. 2528ms --> ~44% reduction
> > > >         - Linux Compilation Time
> > > >           Optimization v.s. Legacy = 5min4s v.s. 5min12s
> > > >           --> no obvious difference
> > > I'd like to see dgilbert's take on whether this kind of gain
> > > justifies adding a PV interfaces, and what kind of guest workload
> > > is appropriate.
> > >
> > > Cc'd.
> > Well, 44% is great ... although the measurement is a bit weird.
> >
> > a) A 2 second downtime is very large; 300-500ms is more normal
> > b) I'm not sure what the 'average' is - is that just between a bunch of
> >    repeated migrations?
> > c) What load was running in the guest during the live migration?
> >
> > An interesting measurement to add would be to do the same test but
> > with a VM with a lot more RAM but the same load; you'd hope the gain
> > would be even better.
> > It would be interesting, especially because the users who are interested
> > are people creating VMs allocated with lots of extra memory (for the
> > worst case) but most of the time migrating when it's fairly idle.
> >
> > Dave
> >
>
> Hi Dave,
>
> The results of the added experiments have been shown in the v37 cover
> letter.
> Could you have a look at https://lkml.org/lkml/2018/8/27/29 . Thanks.

OK, that's much better.  The ~50% reduction with an 8G VM and a real
workload is great, and it does what you expect when you put a lot more
RAM in and see the 84% reduction on a guest with 128G RAM - 54s vs ~9s
is a big win!

(The migrate_set_speed is a bit high, since that's in bytes/s - but it's
not important).

That looks good,

Thanks!

Dave

> Best,
> Wei
>

--
Dr. David Alan Gilbert / dgilbert at redhat.com / Manchester, UK
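[Editor's note: for anyone reproducing the test setup, the migration configuration
described in the cover letter corresponds roughly to the QEMU HMP monitor commands
below. The 0.4 second downtime is the value Wei agreed to use above, and the
destination URI is a placeholder; note that migrate_set_speed takes a bandwidth
limit in bytes/s, so "100G" effectively means "unthrottled", which is the point
made above.]

(qemu) migrate_set_speed 100G
(qemu) migrate_set_downtime 0.4
(qemu) migrate -d tcp:<destination-host>:<port>
(qemu) info migrate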