On Thu 03-08-17 14:38:18, Wei Wang wrote:
> This patch adds support to walk through the free page blocks in the
> system and report them via a callback function. Some page blocks may
> leave the free list after the report function returns, so it is the
> caller's responsibility to either detect or prevent the use of such
> pages.
>
> Signed-off-by: Wei Wang <wei.w.wang at intel.com>
> Signed-off-by: Liang Li <liang.z.li at intel.com>
> Cc: Michal Hocko <mhocko at kernel.org>
> Cc: Michael S. Tsirkin <mst at redhat.com>
> ---
>  include/linux/mm.h     |   7 ++++
>  include/linux/mmzone.h |   5 +++
>  mm/page_alloc.c        | 109 +++++++++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 121 insertions(+)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 46b9ac5..24481e3 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1835,6 +1835,13 @@ extern void free_area_init_node(int nid, unsigned long * zones_size,
>  		unsigned long zone_start_pfn, unsigned long *zholes_size);
>  extern void free_initmem(void);
>
> +#if IS_ENABLED(CONFIG_VIRTIO_BALLOON)
> +extern void walk_free_mem_block(void *opaque1,
> +				unsigned int min_order,
> +				void (*visit)(void *opaque2,
> +					      unsigned long pfn,
> +					      unsigned long nr_pages));
> +#endif

Is the ifdef necessary? Sure, only the virtio balloon driver will use this
currently, but this looks like generic functionality, not specific to
virtio at all, so the ifdef is rather confusing.

>  /*
>   * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
>   * into the buddy system. The freed pages will be poisoned with pattern
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index fc14b8b..59eacf2 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -83,6 +83,11 @@ static inline bool is_migrate_movable(int mt)
>  	for (order = 0; order < MAX_ORDER; order++) \
>  		for (type = 0; type < MIGRATE_TYPES; type++)
>
> +#define for_each_migratetype_order_decend(min_order, order, type) \
> +	for (order = MAX_ORDER - 1; order < MAX_ORDER && order >= min_order; \
> +	     order--) \
> +		for (type = 0; type < MIGRATE_TYPES; type++)
> +

Is there going to be any other user outside of mm/page_alloc.c? If not,
then do not export this.

>  extern int page_group_by_mobility_disabled;
>
>  #define NR_MIGRATETYPE_BITS (PB_migrate_end - PB_migrate + 1)
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 6d30e91..b90b513 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4761,6 +4761,115 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
>  	show_swap_cache_info();
>  }
>
> +#if IS_ENABLED(CONFIG_VIRTIO_BALLOON)
> +
> +/*
> + * Heuristically get a free page block in the system.
> + *
> + * It is possible that pages from the page block are used immediately after
> + * report_free_page_block() returns. It is the caller's responsibility to
> + * either detect or prevent the use of such pages.
> + *
> + * The input parameters specify the free list to check for a free page block:
> + *	zone->free_area[order].free_list[migratetype]
> + *
> + * If the caller supplied page block (i.e. **page) is on the free list, offer
> + * the next page block on the list to the caller. Otherwise, offer the first
> + * page block on the list.
> + *
> + * Return 0 when a page block is found on the caller specified free list.
> + * Otherwise, no page block is found.
> + */
> +static int report_free_page_block(struct zone *zone, unsigned int order,
> +				  unsigned int migratetype, struct page **page)

This is just too ugly, and wrong actually. Never provide struct page
pointers outside of the zone->lock. What I've had in mind was to simply
walk the free lists of the suitable order and call the callback for each
one. Something as simple as

	for (i = 0; i < MAX_NR_ZONES; i++) {
		struct zone *zone = &pgdat->node_zones[i];

		if (!populated_zone(zone))
			continue;

		spin_lock_irqsave(&zone->lock, flags);
		for (order = min_order; order < MAX_ORDER; ++order) {
			struct free_area *free_area = &zone->free_area[order];
			enum migratetype mt;
			struct page *page;

			if (!free_area->nr_pages)
				continue;

			for_each_migratetype_order(order, mt) {
				list_for_each_entry(page,
						&free_area->free_list[mt], lru) {
					pfn = page_to_pfn(page);
					visit(opaque2, pfn, 1 << order);
				}
			}
		}

		spin_unlock_irqrestore(&zone->lock, flags);
	}

[...]

> +/*
> + * Walk through the free page blocks in the system. The @visit callback is
> + * invoked to handle each free page block.
> + *
> + * Note: some page blocks may be used after the report function returns, so it
> + * is not safe for the callback to use any pages or discard data on such page
> + * blocks.
> + */
> +void walk_free_mem_block(void *opaque1,
> +			 unsigned int min_order,
> +			 void (*visit)(void *opaque2,
> +				       unsigned long pfn,
> +				       unsigned long nr_pages))

Is there any reason why there is no node id? I guess you just do not
care for your particular use case. Not that I care too much either. If
somebody wants this per node, it would be trivial to extend; I was just
wondering whether this is a deliberate decision or an omission.

> +{
> +	struct zone *zone = NULL;
> +	struct page *page = NULL;
> +	unsigned int order;
> +	unsigned long pfn, nr_pages;
> +	int type;
> +
> +	for_each_populated_zone(zone) {
> +		for_each_migratetype_order_decend(min_order, order, type) {
> +			while (!report_free_page_block(zone, order, type,
> +						       &page)) {
> +				pfn = page_to_pfn(page);
> +				nr_pages = 1 << order;
> +				visit(opaque1, pfn, nr_pages);
> +			}
> +		}
> +	}
> +}
> +EXPORT_SYMBOL_GPL(walk_free_mem_block);
> +
> +#endif
> +
>  static void zoneref_set_zone(struct zone *zone, struct zoneref *zoneref)
>  {
>  	zoneref->zone = zone;
> --
> 2.7.4

--
Michal Hocko
SUSE Labs
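As background for the interface under review: a minimal sketch of what a
walk_free_mem_block() caller could look like, assuming the signature from
the patch above. The context structure and function names below
(free_page_ctx, record_free_page, count_free_pages) are made up for
illustration and are not the actual virtio-balloon code.

	/* Hypothetical caller of walk_free_mem_block(); names are illustrative. */
	struct free_page_ctx {
		unsigned long total_pages;	/* free pages seen so far */
	};

	/* The @visit callback; @opaque is the opaque1 pointer passed in. */
	static void record_free_page(void *opaque, unsigned long pfn,
				     unsigned long nr_pages)
	{
		struct free_page_ctx *ctx = opaque;

		/*
		 * The block may be reallocated as soon as zone->lock is
		 * dropped, so only record the hint here; never touch the
		 * page contents.
		 */
		ctx->total_pages += nr_pages;
	}

	static void count_free_pages(void)
	{
		struct free_page_ctx ctx = { 0 };

		/* Report blocks of at least order 8 (1MB with 4KB pages). */
		walk_free_mem_block(&ctx, 8, record_free_page);
	}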
On 08/03/2017 05:11 PM, Michal Hocko wrote:
> On Thu 03-08-17 14:38:18, Wei Wang wrote:
>> This patch adds support to walk through the free page blocks in the
>> system and report them via a callback function. Some page blocks may
>> leave the free list after the report function returns, so it is the
>> caller's responsibility to either detect or prevent the use of such
>> pages.
>>
>> Signed-off-by: Wei Wang <wei.w.wang at intel.com>
>> Signed-off-by: Liang Li <liang.z.li at intel.com>
>> Cc: Michal Hocko <mhocko at kernel.org>
>> Cc: Michael S. Tsirkin <mst at redhat.com>
>> ---
>>  include/linux/mm.h     |   7 ++++
>>  include/linux/mmzone.h |   5 +++
>>  mm/page_alloc.c        | 109 +++++++++++++++++++++++++++++++++++++++++++++++++
>>  3 files changed, 121 insertions(+)
>>
>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index 46b9ac5..24481e3 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -1835,6 +1835,13 @@ extern void free_area_init_node(int nid, unsigned long * zones_size,
>>  		unsigned long zone_start_pfn, unsigned long *zholes_size);
>>  extern void free_initmem(void);
>>
>> +#if IS_ENABLED(CONFIG_VIRTIO_BALLOON)
>> +extern void walk_free_mem_block(void *opaque1,
>> +				unsigned int min_order,
>> +				void (*visit)(void *opaque2,
>> +					      unsigned long pfn,
>> +					      unsigned long nr_pages));
>> +#endif
> Is the ifdef necessary? Sure, only the virtio balloon driver will use this
> currently, but this looks like generic functionality, not specific to
> virtio at all, so the ifdef is rather confusing.

OK. We can remove the condition if there is no objection from others.

>>  extern int page_group_by_mobility_disabled;
>>
>>  #define NR_MIGRATETYPE_BITS (PB_migrate_end - PB_migrate + 1)
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 6d30e91..b90b513 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -4761,6 +4761,115 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
>>  	show_swap_cache_info();
>>  }
>>
>> +#if IS_ENABLED(CONFIG_VIRTIO_BALLOON)
>> +
>> +/*
>> + * Heuristically get a free page block in the system.
>> + *
>> + * It is possible that pages from the page block are used immediately after
>> + * report_free_page_block() returns. It is the caller's responsibility to
>> + * either detect or prevent the use of such pages.
>> + *
>> + * The input parameters specify the free list to check for a free page block:
>> + *	zone->free_area[order].free_list[migratetype]
>> + *
>> + * If the caller supplied page block (i.e. **page) is on the free list, offer
>> + * the next page block on the list to the caller. Otherwise, offer the first
>> + * page block on the list.
>> + *
>> + * Return 0 when a page block is found on the caller specified free list.
>> + * Otherwise, no page block is found.
>> + */
>> +static int report_free_page_block(struct zone *zone, unsigned int order,
>> +				  unsigned int migratetype, struct page **page)
> This is just too ugly, and wrong actually. Never provide struct page
> pointers outside of the zone->lock. What I've had in mind was to simply
> walk the free lists of the suitable order and call the callback for each
> one. Something as simple as
>
> 	for (i = 0; i < MAX_NR_ZONES; i++) {
> 		struct zone *zone = &pgdat->node_zones[i];
>
> 		if (!populated_zone(zone))
> 			continue;
>
> 		spin_lock_irqsave(&zone->lock, flags);
> 		for (order = min_order; order < MAX_ORDER; ++order) {
> 			struct free_area *free_area = &zone->free_area[order];
> 			enum migratetype mt;
> 			struct page *page;
>
> 			if (!free_area->nr_pages)
> 				continue;
>
> 			for_each_migratetype_order(order, mt) {
> 				list_for_each_entry(page,
> 						&free_area->free_list[mt], lru) {
> 					pfn = page_to_pfn(page);
> 					visit(opaque2, pfn, 1 << order);
> 				}
> 			}
> 		}
>
> 		spin_unlock_irqrestore(&zone->lock, flags);
> 	}
>
> [...]

I think the above would hold the lock for too long a time. That's why we
prefer to take one free page block each time; taking them one by one also
doesn't make a difference in terms of the performance that we need.

The struct page is used as a "state" to get the next free page block. It
is only given to an internal implementation of a function in mm (not seen
by the outside caller). Would this be OK?

If not, how about pfn - we can also pass in a pfn to the function, do
pfn_to_page each time the function starts, and then do page_to_pfn when
it returns.

>> +/*
>> + * Walk through the free page blocks in the system. The @visit callback is
>> + * invoked to handle each free page block.
>> + *
>> + * Note: some page blocks may be used after the report function returns, so it
>> + * is not safe for the callback to use any pages or discard data on such page
>> + * blocks.
>> + */
>> +void walk_free_mem_block(void *opaque1,
>> +			 unsigned int min_order,
>> +			 void (*visit)(void *opaque2,
>> +				       unsigned long pfn,
>> +				       unsigned long nr_pages))
> Is there any reason why there is no node id? I guess you just do not
> care for your particular use case. Not that I care too much either. If
> somebody wants this per node, it would be trivial to extend; I was just
> wondering whether this is a deliberate decision or an omission.
>

Right, we don't care about the node id. Live migration transfers all of
the guest's system memory, so we just want to get the hint of all the
free page blocks from the system.

Best,
Wei
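For concreteness, a sketch of the pfn-based alternative Wei mentions
above, where the cursor is revalidated under zone->lock on each call.
This is only an illustration of the idea under discussion, not code from
the patch; the cursor_pfn parameter and the PageBuddy/page_order
revalidation are assumptions about how the resume check could be done
(page_order() is the helper internal to mm/page_alloc.c, where this
function would live).

	/*
	 * Sketch of the pfn-as-cursor variant: *cursor_pfn holds the last
	 * reported block (0 on the first call) and is revalidated under
	 * zone->lock before advancing. Illustrative only.
	 */
	static int report_free_page_block(struct zone *zone, unsigned int order,
					  unsigned int migratetype,
					  unsigned long *cursor_pfn)
	{
		struct list_head *head =
			&zone->free_area[order].free_list[migratetype];
		struct page *page = NULL;
		unsigned long flags;
		int ret = -EAGAIN;

		spin_lock_irqsave(&zone->lock, flags);
		if (*cursor_pfn && pfn_valid(*cursor_pfn)) {
			page = pfn_to_page(*cursor_pfn);
			if (PageBuddy(page) && page_order(page) == order)
				/* Still free: offer the next block, if any. */
				page = list_is_last(&page->lru, head) ? NULL :
					list_next_entry(page, lru);
			else
				/*
				 * The block was reallocated meanwhile: restart
				 * from the head (which may revisit blocks).
				 */
				page = list_first_entry_or_null(head,
							struct page, lru);
		} else {
			page = list_first_entry_or_null(head, struct page, lru);
		}
		if (page) {
			*cursor_pfn = page_to_pfn(page);
			ret = 0;
		}
		spin_unlock_irqrestore(&zone->lock, flags);

		return ret;
	}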
On Thu 03-08-17 18:42:15, Wei Wang wrote:
> On 08/03/2017 05:11 PM, Michal Hocko wrote:
> > On Thu 03-08-17 14:38:18, Wei Wang wrote:
[...]
> >> +static int report_free_page_block(struct zone *zone, unsigned int order,
> >> +				  unsigned int migratetype, struct page **page)
> > This is just too ugly, and wrong actually. Never provide struct page
> > pointers outside of the zone->lock. What I've had in mind was to simply
> > walk the free lists of the suitable order and call the callback for each
> > one. Something as simple as
> >
> > 	for (i = 0; i < MAX_NR_ZONES; i++) {
> > 		struct zone *zone = &pgdat->node_zones[i];
> >
> > 		if (!populated_zone(zone))
> > 			continue;
> >
> > 		spin_lock_irqsave(&zone->lock, flags);
> > 		for (order = min_order; order < MAX_ORDER; ++order) {
> > 			struct free_area *free_area = &zone->free_area[order];
> > 			enum migratetype mt;
> > 			struct page *page;
> >
> > 			if (!free_area->nr_pages)
> > 				continue;
> >
> > 			for_each_migratetype_order(order, mt) {
> > 				list_for_each_entry(page,
> > 						&free_area->free_list[mt], lru) {
> > 					pfn = page_to_pfn(page);
> > 					visit(opaque2, pfn, 1 << order);
> > 				}
> > 			}
> > 		}
> >
> > 		spin_unlock_irqrestore(&zone->lock, flags);
> > 	}
> >
> > [...]
>
> I think the above would hold the lock for too long a time. That's why we
> prefer to take one free page block each time; taking them one by one also
> doesn't make a difference in terms of the performance that we need.

I think you should start with the simple approach and improve it
incrementally if this turns out to be not optimal. I really detest taking
struct pages outside of the lock. You never know what might happen after
the lock is dropped. E.g. can you race with memory hotremove?

> The struct page is used as a "state" to get the next free page block. It
> is only given to an internal implementation of a function in mm (not seen
> by the outside caller). Would this be OK?
> If not, how about pfn - we can also pass in a pfn to the function, do
> pfn_to_page each time the function starts, and then do page_to_pfn when
> it returns.

No, just do not try to play tricks with struct pages which might have
gone away.

--
Michal Hocko
SUSE Labs
On 08/03/2017 05:11 PM, Michal Hocko wrote:
> On Thu 03-08-17 14:38:18, Wei Wang wrote:
> This is just too ugly, and wrong actually. Never provide struct page
> pointers outside of the zone->lock. What I've had in mind was to simply
> walk the free lists of the suitable order and call the callback for each
> one. Something as simple as
>
> 	for (i = 0; i < MAX_NR_ZONES; i++) {
> 		struct zone *zone = &pgdat->node_zones[i];
>
> 		if (!populated_zone(zone))
> 			continue;

Can we directly use for_each_populated_zone(zone) here?

> 		spin_lock_irqsave(&zone->lock, flags);
> 		for (order = min_order; order < MAX_ORDER; ++order) {

This appears to be covered by for_each_migratetype_order(order, mt) below.

> 			struct free_area *free_area = &zone->free_area[order];
> 			enum migratetype mt;
> 			struct page *page;
>
> 			if (!free_area->nr_pages)
> 				continue;
>
> 			for_each_migratetype_order(order, mt) {
> 				list_for_each_entry(page,
> 						&free_area->free_list[mt], lru) {
> 					pfn = page_to_pfn(page);
> 					visit(opaque2, pfn, 1 << order);
> 				}
> 			}
> 		}
>
> 		spin_unlock_irqrestore(&zone->lock, flags);
> 	}
>
> [...]
>

What do you think if we further simplify the above implementation like
this:

	for_each_populated_zone(zone) {
		for_each_migratetype_order_decend(1, order, mt) {
			spin_lock_irqsave(&zone->lock, flags);
			list_for_each_entry(page,
				&zone->free_area[order].free_list[mt], lru) {
				pfn = page_to_pfn(page);
				visit(opaque1, pfn, 1 << order);
			}
			spin_unlock_irqrestore(&zone->lock, flags);
		}
	}

Best,
Wei
Wei Wang
2017-Aug-08 06:34 UTC
[virtio-dev] Re: [PATCH v13 4/5] mm: support reporting free page blocks
On 08/08/2017 02:12 PM, Wei Wang wrote:
> On 08/03/2017 05:11 PM, Michal Hocko wrote:
>> On Thu 03-08-17 14:38:18, Wei Wang wrote:
>> This is just too ugly, and wrong actually. Never provide struct page
>> pointers outside of the zone->lock. What I've had in mind was to simply
>> walk the free lists of the suitable order and call the callback for each
>> one. Something as simple as
>>
>> 	for (i = 0; i < MAX_NR_ZONES; i++) {
>> 		struct zone *zone = &pgdat->node_zones[i];
>>
>> 		if (!populated_zone(zone))
>> 			continue;
>
> Can we directly use for_each_populated_zone(zone) here?
>
>> 		spin_lock_irqsave(&zone->lock, flags);
>> 		for (order = min_order; order < MAX_ORDER; ++order) {
>
> This appears to be covered by for_each_migratetype_order(order, mt)
> below.
>
>> 			struct free_area *free_area = &zone->free_area[order];
>> 			enum migratetype mt;
>> 			struct page *page;
>>
>> 			if (!free_area->nr_pages)
>> 				continue;
>>
>> 			for_each_migratetype_order(order, mt) {
>> 				list_for_each_entry(page,
>> 						&free_area->free_list[mt], lru) {
>> 					pfn = page_to_pfn(page);
>> 					visit(opaque2, pfn, 1 << order);
>> 				}
>> 			}
>> 		}
>>
>> 		spin_unlock_irqrestore(&zone->lock, flags);
>> 	}
>>
>> [...]
>>
>
> What do you think if we further simplify the above implementation like
> this:
>
> 	for_each_populated_zone(zone) {
> 		for_each_migratetype_order_decend(1, order, mt) {

Here it will be min_order (passed by the caller) instead of "1", that is,

	for_each_migratetype_order_decend(min_order, order, mt)

> 			spin_lock_irqsave(&zone->lock, flags);
> 			list_for_each_entry(page,
> 				&zone->free_area[order].free_list[mt], lru) {
> 				pfn = page_to_pfn(page);
> 				visit(opaque1, pfn, 1 << order);
> 			}
> 			spin_unlock_irqrestore(&zone->lock, flags);
> 		}
> 	}

Best,
Wei
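Pulling the thread's suggestions together, one possible shape of the
final walk: iterate populated zones, descend from MAX_ORDER - 1 to
min_order, and scope zone->lock to each per-migratetype list so it is
not held across a whole zone. This is a sketch of the approach being
converged on in the discussion, not necessarily the version that was
eventually merged.

	/*
	 * Illustrative consolidation of the discussed approach. The
	 * descending order loop is open-coded here instead of using the
	 * for_each_migratetype_order_decend() macro, whose export was
	 * questioned above.
	 */
	void walk_free_mem_block(void *opaque, unsigned int min_order,
				 void (*visit)(void *opaque, unsigned long pfn,
					       unsigned long nr_pages))
	{
		struct zone *zone;
		struct page *page;
		unsigned long pfn, flags;
		unsigned int order;
		int mt;

		for_each_populated_zone(zone) {
			/* order is unsigned: it wraps past 0, failing < MAX_ORDER */
			for (order = MAX_ORDER - 1;
			     order < MAX_ORDER && order >= min_order; order--) {
				for (mt = 0; mt < MIGRATE_TYPES; mt++) {
					spin_lock_irqsave(&zone->lock, flags);
					list_for_each_entry(page,
						&zone->free_area[order].free_list[mt],
						lru) {
						pfn = page_to_pfn(page);
						/*
						 * The block may be reallocated
						 * once the lock is dropped; the
						 * callback must treat this as a
						 * hint only.
						 */
						visit(opaque, pfn, 1UL << order);
					}
					spin_unlock_irqrestore(&zone->lock, flags);
				}
			}
		}
	}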