search for: min_ord

Displaying 12 results from an estimated 92 matches for "min_ord".

2017 Aug 08
2
[virtio-dev] Re: [PATCH v13 4/5] mm: support reporting free page blocks
...struct zone *zone = &pgdat->node_zones[i]; >> >> if (!populated_zone(zone)) >> continue; > > Can we directly use for_each_populated_zone(zone) here? > > >> spin_lock_irqsave(&zone->lock, flags); >> for (order = min_order; order < MAX_ORDER; ++order) { > > > This appears to be covered by for_each_migratetype_order(order, mt) > below. > > >> struct free_area *free_area = &zone->free_area[order]; >> enum migratetype mt; >> struct page...
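The exchange above proposes replacing the patch's open-coded zone loop with existing mm iterators. A minimal sketch of the walk under discussion, assuming the visit/opaque2 names from the patch and the stock kernel helpers for_each_populated_zone(), list_for_each_entry(), and page_to_pfn() (this illustrates the discussion, not the merged code):

    struct zone *zone;
    unsigned long flags;
    unsigned int order;

    for_each_populated_zone(zone) {
            spin_lock_irqsave(&zone->lock, flags);
            for (order = min_order; order < MAX_ORDER; ++order) {
                    struct free_area *free_area = &zone->free_area[order];
                    enum migratetype mt;
                    struct page *page;

                    /* Free pages are linked via page->lru on each
                     * per-migratetype free list of this order. */
                    for (mt = 0; mt < MIGRATE_TYPES; mt++)
                            list_for_each_entry(page,
                                                &free_area->free_list[mt], lru)
                                    visit(opaque2, page_to_pfn(page),
                                          1UL << order);
            }
            spin_unlock_irqrestore(&zone->lock, flags);
    }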
2017 Aug 10
1
[virtio-dev] Re: [PATCH v13 4/5] mm: support reporting free page blocks
...ne(zone)) >>>> continue; >>> Can we directly use for_each_populated_zone(zone) here? > yes, my example couldn't because I was still assuming per-node API > >>>> spin_lock_irqsave(&zone->lock, flags); >>>> for (order = min_order; order < MAX_ORDER; ++order) { >>> >>> This appears to be covered by for_each_migratetype_order(order, mt) below. > yes but > #define for_each_migratetype_order(order, type) \ > for (order = 0; order < MAX_ORDER; order++) \ > for (type = 0; type < MIGRATE...
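The objection here is that for_each_migratetype_order() hard-codes its starting order at 0, so it cannot honor a caller-supplied min_order. For reference, the macro as defined in include/linux/mmzone.h at the time:

    #define for_each_migratetype_order(order, type) \
            for (order = 0; order < MAX_ORDER; order++) \
                    for (type = 0; type < MIGRATE_TYPES; type++)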
2017 Aug 03
4
[PATCH v13 4/5] mm: support reporting free page blocks
...xtern void free_area_init_node(int nid, unsigned long * zones_size, > unsigned long zone_start_pfn, unsigned long *zholes_size); > extern void free_initmem(void); > > +#if IS_ENABLED(CONFIG_VIRTIO_BALLOON) > +extern void walk_free_mem_block(void *opaque1, > + unsigned int min_order, > + void (*visit)(void *opaque2, > + unsigned long pfn, > + unsigned long nr_pages)); > +#endif Is the ifdef necessary? Sure, only the virtio balloon driver will use this currently, but this looks like generic functionality not specific to virtio at all, so the ifd...
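The feedback above argues the walker is generic mm functionality, so the CONFIG_VIRTIO_BALLOON guard around the declaration is unnecessary. The suggested shape of the declaration, sketched from the review comment rather than taken from a later revision:

    /* include/linux/mm.h -- no virtio-specific #ifdef around a generic API */
    extern void walk_free_mem_block(void *opaque1,
                                    unsigned int min_order,
                                    void (*visit)(void *opaque2,
                                                  unsigned long pfn,
                                                  unsigned long nr_pages));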
2017 Jul 25
2
[PATCH v12 6/8] mm: support reporting free page blocks
...and size=2MB, to > the hypervisor. So you want to skip pfn walks by regularly calling into the page allocator to update your bitmap. If that is the case then would an API that would allow you to update your bitmap via a callback be sufficient? Something like void walk_free_mem(int node, int min_order, void (*visit)(unsigned long pfn, unsigned long nr_pages)) The function will call the given callback for each free memory block on the given node starting from the given min_order. The callback will run in a strictly atomic and very light context. You can update your bitmap from there. This wou...
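A sketch of how a driver might use the proposed walk_free_mem() from the message above: because the callback runs in atomic context with the zone lock held, it does nothing but set bits in a preallocated bitmap. The names free_page_bitmap, mark_free_range, and refresh_free_bitmap are illustrative, not from the thread:

    static unsigned long *free_page_bitmap;  /* one bit per pfn, preallocated */

    /* Runs under zone->lock: atomic, no sleeping, keep it light. */
    static void mark_free_range(unsigned long pfn, unsigned long nr_pages)
    {
            bitmap_set(free_page_bitmap, pfn, nr_pages);
    }

    static void refresh_free_bitmap(int node, int min_order)
    {
            bitmap_zero(free_page_bitmap, max_pfn);
            walk_free_mem(node, min_order, mark_free_range);
    }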
2017 Aug 03
0
[PATCH v13 4/5] mm: support reporting free page blocks
.../mm.h @@ -1835,6 +1835,13 @@ extern void free_area_init_node(int nid, unsigned long * zones_size, unsigned long zone_start_pfn, unsigned long *zholes_size); extern void free_initmem(void); +#if IS_ENABLED(CONFIG_VIRTIO_BALLOON) +extern void walk_free_mem_block(void *opaque1, + unsigned int min_order, + void (*visit)(void *opaque2, + unsigned long pfn, + unsigned long nr_pages)); +#endif /* * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK) * into the buddy system. The freed pages will be poisoned with pattern diff --git a/include/linux/mmz...
2018 Jan 25
4
[PATCH v24 1/2] mm: support reporting free page blocks
...+++ b/include/linux/mm.h > @@ -1938,6 +1938,12 @@ extern void free_area_init_node(int nid, unsigned long * zones_size, > unsigned long zone_start_pfn, unsigned long *zholes_size); > extern void free_initmem(void); > > +extern void walk_free_mem_block(void *opaque, > + int min_order, > + bool (*report_pfn_range)(void *opaque, > + unsigned long pfn, > + unsigned long num)); > + > /* > * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK) > * into the buddy system. The freed pages will be poisoned with pattern...
2018 Mar 26
4
[PATCH v29 1/4] mm: support reporting free page blocks
...rder); > + if (ret) > + break; > + } > + spin_unlock_irqrestore(&zone->lock, flags); > + > + return ret; > +} > + > +/** > + * walk_free_mem_block - Walk through the free page blocks in the system > + * @opaque: the context passed from the caller > + * @min_order: the minimum order of free lists to check > + * @report_pfn_range: the callback to report the pfn range of the free pages > + * > + * If the callback returns a non-zero value, stop iterating the list of free > + * page blocks. Otherwise, continue to report. > + * > + * Please no...
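A usage sketch matching the kerneldoc contract quoted above: the callback must not sleep (it is invoked under the zone lock), and a non-zero return stops the walk. It follows the int-returning signature from the v25 posting below; my_ctx and my_queue_range are hypothetical:

    static int report_free_range(void *opaque, unsigned long pfn,
                                 unsigned long num)
    {
            struct my_ctx *ctx = opaque;

            /* A non-zero return value stops walk_free_mem_block(). */
            return my_queue_range(ctx, pfn, num);
    }

    static int report_all_free(struct my_ctx *ctx)
    {
            /* min_order 0: visit every free list */
            return walk_free_mem_block(ctx, 0, report_free_range);
    }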
2018 Jan 25
3
[PATCH v25 1/2 RESEND] mm: support reporting free page blocks
...--- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1938,6 +1938,12 @@ extern void free_area_init_node(int nid, unsigned long * zones_size, unsigned long zone_start_pfn, unsigned long *zholes_size); extern void free_initmem(void); +extern int walk_free_mem_block(void *opaque, + int min_order, + int (*report_pfn_range)(void *opaque, + unsigned long pfn, + unsigned long num)); + /* * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK) * into the buddy system. The freed pages will be poisoned with pattern diff --git a/mm/page_a...
2017 Jul 25
2
[PATCH v12 6/8] mm: support reporting free page blocks
...> > So you want to skip pfn walks by regularly calling into the page allocator to > > update your bitmap. If that is the case then would an API that would allow you > > to update your bitmap via a callback be sufficient? Something like > > void walk_free_mem(int node, int min_order, > > void (*visit)(unsigned long pfn, unsigned long nr_pages)) > > > > The function will call the given callback for each free memory block on the given > > node starting from the given min_order. The callback will run in a strictly atomic > > and very light context...
2017 Sep 30
0
[PATCH v16 4/5] mm: support reporting free page blocks
...0644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1835,6 +1835,12 @@ extern void free_area_init_node(int nid, unsigned long * zones_size, unsigned long zone_start_pfn, unsigned long *zholes_size); extern void free_initmem(void); +extern void walk_free_mem_block(void *opaque, + int min_order, + bool (*report_pfn_range)(void *opaque, + unsigned long pfn, + unsigned long num)); + /* * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK) * into the buddy system. The freed pages will be poisoned with pattern diff --git a/mm/page_alloc.c b/mm/pag...
2018 Jan 24
0
[PATCH v24 1/2] mm: support reporting free page blocks
...0644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1938,6 +1938,12 @@ extern void free_area_init_node(int nid, unsigned long * zones_size, unsigned long zone_start_pfn, unsigned long *zholes_size); extern void free_initmem(void); +extern void walk_free_mem_block(void *opaque, + int min_order, + bool (*report_pfn_range)(void *opaque, + unsigned long pfn, + unsigned long num)); + /* * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK) * into the buddy system. The freed pages will be poisoned with pattern diff --git a/mm/page_alloc.c b/mm/pag...
2018 Jan 25
0
[PATCH v25 1/2] mm: support reporting free page blocks
...--- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1938,6 +1938,12 @@ extern void free_area_init_node(int nid, unsigned long * zones_size, unsigned long zone_start_pfn, unsigned long *zholes_size); extern void free_initmem(void); +extern int walk_free_mem_block(void *opaque, + int min_order, + int (*report_pfn_range)(void *opaque, + unsigned long pfn, + unsigned long num)); + /* * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK) * into the buddy system. The freed pages will be poisoned with pattern diff --git a/mm/page_a...