Displaying 20 results from an estimated 208 matches for "free_list".
2017 Aug 03
0
[PATCH v13 4/5] mm: support reporting free page blocks
...at pages from the page block are used immediately after
+ * report_free_page_block() returns. It is the caller's responsibility to
+ * either detect or prevent the use of such pages.
+ *
+ * The input parameters specify the free list to check for a free page block:
+ * zone->free_area[order].free_list[migratetype]
+ *
+ * If the caller supplied page block (i.e. **page) is on the free list, offer
+ * the next page block on the list to the caller. Otherwise, offer the first
+ * page block on the list.
+ *
+ * Return 0 when a page block is found on the caller specified free list.
+ * Otherwise, no...
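Read together, the comment describes a cursor-style iterator over a single free list. A minimal caller-side sketch, assuming a prototype of int report_free_page_block(struct zone *zone, unsigned int order, unsigned int migratetype, struct page **page) (the excerpt does not show the real signature, and record_free_hint() is a hypothetical consumer):

	struct page *page = NULL;	/* NULL cursor: start at the first block */

	while (!report_free_page_block(zone, order, migratetype, &page)) {
		/*
		 * The block is only a hint: per the comment above, its pages
		 * may be reallocated as soon as the call returns, so the
		 * caller must detect or tolerate that.
		 */
		record_free_hint(page_to_pfn(page), 1UL << order);
	}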
2016 Mar 04
2
[Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization
...finish.
>
> And these 5 seconds are spent where?
>
The time is spent allocating the pages and sending the allocated pages' pfns to QEMU
through virtio.
> > For the PV solution, there is no need to inflate the balloon before live
> > migration; the only cost is traversing the free_list to construct
> > the free pages bitmap, which takes about 20ms for an 8GB idle guest (less if
> > there are fewer free pages). Passing the free pages info to the host takes about
> > 3ms extra.
> >
> >
> > Liang
>
> So now let's please stop talking about solutio...
2016 Mar 04
2
[Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization
...n deflate vq.
>
Maybe I was not clear enough.
I mean that if we inflate the balloon before live migration, it takes about 5 seconds for the inflating operation to finish on an 8GB guest.
For the PV solution, there is no need to inflate the balloon before live migration; the only cost is traversing the free_list to
construct the free pages bitmap, which takes about 20ms for an 8GB idle guest (less if there are fewer free pages).
Passing the free pages info to the host takes about 3ms extra.
Liang
> --
> MST
2010 Feb 09
2
[PATCH 1/3] Introduce nouveau_bo_wait for waiting on a BO with a GPU channel (v2)
Changes in v2:
- Addressed review comments
nouveau_bo_wait will make the GPU channel wait for the fence if possible,
falling back to waiting on the CPU with ttm_bo_wait otherwise.
The nouveau_fence_sync function currently returns -ENOSYS, and is
the focus of the next patch.
Signed-off-by: Luca Barbieri <luca at luca-barbieri.com>
---
drivers/gpu/drm/nouveau/nouveau_bo.c | 68
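A rough sketch of the fallback logic described above; both function names come from the patch description, but the control flow and exact signatures here are assumptions, not the patch's actual code:

	/* Prefer a GPU-side wait on the fence; nouveau_fence_sync()
	 * still returns -ENOSYS at this point in the series, so fall
	 * back to blocking on the CPU via TTM. */
	ret = nouveau_fence_sync(fence, chan);
	if (ret == -ENOSYS)
		ret = ttm_bo_wait(&nvbo->bo, false, true, false);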
2017 Aug 03
4
[PATCH v13 4/5] mm: support reporting free page blocks
...ock are used immediately after
> + * report_free_page_block() returns. It is the caller's responsibility to
> + * either detect or prevent the use of such pages.
> + *
> + * The input parameters specify the free list to check for a free page block:
> + * zone->free_area[order].free_list[migratetype]
> + *
> + * If the caller supplied page block (i.e. **page) is on the free list, offer
> + * the next page block on the list to the caller. Otherwise, offer the first
> + * page block on the list.
> + *
> + * Return 0 when a page block is found on the caller specified...
2016 Jul 27
4
[PATCH v2 repost 6/7] mm: add the related functions to get free page info
On 07/26/2016 06:23 PM, Liang Li wrote:
> + for_each_migratetype_order(order, t) {
> + list_for_each(curr, &zone->free_area[order].free_list[t]) {
> + pfn = page_to_pfn(list_entry(curr, struct page, lru));
> + if (pfn >= start_pfn && pfn <= end_pfn) {
> + page_num = 1UL << order;
> + if (pfn + page_num > end_pfn)
> + page_num = end_pfn - pfn;
> + bitmap_set(bitmap, pfn - start_pf...
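The quoted hunk is cut off mid-call. For reference, a self-contained sketch of the same walk with the truncated bitmap_set() line completed, assuming it sets page_num bits at offset pfn - start_pfn and that the caller holds zone->lock so the free lists are stable:

	unsigned int order, t;
	struct list_head *curr;
	unsigned long pfn, page_num;

	for_each_migratetype_order(order, t) {
		list_for_each(curr, &zone->free_area[order].free_list[t]) {
			pfn = page_to_pfn(list_entry(curr, struct page, lru));
			if (pfn >= start_pfn && pfn <= end_pfn) {
				page_num = 1UL << order;
				if (pfn + page_num > end_pfn)
					page_num = end_pfn - pfn;
				/* One bit per free page, relative to start_pfn. */
				bitmap_set(bitmap, pfn - start_pfn, page_num);
			}
		}
	}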
2017 Aug 03
2
[PATCH v13 4/5] mm: support reporting free page blocks
...>
> >>>>> if (!free_area->nr_pages)
> >>>>> continue;
> >>>>>
> >>>>> for_each_migratetype_order(order, mt) {
> >>>>> list_for_each_entry(page,
> >>>>> &free_area->free_list[mt], lru) {
> >>>>>
> >>>>> pfn = page_to_pfn(page);
> >>>>> visit(opaque2, pfn, 1<<order);
> >>>>> }
> >>>>> }
> >>>>> }
> >>>>>
> >>>>>...
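For context, a hedged sketch of a callback with the shape this loop invokes; the (opaque, pfn, count) signature is inferred from the call site, and the free_hint_buf state struct is hypothetical:

	static int visit(void *opaque, unsigned long pfn, unsigned long num_pages)
	{
		struct free_hint_buf *buf = opaque;	/* hypothetical caller state */

		/* Record each reported block until the buffer is full. */
		if (buf->used < buf->capacity) {
			buf->pfn[buf->used] = pfn;
			buf->len[buf->used] = num_pages;
			buf->used++;
		}
		return 0;	/* a non-zero return stopping the walk is an assumption */
	}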
2018 Jun 16
2
[PATCH v33 1/4] mm: add a function to get free page blocks
...e *page;
> + struct list_head *list;
> + unsigned long addr, flags;
> + uint32_t index = 0;
> +
> + for_each_populated_zone(zone) {
> + spin_lock_irqsave(&zone->lock, flags);
> + for (mt = 0; mt < MIGRATE_TYPES; mt++) {
> + list = &zone->free_area[order].free_list[mt];
> + list_for_each_entry(page, list, lru) {
> + addr = page_to_pfn(page) << PAGE_SHIFT;
> + if (likely(index < size)) {
> + buf[index++] = cpu_to_le64(addr);
> + } else {
> + spin_unlock_irqrestore(&zone->lock,
> + flags);
>...
2018 Jul 10
0
[PATCH v35 1/5] mm: support to get hints of free page blocks
...es; or -EINVAL if an unexpected argument is received (e.g.
+ * incorrect @order, empty buffer list).
+ */
+int get_from_free_page_list(int order, struct list_head *pages,
+ unsigned int size, unsigned long *loaded_num)
+{
+ struct zone *zone;
+ enum migratetype mt;
+ struct list_head *free_list;
+ struct page *free_page, *buf_page;
+ unsigned long addr;
+ __le64 *buf;
+ unsigned int used_buf_num = 0, entry_index = 0,
+ entries = size / sizeof(__le64);
+ *loaded_num = 0;
+
+ /* Validity check */
+ if (order < 0 || order >= MAX_ORDER)
+ return -EINVAL;
+
+ buf_page = list_first...
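From the signature shown above, a hedged caller-side sketch; how the buffer page list is prepared is an assumption, since the excerpt truncates before the buffer handling:

	LIST_HEAD(buf_pages);
	unsigned long loaded = 0;
	struct page *p = alloc_page(GFP_KERNEL);
	int ret;

	if (!p)
		return -ENOMEM;
	list_add(&p->lru, &buf_pages);

	/* Each entry is a little-endian 64-bit address, so one page
	 * holds PAGE_SIZE / sizeof(__le64) hints. The order must pass
	 * the 0 <= order < MAX_ORDER validity check above. */
	ret = get_from_free_page_list(MAX_ORDER - 1, &buf_pages,
				      PAGE_SIZE, &loaded);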
2016 Mar 08
0
[Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization
...ges and sending the allocated pages' pfns to QEMU
> through virtio.
What if we skip allocating pages but use the existing interface to send pfns
to QEMU?
> > > For the PV solution, there is no need to inflate the balloon before live
> > > migration; the only cost is traversing the free_list to construct
> > > the free pages bitmap, which takes about 20ms for an 8GB idle guest (less if
> > > there are fewer free pages). Passing the free pages info to the host takes about
> > > 3ms extra.
> > >
> > >
> > > Liang
> >
> > So now let...
2017 Mar 29
2
[PATCH kernel v8 3/4] mm: add interface to offer info about unused pages
...ing too ugly.
>
> The patch description was too narrowed and may have caused some
> confusion, sorry about that. This function is aimed to be generic. I
> agree with the description suggested by Michael.
>
> Since the main body of the function is related to operating on the
> free_list, I think it is better to have them located here.
> Small helpers may be less efficient and thereby cause some
> performance loss as well.
> I think one improvement we can make is to remove the "chunk format"
> related things from this function. The function can generally off...
2012 Aug 16
0
[RFC v1 3/5] VBD: enlarge max segment per request in blkfront
...anding request that we've passed to the lower device layers has a
* 'pending_req' allocated to it. Each buffer_head that completes decrements
@@ -78,6 +83,11 @@ struct pending_req {
unsigned short operation;
int status;
struct list_head free_list;
+ struct gnttab_map_grant_ref *map;
+ struct gnttab_unmap_grant_ref *unmap;
+ struct seg_buf *seg;
+ struct bio **biolist;
+ struct page **pages;
};
#define BLKBACK_INVALID_HANDLE (~0)
@@ -123,28 +133,9 @@ static inline unsigned long vaddr...
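A hedged sketch of the per-request allocation the new fields imply; the segment count variable and cleanup label are assumptions, as the relevant hunks are truncated:

	/* One entry per segment of the now-larger request. */
	pending_req->map   = kcalloc(nseg, sizeof(*pending_req->map), GFP_KERNEL);
	pending_req->unmap = kcalloc(nseg, sizeof(*pending_req->unmap), GFP_KERNEL);
	pending_req->seg   = kcalloc(nseg, sizeof(*pending_req->seg), GFP_KERNEL);
	pending_req->pages = kcalloc(nseg, sizeof(*pending_req->pages), GFP_KERNEL);
	if (!pending_req->map || !pending_req->unmap ||
	    !pending_req->seg || !pending_req->pages)
		goto fail;	/* hypothetical error path */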
2017 Aug 03
2
[PATCH v13 4/5] mm: support reporting free page blocks
...>>> struct page *page;
> >>>
> >>> if (!free_area->nr_pages)
> >>> continue;
> >>>
> >>> for_each_migratetype_order(order, mt) {
> >>> list_for_each_entry(page,
> >>> &free_area->free_list[mt], lru) {
> >>>
> >>> pfn = page_to_pfn(page);
> >>> visit(opaque2, pfn, 1<<order);
> >>> }
> >>> }
> >>> }
> >>>
> >>> spin_unlock_irqrestore(&zone->lock, flags);
> >...