Displaying 7 results from an estimated 7 matches for "i_page".
2010 Oct 27 · 2 · [Qemu-devel] Re: [PATCH] Implement a virtio GPU transport
On 19/10/10 11:39, Avi Kivity wrote:
> On 10/19/2010 12:31 PM, Ian Molton wrote:
>>> 2. should start with a patch to the virtio-pci spec to document what
>>> you're doing
>>
>> Where can I find that spec?
>
> http://ozlabs.org/~rusty/virtio-spec/
Ok, but I'm not patching that until there's been some review.
There are links to the associated qemu and
2022 Jul 08 · 0 · [PATCH v2 07/19] mm/migrate: Convert expected_page_refs() to folio_expected_refs()
...refs += folio_nr_pages(folio);
> > +	if (folio_get_private(folio))
> > +		refs++;
> > +
> > +	return refs;
> > }
> >
> > /*
> > @@ -359,7 +364,7 @@ int folio_migrate_mapping(struct address_space *mapping,
> > 	XA_STATE(xas, &mapping->i_pages, folio_index(folio));
> > 	struct zone *oldzone, *newzone;
> > 	int dirty;
> > -	int expected_count = expected_page_refs(mapping, &folio->page) + extra_count;
> > +	int expected_count = folio_expected_refs(mapping, folio) + extra_count;
> > 	long nr = folio_...
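For orientation, the converted helper implied by the hunk above might look roughly as follows. This is a sketch, not the patch itself: the refs = 1 baseline and the early return when mapping is NULL are assumptions carried over from the old expected_page_refs() behaviour rather than lines visible in this excerpt.

static int folio_expected_refs(struct address_space *mapping,
		struct folio *folio)
{
	/*
	 * Baseline reference held by the migration code itself
	 * (assumption: same starting point as expected_page_refs()).
	 */
	int refs = 1;

	if (!mapping)
		return refs;

	/* One reference per subpage held by the page cache mapping. */
	refs += folio_nr_pages(folio);

	/* Filesystem-private data (e.g. buffer heads) pins the folio once more. */
	if (folio_get_private(folio))
		refs++;

	return refs;
}

folio_migrate_mapping() compares this expected count with the folio's actual reference count before it updates mapping->i_pages, which is why the second hunk only needs to swap the call site.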
2020 Nov 06 · 0 · [PATCH v3 3/6] mm: support THP migration to device private memory
...an be beyond i_size: drop them from page cache */
	if (head[i].index >= end) {
		ClearPageDirty(head + i);
@@ -2474,6 +2497,9 @@ static void __split_huge_page(struct page *page, struct list_head *list,
	if (PageSwapCache(head)) {
		page_ref_add(head, 2);
		xa_unlock(&swap_cache->i_pages);
+	} else if (is_device_private_page(head)) {
+		percpu_ref_get_many(page->pgmap->ref, nr - 1);
+		page_ref_add(head, 2);
	} else {
		page_ref_inc(head);
	}
@@ -2485,6 +2511,9 @@ static void __split_huge_page(struct page *page, struct list_head *list,
	spin_unlock_irqrestore(&...
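Taken together, the hunks adjust how __split_huge_page() hands out references after the split. A sketch of the resulting branch, with the reasoning spelled out in comments (the surrounding context is assumed; only the added lines come from the patch itself):

	if (PageSwapCache(head)) {
		/* Extra pin held by the swap cache on top of the head's own reference. */
		page_ref_add(head, 2);
		xa_unlock(&swap_cache->i_pages);
	} else if (is_device_private_page(head)) {
		/*
		 * Device private pages are not in the page cache, so take one
		 * pgmap reference for each of the nr - 1 new tail pages and
		 * two references on the head, mirroring the swap cache case
		 * (inference from the hunk, not stated in the excerpt).
		 */
		percpu_ref_get_many(page->pgmap->ref, nr - 1);
		page_ref_add(head, 2);
	} else {
		page_ref_inc(head);
	}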
2020 Nov 06 · 12 · [PATCH v3 0/6] mm/hmm/nouveau: add THP migration to migrate_vma_*
This series adds support for transparent huge page migration to
migrate_vma_*() and adds nouveau SVM and HMM selftests as consumers.
Earlier versions were posted as [1] and [2].
The patches apply cleanly to the linux-mm 5.10.0-rc2 tree. There are a
lot of other THP patches being posted. I don't think there are any
semantic conflicts but there may be some merge conflicts depending on
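For readers coming from the driver side, the consumer flow these patches target is the three-phase migrate_vma_*() sequence: collect, copy, finalize. Below is a rough sketch against the 5.10-era API; my_alloc_device_page() and my_copy_to_device() are hypothetical placeholders for driver-specific work, not functions from this series.

#include <linux/migrate.h>
#include <linux/slab.h>

/* Placeholders standing in for driver-specific allocation and copy. */
static unsigned long my_alloc_device_page(unsigned long src_pfn);
static void my_copy_to_device(const unsigned long *src,
			      const unsigned long *dst,
			      unsigned long npages);

/* Hypothetical driver helper: only struct migrate_vma and the
 * migrate_vma_*() calls are upstream API; everything my_* is made up. */
static int my_migrate_range(struct vm_area_struct *vma,
			    unsigned long start, unsigned long end,
			    void *pgmap_owner)
{
	unsigned long npages = (end - start) >> PAGE_SHIFT;
	unsigned long *src, *dst;
	unsigned long i;
	struct migrate_vma args = { };
	int ret = -ENOMEM;

	src = kcalloc(npages, sizeof(*src), GFP_KERNEL);
	dst = kcalloc(npages, sizeof(*dst), GFP_KERNEL);
	if (!src || !dst)
		goto out;

	args.vma = vma;
	args.start = start;
	args.end = end;
	args.src = src;
	args.dst = dst;
	args.pgmap_owner = pgmap_owner;
	args.flags = MIGRATE_VMA_SELECT_SYSTEM;

	/* Phase 1: collect, isolate and unmap the source pages into src[]. */
	ret = migrate_vma_setup(&args);
	if (ret)
		goto out;

	/* Phase 2: allocate device memory and copy the selected entries. */
	for (i = 0; i < npages; i++) {
		if (!(src[i] & MIGRATE_PFN_MIGRATE))
			continue;
		dst[i] = my_alloc_device_page(src[i]);	/* placeholder */
	}
	my_copy_to_device(src, dst, npages);		/* placeholder */

	/* Phase 3: install the device pages and drop the isolation. */
	migrate_vma_pages(&args);
	migrate_vma_finalize(&args);
out:
	kfree(src);
	kfree(dst);
	return ret;
}

The point of the series is that this flow should be able to carry a PMD-sized THP as a single unit instead of always splitting it into base pages first.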
2020 Sep 02 · 10 · [PATCH v2 0/7] mm/hmm/nouveau: add THP migration to migrate_vma_*
This series adds support for transparent huge page migration to
migrate_vma_*() and adds nouveau SVM and HMM selftests as consumers.
An earlier version was posted as [1]. This version now
supports splitting a THP midway through the migration process, which
led to a number of changes.
The patches apply cleanly to the current linux-mm tree. Since there
are a couple of patches in linux-mm from Dan