Displaying 20 results from an estimated 289 matches for "get_page".
2018 Sep 06
2
[PATCH net-next 04/11] tuntap: simplify error handling in tun_build_skb()
...NULL;
> struct bpf_prog *xdp_prog;
> int buflen = SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
> unsigned int delta = 0;
> @@ -1668,6 +1668,9 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
> if (copied != len)
> return ERR_PTR(-EFAULT);
>
> + get_page(alloc_frag->page);
> + alloc_frag->offset += buflen;
> +
This adds an atomic op on XDP_DROP which is a data path
operation for some workloads.
> /* There's a small window that XDP may be set after the check
> * of xdp_prog above, this should be rare and for simplicity
&...
2007 Nov 27
4
spurious warnings from get_page() via gnttab_copy() during frontend shutdown
...rious
warning messages during the shutdown of the frontend domain:
(XEN) /export/build/schuster/xvm-gate/xen.hg/xen/include/asm/mm.h:
189:d0 Error pfn 30e290: rd=ffff830000fcf100, od=ffff830000fcf100,
caf=00000000, taf=0000000000000000
(XEN) Xen call trace:
(XEN) [<ffff83000010f240>] get_page+0x107/0x1b4
(XEN) [<ffff83000010f10a>] get_page_and_type+0x21/0x50
(XEN) [<ffff8300001116c4>] __gnttab_copy+0x3f5/0x5b4
(XEN) [<ffff830000111971>] gnttab_copy+0xee/0x1c4
(XEN) [<ffff830000111dbd>] do_grant_table_op+0x376/0x3bc
(XEN) [<ffff8300001b83e2>]...
2008 Jun 12
0
[PATCH] x86: minor adjustment to asm constraint in get_page()
...t;jbeulich@novell.com>
Index: 2008-06-12/xen/arch/x86/mm.c
===================================================================
--- 2008-06-12.orig/xen/arch/x86/mm.c 2008-06-12 09:08:36.000000000 +0200
+++ 2008-06-12/xen/arch/x86/mm.c 2008-06-12 09:08:42.000000000 +0200
@@ -1706,8 +1706,8 @@ int get_page(struct page_info *page, str
return 0;
}
asm volatile (
- LOCK_PREFIX "cmpxchg8b %3"
- : "=d" (nd), "=a" (y), "=c" (d),
+ LOCK_PREFIX "cmpxchg8b %2"
+ : "=d" (nd), &...
2018 Sep 07
0
[PATCH net-next 04/11] tuntap: simplify error handling in tun_build_skb()
..._prog;
>> int buflen = SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
>> unsigned int delta = 0;
>> @@ -1668,6 +1668,9 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
>> if (copied != len)
>> return ERR_PTR(-EFAULT);
>>
>> + get_page(alloc_frag->page);
>> + alloc_frag->offset += buflen;
>> +
> This adds an atomic op on XDP_DROP which is a data path
> operation for some workloads.
Yes, I have patch on top to amortize this, the idea is to have a very
big refcount once after the frag was allocated and mai...
2006 Jul 17
5
Functional Tests misbehaving with Globalize
Howdy all
Apologies to the folks subscribed to the globalize list for dual
posting this message...
I've got a project running globalize and rails 1.1.4, and I've only
recently adopted a strong love for testing. Now my models are 100%
tested (I must note that I do not make use of any translations in the
database yet), and I've now started with functional tests before
2018 Sep 06
0
[PATCH net-next 04/11] tuntap: simplify error handling in tun_build_skb()
...t sk_buff *skb;
+ struct sk_buff *skb = NULL;
struct bpf_prog *xdp_prog;
int buflen = SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
unsigned int delta = 0;
@@ -1668,6 +1668,9 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
if (copied != len)
return ERR_PTR(-EFAULT);
+ get_page(alloc_frag->page);
+ alloc_frag->offset += buflen;
+
/* There's a small window that XDP may be set after the check
* of xdp_prog above, this should be rare and for simplicity
* we do XDP on skb in case the headroom is not enough.
@@ -1695,23 +1698,15 @@ static struct sk_buff *tun_...
2007 Oct 03
0
[PATCH 3/3] TLB flushing and IO memory mapping
...h/x86/mm.c Wed Jul 25 14:03:12 2007 +0100
@@ -594,6 +594,14 @@ get_##level##_linear_pagetable(
return 1;
\
}
+
+int iomem_page_test(unsigned long mfn, struct page_info *page)
+{
+ return unlikely(!mfn_valid(mfn)) ||
+ unlikely(page_get_owner(page) == dom_io);
+}
+
+
int
get_page_from_l1e(
l1_pgentry_t l1e, struct domain *d)
@@ -611,8 +619,7 @@ get_page_from_l1e(
return 0;
}
- if ( unlikely(!mfn_valid(mfn)) ||
- unlikely(page_get_owner(page) == dom_io) )
+ if ( iomem_page_test(mfn, page) )
{
/* DOMID_IO reverts to caller for...
2020 Sep 14
2
[PATCH] mm: remove extra ZONE_DEVICE struct page refcount
...gt; diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
>> index 84e5a2dc8be5..00d97050d7ff 100644
>> --- a/arch/powerpc/kvm/book3s_hv_uvmem.c
>> +++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
>> @@ -711,7 +711,6 @@ static struct page *kvmppc_uvmem_get_page(unsigned long gpa, struct kvm *kvm)
>>
>> dpage = pfn_to_page(uvmem_pfn);
>> dpage->zone_device_data = pvt;
>> - get_page(dpage);
>> lock_page(dpage);
>> return dpage;
>> out_clear:
>> diff --git a/driver...
2018 Nov 15
3
[PATCH net-next 1/2] vhost_net: mitigate page reference counting during page frag refill
We do a get_page() which involves an atomic operation. This patch tries
to mitigate a per packet atomic operation by maintaining a reference
bias which is initially USHRT_MAX. Each time a page is got, instead of
calling get_page() we decrease the bias and when we find it's time to
use a new page we will decrease...
2020 Sep 26
1
[PATCH 2/2] mm: remove extra ZONE_DEVICE struct page refcount
...tions(-)
>
> diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
> index 7705d5557239..e6ec98325fab 100644
> --- a/arch/powerpc/kvm/book3s_hv_uvmem.c
> +++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
> @@ -711,7 +711,7 @@ static struct page *kvmppc_uvmem_get_page(unsigned long gpa, struct kvm *kvm)
>
> dpage = pfn_to_page(uvmem_pfn);
> dpage->zone_device_data = pvt;
> - get_page(dpage);
> + init_page_count(dpage);
> lock_page(dpage);
> return dpage;
> out_clear:
> diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b...
2020 Oct 01
8
[RFC PATCH v3 0/2] mm: remove extra ZONE_DEVICE struct page refcount
...ixed() and a zero
refcount on the ZONE_DEVICE struct page. This is sort of OK because
insert_pfn() increments the reference count on the pgmap which is what
prevents memunmap_pages() from freeing the struct pages and it doesn't
check for a non-zero struct page reference count.
But, any calls to get_page() will hit the VM_BUG_ON_PAGE() that
checks for a reference count == 0.
// mmap() an ext4 file that is mounted -o dax.
ext4_dax_fault()
ext4_dax_huge_fault()
dax_iomap_fault(&ext4_iomap_ops)
dax_iomap_pte_fault()
ops->iomap_begin() // ext4_iomap_begin()
ext4_ma...
2013 Nov 14
4
[PATCH] xen/arm: Allow ballooning working with 1:1 memory mapping
...if (!mfn_valid(mfn))
+ {
+ gdprintk(XENLOG_INFO, "Invalid mfn 0x%"PRI_xen_pfn"\n",
+ mfn);
+ goto out;
+ }
+
+ page = mfn_to_page(mfn);
+ if ( !get_page(page, d) )
+ {
+ gdprintk(XENLOG_INFO,
+                     "mfn 0x%"PRI_xen_pfn" doesn't belong to dom0\n",
+ mfn);
+ goto out;
+ }
+ put_page(page);...
2005 May 11
4
Should shadow_lock be spin_lock_recursive?
During our testing, we found this code path where xen attempts to grab
the shadow_lock, while holding it - leading to a deadlock.
>> free_dom_mem->
>> shadow_sync_and_drop_references->
>> shadow_lock -> ..................... first lock
>> shadow_remove_all_access->
>> remove_all_access_in_page->
>> put_page->
>>
2020 Jun 19
0
[PATCH 13/16] mm: support THP migration to device private memory
...y)) {
+ spin_unlock(ptl);
return migrate_vma_collect_skip(start, end,
walk);
+ }
+ page = device_private_entry_to_page(entry);
+ if (is_write_device_private_entry(entry))
+ write = MIGRATE_PFN_WRITE;
} else {
- int ret;
+ spin_unlock(ptl);
+ goto again;
+ }
- get_page(page);
+ get_page(page);
+ if (unlikely(!trylock_page(page))) {
spin_unlock(ptl);
- if (unlikely(!trylock_page(page)))
- return migrate_vma_collect_skip(start, end,
- walk);
- ret = split_huge_page(page);
- unlock_page(page);
put_page(page);
- if (ret)
- return migrat...
2020 Oct 08
2
[PATCH] mm: make device private reference counts zero based
...ged, 51 insertions(+), 47 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
index 84e5a2dc8be5..a0d08b1d8c1e 100644
--- a/arch/powerpc/kvm/book3s_hv_uvmem.c
+++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
@@ -711,7 +711,7 @@ static struct page *kvmppc_uvmem_get_page(unsigned long gpa, struct kvm *kvm)
dpage = pfn_to_page(uvmem_pfn);
dpage->zone_device_data = pvt;
- get_page(dpage);
+ init_page_count(dpage);
lock_page(dpage);
return dpage;
out_clear:
@@ -1151,6 +1151,7 @@ int kvmppc_uvmem_init(void)
struct resource *res;
void *addr;
unsigned...
2020 Sep 25
6
[RFC PATCH v2 0/2] mm: remove extra ZONE_DEVICE struct page refcount
...ng these
configurations.
I have been able to successfully run xfstests on ext4 with the memmap
kernel boot option to simulate pmem.
One of the big changes in v2 is that devm_memremap_pages() and
memremap_pages() now return the struct pages' reference count set to
zero instead of one. Normally, get_page() will VM_BUG_ON_PAGE() if
page->_refcount is zero. I didn't see any such warnings running the
xfstests with dax/pmem but I'm not clear how the zero to one reference
count is handled.
Other changes in v2:
Rebased to Linux-5.9.0-rc6 to include pmem fixes.
I added patch 1 to introduce a p...
2018 Sep 07
1
[PATCH net-next 04/11] tuntap: simplify error handling in tun_build_skb()
On Fri, Sep 07, 2018 at 11:22:00AM +0800, Jason Wang wrote:
> > > @@ -1668,6 +1668,9 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
> > > if (copied != len)
> > > return ERR_PTR(-EFAULT);
> > > + get_page(alloc_frag->page);
> > > + alloc_frag->offset += buflen;
> > > +
> > This adds an atomic op on XDP_DROP which is a data path
> > operation for some workloads.
>
> Yes, I have patch on top to amortize this, the idea is to have a very big
> refcount once...
2020 Sep 14
5
[PATCH] mm: remove extra ZONE_DEVICE struct page refcount
...ed, 41 insertions(+), 142 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
index 84e5a2dc8be5..00d97050d7ff 100644
--- a/arch/powerpc/kvm/book3s_hv_uvmem.c
+++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
@@ -711,7 +711,6 @@ static struct page *kvmppc_uvmem_get_page(unsigned long gpa, struct kvm *kvm)
dpage = pfn_to_page(uvmem_pfn);
dpage->zone_device_data = pvt;
- get_page(dpage);
lock_page(dpage);
return dpage;
out_clear:
diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index a13c6215bba8..2a4bbe01a45...
2019 Jul 29
2
[PATCH 03/12] block: bio_release_pages: use flags arg instead of bool
On Tue, Jul 23, 2019 at 10:30:53PM -0700, Christoph Hellwig wrote:
> On Tue, Jul 23, 2019 at 09:25:09PM -0700, john.hubbard at gmail.com wrote:
> > From: John Hubbard <jhubbard at nvidia.com>
> >
> > In commit d241a95f3514 ("block: optionally mark pages dirty in
> > bio_release_pages"), new "bool mark_dirty" argument was added to
> >