Displaying 20 results from an estimated 620 matches for "pfns".
2016 May 18 | 4 | [PATCH] virtio_balloon: fix PFN format for virtio-1
...-
1 file changed, 12 insertions(+), 8 deletions(-)
diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index 7b6d74f..476c0e3 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -75,7 +75,7 @@ struct virtio_balloon {
/* The array of pfns we tell the Host about. */
unsigned int num_pfns;
- u32 pfns[VIRTIO_BALLOON_ARRAY_PFNS_MAX];
+ __virtio32 pfns[VIRTIO_BALLOON_ARRAY_PFNS_MAX];
/* Memory statistics */
struct virtio_balloon_stat stats[VIRTIO_BALLOON_S_NR];
@@ -127,14 +127,16 @@ static void tell_host(struct virtio_balloon *vb...
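For context, the substance of the fix: with virtio-1 the pfn array travels in the device's byte order, so each entry has to be converted with cpu_to_virtio32() when it is filled in. A minimal sketch of the resulting helper (names as in drivers/virtio/virtio_balloon.c; the exact hunk is truncated above):

/*
 * Sketch of the fixed helper: PFNs are stored in the device's byte
 * order (little-endian for virtio-1) instead of raw u32 values.
 */
static void set_page_pfns(struct virtio_balloon *vb,
                          __virtio32 pfns[], struct page *page)
{
        unsigned int i;

        /* A Linux page may cover several 4K balloon pages. */
        for (i = 0; i < VIRTIO_BALLOON_PAGES_PER_PAGE; i++)
                pfns[i] = cpu_to_virtio32(vb->vdev,
                                          page_to_balloon_pfn(page) + i);
}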
2012 Apr 12 | 3 | [PATCH 0/3] Bugfixes for virtio balloon driver
This series contains one cleanup and two bug fixes for the virtio
balloon driver.
2016 Jun 13 | 0 | [PATCH] virtio_balloon: fix PFN format for virtio-1
...eletions(-)
>
> diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
> index 7b6d74f..476c0e3 100644
> --- a/drivers/virtio/virtio_balloon.c
> +++ b/drivers/virtio/virtio_balloon.c
> @@ -75,7 +75,7 @@ struct virtio_balloon {
>
> /* The array of pfns we tell the Host about. */
> unsigned int num_pfns;
> - u32 pfns[VIRTIO_BALLOON_ARRAY_PFNS_MAX];
> + __virtio32 pfns[VIRTIO_BALLOON_ARRAY_PFNS_MAX];
>
> /* Memory statistics */
> struct virtio_balloon_stat stats[VIRTIO_BALLOON_S_NR];
> @@ -127,14 +127,16 @@ static void...
2020 Apr 22 | 0 | [PATCH hmm 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
...'t clear how anything actually could do that
as hmm_range_fault() provides CPU addresses which must be DMA mapped.
Perhaps there is some special HW that does not need DMA mapping, but we
don't have any examples of this, and the theoretical performance win of
avoiding an extra scan over the pfns array doesn't seem worth the
complexity. Plus pfns needs to be scanned anyhow to sort out any
DEVICE_PRIVATE pages.
This version replaces the uint64_t with an unsigned long containing a pfn
and fixed flags. On input flags is filled with the HMM_PFN_REQ_* values, on
successful output it is filled...
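To make the new calling convention concrete, here is a rough caller sketch against the format proposed in this series (mmap locking and mmu_interval_notifier sequence handling are elided; NPAGES, start and notifier stand in for whatever the driver already has):

/*
 * Hedged sketch of a hmm_range_fault() caller using the proposed
 * unsigned long pfn-plus-flags format. Error handling, mmap locking
 * and the notifier sequence retry loop are omitted.
 */
unsigned long pfns[NPAGES];
struct hmm_range range = {
        .notifier      = &notifier,
        .start         = start,
        .end           = start + NPAGES * PAGE_SIZE,
        .hmm_pfns      = pfns,          /* one slot per page */
        .default_flags = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_WRITE,
};

if (!hmm_range_fault(&range)) {
        /* On output each slot carries the pfn plus result flags. */
        struct page *page = hmm_pfn_to_page(pfns[0]);
        bool writable     = pfns[0] & HMM_PFN_WRITE;
}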
2020 Apr 22 | 11 | [PATCH hmm 0/5] Adjust hmm_range_fault() API
...-};
-
/*
* Data structure to track address ranges and register for mmu interval
* notifier updates.
@@ -175,7 +160,7 @@ static inline struct dmirror_device *dmirror_page_to_device(struct page *page)
static int dmirror_do_fault(struct dmirror *dmirror, struct hmm_range *range)
{
- uint64_t *pfns = range->pfns;
+ unsigned long *pfns = range->hmm_pfns;
unsigned long pfn;
for (pfn = (range->start >> PAGE_SHIFT);
@@ -188,15 +173,16 @@ static int dmirror_do_fault(struct dmirror *dmirror, struct hmm_range *range)
* Since we asked for hmm_range_fault() to populate pages,...
2017 Oct 20 | 0 | [PATCH v1 1/3] virtio-balloon: replace the coarse-grained balloon_lock
...mory;
oom_notify: release some inflated memory via leak_balloon();
leak_balloon: wait for balloon_lock to be released by fill_balloon.
This patch breaks the lock into two fine-grained locks, inflate_lock and
deflate_lock, and eliminates the unnecessary use of the shared data
(i.e. vb->pfns, vb->num_pfns). This enables leak_balloon and
fill_balloon to run concurrently and solves the deadlock issue.
Reported-by: Tetsuo Handa <penguin-kernel at I-love.SAKURA.ne.jp>
Signed-off-by: Wei Wang <wei.w.wang at intel.com>
Cc: Michael S. Tsirkin <mst at redhat.com>
Cc: Tetsuo Handa <peng...
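The shape of the change, roughly (an illustrative sketch only, not the patch itself, which is truncated above): inflate and deflate each get their own lock and their own pfn buffer, so leak_balloon(), called from the OOM notifier, never has to wait for fill_balloon():

/*
 * Illustrative sketch (field names hypothetical): per-direction locks
 * and pfn buffers instead of one balloon_lock and one shared vb->pfns.
 */
struct virtio_balloon_sketch {
        struct mutex inflate_lock;      /* serializes fill_balloon() */
        struct mutex deflate_lock;      /* serializes leak_balloon() */
        __virtio32 inflate_pfns[VIRTIO_BALLOON_ARRAY_PFNS_MAX];
        __virtio32 deflate_pfns[VIRTIO_BALLOON_ARRAY_PFNS_MAX];
        unsigned int num_inflate_pfns;
        unsigned int num_deflate_pfns;
};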
2020 Apr 22 | 1 | [PATCH hmm 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
...g actually could do that
> as hmm_range_fault() provides CPU addresses which must be DMA mapped.
>
> Perhaps there is some special HW that does not need DMA mapping, but we
> don't have any examples of this, and the theoretical performance win of
> avoiding an extra scan over the pfns array doesn't seem worth the
> complexity. Plus pfns needs to be scanned anyhow to sort out any
> DEVICE_PRIVATE pages.
>
> This version replaces the uint64_t with an unsigned long containing a pfn
> and fixed flags. On input flags is filled with the HMM_PFN_REQ_* values, on
> su...
2020 May 01 | 0 | [PATCH hmm v2 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
...'t clear how anything actually could do that
as hmm_range_fault() provides CPU addresses which must be DMA mapped.
Perhaps there is some special HW that does not need DMA mapping, but we
don't have any examples of this, and the theoretical performance win of
avoiding an extra scan over the pfns array doesn't seem worth the
complexity. Plus pfns needs to be scanned anyhow to sort out any
DEVICE_PRIVATE pages.
This version replaces the uint64_t with an unsigned long containing a pfn
and fixed flags. On input flags is filled with the HMM_PFN_REQ_* values,
on successful output it is fille...
2019 Aug 07 | 4 | [PATCH] nouveau/hmm: map pages after migration
...rm *drm,
out_free_page:
nouveau_dmem_page_free_locked(drm, dpage);
out:
+ *pfn = NVIF_VMM_PFNMAP_V0_NONE;
return 0;
}
static void nouveau_dmem_migrate_chunk(struct migrate_vma *args,
- struct nouveau_drm *drm, dma_addr_t *dma_addrs)
+ struct nouveau_drm *drm, dma_addr_t *dma_addrs, u64 *pfns)
{
struct nouveau_fence *fence;
unsigned long addr = args->start, nr_dma = 0, i;
for (i = 0; addr < args->end; i++) {
args->dst[i] = nouveau_dmem_migrate_copy_one(drm, args->vma,
- addr, args->src[i], &dma_addrs[nr_dma]);
+ args->src[i], &dma_addrs[nr_...
2017 Apr 13 | 0 | [PATCH v9 2/5] virtio-balloon: VIRTIO_BALLOON_F_BALLOON_CHUNKS
...of the previous virtio-balloon is not very
efficient, because the ballooned pages are transferred to the
host one by one. Here is the breakdown of the time in percentage
spent on each step of the balloon inflating process (inflating
7GB of an 8GB idle guest).
1) allocating pages (6.5%)
2) sending PFNs to host (68.3%)
3) address translation (6.1%)
4) madvise (19%)
It takes about 4126ms for the inflating process to complete.
The above profiling shows that the bottlenecks are stage 2)
and stage 4).
This patch optimizes step 2) by transferring pages to the host in
chunks. A chunk consists of guest...
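To see why step 2) dominates: with 4K balloon pages, inflating 7GB means sending 7 * 262144 = 1,835,008 individual PFNs one at a time. A chunk describes a whole run of contiguous pages in one entry, along the lines of the following (a hypothetical illustration; the patch's actual chunk format is truncated above):

/*
 * Hypothetical illustration of the chunk idea: one (base, count) pair
 * replaces 'npages' individual PFN entries when pages are contiguous.
 */
struct balloon_pfn_chunk {
        __le64 base_pfn;        /* first balloon-page frame number */
        __le64 npages;          /* number of contiguous balloon pages */
};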
2012 Apr 13 | 0 | [PATCH] virtio_balloon: fix handling of PAGE_SIZE != 4k
As reported by David Gibson, the current code handles PAGE_SIZE != 4k
completely wrong, which can lead to guest memory corruption:
- page_to_balloon_pfn is wrong: e.g. on a system with a 64K page size
it gives the same pfn value for 16 different pages.
- we also need to convert back to linux pfns when we free.
- for each linux page we need to tell the host about multiple balloon
pages, but the code only adds one pfn to the array.
Signed-off-by: David Gibson <david at gibson.dropbear.id.au>
Signed-off-by: Michael S. Tsirkin <mst at redhat.com>
Signed-off-by: David Gibson <david at...
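The root of the problem is that the balloon protocol always counts in 4K units (VIRTIO_BALLOON_PFN_SHIFT), whatever the guest's PAGE_SIZE, so one 64K Linux page corresponds to 16 balloon pfns. The conversion ends up looking roughly like this in drivers/virtio/virtio_balloon.c:

#define VIRTIO_BALLOON_PAGES_PER_PAGE (unsigned)(PAGE_SIZE >> VIRTIO_BALLOON_PFN_SHIFT)

/*
 * Balloon pages are always 4K, so a Linux page maps to
 * VIRTIO_BALLOON_PAGES_PER_PAGE consecutive balloon pfns.
 */
static u32 page_to_balloon_pfn(struct page *page)
{
        unsigned long pfn = page_to_pfn(page);

        BUILD_BUG_ON(PAGE_SHIFT < VIRTIO_BALLOON_PFN_SHIFT);
        /* Convert pfn from Linux page size to balloon page size. */
        return pfn * VIRTIO_BALLOON_PAGES_PER_PAGE;
}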
2012 Jul 04 | 3 | [PATCH] xen: populate correct number of pages when across mem boundary
...setup.c
+++ b/arch/x86/xen/setup.c
@@ -157,50 +157,48 @@ static unsigned long __init xen_populate_chunk(
unsigned long dest_pfn;
for (i = 0, entry = list; i < map_size; i++, entry++) {
- unsigned long credits = credits_left;
unsigned long s_pfn;
unsigned long e_pfn;
unsigned long pfns;
long capacity;
- if (credits <= 0)
+ if (credits_left <= 0)
break;
if (entry->type != E820_RAM)
continue;
- e_pfn = PFN_UP(entry->addr + entry->size);
+ e_pfn = PFN_DOWN(entry->addr + entry->size);
/* We only care about E820 after the xen_start_inf...
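The key change is the rounding at the end of an E820 RAM entry: rounding up (PFN_UP) counts a partial page past the region boundary as populatable, while rounding down (PFN_DOWN) stops at the last fully usable page. For reference, the macros behave as follows:

/*
 * From include/linux/pfn.h:
 *   PFN_UP(x)   == ((x + PAGE_SIZE - 1) >> PAGE_SHIFT)   round up
 *   PFN_DOWN(x) == (x >> PAGE_SHIFT)                      round down
 *
 * Example with 4K pages and a RAM entry ending at 0x1800:
 *   PFN_UP(0x1800)   == 2   (includes the partial page)
 *   PFN_DOWN(0x1800) == 1   (stops at the last whole page)
 * so the end pfn of a region must use PFN_DOWN, as the patch does.
 */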
2020 Mar 03 | 2 | [PATCH v2] nouveau/hmm: map pages after migration
...m *drm,
out_free_page:
nouveau_dmem_page_free_locked(drm, dpage);
out:
+ *pfn = NVIF_VMM_PFNMAP_V0_NONE;
return 0;
}
static void nouveau_dmem_migrate_chunk(struct nouveau_drm *drm,
- struct migrate_vma *args, dma_addr_t *dma_addrs)
+ struct migrate_vma *args, dma_addr_t *dma_addrs, u64 *pfns)
{
struct nouveau_fence *fence;
unsigned long addr = args->start, nr_dma = 0, i;
for (i = 0; addr < args->end; i++) {
args->dst[i] = nouveau_dmem_migrate_copy_one(drm, args->src[i],
- dma_addrs + nr_dma);
+ dma_addrs + nr_dma, pfns + i);
if (args->dst[i])...
2017 Mar 16 | 0 | [PATCH kernel v8 2/4] virtio-balloon: VIRTIO_BALLOON_F_CHUNK_TRANSFER
...n of the current virtio-balloon is not very
efficient, because the ballooned pages are transferred to the
host one by one. Here is the breakdown of the time in percentage
spent on each step of the balloon inflating process (inflating
7GB of an 8GB idle guest).
1) allocating pages (6.5%)
2) sending PFNs to host (68.3%)
3) address translation (6.1%)
4) madvise (19%)
It takes about 4126ms for the inflating process to complete.
The above profiling shows that the bottlenecks are stage 2)
and stage 4).
This patch optimizes step 2) by transferring pages to the host in
chunks. A chunk consists of guest...
2019 Aug 13 | 0 | [PATCH] nouveau/hmm: map pages after migration
...mpbell at nvidia.com>
> Cc: Christoph Hellwig <hch at lst.de>
> Cc: Jason Gunthorpe <jgg at mellanox.com>
> Cc: "Jérôme Glisse" <jglisse at redhat.com>
> Cc: Ben Skeggs <bskeggs at redhat.com>
Sorry for the delay, I am swamped; a couple of issues:
- nouveau_pfns_map() is never called, it should be called after
the dma copy is done (IIRC it is lacking proper fencing,
so that would need to be implemented first)
- the migrate ioctl is disconnected from the svm part and
thus we would first need to implement svm reference counting
and ta...
2014 Sep 25 | 2 | [PATCH] virtio_balloon: Convert "vballon" kthread into a workqueue
...The thread servicing the balloon. */
- struct task_struct *thread;
+ /* The workqueue servicing the balloon. */
+ struct workqueue_struct *wq;
+ struct work_struct wq_work;
/* Waiting for host to ack the pages we released. */
wait_queue_head_t acked;
@@ -125,12 +122,15 @@ static void set_page_pfns(u32 pfns[], struct page *page)
pfns[i] = page_to_balloon_pfn(page) + i;
}
-static void fill_balloon(struct virtio_balloon *vb, size_t num)
+static void fill_balloon(struct virtio_balloon *vb, size_t diff)
{
struct balloon_dev_info *vb_dev_info = vb->vb_dev_info;
+ size_t num;
+ bool don...
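For readers unfamiliar with the conversion, the general kthread-to-workqueue pattern looks like this (a generic sketch: the wq/wq_work fields come from the hunk above, the helper names are illustrative and not the patch's):

/*
 * Generic kthread -> workqueue sketch. Instead of a dedicated thread
 * sleeping on a waitqueue, balloon work is queued whenever the host
 * changes the target.
 */
static void balloon_work_func(struct work_struct *work)
{
        struct virtio_balloon *vb =
                container_of(work, struct virtio_balloon, wq_work);

        /* inflate or deflate towards the host's requested target */
}

static int balloon_init_wq(struct virtio_balloon *vb)
{
        vb->wq = alloc_workqueue("virtio_balloon", WQ_FREEZABLE, 0);
        if (!vb->wq)
                return -ENOMEM;
        INIT_WORK(&vb->wq_work, balloon_work_func);
        return 0;
}

static void virtballoon_changed(struct virtio_device *vdev)
{
        struct virtio_balloon *vb = vdev->priv;

        /* config-changed interrupt: kick the work item */
        queue_work(vb->wq, &vb->wq_work);
}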