Daniel De Graaf
2011-Oct-18 20:26 UTC
[Xen-devel] [PATCH 0/5] xen/{net, blk}back support for running in HVM
In HVM domains (or, to be exact, whenever XENFEAT_auto_translated_physmap is enabled) it is not valid to ask the hypervisor to set up a grant mapping at a PFN that already refers to a valid page. The balloon driver provides alloc_xenballooned_pages to obtain PFNs that are not currently backed by valid pages and are therefore suitable targets for grant mappings; use this function when allocating pages for grant mappings.

This has been tested with a PV domain using block and network devices exported by an HVM domain.

[PATCH 1/5] xen/netback: Use xenballooned pages for comms
[PATCH 2/5] xen/netback: Enable netback on HVM guests
[PATCH 3/5] xen/blkback: Use xenballooned pages for mapped areas
[PATCH 4/5] xen/blkback: don't add m2p overrides when using autotranslated physmap
[PATCH 5/5] xen/blkback: Enable blkback on HVM guests
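The pattern used throughout the series is roughly the following (an illustrative sketch only, written against the two-argument alloc_xenballooned_pages() form these patches use; map_ring_page() is just an example name, not something added by the series):

#include <linux/mm.h>
#include <xen/balloon.h>
#include <xen/grant_table.h>
#include <asm/xen/hypercall.h>
#include <asm/xen/page.h>

static int map_ring_page(domid_t otherend, grant_ref_t gref,
                         struct page **page, grant_handle_t *handle)
{
        struct gnttab_map_grant_ref op;
        void *addr;

        /* A ballooned page is a valid GFN with no machine frame behind
         * it, so Xen is free to point it at the granted frame. */
        if (alloc_xenballooned_pages(1, page))
                return -ENOMEM;

        addr = pfn_to_kaddr(page_to_pfn(*page));
        gnttab_set_map_op(&op, (unsigned long)addr, GNTMAP_host_map,
                          gref, otherend);

        if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
                BUG();
        if (op.status != GNTST_okay) {
                free_xenballooned_pages(1, page);
                return -EINVAL;
        }

        *handle = op.handle;
        return 0;
}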
Daniel De Graaf
2011-Oct-18 20:26 UTC
[Xen-devel] [PATCH 1/5] xen/netback: Use xenballooned pages for comms
For proper grant mappings, HVM guests require pages allocated using alloc_xenballooned_pages instead of alloc_vm_area. Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov> --- drivers/net/xen-netback/common.h | 4 ++-- drivers/net/xen-netback/netback.c | 34 ++++++++++++++++++++-------------- 2 files changed, 22 insertions(+), 16 deletions(-) diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h index 161f207..d5ee9d1 100644 --- a/drivers/net/xen-netback/common.h +++ b/drivers/net/xen-netback/common.h @@ -70,8 +70,8 @@ struct xenvif { /* The shared rings and indexes. */ struct xen_netif_tx_back_ring tx; struct xen_netif_rx_back_ring rx; - struct vm_struct *tx_comms_area; - struct vm_struct *rx_comms_area; + struct page *tx_comms_page; + struct page *rx_comms_page; /* Frontend feature information. */ u8 can_sg:1; diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c index fd00f25..f35e07c 100644 --- a/drivers/net/xen-netback/netback.c +++ b/drivers/net/xen-netback/netback.c @@ -42,6 +42,7 @@ #include <xen/events.h> #include <xen/interface/memory.h> +#include <xen/balloon.h> #include <asm/xen/hypercall.h> #include <asm/xen/page.h> @@ -1578,9 +1579,11 @@ static int xen_netbk_kthread(void *data) void xen_netbk_unmap_frontend_rings(struct xenvif *vif) { struct gnttab_unmap_grant_ref op; + void *addr; if (vif->tx.sring) { - gnttab_set_unmap_op(&op, (unsigned long)vif->tx_comms_area->addr, + addr = pfn_to_kaddr(page_to_pfn(vif->tx_comms_page)); + gnttab_set_unmap_op(&op, (unsigned long)addr, GNTMAP_host_map, vif->tx_shmem_handle); if (HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, &op, 1)) @@ -1588,16 +1591,17 @@ void xen_netbk_unmap_frontend_rings(struct xenvif *vif) } if (vif->rx.sring) { - gnttab_set_unmap_op(&op, (unsigned long)vif->rx_comms_area->addr, + addr = pfn_to_kaddr(page_to_pfn(vif->rx_comms_page)); + gnttab_set_unmap_op(&op, (unsigned long)addr, GNTMAP_host_map, vif->rx_shmem_handle); if (HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, &op, 1)) BUG(); } - if (vif->rx_comms_area) - free_vm_area(vif->rx_comms_area); - if (vif->tx_comms_area) - free_vm_area(vif->tx_comms_area); + if (vif->rx_comms_page) + free_xenballooned_pages(1, &vif->rx_comms_page); + if (vif->tx_comms_page) + free_xenballooned_pages(1, &vif->tx_comms_page); } int xen_netbk_map_frontend_rings(struct xenvif *vif, @@ -1610,15 +1614,19 @@ int xen_netbk_map_frontend_rings(struct xenvif *vif, int err = -ENOMEM; - vif->tx_comms_area = alloc_vm_area(PAGE_SIZE); - if (vif->tx_comms_area == NULL) + if (alloc_xenballooned_pages(1, &vif->tx_comms_page)) goto err; - vif->rx_comms_area = alloc_vm_area(PAGE_SIZE); - if (vif->rx_comms_area == NULL) + txs = (struct xen_netif_tx_sring *)pfn_to_kaddr(page_to_pfn( + vif->tx_comms_page)); + + if (alloc_xenballooned_pages(1, &vif->rx_comms_page)) goto err; - gnttab_set_map_op(&op, (unsigned long)vif->tx_comms_area->addr, + rxs = (struct xen_netif_rx_sring *)pfn_to_kaddr(page_to_pfn( + vif->rx_comms_page)); + + gnttab_set_map_op(&op, (unsigned long)txs, GNTMAP_host_map, tx_ring_ref, vif->domid); if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1)) @@ -1635,10 +1643,9 @@ int xen_netbk_map_frontend_rings(struct xenvif *vif, vif->tx_shmem_ref = tx_ring_ref; vif->tx_shmem_handle = op.handle; - txs = (struct xen_netif_tx_sring *)vif->tx_comms_area->addr; BACK_RING_INIT(&vif->tx, txs, PAGE_SIZE); - gnttab_set_map_op(&op, (unsigned long)vif->rx_comms_area->addr, + gnttab_set_map_op(&op, (unsigned long)rxs, 
GNTMAP_host_map, rx_ring_ref, vif->domid); if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1)) @@ -1656,7 +1663,6 @@ int xen_netbk_map_frontend_rings(struct xenvif *vif, vif->rx_shmem_handle = op.handle; vif->rx_req_cons_peek = 0; - rxs = (struct xen_netif_rx_sring *)vif->rx_comms_area->addr; BACK_RING_INIT(&vif->rx, rxs, PAGE_SIZE); return 0; -- 1.7.6.4 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
Daniel De Graaf
2011-Oct-18 20:26 UTC
[Xen-devel] [PATCH 2/5] xen/netback: Enable netback on HVM guests
Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 drivers/net/xen-netback/netback.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index f35e07c..38bfd34 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -1678,7 +1678,7 @@ static int __init netback_init(void)
        int rc = 0;
        int group;

-       if (!xen_pv_domain())
+       if (!xen_domain())
                return -ENODEV;

        xen_netbk_group_nr = num_online_cpus();
--
1.7.6.4
Daniel De Graaf
2011-Oct-18 20:26 UTC
[Xen-devel] [PATCH 3/5] xen/blkback: Use xenballooned pages for mapped areas
For proper grant mappings, HVM guests require pages allocated using alloc_xenballooned_pages instead of alloc_page or alloc_vm_area. Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov> --- drivers/block/xen-blkback/blkback.c | 20 +++++++++----------- drivers/block/xen-blkback/common.h | 2 +- drivers/block/xen-blkback/xenbus.c | 22 ++++++++++++---------- 3 files changed, 22 insertions(+), 22 deletions(-) diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c index 2330a9a..a0d3cbd 100644 --- a/drivers/block/xen-blkback/blkback.c +++ b/drivers/block/xen-blkback/blkback.c @@ -42,6 +42,7 @@ #include <xen/events.h> #include <xen/page.h> +#include <xen/balloon.h> #include <asm/xen/hypervisor.h> #include <asm/xen/hypercall.h> #include "common.h" @@ -778,14 +779,14 @@ static int __init xen_blkif_init(void) goto out_of_memory; } - for (i = 0; i < mmap_pages; i++) { - blkbk->pending_grant_handles[i] = BLKBACK_INVALID_HANDLE; - blkbk->pending_pages[i] = alloc_page(GFP_KERNEL); - if (blkbk->pending_pages[i] == NULL) { - rc = -ENOMEM; - goto out_of_memory; - } + if (alloc_xenballooned_pages(mmap_pages, blkbk->pending_pages)) { + rc = -ENOMEM; + goto out_of_memory; } + + for (i = 0; i < mmap_pages; i++) + blkbk->pending_grant_handles[i] = BLKBACK_INVALID_HANDLE; + rc = xen_blkif_interface_init(); if (rc) goto failed_init; @@ -812,10 +813,7 @@ static int __init xen_blkif_init(void) kfree(blkbk->pending_reqs); kfree(blkbk->pending_grant_handles); if (blkbk->pending_pages) { - for (i = 0; i < mmap_pages; i++) { - if (blkbk->pending_pages[i]) - __free_page(blkbk->pending_pages[i]); - } + free_xenballooned_pages(mmap_pages, blkbk->pending_pages); kfree(blkbk->pending_pages); } kfree(blkbk); diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h index 00c57c9..944857e 100644 --- a/drivers/block/xen-blkback/common.h +++ b/drivers/block/xen-blkback/common.h @@ -139,7 +139,7 @@ struct xen_blkif { /* Comms information. */ enum blkif_protocol blk_protocol; union blkif_back_rings blk_rings; - struct vm_struct *blk_ring_area; + struct page *blk_ring_page; /* The VBD attached to this interface. */ struct xen_vbd vbd; /* Back pointer to the backend_info. 
*/ diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c index 5fd2010..49acc17 100644 --- a/drivers/block/xen-blkback/xenbus.c +++ b/drivers/block/xen-blkback/xenbus.c @@ -17,6 +17,7 @@ #include <stdarg.h> #include <linux/module.h> #include <linux/kthread.h> +#include <xen/balloon.h> #include <xen/events.h> #include <xen/grant_table.h> #include "common.h" @@ -123,8 +124,9 @@ static struct xen_blkif *xen_blkif_alloc(domid_t domid) static int map_frontend_page(struct xen_blkif *blkif, unsigned long shared_page) { struct gnttab_map_grant_ref op; + void *addr = pfn_to_kaddr(page_to_pfn(blkif->blk_ring_page)); - gnttab_set_map_op(&op, (unsigned long)blkif->blk_ring_area->addr, + gnttab_set_map_op(&op, (unsigned long)addr, GNTMAP_host_map, shared_page, blkif->domid); if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1)) @@ -144,8 +146,9 @@ static int map_frontend_page(struct xen_blkif *blkif, unsigned long shared_page) static void unmap_frontend_page(struct xen_blkif *blkif) { struct gnttab_unmap_grant_ref op; + void *addr = pfn_to_kaddr(page_to_pfn(blkif->blk_ring_page)); - gnttab_set_unmap_op(&op, (unsigned long)blkif->blk_ring_area->addr, + gnttab_set_unmap_op(&op, (unsigned long)addr, GNTMAP_host_map, blkif->shmem_handle); if (HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, &op, 1)) @@ -161,13 +164,12 @@ static int xen_blkif_map(struct xen_blkif *blkif, unsigned long shared_page, if (blkif->irq) return 0; - blkif->blk_ring_area = alloc_vm_area(PAGE_SIZE); - if (!blkif->blk_ring_area) + if (alloc_xenballooned_pages(1, &blkif->blk_ring_page)) return -ENOMEM; err = map_frontend_page(blkif, shared_page); if (err) { - free_vm_area(blkif->blk_ring_area); + free_xenballooned_pages(1, &blkif->blk_ring_page); return err; } @@ -175,21 +177,21 @@ static int xen_blkif_map(struct xen_blkif *blkif, unsigned long shared_page, case BLKIF_PROTOCOL_NATIVE: { struct blkif_sring *sring; - sring = (struct blkif_sring *)blkif->blk_ring_area->addr; + sring = pfn_to_kaddr(page_to_pfn(blkif->blk_ring_page)); BACK_RING_INIT(&blkif->blk_rings.native, sring, PAGE_SIZE); break; } case BLKIF_PROTOCOL_X86_32: { struct blkif_x86_32_sring *sring_x86_32; - sring_x86_32 = (struct blkif_x86_32_sring *)blkif->blk_ring_area->addr; + sring_x86_32 = pfn_to_kaddr(page_to_pfn(blkif->blk_ring_page)); BACK_RING_INIT(&blkif->blk_rings.x86_32, sring_x86_32, PAGE_SIZE); break; } case BLKIF_PROTOCOL_X86_64: { struct blkif_x86_64_sring *sring_x86_64; - sring_x86_64 = (struct blkif_x86_64_sring *)blkif->blk_ring_area->addr; + sring_x86_64 = pfn_to_kaddr(page_to_pfn(blkif->blk_ring_page)); BACK_RING_INIT(&blkif->blk_rings.x86_64, sring_x86_64, PAGE_SIZE); break; } @@ -202,7 +204,7 @@ static int xen_blkif_map(struct xen_blkif *blkif, unsigned long shared_page, "blkif-backend", blkif); if (err < 0) { unmap_frontend_page(blkif); - free_vm_area(blkif->blk_ring_area); + free_xenballooned_pages(1, &blkif->blk_ring_page); blkif->blk_rings.common.sring = NULL; return err; } @@ -229,7 +231,7 @@ static void xen_blkif_disconnect(struct xen_blkif *blkif) if (blkif->blk_rings.common.sring) { unmap_frontend_page(blkif); - free_vm_area(blkif->blk_ring_area); + free_xenballooned_pages(1, &blkif->blk_ring_page); blkif->blk_rings.common.sring = NULL; } } -- 1.7.6.4 _______________________________________________ Xen-devel mailing list Xen-devel@lists.xensource.com http://lists.xensource.com/xen-devel
Daniel De Graaf
2011-Oct-18 20:26 UTC
[Xen-devel] [PATCH 4/5] xen/blkback: don't add m2p overrides when using autotranslated physmap
This is the same logic as used in grant-table.c, which blkback bypasses.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 drivers/block/xen-blkback/blkback.c |   13 ++++++++++---
 1 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index a0d3cbd..d8232e7 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -336,6 +336,10 @@ static void xen_blkbk_unmap(struct pending_req *req)
        ret = HYPERVISOR_grant_table_op(
                GNTTABOP_unmap_grant_ref, unmap, invcount);
        BUG_ON(ret);
+
+       if (xen_feature(XENFEAT_auto_translated_physmap))
+               return;
+
        /*
         * Note, we use invcount, so nr->pages, so we can't index
         * using vaddr(req, i).
@@ -396,6 +400,12 @@ static int xen_blkbk_map(struct blkif_request *req,
                if (ret)
                        continue;

+               seg[i].buf = map[i].dev_bus_addr |
+                       (req->u.rw.seg[i].first_sect << 9);
+
+               if (xen_feature(XENFEAT_auto_translated_physmap))
+                       continue;
+
                ret = m2p_add_override(PFN_DOWN(map[i].dev_bus_addr),
                        blkbk->pending_page(pending_req, i), false);
                if (ret) {
@@ -404,9 +414,6 @@ static int xen_blkbk_map(struct blkif_request *req,
                        /* We could switch over to GNTTABOP_copy */
                        continue;
                }
-
-               seg[i].buf = map[i].dev_bus_addr |
-                       (req->u.rw.seg[i].first_sect << 9);
        }
        return ret;
 }
--
1.7.6.4
Daniel De Graaf
2011-Oct-18 20:26 UTC
[Xen-devel] [PATCH 5/5] xen/blkback: Enable blkback on HVM guests
Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 drivers/block/xen-blkback/blkback.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index d8232e7..7456749 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -762,7 +762,7 @@ static int __init xen_blkif_init(void)
        int i, mmap_pages;
        int rc = 0;

-       if (!xen_pv_domain())
+       if (!xen_domain())
                return -ENODEV;

        blkbk = kzalloc(sizeof(struct xen_blkbk), GFP_KERNEL);
--
1.7.6.4
Ian Campbell
2011-Oct-19 09:04 UTC
Re: [Xen-devel] [PATCH 1/5] xen/netback: Use xenballooned pages for comms
On Tue, 2011-10-18 at 21:26 +0100, Daniel De Graaf wrote:
> For proper grant mappings, HVM guests require pages allocated using
> alloc_xenballooned_pages instead of alloc_vm_area.
>
> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
[...]
> diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
> index 161f207..d5ee9d1 100644
> --- a/drivers/net/xen-netback/common.h
> +++ b/drivers/net/xen-netback/common.h
> @@ -70,8 +70,8 @@ struct xenvif {
>        /* The shared rings and indexes. */
>        struct xen_netif_tx_back_ring tx;
>        struct xen_netif_rx_back_ring rx;
> -      struct vm_struct *tx_comms_area;
> -      struct vm_struct *rx_comms_area;
> +      struct page *tx_comms_page;
> +      struct page *rx_comms_page;

This will conflict with David Vrabel's patch "net: xen-netback: use API
provided by xenbus module to map rings", which I've just noticed hasn't
been committed anywhere.

I suspect that building on David's patches (that series does something
similar to blkback too) will greatly simplify this one since you can
just patch xenbus_map_ring_valloc and friends.

Could you also explain where the requirement to use xenballooned pages
and not alloc_vm_area comes from in your commit message.

David, I guess you should resend your series now that everyone is happy
with it. If you cc the netback one to netdev@ with my Ack then Dave
Miller will pick it up into his tree (it stands alone, right?). The
blkback and grant-table ones go via Konrad I think. I suspect the last
one needs to go via akpm, or at least with his Ack.

[...]
Ian Campbell
2011-Oct-19 09:10 UTC
Re: [Xen-devel] [PATCH 4/5] xen/blkback: don't add m2p overrides when using autotranslated physmap
On Tue, 2011-10-18 at 21:26 +0100, Daniel De Graaf wrote:
> This is the same logic as used in grant-table.c, which blkback bypasses.

It would be better to make blkback use grant-table.c.

Ian.

[...]
David Vrabel
2011-Oct-19 10:39 UTC
Re: [Xen-devel] [PATCH 1/5] xen/netback: Use xenballooned pages for comms
On 19/10/11 10:04, Ian Campbell wrote:
> On Tue, 2011-10-18 at 21:26 +0100, Daniel De Graaf wrote:
>> For proper grant mappings, HVM guests require pages allocated using
>> alloc_xenballooned_pages instead of alloc_vm_area.
>>
>> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
[...]
> This will conflict with David Vrabel's patch "net: xen-netback: use API
> provided by xenbus module to map rings", which I've just noticed hasn't
> been committed anywhere.
>
> I suspect that building on David's patches (that series does something
> similar to blkback too) will greatly simplify this one since you can
> just patch xenbus_map_ring_valloc and friends.
>
> Could you also explain where the requirement to use xenballooned pages
> and not alloc_vm_area comes from in your commit message.
>
> David, I guess you should resend your series now that everyone is happy
> with it. If you cc the netback one to netdev@ with my Ack then Dave
> Miller will pick it up into his tree (it stands alone, right?). The
> blkback and grant-table ones go via Konrad I think. I suspect the last
> one needs to go via akpm, or at least with his Ack.

I thought Konrad had picked them all up -- they were on his stuff queued
for 3.2 list.

[...]
Daniel De Graaf
2011-Oct-19 15:01 UTC
Re: [Xen-devel] [PATCH 1/5] xen/netback: Use xenballooned pages for comms
On 10/19/2011 05:04 AM, Ian Campbell wrote:
> On Tue, 2011-10-18 at 21:26 +0100, Daniel De Graaf wrote:
>> For proper grant mappings, HVM guests require pages allocated using
>> alloc_xenballooned_pages instead of alloc_vm_area.
>>
>> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
[...]
> This will conflict with David Vrabel's patch "net: xen-netback: use API
> provided by xenbus module to map rings", which I've just noticed hasn't
> been committed anywhere.
>
> I suspect that building on David's patches (that series does something
> similar to blkback too) will greatly simplify this one since you can
> just patch xenbus_map_ring_valloc and friends.

Looks like that should be possible; I didn't see that there was already
an attempt to centralize the mappings. It seems like the best place to
modify is xen_alloc_vm_area, which should be used in place of
alloc_vm_area for grant mappings. On HVM, this area needs valid PFNs
allocated in the guest, which are allocated from the balloon driver.

> Could you also explain where the requirement to use xenballooned pages
> and not alloc_vm_area comes from in your commit message.

(Will move to commit message). In PV guests, it is sufficient to only
reserve kernel address space for grant mappings because Xen modifies the
mappings directly. HVM guests require that Xen modify the GFN-to-MFN
mapping, so the pages being remapped must already be allocated. Pages
obtained from alloc_xenballooned_pages have valid GFNs not currently
mapped to an MFN, so are available to be used in grant mappings.
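To illustrate the difference (a rough, untested sketch rather than anything
taken from the patch; the helper names are made up):

#include <linux/vmalloc.h>
#include <xen/balloon.h>
#include <asm/xen/page.h>

/* PV: reserving kernel virtual address space is enough; Xen fills in the
 * PTE itself during GNTTABOP_map_grant_ref (GNTMAP_contains_pte). */
static void *grant_map_target_pv(struct vm_struct **area)
{
        *area = alloc_vm_area(PAGE_SIZE);
        return *area ? (*area)->addr : NULL;
}

/* HVM: Xen rewrites the GFN-to-MFN entry, so the target must be a real
 * GFN that currently has no MFN behind it; the balloon driver provides
 * exactly that. */
static void *grant_map_target_hvm(struct page **page)
{
        if (alloc_xenballooned_pages(1, page))
                return NULL;
        return pfn_to_kaddr(page_to_pfn(*page));
}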
Ian Campbell
2011-Oct-19 15:12 UTC
Re: [Xen-devel] [PATCH 1/5] xen/netback: Use xenballooned pages for comms
On Wed, 2011-10-19 at 11:39 +0100, David Vrabel wrote:
> On 19/10/11 10:04, Ian Campbell wrote:
[...]
> > David, I guess you should resend your series now that everyone is happy
> > with it. If you cc the netback one to netdev@ with my Ack then Dave
> > Miller will pick it up into his tree (it stands alone, right?). The
> > blkback and grant-table ones go via Konrad I think. I suspect the last
> > one needs to go via akpm, or at least with his Ack.
>
> I thought Konrad had picked them all up -- they were on his stuff queued
> for 3.2 list.

Perhaps; git://oss.oracle.com doesn't seem to be responding so I can't
tell. In any case the netback stuff ought to go via David Miller.

Ian.

[...]
David Vrabel
2011-Oct-19 16:32 UTC
Re: [Xen-devel] [PATCH 1/5] xen/netback: Use xenballooned pages for comms
On 19/10/11 16:01, Daniel De Graaf wrote:
> On 10/19/2011 05:04 AM, Ian Campbell wrote:
>> On Tue, 2011-10-18 at 21:26 +0100, Daniel De Graaf wrote:
>>> For proper grant mappings, HVM guests require pages allocated using
>>> alloc_xenballooned_pages instead of alloc_vm_area.
>>>
>>> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
>>> ---
[...]
>> Could you also explain where the requirement to use xenballooned pages
>> and not alloc_vm_area comes from in your commit message.
>
> (Will move to commit message). In PV guests, it is sufficient to only
> reserve kernel address space for grant mappings because Xen modifies the
> mappings directly. HVM guests require that Xen modify the GFN-to-MFN
> mapping, so the pages being remapped must already be allocated. Pages
> obtained from alloc_xenballooned_pages have valid GFNs not currently
> mapped to an MFN, so are available to be used in grant mappings.

Why doesn't (or can't?) Xen add new entries to the GFN-to-MFN map? Or
why hasn't it reserved a range of GFNs in the map for this?

David
Daniel De Graaf
2011-Oct-19 16:56 UTC
Re: [Xen-devel] [PATCH 1/5] xen/netback: Use xenballooned pages for comms
On 10/19/2011 12:32 PM, David Vrabel wrote:
> On 19/10/11 16:01, Daniel De Graaf wrote:
[...]
> Why doesn't (or can't?) Xen add new entries to the GFN-to-MFN map? Or
> why hasn't it reserved a range of GFNs in the map for this?

That would be another way for Xen to solve this, but it would require
that the reserved GFN range be large enough for all mappings the guest
does, and the range would also need to be managed by the hypervisor. By
allowing the guest to specify which GFN to remap in the grant operation,
the guest can use any of the memory that was either not populated at
startup or was returned to the hypervisor via XENMEM_decrease_reservation.

For 32-bit guests with highmem, this also allows the guest to choose
whether the grant-mapped pages are in high or low memory, rather than
forcing them to be wherever the reserved GFN range ended up. Such a
change would also break API compatibility, since the guest would need to
read GFNs back from the grant operation and map those GFNs.

--
Daniel De Graaf
National Security Agency
Daniel De Graaf
2011-Oct-20 15:35 UTC
[Xen-devel] [PATCH v2 0/6] xen/{net, blk}back support for running in HVM
Changes from v1:
 - Based on Konrad's testing branch (includes David's patches)
 - Grant table wrapper functions used where appropriate

[PATCH 1/6] xenbus: Support HVM backends
[PATCH 2/6] xenbus: Use grant-table wrapper functions
[PATCH 3/6] xen/grant-table: Support mappings required by blkback
[PATCH 4/6] xen/blkback: use grant-table.c hypercall wrappers
[PATCH 5/6] xen/netback: Enable netback on HVM guests
[PATCH 6/6] xen/blkback: Enable blkback on HVM guests
Daniel De Graaf
2011-Oct-20 15:35 UTC
[Xen-devel] [PATCH 1/6] xenbus: Support HVM backends
Add HVM implementations of xenbus_(map,unmap)_ring_v(alloc,free) so that
ring mappings can be done without using GNTMAP_contains_pte, which is not
supported on HVM.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 drivers/xen/xenbus/xenbus_client.c |  153 +++++++++++++++++++++++++++++-------
 1 files changed, 123 insertions(+), 30 deletions(-)

diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
index 52bc57f..534e744 100644
--- a/drivers/xen/xenbus/xenbus_client.c
+++ b/drivers/xen/xenbus/xenbus_client.c
@@ -32,15 +32,26 @@

 #include <linux/slab.h>
 #include <linux/types.h>
+#include <linux/spinlock.h>
 #include <linux/vmalloc.h>
 #include <asm/xen/hypervisor.h>
 #include <asm/xen/page.h>
 #include <xen/interface/xen.h>
 #include <xen/interface/event_channel.h>
+#include <xen/balloon.h>
 #include <xen/events.h>
 #include <xen/grant_table.h>
 #include <xen/xenbus.h>

+struct xenbus_map_node {
+       struct list_head next;
+       struct page *page;
+       grant_handle_t handle;
+};
+
+static DEFINE_SPINLOCK(xenbus_valloc_lock);
+static LIST_HEAD(xenbus_valloc_pages);
+
 const char *xenbus_strstate(enum xenbus_state state)
 {
        static const char *const name[] = {
@@ -419,21 +430,8 @@ int xenbus_free_evtchn(struct xenbus_device *dev, int port)
 EXPORT_SYMBOL_GPL(xenbus_free_evtchn);


-/**
- * xenbus_map_ring_valloc
- * @dev: xenbus device
- * @gnt_ref: grant reference
- * @vaddr: pointer to address to be filled out by mapping
- *
- * Based on Rusty Russell's skeleton driver's map_page.
- * Map a page of memory into this domain from another domain's grant table.
- * xenbus_map_ring_valloc allocates a page of virtual address space, maps the
- * page to that address, and sets *vaddr to that address.
- * Returns 0 on success, and GNTST_* (see xen/include/interface/grant_table.h)
- * or -ENOMEM on error. If an error is returned, device will switch to
- * XenbusStateClosing and the error message will be saved in XenStore.
- */
-int xenbus_map_ring_valloc(struct xenbus_device *dev, int gnt_ref, void **vaddr)
+static int xenbus_map_ring_valloc_pv(struct xenbus_device *dev,
+                                     int gnt_ref, void **vaddr)
 {
        struct gnttab_map_grant_ref op = {
                .flags = GNTMAP_host_map | GNTMAP_contains_pte,
@@ -468,6 +466,64 @@ int xenbus_map_ring_valloc(struct xenbus_device *dev, int gnt_ref, void **vaddr)
        *vaddr = area->addr;
        return 0;
 }
+
+static int xenbus_map_ring_valloc_hvm(struct xenbus_device *dev,
+                                      int gnt_ref, void **vaddr)
+{
+       struct xenbus_map_node *node;
+       int err;
+       void *addr;
+
+       *vaddr = NULL;
+
+       node = kzalloc(sizeof(*node), GFP_KERNEL);
+       if (!node)
+               return -ENOMEM;
+
+       err = alloc_xenballooned_pages(1, &node->page, false);
+       if (err)
+               goto out_err;
+
+       addr = pfn_to_kaddr(page_to_pfn(node->page));
+
+       err = xenbus_map_ring(dev, gnt_ref, &node->handle, addr);
+       if (err)
+               goto out_err;
+
+       spin_lock(&xenbus_valloc_lock);
+       list_add(&node->next, &xenbus_valloc_pages);
+       spin_unlock(&xenbus_valloc_lock);
+
+       *vaddr = addr;
+       return 0;
+
+ out_err:
+       free_xenballooned_pages(1, &node->page);
+       kfree(node);
+       return err;
+}
+
+/**
+ * xenbus_map_ring_valloc
+ * @dev: xenbus device
+ * @gnt_ref: grant reference
+ * @vaddr: pointer to address to be filled out by mapping
+ *
+ * Based on Rusty Russell's skeleton driver's map_page.
+ * Map a page of memory into this domain from another domain's grant table.
+ * xenbus_map_ring_valloc allocates a page of virtual address space, maps the
+ * page to that address, and sets *vaddr to that address.
+ * Returns 0 on success, and GNTST_* (see xen/include/interface/grant_table.h)
+ * or -ENOMEM on error. If an error is returned, device will switch to
+ * XenbusStateClosing and the error message will be saved in XenStore.
+ */
+int xenbus_map_ring_valloc(struct xenbus_device *dev, int gnt_ref, void **vaddr)
+{
+       if (xen_pv_domain())
+               return xenbus_map_ring_valloc_pv(dev, gnt_ref, vaddr);
+       else
+               return xenbus_map_ring_valloc_hvm(dev, gnt_ref, vaddr);
+}
 EXPORT_SYMBOL_GPL(xenbus_map_ring_valloc);


@@ -509,20 +565,7 @@ int xenbus_map_ring(struct xenbus_device *dev, int gnt_ref,
 }
 EXPORT_SYMBOL_GPL(xenbus_map_ring);

-
-/**
- * xenbus_unmap_ring_vfree
- * @dev: xenbus device
- * @vaddr: addr to unmap
- *
- * Based on Rusty Russell's skeleton driver's unmap_page.
- * Unmap a page of memory in this domain that was imported from another domain.
- * Use xenbus_unmap_ring_vfree if you mapped in your memory with
- * xenbus_map_ring_valloc (it will free the virtual address space).
- * Returns 0 on success and returns GNTST_* on error
- * (see xen/include/interface/grant_table.h).
- */
-int xenbus_unmap_ring_vfree(struct xenbus_device *dev, void *vaddr)
+static int xenbus_unmap_ring_vfree_pv(struct xenbus_device *dev, void *vaddr)
 {
        struct vm_struct *area;
        struct gnttab_unmap_grant_ref op = {
@@ -565,8 +608,58 @@ int xenbus_unmap_ring_vfree(struct xenbus_device *dev, void *vaddr)

        return op.status;
 }
-EXPORT_SYMBOL_GPL(xenbus_unmap_ring_vfree);

+static int xenbus_unmap_ring_vfree_hvm(struct xenbus_device *dev, void *vaddr)
+{
+       int rv;
+       struct xenbus_map_node *node;
+       void *addr;
+
+       spin_lock(&xenbus_valloc_lock);
+       list_for_each_entry(node, &xenbus_valloc_pages, next) {
+               addr = pfn_to_kaddr(page_to_pfn(node->page));
+               if (addr == vaddr)
+                       goto found;
+       }
+       node = NULL;
+ found:
+       spin_unlock(&xenbus_valloc_lock);
+
+       if (!node) {
+               xenbus_dev_error(dev, -ENOENT,
+                                "can't find mapped virtual address %p", vaddr);
+               return -ENOENT;
+       }
+
+       rv = xenbus_unmap_ring(dev, node->handle, addr);
+
+       if (!rv)
+               free_xenballooned_pages(1, &node->page);
+
+       kfree(node);
+       return rv;
+}
+
+/**
+ * xenbus_unmap_ring_vfree
+ * @dev: xenbus device
+ * @vaddr: addr to unmap
+ *
+ * Based on Rusty Russell's skeleton driver's unmap_page.
+ * Unmap a page of memory in this domain that was imported from another domain.
+ * Use xenbus_unmap_ring_vfree if you mapped in your memory with
+ * xenbus_map_ring_valloc (it will free the virtual address space).
+ * Returns 0 on success and returns GNTST_* on error
+ * (see xen/include/interface/grant_table.h).
+ */
+int xenbus_unmap_ring_vfree(struct xenbus_device *dev, void *vaddr)
+{
+       if (xen_pv_domain())
+               return xenbus_unmap_ring_vfree_pv(dev, vaddr);
+       else
+               return xenbus_unmap_ring_vfree_hvm(dev, vaddr);
+}
+EXPORT_SYMBOL_GPL(xenbus_unmap_ring_vfree);

 /**
  * xenbus_unmap_ring
--
1.7.6.4
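For reference, a backend consumes this interface roughly as follows (an
illustrative sketch only; the backend_* helper names are made up, while the
xenbus calls are the ones documented above):

#include <xen/xenbus.h>

static int backend_map_ring(struct xenbus_device *dev, int ring_ref,
                            void **ring_addr)
{
        /* Allocates the backing store (a vm_area on PV, a ballooned page
         * on HVM after this patch) and maps the frontend's grant into it. */
        int err = xenbus_map_ring_valloc(dev, ring_ref, ring_addr);
        if (err)
                xenbus_dev_fatal(dev, err, "mapping ring-ref %d", ring_ref);
        return err;
}

static void backend_unmap_ring(struct xenbus_device *dev, void *ring_addr)
{
        /* Undoes the grant mapping and frees the backing store. */
        xenbus_unmap_ring_vfree(dev, ring_addr);
}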
Daniel De Graaf
2011-Oct-20 15:35 UTC
[Xen-devel] [PATCH 2/6] xenbus: Use grant-table wrapper functions
The gnttab_set_{map,unmap}_op functions should be used instead of
directly populating the fields of gnttab_map_grant_ref.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 drivers/xen/xenbus/xenbus_client.c |   17 +++++++----------
 1 files changed, 7 insertions(+), 10 deletions(-)

diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
index 534e744..ca7c287 100644
--- a/drivers/xen/xenbus/xenbus_client.c
+++ b/drivers/xen/xenbus/xenbus_client.c
@@ -544,12 +544,10 @@ EXPORT_SYMBOL_GPL(xenbus_map_ring_valloc);
 int xenbus_map_ring(struct xenbus_device *dev, int gnt_ref,
                    grant_handle_t *handle, void *vaddr)
 {
-       struct gnttab_map_grant_ref op = {
-               .host_addr = (unsigned long)vaddr,
-               .flags     = GNTMAP_host_map,
-               .ref       = gnt_ref,
-               .dom       = dev->otherend_id,
-       };
+       struct gnttab_map_grant_ref op;
+
+       gnttab_set_map_op(&op, (phys_addr_t)vaddr, GNTMAP_host_map, gnt_ref,
+                         dev->otherend_id);

        if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
                BUG();
@@ -674,10 +672,9 @@ EXPORT_SYMBOL_GPL(xenbus_unmap_ring_vfree);
 int xenbus_unmap_ring(struct xenbus_device *dev,
                      grant_handle_t handle, void *vaddr)
 {
-       struct gnttab_unmap_grant_ref op = {
-               .host_addr = (unsigned long)vaddr,
-               .handle    = handle,
-       };
+       struct gnttab_unmap_grant_ref op;
+
+       gnttab_set_unmap_op(&op, (phys_addr_t)vaddr, GNTMAP_host_map, handle);

        if (HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, &op, 1))
                BUG();
--
1.7.6.4
Daniel De Graaf
2011-Oct-20 15:35 UTC
[Xen-devel] [PATCH 3/6] xen/grant-table: Support mappings required by blkback
Allow mappings without GNTMAP_contains_pte and allow unmapping to specify
if the PTEs should be cleared.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 drivers/xen/gntdev.c      |    3 ++-
 drivers/xen/grant-table.c |   23 ++++-------------------
 include/xen/grant_table.h |    2 +-
 3 files changed, 7 insertions(+), 21 deletions(-)

diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index 3987132..5227506 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -312,7 +312,8 @@ static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
                }
        }

-       err = gnttab_unmap_refs(map->unmap_ops + offset, map->pages + offset, pages);
+       err = gnttab_unmap_refs(map->unmap_ops + offset, map->pages + offset,
+                               pages, true);
        if (err)
                return err;

diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index bf1c094..a02d139 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -472,24 +472,9 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
                                (map_ops[i].host_addr & ~PAGE_MASK));
                        mfn = pte_mfn(*pte);
                } else {
-                       /* If you really wanted to do this:
-                        * mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
-                        *
-                        * The reason we do not implement it is b/c on the
-                        * unmap path (gnttab_unmap_refs) we have no means of
-                        * checking whether the page is !GNTMAP_contains_pte.
-                        *
-                        * That is without some extra data-structure to carry
-                        * the struct page, bool clear_pte, and list_head next
-                        * tuples and deal with allocation/delallocation, etc.
-                        *
-                        * The users of this API set the GNTMAP_contains_pte
-                        * flag so lets just return not supported until it
-                        * becomes neccessary to implement.
-                        */
-                       return -EOPNOTSUPP;
+                       mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
                }
-               ret = m2p_add_override(mfn, pages[i], &kmap_ops[i]);
+               ret = m2p_add_override(mfn, pages[i], kmap_ops ? &kmap_ops[i] : NULL);
                if (ret)
                        return ret;
        }
@@ -499,7 +484,7 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 EXPORT_SYMBOL_GPL(gnttab_map_refs);

 int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
-                     struct page **pages, unsigned int count)
+                     struct page **pages, unsigned int count, bool clear_pte)
 {
        int i, ret;

@@ -511,7 +496,7 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
                return ret;

        for (i = 0; i < count; i++) {
-               ret = m2p_remove_override(pages[i], true /* clear the PTE */);
+               ret = m2p_remove_override(pages[i], clear_pte);
                if (ret)
                        return ret;
        }
diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index 11e2dfc..37da54d 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -158,6 +158,6 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
                    struct gnttab_map_grant_ref *kmap_ops,
                    struct page **pages, unsigned int count);
 int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
-                     struct page **pages, unsigned int count);
+                     struct page **pages, unsigned int count, bool clear_pte);

 #endif /* __ASM_GNTTAB_H__ */
--
1.7.6.4
Daniel De Graaf
2011-Oct-20 15:35 UTC
[Xen-devel] [PATCH 4/6] xen/blkback: use grant-table.c hypercall wrappers
Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 drivers/block/xen-blkback/blkback.c |   29 ++++-------------------------
 1 files changed, 4 insertions(+), 25 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 15ec4db..1e256dc 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -324,6 +324,7 @@ struct seg_buf {
 static void xen_blkbk_unmap(struct pending_req *req)
 {
        struct gnttab_unmap_grant_ref unmap[BLKIF_MAX_SEGMENTS_PER_REQUEST];
+       struct page *pages[BLKIF_MAX_SEGMENTS_PER_REQUEST];
        unsigned int i, invcount = 0;
        grant_handle_t handle;
        int ret;
@@ -335,25 +336,12 @@ static void xen_blkbk_unmap(struct pending_req *req)
                gnttab_set_unmap_op(&unmap[invcount], vaddr(req, i),
                                    GNTMAP_host_map, handle);
                pending_handle(req, i) = BLKBACK_INVALID_HANDLE;
+               pages[invcount] = virt_to_page(vaddr(req, i));
                invcount++;
        }

-       ret = HYPERVISOR_grant_table_op(
-               GNTTABOP_unmap_grant_ref, unmap, invcount);
+       ret = gnttab_unmap_refs(unmap, pages, invcount, false);
        BUG_ON(ret);
-       /*
-        * Note, we use invcount, so nr->pages, so we can't index
-        * using vaddr(req, i).
-        */
-       for (i = 0; i < invcount; i++) {
-               ret = m2p_remove_override(
-                       virt_to_page(unmap[i].host_addr), false);
-               if (ret) {
-                       pr_alert(DRV_PFX "Failed to remove M2P override for %lx\n",
-                                (unsigned long)unmap[i].host_addr);
-                       continue;
-               }
-       }
 }

 static int xen_blkbk_map(struct blkif_request *req,
@@ -381,7 +369,7 @@ static int xen_blkbk_map(struct blkif_request *req,
                                  pending_req->blkif->domid);
        }

-       ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map, nseg);
+       ret = gnttab_map_refs(map, NULL, &blkbk->pending_page(pending_req, 0), nseg);
        BUG_ON(ret);

        /*
@@ -401,15 +389,6 @@ static int xen_blkbk_map(struct blkif_request *req,
                if (ret)
                        continue;

-               ret = m2p_add_override(PFN_DOWN(map[i].dev_bus_addr),
-                       blkbk->pending_page(pending_req, i), NULL);
-               if (ret) {
-                       pr_alert(DRV_PFX "Failed to install M2P override for %lx (ret: %d)\n",
-                                (unsigned long)map[i].dev_bus_addr, ret);
-                       /* We could switch over to GNTTABOP_copy */
-                       continue;
-               }
-
                seg[i].buf = map[i].dev_bus_addr |
                        (req->u.rw.seg[i].first_sect << 9);
        }
--
1.7.6.4
Daniel De Graaf
2011-Oct-20 15:35 UTC
[Xen-devel] [PATCH 5/6] xen/netback: Enable netback on HVM guests
Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 drivers/net/xen-netback/netback.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 3af2924..9d80f99 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -1626,7 +1626,7 @@ static int __init netback_init(void)
     int rc = 0;
     int group;

-    if (!xen_pv_domain())
+    if (!xen_domain())
         return -ENODEV;

     xen_netbk_group_nr = num_online_cpus();
-- 
1.7.6.4
Daniel De Graaf
2011-Oct-20 15:35 UTC
[Xen-devel] [PATCH 6/6] xen/blkback: Enable blkback on HVM guests
Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 drivers/block/xen-blkback/blkback.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 1e256dc..fbffdf0 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -823,7 +823,7 @@ static int __init xen_blkif_init(void)
     int i, mmap_pages;
     int rc = 0;

-    if (!xen_pv_domain())
+    if (!xen_domain())
         return -ENODEV;

     blkbk = kzalloc(sizeof(struct xen_blkbk), GFP_KERNEL);
-- 
1.7.6.4
Daniel De Graaf
2011-Oct-20 16:48 UTC
[Xen-devel] [PATCH 1/6 v2] xenbus: Support HVM backends
Initial version lacked the list_del in xenbus_unmap_ring_vfree_hvm

-------------------------------------------------------->8

Add HVM implementations of xenbus_(map,unmap)_ring_v(alloc,free) so
that ring mappings can be done without using GNTMAP_contains_pte which
is not supported on HVM.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 drivers/xen/xenbus/xenbus_client.c |  155 +++++++++++++++++++++++++++++-------
 1 files changed, 125 insertions(+), 30 deletions(-)

diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
index 52bc57f..4b2fbcc 100644
--- a/drivers/xen/xenbus/xenbus_client.c
+++ b/drivers/xen/xenbus/xenbus_client.c
@@ -32,15 +32,26 @@

 #include <linux/slab.h>
 #include <linux/types.h>
+#include <linux/spinlock.h>
 #include <linux/vmalloc.h>
 #include <asm/xen/hypervisor.h>
 #include <asm/xen/page.h>
 #include <xen/interface/xen.h>
 #include <xen/interface/event_channel.h>
+#include <xen/balloon.h>
 #include <xen/events.h>
 #include <xen/grant_table.h>
 #include <xen/xenbus.h>

+struct xenbus_map_node {
+    struct list_head next;
+    struct page *page;
+    grant_handle_t handle;
+};
+
+static DEFINE_SPINLOCK(xenbus_valloc_lock);
+static LIST_HEAD(xenbus_valloc_pages);
+
 const char *xenbus_strstate(enum xenbus_state state)
 {
     static const char *const name[] = {
@@ -419,21 +430,8 @@ int xenbus_free_evtchn(struct xenbus_device *dev, int port)
 EXPORT_SYMBOL_GPL(xenbus_free_evtchn);


-/**
- * xenbus_map_ring_valloc
- * @dev: xenbus device
- * @gnt_ref: grant reference
- * @vaddr: pointer to address to be filled out by mapping
- *
- * Based on Rusty Russell's skeleton driver's map_page.
- * Map a page of memory into this domain from another domain's grant table.
- * xenbus_map_ring_valloc allocates a page of virtual address space, maps the
- * page to that address, and sets *vaddr to that address.
- * Returns 0 on success, and GNTST_* (see xen/include/interface/grant_table.h)
- * or -ENOMEM on error. If an error is returned, device will switch to
- * XenbusStateClosing and the error message will be saved in XenStore.
- */
-int xenbus_map_ring_valloc(struct xenbus_device *dev, int gnt_ref, void **vaddr)
+static int xenbus_map_ring_valloc_pv(struct xenbus_device *dev,
+                     int gnt_ref, void **vaddr)
 {
     struct gnttab_map_grant_ref op = {
         .flags = GNTMAP_host_map | GNTMAP_contains_pte,
@@ -468,6 +466,64 @@ int xenbus_map_ring_valloc(struct xenbus_device *dev, int gnt_ref, void **vaddr)
     *vaddr = area->addr;
     return 0;
 }
+
+static int xenbus_map_ring_valloc_hvm(struct xenbus_device *dev,
+                      int gnt_ref, void **vaddr)
+{
+    struct xenbus_map_node *node;
+    int err;
+    void *addr;
+
+    *vaddr = NULL;
+
+    node = kzalloc(sizeof(*node), GFP_KERNEL);
+    if (!node)
+        return -ENOMEM;
+
+    err = alloc_xenballooned_pages(1, &node->page, false);
+    if (err)
+        goto out_err;
+
+    addr = pfn_to_kaddr(page_to_pfn(node->page));
+
+    err = xenbus_map_ring(dev, gnt_ref, &node->handle, addr);
+    if (err)
+        goto out_err;
+
+    spin_lock(&xenbus_valloc_lock);
+    list_add(&node->next, &xenbus_valloc_pages);
+    spin_unlock(&xenbus_valloc_lock);
+
+    *vaddr = addr;
+    return 0;
+
+ out_err:
+    free_xenballooned_pages(1, &node->page);
+    kfree(node);
+    return err;
+}
+
+/**
+ * xenbus_map_ring_valloc
+ * @dev: xenbus device
+ * @gnt_ref: grant reference
+ * @vaddr: pointer to address to be filled out by mapping
+ *
+ * Based on Rusty Russell's skeleton driver's map_page.
+ * Map a page of memory into this domain from another domain's grant table.
+ * xenbus_map_ring_valloc allocates a page of virtual address space, maps the
+ * page to that address, and sets *vaddr to that address.
+ * Returns 0 on success, and GNTST_* (see xen/include/interface/grant_table.h)
+ * or -ENOMEM on error. If an error is returned, device will switch to
+ * XenbusStateClosing and the error message will be saved in XenStore.
+ */
+int xenbus_map_ring_valloc(struct xenbus_device *dev, int gnt_ref, void **vaddr)
+{
+    if (xen_pv_domain())
+        return xenbus_map_ring_valloc_pv(dev, gnt_ref, vaddr);
+    else
+        return xenbus_map_ring_valloc_hvm(dev, gnt_ref, vaddr);
+}
 EXPORT_SYMBOL_GPL(xenbus_map_ring_valloc);


@@ -509,20 +565,7 @@ int xenbus_map_ring(struct xenbus_device *dev, int gnt_ref,
 }
 EXPORT_SYMBOL_GPL(xenbus_map_ring);

-
-/**
- * xenbus_unmap_ring_vfree
- * @dev: xenbus device
- * @vaddr: addr to unmap
- *
- * Based on Rusty Russell's skeleton driver's unmap_page.
- * Unmap a page of memory in this domain that was imported from another domain.
- * Use xenbus_unmap_ring_vfree if you mapped in your memory with
- * xenbus_map_ring_valloc (it will free the virtual address space).
- * Returns 0 on success and returns GNTST_* on error
- * (see xen/include/interface/grant_table.h).
- */
-int xenbus_unmap_ring_vfree(struct xenbus_device *dev, void *vaddr)
+static int xenbus_unmap_ring_vfree_pv(struct xenbus_device *dev, void *vaddr)
 {
     struct vm_struct *area;
     struct gnttab_unmap_grant_ref op = {
@@ -565,8 +608,60 @@ int xenbus_unmap_ring_vfree(struct xenbus_device *dev, void *vaddr)

     return op.status;
 }
-EXPORT_SYMBOL_GPL(xenbus_unmap_ring_vfree);

+static int xenbus_unmap_ring_vfree_hvm(struct xenbus_device *dev, void *vaddr)
+{
+    int rv;
+    struct xenbus_map_node *node;
+    void *addr;
+
+    spin_lock(&xenbus_valloc_lock);
+    list_for_each_entry(node, &xenbus_valloc_pages, next) {
+        addr = pfn_to_kaddr(page_to_pfn(node->page));
+        if (addr == vaddr) {
+            list_del(&node->next);
+            goto found;
+        }
+    }
+    node = NULL;
+ found:
+    spin_unlock(&xenbus_valloc_lock);
+
+    if (!node) {
+        xenbus_dev_error(dev, -ENOENT,
+                 "can't find mapped virtual address %p", vaddr);
+        return -ENOENT;
+    }
+
+    rv = xenbus_unmap_ring(dev, node->handle, addr);
+
+    if (!rv)
+        free_xenballooned_pages(1, &node->page);
+
+    kfree(node);
+    return rv;
+}
+
+/**
+ * xenbus_unmap_ring_vfree
+ * @dev: xenbus device
+ * @vaddr: addr to unmap
+ *
+ * Based on Rusty Russell's skeleton driver's unmap_page.
+ * Unmap a page of memory in this domain that was imported from another domain.
+ * Use xenbus_unmap_ring_vfree if you mapped in your memory with
+ * xenbus_map_ring_valloc (it will free the virtual address space).
+ * Returns 0 on success and returns GNTST_* on error
+ * (see xen/include/interface/grant_table.h).
+ */
+int xenbus_unmap_ring_vfree(struct xenbus_device *dev, void *vaddr)
+{
+    if (xen_pv_domain())
+        return xenbus_unmap_ring_vfree_pv(dev, vaddr);
+    else
+        return xenbus_unmap_ring_vfree_hvm(dev, vaddr);
+}
+EXPORT_SYMBOL_GPL(xenbus_unmap_ring_vfree);

 /**
  * xenbus_unmap_ring
-- 
1.7.6.4
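For illustration, a minimal caller-side sketch of the two wrappers added above, which behave the same from the backend's point of view on both the PV and HVM paths. The function names and the way the grant reference is obtained are hypothetical; only the xenbus_map_ring_valloc/xenbus_unmap_ring_vfree signatures come from this patch.

    #include <xen/xenbus.h>

    static void *ring_addr;

    /* Map the frontend's shared ring given its grant reference
     * (normally read from xenstore during the connect handshake). */
    static int example_connect_ring(struct xenbus_device *dev, int ring_ref)
    {
        /* Allocates the backing (a ballooned page on HVM, a VA range
         * on PV), maps the granted page, and returns a kernel address. */
        return xenbus_map_ring_valloc(dev, ring_ref, &ring_addr);
    }

    /* Tear the mapping down again; this also frees the backing allocation. */
    static void example_disconnect_ring(struct xenbus_device *dev)
    {
        if (ring_addr)
            xenbus_unmap_ring_vfree(dev, ring_addr);
        ring_addr = NULL;
    }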
Ian Campbell
2011-Oct-24 09:31 UTC
[Xen-devel] Re: [PATCH 5/6] xen/netback: Enable netback on HVM guests
On Thu, 2011-10-20 at 16:35 +0100, Daniel De Graaf wrote:
> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

Normally netback patches would go in via the networking subsystem
maintainer's tree but since this depends on core Xen patches from this
series and is unlikely to conflict with anything in the net-next tree I
suspect it would make more sense for Konrad to take this one.

David (Miller) does that work for you?

Ian.

> ---
>  drivers/net/xen-netback/netback.c |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
> 
> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> index 3af2924..9d80f99 100644
> --- a/drivers/net/xen-netback/netback.c
> +++ b/drivers/net/xen-netback/netback.c
> @@ -1626,7 +1626,7 @@ static int __init netback_init(void)
>      int rc = 0;
>      int group;
> 
> -    if (!xen_pv_domain())
> +    if (!xen_domain())
>          return -ENODEV;
> 
>      xen_netbk_group_nr = num_online_cpus();
David Miller
2011-Oct-24 09:34 UTC
[Xen-devel] Re: [PATCH 5/6] xen/netback: Enable netback on HVM guests
From: Ian Campbell <Ian.Campbell@citrix.com>
Date: Mon, 24 Oct 2011 10:31:08 +0100

> On Thu, 2011-10-20 at 16:35 +0100, Daniel De Graaf wrote:
>> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> 
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> Normally netback patches would go in via the networking subsystem
> maintainer's tree but since this depends on core Xen patches from this
> series and is unlikely to conflict with anything in the net-next tree I
> suspect it would make more sense for Konrad to take this one.
> 
> David (Miller) does that work for you?

Yes, it does.
Konrad Rzeszutek Wilk
2011-Oct-24 14:19 UTC
Re: [Xen-devel] Re: [PATCH 5/6] xen/netback: Enable netback on HVM guests
On Mon, Oct 24, 2011 at 05:34:19AM -0400, David Miller wrote:
> From: Ian Campbell <Ian.Campbell@citrix.com>
> Date: Mon, 24 Oct 2011 10:31:08 +0100
> 
> > On Thu, 2011-10-20 at 16:35 +0100, Daniel De Graaf wrote:
> >> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> > 
> > Acked-by: Ian Campbell <ian.campbell@citrix.com>
> > 
> > Normally netback patches would go in via the networking subsystem
> > maintainer's tree but since this depends on core Xen patches from this
> > series and is unlikely to conflict with anything in the net-next tree I
> > suspect it would make more sense for Konrad to take this one.
> > 
> > David (Miller) does that work for you?
> 
> Yes, it does.

OK, Can I stick Acked-by: David Miller on that patch?

Thank you.
Konrad Rzeszutek Wilk
2011-Oct-24 21:40 UTC
Re: [Xen-devel] [PATCH 1/5] xen/netback: Use xenballooned pages for comms
> (Will move to commit message). In PV guests, it is sufficient to only
> reserve kernel address space for grant mappings because Xen modifies the
> mappings directly. HVM guests require that Xen modify the GFN-to-MFN
> mapping, so the pages being remapped must already be allocated. Pages

By allocated you mean the populate_physmap hypercall must happen before
the grant operations are done?

(When I see allocated I think alloc_page, which I believe is _not_ what
you were saying).

> obtained from alloc_xenballooned_pages have valid GFNs not currently
> mapped to an MFN, so are available to be used in grant mappings.
> 
Daniel De Graaf
2011-Oct-24 21:47 UTC
Re: [Xen-devel] [PATCH 1/5] xen/netback: Use xenballooned pages for comms
On 10/24/2011 05:40 PM, Konrad Rzeszutek Wilk wrote:
>> (Will move to commit message). In PV guests, it is sufficient to only
>> reserve kernel address space for grant mappings because Xen modifies the
>> mappings directly. HVM guests require that Xen modify the GFN-to-MFN
>> mapping, so the pages being remapped must already be allocated. Pages
> 
> By allocated you mean the populate_physmap hypercall must happen before
> the grant operations are done?
> 
> (When I see allocated I think alloc_page, which I believe is _not_ what
> you were saying).

The pages must be valid kernel pages (with GFNs) which are actually obtained
with alloc_page if the balloon doesn't have any sitting around for us. They
must also *not* be populated in the physmap, which is why we grab them from
the balloon and not from alloc_page directly.

> 
>> obtained from alloc_xenballooned_pages have valid GFNs not currently
>> mapped to an MFN, so are available to be used in grant mappings.
>> 
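To make the distinction concrete, here is a sketch of mapping a grant into a ballooned page on an auto-translated guest. It uses the three-argument alloc_xenballooned_pages form from the xenbus patch in this thread; the function name and the way the grant reference and remote domid are obtained are assumptions for illustration only.

    #include <xen/balloon.h>
    #include <xen/grant_table.h>
    #include <asm/xen/hypercall.h>
    #include <asm/xen/page.h>

    static int example_map_grant(grant_ref_t ref, domid_t otherend,
                                 struct page **page, grant_handle_t *handle)
    {
        struct gnttab_map_grant_ref op;
        void *addr;
        int err;

        /* The page has a valid GFN but no MFN behind it: the balloon
         * driver has already given the frame back to Xen. */
        err = alloc_xenballooned_pages(1, page, false /* lowmem */);
        if (err)
            return err;

        addr = pfn_to_kaddr(page_to_pfn(*page));

        /* No GNTMAP_contains_pte: Xen fills the hole in the physmap
         * with the granted frame. */
        gnttab_set_map_op(&op, (unsigned long)addr, GNTMAP_host_map,
                          ref, otherend);
        if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
            BUG();
        if (op.status != GNTST_okay) {
            free_xenballooned_pages(1, page);
            return -EINVAL;
        }
        *handle = op.handle;
        return 0;
    }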
Konrad Rzeszutek Wilk
2011-Oct-24 21:55 UTC
Re: [Xen-devel] [PATCH 1/6 v2] xenbus: Support HVM backends
On Thu, Oct 20, 2011 at 12:48:04PM -0400, Daniel De Graaf wrote:
> Initial version lacked the list_del in xenbus_unmap_ring_vfree_hvm
> 
> -------------------------------------------------------->8
> 
> Add HVM implementations of xenbus_(map,unmap)_ring_v(alloc,free) so
> that ring mappings can be done without using GNTMAP_contains_pte which
> is not supported on HVM.
> 
> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> ---
>  drivers/xen/xenbus/xenbus_client.c |  155 +++++++++++++++++++++++++++++-------
>  1 files changed, 125 insertions(+), 30 deletions(-)
> 
> diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
> index 52bc57f..4b2fbcc 100644
> --- a/drivers/xen/xenbus/xenbus_client.c
> +++ b/drivers/xen/xenbus/xenbus_client.c
> @@ -32,15 +32,26 @@
> 
>  #include <linux/slab.h>
>  #include <linux/types.h>
> +#include <linux/spinlock.h>
>  #include <linux/vmalloc.h>
>  #include <asm/xen/hypervisor.h>
>  #include <asm/xen/page.h>
>  #include <xen/interface/xen.h>
>  #include <xen/interface/event_channel.h>
> +#include <xen/balloon.h>
>  #include <xen/events.h>
>  #include <xen/grant_table.h>
>  #include <xen/xenbus.h>
> 
> +struct xenbus_map_node {
> +    struct list_head next;
> +    struct page *page;
> +    grant_handle_t handle;
> +};
> +
> +static DEFINE_SPINLOCK(xenbus_valloc_lock);
> +static LIST_HEAD(xenbus_valloc_pages);
> +
>  const char *xenbus_strstate(enum xenbus_state state)
>  {
>      static const char *const name[] = {
> @@ -419,21 +430,8 @@ int xenbus_free_evtchn(struct xenbus_device *dev, int port)
>  EXPORT_SYMBOL_GPL(xenbus_free_evtchn);
> 
> 
> -/**
> - * xenbus_map_ring_valloc
> - * @dev: xenbus device
> - * @gnt_ref: grant reference
> - * @vaddr: pointer to address to be filled out by mapping
> - *
> - * Based on Rusty Russell's skeleton driver's map_page.
> - * Map a page of memory into this domain from another domain's grant table.
> - * xenbus_map_ring_valloc allocates a page of virtual address space, maps the
> - * page to that address, and sets *vaddr to that address.
> - * Returns 0 on success, and GNTST_* (see xen/include/interface/grant_table.h)
> - * or -ENOMEM on error. If an error is returned, device will switch to
> - * XenbusStateClosing and the error message will be saved in XenStore.
> - */
> -int xenbus_map_ring_valloc(struct xenbus_device *dev, int gnt_ref, void **vaddr)
> +static int xenbus_map_ring_valloc_pv(struct xenbus_device *dev,
> +                     int gnt_ref, void **vaddr)
>  {
>      struct gnttab_map_grant_ref op = {
>          .flags = GNTMAP_host_map | GNTMAP_contains_pte,
> @@ -468,6 +466,64 @@ int xenbus_map_ring_valloc(struct xenbus_device *dev, int gnt_ref, void **vaddr)
>      *vaddr = area->addr;
>      return 0;
>  }
> +
> +static int xenbus_map_ring_valloc_hvm(struct xenbus_device *dev,
> +                      int gnt_ref, void **vaddr)
> +{
> +    struct xenbus_map_node *node;
> +    int err;
> +    void *addr;
> +
> +    *vaddr = NULL;
> +
> +    node = kzalloc(sizeof(*node), GFP_KERNEL);
> +    if (!node)
> +        return -ENOMEM;
> +
> +    err = alloc_xenballooned_pages(1, &node->page, false);

Add /* lowmem */ on the 'false' parameter.
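In other words, the call in question would then read (a sketch of the suggested annotation only, not a change in behaviour):

    err = alloc_xenballooned_pages(1, &node->page, false /* lowmem */);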
Konrad Rzeszutek Wilk
2011-Oct-24 21:57 UTC
[Xen-devel] Re: [PATCH 2/6] xenbus: Use grant-table wrapper functions
On Thu, Oct 20, 2011 at 11:35:53AM -0400, Daniel De Graaf wrote:
> The gnttab_set_{map,unmap}_op functions should be used instead of
> directly populating the fields of gnttab_map_grant_ref.

You could also mention that this has the side effect that under HVM,
this happens automatically:

    op->host_addr = __pa(vaddr);

while under PV it is unchanged.

> 
> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> ---
>  drivers/xen/xenbus/xenbus_client.c |   17 +++++++----------
>  1 files changed, 7 insertions(+), 10 deletions(-)
> 
> diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
> index 534e744..ca7c287 100644
> --- a/drivers/xen/xenbus/xenbus_client.c
> +++ b/drivers/xen/xenbus/xenbus_client.c
> @@ -544,12 +544,10 @@ EXPORT_SYMBOL_GPL(xenbus_map_ring_valloc);
>  int xenbus_map_ring(struct xenbus_device *dev, int gnt_ref,
>              grant_handle_t *handle, void *vaddr)
>  {
> -    struct gnttab_map_grant_ref op = {
> -        .host_addr = (unsigned long)vaddr,
> -        .flags = GNTMAP_host_map,
> -        .ref = gnt_ref,
> -        .dom = dev->otherend_id,
> -    };
> +    struct gnttab_map_grant_ref op;
> +
> +    gnttab_set_map_op(&op, (phys_addr_t)vaddr, GNTMAP_host_map, gnt_ref,
> +              dev->otherend_id);
> 
>      if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
>          BUG();
> @@ -674,10 +672,9 @@ EXPORT_SYMBOL_GPL(xenbus_unmap_ring_vfree);
>  int xenbus_unmap_ring(struct xenbus_device *dev,
>                grant_handle_t handle, void *vaddr)
>  {
> -    struct gnttab_unmap_grant_ref op = {
> -        .host_addr = (unsigned long)vaddr,
> -        .handle = handle,
> -    };
> +    struct gnttab_unmap_grant_ref op;
> +
> +    gnttab_set_unmap_op(&op, (phys_addr_t)vaddr, GNTMAP_host_map, handle);
> 
>      if (HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, &op, 1))
>          BUG();
> -- 
> 1.7.6.4
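A simplified paraphrase of the address handling the review comment describes (not the helper itself; the name is made up and details such as argument types are approximated) looks roughly like this:

    #include <xen/grant_table.h>
    #include <xen/features.h>
    #include <asm/xen/page.h>

    static inline void example_set_map_op(struct gnttab_map_grant_ref *map,
                                          unsigned long addr, uint32_t flags,
                                          grant_ref_t ref, domid_t domid)
    {
        if (flags & GNTMAP_contains_pte)
            map->host_addr = addr;                  /* PV: address of the PTE */
        else if (xen_feature(XENFEAT_auto_translated_physmap))
            map->host_addr = __pa(addr);            /* HVM: physical address */
        else
            map->host_addr = addr;                  /* PV: virtual address unchanged */

        map->flags = flags;
        map->ref = ref;
        map->dom = domid;
    }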
Konrad Rzeszutek Wilk
2011-Oct-24 22:00 UTC
Re: [Xen-devel] [PATCH 3/6] xen/grant-table: Support mappings required by blkback
On Thu, Oct 20, 2011 at 11:35:54AM -0400, Daniel De Graaf wrote:
> Allow mappings without GNTMAP_contains_pte and allow unmapping to
> specify if the PTEs should be cleared.
> 
> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> ---
>  drivers/xen/gntdev.c      |    3 ++-
>  drivers/xen/grant-table.c |   23 ++++-------------------
>  include/xen/grant_table.h |    2 +-
>  3 files changed, 7 insertions(+), 21 deletions(-)
> 
> diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
> index 3987132..5227506 100644
> --- a/drivers/xen/gntdev.c
> +++ b/drivers/xen/gntdev.c
> @@ -312,7 +312,8 @@ static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
>          }
>      }
> 
> -    err = gnttab_unmap_refs(map->unmap_ops + offset, map->pages + offset, pages);
> +    err = gnttab_unmap_refs(map->unmap_ops + offset, map->pages + offset,
> +                pages, true);
>      if (err)
>          return err;
> 
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index bf1c094..a02d139 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -472,24 +472,9 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>                  (map_ops[i].host_addr & ~PAGE_MASK));
>              mfn = pte_mfn(*pte);
>          } else {
> -            /* If you really wanted to do this:
> -             * mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
> -             *
> -             * The reason we do not implement it is b/c on the
> -             * unmap path (gnttab_unmap_refs) we have no means of
> -             * checking whether the page is !GNTMAP_contains_pte.

Can you mention how you are addressing the !GNTMAP_contains_pte on unmap
issue? (or how it is already addressed).

> -             *
> -             * That is without some extra data-structure to carry
> -             * the struct page, bool clear_pte, and list_head next
> -             * tuples and deal with allocation/delallocation, etc.
> -             *
> -             * The users of this API set the GNTMAP_contains_pte
> -             * flag so lets just return not supported until it
> -             * becomes neccessary to implement.
> -             */
> -            return -EOPNOTSUPP;
> +            mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
>          }
> -        ret = m2p_add_override(mfn, pages[i], &kmap_ops[i]);
> +        ret = m2p_add_override(mfn, pages[i], kmap_ops ? &kmap_ops[i] : NULL);
>          if (ret)
>              return ret;
>      }
> @@ -499,7 +484,7 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  EXPORT_SYMBOL_GPL(gnttab_map_refs);
> 
>  int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> -              struct page **pages, unsigned int count)
> +              struct page **pages, unsigned int count, bool clear_pte)
>  {
>      int i, ret;
> 
> @@ -511,7 +496,7 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>          return ret;
> 
>      for (i = 0; i < count; i++) {
> -        ret = m2p_remove_override(pages[i], true /* clear the PTE */);
> +        ret = m2p_remove_override(pages[i], clear_pte);
>          if (ret)
>              return ret;
>      }
> diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
> index 11e2dfc..37da54d 100644
> --- a/include/xen/grant_table.h
> +++ b/include/xen/grant_table.h
> @@ -158,6 +158,6 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>              struct gnttab_map_grant_ref *kmap_ops,
>              struct page **pages, unsigned int count);
>  int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> -              struct page **pages, unsigned int count);
> +              struct page **pages, unsigned int count, bool clear_pte);
> 
>  #endif /* __ASM_GNTTAB_H__ */
> -- 
> 1.7.6.4
Konrad Rzeszutek Wilk
2011-Oct-24 22:08 UTC
Re: [Xen-devel] [PATCH 1/5] xen/netback: Use xenballooned pages for comms
On Mon, Oct 24, 2011 at 05:47:40PM -0400, Daniel De Graaf wrote:
> On 10/24/2011 05:40 PM, Konrad Rzeszutek Wilk wrote:
> >> (Will move to commit message). In PV guests, it is sufficient to only
> >> reserve kernel address space for grant mappings because Xen modifies the
> >> mappings directly. HVM guests require that Xen modify the GFN-to-MFN
> >> mapping, so the pages being remapped must already be allocated. Pages
> > 
> > By allocated you mean the populate_physmap hypercall must happen before
> > the grant operations are done?
> > 
> > (When I see allocated I think alloc_page, which I believe is _not_ what
> > you were saying).
> 
> The pages must be valid kernel pages (with GFNs) which are actually obtained
> with alloc_page if the balloon doesn't have any sitting around for us. They
> must also *not* be populated in the physmap, which is why we grab them from
> the balloon and not from alloc_page directly.

Uh, aren't pages from alloc_page ("if the balloon does not have any sitting around
for us") obtained from normal memory that is allocated at startup. And at startup
those swaths of memory are obtained by populate_physmap call?
Daniel De Graaf
2011-Oct-24 22:24 UTC
Re: [Xen-devel] [PATCH 1/5] xen/netback: Use xenballooned pages for comms
On 10/24/2011 06:08 PM, Konrad Rzeszutek Wilk wrote:
> On Mon, Oct 24, 2011 at 05:47:40PM -0400, Daniel De Graaf wrote:
>> On 10/24/2011 05:40 PM, Konrad Rzeszutek Wilk wrote:
>>>> (Will move to commit message). In PV guests, it is sufficient to only
>>>> reserve kernel address space for grant mappings because Xen modifies the
>>>> mappings directly. HVM guests require that Xen modify the GFN-to-MFN
>>>> mapping, so the pages being remapped must already be allocated. Pages
>>>
>>> By allocated you mean the populate_physmap hypercall must happen before
>>> the grant operations are done?
>>>
>>> (When I see allocated I think alloc_page, which I believe is _not_ what
>>> you were saying).
>>
>> The pages must be valid kernel pages (with GFNs) which are actually obtained
>> with alloc_page if the balloon doesn't have any sitting around for us. They
>> must also *not* be populated in the physmap, which is why we grab them from
>> the balloon and not from alloc_page directly.
> 
> Uh, aren't pages from alloc_page ("if the balloon does not have any sitting around
> for us") obtained from normal memory that is allocated at startup. And at startup
> those swaths of memory are obtained by populate_physmap call?
> 

Yes, but alloc_xenballooned_pages calls XENMEM_decrease_reservation to remove
the MFN mappings for these pages, so they are returned to the state where
populate_physmap has not been called on them.
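For reference, a heavily simplified sketch of the step being described, handing a freshly allocated page's frame back to Xen so that only the GFN remains. The function name is made up; the real logic lives in drivers/xen/balloon.c and additionally handles batching, the ballooned-page list, P2M updates on PV, and memory accounting.

    #include <xen/interface/memory.h>
    #include <asm/xen/hypercall.h>
    #include <asm/xen/page.h>

    static int example_return_frame_to_xen(struct page *page)
    {
        /* pfn_to_mfn() is an identity map on auto-translated guests. */
        unsigned long frame = pfn_to_mfn(page_to_pfn(page));
        struct xen_memory_reservation reservation = {
            .address_bits = 0,
            .extent_order = 0,
            .domid        = DOMID_SELF,
            .nr_extents   = 1,
        };

        set_xen_guest_handle(reservation.extent_start, &frame);

        /* After this the page still has a valid GFN in the kernel, but
         * no MFN mapped behind it: exactly the state a grant mapping
         * can be placed into. */
        if (HYPERVISOR_memory_op(XENMEM_decrease_reservation,
                                 &reservation) != 1)
            return -EBUSY;
        return 0;
    }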