The following changes since commit 384a55c0ac12eabac9531ff3a433598d32a42a15:
  Jeremy Fitzhardinge (1):
        Merge branch 'xen-tip/core' into xen-tip/master

are available in the git repository at:

  git://xenbits.xensource.com/people/ianc/linux-2.6.git for-jeremy/privcmd

Ian Campbell (1):
      privcmd: MMAPBATCH: Fix error handling/reporting

 drivers/xen/xenfs/privcmd.c |   56 +++++++++++++++++++++++++++++++-----------
 1 files changed, 41 insertions(+), 15 deletions(-)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
Ian Campbell
2009-May-20 14:45 UTC
[Xen-devel] [PATCH] privcmd: MMAPBATCH: Fix error handling/reporting
On error IOCTL_PRIVCMD_MMAPBATCH is expected to set the top nibble of
the affected MFN and return 0. Currently it leaves the MFN unmodified
and returns the number of failures. Therefore:

- reimplement remap_domain_mfn_range() using direct
  HYPERVISOR_mmu_update() calls and small batches. The
  xen_set_domain_pte() interface does not report errors and, since some
  failures are expected/normal, using the multicall infrastructure is
  too noisy.
- return 0 as expected
- write back the updated MFN list to mmapbatch->arr, not over
  mmapbatch itself, which smashed the caller's stack.
- remap_domain_mfn_range() can be static.

With this change I am able to start an HVM domain.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
---
 drivers/xen/xenfs/privcmd.c |   56 +++++++++++++++++++++++++++++++-----------
 1 files changed, 41 insertions(+), 15 deletions(-)

diff --git a/drivers/xen/xenfs/privcmd.c b/drivers/xen/xenfs/privcmd.c
index 263f622..110b062 100644
--- a/drivers/xen/xenfs/privcmd.c
+++ b/drivers/xen/xenfs/privcmd.c
@@ -31,14 +31,16 @@
 #include <xen/features.h>
 #include <xen/page.h>
 
+#define REMAP_BATCH_SIZE 16
+
 #ifndef HAVE_ARCH_PRIVCMD_MMAP
 static int privcmd_enforce_singleshot_mapping(struct vm_area_struct *vma);
 #endif
 
 struct remap_data {
 	unsigned long mfn;
-	unsigned domid;
 	pgprot_t prot;
+	struct mmu_update *mmu_update;
 };
 
 static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
@@ -47,17 +49,23 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
 	struct remap_data *rmd = data;
 	pte_t pte = pte_mkspecial(pfn_pte(rmd->mfn++, rmd->prot));
 
-	xen_set_domain_pte(ptep, pte, rmd->domid);
+	rmd->mmu_update->ptr = arbitrary_virt_to_machine(ptep).maddr;
+	rmd->mmu_update->val = pte_val_ma(pte);
+	rmd->mmu_update++;
 
 	return 0;
 }
 
-int remap_domain_mfn_range(struct vm_area_struct *vma, unsigned long addr,
-			   unsigned long mfn, unsigned long size,
-			   pgprot_t prot, unsigned domid)
+static int remap_domain_mfn_range(struct vm_area_struct *vma,
+				  unsigned long addr,
+				  unsigned long mfn, int nr,
+				  pgprot_t prot, unsigned domid)
 {
 	struct remap_data rmd;
-	int err;
+	struct mmu_update mmu_update[REMAP_BATCH_SIZE];
+	int batch;
+	unsigned long range;
+	int err = 0;
 
 	prot = __pgprot(pgprot_val(prot) | _PAGE_IOMAP);
 
@@ -65,10 +73,29 @@ int remap_domain_mfn_range(struct vm_area_struct *vma, unsigned long addr,
 
 	rmd.mfn = mfn;
 	rmd.prot = prot;
-	rmd.domid = domid;
 
-	err = apply_to_page_range(vma->vm_mm, addr, size,
-				  remap_area_mfn_pte_fn, &rmd);
+	while (nr) {
+		batch = min(REMAP_BATCH_SIZE, nr);
+		range = (unsigned long)batch << PAGE_SHIFT;
+
+		rmd.mmu_update = mmu_update;
+		err = apply_to_page_range(vma->vm_mm, addr, range,
+					  remap_area_mfn_pte_fn, &rmd);
+		if (err)
+			goto out;
+
+		err = -EFAULT;
+		if (HYPERVISOR_mmu_update(mmu_update, batch, NULL, domid) < 0)
+			goto out;
+
+		nr -= batch;
+		addr += range;
+	}
+
+	err = 0;
+out:
+
+	flush_tlb_all();
 
 	return err;
 }
@@ -157,7 +184,7 @@ static int traverse_pages(unsigned nelem, size_t size,
 {
 	void *pagedata;
 	unsigned pageidx;
-	int ret;
+	int ret = 0;
 
 	BUG_ON(size > PAGE_SIZE);
 
@@ -207,8 +234,7 @@ static int mmap_mfn_range(void *data, void *state)
 
 	rc = remap_domain_mfn_range(vma,
 				    msg->va & PAGE_MASK,
-				    msg->mfn,
-				    msg->npages << PAGE_SHIFT,
+				    msg->mfn, msg->npages,
 				    vma->vm_page_prot,
 				    st->domain);
 	if (rc < 0)
@@ -289,7 +315,7 @@ static int mmap_batch_fn(void *data, void *state)
 	struct mmap_batch_state *st = state;
 
 	if (remap_domain_mfn_range(st->vma, st->va & PAGE_MASK,
-				   *mfnp, PAGE_SIZE,
+				   *mfnp, 1,
 				   st->vma->vm_page_prot, st->domain) < 0) {
 		*mfnp |= 0xf0000000U;
 		st->err++;
@@ -361,9 +387,9 @@ static long privcmd_ioctl_mmap_batch(void __user *udata)
 	up_write(&mm->mmap_sem);
 
 	if (state.err > 0) {
-		ret = state.err;
+		ret = 0;
 
-		state.user = udata;
+		state.user = m.arr;
 		traverse_pages(m.num, sizeof(xen_pfn_t),
 			       &pagelist,
 			       mmap_return_errors, &state);
-- 
1.5.6.5
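[Editorial aside: the MFN error-tagging convention the patch implements — each failed entry gets its top nibble set via `*mfnp |= 0xf0000000U` while the ioctl itself returns 0 — means userspace must scan the returned array to find failures. The following standalone sketch shows that caller-side scan; the helper names and mask macro are illustrative, not part of the privcmd interface.]

```c
#include <assert.h>
#include <stdint.h>

/* IOCTL_PRIVCMD_MMAPBATCH marks a failed mapping by setting the top
 * nibble of the 32-bit MFN entry, so an entry whose top four bits are
 * all ones is treated as an error slot. Hypothetical helper names. */
#define PRIVCMD_MFN_ERR_MASK 0xf0000000U

int mfn_entry_failed(uint32_t mfn)
{
	return (mfn & PRIVCMD_MFN_ERR_MASK) == PRIVCMD_MFN_ERR_MASK;
}

/* Scan the MFN array the ioctl wrote back and count failed entries,
 * since the ioctl's return value no longer carries that count. */
unsigned count_failed_mfns(const uint32_t *arr, unsigned nr)
{
	unsigned i, err = 0;

	for (i = 0; i < nr; i++)
		if (mfn_entry_failed(arr[i]))
			err++;
	return err;
}
```

A caller would invoke the ioctl, then run `count_failed_mfns()` over `mmapbatch.arr` to decide which pages failed to map.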
Jeremy Fitzhardinge
2009-May-20 19:33 UTC
Re: [Xen-devel] [PATCH] privcmd: MMAPBATCH: Fix error handling/reporting
Ian Campbell wrote:
> On error IOCTL_PRIVCMD_MMAPBATCH is expected to set the top nibble of
> the affected MFN and return 0. Currently it leaves the MFN unmodified
> and returns the number of failures. Therefore:
>
> - reimplement remap_domain_mfn_range() using direct
>   HYPERVISOR_mmu_update() calls and small batches. The
>   xen_set_domain_pte() interface does not report errors and, since
>   some failures are expected/normal, using the multicall
>   infrastructure is too noisy.

The noise is just for debugging; if failure is expected, then maybe we
can extend it to be quiet about those cases.

> - return 0 as expected
> - write back the updated MFN list to mmapbatch->arr, not over
>   mmapbatch itself, which smashed the caller's stack.

Oops.

> - remap_domain_mfn_range() can be static.
>
> With this change I am able to start an HVM domain.

OK, good. I've pulled this into xen-tip/dom0/xenfs.

Thanks,
	J

> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> Cc: Jeremy Fitzhardinge <jeremy@goop.org>
> [...]
Jeremy Fitzhardinge
2009-May-20 21:13 UTC
Re: [Xen-devel] [PATCH] privcmd: MMAPBATCH: Fix error handling/reporting
Ian Campbell wrote:
> On error IOCTL_PRIVCMD_MMAPBATCH is expected to set the top nibble of
> the affected MFN and return 0. Currently it leaves the MFN unmodified
> and returns the number of failures. Therefore:
> [...]
> With this change I am able to start an HVM domain.

This breaks compiling xenfs as a module; neither flush_tlb_all nor
arbitrary_virt_to_machine are exported, I think.

	J
Ian Campbell
2009-May-21 08:51 UTC
Re: [Xen-devel] [PATCH] privcmd: MMAPBATCH: Fix error handling/reporting
On Wed, 2009-05-20 at 15:33 -0400, Jeremy Fitzhardinge wrote:
> Ian Campbell wrote:
> > [...]
> The noise is just for debugging; if failure is expected, then maybe we
> can extend it to be quiet about those cases.

In this specific instance going directly at the mmu_update interface is
probably better, since the propagation of errors from the multicall
infrastructure is tricky (well, currently non-existent). I don't think
multicalls would buy us anything here anyhow, since mmu_update is
batched already.

On Wed, 2009-05-20 at 17:13 -0400, Jeremy Fitzhardinge wrote:
> This breaks compiling xenfs as a module; neither flush_tlb_all nor
> arbitrary_virt_to_machine are exported, I think.

Rather than exporting those, I think moving remap_domain_mfn_range() to
core code (with xen_ on the front of the name) and exporting that would
be cleaner. Thoughts?

Ian.
Ian Campbell
2009-May-21 09:18 UTC
Re: [Xen-devel] [PATCH] privcmd: MMAPBATCH: Fix error handling/reporting
On Thu, 2009-05-21 at 04:51 -0400, Ian Campbell wrote:
> Rather than exporting those, I think moving remap_domain_mfn_range()
> to core code (with xen_ on the front of the name) and exporting that
> would be cleaner. Thoughts?

The following changes since commit 0fccc3aa1ef9fe0753e1f5b4caf7803ae87dd59c:
  Ian Campbell (1):
        privcmd: MMAPBATCH: Fix error handling/reporting

are available in the git repository at:

  git://xenbits.xensource.com/people/ianc/linux-2.6.git for-jeremy/privcmd

Ian Campbell (1):
      xen/privcmd: move remap_domain_mfn_range() to core xen code and export.

 arch/x86/xen/mmu.c          |   66 +++++++++++++++++++++++++++++++
 drivers/xen/xenfs/privcmd.c |   81 ++++--------------------------------------
 include/xen/xen-ops.h       |    5 +++
 3 files changed, 79 insertions(+), 73 deletions(-)
Ian Campbell
2009-May-21 09:18 UTC
[Xen-devel] [PATCH] xen/privcmd: move remap_domain_mfn_range() to core xen code and export.
This allows xenfs to be built as a module; previously it required
flush_tlb_all and arbitrary_virt_to_machine to be exported.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---

I'm not sure about the IA64 impact. Under 2.6.18 their
direct_remap_pfn_range is very different to the x86 one, so I've
assumed it should be arch-specific here too.

 arch/x86/xen/mmu.c          |   66 +++++++++++++++++++++++++++++++
 drivers/xen/xenfs/privcmd.c |   81 ++++--------------------------------------
 include/xen/xen-ops.h       |    5 +++
 3 files changed, 79 insertions(+), 73 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index fefdeee..8c53fc9 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -2323,6 +2323,72 @@ void xen_destroy_contiguous_region(unsigned long vstart, unsigned int order)
 }
 EXPORT_SYMBOL_GPL(xen_destroy_contiguous_region);
 
+#define REMAP_BATCH_SIZE 16
+
+struct remap_data {
+	unsigned long mfn;
+	pgprot_t prot;
+	struct mmu_update *mmu_update;
+};
+
+static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
+				 unsigned long addr, void *data)
+{
+	struct remap_data *rmd = data;
+	pte_t pte = pte_mkspecial(pfn_pte(rmd->mfn++, rmd->prot));
+
+	rmd->mmu_update->ptr = arbitrary_virt_to_machine(ptep).maddr;
+	rmd->mmu_update->val = pte_val_ma(pte);
+	rmd->mmu_update++;
+
+	return 0;
+}
+
+int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       unsigned long mfn, int nr,
+			       pgprot_t prot, unsigned domid)
+{
+	struct remap_data rmd;
+	struct mmu_update mmu_update[REMAP_BATCH_SIZE];
+	int batch;
+	unsigned long range;
+	int err = 0;
+
+	prot = __pgprot(pgprot_val(prot) | _PAGE_IOMAP);
+
+	vma->vm_flags |= VM_IO | VM_RESERVED | VM_PFNMAP;
+
+	rmd.mfn = mfn;
+	rmd.prot = prot;
+
+	while (nr) {
+		batch = min(REMAP_BATCH_SIZE, nr);
+		range = (unsigned long)batch << PAGE_SHIFT;
+
+		rmd.mmu_update = mmu_update;
+		err = apply_to_page_range(vma->vm_mm, addr, range,
+					  remap_area_mfn_pte_fn, &rmd);
+		if (err)
+			goto out;
+
+		err = -EFAULT;
+		if (HYPERVISOR_mmu_update(mmu_update, batch, NULL, domid) < 0)
+			goto out;
+
+		nr -= batch;
+		addr += range;
+	}
+
+	err = 0;
+out:
+
+	flush_tlb_all();
+
+	return err;
+}
+EXPORT_SYMBOL_GPL(xen_remap_domain_mfn_range);
+
 #ifdef CONFIG_XEN_DEBUG_FS
 
 static struct dentry *d_mmu_debug;
diff --git a/drivers/xen/xenfs/privcmd.c b/drivers/xen/xenfs/privcmd.c
index 110b062..a3fee58 100644
--- a/drivers/xen/xenfs/privcmd.c
+++ b/drivers/xen/xenfs/privcmd.c
@@ -30,76 +30,12 @@
 #include <xen/interface/xen.h>
 #include <xen/features.h>
 #include <xen/page.h>
-
-#define REMAP_BATCH_SIZE 16
+#include <xen/xen-ops.h>
 
 #ifndef HAVE_ARCH_PRIVCMD_MMAP
 static int privcmd_enforce_singleshot_mapping(struct vm_area_struct *vma);
 #endif
 
-struct remap_data {
-	unsigned long mfn;
-	pgprot_t prot;
-	struct mmu_update *mmu_update;
-};
-
-static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
-				 unsigned long addr, void *data)
-{
-	struct remap_data *rmd = data;
-	pte_t pte = pte_mkspecial(pfn_pte(rmd->mfn++, rmd->prot));
-
-	rmd->mmu_update->ptr = arbitrary_virt_to_machine(ptep).maddr;
-	rmd->mmu_update->val = pte_val_ma(pte);
-	rmd->mmu_update++;
-
-	return 0;
-}
-
-static int remap_domain_mfn_range(struct vm_area_struct *vma,
-				  unsigned long addr,
-				  unsigned long mfn, int nr,
-				  pgprot_t prot, unsigned domid)
-{
-	struct remap_data rmd;
-	struct mmu_update mmu_update[REMAP_BATCH_SIZE];
-	int batch;
-	unsigned long range;
-	int err = 0;
-
-	prot = __pgprot(pgprot_val(prot) | _PAGE_IOMAP);
-
-	vma->vm_flags |= VM_IO | VM_RESERVED | VM_PFNMAP;
-
-	rmd.mfn = mfn;
-	rmd.prot = prot;
-
-	while (nr) {
-		batch = min(REMAP_BATCH_SIZE, nr);
-		range = (unsigned long)batch << PAGE_SHIFT;
-
-		rmd.mmu_update = mmu_update;
-		err = apply_to_page_range(vma->vm_mm, addr, range,
-					  remap_area_mfn_pte_fn, &rmd);
-		if (err)
-			goto out;
-
-		err = -EFAULT;
-		if (HYPERVISOR_mmu_update(mmu_update, batch, NULL, domid) < 0)
-			goto out;
-
-		nr -= batch;
-		addr += range;
-	}
-
-	err = 0;
-out:
-
-	flush_tlb_all();
-
-	return err;
-}
-
 static long privcmd_ioctl_hypercall(void __user *udata)
 {
 	privcmd_hypercall_t hypercall;
@@ -232,11 +168,11 @@ static int mmap_mfn_range(void *data, void *state)
 	    ((msg->va+(msg->npages<<PAGE_SHIFT)) > vma->vm_end))
 		return -EINVAL;
 
-	rc = remap_domain_mfn_range(vma,
-				    msg->va & PAGE_MASK,
-				    msg->mfn, msg->npages,
-				    vma->vm_page_prot,
-				    st->domain);
+	rc = xen_remap_domain_mfn_range(vma,
+					msg->va & PAGE_MASK,
+					msg->mfn, msg->npages,
+					vma->vm_page_prot,
+					st->domain);
 	if (rc < 0)
 		return rc;
 
@@ -314,9 +250,8 @@ static int mmap_batch_fn(void *data, void *state)
 	xen_pfn_t *mfnp = data;
 	struct mmap_batch_state *st = state;
 
-	if (remap_domain_mfn_range(st->vma, st->va & PAGE_MASK,
-				   *mfnp, 1,
-				   st->vma->vm_page_prot, st->domain) < 0) {
+	if (xen_remap_domain_mfn_range(st->vma, st->va & PAGE_MASK, *mfnp, 1,
+				       st->vma->vm_page_prot, st->domain) < 0) {
 		*mfnp |= 0xf0000000U;
 		st->err++;
 	}
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index d789c93..4d8a23e 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -20,4 +20,9 @@ int xen_create_contiguous_region(unsigned long vstart, unsigned int order,
 
 void xen_destroy_contiguous_region(unsigned long vstart, unsigned int order);
 
+int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
+			       unsigned long addr,
+			       unsigned long mfn, int nr,
+			       pgprot_t prot, unsigned domid);
+
 #endif /* INCLUDE_XEN_OPS_H */
-- 
1.5.6.5
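[Editorial aside: stripped of the hypercall machinery, the REMAP_BATCH_SIZE loop in xen_remap_domain_mfn_range() is a plain fixed-size batching pattern — fill a small on-stack buffer, submit it, advance, and bail out on the first failed batch. The sketch below restates that shape as standalone C; all names other than REMAP_BATCH_SIZE are hypothetical.]

```c
#include <assert.h>
#include <stddef.h>

#define REMAP_BATCH_SIZE 16	/* same chunk size as the patch */

/* Number of submissions (HYPERVISOR_mmu_update calls in the patch)
 * that nr PTE updates require: ceil(nr / REMAP_BATCH_SIZE). */
size_t batches_needed(size_t nr)
{
	return (nr + REMAP_BATCH_SIZE - 1) / REMAP_BATCH_SIZE;
}

/* Generic shape of the loop: process nr items in chunks of at most
 * REMAP_BATCH_SIZE, stopping at the first error just as the kernel
 * code jumps to its "out" label and abandons the remaining entries. */
int process_in_batches(size_t nr, int (*submit)(size_t first, size_t count))
{
	size_t done = 0;

	while (nr) {
		size_t batch = nr < REMAP_BATCH_SIZE ? nr : REMAP_BATCH_SIZE;
		int err = submit(done, batch);

		if (err)
			return err;	/* rest of the range is left unmapped */
		done += batch;
		nr -= batch;
	}
	return 0;
}
```

For example, 35 pages would be flushed to the hypervisor as three batches of 16, 16, and 3 updates.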
Ian Campbell
2009-May-21 09:19 UTC
[Xen-devel] Re: [PATCH] xen/privcmd: move remap_domain_mfn_range() to core xen code and export.
You both should have got a CC of this, but git-send-email is playing up.

On Thu, 2009-05-21 at 05:18 -0400, Ian Campbell wrote:
> This allows xenfs to be built as a module; previously it required
> flush_tlb_all and arbitrary_virt_to_machine to be exported.
>
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> [...]
Jeremy Fitzhardinge
2009-May-21 17:14 UTC
Re: [Xen-devel] [PATCH] privcmd: MMAPBATCH: Fix error handling/reporting
Ian Campbell wrote:
>> The noise is just for debugging; if failure is expected, then maybe
>> we can extend it to be quiet about those cases.
>
> In this specific instance going directly at the mmu_update interface
> is probably better, since the propagation of errors from the
> multicall infrastructure is tricky (well, currently non-existent). I
> don't think multicalls would buy us anything here anyhow, since
> mmu_update is batched already.

Yeah. The generic multicall stuff is (should be) tuned for the common
case of no errors. I think this is the first instance of something
where we expect errors back. The multicall path is fairly hot, and I
suspect it's going to need some trimming when the real performance work
starts, so keeping it low-feature is a good idea. (Though we could make
use of maybe-fail to deal with vmap aliases in pte-pinning...)

>> This breaks compiling xenfs as a module; neither flush_tlb_all nor
>> arbitrary_virt_to_machine are exported, I think.
>
> Rather than exporting those, I think moving remap_domain_mfn_range()
> to core code (with xen_ on the front of the name) and exporting that
> would be cleaner. Thoughts?

Yes, seems reasonable to me. Though if it's arch-neutral, drivers/xen
would be better.

	J