search for: unmap

Displaying 20 results from an estimated 2088 matches for "unmap".

Did you mean: munmap
2006 Nov 11
1
Help with newhidups subdriver for Dynex UPS
...am using the testing version of NUT from SVN at changeset 582. I have successfully created a stub for newhidups driver for my Dynex DX-800U UPS, but I am completely lost on the customization of it. I've followed the hid-subdrivers.txt instructions and I understand I need to map all the "unmapped.*" lines of my dynex-hid.c to NUT functions. However, I can't make heads or tails of what I'm supposed to change. Can someone take some time to help a struggling noob? Here's some info I hope helps: # /usr/src/nut-testing# pico /usr/local/ups/etc/ups.conf [mythbox]...
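Mapping an "unmapped.*" entry generally amounts to renaming it to the standard NUT variable it represents while keeping the HID path; entries with no obvious NUT equivalent can usually just be dropped from the table. A minimal sketch reusing the hid_info_t entry layout the generated stubs produce (the specific path below is illustrative, not taken from the actual dynex-hid.c):

/* Generated stub entry (hypothetical path, for illustration only): */
{ "unmapped.ups.powersummary.remainingcapacity", 0, 0,
  "UPS.PowerSummary.RemainingCapacity", NULL, "%.0f", 0, NULL },

/* The same entry after mapping it to the standard NUT variable name: */
{ "battery.charge", 0, 0,
  "UPS.PowerSummary.RemainingCapacity", NULL, "%.0f", 0, NULL },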
2008 Mar 10
7
[Bug 14941] New: ioremap leak in DRM
...VRAM, and has size 0x8000 or 32kB. My hardware is NV20 (gf3). DRM and DDX from git around 9th March. I found this while mmiotracing nouveau. I start the trace, load drm.ko and nouveau.ko, start X three times in a row, unload nouveau.ko and drm.ko. Now when I stop mmiotracing, it reports three non-unmapped io-mappings as described above. -- Configure bugmail: http://bugs.freedesktop.org/userprefs.cgi?tab=email ------- You are receiving this mail because: ------- You are the assignee for the bug.
2017 Oct 23
3
[RFC] virtio-iommu version 0.5
This is version 0.5 of the virtio-iommu specification, the paravirtualized IOMMU. This version addresses feedback from v0.4 and adds an event virtqueue. Please find the specification, LaTeX sources and pdf, at: git://linux-arm.org/virtio-iommu.git viommu/v0.5 http://linux-arm.org/git?p=virtio-iommu.git;a=blob;f=dist/v0.5/virtio-iommu-v0.5.pdf A detailed changelog since v0.4 follows. You can find
2006 Sep 15
2
Question: how to unmap memory mapped with direct_kernel_remap_pfn_range() ?
Hi, Xenoprof buffers are mapped into the kernel using direct_kernel_remap_pfn_range(). I need to unmap the buffer when it is not needed anymore. However, I could not find any function that unmaps pages previously mapped with direct_kernel_remap_pfn_range(). Any suggestion on how to do this? Thanks Renato _______________________________________________ Xen-devel mailing list Xen-devel@lists.xenso...
2018 Oct 15
3
[PATCH v8] virtio_blk: add discard and write zeroes support
...write zeroes features > to specification" (https://github.com/oasis-tcs/virtio-spec), the virtio There are some issues in this spec. For one, using the multiple-range format for write zeroes as well is rather inefficient; write zeroes really should use the same format as read and write. Second, the unmap flag isn't properly specified at all, as nothing says the device may not unmap without the unmap flag. Please take a look at the SCSI or NVMe spec for some guidance. > +static inline int virtblk_setup_discard_write_zeroes(struct request *req, > + bool unmap) Why is this an inline...
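For context, each discard or write-zeroes range in the request payload follows the per-segment layout below, and the unmap flag under discussion is bit 0 of the flags word. This is a sketch modelled on struct virtio_blk_discard_write_zeroes from include/uapi/linux/virtio_blk.h, shown for illustration rather than quoted from the patch:

/* One discard / write-zeroes segment in the request payload. */
struct virtio_blk_discard_write_zeroes {
	__le64 sector;       /* starting sector of the range */
	__le32 num_sectors;  /* length of the range in sectors */
	__le32 flags;        /* bit 0: unmap hint for write zeroes */
};

#define VIRTIO_BLK_WRITE_ZEROES_FLAG_UNMAP 0x00000001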
2020 Oct 22
2
Why "discard":"unmap" is the default option for disks
Hello, I find "discard":"unmap" is enabled by default in the qemu cmdline (libvirt v6.6, qemu v5.1): XML: <disk type="file" device="disk"> <driver name="qemu" type="qcow2"/> <source file="/var/lib/libvirt/images/new.qcow2" index="2"/>...
2017 Oct 17
0
Junda-tech
...> /* --------------------------------------------------------------- */ > /* HID2NUT lookup table */ > /* --------------------------------------------------------------- */ > > static hid_info_t jundatech_hid2nut[] = { > > { "unmapped.ups.powersummary.capacitymode", 0, 0, > "UPS.PowerSummary.CapacityMode", NULL, "%.0f", 0, NULL }, > { "unmapped.ups.powersummary.designcapacity", 0, 0, > "UPS.PowerSummary.DesignCapacity", NULL, "%.0f", 0, NULL }, > { "u...
2015 Apr 17
2
[PATCH 3/6] mmu: map small pages into big pages(s) by IOMMU if possible
...@@ struct nvkm_vm { > u32 lpde; > }; > > +struct nvkm_vm_bp_list { > + struct list_head head; > + u32 pde; > + u32 pte; > + void *priv; > +}; > + Tracking the PDE and PTE of each memory chunk can probably be avoided if you change your unmapping strategy. Currently you are going through the list of nvkm_vm_bp_list, but you know your PDE and PTE are always going to be adjacent, since a nvkm_vma represents a contiguous block in the GPU VA. So when unmapping, you can simply check for each PTE entry whether the IOMMU bit is set, and unmap...
2015 Apr 20
3
[PATCH 3/6] mmu: map small pages into big pages(s) by IOMMU if possible
On Sat, Apr 18, 2015 at 12:37 AM, Terje Bergstrom <tbergstrom at nvidia.com> wrote: > > On 04/17/2015 02:11 AM, Alexandre Courbot wrote: >> >> Tracking the PDE and PTE of each memory chunk can probably be avoided >> if you change your unmapping strategy. Currently you are going through >> the list of nvkm_vm_bp_list, but you know your PDE and PTE are always >> going to be adjacent, since a nvkm_vma represents a contiguous block >> in the GPU VA. So when unmapping, you can simply check for each PTE >> entry whet...
2023 Jun 29
3
[PATCH drm-next v6 02/13] drm: manager to keep track of GPUs VA mappings
...connect GPU VA mappings to their backing buffers, in particular DRM GEM objects. 3) Provide a common implementation to perform more complex mapping operations on the GPU VA space. In particular splitting and merging of GPU VA mappings, e.g. for intersecting mapping requests or partial unmap requests. Tested-by: Donald Robson <donald.robson at imgtec.com> Reviewed-by: Boris Brezillon <boris.brezillon at collabora.com> Suggested-by: Dave Airlie <airlied at redhat.com> Signed-off-by: Danilo Krummrich <dakr at redhat.com> --- Documentation/gpu/drm-mm.rst | 3...
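As a rough illustration of the split case such a manager has to handle (a generic sketch, not the actual drm_gpuva API): a partial unmap of an existing mapping leaves up to two remainder mappings, and the piece after the hole needs its backing-buffer offset advanced by the distance from the old start to the hole's end.

#include <stdint.h>

/* Illustrative types only; not the DRM GPU VA manager's structures. */
struct va_mapping {
	uint64_t va_start;  /* first address covered */
	uint64_t va_end;    /* one past the last address covered */
	uint64_t bo_offset; /* offset into the backing buffer object */
};

/*
 * Split 'old' around the unmapped range [start, end), which is assumed to
 * overlap it. Fills in the surviving pieces and returns how many there are.
 */
static int split_on_partial_unmap(const struct va_mapping *old,
                                  uint64_t start, uint64_t end,
                                  struct va_mapping *left,
                                  struct va_mapping *right)
{
	int n = 0;

	if (start > old->va_start) {       /* piece before the hole */
		left->va_start = old->va_start;
		left->va_end = start;
		left->bo_offset = old->bo_offset;
		n++;
	}
	if (end < old->va_end) {           /* piece after the hole */
		right->va_start = end;
		right->va_end = old->va_end;
		right->bo_offset = old->bo_offset + (end - old->va_start);
		n++;
	}
	return n;
}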
2020 Oct 22
0
Re: Why "discard":"unmap" is the default option for disks
On Thu, Oct 22, 2020 at 10:57:05 +0800, Han Han wrote: > Hello, > I find "discard":"unmap" is enabled by default in the qemu cmdline (libvirt > v6.6, qemu v5.1): > XML: > <disk type="file" device="disk"> > <driver name="qemu" type="qcow2"/> > <source file="/var/lib/libvirt/images/new.qcow2" index...
2023 Jul 13
1
[PATCH drm-next v7 02/13] drm: manager to keep track of GPUs VA mappings
...connect GPU VA mappings to their backing buffers, in particular DRM GEM objects. 3) Provide a common implementation to perform more complex mapping operations on the GPU VA space. In particular splitting and merging of GPU VA mappings, e.g. for intersecting mapping requests or partial unmap requests. Acked-by: Thomas Hellström <thomas.hellstrom at linux.intel.com> Acked-by: Matthew Brost <matthew.brost at intel.com> Reviewed-by: Boris Brezillon <boris.brezillon at collabora.com> Tested-by: Matthew Brost <matthew.brost at intel.com> Tested-by: Donald Robson <...
2013 May 16
5
xc_map_foreign_bulk() memory leak in ARM version?
Hi Xen folks! I've run into one strange thing in the ARM version of Xen: when I use xc_map_foreign_bulk() to map some memory from domU into dom0, after unmap() of the previously returned address the memory is not freed at all. Let's look at the call stack: xc_map_foreign() -> linux_privcmd_map_foreign_bulk() -> { addr = mmap(fd); ioctl(fd, IOCTL_PRIVCMD_MMAPBATCH_V2); } -> alloc_empty_pages() -> alloc_xenbal...
2023 Jul 20
2
[PATCH drm-misc-next v8 01/12] drm: manager to keep track of GPUs VA mappings
...connect GPU VA mappings to their backing buffers, in particular DRM GEM objects. 3) Provide a common implementation to perform more complex mapping operations on the GPU VA space. In particular splitting and merging of GPU VA mappings, e.g. for intersecting mapping requests or partial unmap requests. Acked-by: Thomas Hellström <thomas.hellstrom at linux.intel.com> Acked-by: Matthew Brost <matthew.brost at intel.com> Reviewed-by: Boris Brezillon <boris.brezillon at collabora.com> Tested-by: Matthew Brost <matthew.brost at intel.com> Tested-by: Donald Robson <...
2020 Sep 03
0
Fwd: is there a way to set discard=unmap when using guestmount? (#52)
----- Forwarded message from braindevices <notifications@github.com> ----- Subject: [libguestfs/libguestfs] is there a way to set discard=unmap when using guestmount? (#52) I cannot find anything in the documentation. This option is very important for dynamic qcow2 disks. Without this the IO is super slow, even on tmpfs only 3MB/s. Subject: Re: [libguestfs/libguestfs] is there a way to set discard=unmap when using guestmount? (#52) Not...
2018 May 31
2
Make discard='unmap' the default?
Is it possible to make discard='unmap' the default for virtio-scsi disks? (Related, is it possible to make virtio-scsi the default disk type, rather than virtio-blk?) Thanks! -- ======================================================================== Ian Pilcher arequipeno@gmail.com -----...
2020 Aug 18
3
[PATCH V2 1/2] Add new flush_iotlb_range and handle freelists when using iommu_unmap_fast
Add a flush_iotlb_range to allow flushing of an IOVA range instead of a full flush in the dma-iommu path. Allow iommu_unmap_fast to return newly freed page table pages and pass the freelist to queue_iova in the dma-iommu ops path. This patch is useful for IOMMU drivers (in this case the Intel IOMMU driver) which need to wait for the IOTLB to be flushed before newly freed/unmapped page table pages can be released. This way...
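For reference, the unmap-fast path this series builds on follows roughly the call order below. A minimal sketch assuming the iommu_iotlb_gather helpers from include/linux/iommu.h; unmap_range_deferred() is a hypothetical wrapper written only to show the sequence, not code from the patch:

#include <linux/iommu.h>

/* Unmap a range without per-page IOTLB flushes, then flush once. Only after
 * the sync completes is it safe to free pages the driver put on a freelist. */
static size_t unmap_range_deferred(struct iommu_domain *domain,
                                   unsigned long iova, size_t size)
{
	struct iommu_iotlb_gather gather;
	size_t unmapped;

	iommu_iotlb_gather_init(&gather);
	unmapped = iommu_unmap_fast(domain, iova, size, &gather);
	iommu_iotlb_sync(domain, &gather);

	return unmapped;
}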