search for: kmem_caches

Displaying 20 results from an estimated 246 matches for "kmem_caches".

Did you mean: kmem_cache
2013 Jan 16
6
[PATCH V2] mm/slab: add a leak decoder callback
This adds a leak decoder callback that slab destruction can use to generate debugging output for objects still allocated. Callers like btrfs implement their own leak tracking, which manages allocated objects in a list (or some other structure) and essentially duplicates what slab already does. Adding a callback for leak tracking avoids this duplication as well as the runtime overhead. (The idea is from
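For illustration only, a minimal sketch of what such a decoder callback could look like from a caller's side. The object type, cache name and decoder function below are made up, and the registration interface proposed by the patch is not reproduced here:

    /* Illustrative sketch; names are invented, not from the patch. */
    #include <linux/init.h>
    #include <linux/list.h>
    #include <linux/printk.h>
    #include <linux/slab.h>
    #include <linux/types.h>

    struct my_item {
            struct list_head list;
            u64 objectid;
    };

    static struct kmem_cache *my_item_cache;

    /* A decoder would be called for each object still allocated when the
     * cache is destroyed, so the caller can describe the leak instead of
     * maintaining its own shadow list of live objects. */
    static void my_item_leak_decoder(void *object)
    {
            struct my_item *item = object;

            pr_err("leaked my_item %p, objectid %llu\n",
                   item, (unsigned long long)item->objectid);
    }

    static int __init my_item_init(void)
    {
            my_item_cache = kmem_cache_create("my_item_cache",
                                              sizeof(struct my_item), 0,
                                              SLAB_HWCACHE_ALIGN, NULL);
            if (!my_item_cache)
                    return -ENOMEM;
            /* With the proposed patch, my_item_leak_decoder() would be
             * attached to the cache here (exact interface omitted). */
            return 0;
    }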
2017 Mar 01
2
[PATCH] drm: virtio: use kmem_cache
Just use kmem_cache instead of rolling our own, limited implementation. Signed-off-by: Gerd Hoffmann <kraxel at redhat.com> --- drivers/gpu/drm/virtio/virtgpu_drv.h | 4 +-- drivers/gpu/drm/virtio/virtgpu_vq.c | 57 +++++++----------------------------- 2 files changed, 11 insertions(+), 50 deletions(-) diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h
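The pattern the patch adopts is the standard kmem_cache one; a generic sketch follows. The struct, cache name and helper functions are illustrative, not the actual virtio-gpu identifiers:

    #include <linux/slab.h>

    struct vbuf {                   /* stand-in for the driver's buffer struct */
        void *data;
        unsigned int len;
    };

    static struct kmem_cache *vbuf_cache;

    static int vbuf_cache_init(void)
    {
        /* One slab cache replaces the hand-rolled freelist. */
        vbuf_cache = kmem_cache_create("example_vbufs",
                                       sizeof(struct vbuf), 0, 0, NULL);
        return vbuf_cache ? 0 : -ENOMEM;
    }

    static struct vbuf *vbuf_get(void)
    {
        return kmem_cache_zalloc(vbuf_cache, GFP_KERNEL);
    }

    static void vbuf_put(struct vbuf *vbuf)
    {
        kmem_cache_free(vbuf_cache, vbuf);
    }

    static void vbuf_cache_exit(void)
    {
        kmem_cache_destroy(vbuf_cache);
    }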
2019 May 16
3
[PATCH 05/10] s390/cio: introduce DMA pools to cio
On Sun, 12 May 2019, Halil Pasic wrote: > I've also got code that deals with AIRQ_IV_CACHELINE by turning the > kmem_cache into a dma_pool. > > Cornelia, Sebastian which approach do you prefer: > 1) get rid of cio_dma_pool and AIRQ_IV_CACHELINE, and waste a page per > vector, or > 2) go with the approach taken by the patch below? We only have a couple of users for
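For reference, "turning the kmem_cache into a dma_pool" means keeping the fixed-size allocator idea but getting DMA-able memory plus a bus address back. A simplified sketch, with the pool name, size and alignment as placeholders:

    #include <linux/cache.h>
    #include <linux/device.h>
    #include <linux/dmapool.h>

    static struct dma_pool *airq_pool;

    static int airq_pool_create(struct device *dev)
    {
        airq_pool = dma_pool_create("example_airq_iv", dev,
                                    /* size */ 256,
                                    /* align */ L1_CACHE_BYTES,
                                    /* boundary */ 0);
        return airq_pool ? 0 : -ENOMEM;
    }

    static void *airq_vector_alloc(dma_addr_t *dma)
    {
        /* Zeroed, DMA-capable allocation instead of kmem_cache_zalloc(). */
        return dma_pool_zalloc(airq_pool, GFP_KERNEL, dma);
    }

    static void airq_vector_free(void *vaddr, dma_addr_t dma)
    {
        dma_pool_free(airq_pool, vaddr, dma);
    }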
2013 Jan 14
5
[PATCH] mm/slab: add a leak decoder callback
This adds a leak decoder callback that kmem_cache_destroy() can use to generate debugging output for objects still allocated. Callers like btrfs implement their own leak tracking, which manages allocated objects in a list (or some other structure) and essentially duplicates what slab already does. Adding a callback for leak tracking avoids this duplication as well as the runtime overhead. Signed-off-by:
2019 May 20
0
[PATCH 05/10] s390/cio: introduce DMA pools to cio
On Thu, 16 May 2019 15:59:22 +0200 (CEST) Sebastian Ott <sebott at linux.ibm.com> wrote: > On Sun, 12 May 2019, Halil Pasic wrote: > > I've also got code that deals with AIRQ_IV_CACHELINE by turning the > > kmem_cache into a dma_pool. > > > > Cornelia, Sebastian which approach do you prefer: > > 1) get rid of cio_dma_pool and AIRQ_IV_CACHELINE, and
2019 Jun 11
2
[PATCH v4 4/8] s390/airq: use DMA memory for adapter interrupts
On Thu, 6 Jun 2019 13:51:23 +0200 Halil Pasic <pasic at linux.ibm.com> wrote: > Protected virtualization guests have to use shared pages for airq > notifier bit vectors, because hypervisor needs to write these bits. > > Let us make sure we allocate DMA memory for the notifier bit vectors by > replacing the kmem_cache with a dma_cache and kalloc() with > cio_dma_zalloc().
2019 May 22
1
[PATCH 05/10] s390/cio: introduce DMA pools to cio
On Mon, 20 May 2019, Halil Pasic wrote: > On Thu, 16 May 2019 15:59:22 +0200 (CEST) > Sebastian Ott <sebott at linux.ibm.com> wrote: > > We only have a couple of users for airq_iv: > > > > virtio_ccw.c: 2K bits > > You mean a single allocation is 2k bits (VIRTIO_IV_BITS = 256 * 8)? My > understanding is that the upper bound is more like: > MAX_AIRQ_AREAS
2013 Mar 07
3
[PATCH 1/2] virtio-scsi: use pr_err() instead of printk()
Convert the virtio-scsi driver to use pr_err() instead of printk(). Signed-off-by: Wanlong Gao <gaowanlong at cn.fujitsu.com> --- drivers/scsi/virtio_scsi.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/drivers/scsi/virtio_scsi.c b/drivers/scsi/virtio_scsi.c index 612e320..f679b8c 100644 --- a/drivers/scsi/virtio_scsi.c +++ b/drivers/scsi/virtio_scsi.c @@
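The conversion itself is mechanical. A generic before/after sketch; the message text and prefix below are illustrative, not the strings actually changed in drivers/scsi/virtio_scsi.c:

    /* pr_fmt() must be defined before the printk include to take effect. */
    #define pr_fmt(fmt) "virtio_scsi: " fmt

    #include <linux/printk.h>

    static void report_error_old_style(int err)
    {
        /* Before: printk() with an explicit log level string. */
        printk(KERN_ERR "virtio_scsi: request failed: %d\n", err);
    }

    static void report_error_new_style(int err)
    {
        /* After: pr_err() implies KERN_ERR and prepends pr_fmt(). */
        pr_err("request failed: %d\n", err);
    }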
2014 Jan 08
2
[PATCH net-next v2 1/4] net: allow > 0 order atomic page alloc in skb_page_frag_refill
On Wed, 2014-01-08 at 21:18 +0200, Michael S. Tsirkin wrote: > On Wed, Jan 08, 2014 at 10:26:03AM -0800, Eric Dumazet wrote: > > On Wed, 2014-01-08 at 20:08 +0200, Michael S. Tsirkin wrote: > > > > > Eric said we also need a patch to add __GFP_NORETRY, right? > > > Probably before this one in series. > > > > Nope, this __GFP_NORETRY has nothing to do
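The pattern under discussion is: try a high-order allocation opportunistically without triggering the reclaim/retry machinery, then fall back to order 0. A simplified sketch of that idea, not the exact skb_page_frag_refill() code; the starting order and flag mix are assumptions:

    #include <linux/gfp.h>
    #include <linux/mm.h>

    static struct page *alloc_frag_pages(gfp_t gfp, unsigned int *order_out)
    {
        unsigned int order = 3;       /* e.g. try 32K first */
        struct page *page;

        for (; order > 0; order--) {
            /* __GFP_NORETRY: give up rather than enter the retry loop;
             * __GFP_NOWARN: high-order failures are expected and harmless. */
            page = alloc_pages(gfp | __GFP_COMP | __GFP_NOWARN |
                               __GFP_NORETRY, order);
            if (page) {
                *order_out = order;
                return page;
            }
        }
        /* Fall back to a single page with the caller's original flags. */
        *order_out = 0;
        return alloc_pages(gfp, 0);
    }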
2019 Jun 11
0
[PATCH v4 4/8] s390/airq: use DMA memory for adapter interrupts
On Tue, 11 Jun 2019 12:17:21 +0200 Cornelia Huck <cohuck at redhat.com> wrote: > On Thu, 6 Jun 2019 13:51:23 +0200 > Halil Pasic <pasic at linux.ibm.com> wrote: > > > Protected virtualization guests have to use shared pages for airq > > notifier bit vectors, because hypervisor needs to write these bits. > > > > Let us make sure we allocate DMA
2006 Mar 20
1
ARC cache issues with b35/b36; Bugs 6397610 / 6398177
> Bug ID: 6398177
> Synopsis: zfs: poor nightly build performance in 32-bit mode (high disk activity)

Part of the problem appears to be these kmem_caches:

# mdb -k
...
> ::kmastat
cache                        buf    buf    buf    memory     alloc alloc
name                        size in use  total    in use   succeed  fail
------------------------- ------ ------ ------ --------- --------- -----
...
dmu_buf_impl_t               192   2029 104328...
2020 Feb 07
0
[RFC PATCH v7 47/78] KVM: introspection: add a jobs list to every introspected vCPU
Every vCPU has a lock-protected list in which (mostly) the receiving worker places the jobs that have to be done by the vCPU once it is kicked (KVM_REQ_INTROSPECTION) out of the guest. A job is defined by a "do" function, a "free" function and a pointer (context). Co-developed-by: Nicușor Cîțu <ncitu at bitdefender.com> Signed-off-by: Nicușor Cîțu <ncitu at
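A rough sketch of the data structure described above. Struct and field names here are invented for illustration; the actual definitions live in the patch:

    #include <linux/list.h>
    #include <linux/slab.h>
    #include <linux/spinlock.h>

    struct kvm_vcpu;        /* opaque here; provided by KVM */

    /* One queued job: a "do" function, a "free" function and a context
     * pointer, linked into the vCPU's lock-protected job list. */
    struct vcpu_job {
        struct list_head link;
        void (*fct)(struct kvm_vcpu *vcpu, void *ctx);
        void (*free_fct)(void *ctx);
        void *ctx;
    };

    struct vcpu_jobs {
        struct list_head list;
        spinlock_t lock;
    };

    static int add_job(struct vcpu_jobs *jobs,
                       void (*fct)(struct kvm_vcpu *, void *),
                       void (*free_fct)(void *), void *ctx)
    {
        struct vcpu_job *job = kzalloc(sizeof(*job), GFP_KERNEL);

        if (!job)
            return -ENOMEM;
        job->fct = fct;
        job->free_fct = free_fct;
        job->ctx = ctx;

        spin_lock(&jobs->lock);
        list_add_tail(&job->link, &jobs->list);
        spin_unlock(&jobs->lock);
        /* The real code would now kick the vCPU with
         * KVM_REQ_INTROSPECTION so it drains the list. */
        return 0;
    }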
2019 Jun 11
2
[PATCH v4 4/8] s390/airq: use DMA memory for adapter interrupts
On Tue, 11 Jun 2019 16:27:21 +0200 Halil Pasic <pasic at linux.ibm.com> wrote: > On Tue, 11 Jun 2019 12:17:21 +0200 > Cornelia Huck <cohuck at redhat.com> wrote: > > > On Thu, 6 Jun 2019 13:51:23 +0200 > > Halil Pasic <pasic at linux.ibm.com> wrote: > > > > > Protected virtualization guests have to use shared pages for airq > > >
2019 May 23
0
[PATCH v2 4/8] s390/airq: use DMA memory for adapter interrupts
From: Halil Pasic <pasic at linux.ibm.com> Protected virtualization guests have to use shared pages for airq notifier bit vectors, because the hypervisor needs to write these bits. Let us make sure we allocate DMA memory for the notifier bit vectors by replacing the kmem_cache with a dma_cache and kalloc() with cio_dma_zalloc(). Signed-off-by: Halil Pasic <pasic at linux.ibm.com> ---
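In rough terms, the change swaps a kmem_cache-backed allocation of the notifier bit vector for an allocation from shared (DMA) memory that the hypervisor may write. A simplified before/after sketch; the helper prototypes are declared locally here as I recall them from this series, and sizes/names are illustrative:

    #include <linux/slab.h>
    #include <linux/types.h>

    /* s390 cio helpers introduced by this series (declared for the sketch). */
    void *cio_dma_zalloc(size_t size);
    void cio_dma_free(void *cpu_addr, size_t size);

    static unsigned long *alloc_airq_vector_old(struct kmem_cache *cache)
    {
        /* Old: private kernel memory, not necessarily accessible to the
         * hypervisor under protected virtualization. */
        return kmem_cache_zalloc(cache, GFP_KERNEL);
    }

    static unsigned long *alloc_airq_vector_new(size_t nr_bytes)
    {
        /* New: shared/DMA memory the hypervisor can write when posting
         * adapter interrupts. */
        return cio_dma_zalloc(nr_bytes);
    }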