search for: kmalloc_node

Displaying 12 results from an estimated 12 matches for "kmalloc_node".

2020 Jun 25
5
[RFC 0/3] virtio: NUMA-aware memory allocation
These patches are not ready to be merged because I was unable to measure a performance improvement. I'm publishing them so they are archived in case someone picks up this work again in the future. The goal of these patches is to allocate virtqueues and driver state from the device's NUMA node for optimal memory access latency. Only guests with a vNUMA topology and virtio devices spread
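The mechanism this cover letter describes is allocating driver state on the NUMA node the device is attached to. A minimal sketch of that pattern, not taken from the patches themselves: dev_to_node() and kzalloc_node() are real kernel APIs, while struct my_vq_state is a hypothetical stand-in for the virtqueue/driver state being relocated.

    #include <linux/slab.h>
    #include <linux/device.h>

    struct my_vq_state {
    	void *ring;	/* hypothetical stand-in for virtqueue state */
    };

    static struct my_vq_state *alloc_vq_state(struct device *dev)
    {
    	/* NUMA node the device sits on, or NUMA_NO_NODE if unknown;
    	 * kzalloc_node() then places the allocation on that node.
    	 */
    	int node = dev_to_node(dev);

    	return kzalloc_node(sizeof(struct my_vq_state), GFP_KERNEL, node);
    }

When dev_to_node() returns NUMA_NO_NODE the allocator falls back to the default policy, so the pattern degrades gracefully on guests without a vNUMA topology.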
2018 Mar 09
7
[PATCH net 0/3] Several fixes for vhost_net ptr_ring usage
Hi: This small series tries to fix several bugs in the ptr_ring usage in vhost_net. Please review. Thanks

Alexander Potapenko (1):
  vhost_net: initialize rx_ring in vhost_net_open()

Jason Wang (2):
  vhost_net: keep private_data and rx_ring synced
  vhost_net: examine pointer types during un-producing

 drivers/net/tun.c      | 3 ++-
 drivers/vhost/net.c    | 8 +++++---
 include/linux/if_tun.h | 4
2018 Mar 09
0
[PATCH net 1/3] vhost_net: initialize rx_ring in vhost_net_open()
...entry/common.c:292 ...
origin:
 kmsan_save_stack_with_flags mm/kmsan/kmsan.c:303 [inline]
 kmsan_internal_poison_shadow+0xb8/0x1b0 mm/kmsan/kmsan.c:213
 kmsan_kmalloc_large+0x6f/0xd0 mm/kmsan/kmsan.c:392
 kmalloc_large_node_hook mm/slub.c:1366 [inline]
 kmalloc_large_node mm/slub.c:3808 [inline]
 __kmalloc_node+0x100e/0x1290 mm/slub.c:3818
 kmalloc_node include/linux/slab.h:554 [inline]
 kvmalloc_node+0x1a5/0x2e0 mm/util.c:419
 kvmalloc include/linux/mm.h:541 [inline]
 vhost_net_open+0x64/0x5f0 drivers/vhost/net.c:921
 misc_open+0x7b5/0x8b0 drivers/char/misc.c:154
 chrdev_open+0xc28/0xd90 fs/char_dev.c:41...
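The trace shows vhost_net_open() obtaining the vhost_net structure via kvmalloc(), which does not zero memory, so every field must be set up by hand before first use. A minimal sketch of the fix named in the patch title, reconstructed from the shape of drivers/vhost/net.c rather than quoted from the patch itself:

    	/* In vhost_net_open(), after kvmalloc() of struct vhost_net *n:
    	 * kvmalloc() does not zero the allocation, so each per-queue
    	 * field needs explicit initialization.
    	 */
    	for (i = 0; i < VHOST_NET_VQ_MAX; i++) {
    		n->vqs[i].ubufs = NULL;
    		n->vqs[i].ubuf_info = NULL;
    		n->vqs[i].upend_idx = 0;
    		n->vqs[i].done_idx = 0;
    		n->vqs[i].vhost_hlen = 0;
    		n->vqs[i].sock_hlen = 0;
    		n->vqs[i].rx_ring = NULL;	/* the previously missing line */
    	}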
2018 Jul 31
1
KASAN: use-after-free Read in vhost_transport_send_pkt
...00000000000246 R12: 00000000ffffffff
> R13: 00000000004ca838 R14: 00000000004c25fb R15: 0000000000000000
>
> Allocated by task 28583:
>  save_stack+0x43/0xd0 mm/kasan/kasan.c:448
>  set_track mm/kasan/kasan.c:460 [inline]
>  kasan_kmalloc+0xc4/0xe0 mm/kasan/kasan.c:553
>  __do_kmalloc_node mm/slab.c:3682 [inline]
>  __kmalloc_node+0x47/0x70 mm/slab.c:3689
>  kmalloc_node include/linux/slab.h:555 [inline]
>  kvmalloc_node+0xb9/0xf0 mm/util.c:423
>  kvmalloc include/linux/mm.h:573 [inline]
>  vhost_vsock_dev_open+0xa2/0x5a0 drivers/vhost/vsock.c:511
>  misc_open+0x3ca...
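Both this report and the previous one allocate the device state through the kvmalloc() family. A brief sketch of that allocation pattern; struct my_state is a hypothetical stand-in, while kvmalloc_node() and kvfree() are the real helpers from <linux/mm.h>:

    #include <linux/mm.h>

    struct my_state {
    	void *private;	/* hypothetical device state */
    };

    static struct my_state *my_state_alloc(int node)
    {
    	/* kvmalloc_node() tries kmalloc_node() first and falls back
    	 * to vmalloc() when the size is too large for the slab
    	 * allocator, which is why the traces above pass through both
    	 * kmalloc_node and kvmalloc_node frames.
    	 */
    	return kvmalloc_node(sizeof(struct my_state), GFP_KERNEL, node);
    }

    static void my_state_free(struct my_state *s)
    {
    	kvfree(s);	/* must be kvfree(): either allocator may have served it */
    }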
2007 Apr 18
3
Per-cpu patches on top of PDA stuff...
Hi Jeremy, all, Sorry this took so long, spent last week in Japan at OSDL conf then netconf. After several false starts, I ended up with a very simple implementation, which clashes significantly with your work since then 8(. I've pushed the patches anyway, but it's going to be significant work for me to re-merge them, so I wanted your feedback first. The first patch simply changes
2012 Apr 20
1
[PATCH] multiqueue: a hodge podge of things
...int node_id)
+
+struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id,
+					   unsigned int nr_queues)
 {
 	struct request_queue *q;
 	int err;

 	q = kmem_cache_alloc_node(blk_requestq_cachep,
 				  gfp_mask | __GFP_ZERO, node_id);
 	if (!q)
 		return NULL;

+	q->queue_ctx = kmalloc_node(nr_queues * sizeof(struct blk_queue_ctx),
+				    GFP_KERNEL, node_id);
+	if (!q->queue_ctx) {
+		kmem_cache_free(blk_requestq_cachep, q);
+		return NULL;
+	}
+
+	blk_init_queue_ctx(q, nr_queues);
+
 	q->id = ida_simple_get(&blk_queue_ida, 0, 0, gfp_mask);
 	if (q->id < 0)
 		goto fai...
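One detail worth flagging in this hunk: the open-coded nr_queues * sizeof(struct blk_queue_ctx) multiplication can overflow for very large nr_queues. Later kernels added kmalloc_array_node() for exactly this pattern; a hedged sketch (the helper postdates this 2012 patch, so using it here is an anachronism):

    	/* Overflow-checked per-node array allocation: returns NULL if
    	 * nr_queues * sizeof(struct blk_queue_ctx) would overflow.
    	 */
    	q->queue_ctx = kmalloc_array_node(nr_queues,
    					  sizeof(struct blk_queue_ctx),
    					  GFP_KERNEL, node_id);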
2008 Nov 12
15
[PATCH][RFC][12+2][v3] An expanded CFQ scheduler for cgroups
This patchset expands the traditional CFQ scheduler to support cgroups, and improves on the old version. The improvements are as follows. * Modularizing our new CFQ scheduler. The expanded CFQ scheduler is registered/unregistered as a new I/O elevator scheduler called "cfq-cgroups". This way, the traditional CFQ scheduler, which does not handle cgroups, and our new CFQ
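The modularization described above rests on the elevator registration API. A hypothetical skeleton of how a 2008-era "cfq-cgroups" module would register itself; elv_register() and elv_unregister() are the real entry points, though their exact signatures have shifted across kernel versions, and the scheduler callback table is elided:

    #include <linux/module.h>
    #include <linux/elevator.h>

    static struct elevator_type iosched_cfq_cgroups = {
    	/* .ops = { ... scheduler callbacks ... }, */
    	.elevator_name	= "cfq-cgroups",
    	.elevator_owner	= THIS_MODULE,
    };

    static int __init cfq_cgroups_init(void)
    {
    	elv_register(&iosched_cfq_cgroups);
    	return 0;
    }

    static void __exit cfq_cgroups_exit(void)
    {
    	elv_unregister(&iosched_cfq_cgroups);
    }

    module_init(cfq_cgroups_init);
    module_exit(cfq_cgroups_exit);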
2020 Aug 19
39
a saner API for allocating DMA addressable pages
Hi all, this series replaces the DMA_ATTR_NON_CONSISTENT flag to dma_alloc_attrs with a separate new dma_alloc_pages API, which is available on all platforms. In addition to cleaning up the convoluted code path, this ensures that other drivers that have asked for better support for non-coherent DMA to pages can finally be properly supported without incurring bounce buffering. I'm still a
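For context, a minimal usage sketch of the dma_alloc_pages() API this cover letter announces. The signature shown matches what eventually landed in mainline <linux/dma-mapping.h>; the RFC as posted here may differ in detail:

    #include <linux/dma-mapping.h>

    static int example_dma_pages(struct device *dev)
    {
    	dma_addr_t dma;
    	struct page *page;

    	/* Allocates pages the device can address; unlike coherent
    	 * allocations, the caller is expected to handle cache
    	 * maintenance, e.g. via dma_sync_single_for_device()/_cpu().
    	 */
    	page = dma_alloc_pages(dev, PAGE_SIZE, &dma,
    			       DMA_BIDIRECTIONAL, GFP_KERNEL);
    	if (!page)
    		return -ENOMEM;

    	/* The CPU uses page_address(page); the device uses dma. */

    	dma_free_pages(dev, PAGE_SIZE, page, dma, DMA_BIDIRECTIONAL);
    	return 0;
    }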