similar to: [RFC PATCH 0/2] use larger max_request_size for virtio_blk

Displaying results from an estimated 500 matches similar to: "[RFC PATCH 0/2] use larger max_request_size for virtio_blk"

2018 Apr 05
0
[RFC PATCH 0/2] use larger max_request_size for virtio_blk
On 4/5/18 4:09 AM, Weiping Zhang wrote: > Hi, > > For the virtio block device, there is actually no hard limit on the max request > size, and the virtio_blk driver passes -1 to blk_queue_max_hw_sectors(q, -1U). > But it doesn't work, because there is a default upper limit, > BLK_DEF_MAX_SECTORS (1280 sectors). So this series wants to add a new helper >
2018 Apr 05
0
[RFC PATCH 2/2] virtio_blk: add new module parameter to set max request size
Actually there is no upper limitation, so add a new module parameter to provide a way to set a proper max request size for virtio block. Using a larger request size can improve sequential performance in theory, and reduces the interaction between guest and hypervisor. Signed-off-by: Weiping Zhang <zhangweiping at didichuxing.com> --- drivers/block/virtio_blk.c | 32
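
To make the mechanism concrete, here is a minimal sketch of what such a module parameter could look like; the parameter name, default value and helper function are illustrative assumptions, not the actual patch. blk_queue_max_hw_sectors() and BLK_DEF_MAX_SECTORS are the interfaces named in the thread.

/* Illustrative sketch only -- parameter name and default are assumptions,
 * not the actual patch.  The point is that without an explicit call to
 * blk_queue_max_hw_sectors() the block layer caps requests at
 * BLK_DEF_MAX_SECTORS (1280 sectors = 640 KiB). */
#include <linux/module.h>
#include <linux/blkdev.h>

static unsigned int max_request_size_kb = 4096;	/* KiB, illustrative default */
module_param(max_request_size_kb, uint, 0444);
MODULE_PARM_DESC(max_request_size_kb, "maximum request size in KiB");

static void virtblk_set_queue_limit(struct request_queue *q)
{
	/* convert KiB to 512-byte sectors before applying the limit */
	blk_queue_max_hw_sectors(q, max_request_size_kb * 2);
}
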
2011 Aug 08
7
“bio too big” regression and silent data corruption in 3.0
tl;dr version: 3.0 produces “bio too big” dmesg entries and silently corrupts data in “meta-raid1/data-single” configurations on disks with different max_hw_sectors, where 2.6.38 worked fine. tl;dr side-issue: on-line removal of partitions holding “single” data attempts to create raid0 (rather than single) block groups. If it can't get enough room for raid0 over all remaining disks, it
2019 May 09
1
[nbdkit PATCH] plugins: Use static buffer for plugin_zeroes
No need to calloc/free a buffer every time NBD_CMD_WRITE_ZEROES has to fall back to a .pwrite call. Just reserve the maximum buffer up front in our bss. Signed-off-by: Eric Blake <eblake@redhat.com> --- I noticed that buf was a candidate for CLEANUP_FREE, but then further noticed that we can avoid the calloc/free altogether if we don't mind the bss being 64M larger. server/plugins.c |
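
As a rough illustration of the idea (not the actual nbdkit code; the function names and the 64M cap constant here are assumptions), a single statically allocated zero buffer can serve every zero-fill fallback:

/* Sketch only: one static zeroed buffer replaces a calloc/free pair on
 * every write-zeroes fallback.  MAX_REQUEST_SIZE stands in for the
 * server's per-request cap (64M in the mail above).  The buffer is never
 * written, so it stays zero and costs only bss space. */
#include <stdint.h>

#define MAX_REQUEST_SIZE (64 * 1024 * 1024)

static char zero_buf[MAX_REQUEST_SIZE];		/* lives in .bss, already zero */

static int
zero_by_writing (int (*do_pwrite) (const void *buf, uint32_t count, uint64_t offset),
                 uint32_t count, uint64_t offset)
{
  while (count > 0) {
    uint32_t n = count < MAX_REQUEST_SIZE ? count : MAX_REQUEST_SIZE;
    if (do_pwrite (zero_buf, n, offset) == -1)
      return -1;
    offset += n;
    count -= n;
  }
  return 0;
}
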
2019 Apr 01
1
[PATCH nbdkit] Add readahead filter.
A suggested readahead filter. I've only lightly tested this, but it seems to work fine with qemu-img convert. The commit needs proper tests. Rich.
2017 Jan 09
3
[RFC PATCH] vring: Force use of DMA API for ARM-based systems
On 06/01/17 21:51, Andy Lutomirski wrote: > On Fri, Jan 6, 2017 at 10:32 AM, Robin Murphy <robin.murphy at arm.com> wrote: >> On 06/01/17 17:48, Jean-Philippe Brucker wrote: >>> Hi Will, >>> >>> On 20/12/16 15:14, Will Deacon wrote: >>>> Booting Linux on an ARM fastmodel containing an SMMU emulation results >>>> in an unexpected I/O
2015 Sep 10
6
[RFC PATCH 0/2] virtio nvme
Hi all, These 2 patches add virtio-nvme to the kernel and qemu, basically modified from the virtio-blk and nvme code. As the title says, this is a request for your comments. Play it in Qemu with: -drive file=disk.img,format=raw,if=none,id=D22 \ -device virtio-nvme-pci,drive=D22,serial=1234,num_queues=4 The goal is to have a full NVMe stack from the VM guest (virtio-nvme) to the host (vhost_nvme) to LIO NVMe-over-fabrics
2019 Apr 01
1
[PATCH nbdkit v2] Add readahead filter.
Simpler, and including tests. Rich.
2016 Dec 20
4
[RFC PATCH] vring: Force use of DMA API for ARM-based systems
Booting Linux on an ARM fastmodel containing an SMMU emulation results in an unexpected I/O page fault from the legacy virtio-blk PCI device: [ 1.211721] arm-smmu-v3 2b400000.smmu: event 0x10 received: [ 1.211800] arm-smmu-v3 2b400000.smmu: 0x00000000fffff010 [ 1.211880] arm-smmu-v3 2b400000.smmu: 0x0000020800000000 [ 1.211959] arm-smmu-v3 2b400000.smmu: 0x00000008fa081002 [
2017 Jan 06
2
[RFC PATCH] vring: Force use of DMA API for ARM-based systems
On 06/01/17 17:48, Jean-Philippe Brucker wrote: > Hi Will, > > On 20/12/16 15:14, Will Deacon wrote: >> Booting Linux on an ARM fastmodel containing an SMMU emulation results >> in an unexpected I/O page fault from the legacy virtio-blk PCI device: >> >> [ 1.211721] arm-smmu-v3 2b400000.smmu: event 0x10 received: >> [ 1.211800] arm-smmu-v3
2017 Jan 20
7
[nbdkit PATCH 0/5] Add WRITE_ZEROES support
The upstream protocol recently promoted NBD_CMD_WRITE_ZEROES from experimental to a documented extension. Exposing support for this allows plugin writers to create sparse files when driven by a client that knows how to use the extension; meanwhile, even if a plugin does not support this extension, the server benefits from less network traffic from the client. Eric Blake (5): protocol: Support
2017 Nov 15
3
[nbdkit PATCH 0/2] Better response to bogus NBD_CMD_READ
When facing a malicious client that is sending bogus NBD_CMD_READ, we should make sure that we never end up in a situation where the tail of a command that we have already diagnosed as bad could be treated as further commands. Eric Blake (2): connections: Report mid-message EOF as fatal connections: Hang up early on insanely large WRITE requests src/connections.c | 35
2023 Sep 03
5
[PATCH libnbd 0/5] copy: Allow human sizes for --queue-size, etc
See companion patch: Subject: [PATCH nbdkit] server: Move size parsing code (nbdkit_parse_size) to common/include This is the second part of the patch. It adds the new human_size_parse function to libnbd and then uses it for parsing --queue-size, --request-size and --sparse. The main complication here is that there was already a common/utils/human-size.h header which ends up (eventually)
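
For context, a human-size parser of this kind maps suffixed strings such as "64M" to byte counts. The sketch below is only an assumption about the general shape, not the libnbd implementation of human_size_parse:

/* Illustrative sketch of a human-size parser ("64M" -> 67108864).
 * Overflow checking is omitted for brevity; not the libnbd code. */
#include <stdint.h>
#include <stdlib.h>

static int64_t
parse_human_size (const char *str)
{
  char *end;
  int64_t size = strtoll (str, &end, 10);

  if (end == str || size < 0)
    return -1;                                  /* no digits, or negative */
  switch (*end) {
  case '\0': break;                             /* plain byte count */
  case 'k': case 'K': size <<= 10; end++; break;
  case 'm': case 'M': size <<= 20; end++; break;
  case 'g': case 'G': size <<= 30; end++; break;
  case 't': case 'T': size <<= 40; end++; break;
  default: return -1;                           /* unknown suffix */
  }
  if (*end != '\0')
    return -1;                                  /* trailing garbage */
  return size;
}
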
2011 Sep 01
9
[PATCH V4 0/3] xen-blkfront/blkback discard support
Dear list, This is V4 of the trim support for xen-blkfront/blkback. We have now renamed BLKIF_OP_TRIM to BLKIF_OP_DISCARD, dropped all the "trim" naming in the patches, and use "discard" instead. We also updated the blkif_x86_{32|64}_request helpers, otherwise we would hit problems when using a non-native protocol. This patch has been tested with both an SSD and a raw file; with the SSD we will
2013 May 13
22
[PATCH] xen-blk(front|back): Handle large physical sector disks
I accidentally realized today that any domUs using the paravirt disk driver potentially suffer from poor performance when they get handed a physical volume and partitioning is done inside the guest. The physical volume passed in has to be one that has the compat 512 logical sector size but hints its real sector size (e.g. 4096) as the physical sector size. In dom0 handling is correct and
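
For reference, the block layer lets a driver expose exactly this combination: 512-byte logical sectors for compatibility plus a physical-sector-size hint. A generic sketch of the concept (not the actual xen-blkfront/blkback change):

/* Illustrative only: advertise 512-byte logical sectors (for compat
 * addressing) while hinting the real 4096-byte physical sector size,
 * which is the situation described in the mail above. */
#include <linux/blkdev.h>

static void setup_sector_sizes(struct request_queue *q)
{
	blk_queue_logical_block_size(q, 512);	/* compat addressing unit */
	blk_queue_physical_block_size(q, 4096);	/* real media sector size */
	blk_queue_io_min(q, 4096);		/* hint: avoid read-modify-write */
}
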
2020 Aug 06
2
[PATCH nbdkit] Experiment with parallel python plugin
This is a quick hack to experiment with the parallel threading model in the python plugin. Changes: - Use aligned buffers to make it possible to use O_DIRECT. Using parallel I/O does not buy us much when using buffered I/O: pwrite() copies data to the page cache, and pread() reads data from the page cache. - Disable extents in the file plugin. This way we can compare it with the python
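
The aligned-buffer point generalizes beyond the python plugin: O_DIRECT requires the buffer (and usually the offset and length) to be aligned to the device block size, so the buffer comes from posix_memalign() rather than malloc(). A small standalone C illustration of that constraint (the file name and sizes are arbitrary, and the target filesystem must support O_DIRECT):

/* Sketch of the O_DIRECT alignment requirement, not nbdkit code. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int
main (void)
{
  const size_t align = 4096, len = 1024 * 1024;	/* block-size aligned buffer */
  void *buf;

  if (posix_memalign (&buf, align, len) != 0)
    return 1;
  memset (buf, 0, len);

  int fd = open ("/tmp/testfile", O_WRONLY | O_CREAT | O_DIRECT, 0644);
  if (fd == -1) {
    free (buf);
    return 1;
  }

  /* With O_DIRECT the kernel bypasses the page cache, so pwrite() transfers
   * straight from this aligned buffer to the device. */
  ssize_t r = pwrite (fd, buf, len, 0);

  close (fd);
  free (buf);
  return r == (ssize_t) len ? 0 : 1;
}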