search for: vritio

Displaying 11 results from an estimated 11 matches for "vritio".

Did you mean: virtio
2015 Dec 01
1
[RFC PATCH 0/9] vhost-nvme: new qemu nvme backend using nvme target
On 01/12/2015 00:20, Ming Lin wrote: > qemu-nvme: 148MB/s > vhost-nvme + google-ext: 230MB/s > qemu-nvme + google-ext + eventfd: 294MB/s > virtio-scsi: 296MB/s > virtio-blk: 344MB/s > > "vhost-nvme + google-ext" didn't get good enough performance. I'd expect it to be on par of qemu-nvme with ioeventfd but the question is: why should it be better? For
2015 Dec 01
0
[RFC PATCH 0/9] vhost-nvme: new qemu nvme backend using nvme target
...t-net, the answer is that more > zerocopy can be done if you put the data path in the kernel. > > But qemu-nvme is already using io_submit for the data path, perhaps > there's not much to gain from vhost-nvme... What do you think about virtio-nvme+vhost-nvme? I also have patch for vritio-nvme: https://git.kernel.org/cgit/linux/kernel/git/mlin/linux.git/log/?h=nvme-split/virtio Just need to change vhost-nvme to work with it. > > Paolo > > > Still tuning.
2018 Sep 20
0
[RFC PATCH 2/2] virtio/s390: fix race in ccw_io_helper()
...t fine. > * virtio_config_ops does not document these requirements if any. > * So it's up to the devices to use the stuff without shooting > themselves in the foot. > * virtio-pci does not seem to do more to avoid such problems that > we do. > > Back then when learning vritio-ccw I did ask myself such questions > and based on vrito-pci and I was like looks similar, should be > good enough. Yep, I agree. If there's nothing obvious, I think we should just leave it as it is now.
2019 Oct 04
0
CentOS-8 QEMU guest won't bring virtio_net interface up.
...u a CentOS-8 initrd, but despite I set rd.neednet=1, the virtio_net module is not loaded and the interface is not brought up. If I modprobe it by hand, it brings an eth0 interface. My initrd was created using dracut that includes network, qemu and qemu-net modules Qemu is started with EFI bios and vritio ethernet emulation Questions are: * Is it a known bug/ issue? * What Am I missing? * Lspci does see the Red Hat, Inc. Virtio network device, but nothing triggers the module load (virtio_net) * After doing modprobe virtio_net, the interface is named eth0 (it seems that it doesn't f...
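A minimal sketch, not taken from the thread above, of one way to force virtio_net into a CentOS-8 initrd, assuming dracut's standard add_drivers option and the rd.driver.pre kernel command-line parameter; the exact module list (virtio_net plus virtio_pci) is an assumption about this guest:

    # hypothetical /etc/dracut.conf.d/virtio.conf: always pull the virtio drivers into the initrd
    add_drivers+=" virtio_net virtio_pci "

    # rebuild the initrd and check that the module made it in
    dracut -f
    lsinitrd | grep virtio_net

    # alternative: load the driver early from the initrd via the guest's kernel command line
    rd.driver.pre=virtio_net

If lsinitrd already shows virtio_net, the missing piece may instead be an ip= argument (for example ip=dhcp) telling dracut which interface to configure, since rd.neednet=1 on its own may not bring an interface up.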
2015 Dec 01
2
[RFC PATCH 0/9] vhost-nvme: new qemu nvme backend using nvme target
...ally a very good choice for a cloud provider that wants to support Windows guests (together with e.g. a fast SAS emulated controller to replace virtio-scsi, and emulated igb or ixgbe to replace virtio-net). Which features are supported by NVMe and not virtio-blk? Paolo > I also have patch for vritio-nvme: > https://git.kernel.org/cgit/linux/kernel/git/mlin/linux.git/log/?h=nvme-split/virtio > > Just need to change vhost-nvme to work with it. > > > > > Paolo > > > > > Still tuning. > > >
2012 Oct 31
5
[RFC virtio-next 0/4] Introduce CAIF Virtio and reversed Vrings
...e're primarily looking for review comments related to the structure of the Virtio code. There are several options on how to structure this, and feedback is welcomed. Thanks, Sjur Sjur Brændeland (4): virtio: Move definitions to header file vring.h include/vring.h: Add support for reversed vritio rings. virtio_ring: Call callback function even when used ring is empty caif_virtio: Add CAIF over virtio drivers/net/caif/Kconfig | 9 + drivers/net/caif/Makefile | 3 + drivers/net/caif/caif_virtio.c | 627 ++++++++++++++++++++++++++++++++ drivers/r...
2015 Nov 20
15
[RFC PATCH 0/9] vhost-nvme: new qemu nvme backend using nvme target
Hi, This is the first attempt to add a new qemu nvme backend using in-kernel nvme target. Most code are ported from qemu-nvme and also borrow code from Hannes Reinecke's rts-megasas. It's similar as vhost-scsi, but doesn't use virtio. The advantage is guest can run unmodified NVMe driver. So guest can be any OS that has a NVMe driver. The goal is to get as good performance as