2012 Aug 10 (1): virtio-scsi <-> vhost multi lun/adapter performance results with 3.6-rc0
..._write_cache=1 to expose
WCE=1 via virtio-scsi to SCSI core.
Using a KVM guest with 32x vCPUs and 4G memory, the results for 4x
random I/O now look like:
workload         | jobs | 25% write / 75% read | 75% write / 25% read
-----------------|------|----------------------|---------------------
1x rd_mcp LUN    |  8   | ~155K IOPs           | ~145K IOPs
16x rd_mcp LUNs  |  16  | ~315K IOPs           | ~305K IOPs
32x rd_mcp LUNs  |  16  | ~425K IOPs           | ~410K IOPs
The full fio randrw results for the six test cases are attached below.
Also, using a workload of fio numjobs >...
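The 4k random mixed-I/O cases above map directly onto a fio randrw job. A minimal sketch of such a job file follows; the device path, I/O engine, and queue depth are assumptions for illustration, not the setup from the original mail (whose full job results were attached there):

```ini
; Hypothetical fio job approximating the 25% write / 75% read, 8-job case.
; filename, ioengine, and iodepth are guesses, not the original configuration.
[global]
ioengine=libaio
direct=1
bs=4k
rw=randrw
rwmixread=75
iodepth=32
runtime=60
time_based=1

[rd_mcp-lun]
filename=/dev/sdb
numjobs=8
group_reporting=1
```

The 75% write / 25% read column corresponds to the same job with rwmixread=25, and the multi-LUN rows to one job section per LUN.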
2015 Sep 18 (3): [RFC PATCH 0/2] virtio nvme
On Thu, 2015-09-17 at 17:55 -0700, Nicholas A. Bellinger wrote:
> On Thu, 2015-09-17 at 16:31 -0700, Ming Lin wrote:
> > On Wed, 2015-09-16 at 23:10 -0700, Nicholas A. Bellinger wrote:
> > > Hi Ming & Co,
> > >
> > > On Thu, 2015-09-10 at 10:28 -0700, Ming Lin wrote:
> > > > On Thu, 2015-09-10 at 15:38 +0100, Stefan Hajnoczi wrote:
> >
2015 Sep 23 (3): [RFC PATCH 0/2] virtio nvme
...host-scsi etc) talk to LIO
> > backend driver(fileio, iblock etc) with SCSI commands.
> >
> > Did you mean the "tcm_eventfd_nvme" driver need to translate NVMe
> > commands to SCSI commands and then submit to backend driver?
> >
>
> IBLOCK + FILEIO + RD_MCP don't speak SCSI, they simply process I/Os with
> LBA + length based on SGL memory or pass along a FLUSH with LBA +
> length.
>
> So once the 'tcm_eventfd_nvme' driver on KVM host receives a nvme host
> hardware frame via eventfd, it would decode the frame and send along...
2015 Sep 17 (0): [RFC PATCH 0/2] virtio nvme
...Me target code needs to function in at least two different modes:
- Direct mapping of nvme backend driver provided hw queues to nvme
fabric driver provided hw queues.
- Decoding of NVMe command set for basic Read/Write/Flush I/O for
submission to existing backend drivers (eg: iblock, fileio, rd_mcp)
With the former case, it's safe to assume anywhere from a very small
amount of code to no code involved for fast-path operation.
For more involved logic like PR, ALUA, and EXTENDED_COPY, I think both
modes will still most likely handle some aspects of this in softwar...
2015 Sep 18 (0): [RFC PATCH 0/2] virtio nvme
...y, LIO frontend driver(iscsi, fc, vhost-scsi etc) talk to LIO
> backend driver(fileio, iblock etc) with SCSI commands.
>
> Did you mean the "tcm_eventfd_nvme" driver need to translate NVMe
> commands to SCSI commands and then submit to backend driver?
>
IBLOCK + FILEIO + RD_MCP don't speak SCSI, they simply process I/Os with
LBA + length based on SGL memory or pass along a FLUSH with LBA +
length.
So once the 'tcm_eventfd_nvme' driver on KVM host receives a nvme host
hardware frame via eventfd, it would decode the frame and send along the
Read/Write/Flush whe...
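The decode step described here can be sketched in outline. This is a hedged illustration, not the actual 'tcm_eventfd_nvme' code (which was never merged): the function name and return shape are invented; only the SQE layout and opcodes come from the NVMe spec (Flush 0x00, Write 0x01, Read 0x02; starting LBA in CDW10/11, 0-based block count in CDW12).

```python
# Hypothetical sketch of decoding an NVMe submission-queue entry (SQE)
# into the LBA + length form that LIO backends (IBLOCK, FILEIO, RD_MCP)
# consume. Field offsets follow the NVMe spec: CDW10 starts at byte 40
# of the 64-byte SQE; the driver-side names here are illustrative.
import struct

NVME_CMD_FLUSH = 0x00
NVME_CMD_WRITE = 0x01
NVME_CMD_READ = 0x02

def decode_nvme_rw(sqe: bytes):
    """Return (kind, starting_lba, num_blocks) for a 64-byte SQE.

    CDW10/11 hold the 64-bit starting LBA; the low 16 bits of CDW12
    hold the 0-based number of logical blocks.
    """
    opcode = sqe[0]
    if opcode == NVME_CMD_FLUSH:
        # A flush carries no LBA/length of its own in this sketch.
        return ("flush", None, None)
    # Starting LBA: little-endian u64 spanning CDW10/11 (bytes 40..47).
    (slba,) = struct.unpack_from("<Q", sqe, 40)
    # Number of logical blocks: CDW12 bits 15:0, 0-based, so add 1.
    (cdw12,) = struct.unpack_from("<I", sqe, 48)
    nlb = (cdw12 & 0xFFFF) + 1
    kind = "read" if opcode == NVME_CMD_READ else "write"
    return (kind, slba, nlb)
```

The point of the excerpt is that the backend never sees NVMe (or SCSI) framing at all: after this decode, only the (kind, LBA, length) triple plus the SGL memory is handed down.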
2015 Sep 27 (0): [RFC PATCH 0/2] virtio nvme
...Wed, 2015-09-23 at 15:58 -0700, Ming Lin wrote:
> On Fri, 2015-09-18 at 14:09 -0700, Nicholas A. Bellinger wrote:
> > On Fri, 2015-09-18 at 11:12 -0700, Ming Lin wrote:
> > > On Thu, 2015-09-17 at 17:55 -0700, Nicholas A. Bellinger wrote:
<SNIP>
> > IBLOCK + FILEIO + RD_MCP don't speak SCSI, they simply process I/Os with
> > LBA + length based on SGL memory or pass along a FLUSH with LBA +
> > length.
> >
> > So once the 'tcm_eventfd_nvme' driver on KVM host receives a nvme host
> > hardware frame via eventfd, it would decode...
2015 Sep 10 (5): [RFC PATCH 0/2] virtio nvme
On Thu, 2015-09-10 at 15:38 +0100, Stefan Hajnoczi wrote:
> On Thu, Sep 10, 2015 at 6:48 AM, Ming Lin <mlin at kernel.org> wrote:
> > These 2 patches added virtio-nvme to kernel and qemu,
> > basically modified from virtio-blk and nvme code.
> >
> > As title said, request for your comments.
> >
> > Play it in Qemu with:
> > -drive