search for: prp1

Displaying 8 results from an estimated 8 matches for "prp1".

2015 Nov 20
15
[RFC PATCH 0/9] vhost-nvme: new qemu nvme backend using nvme target
Hi, This is the first attempt to add a new qemu nvme backend using the in-kernel nvme target. Most of the code is ported from qemu-nvme, and some is borrowed from Hannes Reinecke's rts-megasas. It's similar to vhost-scsi, but doesn't use virtio. The advantage is that the guest can run an unmodified NVMe driver, so the guest can be any OS that has an NVMe driver. The goal is to get as good performance as
2015 Nov 19
2
[PATCH -qemu] nvme: support Google vendor extension
On 18/11/2015 06:47, Ming Lin wrote:
> @@ -726,7 +798,11 @@ static void nvme_process_db(NvmeCtrl *n, hwaddr addr, int val)
>      }
>
>      start_sqs = nvme_cq_full(cq) ? 1 : 0;
> -    cq->head = new_head;
> +    /* When the mapped pointer memory area is set up, we don't rely on
> +     * the MMIO written values to update the head pointer. */
>
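For context, the hunk above guards the head-pointer update: once the guest has registered a shadow doorbell buffer, the device reads the completion queue head from guest memory rather than trusting the MMIO-written value. A minimal sketch of that logic, assuming the per-queue shadow address is stored in cq->db_addr as elsewhere in this thread:

/* Sketch, not the verbatim patch: pick the CQ head source. If the
 * guest registered a shadow doorbell buffer (cq->db_addr != 0), read
 * the head the guest wrote there; otherwise use the MMIO value. */
if (cq->db_addr) {
    uint32_t shadow_head;

    pci_dma_read(&n->parent_obj, cq->db_addr,
                 &shadow_head, sizeof(shadow_head));
    cq->head = le32_to_cpu(shadow_head);
} else {
    cq->head = new_head;
}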
2015 Nov 18
3
[RFC PATCH 0/2] Google extension to improve qemu-nvme performance
Hi Rob & Mihai, I wrote vhost-nvme patches on top of Christoph's NVMe target. vhost-nvme still uses mmio, so the guest OS can run an unmodified NVMe driver. But the tests I have done didn't show competitive performance compared to virtio-blk/virtio-scsi; the bottleneck is in mmio. Your nvme vendor extension patches greatly reduce the number of MMIO writes. So I'd like to push it
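For context, the vendor extension being discussed is the shadow-doorbell/eventidx scheme (essentially what later became NVMe 1.3's Doorbell Buffer Config): the guest records tail/head values in shared memory and only performs an MMIO doorbell write when the device's published eventidx says one is needed. A sketch of the guest-side suppression test, matching the eventidx convention used by virtio (function name hypothetical):

/* Sketch: decide whether an MMIO doorbell write is still required.
 * All arithmetic is modulo 2^16, matching ring-index wraparound.
 * Returns true iff the device has not yet observed a value in the
 * half-open interval (old, new]. */
static bool nvme_need_mmio_doorbell(uint16_t old, uint16_t new,
                                    uint16_t event_idx)
{
    return (uint16_t)(new - event_idx - 1) < (uint16_t)(new - old);
}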
2015 Nov 20
0
[PATCH -qemu] nvme: support Google vendor extension
...r_set_handler(&sq->notifier, nvme_sq_notifier);
+    memory_region_add_eventfd(&n->iomem,
+        0x1000 + offset, 4, true, sq->sqid * 2, &sq->notifier);
+}
+
 static uint16_t nvme_set_db_memory(NvmeCtrl *n, const NvmeCmd *cmd)
 {
     uint64_t db_addr = le64_to_cpu(cmd->prp1);
@@ -565,6 +603,7 @@ static uint16_t nvme_set_db_memory(NvmeCtrl *n, const NvmeCmd *cmd)
     /* Submission queue tail pointer location, 2 * QID * stride. */
     sq->db_addr = db_addr + 2 * i * 4;
     sq->eventidx_addr = eventidx_addr + 2 * i * 4;
+    nvme_...
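The hunk registers a per-submission-queue ioeventfd on the doorbell region, so a guest doorbell write wakes the backend without a full MMIO exit. A minimal sketch of the whole registration step, assuming a 4-byte doorbell stride and a hypothetical helper name (note the review later in these results argues match_data should be false):

/* Sketch (helper name hypothetical): wire one SQ's doorbell to an
 * eventfd. Doorbell registers start at BAR0 offset 0x1000; with a
 * 4-byte stride, SQ i's tail doorbell sits at 0x1000 + i * 2 * 4. */
static void nvme_init_sq_eventfd(NvmeCtrl *n, NvmeSQueue *sq)
{
    hwaddr offset = sq->sqid * 2 * 4;

    event_notifier_init(&sq->notifier, 0);
    event_notifier_set_handler(&sq->notifier, nvme_sq_notifier);
    memory_region_add_eventfd(&n->iomem, 0x1000 + offset, 4,
                              true, sq->sqid * 2, &sq->notifier);
}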
2020 Aug 19
0
[PATCH 28/28] nvme-pci: use dma_alloc_pages backed dmapools
...&prp_dma);
 	if (!prp_list)
 		return BLK_STS_RESOURCE;
@@ -653,6 +658,8 @@ static blk_status_t nvme_pci_setup_prps(struct nvme_dev *dev,
 		dma_len = sg_dma_len(sg);
 	}
+	dma_sync_single_for_device(dev->dev, prp_dma, i * sizeof(*prp_list),
+			DMA_TO_DEVICE);
 done:
 	cmnd->dptr.prp1 = cpu_to_le64(sg_dma_address(iod->sg));
 	cmnd->dptr.prp2 = cpu_to_le64(iod->first_dma);
@@ -706,10 +713,10 @@ static blk_status_t nvme_pci_setup_sgls(struct nvme_dev *dev,
 	}
 	if (entries <= (256 / sizeof(struct nvme_sgl_desc))) {
-		pool = dev->prp_small_pool;
+		pool = &de...
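The dma_sync_single_for_device() call added here is the point of the patch: once the PRP-list pools are backed by dma_alloc_pages(), the memory may be non-coherent, so CPU-written PRP entries must be flushed to the device before the command is issued. A condensed sketch of the pattern (nprps and prp_addrs are hypothetical stand-ins for the driver's scatterlist walk):

/* Sketch: build a PRP list in possibly non-coherent pool memory,
 * then transfer ownership to the device before issuing the command. */
__le64 *prp_list;
dma_addr_t prp_dma;
int i;

prp_list = dma_pool_alloc(pool, GFP_ATOMIC, &prp_dma);
if (!prp_list)
    return BLK_STS_RESOURCE;

for (i = 0; i < nprps; i++)
    prp_list[i] = cpu_to_le64(prp_addrs[i]);   /* CPU-side writes */

/* flush the CPU writes so the device sees a consistent list */
dma_sync_single_for_device(dev->dev, prp_dma, i * sizeof(*prp_list),
                           DMA_TO_DEVICE);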
2015 Nov 18
0
[PATCH -qemu] nvme: support Google vendor extension
...timer_new_ns(QEMU_CLOCK_VIRTUAL, nvme_post_cqes, cq);
@@ -528,6 +543,40 @@ static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req)
     return NVME_SUCCESS;
 }
 
+static uint16_t nvme_set_db_memory(NvmeCtrl *n, const NvmeCmd *cmd)
+{
+    uint64_t db_addr = le64_to_cpu(cmd->prp1);
+    uint64_t eventidx_addr = le64_to_cpu(cmd->prp2);
+    int i;
+
+    /* Addresses should not be NULL and should be page aligned. */
+    if (db_addr == 0 || db_addr & (n->page_size - 1) ||
+        eventidx_addr == 0 || eventidx_addr & (n->page_size - 1)) {
+        return NV...
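The guest passes the two shadow buffers via the command's PRP fields; the truncated return is presumably an invalid-field status. The rest of the function, visible in other snippets in these results, records a per-queue slot in each buffer. A sketch of that layout, assuming a 4-byte doorbell stride:

/* Sketch: per-queue shadow doorbell layout, mirroring the hardware
 * doorbell registers (two 4-byte slots per queue ID):
 *   offset 2*qid*4     = SQ tail (written by guest)
 *   offset (2*qid+1)*4 = CQ head (written by guest)
 * The eventidx buffer uses the same layout, written by the device. */
for (i = 1; i < n->num_queues; i++) {
    if (n->sq[i]) {
        n->sq[i]->db_addr = db_addr + 2 * i * 4;
        n->sq[i]->eventidx_addr = eventidx_addr + 2 * i * 4;
    }
    if (n->cq[i]) {
        n->cq[i]->db_addr = db_addr + (2 * i + 1) * 4;
        n->cq[i]->eventidx_addr = eventidx_addr + (2 * i + 1) * 4;
    }
}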
2015 Nov 20
2
[PATCH -qemu] nvme: support Google vendor extension
...4, true, sq->sqid * 2, &sq->notifier);

likewise should be

    0x1000 + offset, 4, false, 0, &sq->notifier

Otherwise looks good!

Paolo

> +}
> +
>  static uint16_t nvme_set_db_memory(NvmeCtrl *n, const NvmeCmd *cmd)
>  {
>      uint64_t db_addr = le64_to_cpu(cmd->prp1);
> @@ -565,6 +603,7 @@ static uint16_t nvme_set_db_memory(NvmeCtrl *n, const NvmeCmd *cmd)
>      /* Submission queue tail pointer location, 2 * QID * stride. */
>      sq->db_addr = db_addr + 2 * i * 4;
>      sq->eventidx_addr = eventidx_addr + 2 * i * 4...
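The reason for the requested change: with match_data = true, the ioeventfd only fires when the guest writes the exact value passed as data (here sq->sqid * 2), but a doorbell write carries the new tail value, which varies. A sketch of the corrected registration:

/* Sketch: trigger the eventfd on any 4-byte write to the doorbell,
 * regardless of the value written. */
memory_region_add_eventfd(&n->iomem, 0x1000 + offset, 4,
                          false, /* match_data: fire on any value */
                          0,     /* data: ignored when not matching */
                          &sq->notifier);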
2020 Aug 19
39
a saner API for allocating DMA addressable pages
Hi all, this series replaces the DMA_ATTR_NON_CONSISTENT flag to dma_alloc_attrs with a separate new dma_alloc_pages API, which is available on all platforms. In addition to cleaning up the convoluted code path, this ensures that other drivers that have asked for better support for non-coherent DMA to pages without incurring bounce buffering can finally be properly supported. I'm still a
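For reference, the usage pattern the new API enables looks roughly like this (signature as posted in the series; details may have changed on merge). Pages come back device-addressable but possibly non-coherent, so ownership is handed to the device explicitly:

/* Sketch: allocate device-addressable, possibly non-coherent pages
 * and hand a CPU-filled buffer to the device. */
#include <linux/dma-mapping.h>

static int example_alloc(struct device *dev, size_t size,
                         dma_addr_t *dma_handle)
{
    struct page *page;
    void *buf;

    page = dma_alloc_pages(dev, size, dma_handle,
                           DMA_BIDIRECTIONAL, GFP_KERNEL);
    if (!page)
        return -ENOMEM;
    buf = page_address(page);

    memset(buf, 0, size);                       /* CPU-side writes */
    dma_sync_single_for_device(dev, *dma_handle, size,
                               DMA_BIDIRECTIONAL);
    return 0;
}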