Displaying 20 results from an estimated 100 matches similar to: "[PATCH 247/493] drivers/block: remove use of __devinit"

2020 Aug 19
0
[PATCH 28/28] nvme-pci: use dma_alloc_pages backed dmapools
Switch from coherent DMA pools to those backed by dma_alloc_pages. This helps devices with non-coherent DMA avoid host accesses to uncached memory on every submission of a larger-than-single-entry I/O. Signed-off-by: Christoph Hellwig <hch at lst.de> --- drivers/nvme/host/pci.c | 80 ++++++++++++++++++++--------------------- 1 file changed, 40 insertions(+), 40 deletions(-) diff --git
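For readers unfamiliar with the new API, a minimal sketch of the allocation pattern this patch moves to, assuming the dma_alloc_pages()/dma_free_pages()/dma_sync_single_for_device() signatures from the kernel DMA API; the size, direction, and surrounding flow are illustrative, not taken from the actual pci.c diff:

    struct page *page;
    dma_addr_t dma;

    /* Unlike dma_alloc_coherent(), the CPU mapping here may be cached,
     * so ownership must be transferred explicitly before device access. */
    page = dma_alloc_pages(dev, PAGE_SIZE, &dma, DMA_TO_DEVICE, GFP_KERNEL);
    if (!page)
        return -ENOMEM;

    /* ... CPU builds the PRP/SGL list in page_address(page) ... */

    /* Flush caches and hand the buffer to the device. */
    dma_sync_single_for_device(dev, dma, PAGE_SIZE, DMA_TO_DEVICE);

    /* ... after the command completes ... */
    dma_free_pages(dev, PAGE_SIZE, page, dma, DMA_TO_DEVICE);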
2014 Jul 26
0
[RFC PATCH 03/11] PCI/MSI: Refactor pci_dev_msi_enabled()
pci_dev_msi_enabled() is used to check whether a device has MSI/MSI-X enabled. Refactor this function to support checking for only MSI or only MSI-X being enabled. Signed-off-by: Yijing Wang <wangyijing at huawei.com> --- arch/cris/arch-v32/drivers/pci/bios.c | 2 +- arch/frv/mb93090-mb00/pci-vdk.c | 2 +- arch/ia64/pci/pci.c | 4 ++---
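The shape of such a refactor, sketched with hypothetical flag names (MSI_TYPE/MSIX_TYPE are placeholders; the actual patch defines its own constants) against the real msi_enabled/msix_enabled bits in struct pci_dev:

    #define MSI_TYPE  (1 << 0)  /* hypothetical */
    #define MSIX_TYPE (1 << 1)  /* hypothetical */

    static inline bool pci_dev_msi_enabled(struct pci_dev *dev, int type)
    {
        if ((type & MSI_TYPE) && dev->msi_enabled)
            return true;
        if ((type & MSIX_TYPE) && dev->msix_enabled)
            return true;
        return false;
    }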
2015 Sep 27
0
[RFC PATCH 0/2] virtio nvme
On Wed, 2015-09-23 at 15:58 -0700, Ming Lin wrote: > On Fri, 2015-09-18 at 14:09 -0700, Nicholas A. Bellinger wrote: > > On Fri, 2015-09-18 at 11:12 -0700, Ming Lin wrote: > > > On Thu, 2015-09-17 at 17:55 -0700, Nicholas A. Bellinger wrote: <SNIP> > > IBLOCK + FILEIO + RD_MCP don't speak SCSI, they simply process I/Os with > > LBA + length based on SGL
2014 Aug 20
1
[RFC PATCH 03/11] PCI/MSI: Refactor pci_dev_msi_enabled()
> -----Original Message----- > From: linux-pci-owner at vger.kernel.org [mailto:linux-pci-owner at vger.kernel.org] > On Behalf Of Yijing Wang > Sent: Saturday, July 26, 2014 8:39 AM > To: linux-kernel at vger.kernel.org > Cc: Xinwei Hu; Wuyun; Bjorn Helgaas; linux-pci at vger.kernel.org; > Paul.Mundt at huawei.com; James E.J. Bottomley; Marc Zyngier; linux-arm- > kernel at
2020 Mar 11
0
[PATCH RFC v2 12/24] hpsa: use reserved commands
On Wed, Mar 11, 2020 at 12:25:38AM +0800, John Garry wrote: > From: Hannes Reinecke <hare at suse.com> > > Enable the use of reserved commands, and drop the hand-crafted > command allocation. > > Signed-off-by: Hannes Reinecke <hare at suse.com> > --- > drivers/scsi/hpsa.c | 147 ++++++++++++++------------------------------ > drivers/scsi/hpsa.h | 1 -
2016 Aug 17
0
[PATCH 15/15] block: Add FIXME comment to handle device_add_disk error
Done with coccinelle: @@ expression e1, e2, e3; identifier rc; @@ ( rc = device_add_disk(e1, e2, e3); | + /* FIXME: handle error. */ device_add_disk(e1, e2, e3); ) Signed-off-by: Fam Zheng <famz at redhat.com> --- arch/m68k/emu/nfblock.c | 1 + arch/um/drivers/ubd_kern.c | 1 + arch/xtensa/platforms/iss/simdisk.c | 1 + drivers/block/DAC960.c
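The semantic patch quoted above, restored to its multi-line form (content unchanged from the snippet):

    @@
    expression e1, e2, e3;
    identifier rc;
    @@
    (
      rc = device_add_disk(e1, e2, e3);
    |
    + /* FIXME: handle error. */
      device_add_disk(e1, e2, e3);
    )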
2015 Nov 18
3
[RFC PATCH 0/2] Google extension to improve qemu-nvme performance
Hi Rob & Mihai, I wrote vhost-nvme patches on top of Christoph's NVMe target. vhost-nvme still uses mmio, so the guest OS can run an unmodified NVMe driver. But the tests I have done didn't show competitive performance compared to virtio-blk/virtio-scsi. The bottleneck is in mmio. Your nvme vendor extension patches greatly reduce the number of MMIO writes. So I'd like to push it
2013 Mar 27
0
[PATCH 04/22] block: Convert bio_for_each_segment() to bvec_iter
More prep work for immutable biovecs - with immutable bvecs, drivers won't be able to use the biovec directly; they'll need to use helpers that take into account bio->bi_iter.bi_bvec_done. This updates callers for the new usage without changing the implementation yet. Signed-off-by: Kent Overstreet <koverstreet at google.com> Cc: Jens Axboe <axboe at kernel.dk> Cc: Geert
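A minimal sketch of the iteration style the series converts drivers to, using the bvec_iter form of bio_for_each_segment(); process() is a stand-in for driver-specific work:

    struct bio_vec bvec;
    struct bvec_iter iter;

    bio_for_each_segment(bvec, bio, iter) {
        /* bvec is a by-value copy computed from bio->bi_iter (including
         * any partial progress in bi_bvec_done), so the driver never
         * dereferences the biovec array directly. */
        void *buf = page_address(bvec.bv_page) + bvec.bv_offset;
        process(buf, bvec.bv_len);
    }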
2020 Mar 11
0
[PATCH RFC v2 01/24] scsi: add 'nr_reserved_cmds' field to the SCSI host template
On 3/11/20 12:08 AM, Ming Lei wrote: > On Wed, Mar 11, 2020 at 12:25:27AM +0800, John Garry wrote: >> From: Hannes Reinecke <hare at suse.com> >> >> Add a new field 'nr_reserved_cmds' to the SCSI host template to >> instruct the block layer to set aside a tag space for reserved >> commands. >> >> Signed-off-by: Hannes Reinecke <hare at
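How a driver might set the proposed field, assuming only what the quoted patch states (the field asks the block layer to set aside tag space); the other values are illustrative:

    static struct scsi_host_template example_sht = {
        .name             = "example",
        .can_queue        = 128,
        /* Proposed by this RFC: tags reserved for internal commands,
         * replacing hand-crafted command allocation. */
        .nr_reserved_cmds = 4,
    };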
2020 Aug 19
39
a saner API for allocating DMA addressable pages
Hi all, this series replaces the DMA_ATTR_NON_CONSISTENT flag to dma_alloc_attrs with a separate new dma_alloc_pages API, which is available on all platforms. In addition to cleaning up the convoluted code path, this ensures that other drivers that have asked for better support for non-coherent DMA to pages, without incurring bounce buffering, can finally be properly supported. I'm still a
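The API change at a glance, as described in the cover letter; size and direction here are illustrative:

    /* Old path, removed by this series: */
    void *buf = dma_alloc_attrs(dev, size, &dma, GFP_KERNEL,
                                DMA_ATTR_NON_CONSISTENT);

    /* New path, available on all platforms: */
    struct page *page = dma_alloc_pages(dev, size, &dma,
                                        DMA_BIDIRECTIONAL, GFP_KERNEL);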
2015 Sep 23
3
[RFC PATCH 0/2] virtio nvme
On Fri, 2015-09-18 at 14:09 -0700, Nicholas A. Bellinger wrote: > On Fri, 2015-09-18 at 11:12 -0700, Ming Lin wrote: > > On Thu, 2015-09-17 at 17:55 -0700, Nicholas A. Bellinger wrote: > > > On Thu, 2015-09-17 at 16:31 -0700, Ming Lin wrote: > > > > On Wed, 2015-09-16 at 23:10 -0700, Nicholas A. Bellinger wrote: > > > > > Hi Ming & Co, > >
2015 Sep 23
3
[RFC PATCH 0/2] virtio nvme
On Fri, 2015-09-18 at 14:09 -0700, Nicholas A. Bellinger wrote: > On Fri, 2015-09-18 at 11:12 -0700, Ming Lin wrote: > > On Thu, 2015-09-17 at 17:55 -0700, Nicholas A. Bellinger wrote: > > > On Thu, 2015-09-17 at 16:31 -0700, Ming Lin wrote: > > > > On Wed, 2015-09-16 at 23:10 -0700, Nicholas A. Bellinger wrote: > > > > > Hi Ming & Co, > >
2019 Mar 19
3
virtio-blk: should num_vqs be limited by num_possible_cpus()?
Hi Jason, On 3/18/19 3:47 PM, Jason Wang wrote: > > On 2019/3/15 8:41, Cornelia Huck wrote: >> On Fri, 15 Mar 2019 12:50:11 +0800 >> Jason Wang <jasowang at redhat.com> wrote: >> >>> Or something like I proposed several years ago? >>> https://do-db2.lkml.org/lkml/2014/12/25/169 >>> >>> Btw, for virtio-net, I think we actually
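A sketch of the clamp being discussed, written in the style of drivers/block/virtio_blk.c probe code; this is the thread's proposal, not necessarily the patch that was eventually merged:

    unsigned short num_vqs;
    int err;

    err = virtio_cread_feature(vdev, VIRTIO_BLK_F_MQ,
                               struct virtio_blk_config, num_queues,
                               &num_vqs);
    if (err)
        num_vqs = 1;

    /* The question in this thread: cap the device-advertised queue
     * count at the number of possible CPUs. */
    num_vqs = min_t(unsigned int, num_vqs, num_possible_cpus());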
2006 Jul 26
5
linux-2.6-xen.hg
Hi, Is the http://xenbits.xensource.com/linux-2.6-xen.hg tree still being updated? If not, what's the preferred Linux tree to track that has all of the Xen bits? Thanks, Muli
2015 Sep 10
6
[RFC PATCH 0/2] virtio nvme
Hi all, These 2 patches add virtio-nvme to the kernel and qemu, basically modified from the virtio-blk and nvme code. As the title says, this is a request for your comments. Play it in Qemu with: -drive file=disk.img,format=raw,if=none,id=D22 \ -device virtio-nvme-pci,drive=D22,serial=1234,num_queues=4 The goal is to have a full NVMe stack from VM guest (virtio-nvme) to host (vhost_nvme) to LIO NVMe-over-fabrics
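The quoted invocation, expanded into a full command line (the qemu-system-x86_64 binary name and the assumption that the RFC's virtio-nvme-pci device is built in are mine, not from the post):

    qemu-system-x86_64 \
        -drive file=disk.img,format=raw,if=none,id=D22 \
        -device virtio-nvme-pci,drive=D22,serial=1234,num_queues=4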