similar to: [RFC 0/3] VirtIO RDMA

Displaying results from an estimated 600 matches similar to: "[RFC 0/3] VirtIO RDMA"

2019 Apr 11 (1 reply)
[RFC 3/3] RDMA/virtio-rdma: VirtIO rdma driver
Signed-off-by: Yuval Shaia <yuval.shaia at oracle.com>
---
 drivers/infiniband/Kconfig                    |  1 +
 drivers/infiniband/hw/Makefile                |  1 +
 drivers/infiniband/hw/virtio/Kconfig          |  6 +
 drivers/infiniband/hw/virtio/Makefile         |  4 +
 drivers/infiniband/hw/virtio/virtio_rdma.h    | 40 +
 .../infiniband/hw/virtio/virtio_rdma_device.c | 59 ++
2019 Apr 13 (1 reply)
[RFC 3/3] RDMA/virtio-rdma: VirtIO rdma driver
On 2019/4/11 19:01, Yuval Shaia wrote:
> Signed-off-by: Yuval Shaia <yuval.shaia at oracle.com>
> ---
>  drivers/infiniband/Kconfig            | 1 +
>  drivers/infiniband/hw/Makefile        | 1 +
>  drivers/infiniband/hw/virtio/Kconfig  | 6 +
>  drivers/infiniband/hw/virtio/Makefile | 4 +
>
2020 Nov 01 (12 replies)
[PATCH mlx5-next v1 00/11] Convert mlx5 to use auxiliary bus
From: Leon Romanovsky <leonro at nvidia.com>
Changelog:
v1:
 * Renamed _mlx5_rescan_driver to be mlx5_rescan_driver_locked, like in other parts of the mlx5 driver.
 * Renamed MLX5_INTERFACE_PROTOCOL_VDPA to be MLX5_INTERFACE_PROTOCOL_VNET as a preparation for the coming series from Eli C.
 * Some small naming renames in mlx5_vdpa.
 * Refactored adev index code to make Parav's SF series
2019 Apr 11 (1 reply)
[RFC 2/3] hw/virtio-rdma: VirtIO rdma device
Signed-off-by: Yuval Shaia <yuval.shaia at oracle.com>
---
 hw/Kconfig                        |   1 +
 hw/rdma/Kconfig                   |   4 +
 hw/rdma/Makefile.objs             |   2 +
 hw/rdma/virtio/virtio-rdma-ib.c   | 287 ++++++++++++++++++++
 hw/rdma/virtio/virtio-rdma-ib.h   |  93 +++++++
 hw/rdma/virtio/virtio-rdma-main.c
2019 Apr 16 (0 replies)
[RFC 3/3] RDMA/virtio-rdma: VirtIO rdma driver
On 4/11/19 4:01 AM, Yuval Shaia wrote:
> +++ b/drivers/infiniband/hw/virtio/Kconfig
> @@ -0,0 +1,6 @@
> +config INFINIBAND_VIRTIO_RDMA
> +	tristate "VirtIO Paravirtualized RDMA Driver"
> +	depends on NETDEVICES && ETHERNET && PCI && INET
> +	---help---
> +	  This driver provides low-level support for VirtIO Paravirtual
> +	  RDMA adapter.
2019 Nov 12 (20 replies)
[PATCH hmm v3 00/14] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com> Eight of the mmu_notifier-using drivers (i915_gem, radeon_mn, umem_odp, hfi1, scif_dma, vhost, gntdev, hmm) use a common pattern where they only use invalidate_range_start/end and immediately check the invalidating range against some driver data structure to tell if the driver is interested. Half of them use an interval_tree, the others
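The pattern this cover letter describes is what the series consolidates. A rough sketch of that pattern, for orientation only and not taken from the series itself (struct my_dev, my_mn_ops and the interval-tree field are hypothetical placeholders), might look like this:

#include <linux/kernel.h>
#include <linux/mmu_notifier.h>
#include <linux/interval_tree.h>

struct my_dev {
	struct mmu_notifier mn;		/* registered against the process mm */
	struct rb_root_cached itree;	/* ranges this driver mirrors or pins */
};

static int my_invalidate_range_start(struct mmu_notifier *mn,
				     const struct mmu_notifier_range *range)
{
	struct my_dev *dev = container_of(mn, struct my_dev, mn);
	struct interval_tree_node *node;

	/* Does the invalidated range overlap anything we track? */
	node = interval_tree_iter_first(&dev->itree, range->start,
					range->end - 1);
	if (!node)
		return 0;	/* not interested in this invalidation */

	/* ... tear down or re-fault the overlapping mappings here ... */
	return 0;
}

static const struct mmu_notifier_ops my_mn_ops = {
	.invalidate_range_start = my_invalidate_range_start,
};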
2015 Nov 20 (15 replies)
[RFC PATCH 0/9] vhost-nvme: new qemu nvme backend using nvme target
Hi, this is the first attempt to add a new qemu nvme backend using the in-kernel nvme target. Most of the code is ported from qemu-nvme, and some is borrowed from Hannes Reinecke's rts-megasas. It is similar to vhost-scsi, but does not use virtio. The advantage is that the guest can run an unmodified NVMe driver, so the guest can be any OS that has an NVMe driver. The goal is to get as good performance as
2019 Oct 28 (32 replies)
[PATCH v2 00/15] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com> Eight of the mmu_notifier-using drivers (i915_gem, radeon_mn, umem_odp, hfi1, scif_dma, vhost, gntdev, hmm) use a common pattern where they only use invalidate_range_start/end and immediately check the invalidating range against some driver data structure to tell if the driver is interested. Half of them use an interval_tree, the others
2000 May 07 (1 reply)
FW: Browsing issues NT WS 4.0 and Samba
> -----Original Message-----
> From: Kurt Heinrich
> Sent: Thursday, 4 May 2000 14:29
> To: 'samba@samba.org'
> Subject: Browsing issues NT WS 4.0 and Samba
>
> Hi Guys/Girls,
>
> I have recently been trying to implement samba into our environment here
> as a replacement for ftp clients.
>
> What I am finding is that windows explorer (NT W/S 4.0)
2011 Jul 25 (3 replies)
gluster client performance
Hi- I'm new to Gluster, but am trying to get it set up on a new compute cluster we're building. We picked Gluster for one of our cluster file systems (we're also using Lustre for fast scratch space), but the Gluster performance has been so bad that I think maybe we have a configuration problem -- perhaps we're missing a tuning parameter that would help, but I can't find
2008 Feb 18 (5 replies)
kernel-2.6.18-8.1.14 + lustre 1.6.4.2 + OFED 1.2
We seem to have hit a stumbling block when building with the above (supported) versions. Our process...
1. Start with the stock rhel5 2.6.18-8.1.14 source tree.
2. Configure InfiniBand support out of the kernel (we will build OFED separately).
3. Apply the 1.6.4.2 kernel patches to the kernel source.
4. Build the kernel.
5. Build OFED 1.2 against the patched kernel.
6. Build Lustre using
2011 Mar 16 (5 replies)
Xen and the InfiniBand
Hi all, is Xen currently compatible with InfiniBand? I found some information about the Smart I/O module, but it was posted in 2006. Is the module still maintained? Or are there any up-to-date alternatives? Many thanks, Chiu
2011 May 19 (15 replies)
Are there source codes for Xen-IB?
Hi, I found a link, http://xenbits.xensource.com/ext/xen-smartio.hg, for the Xen-IB source code, but it didn't exist anymore. Are there other ways to get the Xen-IB source code? Thanks. Regards, Yi-Man
2005 Mar 20 (2 replies)
memdisk and winPE diskless boot
Hi, I am working on diskless booting over InfiniBand networks. At the moment we have created and implemented an architecture which allows diskless booting of Linux nodes on x86, x86_64 and Itanium platforms. The system works by first booting a small custom-made Linux kernel off the InfiniBand adapter ROM. The kernel has enough of the InfiniBand stack to set up TCP/IP networking over InfiniBand. Then it
2016 May 25 (3 replies)
Recommendations for Infiniband with CentOS 6.7
We have a new install of CentOS 6.7 with infiniband support installed. We can see the card in hardware and we can see the mlx4 drivers loaded in the kernel but cannot see the card as an ethernet interface, using ifconfig -a. Can you recommend an install procedure to see this as an ethernet interface? Thanks On 05/25/2016 07:32 AM, Fabian Arrotin wrote: > On 25/05/16 03:08, Pat Haley
2016 May 25 (3 replies)
Recommendations for Infiniband with CentOS 6.7
Hi All, we are looking for suggestions on dealing with Mellanox drivers in CentOS 6.7. We tried installing the Mellanox drivers (MLNX_OFED_LINUX-3.2-2.0.0.0-rhel6.7-x86_64) on a Quanta Cirrascale server running CentOS 6.7 - 2.6.32-573.22.1.el6.x86_64. When we rebooted the machine after installing the drivers, it went into a kernel panic for every installed kernel except for CentOS 6.7
2011 Apr 04 (1 reply)
rdma or tcp?
Is there a document with some guidelines for setting up bricks with tcp or rdma transport? I'm looking at a new deployment where the storage cluster hosts connect via 10GigE, but clients are on 1GigE. Over time, there will be 10GigE clients, but the majority will remain on 1GigE. In this setup, should the storage bricks use tcp or rdma? If tcp is the better choice, and at some point in the
2008 Oct 07 (4 replies)
gluster over infiniband....
Hey guys, I am running gluster over infiniband, and I have a couple of questions. We have four servers, each with 1 disk, that I am trying to access over infiniband using gluster. The servers look like they start okay; here are the last 10 or so lines of a client log (they are all identical):
2008-10-07 07:18:40 D [spec.y:196:section_sub] parser: child:stripe0->remote1
2008-10-07 07:18:40 D
2009 Feb 23 (3 replies)
Infiniband Drivers under Xen kernel?
Hello folks, I am running Xen 2.6 under a CentOS 5.2 installation, and am trying to port Xen migration abilities to InfiniBand. But the InfiniBand drivers do not work under the Xen kernel, only under non-Xen kernels. It looks like I need to install some patches to rectify the situation. Has anyone ever worked with InfiniBand and Xen and would be able to give me inputs about the same? I would really