Displaying 20 results from an estimated 78 matches for "connectx".
2020 Aug 03
0
[PATCH V3 vhost next 00/10] VDPA support for Mellanox ConnectX devices
On Wed, Jul 29, 2020 at 08:54:52AM +0300, Eli Cohen wrote:
> On Tue, Jul 28, 2020 at 02:53:34PM +0800, Jason Wang wrote:
> >
> > Just notice Michael's vhost branch can not compile due to this commit:
> >
> > commit fee8fe6bd8ccacd27e963b71b4f943be3721779e
> > Author: Michael S. Tsirkin <mst at redhat.com>
> > Date:   Mon Jul 27 10:51:55 2020
2020 Aug 05
0
[PATCH V4 linux-next 00/12] VDPA support for Mellanox ConnectX devices
On 2020/8/5 12:20, Eli Cohen wrote:
> Hi Michael,
> please note that this series depends on mlx5 core device driver patches
> in mlx5-next branch in
> git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux.git.
>
> git pull git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux.git mlx5-next
>
> They also depend on Jason Wang's
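The dependency step quoted in the posting is a plain `git pull` of the mlx5-next branch. As a sketch of that flow without network access, the snippet below uses two throwaway local repos as stand-ins for the Mellanox tree and the tree the VDPA series applies to; the repo paths and commit subjects are invented for illustration.

```shell
set -eu
tmp=$(mktemp -d)

# Stand-in for git://git.kernel.org/.../mellanox/linux.git with an mlx5-next branch.
git init -q "$tmp/mellanox"
git -C "$tmp/mellanox" -c user.email=x@example.com -c user.name=x \
    commit -q --allow-empty -m "base tree"
git -C "$tmp/mellanox" checkout -q -b mlx5-next
git -C "$tmp/mellanox" -c user.email=x@example.com -c user.name=x \
    commit -q --allow-empty -m "mlx5: core dependency patches"
git -C "$tmp/mellanox" checkout -q -

# Stand-in for the tree the VDPA series will be applied to.
git clone -q "$tmp/mellanox" "$tmp/vhost-tree"

# The step from the posting, pointed at the local stand-in;
# a fast-forward, since the base tree is an ancestor of mlx5-next.
git -C "$tmp/vhost-tree" pull -q "$tmp/mellanox" mlx5-next
git -C "$tmp/vhost-tree" log --oneline -1
```

Against the real trees the command is the one from the mail: `git pull git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux.git mlx5-next`, run inside the target kernel tree before applying the series.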
2020 Aug 05
0
[PATCH V4 linux-next 00/12] VDPA support for Mellanox ConnectX devices
On Wed, 2020-08-05 at 09:12 -0400, Michael S. Tsirkin wrote:
> On Wed, Aug 05, 2020 at 04:01:58PM +0300, Eli Cohen wrote:
> > On Wed, Aug 05, 2020 at 08:48:52AM -0400, Michael S. Tsirkin wrote:
> > > > Did you merge this?:
> > > > git pull git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux.git mlx5-next
> >
2020 Aug 05
1
[PATCH V4 linux-next 00/12] VDPA support for Mellanox ConnectX devices
On Wed, Aug 05, 2020 at 07:01:52PM +0000, Saeed Mahameed wrote:
> On Wed, 2020-08-05 at 09:12 -0400, Michael S. Tsirkin wrote:
> > On Wed, Aug 05, 2020 at 04:01:58PM +0300, Eli Cohen wrote:
> > > On Wed, Aug 05, 2020 at 08:48:52AM -0400, Michael S. Tsirkin wrote:
> > > > > Did you merge this?:
> > > > > git pull
> > > > >
2013 Jun 10
1
Mellanox SR-IOV IB PCI passthrough in Xen - MSI-X pciback issue
Greetings Xen user community,
I am interested in using Mellanox ConnectX cards with SR-IOV capabilities to passthrough pci-e Virtual Functions (VFs) to Xen guests. The hope is to allow for the use of InfiniBand directly within virtual machines and thereby enable a plethora of high performance computing applications that already leverage InfiniBand interconnects. However...
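The host-side flow described above can be sketched as below. The PCI addresses are placeholders (find real ones with `lspci | grep Mellanox`), and the commands that touch hardware are left commented since they need root and a real ConnectX card.

```shell
pf_bdf="0000:03:00.0"   # assumed Physical Function address
vf_bdf="0000:03:00.2"   # assumed VF; on real hardware read it from the PF's virtfn* links
# echo 4 > /sys/bus/pci/devices/$pf_bdf/sriov_numvfs   # create 4 VFs
# xl pci-assignable-add "$vf_bdf"                      # hand the VF to pciback
# Guest config line passing the VF through (msitranslate affects MSI routing):
cfg="pci = [ '${vf_bdf},msitranslate=1' ]"
echo "$cfg"
```

Whether `msitranslate` helps with the MSI-X pciback issue raised here depends on the Xen version; it is shown only as the knob most directly related to MSI handling.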
2020 Jul 28
0
[PATCH V3 vhost next 00/10] VDPA support for Mellanox ConnectX devices
...Talked with
Michael, and it's better for you to merge the new version in this series.
Sorry for not spotting this before.
[1] https://lkml.org/lkml/2020/7/1/301
Thanks
>
>
> The following series of patches provide VDPA support for Mellanox
> devices. The supported devices are ConnectX6 DX and newer.
>
> Currently, only a network driver is implemented; future patches will
> introduce a block device driver. iperf performance on a single queue is
> around 12 Gbps. Future patches will introduce multi queue support.
>
> The files are organized in such a way that co...
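The ~12 Gbps single-queue figure quoted above could be reproduced along these lines. Treating "iperf" as iperf3 and the server address as a placeholder are both assumptions; the command is only echoed here so the sketch runs anywhere.

```shell
SERVER=${SERVER:-192.0.2.1}              # placeholder; set to the peer's address
cmd="iperf3 -c $SERVER -P 1 -t 30"       # -P 1: one stream, so one queue pair exercised
echo "$cmd"
# On the server side you would first run: iperf3 -s
```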
2020 Aug 03
0
[PATCH V3 vhost next 00/10] VDPA support for Mellanox ConnectX devices
...between Jason's patches and what's in my tree also exist.
How big is the dependency? Can I pick it up with your ack?
Also, mips build failures need to be dealt with.
>
>
> The following series of patches provide VDPA support for Mellanox
> devices. The supported devices are ConnectX6 DX and newer.
>
> Currently, only a network driver is implemented; future patches will
> introduce a block device driver. iperf performance on a single queue is
> around 12 Gbps. Future patches will introduce multi queue support.
>
> The files are organized in such a way that...
2020 Aug 04
0
[PATCH V3 vhost next 00/10] VDPA support for Mellanox ConnectX devices
...
> Will look into it.
Thanks!
I'd like to have everything ready by end of week if possible,
send pull next Monday/Tuesday.
> > >
> > >
> > > The following series of patches provide VDPA support for Mellanox
> > > devices. The supported devices are ConnectX6 DX and newer.
> > >
> > > Currently, only a network driver is implemented; future patches will
> > > introduce a block device driver. iperf performance on a single queue is
> > > around 12 Gbps. Future patches will introduce multi queue support.
> > >...
2020 Aug 05
2
[PATCH V4 linux-next 00/12] VDPA support for Mellanox ConnectX devices
On Wed, Aug 05, 2020 at 04:01:58PM +0300, Eli Cohen wrote:
> On Wed, Aug 05, 2020 at 08:48:52AM -0400, Michael S. Tsirkin wrote:
> > >
> > > Did you merge this?:
> > > git pull git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux.git mlx5-next
> >
> >
> > I can only merge this tree if no one else will. Linus does not like
> > getting
2020 Aug 04
0
[PATCH V4 linux-next 00/12] VDPA support for Mellanox ConnectX devices
...'s patches: https://lkml.org/lkml/2020/7/1/301
The ones you included, right?
> Jason, I had to resolve some conflicts so I would appreciate if you can verify
> that it is ok.
>
> The following series of patches provide VDPA support for Mellanox
> devices. The supported devices are ConnectX6 DX and newer.
>
> Currently, only a network driver is implemented; future patches will
> introduce a block device driver. iperf performance on a single queue is
> around 12 Gbps. Future patches will introduce multi queue support.
>
> The files are organized in such a way that...
2020 Aug 05
0
[PATCH V4 linux-next 00/12] VDPA support for Mellanox ConnectX devices
...ed, right?
>>
> Right.
>
>>> Jason, I had to resolve some conflicts so I would appreciate if you can verify
>>> that it is ok.
>>>
>>> The following series of patches provide VDPA support for Mellanox
>>> devices. The supported devices are ConnectX6 DX and newer.
>>>
>>> Currently, only a network driver is implemented; future patches will
>>> introduce a block device driver. iperf performance on a single queue is
>>> around 12 Gbps. Future patches will introduce multi queue support.
>>>
>>...
2020 Sep 24
4
[PATCH v3 -next] vdpa: mlx5: change Kconfig depends to fix build errors
...kin wrote:
>>>> --- linux-next-20200917.orig/drivers/vdpa/Kconfig
>>>> +++ linux-next-20200917/drivers/vdpa/Kconfig
>>>> @@ -31,7 +31,7 @@ config IFCVF
>>>>
>>>> config MLX5_VDPA
>>>> bool "MLX5 VDPA support library for ConnectX devices"
>>>> - depends on MLX5_CORE
>>>> + depends on VHOST_IOTLB && MLX5_CORE
>>>> default n
>>>
>>> While we are here, can anyone who apply this patch delete the "default n" line?
>>> It is by default "n...
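Applied to the quoted hunk, with the `default n` line dropped as suggested, the entry would read roughly as follows. Only the prompt and the dependency line come from the thread; the rest of the layout is the standard Kconfig shape, and bool symbols already default to n, which is why the explicit line is redundant.

```kconfig
config MLX5_VDPA
	bool "MLX5 VDPA support library for ConnectX devices"
	depends on VHOST_IOTLB && MLX5_CORE
```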
2014 Nov 16
6
vhost + multiqueue + RSS question.
On Sun, Nov 16, 2014 at 06:18:18PM +0200, Gleb Natapov wrote:
> Hi Michael,
>
> I am playing with vhost multiqueue capability and have a question about
> vhost multiqueue and RSS (receive side steering). My setup has Mellanox
> ConnectX-3 NIC which supports multiqueue and RSS. Network related
> parameters for qemu are:
>
> -netdev tap,id=hn0,script=qemu-ifup.sh,vhost=on,queues=4
> -device virtio-net-pci,netdev=hn0,id=nic1,mq=on,vectors=10
>
> In a guest I ran "ethtool -L eth0 combined 4" to enab...
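The qemu parameters quoted above can be reconstructed with the queue/vector arithmetic spelled out. The `vectors=10` value follows the common virtio-net rule of thumb `2*queues + 2` (a TX/RX vector pair per queue, plus config and control); treating that as the reason for the posted value is an inference, not something the mail states.

```shell
queues=4
vectors=$((2 * queues + 2))   # 2*4 + 2 = 10, matching the posted command line
netdev="-netdev tap,id=hn0,script=qemu-ifup.sh,vhost=on,queues=${queues}"
device="-device virtio-net-pci,netdev=hn0,id=nic1,mq=on,vectors=${vectors}"
echo "$netdev $device"
# In the guest, enable all queue pairs: ethtool -L eth0 combined 4
```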
2020 Jul 28
2
[PATCH V3 vhost next 00/10] VDPA support for Mellanox ConnectX devices
...Thanks
>
>> Sorry for not spotting this before.
>>
>> [1] https://lkml.org/lkml/2020/7/1/301
>>
>> Thanks
>>
>>
>>>
>>> The following series of patches provide VDPA support for Mellanox
>>> devices. The supported devices are ConnectX6 DX and newer.
>>>
>>> Currently, only a network driver is implemented; future patches will
>>> introduce a block device driver. iperf performance on a single queue is
>>> around 12 Gbps. Future patches will introduce multi queue support.
>>>
>>...
2012 Dec 18
1
Infiniband performance issues answered?
In IRC today, someone who was hitting that same IB performance ceiling
that occasionally gets reported had this to say
[11:50] <nissim> first, I ran fedora which is not supported by Mellanox
OFED distro
[11:50] <nissim> so I moved to CentOS 6.3
[11:51] <nissim> next I removed all distibution related infiniband rpms
and build the latest OFED package
[11:52] <nissim>
2020 Aug 05
0
[PATCH V4 linux-next 00/12] VDPA support for Mellanox ConnectX devices
On Tue, Aug 04, 2020 at 07:20:36PM +0300, Eli Cohen wrote:
> Hi Michael,
> please note that this series depends on mlx5 core device driver patches
> in mlx5-next branch in
> git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux.git.
>
> git pull git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux.git mlx5-next
>
> They also depend on Jason Wang's