Displaying 9 results from an estimated 9 matches for "ibdev".

2019 Apr 11
1
[RFC 3/3] RDMA/virtio-rdma: VirtIO rdma driver
...b_verbs.h>
+
+struct virtio_rdma_info {
+	struct ib_device ib_dev;
+	struct virtio_device *vdev;
+	struct virtqueue *ctrl_vq;
+	wait_queue_head_t acked; /* arm on send to host, release on recv */
+	struct net_device *netdev;
+};
+
+static inline struct virtio_rdma_info *to_vdev(struct ib_device *ibdev)
+{
+	return container_of(ibdev, struct virtio_rdma_info, ib_dev);
+}
+
+#endif
diff --git a/drivers/infiniband/hw/virtio/virtio_rdma_device.c b/drivers/infiniband/hw/virtio/virtio_rdma_device.c
new file mode 100644
index 000000000000..ae41e530644f
--- /dev/null
+++ b/drivers/infiniband/hw/virtio/v...
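For context on the excerpt above: to_vdev() is the usual container_of() idiom for recovering a driver-private structure from a pointer to an embedded struct ib_device. A minimal userspace sketch of the same pattern, not code from the patch (members not shown in the excerpt are elided, and extra_state is a hypothetical stand-in):

/* Demo of the container_of() pattern behind to_vdev(). */
#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct ib_device { int dummy; };

struct virtio_rdma_info {
	struct ib_device ib_dev;	/* embedded member, as in the RFC */
	int extra_state;		/* stand-in for the remaining fields */
};

static struct virtio_rdma_info *to_vdev(struct ib_device *ibdev)
{
	return container_of(ibdev, struct virtio_rdma_info, ib_dev);
}

int main(void)
{
	struct virtio_rdma_info info = { .extra_state = 42 };
	struct ib_device *ibdev = &info.ib_dev;	/* what verbs callbacks receive */

	/* Recover the driver-private struct from the core's pointer. */
	printf("%d\n", to_vdev(ibdev)->extra_state);	/* prints 42 */
	return 0;
}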
2019 Apr 13
1
[RFC 3/3] RDMA/virtio-rdma: VirtIO rdma driver
...{
> +	struct ib_device ib_dev;
> +	struct virtio_device *vdev;
> +	struct virtqueue *ctrl_vq;
> +	wait_queue_head_t acked; /* arm on send to host, release on recv */
> +	struct net_device *netdev;
> +};
> +
> +static inline struct virtio_rdma_info *to_vdev(struct ib_device *ibdev)
> +{
> +	return container_of(ibdev, struct virtio_rdma_info, ib_dev);
> +}
> +
> +#endif
> diff --git a/drivers/infiniband/hw/virtio/virtio_rdma_device.c b/drivers/infiniband/hw/virtio/virtio_rdma_device.c
> new file mode 100644
> index 000000000000..ae41e530644f
> --- /...
2019 Apr 11
9
[RFC 0/3] VirtIO RDMA
Data center backends use more and more RDMA or RoCE devices, and more and more software runs in virtualized environments. There is a need for a standard to enable RDMA/RoCE on Virtual Machines. Virtio is the optimal solution, since it is the de-facto para-virtualization technology, and also because the Virtio specification allows Hardware Vendors to support the Virtio protocol natively in order to achieve
2019 Apr 16
0
[RFC 3/3] RDMA/virtio-rdma: VirtIO rdma driver
...I && INET
> +	---help---
> +	  This driver provides low-level support for VirtIO Paravirtual
> +	  RDMA adapter.

Does this driver really depend on Ethernet, or does it also work with Ethernet support disabled?

> +static inline struct virtio_rdma_info *to_vdev(struct ib_device *ibdev)
> +{
> +	return container_of(ibdev, struct virtio_rdma_info, ib_dev);
> +}

Is it really worth introducing this function? Have you considered using container_of(ibdev, struct virtio_rdma_info, ib_dev) directly instead of to_vdev()?

> +static void rdma_ctrl_ack(struct virtqueue *vq)...
2020 Nov 01
12
[PATCH mlx5-next v1 00/11] Convert mlx5 to use auxiliary bus
From: Leon Romanovsky <leonro at nvidia.com>

Changelog:
v1:
 * Renamed _mlx5_rescan_driver to be mlx5_rescan_driver_locked, like in other parts of the mlx5 driver.
 * Renamed MLX5_INTERFACE_PROTOCOL_VDPA to be MLX5_INTERFACE_PROTOCOL_VNET as a preparation for a coming series from Eli C.
 * Some small renames in mlx5_vdpa.
 * Refactored the adev index code to make Parav's SF series
2019 Apr 11
1
[RFC 2/3] hw/virtio-rdma: VirtIO rdma device
.../* virtio_add_feature(&features, VIRTIO_NET_F_MAC); */
+
+	vdev->backend_features = features;
+
+	return features;
+}
+
+static Property virtio_rdma_dev_properties[] = {
+	DEFINE_PROP_STRING("netdev", VirtIORdma, backend_eth_device_name),
+	DEFINE_PROP_STRING("ibdev", VirtIORdma, backend_device_name),
+	DEFINE_PROP_UINT8("ibport", VirtIORdma, backend_port_num, 1),
+	DEFINE_PROP_UINT64("dev-caps-max-mr-size", VirtIORdma, dev_attr.max_mr_size,
+			   MAX_MR_SIZE),
+	DEFINE_PROP_INT32("dev-caps-max-qp",...
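These DEFINE_PROP_* entries surface as -device options on the QEMU command line. A purely hypothetical invocation, assuming a device name of virtio-rdma (the actual name is not visible in this excerpt) and example values for each property:

	qemu-system-x86_64 ... \
		-device virtio-rdma,netdev=eth0,ibdev=mlx5_0,ibport=1,dev-caps-max-mr-size=268435456

Only the property names (netdev, ibdev, ibport, dev-caps-max-mr-size) come from the excerpt; every value above is illustrative.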
2019 Nov 12
20
[PATCH hmm v3 00/14] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com> 8 of the mmu_notifier-using drivers (i915_gem, radeon_mn, umem_odp, hfi1, scif_dma, vhost, gntdev, hmm) use a common pattern where they only use invalidate_range_start/end and immediately check the invalidating range against some driver data structure to tell whether the driver is interested. Half of them use an interval_tree, the others
2019 Oct 28
32
[PATCH v2 00/15] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com> 8 of the mmu_notifier-using drivers (i915_gem, radeon_mn, umem_odp, hfi1, scif_dma, vhost, gntdev, hmm) use a common pattern where they only use invalidate_range_start/end and immediately check the invalidating range against some driver data structure to tell whether the driver is interested. Half of them use an interval_tree, the others
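Both cover letters describe the same callback pattern: handle only invalidate_range_start/end and immediately test the invalidated range against an interval tree of ranges the driver watches. A rough hand-written sketch of that pattern, not code from the series (the example_* names are hypothetical):

/* Sketch: an invalidate_range_start callback that checks the
 * invalidating range against an interval tree of watched regions. */
#include <linux/mmu_notifier.h>
#include <linux/interval_tree.h>
#include <linux/spinlock.h>

struct example_driver {
	struct mmu_notifier mn;
	struct rb_root_cached ranges;	/* interval tree of watched VAs */
	spinlock_t lock;
};

static int example_invalidate_range_start(struct mmu_notifier *mn,
					  const struct mmu_notifier_range *range)
{
	struct example_driver *drv =
		container_of(mn, struct example_driver, mn);
	struct interval_tree_node *node;

	spin_lock(&drv->lock);
	/* Walk every watched region overlapped by [start, end). */
	for (node = interval_tree_iter_first(&drv->ranges, range->start,
					     range->end - 1);
	     node;
	     node = interval_tree_iter_next(node, range->start,
					    range->end - 1))
		;	/* driver-specific invalidation of the region */
	spin_unlock(&drv->lock);
	return 0;
}

static const struct mmu_notifier_ops example_mn_ops = {
	.invalidate_range_start = example_invalidate_range_start,
};

The point of the series, per its title, is to consolidate exactly this duplicated tree-plus-lock logic into common mmu_notifier infrastructure instead of each driver open-coding it.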