This patchset implements a new module for the AMD/Pensando DSC that supports
vDPA services on PDS Core VF devices.  This code is based on and depends on
include files from the pds_core driver described here[0].  The pds_core driver
creates the auxiliary_bus devices that this module connects to, and this
creates vdpa devices for use by the vdpa module.

The first version of this driver was a part of the original pds_core RFC [1]
but has since been reworked to pull out the PCI driver and to make better use
of the virtio and virtio_net configuration spaces made available by the DSC's
PCI configuration.  As the device development has progressed, the ability to
rely on the virtio config spaces has grown.

This patchset includes a modification to the existing vp_modern_probe() which
implements overrides for the PCI device id check and the DMA mask.  These are
intended to be used with vendor vDPA devices that implement enough of the
virtio config space to be used directly, but don't use the virtio device id.

To use this module, enable the VFs and turn on the vDPA services in the
pds_core PF, then use the 'vdpa' utility to create devices for use by
virtio_vdpa or vhost_vdpa:

  echo 1 > /sys/bus/pci/drivers/pds_core/$PF_BDF/sriov_numvfs
  devlink dev param set pci/$PF_BDF name enable_vnet value true cmode runtime
  PDS_VDPA_MGMT=`vdpa mgmtdev show | grep vDPA | head -1 | cut -d: -f1`
  vdpa dev add name vdpa1 mgmtdev $PDS_VDPA_MGMT mac 00:11:22:33:44:55

[0] Link: https://lore.kernel.org/netdev/20230322185626.38758-1-shannon.nelson at amd.com/
[1] Link: https://lore.kernel.org/netdev/20221118225656.48309-1-snelson at pensando.io/

Changes:
v4:
 - rename device_id_check_override() to device_id_check()
 - make device_id_check() return the device_id found and checked
 - removed pds_vdpa.h, put its adminq changes into pds_adminq.h
 - added a patch to separate out the adminq changes
 - added a patch to move an adminq enum from pds_common.h to pds_adminq.h
 - moved adminq calls for get/set_vq_state into cmds.c
 - limit max_vqs by number of msix available
 - don't increment nintrs for CVQ, it should already be covered from max_vqs
 - pds_core API related rework following pds_core inclusion to net-next
 - use non-debugfs method to find PF pci address in pds_vdpa.rst instructions

v3: Link: https://lore.kernel.org/netdev/20230330192313.62018-1-shannon.nelson at amd.com/
 - added a patch to modify vp_modern_probe() such that specific device id
   and DMA mask overrides can be used
 - add pds_vdpa.rst into index file
 - dev_dbg instead of dev_err on most of the adminq commands
 - rework use of pds_vdpa_cmd_reset() and pds_vdpa_init_hw() for better
   firmware setup in start-stop-start scenarios
 - removed unused pds_vdpa_cmd_set_features(), we can rely on
   vp_modern_set_features()
 - remove unused hw_qtype and hw_qindex from pds_vdpa_vq_info
 - reworked debugfs print_feature_bits to also print unknown bits
 - changed use of PAGE_SIZE to local PDS_PAGE_SIZE to keep with FW layout
   needs without regard to kernel PAGE_SIZE configuration

v2: https://lore.kernel.org/netdev/20230309013046.23523-1-shannon.nelson at amd.com/
 - removed PCI driver code
 - replaced home-grown event listener with notifier
 - replaced many adminq uses with direct virtio_net config access
 - reworked irqs to follow virtio layout
 - removed local_mac_bit logic
 - replaced uses of devm_ interfaces as suggested in pds_core reviews
 - updated copyright strings to reflect the new owner

Shannon Nelson (10):
  virtio: allow caller to override device id and DMA mask
  pds_vdpa: Add new vDPA driver for AMD/Pensando DSC
  pds_vdpa: move enum from common to adminq header
  pds_vdpa: new adminq entries
  pds_vdpa: get vdpa management info
  pds_vdpa: virtio bar setup for vdpa
  pds_vdpa: add vdpa config client commands
  pds_vdpa: add support for vdpa and vdpamgmt interfaces
  pds_vdpa: subscribe to the pds_core events
  pds_vdpa: pds_vdps.rst and Kconfig

 .../device_drivers/ethernet/amd/pds_vdpa.rst |  85 +++
 .../device_drivers/ethernet/index.rst        |   1 +
 MAINTAINERS                                  |   4 +
 drivers/vdpa/Kconfig                         |   8 +
 drivers/vdpa/Makefile                        |   1 +
 drivers/vdpa/pds/Makefile                    |  10 +
 drivers/vdpa/pds/aux_drv.c                   | 140 ++++
 drivers/vdpa/pds/aux_drv.h                   |  26 +
 drivers/vdpa/pds/cmds.c                      | 207 +++++
 drivers/vdpa/pds/cmds.h                      |  20 +
 drivers/vdpa/pds/debugfs.c                   | 287 +++++++
 drivers/vdpa/pds/debugfs.h                   |  17 +
 drivers/vdpa/pds/vdpa_dev.c                  | 704 ++++++++++++++++++
 drivers/vdpa/pds/vdpa_dev.h                  |  47 ++
 drivers/virtio/virtio_pci_modern_dev.c       |  37 +-
 include/linux/pds/pds_adminq.h               | 287 +++++++
 include/linux/pds/pds_common.h               |  21 +-
 include/linux/virtio_pci_modern.h            |   6 +
 18 files changed, 1876 insertions(+), 32 deletions(-)
 create mode 100644 Documentation/networking/device_drivers/ethernet/amd/pds_vdpa.rst
 create mode 100644 drivers/vdpa/pds/Makefile
 create mode 100644 drivers/vdpa/pds/aux_drv.c
 create mode 100644 drivers/vdpa/pds/aux_drv.h
 create mode 100644 drivers/vdpa/pds/cmds.c
 create mode 100644 drivers/vdpa/pds/cmds.h
 create mode 100644 drivers/vdpa/pds/debugfs.c
 create mode 100644 drivers/vdpa/pds/debugfs.h
 create mode 100644 drivers/vdpa/pds/vdpa_dev.c
 create mode 100644 drivers/vdpa/pds/vdpa_dev.h

-- 
2.17.1
Shannon Nelson
2023-Apr-25 21:25 UTC
[PATCH v4 virtio 01/10] virtio: allow caller to override device id and DMA mask
To add a bit of flexibility with various virtio based devices, allow the caller to specify a different device id and DMA mask. This adds fields to struct virtio_pci_modern_device to specify an override device id check and a DMA mask. int (*device_id_check)(struct pci_dev *pdev); If defined by the driver, this function will be called to check that the PCI device is the vendor's expected device, and will return the found device id to be stored in mdev->id.device. This allows vendors with alternative vendor device ids to use this library on their own device BAR. u64 dma_mask; If defined by the driver, this mask will be used in a call to dma_set_mask_and_coherent() instead of the traditional DMA_BIT_MASK(64). This allows limiting the DMA space on vendor devices with address limitations. Signed-off-by: Shannon Nelson <shannon.nelson at amd.com> --- drivers/virtio/virtio_pci_modern_dev.c | 37 +++++++++++++++++--------- include/linux/virtio_pci_modern.h | 6 +++++ 2 files changed, 31 insertions(+), 12 deletions(-) diff --git a/drivers/virtio/virtio_pci_modern_dev.c b/drivers/virtio/virtio_pci_modern_dev.c index 869cb46bef96..1f2db76e8f91 100644 --- a/drivers/virtio/virtio_pci_modern_dev.c +++ b/drivers/virtio/virtio_pci_modern_dev.c @@ -218,21 +218,29 @@ int vp_modern_probe(struct virtio_pci_modern_device *mdev) int err, common, isr, notify, device; u32 notify_length; u32 notify_offset; + int devid; check_offsets(); - /* We only own devices >= 0x1000 and <= 0x107f: leave the rest. */ - if (pci_dev->device < 0x1000 || pci_dev->device > 0x107f) - return -ENODEV; - - if (pci_dev->device < 0x1040) { - /* Transitional devices: use the PCI subsystem device id as - * virtio device id, same as legacy driver always did. - */ - mdev->id.device = pci_dev->subsystem_device; + if (mdev->device_id_check) { + devid = mdev->device_id_check(pci_dev); + if (devid < 0) + return devid; + mdev->id.device = devid; } else { - /* Modern devices: simply use PCI device id, but start from 0x1040. */ - mdev->id.device = pci_dev->device - 0x1040; + /* We only own devices >= 0x1000 and <= 0x107f: leave the rest. */ + if (pci_dev->device < 0x1000 || pci_dev->device > 0x107f) + return -ENODEV; + + if (pci_dev->device < 0x1040) { + /* Transitional devices: use the PCI subsystem device id as + * virtio device id, same as legacy driver always did. + */ + mdev->id.device = pci_dev->subsystem_device; + } else { + /* Modern devices: simply use PCI device id, but start from 0x1040. */ + mdev->id.device = pci_dev->device - 0x1040; + } } mdev->id.vendor = pci_dev->subsystem_vendor; @@ -260,7 +268,12 @@ int vp_modern_probe(struct virtio_pci_modern_device *mdev) return -EINVAL; } - err = dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(64)); + if (mdev->dma_mask) + err = dma_set_mask_and_coherent(&pci_dev->dev, + mdev->dma_mask); + else + err = dma_set_mask_and_coherent(&pci_dev->dev, + DMA_BIT_MASK(64)); if (err) err = dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(32)); diff --git a/include/linux/virtio_pci_modern.h b/include/linux/virtio_pci_modern.h index c4eeb79b0139..067ac1d789bc 100644 --- a/include/linux/virtio_pci_modern.h +++ b/include/linux/virtio_pci_modern.h @@ -38,6 +38,12 @@ struct virtio_pci_modern_device { int modern_bars; struct virtio_device_id id; + + /* optional check for vendor virtio device, returns dev_id or -ERRNO */ + int (*device_id_check)(struct pci_dev *pdev); + + /* optional mask for devices with limited DMA space */ + u64 dma_mask; }; /* -- 2.17.1
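As a quick illustration of the id selection above, the following stand-alone C sketch mirrors the same decision flow outside the kernel; the vendor_check callback plays the role of the new device_id_check hook, and the sample device ids in main() are made up for the demo.

/* Illustrative only: mirrors the id selection in the vp_modern_probe() hunk
 * above.  vendor_check stands in for mdev->device_id_check; sample ids are
 * hypothetical.
 */
#include <errno.h>
#include <stdint.h>
#include <stdio.h>

/* returns the virtio device id to use, or a negative errno value */
static int select_virtio_devid(uint16_t pci_devid, uint16_t subsys_devid,
			       int (*vendor_check)(uint16_t pci_devid))
{
	if (vendor_check)
		return vendor_check(pci_devid);	/* may return -ENODEV */

	/* the core probe only owns virtio ids 0x1000..0x107f */
	if (pci_devid < 0x1000 || pci_devid > 0x107f)
		return -ENODEV;

	if (pci_devid < 0x1040)
		return subsys_devid;	/* transitional: subsystem device id */

	return pci_devid - 0x1040;	/* modern: offset from 0x1040 */
}

/* hypothetical vendor hook: accept one vendor-specific device id */
static int sample_vendor_check(uint16_t pci_devid)
{
	return pci_devid == 0x5678 ? 1 /* VIRTIO_ID_NET */ : -ENODEV;
}

int main(void)
{
	printf("modern 0x1041        -> %d\n",
	       select_virtio_devid(0x1041, 0, NULL));
	printf("vendor 0x5678 (hook) -> %d\n",
	       select_virtio_devid(0x5678, 0, sample_vendor_check));
	return 0;
}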
Shannon Nelson
2023-Apr-25 21:25 UTC
[PATCH v4 virtio 02/10] pds_vdpa: Add new vDPA driver for AMD/Pensando DSC
This is the initial auxiliary driver framework for a new vDPA device driver, an auxiliary_bus client of the pds_core driver. The pds_core driver supplies the PCI services for the VF device and for accessing the adminq in the PF device. This patch adds the very basics of registering for the auxiliary device and setting up debugfs entries. Signed-off-by: Shannon Nelson <shannon.nelson at amd.com> Acked-by: Jason Wang <jasowang at redhat.com> --- drivers/vdpa/Makefile | 1 + drivers/vdpa/pds/Makefile | 8 ++++ drivers/vdpa/pds/aux_drv.c | 83 ++++++++++++++++++++++++++++++++++ drivers/vdpa/pds/aux_drv.h | 15 ++++++ drivers/vdpa/pds/debugfs.c | 25 ++++++++++ drivers/vdpa/pds/debugfs.h | 12 +++++ include/linux/pds/pds_common.h | 2 + 7 files changed, 146 insertions(+) create mode 100644 drivers/vdpa/pds/Makefile create mode 100644 drivers/vdpa/pds/aux_drv.c create mode 100644 drivers/vdpa/pds/aux_drv.h create mode 100644 drivers/vdpa/pds/debugfs.c create mode 100644 drivers/vdpa/pds/debugfs.h diff --git a/drivers/vdpa/Makefile b/drivers/vdpa/Makefile index 59396ff2a318..8f53c6f3cca7 100644 --- a/drivers/vdpa/Makefile +++ b/drivers/vdpa/Makefile @@ -7,3 +7,4 @@ obj-$(CONFIG_MLX5_VDPA) += mlx5/ obj-$(CONFIG_VP_VDPA) += virtio_pci/ obj-$(CONFIG_ALIBABA_ENI_VDPA) += alibaba/ obj-$(CONFIG_SNET_VDPA) += solidrun/ +obj-$(CONFIG_PDS_VDPA) += pds/ diff --git a/drivers/vdpa/pds/Makefile b/drivers/vdpa/pds/Makefile new file mode 100644 index 000000000000..a9cd2f450ae1 --- /dev/null +++ b/drivers/vdpa/pds/Makefile @@ -0,0 +1,8 @@ +# SPDX-License-Identifier: GPL-2.0-only +# Copyright(c) 2023 Advanced Micro Devices, Inc + +obj-$(CONFIG_PDS_VDPA) := pds_vdpa.o + +pds_vdpa-y := aux_drv.o + +pds_vdpa-$(CONFIG_DEBUG_FS) += debugfs.o diff --git a/drivers/vdpa/pds/aux_drv.c b/drivers/vdpa/pds/aux_drv.c new file mode 100644 index 000000000000..e4a0ad61ea22 --- /dev/null +++ b/drivers/vdpa/pds/aux_drv.c @@ -0,0 +1,83 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright(c) 2023 Advanced Micro Devices, Inc */ + +#include <linux/auxiliary_bus.h> +#include <linux/pci.h> + +#include <linux/pds/pds_common.h> +#include <linux/pds/pds_core_if.h> +#include <linux/pds/pds_adminq.h> +#include <linux/pds/pds_auxbus.h> + +#include "aux_drv.h" +#include "debugfs.h" + +static const struct auxiliary_device_id pds_vdpa_id_table[] = { + { .name = PDS_VDPA_DEV_NAME, }, + {}, +}; + +static int pds_vdpa_probe(struct auxiliary_device *aux_dev, + const struct auxiliary_device_id *id) + +{ + struct pds_auxiliary_dev *padev + container_of(aux_dev, struct pds_auxiliary_dev, aux_dev); + struct pds_vdpa_aux *vdpa_aux; + + vdpa_aux = kzalloc(sizeof(*vdpa_aux), GFP_KERNEL); + if (!vdpa_aux) + return -ENOMEM; + + vdpa_aux->padev = padev; + auxiliary_set_drvdata(aux_dev, vdpa_aux); + + return 0; +} + +static void pds_vdpa_remove(struct auxiliary_device *aux_dev) +{ + struct pds_vdpa_aux *vdpa_aux = auxiliary_get_drvdata(aux_dev); + struct device *dev = &aux_dev->dev; + + kfree(vdpa_aux); + auxiliary_set_drvdata(aux_dev, NULL); + + dev_info(dev, "Removed\n"); +} + +static struct auxiliary_driver pds_vdpa_driver = { + .name = PDS_DEV_TYPE_VDPA_STR, + .probe = pds_vdpa_probe, + .remove = pds_vdpa_remove, + .id_table = pds_vdpa_id_table, +}; + +static void __exit pds_vdpa_cleanup(void) +{ + auxiliary_driver_unregister(&pds_vdpa_driver); + + pds_vdpa_debugfs_destroy(); +} +module_exit(pds_vdpa_cleanup); + +static int __init pds_vdpa_init(void) +{ + int err; + + pds_vdpa_debugfs_create(); + + err = auxiliary_driver_register(&pds_vdpa_driver); + if 
(err) { + pr_err("%s: aux driver register failed: %pe\n", + PDS_VDPA_DRV_NAME, ERR_PTR(err)); + pds_vdpa_debugfs_destroy(); + } + + return err; +} +module_init(pds_vdpa_init); + +MODULE_DESCRIPTION(PDS_VDPA_DRV_DESCRIPTION); +MODULE_AUTHOR("Advanced Micro Devices, Inc"); +MODULE_LICENSE("GPL"); diff --git a/drivers/vdpa/pds/aux_drv.h b/drivers/vdpa/pds/aux_drv.h new file mode 100644 index 000000000000..f1e99359424e --- /dev/null +++ b/drivers/vdpa/pds/aux_drv.h @@ -0,0 +1,15 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright(c) 2023 Advanced Micro Devices, Inc */ + +#ifndef _AUX_DRV_H_ +#define _AUX_DRV_H_ + +#define PDS_VDPA_DRV_DESCRIPTION "AMD/Pensando vDPA VF Device Driver" +#define PDS_VDPA_DRV_NAME KBUILD_MODNAME + +struct pds_vdpa_aux { + struct pds_auxiliary_dev *padev; + + struct dentry *dentry; +}; +#endif /* _AUX_DRV_H_ */ diff --git a/drivers/vdpa/pds/debugfs.c b/drivers/vdpa/pds/debugfs.c new file mode 100644 index 000000000000..5be22fb7a76a --- /dev/null +++ b/drivers/vdpa/pds/debugfs.c @@ -0,0 +1,25 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright(c) 2023 Advanced Micro Devices, Inc */ + +#include <linux/pci.h> + +#include <linux/pds/pds_common.h> +#include <linux/pds/pds_core_if.h> +#include <linux/pds/pds_adminq.h> +#include <linux/pds/pds_auxbus.h> + +#include "aux_drv.h" +#include "debugfs.h" + +static struct dentry *dbfs_dir; + +void pds_vdpa_debugfs_create(void) +{ + dbfs_dir = debugfs_create_dir(PDS_VDPA_DRV_NAME, NULL); +} + +void pds_vdpa_debugfs_destroy(void) +{ + debugfs_remove_recursive(dbfs_dir); + dbfs_dir = NULL; +} diff --git a/drivers/vdpa/pds/debugfs.h b/drivers/vdpa/pds/debugfs.h new file mode 100644 index 000000000000..658849591a99 --- /dev/null +++ b/drivers/vdpa/pds/debugfs.h @@ -0,0 +1,12 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* Copyright(c) 2023 Advanced Micro Devices, Inc */ + +#ifndef _PDS_VDPA_DEBUGFS_H_ +#define _PDS_VDPA_DEBUGFS_H_ + +#include <linux/debugfs.h> + +void pds_vdpa_debugfs_create(void); +void pds_vdpa_debugfs_destroy(void); + +#endif /* _PDS_VDPA_DEBUGFS_H_ */ diff --git a/include/linux/pds/pds_common.h b/include/linux/pds/pds_common.h index 060331486d50..2a0d1669cfd0 100644 --- a/include/linux/pds/pds_common.h +++ b/include/linux/pds/pds_common.h @@ -39,6 +39,8 @@ enum pds_core_vif_types { #define PDS_DEV_TYPE_RDMA_STR "RDMA" #define PDS_DEV_TYPE_LM_STR "LM" +#define PDS_VDPA_DEV_NAME PDS_CORE_DRV_NAME "." PDS_DEV_TYPE_VDPA_STR + #define PDS_CORE_IFNAMSIZ 16 /** -- 2.17.1
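A minimal user-space sketch of how the PDS_VDPA_DEV_NAME match string is built and compared against an auxiliary device name; the literal values "pds_core" and "vDPA" are assumptions here, and only the "<driver>.<type>" pattern comes from the header change above.

/* Conceptual sketch of auxiliary-bus style name matching.  The macro values
 * below are assumed for illustration; the kernel's auxiliary bus does the
 * real matching.
 */
#include <stdio.h>
#include <string.h>

#define PDS_CORE_DRV_NAME	"pds_core"	/* assumed value */
#define PDS_DEV_TYPE_VDPA_STR	"vDPA"		/* assumed value */
#define PDS_VDPA_DEV_NAME	PDS_CORE_DRV_NAME "." PDS_DEV_TYPE_VDPA_STR

/* device name as it might appear on the bus: "<match name>.<instance>" */
static int name_matches(const char *dev_name, const char *match)
{
	size_t len = strlen(match);

	return strncmp(dev_name, match, len) == 0 &&
	       (dev_name[len] == '\0' || dev_name[len] == '.');
}

int main(void)
{
	printf("match name: %s\n", PDS_VDPA_DEV_NAME);
	printf("pds_core.vDPA.0 matches: %d\n",
	       name_matches("pds_core.vDPA.0", PDS_VDPA_DEV_NAME));
	return 0;
}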
Shannon Nelson
2023-Apr-25 21:25 UTC
[PATCH v4 virtio 03/10] pds_vdpa: move enum from common to adminq header
The pds_core_logical_qtype enum and IFNAMSIZ are not needed in the common PDS header, only needed when working with the adminq, so move them to the adminq header. Note: This patch might conflict with pds_vfio patches that are in review, depending on which patchset gets pulled first. Signed-off-by: Shannon Nelson <shannon.nelson at amd.com> --- include/linux/pds/pds_adminq.h | 21 +++++++++++++++++++++ include/linux/pds/pds_common.h | 21 --------------------- 2 files changed, 21 insertions(+), 21 deletions(-) diff --git a/include/linux/pds/pds_adminq.h b/include/linux/pds/pds_adminq.h index 98a60ce87b92..61b0a8634e1a 100644 --- a/include/linux/pds/pds_adminq.h +++ b/include/linux/pds/pds_adminq.h @@ -222,6 +222,27 @@ enum pds_core_lif_type { PDS_CORE_LIF_TYPE_DEFAULT = 0, }; +#define PDS_CORE_IFNAMSIZ 16 + +/** + * enum pds_core_logical_qtype - Logical Queue Types + * @PDS_CORE_QTYPE_ADMINQ: Administrative Queue + * @PDS_CORE_QTYPE_NOTIFYQ: Notify Queue + * @PDS_CORE_QTYPE_RXQ: Receive Queue + * @PDS_CORE_QTYPE_TXQ: Transmit Queue + * @PDS_CORE_QTYPE_EQ: Event Queue + * @PDS_CORE_QTYPE_MAX: Max queue type supported + */ +enum pds_core_logical_qtype { + PDS_CORE_QTYPE_ADMINQ = 0, + PDS_CORE_QTYPE_NOTIFYQ = 1, + PDS_CORE_QTYPE_RXQ = 2, + PDS_CORE_QTYPE_TXQ = 3, + PDS_CORE_QTYPE_EQ = 4, + + PDS_CORE_QTYPE_MAX = 16 /* don't change - used in struct size */ +}; + /** * union pds_core_lif_config - LIF configuration * @state: LIF state (enum pds_core_lif_state) diff --git a/include/linux/pds/pds_common.h b/include/linux/pds/pds_common.h index 2a0d1669cfd0..435c8e8161c2 100644 --- a/include/linux/pds/pds_common.h +++ b/include/linux/pds/pds_common.h @@ -41,27 +41,6 @@ enum pds_core_vif_types { #define PDS_VDPA_DEV_NAME PDS_CORE_DRV_NAME "." PDS_DEV_TYPE_VDPA_STR -#define PDS_CORE_IFNAMSIZ 16 - -/** - * enum pds_core_logical_qtype - Logical Queue Types - * @PDS_CORE_QTYPE_ADMINQ: Administrative Queue - * @PDS_CORE_QTYPE_NOTIFYQ: Notify Queue - * @PDS_CORE_QTYPE_RXQ: Receive Queue - * @PDS_CORE_QTYPE_TXQ: Transmit Queue - * @PDS_CORE_QTYPE_EQ: Event Queue - * @PDS_CORE_QTYPE_MAX: Max queue type supported - */ -enum pds_core_logical_qtype { - PDS_CORE_QTYPE_ADMINQ = 0, - PDS_CORE_QTYPE_NOTIFYQ = 1, - PDS_CORE_QTYPE_RXQ = 2, - PDS_CORE_QTYPE_TXQ = 3, - PDS_CORE_QTYPE_EQ = 4, - - PDS_CORE_QTYPE_MAX = 16 /* don't change - used in struct size */ -}; - int pdsc_register_notify(struct notifier_block *nb); void pdsc_unregister_notify(struct notifier_block *nb); void *pdsc_get_pf_struct(struct pci_dev *vf_pdev); -- 2.17.1
Shannon Nelson
2023-Apr-25 21:25 UTC
[PATCH v4 virtio 04/10] pds_vdpa: new adminq entries
Add new adminq definitions in support for vDPA operations. Signed-off-by: Shannon Nelson <shannon.nelson at amd.com> --- include/linux/pds/pds_adminq.h | 266 +++++++++++++++++++++++++++++++++ 1 file changed, 266 insertions(+) diff --git a/include/linux/pds/pds_adminq.h b/include/linux/pds/pds_adminq.h index 61b0a8634e1a..c66ead725434 100644 --- a/include/linux/pds/pds_adminq.h +++ b/include/linux/pds/pds_adminq.h @@ -605,6 +605,257 @@ struct pds_core_q_init_comp { u8 color; }; +/* + * enum pds_vdpa_cmd_opcode - vDPA Device commands + */ +enum pds_vdpa_cmd_opcode { + PDS_VDPA_CMD_INIT = 48, + PDS_VDPA_CMD_IDENT = 49, + PDS_VDPA_CMD_RESET = 51, + PDS_VDPA_CMD_VQ_RESET = 52, + PDS_VDPA_CMD_VQ_INIT = 53, + PDS_VDPA_CMD_STATUS_UPDATE = 54, + PDS_VDPA_CMD_SET_FEATURES = 55, + PDS_VDPA_CMD_SET_ATTR = 56, + PDS_VDPA_CMD_VQ_SET_STATE = 57, + PDS_VDPA_CMD_VQ_GET_STATE = 58, +}; + +/** + * struct pds_vdpa_cmd - generic command + * @opcode: Opcode + * @vdpa_index: Index for vdpa subdevice + * @vf_id: VF id + */ +struct pds_vdpa_cmd { + u8 opcode; + u8 vdpa_index; + __le16 vf_id; +}; + +/** + * struct pds_vdpa_init_cmd - INIT command + * @opcode: Opcode PDS_VDPA_CMD_INIT + * @vdpa_index: Index for vdpa subdevice + * @vf_id: VF id + */ +struct pds_vdpa_init_cmd { + u8 opcode; + u8 vdpa_index; + __le16 vf_id; +}; + +/** + * struct pds_vdpa_ident - vDPA identification data + * @hw_features: vDPA features supported by device + * @max_vqs: max queues available (2 queues for a single queuepair) + * @max_qlen: log(2) of maximum number of descriptors + * @min_qlen: log(2) of minimum number of descriptors + * + * This struct is used in a DMA block that is set up for the PDS_VDPA_CMD_IDENT + * transaction. Set up the DMA block and send the address in the IDENT cmd + * data, the DSC will write the ident information, then we can remove the DMA + * block after reading the answer. If the completion status is 0, then there + * is valid information, else there was an error and the data should be invalid. 
+ */ +struct pds_vdpa_ident { + __le64 hw_features; + __le16 max_vqs; + __le16 max_qlen; + __le16 min_qlen; +}; + +/** + * struct pds_vdpa_ident_cmd - IDENT command + * @opcode: Opcode PDS_VDPA_CMD_IDENT + * @rsvd: Word boundary padding + * @vf_id: VF id + * @len: length of ident info DMA space + * @ident_pa: address for DMA of ident info (struct pds_vdpa_ident) + * only used for this transaction, then forgotten by DSC + */ +struct pds_vdpa_ident_cmd { + u8 opcode; + u8 rsvd; + __le16 vf_id; + __le32 len; + __le64 ident_pa; +}; + +/** + * struct pds_vdpa_status_cmd - STATUS_UPDATE command + * @opcode: Opcode PDS_VDPA_CMD_STATUS_UPDATE + * @vdpa_index: Index for vdpa subdevice + * @vf_id: VF id + * @status: new status bits + */ +struct pds_vdpa_status_cmd { + u8 opcode; + u8 vdpa_index; + __le16 vf_id; + u8 status; +}; + +/** + * enum pds_vdpa_attr - List of VDPA device attributes + * @PDS_VDPA_ATTR_MAC: MAC address + * @PDS_VDPA_ATTR_MAX_VQ_PAIRS: Max virtqueue pairs + */ +enum pds_vdpa_attr { + PDS_VDPA_ATTR_MAC = 1, + PDS_VDPA_ATTR_MAX_VQ_PAIRS = 2, +}; + +/** + * struct pds_vdpa_setattr_cmd - SET_ATTR command + * @opcode: Opcode PDS_VDPA_CMD_SET_ATTR + * @vdpa_index: Index for vdpa subdevice + * @vf_id: VF id + * @attr: attribute to be changed (enum pds_vdpa_attr) + * @pad: Word boundary padding + * @mac: new mac address to be assigned as vdpa device address + * @max_vq_pairs: new limit of virtqueue pairs + */ +struct pds_vdpa_setattr_cmd { + u8 opcode; + u8 vdpa_index; + __le16 vf_id; + u8 attr; + u8 pad[3]; + union { + u8 mac[6]; + __le16 max_vq_pairs; + } __packed; +}; + +/** + * struct pds_vdpa_vq_init_cmd - queue init command + * @opcode: Opcode PDS_VDPA_CMD_VQ_INIT + * @vdpa_index: Index for vdpa subdevice + * @vf_id: VF id + * @qid: Queue id (bit0 clear = rx, bit0 set = tx, qid=N is ctrlq) + * @len: log(2) of max descriptor count + * @desc_addr: DMA address of descriptor area + * @avail_addr: DMA address of available descriptors (aka driver area) + * @used_addr: DMA address of used descriptors (aka device area) + * @intr_index: interrupt index + */ +struct pds_vdpa_vq_init_cmd { + u8 opcode; + u8 vdpa_index; + __le16 vf_id; + __le16 qid; + __le16 len; + __le64 desc_addr; + __le64 avail_addr; + __le64 used_addr; + __le16 intr_index; +}; + +/** + * struct pds_vdpa_vq_init_comp - queue init completion + * @status: Status of the command (enum pds_core_status_code) + * @hw_qtype: HW queue type, used in doorbell selection + * @hw_qindex: HW queue index, used in doorbell selection + * @rsvd: Word boundary padding + * @color: Color bit + */ +struct pds_vdpa_vq_init_comp { + u8 status; + u8 hw_qtype; + __le16 hw_qindex; + u8 rsvd[11]; + u8 color; +}; + +/** + * struct pds_vdpa_vq_reset_cmd - queue reset command + * @opcode: Opcode PDS_VDPA_CMD_VQ_RESET + * @vdpa_index: Index for vdpa subdevice + * @vf_id: VF id + * @qid: Queue id + */ +struct pds_vdpa_vq_reset_cmd { + u8 opcode; + u8 vdpa_index; + __le16 vf_id; + __le16 qid; +}; + +/** + * struct pds_vdpa_set_features_cmd - set hw features + * @opcode: Opcode PDS_VDPA_CMD_SET_FEATURES + * @vdpa_index: Index for vdpa subdevice + * @vf_id: VF id + * @rsvd: Word boundary padding + * @features: Feature bit mask + */ +struct pds_vdpa_set_features_cmd { + u8 opcode; + u8 vdpa_index; + __le16 vf_id; + __le32 rsvd; + __le64 features; +}; + +/** + * struct pds_vdpa_vq_set_state_cmd - set vq state + * @opcode: Opcode PDS_VDPA_CMD_VQ_SET_STATE + * @vdpa_index: Index for vdpa subdevice + * @vf_id: VF id + * @qid: Queue id + * @avail: Device avail 
index. + * @used: Device used index. + * + * If the virtqueue uses packed descriptor format, then the avail and used + * index must have a wrap count. The bits should be arranged like the upper + * 16 bits in the device available notification data: 15 bit index, 1 bit wrap. + */ +struct pds_vdpa_vq_set_state_cmd { + u8 opcode; + u8 vdpa_index; + __le16 vf_id; + __le16 qid; + __le16 avail; + __le16 used; +}; + +/** + * struct pds_vdpa_vq_get_state_cmd - get vq state + * @opcode: Opcode PDS_VDPA_CMD_VQ_GET_STATE + * @vdpa_index: Index for vdpa subdevice + * @vf_id: VF id + * @qid: Queue id + */ +struct pds_vdpa_vq_get_state_cmd { + u8 opcode; + u8 vdpa_index; + __le16 vf_id; + __le16 qid; +}; + +/** + * struct pds_vdpa_vq_get_state_comp - get vq state completion + * @status: Status of the command (enum pds_core_status_code) + * @rsvd0: Word boundary padding + * @avail: Device avail index. + * @used: Device used index. + * @rsvd: Word boundary padding + * @color: Color bit + * + * If the virtqueue uses packed descriptor format, then the avail and used + * index will have a wrap count. The bits will be arranged like the "next" + * part of device available notification data: 15 bit index, 1 bit wrap. + */ +struct pds_vdpa_vq_get_state_comp { + u8 status; + u8 rsvd0; + __le16 avail; + __le16 used; + u8 rsvd[9]; + u8 color; +}; + union pds_core_adminq_cmd { u8 opcode; u8 bytes[64]; @@ -621,6 +872,18 @@ union pds_core_adminq_cmd { struct pds_core_q_identify_cmd q_ident; struct pds_core_q_init_cmd q_init; + + struct pds_vdpa_cmd vdpa; + struct pds_vdpa_init_cmd vdpa_init; + struct pds_vdpa_ident_cmd vdpa_ident; + struct pds_vdpa_status_cmd vdpa_status; + struct pds_vdpa_setattr_cmd vdpa_setattr; + struct pds_vdpa_set_features_cmd vdpa_set_features; + struct pds_vdpa_vq_init_cmd vdpa_vq_init; + struct pds_vdpa_vq_reset_cmd vdpa_vq_reset; + struct pds_vdpa_vq_set_state_cmd vdpa_vq_set_state; + struct pds_vdpa_vq_get_state_cmd vdpa_vq_get_state; + }; union pds_core_adminq_comp { @@ -642,6 +905,9 @@ union pds_core_adminq_comp { struct pds_core_q_identify_comp q_ident; struct pds_core_q_init_comp q_init; + + struct pds_vdpa_vq_init_comp vdpa_vq_init; + struct pds_vdpa_vq_get_state_comp vdpa_vq_get_state; }; #ifndef __CHECKER__ -- 2.17.1
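To sanity-check the wire layout implied by the kernel-doc comments above, here is a small user-space mirror of two of the command structs with layout asserts; it is illustrative only, and the kernel header remains the authoritative definition.

/* User-space mirror of two adminq structures from the patch above, used only
 * to check the intended layout (natural alignment, 64-byte command union).
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct pds_vdpa_vq_init_cmd {
	uint8_t  opcode;
	uint8_t  vdpa_index;
	uint16_t vf_id;
	uint16_t qid;
	uint16_t len;		/* log2 of max descriptor count */
	uint64_t desc_addr;
	uint64_t avail_addr;
	uint64_t used_addr;
	uint16_t intr_index;
};

struct pds_vdpa_vq_set_state_cmd {
	uint8_t  opcode;
	uint8_t  vdpa_index;
	uint16_t vf_id;
	uint16_t qid;
	uint16_t avail;
	uint16_t used;
};

/* every command must fit in the 64-byte pds_core_adminq_cmd union */
_Static_assert(sizeof(struct pds_vdpa_vq_init_cmd) <= 64, "cmd too big");
_Static_assert(sizeof(struct pds_vdpa_vq_set_state_cmd) <= 64, "cmd too big");
_Static_assert(offsetof(struct pds_vdpa_vq_init_cmd, desc_addr) == 8,
	       "unexpected padding before desc_addr");

int main(void)
{
	printf("vq_init_cmd: %zu bytes, vq_set_state_cmd: %zu bytes\n",
	       sizeof(struct pds_vdpa_vq_init_cmd),
	       sizeof(struct pds_vdpa_vq_set_state_cmd));
	return 0;
}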
Shannon Nelson
2023-Apr-25 21:25 UTC
[PATCH v4 virtio 05/10] pds_vdpa: get vdpa management info
Find the vDPA management information from the DSC in order to advertise it to the vdpa subsystem. Signed-off-by: Shannon Nelson <shannon.nelson at amd.com> Acked-by: Jason Wang <jasowang at redhat.com> --- drivers/vdpa/pds/Makefile | 3 +- drivers/vdpa/pds/aux_drv.c | 17 ++++++ drivers/vdpa/pds/aux_drv.h | 7 +++ drivers/vdpa/pds/debugfs.c | 1 + drivers/vdpa/pds/vdpa_dev.c | 108 ++++++++++++++++++++++++++++++++++++ drivers/vdpa/pds/vdpa_dev.h | 15 +++++ 6 files changed, 150 insertions(+), 1 deletion(-) create mode 100644 drivers/vdpa/pds/vdpa_dev.c create mode 100644 drivers/vdpa/pds/vdpa_dev.h diff --git a/drivers/vdpa/pds/Makefile b/drivers/vdpa/pds/Makefile index a9cd2f450ae1..13b50394ec64 100644 --- a/drivers/vdpa/pds/Makefile +++ b/drivers/vdpa/pds/Makefile @@ -3,6 +3,7 @@ obj-$(CONFIG_PDS_VDPA) := pds_vdpa.o -pds_vdpa-y := aux_drv.o +pds_vdpa-y := aux_drv.o \ + vdpa_dev.o pds_vdpa-$(CONFIG_DEBUG_FS) += debugfs.o diff --git a/drivers/vdpa/pds/aux_drv.c b/drivers/vdpa/pds/aux_drv.c index e4a0ad61ea22..aa748cf55d2b 100644 --- a/drivers/vdpa/pds/aux_drv.c +++ b/drivers/vdpa/pds/aux_drv.c @@ -3,6 +3,7 @@ #include <linux/auxiliary_bus.h> #include <linux/pci.h> +#include <linux/vdpa.h> #include <linux/pds/pds_common.h> #include <linux/pds/pds_core_if.h> @@ -11,6 +12,7 @@ #include "aux_drv.h" #include "debugfs.h" +#include "vdpa_dev.h" static const struct auxiliary_device_id pds_vdpa_id_table[] = { { .name = PDS_VDPA_DEV_NAME, }, @@ -24,15 +26,28 @@ static int pds_vdpa_probe(struct auxiliary_device *aux_dev, struct pds_auxiliary_dev *padev container_of(aux_dev, struct pds_auxiliary_dev, aux_dev); struct pds_vdpa_aux *vdpa_aux; + int err; vdpa_aux = kzalloc(sizeof(*vdpa_aux), GFP_KERNEL); if (!vdpa_aux) return -ENOMEM; vdpa_aux->padev = padev; + vdpa_aux->vf_id = pci_iov_vf_id(padev->vf_pdev); auxiliary_set_drvdata(aux_dev, vdpa_aux); + /* Get device ident info and set up the vdpa_mgmt_dev */ + err = pds_vdpa_get_mgmt_info(vdpa_aux); + if (err) + goto err_free_mem; + return 0; + +err_free_mem: + kfree(vdpa_aux); + auxiliary_set_drvdata(aux_dev, NULL); + + return err; } static void pds_vdpa_remove(struct auxiliary_device *aux_dev) @@ -40,6 +55,8 @@ static void pds_vdpa_remove(struct auxiliary_device *aux_dev) struct pds_vdpa_aux *vdpa_aux = auxiliary_get_drvdata(aux_dev); struct device *dev = &aux_dev->dev; + pci_free_irq_vectors(vdpa_aux->padev->vf_pdev); + kfree(vdpa_aux); auxiliary_set_drvdata(aux_dev, NULL); diff --git a/drivers/vdpa/pds/aux_drv.h b/drivers/vdpa/pds/aux_drv.h index f1e99359424e..dcec782e79eb 100644 --- a/drivers/vdpa/pds/aux_drv.h +++ b/drivers/vdpa/pds/aux_drv.h @@ -10,6 +10,13 @@ struct pds_vdpa_aux { struct pds_auxiliary_dev *padev; + struct vdpa_mgmt_dev vdpa_mdev; + + struct pds_vdpa_ident ident; + + int vf_id; struct dentry *dentry; + + int nintrs; }; #endif /* _AUX_DRV_H_ */ diff --git a/drivers/vdpa/pds/debugfs.c b/drivers/vdpa/pds/debugfs.c index 5be22fb7a76a..d91dceb07380 100644 --- a/drivers/vdpa/pds/debugfs.c +++ b/drivers/vdpa/pds/debugfs.c @@ -2,6 +2,7 @@ /* Copyright(c) 2023 Advanced Micro Devices, Inc */ #include <linux/pci.h> +#include <linux/vdpa.h> #include <linux/pds/pds_common.h> #include <linux/pds/pds_core_if.h> diff --git a/drivers/vdpa/pds/vdpa_dev.c b/drivers/vdpa/pds/vdpa_dev.c new file mode 100644 index 000000000000..0f0f0ab8b811 --- /dev/null +++ b/drivers/vdpa/pds/vdpa_dev.c @@ -0,0 +1,108 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright(c) 2023 Advanced Micro Devices, Inc */ + +#include <linux/pci.h> +#include <linux/vdpa.h> +#include 
<uapi/linux/vdpa.h> + +#include <linux/pds/pds_common.h> +#include <linux/pds/pds_core_if.h> +#include <linux/pds/pds_adminq.h> +#include <linux/pds/pds_auxbus.h> + +#include "vdpa_dev.h" +#include "aux_drv.h" + +static struct virtio_device_id pds_vdpa_id_table[] = { + {VIRTIO_ID_NET, VIRTIO_DEV_ANY_ID}, + {0}, +}; + +static int pds_vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name, + const struct vdpa_dev_set_config *add_config) +{ + return -EOPNOTSUPP; +} + +static void pds_vdpa_dev_del(struct vdpa_mgmt_dev *mdev, + struct vdpa_device *vdpa_dev) +{ +} + +static const struct vdpa_mgmtdev_ops pds_vdpa_mgmt_dev_ops = { + .dev_add = pds_vdpa_dev_add, + .dev_del = pds_vdpa_dev_del +}; + +int pds_vdpa_get_mgmt_info(struct pds_vdpa_aux *vdpa_aux) +{ + union pds_core_adminq_cmd cmd = { + .vdpa_ident.opcode = PDS_VDPA_CMD_IDENT, + .vdpa_ident.vf_id = cpu_to_le16(vdpa_aux->vf_id), + }; + union pds_core_adminq_comp comp = {}; + struct vdpa_mgmt_dev *mgmt; + struct pci_dev *pf_pdev; + struct device *pf_dev; + struct pci_dev *pdev; + dma_addr_t ident_pa; + struct device *dev; + u16 dev_intrs; + u16 max_vqs; + int err; + + dev = &vdpa_aux->padev->aux_dev.dev; + pdev = vdpa_aux->padev->vf_pdev; + mgmt = &vdpa_aux->vdpa_mdev; + + /* Get resource info through the PF's adminq. It is a block of info, + * so we need to map some memory for PF to make available to the + * firmware for writing the data. + */ + pf_pdev = pci_physfn(vdpa_aux->padev->vf_pdev); + pf_dev = &pf_pdev->dev; + ident_pa = dma_map_single(pf_dev, &vdpa_aux->ident, + sizeof(vdpa_aux->ident), DMA_FROM_DEVICE); + if (dma_mapping_error(pf_dev, ident_pa)) { + dev_err(dev, "Failed to map ident space\n"); + return -ENOMEM; + } + + cmd.vdpa_ident.ident_pa = cpu_to_le64(ident_pa); + cmd.vdpa_ident.len = cpu_to_le32(sizeof(vdpa_aux->ident)); + err = pds_client_adminq_cmd(vdpa_aux->padev, &cmd, + sizeof(cmd.vdpa_ident), &comp, 0); + dma_unmap_single(pf_dev, ident_pa, + sizeof(vdpa_aux->ident), DMA_FROM_DEVICE); + if (err) { + dev_err(dev, "Failed to ident hw, status %d: %pe\n", + comp.status, ERR_PTR(err)); + return err; + } + + max_vqs = le16_to_cpu(vdpa_aux->ident.max_vqs); + dev_intrs = pci_msix_vec_count(pdev); + dev_dbg(dev, "ident.max_vqs %d dev_intrs %d\n", max_vqs, dev_intrs); + + max_vqs = min_t(u16, dev_intrs, max_vqs); + mgmt->max_supported_vqs = min_t(u16, PDS_VDPA_MAX_QUEUES, max_vqs); + vdpa_aux->nintrs = mgmt->max_supported_vqs; + + mgmt->ops = &pds_vdpa_mgmt_dev_ops; + mgmt->id_table = pds_vdpa_id_table; + mgmt->device = dev; + mgmt->supported_features = le64_to_cpu(vdpa_aux->ident.hw_features); + mgmt->config_attr_mask = BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MACADDR); + mgmt->config_attr_mask |= BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MAX_VQP); + + err = pci_alloc_irq_vectors(pdev, vdpa_aux->nintrs, vdpa_aux->nintrs, + PCI_IRQ_MSIX); + if (err < 0) { + dev_err(dev, "Couldn't get %d msix vectors: %pe\n", + vdpa_aux->nintrs, ERR_PTR(err)); + return err; + } + vdpa_aux->nintrs = err; + + return 0; +} diff --git a/drivers/vdpa/pds/vdpa_dev.h b/drivers/vdpa/pds/vdpa_dev.h new file mode 100644 index 000000000000..97fab833a0aa --- /dev/null +++ b/drivers/vdpa/pds/vdpa_dev.h @@ -0,0 +1,15 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright(c) 2023 Advanced Micro Devices, Inc */ + +#ifndef _VDPA_DEV_H_ +#define _VDPA_DEV_H_ + +#define PDS_VDPA_MAX_QUEUES 65 + +struct pds_vdpa_device { + struct vdpa_device vdpa_dev; + struct pds_vdpa_aux *vdpa_aux; +}; + +int pds_vdpa_get_mgmt_info(struct pds_vdpa_aux *vdpa_aux); +#endif /* _VDPA_DEV_H_ */ -- 
2.17.1
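The queue/interrupt sizing in pds_vdpa_get_mgmt_info() boils down to clamping the device's ident.max_vqs by the VF's MSI-X vector count and by the driver's PDS_VDPA_MAX_QUEUES limit; the short stand-alone sketch below models just that arithmetic, with arbitrary sample inputs.

/* Stand-alone model of the queue/interrupt sizing shown above: the
 * advertised queue count is the ident value clamped by available MSI-X
 * vectors and the driver limit.  Sample inputs are arbitrary.
 */
#include <stdint.h>
#include <stdio.h>

#define PDS_VDPA_MAX_QUEUES 65

static uint16_t max_supported_vqs(uint16_t ident_max_vqs, int msix_vectors)
{
	uint16_t vqs = ident_max_vqs;

	if (msix_vectors < vqs)
		vqs = msix_vectors;	/* can't run more vqs than vectors */
	if (vqs > PDS_VDPA_MAX_QUEUES)
		vqs = PDS_VDPA_MAX_QUEUES;

	return vqs;
}

int main(void)
{
	/* e.g. the device says 128 vqs but the VF only has 16 MSI-X vectors */
	printf("max_supported_vqs = %u\n", max_supported_vqs(128, 16));
	return 0;
}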
Shannon Nelson
2023-Apr-25 21:25 UTC
[PATCH v4 virtio 06/10] pds_vdpa: virtio bar setup for vdpa
Prep and use the "modern" virtio bar utilities to get our virtio config space ready. Signed-off-by: Shannon Nelson <shannon.nelson at amd.com> --- drivers/vdpa/pds/aux_drv.c | 25 +++++++++++++++++++++++++ drivers/vdpa/pds/aux_drv.h | 3 +++ 2 files changed, 28 insertions(+) diff --git a/drivers/vdpa/pds/aux_drv.c b/drivers/vdpa/pds/aux_drv.c index aa748cf55d2b..0c4a135b1484 100644 --- a/drivers/vdpa/pds/aux_drv.c +++ b/drivers/vdpa/pds/aux_drv.c @@ -4,6 +4,7 @@ #include <linux/auxiliary_bus.h> #include <linux/pci.h> #include <linux/vdpa.h> +#include <linux/virtio_pci_modern.h> #include <linux/pds/pds_common.h> #include <linux/pds/pds_core_if.h> @@ -19,12 +20,22 @@ static const struct auxiliary_device_id pds_vdpa_id_table[] = { {}, }; +static int pds_vdpa_device_id_check(struct pci_dev *pdev) +{ + if (pdev->device != PCI_DEVICE_ID_PENSANDO_VDPA_VF || + pdev->vendor != PCI_VENDOR_ID_PENSANDO) + return -ENODEV; + + return PCI_DEVICE_ID_PENSANDO_VDPA_VF; +} + static int pds_vdpa_probe(struct auxiliary_device *aux_dev, const struct auxiliary_device_id *id) { struct pds_auxiliary_dev *padev container_of(aux_dev, struct pds_auxiliary_dev, aux_dev); + struct device *dev = &aux_dev->dev; struct pds_vdpa_aux *vdpa_aux; int err; @@ -41,8 +52,21 @@ static int pds_vdpa_probe(struct auxiliary_device *aux_dev, if (err) goto err_free_mem; + /* Find the virtio configuration */ + vdpa_aux->vd_mdev.pci_dev = padev->vf_pdev; + vdpa_aux->vd_mdev.device_id_check = pds_vdpa_device_id_check; + vdpa_aux->vd_mdev.dma_mask = DMA_BIT_MASK(PDS_CORE_ADDR_LEN); + err = vp_modern_probe(&vdpa_aux->vd_mdev); + if (err) { + dev_err(dev, "Unable to probe for virtio configuration: %pe\n", + ERR_PTR(err)); + goto err_free_mgmt_info; + } + return 0; +err_free_mgmt_info: + pci_free_irq_vectors(padev->vf_pdev); err_free_mem: kfree(vdpa_aux); auxiliary_set_drvdata(aux_dev, NULL); @@ -55,6 +79,7 @@ static void pds_vdpa_remove(struct auxiliary_device *aux_dev) struct pds_vdpa_aux *vdpa_aux = auxiliary_get_drvdata(aux_dev); struct device *dev = &aux_dev->dev; + vp_modern_remove(&vdpa_aux->vd_mdev); pci_free_irq_vectors(vdpa_aux->padev->vf_pdev); kfree(vdpa_aux); diff --git a/drivers/vdpa/pds/aux_drv.h b/drivers/vdpa/pds/aux_drv.h index dcec782e79eb..99e0ff340bfa 100644 --- a/drivers/vdpa/pds/aux_drv.h +++ b/drivers/vdpa/pds/aux_drv.h @@ -4,6 +4,8 @@ #ifndef _AUX_DRV_H_ #define _AUX_DRV_H_ +#include <linux/virtio_pci_modern.h> + #define PDS_VDPA_DRV_DESCRIPTION "AMD/Pensando vDPA VF Device Driver" #define PDS_VDPA_DRV_NAME KBUILD_MODNAME @@ -16,6 +18,7 @@ struct pds_vdpa_aux { int vf_id; struct dentry *dentry; + struct virtio_pci_modern_device vd_mdev; int nintrs; }; -- 2.17.1
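The dma_mask override used here is simply DMA_BIT_MASK(PDS_CORE_ADDR_LEN) handed to vp_modern_probe(); the tiny demo below shows what such a mask looks like, with 52 bits used as an assumed stand-in for PDS_CORE_ADDR_LEN.

/* Quick look at the DMA mask override: DMA_BIT_MASK(n) is the lowest n bits
 * set.  The 52-bit width below is an assumption for the demo, not the real
 * PDS_CORE_ADDR_LEN definition.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define DMA_BIT_MASK(n)	(((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

#define PDS_CORE_ADDR_LEN 52	/* assumed for illustration */

int main(void)
{
	printf("default mask:  %#" PRIx64 "\n", (uint64_t)DMA_BIT_MASK(64));
	printf("override mask: %#" PRIx64 "\n",
	       (uint64_t)DMA_BIT_MASK(PDS_CORE_ADDR_LEN));
	return 0;
}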
Shannon Nelson
2023-Apr-25 21:25 UTC
[PATCH v4 virtio 07/10] pds_vdpa: add vdpa config client commands
These are the adminq commands that will be needed for setting up and using the vDPA device. There are a number of commands defined in the FW's API, but by making use of the FW's virtio BAR we only need a few of these commands for vDPA support. Signed-off-by: Shannon Nelson <shannon.nelson at amd.com> --- Note: the previous version was Acked-by Jason Wang, but this has gone through some rework of the adminq command API usage and added two new commands. It is still essentially the same code, but I've dropped the Acked-by for now. drivers/vdpa/pds/Makefile | 1 + drivers/vdpa/pds/cmds.c | 207 ++++++++++++++++++++++++++++++++++++ drivers/vdpa/pds/cmds.h | 20 ++++ drivers/vdpa/pds/vdpa_dev.h | 33 +++++- 4 files changed, 260 insertions(+), 1 deletion(-) create mode 100644 drivers/vdpa/pds/cmds.c create mode 100644 drivers/vdpa/pds/cmds.h diff --git a/drivers/vdpa/pds/Makefile b/drivers/vdpa/pds/Makefile index 13b50394ec64..2e22418e3ab3 100644 --- a/drivers/vdpa/pds/Makefile +++ b/drivers/vdpa/pds/Makefile @@ -4,6 +4,7 @@ obj-$(CONFIG_PDS_VDPA) := pds_vdpa.o pds_vdpa-y := aux_drv.o \ + cmds.o \ vdpa_dev.o pds_vdpa-$(CONFIG_DEBUG_FS) += debugfs.o diff --git a/drivers/vdpa/pds/cmds.c b/drivers/vdpa/pds/cmds.c new file mode 100644 index 000000000000..405711a0a0f8 --- /dev/null +++ b/drivers/vdpa/pds/cmds.c @@ -0,0 +1,207 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright(c) 2023 Advanced Micro Devices, Inc */ + +#include <linux/vdpa.h> +#include <linux/virtio_pci_modern.h> + +#include <linux/pds/pds_common.h> +#include <linux/pds/pds_core_if.h> +#include <linux/pds/pds_adminq.h> +#include <linux/pds/pds_auxbus.h> + +#include "vdpa_dev.h" +#include "aux_drv.h" +#include "cmds.h" + +int pds_vdpa_init_hw(struct pds_vdpa_device *pdsv) +{ + struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev; + struct device *dev = &padev->aux_dev.dev; + union pds_core_adminq_cmd cmd = { + .vdpa_init.opcode = PDS_VDPA_CMD_INIT, + .vdpa_init.vdpa_index = pdsv->vdpa_index, + .vdpa_init.vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id), + }; + union pds_core_adminq_comp comp = {}; + int err; + + /* Initialize the vdpa/virtio device */ + err = pds_client_adminq_cmd(padev, &cmd, sizeof(cmd.vdpa_init), + &comp, 0); + if (err) + dev_dbg(dev, "Failed to init hw, status %d: %pe\n", + comp.status, ERR_PTR(err)); + + return err; +} + +int pds_vdpa_cmd_reset(struct pds_vdpa_device *pdsv) +{ + struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev; + struct device *dev = &padev->aux_dev.dev; + union pds_core_adminq_cmd cmd = { + .vdpa.opcode = PDS_VDPA_CMD_RESET, + .vdpa.vdpa_index = pdsv->vdpa_index, + .vdpa.vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id), + }; + union pds_core_adminq_comp comp = {}; + int err; + + err = pds_client_adminq_cmd(padev, &cmd, sizeof(cmd.vdpa), &comp, 0); + if (err) + dev_dbg(dev, "Failed to reset hw, status %d: %pe\n", + comp.status, ERR_PTR(err)); + + return err; +} + +int pds_vdpa_cmd_set_mac(struct pds_vdpa_device *pdsv, u8 *mac) +{ + struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev; + struct device *dev = &padev->aux_dev.dev; + union pds_core_adminq_cmd cmd = { + .vdpa_setattr.opcode = PDS_VDPA_CMD_SET_ATTR, + .vdpa_setattr.vdpa_index = pdsv->vdpa_index, + .vdpa_setattr.vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id), + .vdpa_setattr.attr = PDS_VDPA_ATTR_MAC, + }; + union pds_core_adminq_comp comp = {}; + int err; + + ether_addr_copy(cmd.vdpa_setattr.mac, mac); + err = pds_client_adminq_cmd(padev, &cmd, sizeof(cmd.vdpa_setattr), + &comp, 0); + if (err) + dev_dbg(dev, "Failed to set mac address 
%pM, status %d: %pe\n", + mac, comp.status, ERR_PTR(err)); + + return err; +} + +int pds_vdpa_cmd_set_max_vq_pairs(struct pds_vdpa_device *pdsv, u16 max_vqp) +{ + struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev; + struct device *dev = &padev->aux_dev.dev; + union pds_core_adminq_cmd cmd = { + .vdpa_setattr.opcode = PDS_VDPA_CMD_SET_ATTR, + .vdpa_setattr.vdpa_index = pdsv->vdpa_index, + .vdpa_setattr.vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id), + .vdpa_setattr.attr = PDS_VDPA_ATTR_MAX_VQ_PAIRS, + .vdpa_setattr.max_vq_pairs = cpu_to_le16(max_vqp), + }; + union pds_core_adminq_comp comp = {}; + int err; + + err = pds_client_adminq_cmd(padev, &cmd, sizeof(cmd.vdpa_setattr), + &comp, 0); + if (err) + dev_dbg(dev, "Failed to set max vq pairs %u, status %d: %pe\n", + max_vqp, comp.status, ERR_PTR(err)); + + return err; +} + +int pds_vdpa_cmd_init_vq(struct pds_vdpa_device *pdsv, u16 qid, + struct pds_vdpa_vq_info *vq_info) +{ + struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev; + struct device *dev = &padev->aux_dev.dev; + union pds_core_adminq_cmd cmd = { + .vdpa_vq_init.opcode = PDS_VDPA_CMD_VQ_INIT, + .vdpa_vq_init.vdpa_index = pdsv->vdpa_index, + .vdpa_vq_init.vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id), + .vdpa_vq_init.qid = cpu_to_le16(qid), + .vdpa_vq_init.len = cpu_to_le16(ilog2(vq_info->q_len)), + .vdpa_vq_init.desc_addr = cpu_to_le64(vq_info->desc_addr), + .vdpa_vq_init.avail_addr = cpu_to_le64(vq_info->avail_addr), + .vdpa_vq_init.used_addr = cpu_to_le64(vq_info->used_addr), + .vdpa_vq_init.intr_index = cpu_to_le16(qid), + }; + union pds_core_adminq_comp comp = {}; + int err; + + dev_dbg(dev, "%s: qid %d len %d desc_addr %#llx avail_addr %#llx used_addr %#llx\n", + __func__, qid, ilog2(vq_info->q_len), + vq_info->desc_addr, vq_info->avail_addr, vq_info->used_addr); + + err = pds_client_adminq_cmd(padev, &cmd, sizeof(cmd.vdpa_vq_init), + &comp, 0); + if (err) + dev_dbg(dev, "Failed to init vq %d, status %d: %pe\n", + qid, comp.status, ERR_PTR(err)); + + return err; +} + +int pds_vdpa_cmd_reset_vq(struct pds_vdpa_device *pdsv, u16 qid) +{ + struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev; + struct device *dev = &padev->aux_dev.dev; + union pds_core_adminq_cmd cmd = { + .vdpa_vq_reset.opcode = PDS_VDPA_CMD_VQ_RESET, + .vdpa_vq_reset.vdpa_index = pdsv->vdpa_index, + .vdpa_vq_reset.vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id), + .vdpa_vq_reset.qid = cpu_to_le16(qid), + }; + union pds_core_adminq_comp comp = {}; + int err; + + err = pds_client_adminq_cmd(padev, &cmd, sizeof(cmd.vdpa_vq_reset), + &comp, 0); + if (err) + dev_dbg(dev, "Failed to reset vq %d, status %d: %pe\n", + qid, comp.status, ERR_PTR(err)); + + return err; +} + +int pds_vdpa_cmd_set_vq_state(struct pds_vdpa_device *pdsv, + u16 qid, u16 avail, u16 used) +{ struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev; + struct device *dev = &padev->aux_dev.dev; + union pds_core_adminq_cmd cmd = { + .vdpa_vq_set_state.opcode = PDS_VDPA_CMD_VQ_SET_STATE, + .vdpa_vq_set_state.vdpa_index = pdsv->vdpa_index, + .vdpa_vq_set_state.vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id), + .vdpa_vq_set_state.qid = cpu_to_le16(qid), + .vdpa_vq_set_state.avail = cpu_to_le16(avail), + .vdpa_vq_set_state.used = cpu_to_le16(used), + }; + union pds_core_adminq_comp comp = {}; + int err; + + err = pds_client_adminq_cmd(padev, &cmd, sizeof(cmd.vdpa_vq_set_state), + &comp, 0); + if (err) + dev_dbg(dev, "Failed to set state vq %d, status %d: %pe\n", + qid, comp.status, ERR_PTR(err)); + + return err; +} + +int pds_vdpa_cmd_get_vq_state(struct 
pds_vdpa_device *pdsv, + u16 qid, u16 *avail, u16 *used) +{ struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev; + struct device *dev = &padev->aux_dev.dev; + union pds_core_adminq_cmd cmd = { + .vdpa_vq_get_state.opcode = PDS_VDPA_CMD_VQ_SET_STATE, + .vdpa_vq_get_state.vdpa_index = pdsv->vdpa_index, + .vdpa_vq_get_state.vf_id = cpu_to_le16(pdsv->vdpa_aux->vf_id), + .vdpa_vq_get_state.qid = cpu_to_le16(qid), + }; + union pds_core_adminq_comp comp = {}; + int err; + + err = pds_client_adminq_cmd(padev, &cmd, sizeof(cmd.vdpa_vq_set_state), + &comp, 0); + if (err) { + dev_dbg(dev, "Failed to set state vq %d, status %d: %pe\n", + qid, comp.status, ERR_PTR(err)); + return err; + } + + *avail = le16_to_cpu(comp.vdpa_vq_get_state.avail); + *used = le16_to_cpu(comp.vdpa_vq_get_state.used); + + return 0; +} diff --git a/drivers/vdpa/pds/cmds.h b/drivers/vdpa/pds/cmds.h new file mode 100644 index 000000000000..cf4f8764e73c --- /dev/null +++ b/drivers/vdpa/pds/cmds.h @@ -0,0 +1,20 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright(c) 2023 Advanced Micro Devices, Inc */ + +#ifndef _VDPA_CMDS_H_ +#define _VDPA_CMDS_H_ + +int pds_vdpa_init_hw(struct pds_vdpa_device *pdsv); + +int pds_vdpa_cmd_reset(struct pds_vdpa_device *pdsv); +int pds_vdpa_cmd_set_mac(struct pds_vdpa_device *pdsv, u8 *mac); +int pds_vdpa_cmd_set_max_vq_pairs(struct pds_vdpa_device *pdsv, u16 max_vqp); +int pds_vdpa_cmd_init_vq(struct pds_vdpa_device *pdsv, u16 qid, + struct pds_vdpa_vq_info *vq_info); +int pds_vdpa_cmd_reset_vq(struct pds_vdpa_device *pdsv, u16 qid); +int pds_vdpa_cmd_set_features(struct pds_vdpa_device *pdsv, u64 features); +int pds_vdpa_cmd_set_vq_state(struct pds_vdpa_device *pdsv, + u16 qid, u16 avail, u16 used); +int pds_vdpa_cmd_get_vq_state(struct pds_vdpa_device *pdsv, + u16 qid, u16 *avail, u16 *used); +#endif /* _VDPA_CMDS_H_ */ diff --git a/drivers/vdpa/pds/vdpa_dev.h b/drivers/vdpa/pds/vdpa_dev.h index 97fab833a0aa..a21596f438c1 100644 --- a/drivers/vdpa/pds/vdpa_dev.h +++ b/drivers/vdpa/pds/vdpa_dev.h @@ -4,11 +4,42 @@ #ifndef _VDPA_DEV_H_ #define _VDPA_DEV_H_ -#define PDS_VDPA_MAX_QUEUES 65 +#include <linux/pci.h> +#include <linux/vdpa.h> + +struct pds_vdpa_vq_info { + bool ready; + u64 desc_addr; + u64 avail_addr; + u64 used_addr; + u32 q_len; + u16 qid; + int irq; + char irq_name[32]; + + void __iomem *notify; + dma_addr_t notify_pa; + + u64 doorbell; + u16 avail_idx; + u16 used_idx; + struct vdpa_callback event_cb; + struct pds_vdpa_device *pdsv; +}; + +#define PDS_VDPA_MAX_QUEUES 65 +#define PDS_VDPA_MAX_QLEN 32768 struct pds_vdpa_device { struct vdpa_device vdpa_dev; struct pds_vdpa_aux *vdpa_aux; + + struct pds_vdpa_vq_info vqs[PDS_VDPA_MAX_QUEUES]; + u64 req_features; /* features requested by vdpa */ + u64 actual_features; /* features negotiated and in use */ + u8 vdpa_index; /* rsvd for future subdevice use */ + u8 num_vqs; /* num vqs in use */ + struct vdpa_callback config_cb; }; int pds_vdpa_get_mgmt_info(struct pds_vdpa_aux *vdpa_aux); -- 2.17.1
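The set/get_vq_state commands above carry the packed-ring state as a 15-bit index plus a 1-bit wrap counter, per the pds_adminq.h comments; this stand-alone sketch shows the encode/decode arithmetic (split rings just pass the plain 16-bit avail index).

/* Encode/decode of the packed-ring state used by VQ_SET_STATE/VQ_GET_STATE:
 * 15-bit last index in the low bits, wrap counter in bit 15.
 */
#include <stdint.h>
#include <stdio.h>

static uint16_t pack_vq_idx(uint16_t last_idx, uint16_t wrap_counter)
{
	return (last_idx & 0x7fff) | ((wrap_counter & 1) << 15);
}

static void unpack_vq_idx(uint16_t encoded, uint16_t *last_idx,
			  uint16_t *wrap_counter)
{
	*last_idx = encoded & 0x7fff;
	*wrap_counter = encoded >> 15;
}

int main(void)
{
	uint16_t idx, wrap;
	uint16_t enc = pack_vq_idx(300, 1);

	unpack_vq_idx(enc, &idx, &wrap);
	printf("encoded %#x -> idx %u wrap %u\n", enc, idx, wrap);
	return 0;
}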
Shannon Nelson
2023-Apr-25 21:26 UTC
[PATCH v4 virtio 08/10] pds_vdpa: add support for vdpa and vdpamgmt interfaces
This is the vDPA device support, where we advertise that we can support the virtio queues and deal with the configuration work through the pds_core's adminq. Signed-off-by: Shannon Nelson <shannon.nelson at amd.com> --- drivers/vdpa/pds/aux_drv.c | 15 + drivers/vdpa/pds/aux_drv.h | 1 + drivers/vdpa/pds/debugfs.c | 261 ++++++++++++++++++ drivers/vdpa/pds/debugfs.h | 5 + drivers/vdpa/pds/vdpa_dev.c | 532 +++++++++++++++++++++++++++++++++++- 5 files changed, 813 insertions(+), 1 deletion(-) diff --git a/drivers/vdpa/pds/aux_drv.c b/drivers/vdpa/pds/aux_drv.c index 0c4a135b1484..186e9ee22eb1 100644 --- a/drivers/vdpa/pds/aux_drv.c +++ b/drivers/vdpa/pds/aux_drv.c @@ -63,8 +63,21 @@ static int pds_vdpa_probe(struct auxiliary_device *aux_dev, goto err_free_mgmt_info; } + /* Let vdpa know that we can provide devices */ + err = vdpa_mgmtdev_register(&vdpa_aux->vdpa_mdev); + if (err) { + dev_err(dev, "%s: Failed to initialize vdpa_mgmt interface: %pe\n", + __func__, ERR_PTR(err)); + goto err_free_virtio; + } + + pds_vdpa_debugfs_add_pcidev(vdpa_aux); + pds_vdpa_debugfs_add_ident(vdpa_aux); + return 0; +err_free_virtio: + vp_modern_remove(&vdpa_aux->vd_mdev); err_free_mgmt_info: pci_free_irq_vectors(padev->vf_pdev); err_free_mem: @@ -79,9 +92,11 @@ static void pds_vdpa_remove(struct auxiliary_device *aux_dev) struct pds_vdpa_aux *vdpa_aux = auxiliary_get_drvdata(aux_dev); struct device *dev = &aux_dev->dev; + vdpa_mgmtdev_unregister(&vdpa_aux->vdpa_mdev); vp_modern_remove(&vdpa_aux->vd_mdev); pci_free_irq_vectors(vdpa_aux->padev->vf_pdev); + pds_vdpa_debugfs_del_vdpadev(vdpa_aux); kfree(vdpa_aux); auxiliary_set_drvdata(aux_dev, NULL); diff --git a/drivers/vdpa/pds/aux_drv.h b/drivers/vdpa/pds/aux_drv.h index 99e0ff340bfa..26b75344156e 100644 --- a/drivers/vdpa/pds/aux_drv.h +++ b/drivers/vdpa/pds/aux_drv.h @@ -13,6 +13,7 @@ struct pds_vdpa_aux { struct pds_auxiliary_dev *padev; struct vdpa_mgmt_dev vdpa_mdev; + struct pds_vdpa_device *pdsv; struct pds_vdpa_ident ident; diff --git a/drivers/vdpa/pds/debugfs.c b/drivers/vdpa/pds/debugfs.c index d91dceb07380..0ecd0e2ec6b9 100644 --- a/drivers/vdpa/pds/debugfs.c +++ b/drivers/vdpa/pds/debugfs.c @@ -10,6 +10,7 @@ #include <linux/pds/pds_auxbus.h> #include "aux_drv.h" +#include "vdpa_dev.h" #include "debugfs.h" static struct dentry *dbfs_dir; @@ -24,3 +25,263 @@ void pds_vdpa_debugfs_destroy(void) debugfs_remove_recursive(dbfs_dir); dbfs_dir = NULL; } + +#define PRINT_SBIT_NAME(__seq, __f, __name) \ + do { \ + if ((__f) & (__name)) \ + seq_printf(__seq, " %s", &#__name[16]); \ + } while (0) + +static void print_status_bits(struct seq_file *seq, u8 status) +{ + seq_puts(seq, "status:"); + PRINT_SBIT_NAME(seq, status, VIRTIO_CONFIG_S_ACKNOWLEDGE); + PRINT_SBIT_NAME(seq, status, VIRTIO_CONFIG_S_DRIVER); + PRINT_SBIT_NAME(seq, status, VIRTIO_CONFIG_S_DRIVER_OK); + PRINT_SBIT_NAME(seq, status, VIRTIO_CONFIG_S_FEATURES_OK); + PRINT_SBIT_NAME(seq, status, VIRTIO_CONFIG_S_NEEDS_RESET); + PRINT_SBIT_NAME(seq, status, VIRTIO_CONFIG_S_FAILED); + seq_puts(seq, "\n"); +} + +static void print_feature_bits_all(struct seq_file *seq, u64 features) +{ + int i; + + seq_puts(seq, "features:"); + + for (i = 0; i < (sizeof(u64) * 8); i++) { + u64 mask = BIT_ULL(i); + + switch (features & mask) { + case BIT_ULL(VIRTIO_NET_F_CSUM): + seq_puts(seq, " VIRTIO_NET_F_CSUM"); + break; + case BIT_ULL(VIRTIO_NET_F_GUEST_CSUM): + seq_puts(seq, " VIRTIO_NET_F_GUEST_CSUM"); + break; + case BIT_ULL(VIRTIO_NET_F_CTRL_GUEST_OFFLOADS): + seq_puts(seq, " VIRTIO_NET_F_CTRL_GUEST_OFFLOADS"); + 
break; + case BIT_ULL(VIRTIO_NET_F_MTU): + seq_puts(seq, " VIRTIO_NET_F_MTU"); + break; + case BIT_ULL(VIRTIO_NET_F_MAC): + seq_puts(seq, " VIRTIO_NET_F_MAC"); + break; + case BIT_ULL(VIRTIO_NET_F_GUEST_TSO4): + seq_puts(seq, " VIRTIO_NET_F_GUEST_TSO4"); + break; + case BIT_ULL(VIRTIO_NET_F_GUEST_TSO6): + seq_puts(seq, " VIRTIO_NET_F_GUEST_TSO6"); + break; + case BIT_ULL(VIRTIO_NET_F_GUEST_ECN): + seq_puts(seq, " VIRTIO_NET_F_GUEST_ECN"); + break; + case BIT_ULL(VIRTIO_NET_F_GUEST_UFO): + seq_puts(seq, " VIRTIO_NET_F_GUEST_UFO"); + break; + case BIT_ULL(VIRTIO_NET_F_HOST_TSO4): + seq_puts(seq, " VIRTIO_NET_F_HOST_TSO4"); + break; + case BIT_ULL(VIRTIO_NET_F_HOST_TSO6): + seq_puts(seq, " VIRTIO_NET_F_HOST_TSO6"); + break; + case BIT_ULL(VIRTIO_NET_F_HOST_ECN): + seq_puts(seq, " VIRTIO_NET_F_HOST_ECN"); + break; + case BIT_ULL(VIRTIO_NET_F_HOST_UFO): + seq_puts(seq, " VIRTIO_NET_F_HOST_UFO"); + break; + case BIT_ULL(VIRTIO_NET_F_MRG_RXBUF): + seq_puts(seq, " VIRTIO_NET_F_MRG_RXBUF"); + break; + case BIT_ULL(VIRTIO_NET_F_STATUS): + seq_puts(seq, " VIRTIO_NET_F_STATUS"); + break; + case BIT_ULL(VIRTIO_NET_F_CTRL_VQ): + seq_puts(seq, " VIRTIO_NET_F_CTRL_VQ"); + break; + case BIT_ULL(VIRTIO_NET_F_CTRL_RX): + seq_puts(seq, " VIRTIO_NET_F_CTRL_RX"); + break; + case BIT_ULL(VIRTIO_NET_F_CTRL_VLAN): + seq_puts(seq, " VIRTIO_NET_F_CTRL_VLAN"); + break; + case BIT_ULL(VIRTIO_NET_F_CTRL_RX_EXTRA): + seq_puts(seq, " VIRTIO_NET_F_CTRL_RX_EXTRA"); + break; + case BIT_ULL(VIRTIO_NET_F_GUEST_ANNOUNCE): + seq_puts(seq, " VIRTIO_NET_F_GUEST_ANNOUNCE"); + break; + case BIT_ULL(VIRTIO_NET_F_MQ): + seq_puts(seq, " VIRTIO_NET_F_MQ"); + break; + case BIT_ULL(VIRTIO_NET_F_CTRL_MAC_ADDR): + seq_puts(seq, " VIRTIO_NET_F_CTRL_MAC_ADDR"); + break; + case BIT_ULL(VIRTIO_NET_F_HASH_REPORT): + seq_puts(seq, " VIRTIO_NET_F_HASH_REPORT"); + break; + case BIT_ULL(VIRTIO_NET_F_RSS): + seq_puts(seq, " VIRTIO_NET_F_RSS"); + break; + case BIT_ULL(VIRTIO_NET_F_RSC_EXT): + seq_puts(seq, " VIRTIO_NET_F_RSC_EXT"); + break; + case BIT_ULL(VIRTIO_NET_F_STANDBY): + seq_puts(seq, " VIRTIO_NET_F_STANDBY"); + break; + case BIT_ULL(VIRTIO_NET_F_SPEED_DUPLEX): + seq_puts(seq, " VIRTIO_NET_F_SPEED_DUPLEX"); + break; + case BIT_ULL(VIRTIO_F_NOTIFY_ON_EMPTY): + seq_puts(seq, " VIRTIO_F_NOTIFY_ON_EMPTY"); + break; + case BIT_ULL(VIRTIO_F_ANY_LAYOUT): + seq_puts(seq, " VIRTIO_F_ANY_LAYOUT"); + break; + case BIT_ULL(VIRTIO_F_VERSION_1): + seq_puts(seq, " VIRTIO_F_VERSION_1"); + break; + case BIT_ULL(VIRTIO_F_ACCESS_PLATFORM): + seq_puts(seq, " VIRTIO_F_ACCESS_PLATFORM"); + break; + case BIT_ULL(VIRTIO_F_RING_PACKED): + seq_puts(seq, " VIRTIO_F_RING_PACKED"); + break; + case BIT_ULL(VIRTIO_F_ORDER_PLATFORM): + seq_puts(seq, " VIRTIO_F_ORDER_PLATFORM"); + break; + case BIT_ULL(VIRTIO_F_SR_IOV): + seq_puts(seq, " VIRTIO_F_SR_IOV"); + break; + case 0: + break; + default: + seq_printf(seq, " bit_%d", i); + break; + } + } + + seq_puts(seq, "\n"); +} + +void pds_vdpa_debugfs_add_pcidev(struct pds_vdpa_aux *vdpa_aux) +{ + vdpa_aux->dentry = debugfs_create_dir(pci_name(vdpa_aux->padev->vf_pdev), dbfs_dir); +} + +static int identity_show(struct seq_file *seq, void *v) +{ + struct pds_vdpa_aux *vdpa_aux = seq->private; + struct vdpa_mgmt_dev *mgmt; + + seq_printf(seq, "aux_dev: %s\n", + dev_name(&vdpa_aux->padev->aux_dev.dev)); + + mgmt = &vdpa_aux->vdpa_mdev; + seq_printf(seq, "max_vqs: %d\n", mgmt->max_supported_vqs); + seq_printf(seq, "config_attr_mask: %#llx\n", mgmt->config_attr_mask); + seq_printf(seq, "supported_features: %#llx\n", 
mgmt->supported_features); + print_feature_bits_all(seq, mgmt->supported_features); + + return 0; +} +DEFINE_SHOW_ATTRIBUTE(identity); + +void pds_vdpa_debugfs_add_ident(struct pds_vdpa_aux *vdpa_aux) +{ + debugfs_create_file("identity", 0400, vdpa_aux->dentry, + vdpa_aux, &identity_fops); +} + +static int config_show(struct seq_file *seq, void *v) +{ + struct pds_vdpa_device *pdsv = seq->private; + struct virtio_net_config vc; + u8 status; + + memcpy_fromio(&vc, pdsv->vdpa_aux->vd_mdev.device, + sizeof(struct virtio_net_config)); + + seq_printf(seq, "mac: %pM\n", vc.mac); + seq_printf(seq, "max_virtqueue_pairs: %d\n", + __virtio16_to_cpu(true, vc.max_virtqueue_pairs)); + seq_printf(seq, "mtu: %d\n", __virtio16_to_cpu(true, vc.mtu)); + seq_printf(seq, "speed: %d\n", le32_to_cpu(vc.speed)); + seq_printf(seq, "duplex: %d\n", vc.duplex); + seq_printf(seq, "rss_max_key_size: %d\n", vc.rss_max_key_size); + seq_printf(seq, "rss_max_indirection_table_length: %d\n", + le16_to_cpu(vc.rss_max_indirection_table_length)); + seq_printf(seq, "supported_hash_types: %#x\n", + le32_to_cpu(vc.supported_hash_types)); + seq_printf(seq, "vn_status: %#x\n", + __virtio16_to_cpu(true, vc.status)); + + status = vp_modern_get_status(&pdsv->vdpa_aux->vd_mdev); + seq_printf(seq, "dev_status: %#x\n", status); + print_status_bits(seq, status); + + seq_printf(seq, "req_features: %#llx\n", pdsv->req_features); + print_feature_bits_all(seq, pdsv->req_features); + seq_printf(seq, "actual_features: %#llx\n", pdsv->actual_features); + print_feature_bits_all(seq, pdsv->actual_features); + seq_printf(seq, "vdpa_index: %d\n", pdsv->vdpa_index); + seq_printf(seq, "num_vqs: %d\n", pdsv->num_vqs); + + return 0; +} +DEFINE_SHOW_ATTRIBUTE(config); + +static int vq_show(struct seq_file *seq, void *v) +{ + struct pds_vdpa_vq_info *vq = seq->private; + + seq_printf(seq, "ready: %d\n", vq->ready); + seq_printf(seq, "desc_addr: %#llx\n", vq->desc_addr); + seq_printf(seq, "avail_addr: %#llx\n", vq->avail_addr); + seq_printf(seq, "used_addr: %#llx\n", vq->used_addr); + seq_printf(seq, "q_len: %d\n", vq->q_len); + seq_printf(seq, "qid: %d\n", vq->qid); + + seq_printf(seq, "doorbell: %#llx\n", vq->doorbell); + seq_printf(seq, "avail_idx: %d\n", vq->avail_idx); + seq_printf(seq, "used_idx: %d\n", vq->used_idx); + seq_printf(seq, "irq: %d\n", vq->irq); + seq_printf(seq, "irq-name: %s\n", vq->irq_name); + + return 0; +} +DEFINE_SHOW_ATTRIBUTE(vq); + +void pds_vdpa_debugfs_add_vdpadev(struct pds_vdpa_aux *vdpa_aux) +{ + int i; + + debugfs_create_file("config", 0400, vdpa_aux->dentry, vdpa_aux->pdsv, &config_fops); + + for (i = 0; i < vdpa_aux->pdsv->num_vqs; i++) { + char name[8]; + + snprintf(name, sizeof(name), "vq%02d", i); + debugfs_create_file(name, 0400, vdpa_aux->dentry, + &vdpa_aux->pdsv->vqs[i], &vq_fops); + } +} + +void pds_vdpa_debugfs_del_vdpadev(struct pds_vdpa_aux *vdpa_aux) +{ + debugfs_remove_recursive(vdpa_aux->dentry); + vdpa_aux->dentry = NULL; +} + +void pds_vdpa_debugfs_reset_vdpadev(struct pds_vdpa_aux *vdpa_aux) +{ + /* we don't keep track of the entries, so remove it all + * then rebuild the basics + */ + pds_vdpa_debugfs_del_vdpadev(vdpa_aux); + pds_vdpa_debugfs_add_pcidev(vdpa_aux); + pds_vdpa_debugfs_add_ident(vdpa_aux); +} diff --git a/drivers/vdpa/pds/debugfs.h b/drivers/vdpa/pds/debugfs.h index 658849591a99..c088a4e8f1e9 100644 --- a/drivers/vdpa/pds/debugfs.h +++ b/drivers/vdpa/pds/debugfs.h @@ -8,5 +8,10 @@ void pds_vdpa_debugfs_create(void); void pds_vdpa_debugfs_destroy(void); +void 
pds_vdpa_debugfs_add_pcidev(struct pds_vdpa_aux *vdpa_aux); +void pds_vdpa_debugfs_add_ident(struct pds_vdpa_aux *vdpa_aux); +void pds_vdpa_debugfs_add_vdpadev(struct pds_vdpa_aux *vdpa_aux); +void pds_vdpa_debugfs_del_vdpadev(struct pds_vdpa_aux *vdpa_aux); +void pds_vdpa_debugfs_reset_vdpadev(struct pds_vdpa_aux *vdpa_aux); #endif /* _PDS_VDPA_DEBUGFS_H_ */ diff --git a/drivers/vdpa/pds/vdpa_dev.c b/drivers/vdpa/pds/vdpa_dev.c index 0f0f0ab8b811..c3316f0faa0c 100644 --- a/drivers/vdpa/pds/vdpa_dev.c +++ b/drivers/vdpa/pds/vdpa_dev.c @@ -4,6 +4,7 @@ #include <linux/pci.h> #include <linux/vdpa.h> #include <uapi/linux/vdpa.h> +#include <linux/virtio_pci_modern.h> #include <linux/pds/pds_common.h> #include <linux/pds/pds_core_if.h> @@ -12,7 +13,406 @@ #include "vdpa_dev.h" #include "aux_drv.h" +#include "cmds.h" +#include "debugfs.h" +static struct pds_vdpa_device *vdpa_to_pdsv(struct vdpa_device *vdpa_dev) +{ + return container_of(vdpa_dev, struct pds_vdpa_device, vdpa_dev); +} + +static int pds_vdpa_set_vq_address(struct vdpa_device *vdpa_dev, u16 qid, + u64 desc_addr, u64 driver_addr, u64 device_addr) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + + pdsv->vqs[qid].desc_addr = desc_addr; + pdsv->vqs[qid].avail_addr = driver_addr; + pdsv->vqs[qid].used_addr = device_addr; + + return 0; +} + +static void pds_vdpa_set_vq_num(struct vdpa_device *vdpa_dev, u16 qid, u32 num) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + + pdsv->vqs[qid].q_len = num; +} + +static void pds_vdpa_kick_vq(struct vdpa_device *vdpa_dev, u16 qid) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + + iowrite16(qid, pdsv->vqs[qid].notify); +} + +static void pds_vdpa_set_vq_cb(struct vdpa_device *vdpa_dev, u16 qid, + struct vdpa_callback *cb) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + + pdsv->vqs[qid].event_cb = *cb; +} + +static irqreturn_t pds_vdpa_isr(int irq, void *data) +{ + struct pds_vdpa_vq_info *vq; + + vq = data; + if (vq->event_cb.callback) + vq->event_cb.callback(vq->event_cb.private); + + return IRQ_HANDLED; +} + +static void pds_vdpa_release_irq(struct pds_vdpa_device *pdsv, int qid) +{ + if (pdsv->vqs[qid].irq == VIRTIO_MSI_NO_VECTOR) + return; + + free_irq(pdsv->vqs[qid].irq, &pdsv->vqs[qid]); + pdsv->vqs[qid].irq = VIRTIO_MSI_NO_VECTOR; +} + +static void pds_vdpa_set_vq_ready(struct vdpa_device *vdpa_dev, u16 qid, bool ready) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + struct pci_dev *pdev = pdsv->vdpa_aux->padev->vf_pdev; + struct device *dev = &pdsv->vdpa_dev.dev; + int irq; + int err; + + dev_dbg(dev, "%s: qid %d ready %d => %d\n", + __func__, qid, pdsv->vqs[qid].ready, ready); + if (ready == pdsv->vqs[qid].ready) + return; + + if (ready) { + irq = pci_irq_vector(pdev, qid); + snprintf(pdsv->vqs[qid].irq_name, sizeof(pdsv->vqs[qid].irq_name), + "vdpa-%s-%d", dev_name(dev), qid); + + err = request_irq(irq, pds_vdpa_isr, 0, + pdsv->vqs[qid].irq_name, &pdsv->vqs[qid]); + if (err) { + dev_err(dev, "%s: no irq for qid %d: %pe\n", + __func__, qid, ERR_PTR(err)); + return; + } + pdsv->vqs[qid].irq = irq; + + /* Pass vq setup info to DSC using adminq to gather up and + * send all info at once so FW can do its full set up in + * one easy operation + */ + err = pds_vdpa_cmd_init_vq(pdsv, qid, &pdsv->vqs[qid]); + if (err) { + dev_err(dev, "Failed to init vq %d: %pe\n", + qid, ERR_PTR(err)); + pds_vdpa_release_irq(pdsv, qid); + ready = false; + } + } else { + err = pds_vdpa_cmd_reset_vq(pdsv, qid); + if (err) + dev_err(dev, "%s: reset_vq 
failed qid %d: %pe\n", + __func__, qid, ERR_PTR(err)); + pds_vdpa_release_irq(pdsv, qid); + } + + pdsv->vqs[qid].ready = ready; +} + +static bool pds_vdpa_get_vq_ready(struct vdpa_device *vdpa_dev, u16 qid) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + + return pdsv->vqs[qid].ready; +} + +static int pds_vdpa_set_vq_state(struct vdpa_device *vdpa_dev, u16 qid, + const struct vdpa_vq_state *state) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev; + struct device *dev = &padev->aux_dev.dev; + u16 avail; + u16 used; + + dev_dbg(dev, "%s: qid %d avail %#x\n", + __func__, qid, state->packed.last_avail_idx); + + if (pdsv->actual_features & BIT_ULL(VIRTIO_F_RING_PACKED)) { + avail = state->packed.last_avail_idx | + (state->packed.last_avail_counter << 15); + used = state->packed.last_used_idx | + (state->packed.last_used_counter << 15); + } else { + avail = state->split.avail_index; + /* state->split does not provide a used_index: + * the vq will be set to "empty" here, and the vq will read + * the current used index the next time the vq is kicked. + */ + used = state->split.avail_index; + } + + return pds_vdpa_cmd_set_vq_state(pdsv, qid, avail, used); +} + +static int pds_vdpa_get_vq_state(struct vdpa_device *vdpa_dev, u16 qid, + struct vdpa_vq_state *state) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + struct pds_auxiliary_dev *padev = pdsv->vdpa_aux->padev; + struct device *dev = &padev->aux_dev.dev; + u16 avail; + u16 used; + int err; + + dev_dbg(dev, "%s: qid %d\n", __func__, qid); + + err = pds_vdpa_cmd_get_vq_state(pdsv, qid, &avail, &used); + if (err) + return err; + + if (pdsv->actual_features & BIT_ULL(VIRTIO_F_RING_PACKED)) { + state->packed.last_avail_idx = avail & 0x7fff; + state->packed.last_avail_counter = avail >> 15; + } else { + state->split.avail_index = avail; + /* state->split does not provide a used_index. 
*/ + } + + return 0; +} + +static struct vdpa_notification_area +pds_vdpa_get_vq_notification(struct vdpa_device *vdpa_dev, u16 qid) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + struct virtio_pci_modern_device *vd_mdev; + struct vdpa_notification_area area; + + area.addr = pdsv->vqs[qid].notify_pa; + + vd_mdev = &pdsv->vdpa_aux->vd_mdev; + if (!vd_mdev->notify_offset_multiplier) + area.size = PDS_PAGE_SIZE; + else + area.size = vd_mdev->notify_offset_multiplier; + + return area; +} + +static int pds_vdpa_get_vq_irq(struct vdpa_device *vdpa_dev, u16 qid) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + + return pdsv->vqs[qid].irq; +} + +static u32 pds_vdpa_get_vq_align(struct vdpa_device *vdpa_dev) +{ + return PDS_PAGE_SIZE; +} + +static u32 pds_vdpa_get_vq_group(struct vdpa_device *vdpa_dev, u16 idx) +{ + return 0; +} + +static u64 pds_vdpa_get_device_features(struct vdpa_device *vdpa_dev) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + + return le64_to_cpu(pdsv->vdpa_aux->ident.hw_features); +} + +static int pds_vdpa_set_driver_features(struct vdpa_device *vdpa_dev, u64 features) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + struct device *dev = &pdsv->vdpa_dev.dev; + u64 nego_features; + u64 missing; + + if (!(features & BIT_ULL(VIRTIO_F_ACCESS_PLATFORM)) && features) { + dev_err(dev, "VIRTIO_F_ACCESS_PLATFORM is not negotiated\n"); + return -EOPNOTSUPP; + } + + pdsv->req_features = features; + + /* Check for valid feature bits */ + nego_features = features & le64_to_cpu(pdsv->vdpa_aux->ident.hw_features); + missing = pdsv->req_features & ~nego_features; + if (missing) { + dev_err(dev, "Can't support all requested features in %#llx, missing %#llx features\n", + pdsv->req_features, missing); + return -EOPNOTSUPP; + } + + dev_dbg(dev, "%s: %#llx => %#llx\n", + __func__, pdsv->actual_features, nego_features); + + if (pdsv->actual_features == nego_features) + return 0; + + vp_modern_set_features(&pdsv->vdpa_aux->vd_mdev, nego_features); + pdsv->actual_features = nego_features; + + return 0; +} + +static u64 pds_vdpa_get_driver_features(struct vdpa_device *vdpa_dev) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + + return pdsv->actual_features; +} + +static void pds_vdpa_set_config_cb(struct vdpa_device *vdpa_dev, + struct vdpa_callback *cb) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + + pdsv->config_cb.callback = cb->callback; + pdsv->config_cb.private = cb->private; +} + +static u16 pds_vdpa_get_vq_num_max(struct vdpa_device *vdpa_dev) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + + /* qemu has assert() that vq_num_max <= VIRTQUEUE_MAX_SIZE (1024) */ + return min_t(u16, 1024, BIT(le16_to_cpu(pdsv->vdpa_aux->ident.max_qlen))); +} + +static u32 pds_vdpa_get_device_id(struct vdpa_device *vdpa_dev) +{ + return VIRTIO_ID_NET; +} + +static u32 pds_vdpa_get_vendor_id(struct vdpa_device *vdpa_dev) +{ + return PCI_VENDOR_ID_PENSANDO; +} + +static u8 pds_vdpa_get_status(struct vdpa_device *vdpa_dev) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + + return vp_modern_get_status(&pdsv->vdpa_aux->vd_mdev); +} + +static void pds_vdpa_set_status(struct vdpa_device *vdpa_dev, u8 status) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + + vp_modern_set_status(&pdsv->vdpa_aux->vd_mdev, status); + + /* Note: still working with FW on the need for this reset cmd */ + if (status == 0) + pds_vdpa_cmd_reset(pdsv); +} + +static int pds_vdpa_reset(struct vdpa_device *vdpa_dev) +{ + 
struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + struct device *dev; + int err = 0; + u8 status; + int i; + + dev = &pdsv->vdpa_aux->padev->aux_dev.dev; + status = pds_vdpa_get_status(vdpa_dev); + + if (status == 0) + return 0; + + if (status & VIRTIO_CONFIG_S_DRIVER_OK) { + /* Reset the vqs */ + for (i = 0; i < pdsv->num_vqs && !err; i++) { + err = pds_vdpa_cmd_reset_vq(pdsv, i); + if (err) + dev_err(dev, "%s: reset_vq failed qid %d: %pe\n", + __func__, i, ERR_PTR(err)); + pds_vdpa_release_irq(pdsv, i); + memset(&pdsv->vqs[i], 0, sizeof(pdsv->vqs[0])); + pdsv->vqs[i].ready = false; + } + } + + pds_vdpa_set_status(vdpa_dev, 0); + + return 0; +} + +static size_t pds_vdpa_get_config_size(struct vdpa_device *vdpa_dev) +{ + return sizeof(struct virtio_net_config); +} + +static void pds_vdpa_get_config(struct vdpa_device *vdpa_dev, + unsigned int offset, + void *buf, unsigned int len) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + void __iomem *device; + + if (offset + len > sizeof(struct virtio_net_config)) { + WARN(true, "%s: bad read, offset %d len %d\n", __func__, offset, len); + return; + } + + device = pdsv->vdpa_aux->vd_mdev.device; + memcpy_fromio(buf, device + offset, len); +} + +static void pds_vdpa_set_config(struct vdpa_device *vdpa_dev, + unsigned int offset, const void *buf, + unsigned int len) +{ + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); + void __iomem *device; + + if (offset + len > sizeof(struct virtio_net_config)) { + WARN(true, "%s: bad read, offset %d len %d\n", __func__, offset, len); + return; + } + + device = pdsv->vdpa_aux->vd_mdev.device; + memcpy_toio(device + offset, buf, len); +} + +static const struct vdpa_config_ops pds_vdpa_ops = { + .set_vq_address = pds_vdpa_set_vq_address, + .set_vq_num = pds_vdpa_set_vq_num, + .kick_vq = pds_vdpa_kick_vq, + .set_vq_cb = pds_vdpa_set_vq_cb, + .set_vq_ready = pds_vdpa_set_vq_ready, + .get_vq_ready = pds_vdpa_get_vq_ready, + .set_vq_state = pds_vdpa_set_vq_state, + .get_vq_state = pds_vdpa_get_vq_state, + .get_vq_notification = pds_vdpa_get_vq_notification, + .get_vq_irq = pds_vdpa_get_vq_irq, + .get_vq_align = pds_vdpa_get_vq_align, + .get_vq_group = pds_vdpa_get_vq_group, + + .get_device_features = pds_vdpa_get_device_features, + .set_driver_features = pds_vdpa_set_driver_features, + .get_driver_features = pds_vdpa_get_driver_features, + .set_config_cb = pds_vdpa_set_config_cb, + .get_vq_num_max = pds_vdpa_get_vq_num_max, + .get_device_id = pds_vdpa_get_device_id, + .get_vendor_id = pds_vdpa_get_vendor_id, + .get_status = pds_vdpa_get_status, + .set_status = pds_vdpa_set_status, + .reset = pds_vdpa_reset, + .get_config_size = pds_vdpa_get_config_size, + .get_config = pds_vdpa_get_config, + .set_config = pds_vdpa_set_config, +}; static struct virtio_device_id pds_vdpa_id_table[] = { {VIRTIO_ID_NET, VIRTIO_DEV_ANY_ID}, {0}, @@ -21,12 +421,142 @@ static struct virtio_device_id pds_vdpa_id_table[] = { static int pds_vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name, const struct vdpa_dev_set_config *add_config) { - return -EOPNOTSUPP; + struct pds_vdpa_aux *vdpa_aux; + struct pds_vdpa_device *pdsv; + struct vdpa_mgmt_dev *mgmt; + u16 fw_max_vqs, vq_pairs; + struct device *dma_dev; + struct pci_dev *pdev; + struct device *dev; + u8 mac[ETH_ALEN]; + int err; + int i; + + vdpa_aux = container_of(mdev, struct pds_vdpa_aux, vdpa_mdev); + dev = &vdpa_aux->padev->aux_dev.dev; + mgmt = &vdpa_aux->vdpa_mdev; + + if (vdpa_aux->pdsv) { + dev_warn(dev, "Multiple vDPA devices on a VF is not 
supported.\n"); + return -EOPNOTSUPP; + } + + pdsv = vdpa_alloc_device(struct pds_vdpa_device, vdpa_dev, + dev, &pds_vdpa_ops, 1, 1, name, false); + if (IS_ERR(pdsv)) { + dev_err(dev, "Failed to allocate vDPA structure: %pe\n", pdsv); + return PTR_ERR(pdsv); + } + + vdpa_aux->pdsv = pdsv; + pdsv->vdpa_aux = vdpa_aux; + + pdev = vdpa_aux->padev->vf_pdev; + dma_dev = &pdev->dev; + pdsv->vdpa_dev.dma_dev = dma_dev; + + err = pds_vdpa_cmd_reset(pdsv); + if (err) { + dev_err(dev, "Failed to reset hw: %pe\n", ERR_PTR(err)); + goto err_unmap; + } + + err = pds_vdpa_init_hw(pdsv); + if (err) { + dev_err(dev, "Failed to init hw: %pe\n", ERR_PTR(err)); + goto err_unmap; + } + + fw_max_vqs = le16_to_cpu(pdsv->vdpa_aux->ident.max_vqs); + vq_pairs = fw_max_vqs / 2; + + /* Make sure we have the queues being requested */ + if (add_config->mask & (1 << VDPA_ATTR_DEV_NET_CFG_MAX_VQP)) + vq_pairs = add_config->net.max_vq_pairs; + + pdsv->num_vqs = 2 * vq_pairs; + if (mgmt->supported_features & BIT_ULL(VIRTIO_NET_F_CTRL_VQ)) + pdsv->num_vqs++; + + if (pdsv->num_vqs > fw_max_vqs) { + dev_err(dev, "%s: queue count requested %u greater than max %u\n", + __func__, pdsv->num_vqs, fw_max_vqs); + err = -ENOSPC; + goto err_unmap; + } + + if (pdsv->num_vqs != fw_max_vqs) { + err = pds_vdpa_cmd_set_max_vq_pairs(pdsv, vq_pairs); + if (err) { + dev_err(dev, "Failed to set max_vq_pairs: %pe\n", + ERR_PTR(err)); + goto err_unmap; + } + } + + /* Set a mac, either from the user config if provided + * or set a random mac if default is 00:..:00 + */ + if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MACADDR)) { + ether_addr_copy(mac, add_config->net.mac); + pds_vdpa_cmd_set_mac(pdsv, mac); + } else { + struct virtio_net_config __iomem *vc; + + vc = pdsv->vdpa_aux->vd_mdev.device; + memcpy_fromio(mac, vc->mac, sizeof(mac)); + if (is_zero_ether_addr(mac)) { + eth_random_addr(mac); + dev_info(dev, "setting random mac %pM\n", mac); + pds_vdpa_cmd_set_mac(pdsv, mac); + } + } + + for (i = 0; i < pdsv->num_vqs; i++) { + pdsv->vqs[i].qid = i; + pdsv->vqs[i].pdsv = pdsv; + pdsv->vqs[i].irq = VIRTIO_MSI_NO_VECTOR; + pdsv->vqs[i].notify = vp_modern_map_vq_notify(&pdsv->vdpa_aux->vd_mdev, + i, &pdsv->vqs[i].notify_pa); + } + + pdsv->vdpa_dev.mdev = &vdpa_aux->vdpa_mdev; + + /* We use the _vdpa_register_device() call rather than the + * vdpa_register_device() to avoid a deadlock because our + * dev_add() is called with the vdpa_dev_lock already set + * by vdpa_nl_cmd_dev_add_set_doit() + */ + err = _vdpa_register_device(&pdsv->vdpa_dev, pdsv->num_vqs); + if (err) { + dev_err(dev, "Failed to register to vDPA bus: %pe\n", ERR_PTR(err)); + goto err_unmap; + } + + pds_vdpa_debugfs_add_vdpadev(vdpa_aux); + + return 0; + +err_unmap: + put_device(&pdsv->vdpa_dev.dev); + vdpa_aux->pdsv = NULL; + return err; } static void pds_vdpa_dev_del(struct vdpa_mgmt_dev *mdev, struct vdpa_device *vdpa_dev) { + struct pds_vdpa_aux *vdpa_aux; + + vdpa_aux = container_of(mdev, struct pds_vdpa_aux, vdpa_mdev); + _vdpa_unregister_device(vdpa_dev); + + pds_vdpa_cmd_reset(vdpa_aux->pdsv); + pds_vdpa_debugfs_reset_vdpadev(vdpa_aux); + + vdpa_aux->pdsv = NULL; + + dev_info(&vdpa_aux->padev->aux_dev.dev, "Removed vdpa device\n"); } static const struct vdpa_mgmtdev_ops pds_vdpa_mgmt_dev_ops = { -- 2.17.1
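For reference, the dev_add() handling of VDPA_ATTR_DEV_NET_CFG_MAX_VQP and VDPA_ATTR_DEV_NET_CFG_MACADDR above maps directly onto the attributes of the iproute2 'vdpa dev add' command. A minimal sketch, assuming an iproute2 build that supports the max_vqp attribute and discovering the mgmtdev name rather than hard-coding it:

  # discover the management device exposed by pds_vdpa
  PDS_VDPA_MGMT=`vdpa mgmtdev show | grep vDPA | head -1 | cut -d: -f1`

  # create a device with an explicit MAC and 4 queue pairs; if max_vqp
  # is omitted, dev_add() defaults to half of ident.max_vqs
  vdpa dev add name vdpa1 mgmtdev $PDS_VDPA_MGMT mac 00:11:22:33:44:55 max_vqp 4

  # check the resulting device configuration
  vdpa dev config show vdpa1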
Shannon Nelson
2023-Apr-25 21:26 UTC
[PATCH v4 virtio 09/10] pds_vdpa: subscribe to the pds_core events
Register for the pds_core's notification events, primarily to find out when the FW has been reset so we can pass this on back up the chain. Signed-off-by: Shannon Nelson <shannon.nelson at amd.com> --- drivers/vdpa/pds/vdpa_dev.c | 68 ++++++++++++++++++++++++++++++++++++- drivers/vdpa/pds/vdpa_dev.h | 1 + 2 files changed, 68 insertions(+), 1 deletion(-) diff --git a/drivers/vdpa/pds/vdpa_dev.c b/drivers/vdpa/pds/vdpa_dev.c index c3316f0faa0c..93b12f73423f 100644 --- a/drivers/vdpa/pds/vdpa_dev.c +++ b/drivers/vdpa/pds/vdpa_dev.c @@ -21,6 +21,61 @@ static struct pds_vdpa_device *vdpa_to_pdsv(struct vdpa_device *vdpa_dev) return container_of(vdpa_dev, struct pds_vdpa_device, vdpa_dev); } +static int pds_vdpa_notify_handler(struct notifier_block *nb, + unsigned long ecode, + void *data) +{ + struct pds_vdpa_device *pdsv = container_of(nb, struct pds_vdpa_device, nb); + struct device *dev = &pdsv->vdpa_aux->padev->aux_dev.dev; + + dev_dbg(dev, "%s: event code %lu\n", __func__, ecode); + + /* Give the upper layers a hint that something interesting + * may have happened. It seems that the only thing this + * triggers in the virtio-net drivers above us is a check + * of link status. + * + * We don't set the NEEDS_RESET flag for EVENT_RESET + * because we're likely going through a recovery or + * fw_update and will be back up and running soon. + */ + if (ecode == PDS_EVENT_RESET || ecode == PDS_EVENT_LINK_CHANGE) { + if (pdsv->config_cb.callback) + pdsv->config_cb.callback(pdsv->config_cb.private); + } + + return 0; +} + +static int pds_vdpa_register_event_handler(struct pds_vdpa_device *pdsv) +{ + struct device *dev = &pdsv->vdpa_aux->padev->aux_dev.dev; + struct notifier_block *nb = &pdsv->nb; + int err; + + if (!nb->notifier_call) { + nb->notifier_call = pds_vdpa_notify_handler; + err = pdsc_register_notify(nb); + if (err) { + nb->notifier_call = NULL; + dev_err(dev, "failed to register pds event handler: %ps\n", + ERR_PTR(err)); + return -EINVAL; + } + dev_dbg(dev, "pds event handler registered\n"); + } + + return 0; +} + +static void pds_vdpa_unregister_event_handler(struct pds_vdpa_device *pdsv) +{ + if (pdsv->nb.notifier_call) { + pdsc_unregister_notify(&pdsv->nb); + pdsv->nb.notifier_call = NULL; + } +} + static int pds_vdpa_set_vq_address(struct vdpa_device *vdpa_dev, u16 qid, u64 desc_addr, u64 driver_addr, u64 device_addr) { @@ -522,6 +577,12 @@ static int pds_vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name, pdsv->vdpa_dev.mdev = &vdpa_aux->vdpa_mdev; + err = pds_vdpa_register_event_handler(pdsv); + if (err) { + dev_err(dev, "Failed to register for PDS events: %pe\n", ERR_PTR(err)); + goto err_unmap; + } + /* We use the _vdpa_register_device() call rather than the * vdpa_register_device() to avoid a deadlock because our * dev_add() is called with the vdpa_dev_lock already set @@ -530,13 +591,15 @@ static int pds_vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name, err = _vdpa_register_device(&pdsv->vdpa_dev, pdsv->num_vqs); if (err) { dev_err(dev, "Failed to register to vDPA bus: %pe\n", ERR_PTR(err)); - goto err_unmap; + goto err_unevent; } pds_vdpa_debugfs_add_vdpadev(vdpa_aux); return 0; +err_unevent: + pds_vdpa_unregister_event_handler(pdsv); err_unmap: put_device(&pdsv->vdpa_dev.dev); vdpa_aux->pdsv = NULL; @@ -546,8 +609,11 @@ static int pds_vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name, static void pds_vdpa_dev_del(struct vdpa_mgmt_dev *mdev, struct vdpa_device *vdpa_dev) { + struct pds_vdpa_device *pdsv = vdpa_to_pdsv(vdpa_dev); struct pds_vdpa_aux 
*vdpa_aux; + pds_vdpa_unregister_event_handler(pdsv); + vdpa_aux = container_of(mdev, struct pds_vdpa_aux, vdpa_mdev); _vdpa_unregister_device(vdpa_dev); diff --git a/drivers/vdpa/pds/vdpa_dev.h b/drivers/vdpa/pds/vdpa_dev.h index a21596f438c1..1650a2b08845 100644 --- a/drivers/vdpa/pds/vdpa_dev.h +++ b/drivers/vdpa/pds/vdpa_dev.h @@ -40,6 +40,7 @@ struct pds_vdpa_device { u8 vdpa_index; /* rsvd for future subdevice use */ u8 num_vqs; /* num vqs in use */ struct vdpa_callback config_cb; + struct notifier_block nb; }; int pds_vdpa_get_mgmt_info(struct pds_vdpa_aux *vdpa_aux); -- 2.17.1
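The notifier path added here surfaces events only through dev_dbg() messages and the config_cb callback, so dynamic debug is a convenient way to watch it during bring-up. A quick sketch, assuming CONFIG_DYNAMIC_DEBUG is enabled and debugfs is mounted at /sys/kernel/debug:

  # enable the dev_dbg() output from the pds_vdpa event handler
  echo 'file drivers/vdpa/pds/vdpa_dev.c +p' > /sys/kernel/debug/dynamic_debug/control

  # watch for the "event code" messages on link change or FW reset events
  dmesg -w | grep -i vdpa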
Shannon Nelson
2023-Apr-25 21:26 UTC
[PATCH v4 virtio 10/10] pds_vdpa: pds_vdpa.rst and Kconfig
Add the documentation and Kconfig entry for pds_vdpa driver. Signed-off-by: Shannon Nelson <shannon.nelson at amd.com> --- .../device_drivers/ethernet/amd/pds_vdpa.rst | 85 +++++++++++++++++++ .../device_drivers/ethernet/index.rst | 1 + MAINTAINERS | 4 + drivers/vdpa/Kconfig | 8 ++ 4 files changed, 98 insertions(+) create mode 100644 Documentation/networking/device_drivers/ethernet/amd/pds_vdpa.rst diff --git a/Documentation/networking/device_drivers/ethernet/amd/pds_vdpa.rst b/Documentation/networking/device_drivers/ethernet/amd/pds_vdpa.rst new file mode 100644 index 000000000000..587927d3de92 --- /dev/null +++ b/Documentation/networking/device_drivers/ethernet/amd/pds_vdpa.rst @@ -0,0 +1,85 @@ +.. SPDX-License-Identifier: GPL-2.0+ +.. note: can be edited and viewed with /usr/bin/formiko-vim + +=========================================================+PCI vDPA driver for the AMD/Pensando(R) DSC adapter family +=========================================================+ +AMD/Pensando vDPA VF Device Driver + +Copyright(c) 2023 Advanced Micro Devices, Inc + +Overview +=======+ +The ``pds_vdpa`` driver is an auxiliary bus driver that supplies +a vDPA device for use by the virtio network stack. It is used with +the Pensando Virtual Function devices that offer vDPA and virtio queue +services. It depends on the ``pds_core`` driver and hardware for the PF +and VF PCI handling as well as for device configuration services. + +Using the device +===============+ +The ``pds_vdpa`` device is enabled via multiple configuration steps and +depends on the ``pds_core`` driver to create and enable SR-IOV Virtual +Function devices. After the VFs are enabled, we enable the vDPA service +in the ``pds_core`` device to create the auxiliary devices used by pds_vdpa. + +Example steps: + +.. code-block:: bash + + #!/bin/bash + + modprobe pds_core + modprobe vdpa + modprobe pds_vdpa + + PF_BDF=`ls /sys/module/pds_core/drivers/pci\:pds_core/*/sriov_numvfs | awk -F / '{print $7}'` + + # Enable vDPA VF auxiliary device(s) in the PF + devlink dev param set pci/$PF_BDF name enable_vnet cmode runtime value true + + # Create a VF for vDPA use + echo 1 > /sys/bus/pci/drivers/pds_core/$PF_BDF/sriov_numvfs + + # Find the vDPA services/devices available + PDS_VDPA_MGMT=`vdpa mgmtdev show | grep vDPA | head -1 | cut -d: -f1` + + # Create a vDPA device for use in virtio network configurations + vdpa dev add name vdpa1 mgmtdev $PDS_VDPA_MGMT mac 00:11:22:33:44:55 + + # Set up an ethernet interface on the vdpa device + modprobe virtio_vdpa + + + +Enabling the driver +==================+ +The driver is enabled via the standard kernel configuration system, +using the make command:: + + make oldconfig/menuconfig/etc. 
+ +The driver is located in the menu structure at: + + -> Device Drivers + -> Network device support (NETDEVICES [=y]) + -> Ethernet driver support + -> Pensando devices + -> Pensando Ethernet PDS_VDPA Support + +Support +======+ +For general Linux networking support, please use the netdev mailing +list, which is monitored by Pensando personnel:: + + netdev at vger.kernel.org + +For more specific support needs, please use the Pensando driver support +email:: + + drivers at pensando.io diff --git a/Documentation/networking/device_drivers/ethernet/index.rst b/Documentation/networking/device_drivers/ethernet/index.rst index 417ca514a4d0..94ecb67c0885 100644 --- a/Documentation/networking/device_drivers/ethernet/index.rst +++ b/Documentation/networking/device_drivers/ethernet/index.rst @@ -15,6 +15,7 @@ Contents: amazon/ena altera/altera_tse amd/pds_core + amd/pds_vdpa aquantia/atlantic chelsio/cxgb cirrus/cs89x0 diff --git a/MAINTAINERS b/MAINTAINERS index 6ac562e0381e..93210a8ac74f 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -22148,6 +22148,10 @@ SNET DPU VIRTIO DATA PATH ACCELERATOR R: Alvaro Karsz <alvaro.karsz at solid-run.com> F: drivers/vdpa/solidrun/ +PDS DSC VIRTIO DATA PATH ACCELERATOR +R: Shannon Nelson <shannon.nelson at amd.com> +F: drivers/vdpa/pds/ + VIRTIO BALLOON M: "Michael S. Tsirkin" <mst at redhat.com> M: David Hildenbrand <david at redhat.com> diff --git a/drivers/vdpa/Kconfig b/drivers/vdpa/Kconfig index cd6ad92f3f05..2ee1b288691d 100644 --- a/drivers/vdpa/Kconfig +++ b/drivers/vdpa/Kconfig @@ -116,4 +116,12 @@ config ALIBABA_ENI_VDPA This driver includes a HW monitor device that reads health values from the DPU. +config PDS_VDPA + tristate "vDPA driver for AMD/Pensando DSC devices" + depends on PDS_CORE + help + vDPA network driver for AMD/Pensando's PDS Core devices. + With this driver, the VirtIO dataplane can be + offloaded to an AMD/Pensando DSC device. + endif # VDPA -- 2.17.1
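Once virtio_vdpa binds to the vdpa device created above, a standard virtio_net interface shows up and can be configured with the usual tools. A short sketch, assuming the new interface appears as eth1 (the actual name depends on the system's udev naming policy) and an illustrative address:

  # find the new virtio_net-backed interface
  ip -br link show

  # bring it up and assign an address
  ip link set dev eth1 up
  ip addr add 192.168.100.2/24 dev eth1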