search for: iort_pci_devid

Displaying 3 results from an estimated 4 matches for "iort_pci_devid".

2019 Nov 22 | 0 | [RFC 06/13] ACPI/IORT: Support VIOT virtio-pci node
...++++++---------
 1 file changed, 93 insertions(+), 24 deletions(-)

diff --git a/drivers/acpi/iort.c b/drivers/acpi/iort.c
index adc5953fffa5..b517aa4e83ba 100644
--- a/drivers/acpi/iort.c
+++ b/drivers/acpi/iort.c
@@ -30,10 +30,17 @@ struct iort_its_msi_chip {
 	u32			translation_id;
 };
 
+struct iort_pci_devid {
+	u16 segment;
+	u8 bus;
+	u8 devfn;
+};
+
 struct iort_fwnode {
 	struct list_head list;
 	struct acpi_iort_node *iort_node;
 	struct fwnode_handle *fwnode;
+	struct iort_pci_devid *pci_devid;
 };
 static LIST_HEAD(iort_fwnode_list);
 static DEFINE_SPINLOCK(iort_fwnode_lock);
@@ -44,7 +51,8 @@ s...
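The truncated hunk at "@@ -44,7 +51,8 @@" presumably extends the existing fwnode registration path to record the PCI address of a virtio-pci IOMMU. As a rough sketch only (the helper name iort_set_pci_fwnode and its signature are guesses, not the patch's actual code), a registration that fills in the new struct iort_pci_devid might look like this, mirroring the existing iort_set_fwnode() pattern of a GFP_ATOMIC allocation and list insertion under iort_fwnode_lock:

/*
 * Hypothetical sketch: register a list entry for an IOMMU that appears as a
 * PCI device, recording its segment/bus/devfn so the device can be matched
 * later. The fwnode starts out NULL; per [RFC 08/13] below, the IOMMU driver
 * sets or removes it once the device is probed.
 */
static int iort_set_pci_fwnode(struct acpi_iort_node *iort_node,
			       u16 segment, u8 bus, u8 devfn)
{
	struct iort_fwnode *np;

	np = kzalloc(sizeof(*np), GFP_ATOMIC);
	if (!np)
		return -ENOMEM;

	np->pci_devid = kzalloc(sizeof(*np->pci_devid), GFP_ATOMIC);
	if (!np->pci_devid) {
		kfree(np);
		return -ENOMEM;
	}

	np->iort_node = iort_node;
	np->fwnode = NULL;		/* set later by the IOMMU driver */
	np->pci_devid->segment = segment;
	np->pci_devid->bus = bus;
	np->pci_devid->devfn = devfn;

	spin_lock(&iort_fwnode_lock);
	list_add_tail(&np->list, &iort_fwnode_list);
	spin_unlock(&iort_fwnode_lock);

	return 0;
}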
2019 Nov 22 | 0 | [RFC 08/13] ACPI/IORT: Add callback to update a device's fwnode
...existing fwnode
+ *
+ * A PCI device isn't instantiated by the IORT driver. The IOMMU driver sets or
+ * removes its fwnode using this function.
+ */
+void iort_iommu_update_fwnode(struct device *dev, struct fwnode_handle *fwnode)
+{
+	struct pci_dev *pdev;
+	struct iort_fwnode *curr;
+	struct iort_pci_devid *devid;
+
+	if (!dev_is_pci(dev))
+		return;
+
+	pdev = to_pci_dev(dev);
+
+	spin_lock(&iort_fwnode_lock);
+	list_for_each_entry(curr, &iort_fwnode_list, list) {
+		devid = curr->pci_devid;
+		if (devid &&
+		    pci_domain_nr(pdev->bus) == devid->segment &&
+...
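The snippet cuts off mid-comparison. A plausible completion of the loop, assuming the remaining checks simply mirror the segment test (bus number and devfn against the recorded iort_pci_devid) and that a match stores the caller's fwnode:

	/*
	 * Hedged sketch of the truncated loop body; the bus/devfn checks and
	 * the assignment are inferred from struct iort_pci_devid's fields and
	 * the comment above, not copied from the patch.
	 */
	spin_lock(&iort_fwnode_lock);
	list_for_each_entry(curr, &iort_fwnode_list, list) {
		devid = curr->pci_devid;
		if (devid &&
		    pci_domain_nr(pdev->bus) == devid->segment &&
		    pdev->bus->number == devid->bus &&
		    pdev->devfn == devid->devfn) {
			curr->fwnode = fwnode;	/* NULL removes the fwnode */
			break;
		}
	}
	spin_unlock(&iort_fwnode_lock);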
2019 Nov 22 | 16 | [RFC 00/13] virtio-iommu on non-devicetree platforms
I'm seeking feedback on multi-platform support for virtio-iommu. At the moment only devicetree (DT) is supported and we don't have a pleasant solution for other platforms. Once we figure out the topology description, x86 support is trivial. Since the IOMMU manages memory accesses from other devices, the guest kernel needs to initialize the IOMMU before endpoints start issuing DMA.