Daniel Drake
2018-Aug-28 02:23 UTC
[Nouveau] Rewriting Intel PCI bridge prefetch base address bits solves nvidia graphics issues
On Fri, Aug 24, 2018 at 11:42 PM, Peter Wu <peter at lekensteyn.nl> wrote:
> Are these systems also affected through runtime power management? For
> example:
>
>     modprobe nouveau   # should enable runtime PM
>     sleep 6            # wait for runtime suspend to kick in
>     lspci -s1:         # runtime resume by reading PCI config space
>
> On laptops from about 2015-2016 with a GTX 9xxM this sequence results in
> hangs on various laptops
> (https://bugzilla.kernel.org/show_bug.cgi?id=156341).

This works fine here. I'm facing a different issue.

>> After a lot of experimentation I found a workaround: during resume,
>> set the value of PCI_PREF_BASE_UPPER32 to 0 on the parent PCI bridge.
>> Easily done in drivers/pci/quirks.c. Now all nvidia stuff works fine.
>
> I am curious, how did you discover this? While this could work, perhaps
> there are alternative workarounds/fixes?

Based on the observation that the following procedure works fine (note
the addition of step 3):

1. Boot
2. Suspend/resume
3. echo rescan > /sys/bus/pci/devices/0000:00:1c.0/rescan
4. Load nouveau driver
5. Start X

I worked through the rescan codepath until I had isolated the specific
code which magically makes things work (in pci_bridge_check_ranges()).

Having found that, step 3 in the above test procedure can be replaced
with a simple:

    setpci -s 00:1c.0 0x28.l=0

> When you say "parent PCI" bridge, is that actually the device you see in
> "lspci -tv"? On a Dell XPS 9560, the GPU is under a different device:
>
> -[0000:00]-+-00.0  Intel Corporation Xeon E3-1200 v6/7th Gen Core Processor Host Bridge/DRAM Registers
>            +-01.0-[01]----00.0  NVIDIA Corporation GP107M [GeForce GTX 1050 Mobile]
>
> 00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x16) [8086:1901] (rev 05)

Yes, it's the parent bridge shown by lspci. The address of this varies
from system to system.

>> 1. Is the Intel PCI bridge misbehaving here? Why does writing the same
>>    value of PCI_PREF_BASE_UPPER32 make any difference at all?
>
> At what point in the suspend code path did you insert this write? It is
> possible that the write somehow acted as a fence/memory barrier?

static void quirk_pref_base_upper32(struct pci_dev *dev)
{
	u32 pref_base_upper32;

	pci_read_config_dword(dev, PCI_PREF_BASE_UPPER32, &pref_base_upper32);
	pci_write_config_dword(dev, PCI_PREF_BASE_UPPER32, pref_base_upper32);
}
DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, 0x9d10, quirk_pref_base_upper32);

I don't think it's acting as a barrier. I tried changing this code to
rewrite other registers such as PCI_PREF_MEMORY_BASE and that makes
the bug come back.

>> 2. Who is responsible for saving and restoring PCI bridge
>>    configuration during suspend and resume? Linux? ACPI? BIOS?
>
> Not sure about PCI bridges, but at least for the PCI Express Capability
> registers, it is in control of the OS when control is granted via the
> ACPI _OSC method.

I guess you are referring to pci_save_pcie_state(). I can't see
anything equivalent for the bridge registers.

> As Windows is probably not affected by this issue, a change must be
> possible to make Linux more compatible with Windows. Though I am not
> sure what change is needed.

I agree. There's a definite difference with Windows here and it would
be great to find a fix along those lines.

> I recently compared PCI configuration space access and ACPI method
> invocation using QEMU + VFIO with Linux 4.18, Windows 7 and Windows 10
> (1803). There were differences like disabling MSI/interrupts before
> suspend, setting the Enable Clock Power Management bit in PCI Express
> Link Control and more, but applying these changes were so far not really
> successful.

Interesting. Do you know any way that I could spy on Windows' accesses
to the PCI bridge registers?

Looking at https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF
I suspect VFIO would not help me here. It says:

    Note: If they are grouped with other devices in this manner, pci
    root ports and bridges should neither be bound to vfio at boot, nor be
    added to the VM.

Thanks
Daniel
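For reference, the suspend test plus workaround described above can be scripted roughly as follows. This is only a sketch: the bridge address 00:1c.0 is specific to Daniel's machine, and rtcwake is an assumed convenience for driving an unattended S3 cycle; any other suspend method works the same way.

    # Sketch of the test sequence above; 00:1c.0 is the parent bridge on
    # Daniel's system and will differ elsewhere.
    rtcwake -m mem -s 30          # S3 suspend, auto-resume after 30 seconds
    setpci -s 00:1c.0 0x28.l=0    # rewrite Prefetchable Base Upper 32 Bits
    modprobe nouveau              # load the driver only after the fixup
    # ...then start X as usual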
Peter Wu
2018-Aug-28 09:57 UTC
[Nouveau] Rewriting Intel PCI bridge prefetch base address bits solves nvidia graphics issues
On Tue, Aug 28, 2018 at 10:23:24AM +0800, Daniel Drake wrote:
> On Fri, Aug 24, 2018 at 11:42 PM, Peter Wu <peter at lekensteyn.nl> wrote:
> > Are these systems also affected through runtime power management? For
> > example:
> >
> >     modprobe nouveau   # should enable runtime PM
> >     sleep 6            # wait for runtime suspend to kick in
> >     lspci -s1:         # runtime resume by reading PCI config space
> >
> > On laptops from about 2015-2016 with a GTX 9xxM this sequence results in
> > hangs on various laptops
> > (https://bugzilla.kernel.org/show_bug.cgi?id=156341).
>
> This works fine here. I'm facing a different issue.

Just to be sure, after "sleep", do both devices report "suspended" in

    /sys/bus/pci/devices/0000:00:1c.0/power/runtime_status
    /sys/bus/pci/devices/0000:01:00.0/power/runtime_status

and was this reproduced with a recent mainline kernel with no special
cmdline options? The endlessm kernel on Github seems to have quite some
patches, one of them explicitly disables runtime PM:
https://github.com/endlessm/linux/commit/8b128b50cd6725eee2ae9025a1510a221d9b42f2

> >> After a lot of experimentation I found a workaround: during resume,
> >> set the value of PCI_PREF_BASE_UPPER32 to 0 on the parent PCI bridge.
> >> Easily done in drivers/pci/quirks.c. Now all nvidia stuff works fine.
> >
> > I am curious, how did you discover this? While this could work, perhaps
> > there are alternative workarounds/fixes?
>
> Based on the observation that the following procedure works fine (note
> the addition of step 3):
>
> 1. Boot
> 2. Suspend/resume
> 3. echo rescan > /sys/bus/pci/devices/0000:00:1c.0/rescan
> 4. Load nouveau driver
> 5. Start X
>
> I worked through the rescan codepath until I had isolated the specific
> code which magically makes things work (in pci_bridge_check_ranges).
>
> Having found that, step 3 in the above test procedure can be replaced
> with a simple:
>     setpci -s 00:1c.0 0x28.l=0
>
> > When you say "parent PCI" bridge, is that actually the device you see in
> > "lspci -tv"? On a Dell XPS 9560, the GPU is under a different device:
> >
> > -[0000:00]-+-00.0  Intel Corporation Xeon E3-1200 v6/7th Gen Core Processor Host Bridge/DRAM Registers
> >            +-01.0-[01]----00.0  NVIDIA Corporation GP107M [GeForce GTX 1050 Mobile]
> >
> > 00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x16) [8086:1901] (rev 05)
>
> Yes, it's the parent bridge shown by lspci. The address of this varies
> from system to system.

Could you share some details:
- acpidump
- lspci -nnxxxxvvv
- BIOS version (from /sys/class/dmi/id/)
- kernel version (mainline?)

Perhaps there is some magic in the ACPI suspend or resume path that
causes this.

> >> 1. Is the Intel PCI bridge misbehaving here? Why does writing the same
> >>    value of PCI_PREF_BASE_UPPER32 make any difference at all?
> >
> > At what point in the suspend code path did you insert this write? It is
> > possible that the write somehow acted as a fence/memory barrier?
>
> static void quirk_pref_base_upper32(struct pci_dev *dev)
> {
> 	u32 pref_base_upper32;
> 	pci_read_config_dword(dev, PCI_PREF_BASE_UPPER32, &pref_base_upper32);
> 	pci_write_config_dword(dev, PCI_PREF_BASE_UPPER32, pref_base_upper32);
> }
> DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, 0x9d10, quirk_pref_base_upper32);
>
> I don't think it's acting as a barrier. I tried changing this code to
> rewrite other registers such as PCI_PREF_MEMORY_BASE and that makes
> the bug come back.
>
> >> 2. Who is responsible for saving and restoring PCI bridge
> >>    configuration during suspend and resume? Linux? ACPI? BIOS?
> >
> > Not sure about PCI bridges, but at least for the PCI Express Capability
> > registers, it is in control of the OS when control is granted via the
> > ACPI _OSC method.
>
> I guess you are referring to pci_save_pcie_state(). I can't see
> anything equivalent for the bridge registers.

Yes, that would be the function, called via pci_save_state().

> > I recently compared PCI configuration space access and ACPI method
> > invocation using QEMU + VFIO with Linux 4.18, Windows 7 and Windows 10
> > (1803). There were differences like disabling MSI/interrupts before
> > suspend, setting the Enable Clock Power Management bit in PCI Express
> > Link Control and more, but applying these changes were so far not really
> > successful.
>
> Interesting. Do you know any way that I could spy on Windows' accesses
> to the PCI bridge registers?
> Looking at https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF
> I suspect VFIO would not help me here. It says:
>     Note: If they are grouped with other devices in this manner, pci
>     root ports and bridges should neither be bound to vfio at boot, nor be
>     added to the VM.

Only non-bridge devices can be passed to a guest, but perhaps logging
access to the emulated bridge is already sufficient. The Prefetchable
Base Upper 32 Bits register is at offset 0x28.

In a trace where the Nvidia device is disabled/enabled via Device
Manager, I see writes on the enable path:

    2571 at 1535108904.593107:rp_write_config (ioh3420, @0x28, 0x0, len=0x4)

For Linux, I only see one write at startup, none on runtime resume.
I did not test system sleep/resume. (disable/enable is arguably a bit
different from system s/r, you may want to do additional testing here.)

Full log for Windows 10 and Linux:
https://github.com/Lekensteyn/acpi-stuff/blob/master/d3test/XPS9560/slogs/win10-rp-enable-disable.txt#L3418
https://github.com/Lekensteyn/acpi-stuff/blob/master/d3test/XPS9560/slogs/linux-rp.txt
lspci for the emulated bridge:
https://github.com/Lekensteyn/acpi-stuff/blob/master/d3test/XPS9560/lspci-vm-vfio.txt#L359
The rp_*_config trace points are non-standard and require patches:
https://github.com/Lekensteyn/acpi-stuff/blob/master/d3test/patches/qemu-trace.diff

--
Kind regards,
Peter Wu
https://lekensteyn.nl
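For anyone wanting to repeat this comparison: the rp_*_config trace points above only exist with Peter's qemu-trace.diff applied. With a QEMU built from such a patched tree, enabling them would look roughly like the following sketch (the "..." stands for the rest of the VM and VFIO passthrough arguments, which are assumed to already be in place).

    # Sketch only: rp_read_config/rp_write_config come from the linked
    # qemu-trace.diff and are not part of stock QEMU. With the "log" trace
    # backend the events end up in the file given to -D.
    qemu-system-x86_64 ... \
        -trace 'rp_*_config' \
        -D /tmp/rp-trace.log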
Karol Herbst
2018-Aug-29 00:19 UTC
[Nouveau] Rewriting Intel PCI bridge prefetch base address bits solves nvidia graphics issues
Hi everybody. I came up with another workaround for the runtime
suspend/resume issues as well:
https://github.com/karolherbst/linux/commit/3cab4c50f77cf97c6c19a9b1e7884366f78f35a5.patch

I don't think this is really a bug inside the kernel, or at least not
directly. If you don't use Nouveau but simply enable the runpm features
without a driver, or with a very dumb stub driver, the GPU should be
able to suspend and resume correctly. At least this is the case on my
laptop.

I was able to disable enough parts of Nouveau's code to tell that
running some signed firmware (embedded in the vbios) on the GPU's
embedded PMU is what makes the runpm issues appear on my laptop. This
firmware is also used by the nvidia driver, which makes the argument
"it happens with Nouveau and nvidia" a useless one.

I have no idea what this is all about. It might be the hardware/firmware
being overprotective and bailing out on an untrusted state, it might be
a bug inside the kernel, or it might be a bug in nvidia's firmware,
which would be super hard to fix as it's embedded in the vbios.

On Tue, Aug 28, 2018 at 11:57 AM, Peter Wu <peter at lekensteyn.nl> wrote:
> [...]
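As a rough illustration of what "enabling the runpm features without a driver" can look like from userspace, a minimal sketch follows. Whether the GPU and its parent port actually reach a low-power state this way depends on the platform and kernel version; the device addresses are the ones used earlier in this thread.

    # Allow runtime PM on the driverless GPU and on its parent bridge,
    # then check whether both report "suspended".
    echo auto > /sys/bus/pci/devices/0000:01:00.0/power/control
    echo auto > /sys/bus/pci/devices/0000:00:1c.0/power/control
    sleep 6
    cat /sys/bus/pci/devices/0000:01:00.0/power/runtime_status
    cat /sys/bus/pci/devices/0000:00:1c.0/power/runtime_status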
Karol Herbst
2018-Aug-29 12:40 UTC
[Nouveau] Rewriting Intel PCI bridge prefetch base address bits solves nvidia graphics issues
On Tue, Aug 28, 2018 at 4:23 AM, Daniel Drake <drake at endlessm.com> wrote:
> [...]
>
> static void quirk_pref_base_upper32(struct pci_dev *dev)
> {
> 	u32 pref_base_upper32;
> 	pci_read_config_dword(dev, PCI_PREF_BASE_UPPER32, &pref_base_upper32);
> 	pci_write_config_dword(dev, PCI_PREF_BASE_UPPER32, pref_base_upper32);
> }
> DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, 0x9d10, quirk_pref_base_upper32);

This workaround fixes runtime suspend/resume on my laptop as well... but
what baffles me most is that unloading nouveau does as well. I will see
which bits exactly are "fixing" it in the nouveau unloading path, and
maybe we can get around this issue inside nouveau. It would still be
nice to get to the root cause of all of this, as there are three known
workarounds (at least on my system):

1. unload nouveau
2. skip setting the D3 power state via PCI config space (and still do
   the ACPI bits)
3. rewrite the value of PCI_PREF_BASE_UPPER32
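A quick way to check whether one of these workarounds actually helps with runtime PM is the sequence from earlier in the thread, reading the status before and after a config-space access (a sketch; device address as used above, and the expected values assume nouveau's default autosuspend delay):

    modprobe nouveau
    sleep 6                                                      # let runtime suspend kick in
    cat /sys/bus/pci/devices/0000:01:00.0/power/runtime_status   # expect "suspended"
    lspci -s 01:00.0                                             # config read forces a runtime resume
    cat /sys/bus/pci/devices/0000:01:00.0/power/runtime_status   # expect "active", with no hang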
Karol Herbst
2018-Aug-30 00:13 UTC
[Nouveau] Rewriting Intel PCI bridge prefetch base address bits solves nvidia graphics issues
Ohh actually, I was testing with a kernel without this workaround
applied, so I need to retest it later.

On Wed, Aug 29, 2018 at 2:40 PM, Karol Herbst <kherbst at redhat.com> wrote:
> [...]
Daniel Drake
2018-Aug-30 07:41 UTC
[Nouveau] Rewriting Intel PCI bridge prefetch base address bits solves nvidia graphics issues
On Tue, Aug 28, 2018 at 5:57 PM, Peter Wu <peter at lekensteyn.nl> wrote:
> Just to be sure, after "sleep", do both devices report "suspended" in
>     /sys/bus/pci/devices/0000:00:1c.0/power/runtime_status
>     /sys/bus/pci/devices/0000:01:00.0/power/runtime_status
>
> and was this reproduced with a recent mainline kernel with no special
> cmdline options? The endlessm kernel on Github seems to have quite some
> patches, one of them explicitly disables runtime PM:
> https://github.com/endlessm/linux/commit/8b128b50cd6725eee2ae9025a1510a221d9b42f2

Yes, I checked for this issue in the past and I'm certain that nouveau
runtime PM works fine. I also checked again now on the X542UQ and the
results are the same: nouveau can do runtime suspend/resume (confirmed
by reading runtime_status) and then render 3D graphics OK. lspci is
fine too. It is just S3 suspend that is affected.

This was tested on Linux 4.18 unmodified. I had to set the nouveau
runpm parameter to 1 for it to use runtime PM.

Also checked with Karol's patch; the S3 issue is still there. Seems
like two different issues.

> Could you share some details:
> - acpidump
> - lspci -nnxxxxvvv
> - BIOS version (from /sys/class/dmi/id/)
> - kernel version (mainline?)

Linux 4.18 mainline
BIOS version: X542UQ.202
acpidump: https://gist.githubusercontent.com/dsd/79352284d4adce14f30d70e94fad89f2/raw/ed9480e924be413fff567da2edd5a2a7a86619d0/gistfile1.txt
pci: https://gist.githubusercontent.com/dsd/79352284d4adce14f30d70e94fad89f2/raw/ed9480e924be413fff567da2edd5a2a7a86619d0/pci

> Only non-bridge devices can be passed to a guest, but perhaps logging
> access to the emulated bridge is already sufficient. The Prefetchable
> Base Upper 32 Bits register is at offset 0x28.
>
> In a trace where the Nvidia device is disabled/enabled via Device
> Manager, I see writes on the enable path:
>
>     2571 at 1535108904.593107:rp_write_config (ioh3420, @0x28, 0x0, len=0x4)
>
> For Linux, I only see one write at startup, none on runtime resume.
> I did not test system sleep/resume. (disable/enable is arguably a bit
> different from system s/r, you may want to do additional testing here.)

I managed to install Win10 Home under virt-manager with the nvidia
device passed through. However, the nvidia windows driver installer
refuses to install; it says:

    The NVIDIA graphics driver is not compatible with this version of Windows.
    This graphics driver could not find compatible graphics hardware.

One trick for similar-sounding problems is to change the hypervisor
vendor ID, but no luck here.

I was going to check if I can monitor PCI bridge config space access
even without the nvidia driver installed, but I can't find a way to
make the windows VM suspend and resume - the option is not available
in the VM.

Daniel
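For reference, the "change hypervisor vendor ID" trick Daniel refers to is usually done with CPU flags along these lines on a plain QEMU command line (a sketch of the commonly used workaround for the NVIDIA installer's virtualization check, not something verified to help in this case; the vendor id value is arbitrary, up to 12 characters):

    # Hide the KVM signature and present a custom Hyper-V vendor id.
    qemu-system-x86_64 ... \
        -cpu host,kvm=off,hv_vendor_id=0123456789ab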
Daniel Drake
2018-Sep-05 06:26 UTC
[Nouveau] Rewriting Intel PCI bridge prefetch base address bits solves nvidia graphics issues
On Tue, Aug 28, 2018 at 5:57 PM, Peter Wu <peter at lekensteyn.nl> wrote:
> Only non-bridge devices can be passed to a guest, but perhaps logging
> access to the emulated bridge is already sufficient. The Prefetchable
> Base Upper 32 Bits register is at offset 0x28.
>
> In a trace where the Nvidia device is disabled/enabled via Device
> Manager, I see writes on the enable path:
>
>     2571 at 1535108904.593107:rp_write_config (ioh3420, @0x28, 0x0, len=0x4)

Did you do anything special to get an emulated bridge included in this
setup?

Following the instructions at
https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF
I can successfully pass through devices to Windows running under
virt-manager. In the nvidia GPU case I haven't got past the driver
installation failure, but I can pass through other devices OK and
install their drivers.

However, I do not end up with any PCI-to-PCI bridges in this setup.
The passed-through device sits at address 00:08.0; its parent is the
PCI host bridge at 00:00.0.

(I'm trying to spy on whether Windows appears to restore or reset the
PCI bridge prefetch registers upon resume.)

Thanks
Daniel
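Peter's trace output names an "ioh3420" device, which is QEMU's emulated PCIe root port. On a plain QEMU command line the GPU can be placed behind such a bridge roughly as follows (a sketch; it assumes a q35 machine type, and virt-manager/libvirt would normally generate an equivalent topology itself rather than take these arguments verbatim):

    # The root port appears as a PCI-to-PCI bridge in the guest, with the
    # passed-through GPU behind it.
    qemu-system-x86_64 -machine q35 ... \
        -device ioh3420,id=rp1,bus=pcie.0,chassis=1,slot=1 \
        -device vfio-pci,host=01:00.0,bus=rp1,addr=00.0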