Karol Herbst
2019-Nov-20 21:40 UTC
[Nouveau] [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Wed, Nov 20, 2019 at 10:37 PM Rafael J. Wysocki <rafael at kernel.org> wrote:
>
> On Wed, Nov 20, 2019 at 4:53 PM Mika Westerberg
> <mika.westerberg at intel.com> wrote:
> >
> > On Wed, Nov 20, 2019 at 04:37:14PM +0100, Karol Herbst wrote:
> > > On Wed, Nov 20, 2019 at 4:15 PM Mika Westerberg
> > > <mika.westerberg at intel.com> wrote:
> > > >
> > > > On Wed, Nov 20, 2019 at 01:11:52PM +0100, Karol Herbst wrote:
> > > > > On Wed, Nov 20, 2019 at 1:09 PM Mika Westerberg
> > > > > <mika.westerberg at intel.com> wrote:
> > > > > >
> > > > > > On Wed, Nov 20, 2019 at 12:58:00PM +0100, Karol Herbst wrote:
> > > > > > > overall, what I really want to know is, _why_ does it work on windows?
> > > > > >
> > > > > > So do I ;-)
> > > > > >
> > > > > > > Or what are we doing differently on Linux so that it doesn't work? If
> > > > > > > anybody has any idea on how we could dig into this and figure it out
> > > > > > > on this level, this would probably allow us to get closer to the root
> > > > > > > cause? no?
> > > > > >
> > > > > > Have you tried to use the acpi_rev_override parameter in your system and
> > > > > > does it have any effect?
> > > > > >
> > > > > > Also did you try to trace the ACPI _ON/_OFF() methods? I think that
> > > > > > should hopefully reveal something.
> > > > > >
> > > > >
> > > > > I think I did in the past and it seemed to have worked, there is just
> > > > > one big issue with this: it's a Dell specific workaround afaik, and
> > > > > this issue plagues not just Dell, but we've seen it on HP and Lenovo
> > > > > laptops as well, and I've heard about users having the same issues on
> > > > > Asus and MSI laptops as well.
> > > >
> > > > Maybe it is not a workaround at all but instead it simply determines
> > > > whether the system supports RTD3 or something like that (IIRC Windows 8
> > > > started supporting it). Maybe Dell added check for Linux because at that
> > > > time Linux did not support it.
> > > >
> > >
> > > the point is, it's not checking it by default, so by default you still
> > > run into the windows 8 codepath.
> >
> > Well you can add the quirk to acpi_rev_dmi_table[] so it goes to that
> > path by default. There are a bunch of similar entries for Dell machines.
>
> OK, so the "Linux path" works and the other doesn't.
>
> I thought that this was the other way around, sorry for the confusion.
>
> > Of course this does not help the non-Dell users so we would still need
> > to figure out the root cause.
>
> Right.
>
> Whatever it is, though, AML appears to be involved in it and AFAICS
> there's no evidence that it affects any root ports that are not
> populated with NVidia GPUs.
>

Last week or so I found systems where the GPU was under the "PCI
Express Root Port" (name from lspci), and on those systems all of that
seems to work. So I am wondering if it's indeed just the 0x1901 one,
which would also explain Mika's case: the Thunderbolt stuff works
because those devices never get populated under this particular bridge
controller, but under those "Root Port"s.

> Now, one thing is still not clear to me from the discussion so far: is
> the _PR3 method you mentioned defined under the GPU device object or
> under the port device object?
>
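To make the 0x1901 observation concrete, here is a minimal sketch (in the spirit of, but not taken from, the v4 patch) of how a workaround could be keyed to exactly this configuration, i.e. a GPU whose upstream bridge is the Intel PEG port with device ID 0x1901. The helper name and macro are made up for illustration:

#include <linux/pci.h>

#define INTEL_SKL_PEG_PORT_ID	0x1901	/* PEG port seen on the affected systems */

/* Return true if the GPU sits directly below the suspect Intel PEG port. */
static bool gpu_behind_suspect_bridge(struct pci_dev *gpu)
{
	struct pci_dev *bridge = pci_upstream_bridge(gpu);

	return bridge &&
	       bridge->vendor == PCI_VENDOR_ID_INTEL &&
	       bridge->device == INTEL_SKL_PEG_PORT_ID;
}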
Rafael J. Wysocki
2019-Nov-20 22:29 UTC
[Nouveau] [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Wed, Nov 20, 2019 at 10:40 PM Karol Herbst <kherbst at redhat.com> wrote:
>
> On Wed, Nov 20, 2019 at 10:37 PM Rafael J. Wysocki <rafael at kernel.org> wrote:
> >
> > On Wed, Nov 20, 2019 at 4:53 PM Mika Westerberg
> > <mika.westerberg at intel.com> wrote:
> > >
> > > On Wed, Nov 20, 2019 at 04:37:14PM +0100, Karol Herbst wrote:
> > > > On Wed, Nov 20, 2019 at 4:15 PM Mika Westerberg
> > > > <mika.westerberg at intel.com> wrote:
> > > > >
> > > > > On Wed, Nov 20, 2019 at 01:11:52PM +0100, Karol Herbst wrote:
> > > > > > On Wed, Nov 20, 2019 at 1:09 PM Mika Westerberg
> > > > > > <mika.westerberg at intel.com> wrote:
> > > > > > >
> > > > > > > On Wed, Nov 20, 2019 at 12:58:00PM +0100, Karol Herbst wrote:
> > > > > > > > overall, what I really want to know is, _why_ does it work on windows?
> > > > > > >
> > > > > > > So do I ;-)
> > > > > > >
> > > > > > > > Or what are we doing differently on Linux so that it doesn't work? If
> > > > > > > > anybody has any idea on how we could dig into this and figure it out
> > > > > > > > on this level, this would probably allow us to get closer to the root
> > > > > > > > cause? no?
> > > > > > >
> > > > > > > Have you tried to use the acpi_rev_override parameter in your system and
> > > > > > > does it have any effect?
> > > > > > >
> > > > > > > Also did you try to trace the ACPI _ON/_OFF() methods? I think that
> > > > > > > should hopefully reveal something.
> > > > > > >
> > > > > >
> > > > > > I think I did in the past and it seemed to have worked, there is just
> > > > > > one big issue with this: it's a Dell specific workaround afaik, and
> > > > > > this issue plagues not just Dell, but we've seen it on HP and Lenovo
> > > > > > laptops as well, and I've heard about users having the same issues on
> > > > > > Asus and MSI laptops as well.
> > > > >
> > > > > Maybe it is not a workaround at all but instead it simply determines
> > > > > whether the system supports RTD3 or something like that (IIRC Windows 8
> > > > > started supporting it). Maybe Dell added check for Linux because at that
> > > > > time Linux did not support it.
> > > > >
> > > >
> > > > the point is, it's not checking it by default, so by default you still
> > > > run into the windows 8 codepath.
> > >
> > > Well you can add the quirk to acpi_rev_dmi_table[] so it goes to that
> > > path by default. There are a bunch of similar entries for Dell machines.
> >
> > OK, so the "Linux path" works and the other doesn't.
> >
> > I thought that this was the other way around, sorry for the confusion.
> >
> > > Of course this does not help the non-Dell users so we would still need
> > > to figure out the root cause.
> >
> > Right.
> >
> > Whatever it is, though, AML appears to be involved in it and AFAICS
> > there's no evidence that it affects any root ports that are not
> > populated with NVidia GPUs.
> >
>
> Last week or so I found systems where the GPU was under the "PCI
> Express Root Port" (name from lspci), and on those systems all of that
> seems to work. So I am wondering if it's indeed just the 0x1901 one,
> which would also explain Mika's case: the Thunderbolt stuff works
> because those devices never get populated under this particular bridge
> controller, but under those "Root Port"s.

It always is a PCIe port, but its location within the SoC may matter.
Also some custom AML-based power management is involved and that may
be making specific assumptions on the configuration of the SoC and the
GPU at the time of its invocation which unfortunately are not known to
us.

However, it looks like the AML invoked to power down the GPU from
acpi_pci_set_power_state() gets confused if the GPU is not in PCI D0
at that point, so it looks like that AML tries to access device memory
on the GPU (beyond the PCI config space), or something similar, that
is not accessible in PCI power states below D0.
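To illustrate the ordering constraint described above, here is a purely hypothetical sketch (not the actual code in drivers/pci/pci-acpi.c): if the power-off AML needs to touch the GPU, the device has to be back in D0 before that AML runs. The function name and the direct call into the ACPI layer are assumptions for illustration only:

#include <linux/acpi.h>
#include <linux/pci.h>

/*
 * Hypothetical sketch: make sure the GPU is in D0 before the platform
 * (AML) power-off runs, since that AML may poke GPU MMIO space that is
 * inaccessible in D3hot.  This is not how the PCI core is structured;
 * it only illustrates the ordering constraint discussed above.
 */
static int platform_power_off_gpu(struct pci_dev *pdev)
{
	struct acpi_device *adev = ACPI_COMPANION(&pdev->dev);

	if (!adev)
		return -ENODEV;

	/* Bring the device back to D0 first so the AML can access it. */
	if (pdev->current_state != PCI_D0)
		pci_set_power_state(pdev, PCI_D0);

	/* Now let ACPI (_PS3 and the power resources behind it) cut power. */
	return acpi_device_set_power(adev, ACPI_STATE_D3_COLD);
}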
Mika Westerberg
2019-Nov-21 11:28 UTC
[Nouveau] [PATCH v4] pci: prevent putting nvidia GPUs into lower device states on certain intel bridges
On Wed, Nov 20, 2019 at 11:29:33PM +0100, Rafael J. Wysocki wrote:
> > Last week or so I found systems where the GPU was under the "PCI
> > Express Root Port" (name from lspci), and on those systems all of that
> > seems to work. So I am wondering if it's indeed just the 0x1901 one,
> > which would also explain Mika's case: the Thunderbolt stuff works
> > because those devices never get populated under this particular bridge
> > controller, but under those "Root Port"s.
>
> It always is a PCIe port, but its location within the SoC may matter.

Exactly. Intel hardware has PCIe ports on the CPU side (these are
called PEG, PCI Express Graphics, ports) and on the PCH side. I think
the IP is still the same.

> Also some custom AML-based power management is involved and that may
> be making specific assumptions on the configuration of the SoC and the
> GPU at the time of its invocation which unfortunately are not known to
> us.
>
> However, it looks like the AML invoked to power down the GPU from
> acpi_pci_set_power_state() gets confused if the GPU is not in PCI D0
> at that point, so it looks like that AML tries to access device memory
> on the GPU (beyond the PCI config space), or something similar, that
> is not accessible in PCI power states below D0.

Or the AML tries to access the PCI config space of the GPU while the
parent root port is in D3hot (as is the case here); the GPU config
space is not accessible then either.

I took a look at the ACPI tables of an HP Omen, which has a similar
problem, and there is also a check for Windows 7 (but not Linux), so I
think one alternative workaround would be to add these devices to
acpi_osi_dmi_table[] with .callback set to dmi_disable_osi_win8 (or
pass 'acpi_osi="!Windows 2012"' on the kernel command line).
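As a rough sketch of the kind of entry this suggests for acpi_osi_dmi_table[] in drivers/acpi/osi.c (the callback mirrors the existing one there; the DMI match strings are placeholders, not taken from a real affected machine, and the table is shown standalone rather than as a patch to the existing array):

#include <linux/acpi.h>
#include <linux/dmi.h>
#include <linux/init.h>
#include <linux/printk.h>

/* Same shape as the existing callback in drivers/acpi/osi.c. */
static int __init dmi_disable_osi_win8(const struct dmi_system_id *d)
{
	pr_notice("DMI detected: %s\n", d->ident);
	acpi_osi_setup("!Windows 2012");
	return 0;
}

/* Placeholder table: the vendor/product strings are illustrative only. */
static const struct dmi_system_id example_osi_dmi_table[] __initconst = {
	{
		.callback = dmi_disable_osi_win8,
		.ident = "HP OMEN (placeholder)",
		.matches = {
			DMI_MATCH(DMI_SYS_VENDOR, "HP"),
			DMI_MATCH(DMI_PRODUCT_NAME, "OMEN Placeholder Model"),
		},
	},
	{ }
};

Booting with acpi_osi="!Windows 2012" on the kernel command line, as mentioned above, is the quicker way to test whether this approach helps before adding a DMI quirk.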