Displaying 13 results from an estimated 13 matches for "drm_dp_connection_status_notify".
2019 Sep 03 · 0 replies · [PATCH v2 17/27] drm/dp_mst: Rename drm_dp_add_port and drm_dp_update_port
The names for these functions are rather confusing. drm_dp_add_port()
sounds like a function that would simply create a port and add it to a
topology, and do nothing more. Similarly, drm_dp_update_port() would be
assumed to be the function that should be used to update port
information after initial creation.
While those assumptions are currently correct in how these functions are
used, a quick …
2019 Sep 03 · 0 replies · [PATCH v2 21/27] drm/dp_mst: Don't forget to update port->input in drm_dp_mst_handle_conn_stat()
This probably hasn't caused any problems up until now since it's
probably nearly impossible to encounter this in the wild, however if we
were to receive a connection status notification from the MST hub after
resume while we're in the middle of reprobing the link addresses for a
topology then there's a much larger chance that a port could have
changed from being an output port to …
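
To make the failure mode concrete, here is a minimal sketch of the class of fix the patch title describes. The structures below are simplified stand-ins for the kernel's struct drm_dp_mst_port and struct drm_dp_connection_status_notify, not the real definitions; the point is only that the handler has to copy the notification's input_port flag into the cached port state:

#include <linux/types.h>

/* Simplified stand-in for struct drm_dp_mst_port */
struct mst_port {
	bool input;	/* is this an input (upstream-facing) port? */
	bool ddps;	/* DisplayPort device plug status */
};

/* Simplified stand-in for struct drm_dp_connection_status_notify */
struct conn_stat_notify {
	bool input_port;
	bool displayport_device_plug_status;
};

static void handle_conn_stat(struct mst_port *port,
                             const struct conn_stat_notify *conn_stat)
{
	/*
	 * The fix the title describes: refresh ->input as well, so a
	 * port that flipped between input and output while we were
	 * reprobing after resume doesn't keep stale state.
	 */
	port->input = conn_stat->input_port;
	port->ddps = conn_stat->displayport_device_plug_status;
}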
2019 Sep 25 · 2 replies · [PATCH v2 16/27] drm/dp_mst: Refactor pdt setup/teardown, add more locking
On Tue, Sep 03, 2019 at 04:45:54PM -0400, Lyude Paul wrote:
> Since we're going to be implementing suspend/resume reprobing very soon,
> we need to make sure we are extra careful to ensure that our locking
> actually protects the topology state where we expect it to. Turns out
> this isn't the case with drm_dp_port_setup_pdt() and
> drm_dp_port_teardown_pdt(), both of which …
2019 Sep 25 · 1 reply · [PATCH v2 20/27] drm/dp_mst: Protect drm_dp_mst_port members with connection_mutex
On Tue, Sep 03, 2019 at 04:45:58PM -0400, Lyude Paul wrote:
> Yes, you read that right. Currently there is literally no locking in
> place for any of the drm_dp_mst_port struct members that can be modified
> in response to a link address response, or a connection status response.
> Which literally means if we're unlucky enough to have any sort of
> hotplugging event happen before …
2019 Sep 03 · 0 replies · [PATCH v2 19/27] drm/dp_mst: Handle UP requests asynchronously
Once upon a time, hotplugging devices on MST branches actually worked in
DRM. Now, it only works in amdgpu (likely because of how its hotplug
handlers are implemented). On both i915 and nouveau, hotplug
notifications from MST branches are noticed - but trying to respond to
them causes messaging timeouts and causes the whole topology state to go
out of sync with reality, usually resulting in …
2019 Oct 22 · 0 replies · [PATCH v5 04/14] drm/dp_mst: Handle UP requests asynchronously
Once upon a time, hotplugging devices on MST branches actually worked in
DRM. Now, it only works in amdgpu (likely because of how its hotplug
handlers are implemented). On both i915 and nouveau, hotplug
notifications from MST branches are noticed - but trying to respond to
them causes messaging timeouts and causes the whole topology state to go
out of sync with reality, usually resulting in …
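
The general pattern both revisions of this patch point toward can be sketched with the stock kernel workqueue API: take UP (upstream) requests off the sideband receive path, queue them, and reply from a worker in process context, so the receive path never blocks on further messaging. The names here (up_req, up_req_list, up_req_work, up_req_lock) are illustrative, not necessarily the ones the patch introduces:

#include <linux/workqueue.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct up_req {
	struct list_head next;
	/* decoded sideband message payload would live here */
};

struct mst_mgr {
	spinlock_t up_req_lock;
	struct list_head up_req_list;
	struct work_struct up_req_work;
};

/*
 * Called from the sideband-message receive path: just queue the
 * request and return, rather than replying (and risking timeouts)
 * from this context.
 */
static void queue_up_req(struct mst_mgr *mgr, struct up_req *req)
{
	unsigned long flags;

	spin_lock_irqsave(&mgr->up_req_lock, flags);
	list_add_tail(&req->next, &mgr->up_req_list);
	spin_unlock_irqrestore(&mgr->up_req_lock, flags);

	schedule_work(&mgr->up_req_work);
}

/*
 * Worker: drain the list in process context, where we can safely
 * take locks and exchange further sideband messages.
 */
static void up_req_work_fn(struct work_struct *work)
{
	struct mst_mgr *mgr = container_of(work, struct mst_mgr, up_req_work);
	struct up_req *req;
	unsigned long flags;

	for (;;) {
		spin_lock_irqsave(&mgr->up_req_lock, flags);
		req = list_first_entry_or_null(&mgr->up_req_list,
		                               struct up_req, next);
		if (req)
			list_del(&req->next);
		spin_unlock_irqrestore(&mgr->up_req_lock, flags);

		if (!req)
			break;

		/* ...handle connection/resource status notification... */
		kfree(req);
	}
}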
2019 Oct 22 · 0 replies · [PATCH v5 03/14] drm/dp_mst: Refactor pdt setup/teardown, add more locking
Since we're going to be implementing suspend/resume reprobing very soon,
we need to make sure we are extra careful to ensure that our locking
actually protects the topology state where we expect it to. Turns out
this isn't the case with drm_dp_port_setup_pdt() and
drm_dp_port_teardown_pdt(), both of which change port->mstb without
grabbing &mgr->lock.
Additionally, since most …
2019 Sep 03 · 0 replies · [PATCH v2 16/27] drm/dp_mst: Refactor pdt setup/teardown, add more locking
Since we're going to be implementing suspend/resume reprobing very soon,
we need to make sure we are extra careful to ensure that our locking
actually protects the topology state where we expect it to. Turns out
this isn't the case with drm_dp_port_setup_pdt() and
drm_dp_port_teardown_pdt(), both of which change port->mstb without
grabbing &mgr->lock.
Additionally, since most …
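
A minimal sketch of the locking rule being added here, with simplified stand-ins for the real structures: every writer of port->mstb takes &mgr->lock, so topology walkers holding that lock never see the pointer change underneath them.

#include <linux/mutex.h>

struct mst_branch;

/* Simplified stand-in for struct drm_dp_mst_port */
struct mst_port {
	struct mst_branch *mstb;	/* downstream branch, if the pdt is MST */
};

/* Simplified stand-in for the topology manager */
struct mst_mgr {
	struct mutex lock;	/* protects the topology, including port->mstb */
};

static void port_set_mstb(struct mst_mgr *mgr, struct mst_port *port,
                          struct mst_branch *mstb)
{
	mutex_lock(&mgr->lock);
	port->mstb = mstb;	/* previously done with no lock held */
	mutex_unlock(&mgr->lock);
}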
2019 Sep 03 · 0 replies · [PATCH v2 20/27] drm/dp_mst: Protect drm_dp_mst_port members with connection_mutex
Yes, you read that right. Currently there is literally no locking in
place for any of the drm_dp_mst_port struct members that can be modified
in response to a link address response, or a connection status response.
Which literally means if we're unlucky enough to have any sort of
hotplugging event happen before we're finished with reprobing link
addresses, we'll race and the contents of …
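
A minimal sketch of the writer-side discipline this patch argues for, using a plain mutex as a stand-in (the v2 patch proposes reusing DRM's connection_mutex for this): both the link-address path and the connection-status path update the shared port fields only under the lock, so their stores can no longer interleave mid-update.

#include <linux/mutex.h>
#include <linux/types.h>

/* Simplified stand-in for struct drm_dp_mst_port */
struct mst_port {
	struct mutex *lock;	/* stand-in for the shared connection lock */
	bool ddps;		/* DisplayPort device plug status */
	u8 pdt;			/* peer device type */
};

/* Writer 1: updating a port from a link address reply */
static void update_from_link_address(struct mst_port *port, bool ddps, u8 pdt)
{
	mutex_lock(port->lock);
	port->ddps = ddps;
	port->pdt = pdt;
	mutex_unlock(port->lock);
}

/* Writer 2: updating the same port from a connection status notify */
static void update_from_conn_stat(struct mst_port *port, bool ddps)
{
	mutex_lock(port->lock);
	port->ddps = ddps;
	mutex_unlock(port->lock);
}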
2019 Sep 25 · 0 replies · [PATCH v2 16/27] drm/dp_mst: Refactor pdt setup/teardown, add more locking
On Wed, 2019-09-25 at 15:27 -0400, Sean Paul wrote:
> On Tue, Sep 03, 2019 at 04:45:54PM -0400, Lyude Paul wrote:
> > Since we're going to be implementing suspend/resume reprobing very soon,
> > we need to make sure we are extra careful to ensure that our locking
> > actually protects the topology state where we expect it to. Turns out
> > this isn't the case …
2019 Sep 03 · 50 replies · [PATCH v2 00/27] DP MST Refactors + debugging tools + suspend/resume reprobing
This is the large series for adding MST suspend/resume reprobing that
I've been working on for quite a while now. In addition, I:
- Refactored and cleaned up any code I ended up digging through in the
process of understanding how some parts of these helpers worked.
- Added some debugging tools along the way that I ended up needing to
figure out some issues in my own code
Note that …
2019 Oct 22 · 0 replies · [PATCH v5 06/14] drm/dp_mst: Protect drm_dp_mst_port members with locking
This is a complicated one. Essentially, there's currently a problem in the MST
core that hasn't really caused any issues that we're aware of (emphasis on "that
we're aware of"): locking.
When we go through and probe the link addresses and path resources in a
topology, we hold no locks when updating ports with said information. The
members I'm referring to in …
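
The reader side of the same problem, again with simplified stand-ins rather than the real kernel structures: anything that consumes the probed state (a driver's connector detect path, for example) must take the same lock, or it can observe a half-updated port between two reads during a concurrent reprobe.

#include <linux/mutex.h>
#include <linux/types.h>

/* Simplified stand-in for struct drm_dp_mst_port */
struct mst_port {
	struct mutex *lock;
	bool ddps;		/* DisplayPort device plug status */
	int available_pbn;	/* stand-in for a probed path resource */
};

/*
 * Take a consistent snapshot of the probed state; without the lock,
 * a reprobe could update ddps and available_pbn between these two
 * reads and hand the caller a mixed view.
 */
static bool port_connected_with_pbn(struct mst_port *port, int *pbn)
{
	bool connected;

	mutex_lock(port->lock);
	connected = port->ddps;
	*pbn = port->available_pbn;
	mutex_unlock(port->lock);

	return connected;
}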
2019 Oct 22 · 17 replies · [PATCH v5 00/14] DP MST Refactors + debugging tools + suspend/resume reprobing
This is the final portion of the large series for adding MST
suspend/resume reprobing that I've been working on for quite a while
now. In addition, I:
* Refactored and cleaned up any code I ended up digging through in the
process of understanding how some parts of these helpers worked.
* Added some debugging tools along the way that I ended up needing to
figure out some issues in my own …