Lyude Paul
2022-Jun-07 19:29 UTC
[Nouveau] [RESEND RFC 00/18] drm/display/dp_mst: Drop Radeon MST support, make MST atomic-only
Ugh, thanks ./scripts/get_maintainers.pl for confusing and breaking git-send-email. Sorry for the resend, everyone.

For quite a while we've been carrying around a lot of legacy modesetting code in the MST helpers that has been rather annoying to keep around, and that very often gets in the way of implementing additional MST functionality such as fallback link rate retraining, dynamic BPC management, DSC support, etc., because we can't rely on atomic for everything.

Luckily, we only actually have one user of the legacy MST code left in the kernel - radeon. Originally I was thinking of trying to maintain this code and keep it around in some form, but I'm pretty unconvinced anyone is actually using it. My reasoning is that I've seen nearly no issues reported against MST on radeon for quite a while now - despite the fact that my local testing seems to indicate it's quite broken. This isn't too surprising either, as MST support in radeon.ko is gated behind a module parameter that isn't enabled by default. This isn't to say I wouldn't be open to alternative suggestions, but I'd rather not be the one to have to spend time on that if at all possible! Plus, I already floated the idea of dropping this code with AMD folks a few times and didn't get much resistance.

As well, this series has some basic refactoring that I did along the way and some bugs I had to fix in order to get my atomic-only MST code working. Most of this is pretty straightforward and simply renames things to more closely match the DisplayPort specification, which I think will also make this code a lot easier to maintain in the long run (I've gotten myself confused way too many times because of this).

So far I've tested this on all three remaining MST drivers: amdgpu, i915 and nouveau, along with making sure that removing the radeon MST code doesn't break anything else. The one thing I very much could use help with regarding testing is making sure that this works with amdgpu's DSC support on MST.

So, with this we should be using the atomic state as much as possible with MST modesetting, hooray!

Cc: Wayne Lin <Wayne.Lin at amd.com>
Cc: Ville Syrjälä <ville.syrjala at linux.intel.com>
Cc: Fangzhi Zuo <Jerry.Zuo at amd.com>
Cc: Jani Nikula <jani.nikula at intel.com>
Cc: Imre Deak <imre.deak at intel.com>
Cc: Daniel Vetter <daniel.vetter at ffwll.ch>
Cc: Sean Paul <sean at poorly.run>

Lyude Paul (18):
  drm/amdgpu/dc/mst: Rename dp_mst_stream_allocation(_table)
  drm/amdgpu/dm/mst: Rename get_payload_table()
  drm/display/dp_mst: Rename drm_dp_mst_vcpi_allocation
  drm/display/dp_mst: Call them time slots, not VCPI slots
  drm/display/dp_mst: Fix confusing docs for drm_dp_atomic_release_time_slots()
  drm/display/dp_mst: Add some missing kdocs for atomic MST structs
  drm/display/dp_mst: Add helper for finding payloads in atomic MST state
  drm/display/dp_mst: Add nonblocking helpers for DP MST
  drm/display/dp_mst: Don't open code modeset checks for releasing time slots
  drm/display/dp_mst: Fix modeset tracking in drm_dp_atomic_release_vcpi_slots()
  drm/nouveau/kms: Cache DP encoders in nouveau_connector
  drm/nouveau/kms: Pull mst state in for all modesets
  drm/display/dp_mst: Add helpers for serializing SST <-> MST transitions
  drm/display/dp_mst: Drop all ports from topology on CSNs before queueing link address work
  drm/display/dp_mst: Skip releasing payloads if last connected port isn't connected
  drm/display/dp_mst: Maintain time slot allocations when deleting payloads
  drm/radeon: Drop legacy MST support
  drm/display/dp_mst: Move all payload info into the atomic state

 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |   72 +-
 .../amd/display/amdgpu_dm/amdgpu_dm_helpers.c |  111 +-
 .../display/amdgpu_dm/amdgpu_dm_mst_types.c   |  126 +-
 drivers/gpu/drm/amd/display/dc/core/dc_link.c |   10 +-
 drivers/gpu/drm/amd/display/dc/dm_helpers.h   |    4 +-
 .../amd/display/include/link_service_types.h  |   18 +-
 drivers/gpu/drm/display/drm_dp_mst_topology.c | 1160 ++++++++---------
 drivers/gpu/drm/i915/display/intel_display.c  |   11 +
 drivers/gpu/drm/i915/display/intel_dp.c       |    9 +
 drivers/gpu/drm/i915/display/intel_dp_mst.c   |   91 +-
 drivers/gpu/drm/i915/display/intel_hdcp.c     |   24 +-
 drivers/gpu/drm/nouveau/dispnv50/disp.c       |  202 ++-
 drivers/gpu/drm/nouveau/dispnv50/disp.h       |    2 +
 drivers/gpu/drm/nouveau/nouveau_connector.c   |   18 +-
 drivers/gpu/drm/nouveau/nouveau_connector.h   |    3 +
 drivers/gpu/drm/radeon/Makefile               |    2 +-
 drivers/gpu/drm/radeon/atombios_crtc.c        |   11 +-
 drivers/gpu/drm/radeon/atombios_encoders.c    |   59 -
 drivers/gpu/drm/radeon/radeon_atombios.c      |    2 -
 drivers/gpu/drm/radeon/radeon_connectors.c    |   61 +-
 drivers/gpu/drm/radeon/radeon_device.c        |    1 -
 drivers/gpu/drm/radeon/radeon_dp_mst.c        |  778 -----------
 drivers/gpu/drm/radeon/radeon_drv.c           |    4 -
 drivers/gpu/drm/radeon/radeon_encoders.c      |   14 +-
 drivers/gpu/drm/radeon/radeon_irq_kms.c       |   10 +-
 drivers/gpu/drm/radeon/radeon_mode.h          |   40 -
 include/drm/display/drm_dp_mst_helper.h       |  230 ++--
 27 files changed, 991 insertions(+), 2082 deletions(-)
 delete mode 100644 drivers/gpu/drm/radeon/radeon_dp_mst.c

--
2.35.3
Lyude Paul
2022-Jun-07 19:29 UTC
[Nouveau] [RESEND RFC 01/18] drm/amdgpu/dc/mst: Rename dp_mst_stream_allocation(_table)
Just to make this more clear to outside contributors that these are DC-specific structs, as this also threw me into a loop a number of times before I figured out the purpose of this. Signed-off-by: Lyude Paul <lyude at redhat.com> Cc: Wayne Lin <Wayne.Lin at amd.com> Cc: Fangzhi Zuo <Jerry.Zuo at amd.com> --- .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c | 9 ++++----- drivers/gpu/drm/amd/display/dc/core/dc_link.c | 10 +++++----- drivers/gpu/drm/amd/display/dc/dm_helpers.h | 4 ++-- .../gpu/drm/amd/display/include/link_service_types.h | 11 ++++++++--- 4 files changed, 19 insertions(+), 15 deletions(-) diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c index 7c799ddc1d27..1bd70d306c22 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c @@ -153,9 +153,8 @@ enum dc_edid_status dm_helpers_parse_edid_caps( return result; } -static void get_payload_table( - struct amdgpu_dm_connector *aconnector, - struct dp_mst_stream_allocation_table *proposed_table) +static void get_payload_table(struct amdgpu_dm_connector *aconnector, + struct dc_dp_mst_stream_allocation_table *proposed_table) { int i; struct drm_dp_mst_topology_mgr *mst_mgr @@ -177,7 +176,7 @@ static void get_payload_table( mst_mgr->payloads[i].payload_state = DP_PAYLOAD_REMOTE) { - struct dp_mst_stream_allocation *sa + struct dc_dp_mst_stream_allocation *sa &proposed_table->stream_allocations[ proposed_table->stream_count]; @@ -201,7 +200,7 @@ void dm_helpers_dp_update_branch_info( bool dm_helpers_dp_mst_write_payload_allocation_table( struct dc_context *ctx, const struct dc_stream_state *stream, - struct dp_mst_stream_allocation_table *proposed_table, + struct dc_dp_mst_stream_allocation_table *proposed_table, bool enable) { struct amdgpu_dm_connector *aconnector; diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c index a789ea8af27f..db0f5158a0c2 100644 --- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c +++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c @@ -3424,7 +3424,7 @@ static void update_mst_stream_alloc_table( struct dc_link *link, struct stream_encoder *stream_enc, struct hpo_dp_stream_encoder *hpo_dp_stream_enc, // TODO: Rename stream_enc to dio_stream_enc? 
- const struct dp_mst_stream_allocation_table *proposed_table) + const struct dc_dp_mst_stream_allocation_table *proposed_table) { struct link_mst_stream_allocation work_table[MAX_CONTROLLER_NUM] = { 0 }; struct link_mst_stream_allocation *dc_alloc; @@ -3586,7 +3586,7 @@ enum dc_status dc_link_allocate_mst_payload(struct pipe_ctx *pipe_ctx) { struct dc_stream_state *stream = pipe_ctx->stream; struct dc_link *link = stream->link; - struct dp_mst_stream_allocation_table proposed_table = {0}; + struct dc_dp_mst_stream_allocation_table proposed_table = {0}; struct fixed31_32 avg_time_slots_per_mtp; struct fixed31_32 pbn; struct fixed31_32 pbn_per_slot; @@ -3691,7 +3691,7 @@ enum dc_status dc_link_reduce_mst_payload(struct pipe_ctx *pipe_ctx, uint32_t bw struct fixed31_32 avg_time_slots_per_mtp; struct fixed31_32 pbn; struct fixed31_32 pbn_per_slot; - struct dp_mst_stream_allocation_table proposed_table = {0}; + struct dc_dp_mst_stream_allocation_table proposed_table = {0}; uint8_t i; enum act_return_status ret; const struct link_hwss *link_hwss = get_link_hwss(link, &pipe_ctx->link_res); @@ -3779,7 +3779,7 @@ enum dc_status dc_link_increase_mst_payload(struct pipe_ctx *pipe_ctx, uint32_t struct fixed31_32 pbn; struct fixed31_32 pbn_per_slot; struct link_encoder *link_encoder = link->link_enc; - struct dp_mst_stream_allocation_table proposed_table = {0}; + struct dc_dp_mst_stream_allocation_table proposed_table = {0}; uint8_t i; enum act_return_status ret; const struct link_hwss *link_hwss = get_link_hwss(link, &pipe_ctx->link_res); @@ -3855,7 +3855,7 @@ static enum dc_status deallocate_mst_payload(struct pipe_ctx *pipe_ctx) { struct dc_stream_state *stream = pipe_ctx->stream; struct dc_link *link = stream->link; - struct dp_mst_stream_allocation_table proposed_table = {0}; + struct dc_dp_mst_stream_allocation_table proposed_table = {0}; struct fixed31_32 avg_time_slots_per_mtp = dc_fixpt_from_int(0); int i; bool mst_mode = (link->type == dc_connection_mst_branch); diff --git a/drivers/gpu/drm/amd/display/dc/dm_helpers.h b/drivers/gpu/drm/amd/display/dc/dm_helpers.h index fb6a2d7b6470..8173f4b80424 100644 --- a/drivers/gpu/drm/amd/display/dc/dm_helpers.h +++ b/drivers/gpu/drm/amd/display/dc/dm_helpers.h @@ -33,7 +33,7 @@ #include "dc_types.h" #include "dc.h" -struct dp_mst_stream_allocation_table; +struct dc_dp_mst_stream_allocation_table; struct aux_payload; enum aux_return_code_type; @@ -77,7 +77,7 @@ void dm_helpers_dp_update_branch_info( bool dm_helpers_dp_mst_write_payload_allocation_table( struct dc_context *ctx, const struct dc_stream_state *stream, - struct dp_mst_stream_allocation_table *proposed_table, + struct dc_dp_mst_stream_allocation_table *proposed_table, bool enable); /* diff --git a/drivers/gpu/drm/amd/display/include/link_service_types.h b/drivers/gpu/drm/amd/display/include/link_service_types.h index 447a56286dd0..91bffc5bf52c 100644 --- a/drivers/gpu/drm/amd/display/include/link_service_types.h +++ b/drivers/gpu/drm/amd/display/include/link_service_types.h @@ -245,8 +245,13 @@ union dpcd_training_lane_set { }; +/* AMD's copy of various payload data for MST. We have two copies of the payload table (one in DRM, + * one in DC) since DRM's MST helpers can't be accessed here. This stream allocation table should + * _ONLY_ be filled out from DM and then passed to DC, do NOT use these for _any_ kind of atomic + * state calculations in DM, or you will break something. 
+ */ /* DP MST stream allocation (payload bandwidth number) */ -struct dp_mst_stream_allocation { +struct dc_dp_mst_stream_allocation { uint8_t vcp_id; /* number of slots required for the DP stream in * transport packet */ @@ -254,11 +259,11 @@ struct dp_mst_stream_allocation { }; /* DP MST stream allocation table */ -struct dp_mst_stream_allocation_table { +struct dc_dp_mst_stream_allocation_table { /* number of DP video streams */ int stream_count; /* array of stream allocations */ - struct dp_mst_stream_allocation stream_allocations[MAX_CONTROLLER_NUM]; + struct dc_dp_mst_stream_allocation stream_allocations[MAX_CONTROLLER_NUM]; }; #endif /*__DAL_LINK_SERVICE_TYPES_H__*/ -- 2.35.3
Lyude Paul
2022-Jun-07 19:29 UTC
[Nouveau] [RESEND RFC 02/18] drm/amdgpu/dm/mst: Rename get_payload_table()
This function isn't too confusing if you see the comment around the call-site for it, but if you don't, then it's not at all obvious that this is meant to copy DRM's payload table over to DC's internal state structs. Seeing this function before finding that comment definitely threw me for a loop a few times. So, let's rename this to make its purpose more obvious regardless of where in the code you are.

Signed-off-by: Lyude Paul <lyude at redhat.com>
Cc: Wayne Lin <Wayne.Lin at amd.com>
Cc: Fangzhi Zuo <Jerry.Zuo at amd.com>
---
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
index 1bd70d306c22..1eaacab0334b 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
@@ -153,8 +153,9 @@ enum dc_edid_status dm_helpers_parse_edid_caps(
 	return result;
 }
 
-static void get_payload_table(struct amdgpu_dm_connector *aconnector,
-			      struct dc_dp_mst_stream_allocation_table *proposed_table)
+static void
+fill_dc_mst_payload_table_from_drm(struct amdgpu_dm_connector *aconnector,
+				   struct dc_dp_mst_stream_allocation_table *proposed_table)
 {
 	int i;
 	struct drm_dp_mst_topology_mgr *mst_mgr =
@@ -252,7 +253,7 @@ bool dm_helpers_dp_mst_write_payload_allocation_table(
 	 * stream. AMD ASIC stream slot allocation should follow the same
 	 * sequence. copy DRM MST allocation to dc
 	 */
-	get_payload_table(aconnector, proposed_table);
+	fill_dc_mst_payload_table_from_drm(aconnector, proposed_table);
 
 	return true;
 }
--
2.35.3
Lyude Paul
2022-Jun-07 19:29 UTC
[Nouveau] [RESEND RFC 03/18] drm/display/dp_mst: Rename drm_dp_mst_vcpi_allocation
In retrospect, the name I chose for this originally is confusing, as there's a lot more info in here then just the VCPI. This really should be called a payload. Let's make it more obvious that this is meant to be related to the atomic state and is about payloads by renaming it to drm_dp_mst_atomic_payload. Also, rename various variables throughout the code that use atomic payloads. Signed-off-by: Lyude Paul <lyude at redhat.com> Cc: Wayne Lin <Wayne.Lin at amd.com> Cc: Ville Syrj?l? <ville.syrjala at linux.intel.com> Cc: Fangzhi Zuo <Jerry.Zuo at amd.com> Cc: Jani Nikula <jani.nikula at intel.com> Cc: Imre Deak <imre.deak at intel.com> Cc: Daniel Vetter <daniel.vetter at ffwll.ch> Cc: Sean Paul <sean at poorly.run> --- drivers/gpu/drm/display/drm_dp_mst_topology.c | 96 +++++++++---------- include/drm/display/drm_dp_mst_helper.h | 4 +- 2 files changed, 50 insertions(+), 50 deletions(-) diff --git a/drivers/gpu/drm/display/drm_dp_mst_topology.c b/drivers/gpu/drm/display/drm_dp_mst_topology.c index 67b3b9697da7..38eecb89e22d 100644 --- a/drivers/gpu/drm/display/drm_dp_mst_topology.c +++ b/drivers/gpu/drm/display/drm_dp_mst_topology.c @@ -4381,7 +4381,7 @@ int drm_dp_atomic_find_vcpi_slots(struct drm_atomic_state *state, int pbn_div) { struct drm_dp_mst_topology_state *topology_state; - struct drm_dp_vcpi_allocation *pos, *vcpi = NULL; + struct drm_dp_mst_atomic_payload *pos, *payload = NULL; int prev_slots, prev_bw, req_slots; topology_state = drm_atomic_get_mst_topology_state(state, mgr); @@ -4389,11 +4389,11 @@ int drm_dp_atomic_find_vcpi_slots(struct drm_atomic_state *state, return PTR_ERR(topology_state); /* Find the current allocation for this port, if any */ - list_for_each_entry(pos, &topology_state->vcpis, next) { + list_for_each_entry(pos, &topology_state->payloads, next) { if (pos->port == port) { - vcpi = pos; - prev_slots = vcpi->vcpi; - prev_bw = vcpi->pbn; + payload = pos; + prev_slots = payload->vcpi; + prev_bw = payload->pbn; /* * This should never happen, unless the driver tries @@ -4410,7 +4410,7 @@ int drm_dp_atomic_find_vcpi_slots(struct drm_atomic_state *state, break; } } - if (!vcpi) { + if (!payload) { prev_slots = 0; prev_bw = 0; } @@ -4428,17 +4428,17 @@ int drm_dp_atomic_find_vcpi_slots(struct drm_atomic_state *state, port, prev_bw, pbn); /* Add the new allocation to the state */ - if (!vcpi) { - vcpi = kzalloc(sizeof(*vcpi), GFP_KERNEL); - if (!vcpi) + if (!payload) { + payload = kzalloc(sizeof(*payload), GFP_KERNEL); + if (!payload) return -ENOMEM; drm_dp_mst_get_port_malloc(port); - vcpi->port = port; - list_add(&vcpi->next, &topology_state->vcpis); + payload->port = port; + list_add(&payload->next, &topology_state->payloads); } - vcpi->vcpi = req_slots; - vcpi->pbn = pbn; + payload->vcpi = req_slots; + payload->pbn = pbn; return req_slots; } @@ -4475,21 +4475,21 @@ int drm_dp_atomic_release_vcpi_slots(struct drm_atomic_state *state, struct drm_dp_mst_port *port) { struct drm_dp_mst_topology_state *topology_state; - struct drm_dp_vcpi_allocation *pos; + struct drm_dp_mst_atomic_payload *pos; bool found = false; topology_state = drm_atomic_get_mst_topology_state(state, mgr); if (IS_ERR(topology_state)) return PTR_ERR(topology_state); - list_for_each_entry(pos, &topology_state->vcpis, next) { + list_for_each_entry(pos, &topology_state->payloads, next) { if (pos->port == port) { found = true; break; } } if (WARN_ON(!found)) { - drm_err(mgr->dev, "no VCPI for [MST PORT:%p] found in mst state %p\n", + drm_err(mgr->dev, "No payload for [MST PORT:%p] found in mst state 
%p\n", port, &topology_state->base); return -EINVAL; } @@ -5072,7 +5072,7 @@ drm_dp_mst_duplicate_state(struct drm_private_obj *obj) { struct drm_dp_mst_topology_state *state, *old_state to_dp_mst_topology_state(obj->state); - struct drm_dp_vcpi_allocation *pos, *vcpi; + struct drm_dp_mst_atomic_payload *pos, *payload; state = kmemdup(old_state, sizeof(*state), GFP_KERNEL); if (!state) @@ -5080,25 +5080,25 @@ drm_dp_mst_duplicate_state(struct drm_private_obj *obj) __drm_atomic_helper_private_obj_duplicate_state(obj, &state->base); - INIT_LIST_HEAD(&state->vcpis); + INIT_LIST_HEAD(&state->payloads); - list_for_each_entry(pos, &old_state->vcpis, next) { + list_for_each_entry(pos, &old_state->payloads, next) { /* Prune leftover freed VCPI allocations */ if (!pos->vcpi) continue; - vcpi = kmemdup(pos, sizeof(*vcpi), GFP_KERNEL); - if (!vcpi) + payload = kmemdup(pos, sizeof(*payload), GFP_KERNEL); + if (!payload) goto fail; - drm_dp_mst_get_port_malloc(vcpi->port); - list_add(&vcpi->next, &state->vcpis); + drm_dp_mst_get_port_malloc(payload->port); + list_add(&payload->next, &state->payloads); } return &state->base; fail: - list_for_each_entry_safe(pos, vcpi, &state->vcpis, next) { + list_for_each_entry_safe(pos, payload, &state->payloads, next) { drm_dp_mst_put_port_malloc(pos->port); kfree(pos); } @@ -5112,9 +5112,9 @@ static void drm_dp_mst_destroy_state(struct drm_private_obj *obj, { struct drm_dp_mst_topology_state *mst_state to_dp_mst_topology_state(state); - struct drm_dp_vcpi_allocation *pos, *tmp; + struct drm_dp_mst_atomic_payload *pos, *tmp; - list_for_each_entry_safe(pos, tmp, &mst_state->vcpis, next) { + list_for_each_entry_safe(pos, tmp, &mst_state->payloads, next) { /* We only keep references to ports with non-zero VCPIs */ if (pos->vcpi) drm_dp_mst_put_port_malloc(pos->port); @@ -5147,7 +5147,7 @@ static int drm_dp_mst_atomic_check_mstb_bw_limit(struct drm_dp_mst_branch *mstb, struct drm_dp_mst_topology_state *state) { - struct drm_dp_vcpi_allocation *vcpi; + struct drm_dp_mst_atomic_payload *payload; struct drm_dp_mst_port *port; int pbn_used = 0, ret; bool found = false; @@ -5155,9 +5155,9 @@ drm_dp_mst_atomic_check_mstb_bw_limit(struct drm_dp_mst_branch *mstb, /* Check that we have at least one port in our state that's downstream * of this branch, otherwise we can skip this branch */ - list_for_each_entry(vcpi, &state->vcpis, next) { - if (!vcpi->pbn || - !drm_dp_mst_port_downstream_of_branch(vcpi->port, mstb)) + list_for_each_entry(payload, &state->payloads, next) { + if (!payload->pbn || + !drm_dp_mst_port_downstream_of_branch(payload->port, mstb)) continue; found = true; @@ -5188,7 +5188,7 @@ static int drm_dp_mst_atomic_check_port_bw_limit(struct drm_dp_mst_port *port, struct drm_dp_mst_topology_state *state) { - struct drm_dp_vcpi_allocation *vcpi; + struct drm_dp_mst_atomic_payload *payload; int pbn_used = 0; if (port->pdt == DP_PEER_DEVICE_NONE) @@ -5197,10 +5197,10 @@ drm_dp_mst_atomic_check_port_bw_limit(struct drm_dp_mst_port *port, if (drm_dp_mst_is_end_device(port->pdt, port->mcs)) { bool found = false; - list_for_each_entry(vcpi, &state->vcpis, next) { - if (vcpi->port != port) + list_for_each_entry(payload, &state->payloads, next) { + if (payload->port != port) continue; - if (!vcpi->pbn) + if (!payload->pbn) return 0; found = true; @@ -5220,7 +5220,7 @@ drm_dp_mst_atomic_check_port_bw_limit(struct drm_dp_mst_port *port, return -EINVAL; } - pbn_used = vcpi->pbn; + pbn_used = payload->pbn; } else { pbn_used = drm_dp_mst_atomic_check_mstb_bw_limit(port->mstb, 
state); @@ -5245,25 +5245,25 @@ static inline int drm_dp_mst_atomic_check_vcpi_alloc_limit(struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_topology_state *mst_state) { - struct drm_dp_vcpi_allocation *vcpi; + struct drm_dp_mst_atomic_payload *payload; int avail_slots = mst_state->total_avail_slots, payload_count = 0; - list_for_each_entry(vcpi, &mst_state->vcpis, next) { - /* Releasing VCPI is always OK-even if the port is gone */ - if (!vcpi->vcpi) { + list_for_each_entry(payload, &mst_state->payloads, next) { + /* Releasing payloads is always OK-even if the port is gone */ + if (!payload->vcpi) { drm_dbg_atomic(mgr->dev, "[MST PORT:%p] releases all VCPI slots\n", - vcpi->port); + payload->port); continue; } drm_dbg_atomic(mgr->dev, "[MST PORT:%p] requires %d vcpi slots\n", - vcpi->port, vcpi->vcpi); + payload->port, payload->vcpi); - avail_slots -= vcpi->vcpi; + avail_slots -= payload->vcpi; if (avail_slots < 0) { drm_dbg_atomic(mgr->dev, "[MST PORT:%p] not enough VCPI slots in mst state %p (avail=%d)\n", - vcpi->port, mst_state, avail_slots + vcpi->vcpi); + payload->port, mst_state, avail_slots + payload->vcpi); return -ENOSPC; } @@ -5296,7 +5296,7 @@ drm_dp_mst_atomic_check_vcpi_alloc_limit(struct drm_dp_mst_topology_mgr *mgr, int drm_dp_mst_add_affected_dsc_crtcs(struct drm_atomic_state *state, struct drm_dp_mst_topology_mgr *mgr) { struct drm_dp_mst_topology_state *mst_state; - struct drm_dp_vcpi_allocation *pos; + struct drm_dp_mst_atomic_payload *pos; struct drm_connector *connector; struct drm_connector_state *conn_state; struct drm_crtc *crtc; @@ -5307,7 +5307,7 @@ int drm_dp_mst_add_affected_dsc_crtcs(struct drm_atomic_state *state, struct drm if (IS_ERR(mst_state)) return -EINVAL; - list_for_each_entry(pos, &mst_state->vcpis, next) { + list_for_each_entry(pos, &mst_state->payloads, next) { connector = pos->port->connector; @@ -5361,7 +5361,7 @@ int drm_dp_mst_atomic_enable_dsc(struct drm_atomic_state *state, bool enable) { struct drm_dp_mst_topology_state *mst_state; - struct drm_dp_vcpi_allocation *pos; + struct drm_dp_mst_atomic_payload *pos; bool found = false; int vcpi = 0; @@ -5370,7 +5370,7 @@ int drm_dp_mst_atomic_enable_dsc(struct drm_atomic_state *state, if (IS_ERR(mst_state)) return PTR_ERR(mst_state); - list_for_each_entry(pos, &mst_state->vcpis, next) { + list_for_each_entry(pos, &mst_state->payloads, next) { if (pos->port == port) { found = true; break; @@ -5557,7 +5557,7 @@ int drm_dp_mst_topology_mgr_init(struct drm_dp_mst_topology_mgr *mgr, mst_state->start_slot = 1; mst_state->mgr = mgr; - INIT_LIST_HEAD(&mst_state->vcpis); + INIT_LIST_HEAD(&mst_state->payloads); drm_atomic_private_obj_init(dev, &mgr->base, &mst_state->base, diff --git a/include/drm/display/drm_dp_mst_helper.h b/include/drm/display/drm_dp_mst_helper.h index 10adec068b7f..5671173f9f37 100644 --- a/include/drm/display/drm_dp_mst_helper.h +++ b/include/drm/display/drm_dp_mst_helper.h @@ -542,7 +542,7 @@ struct drm_dp_payload { #define to_dp_mst_topology_state(x) container_of(x, struct drm_dp_mst_topology_state, base) -struct drm_dp_vcpi_allocation { +struct drm_dp_mst_atomic_payload { struct drm_dp_mst_port *port; int vcpi; int pbn; @@ -552,7 +552,7 @@ struct drm_dp_vcpi_allocation { struct drm_dp_mst_topology_state { struct drm_private_state base; - struct list_head vcpis; + struct list_head payloads; struct drm_dp_mst_topology_mgr *mgr; u8 total_avail_slots; u8 start_slot; -- 2.35.3
Lyude Paul
2022-Jun-07 19:29 UTC
[Nouveau] [RESEND RFC 04/18] drm/display/dp_mst: Call them time slots, not VCPI slots
VCPI is only sort of the correct term here, originally the majority of this code simply referred to timeslots vaguely as "slots" - and since I started working on it and adding atomic functionality, the name "VCPI slots" has been used to represent time slots. Now that we actually have consistent access to the DisplayPort spec thanks to VESA, I now know this isn't actually the proper term - as the specification refers to these as time slots. Since we're trying to make this code as easy to figure out as possible, let's take this opportunity to correct this nomenclature and call them by their proper name - timeslots. Likewise, we rename various functions appropriately, along with replacing references in the kernel documentation and various debugging messages. It's important to note that this patch series leaves the legacy MST code untouched for the most part, which is fine since we'll be removing it soon anyhow. There should be no functional changes in this series. Signed-off-by: Lyude Paul <lyude at redhat.com> Cc: Wayne Lin <Wayne.Lin at amd.com> Cc: Ville Syrj?l? <ville.syrjala at linux.intel.com> Cc: Fangzhi Zuo <Jerry.Zuo at amd.com> Cc: Jani Nikula <jani.nikula at intel.com> Cc: Imre Deak <imre.deak at intel.com> Cc: Daniel Vetter <daniel.vetter at ffwll.ch> Cc: Sean Paul <sean at poorly.run> --- .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 2 +- .../display/amdgpu_dm/amdgpu_dm_mst_types.c | 28 ++--- drivers/gpu/drm/display/drm_dp_mst_topology.c | 106 +++++++++--------- drivers/gpu/drm/i915/display/intel_dp_mst.c | 5 +- drivers/gpu/drm/nouveau/dispnv50/disp.c | 4 +- include/drm/display/drm_dp_mst_helper.h | 6 +- 6 files changed, 75 insertions(+), 76 deletions(-) diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c index ad4571190a90..f84a4ad736d8 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c @@ -7393,7 +7393,7 @@ static int dm_encoder_helper_atomic_check(struct drm_encoder *encoder, clock = adjusted_mode->clock; dm_new_connector_state->pbn = drm_dp_calc_pbn_mode(clock, bpp, false); } - dm_new_connector_state->vcpi_slots = drm_dp_atomic_find_vcpi_slots(state, + dm_new_connector_state->vcpi_slots = drm_dp_atomic_find_time_slots(state, mst_mgr, mst_port, dm_new_connector_state->pbn, diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c index 9221b6690a4a..e40ff51e7be0 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c @@ -378,7 +378,7 @@ static int dm_dp_mst_atomic_check(struct drm_connector *connector, return 0; } - return drm_dp_atomic_release_vcpi_slots(state, + return drm_dp_atomic_release_time_slots(state, mst_mgr, mst_port); } @@ -689,7 +689,7 @@ static void increase_dsc_bpp(struct drm_atomic_state *state, if (initial_slack[next_index] > fair_pbn_alloc) { vars[next_index].pbn += fair_pbn_alloc; - if (drm_dp_atomic_find_vcpi_slots(state, + if (drm_dp_atomic_find_time_slots(state, params[next_index].port->mgr, params[next_index].port, vars[next_index].pbn, @@ -699,7 +699,7 @@ static void increase_dsc_bpp(struct drm_atomic_state *state, vars[next_index].bpp_x16 = bpp_x16_from_pbn(params[next_index], vars[next_index].pbn); } else { vars[next_index].pbn -= fair_pbn_alloc; - if (drm_dp_atomic_find_vcpi_slots(state, + if (drm_dp_atomic_find_time_slots(state, params[next_index].port->mgr, 
params[next_index].port, vars[next_index].pbn, @@ -708,7 +708,7 @@ static void increase_dsc_bpp(struct drm_atomic_state *state, } } else { vars[next_index].pbn += initial_slack[next_index]; - if (drm_dp_atomic_find_vcpi_slots(state, + if (drm_dp_atomic_find_time_slots(state, params[next_index].port->mgr, params[next_index].port, vars[next_index].pbn, @@ -718,7 +718,7 @@ static void increase_dsc_bpp(struct drm_atomic_state *state, vars[next_index].bpp_x16 = params[next_index].bw_range.max_target_bpp_x16; } else { vars[next_index].pbn -= initial_slack[next_index]; - if (drm_dp_atomic_find_vcpi_slots(state, + if (drm_dp_atomic_find_time_slots(state, params[next_index].port->mgr, params[next_index].port, vars[next_index].pbn, @@ -775,7 +775,7 @@ static void try_disable_dsc(struct drm_atomic_state *state, break; vars[next_index].pbn = kbps_to_peak_pbn(params[next_index].bw_range.stream_kbps); - if (drm_dp_atomic_find_vcpi_slots(state, + if (drm_dp_atomic_find_time_slots(state, params[next_index].port->mgr, params[next_index].port, vars[next_index].pbn, @@ -787,7 +787,7 @@ static void try_disable_dsc(struct drm_atomic_state *state, vars[next_index].bpp_x16 = 0; } else { vars[next_index].pbn = kbps_to_peak_pbn(params[next_index].bw_range.max_kbps); - if (drm_dp_atomic_find_vcpi_slots(state, + if (drm_dp_atomic_find_time_slots(state, params[next_index].port->mgr, params[next_index].port, vars[next_index].pbn, @@ -873,11 +873,11 @@ static bool compute_mst_dsc_configs_for_link(struct drm_atomic_state *state, vars[i + k].pbn = kbps_to_peak_pbn(params[i].bw_range.stream_kbps); vars[i + k].dsc_enabled = false; vars[i + k].bpp_x16 = 0; - if (drm_dp_atomic_find_vcpi_slots(state, - params[i].port->mgr, - params[i].port, - vars[i + k].pbn, - dm_mst_get_pbn_divider(dc_link)) < 0) + if (drm_dp_atomic_find_time_slots(state, + params[i].port->mgr, + params[i].port, + vars[i + k].pbn, + dm_mst_get_pbn_divider(dc_link)) < 0) return false; } if (!drm_dp_mst_atomic_check(state) && !debugfs_overwrite) { @@ -891,7 +891,7 @@ static bool compute_mst_dsc_configs_for_link(struct drm_atomic_state *state, vars[i + k].pbn = kbps_to_peak_pbn(params[i].bw_range.min_kbps); vars[i + k].dsc_enabled = true; vars[i + k].bpp_x16 = params[i].bw_range.min_target_bpp_x16; - if (drm_dp_atomic_find_vcpi_slots(state, + if (drm_dp_atomic_find_time_slots(state, params[i].port->mgr, params[i].port, vars[i + k].pbn, @@ -901,7 +901,7 @@ static bool compute_mst_dsc_configs_for_link(struct drm_atomic_state *state, vars[i + k].pbn = kbps_to_peak_pbn(params[i].bw_range.stream_kbps); vars[i + k].dsc_enabled = false; vars[i + k].bpp_x16 = 0; - if (drm_dp_atomic_find_vcpi_slots(state, + if (drm_dp_atomic_find_time_slots(state, params[i].port->mgr, params[i].port, vars[i + k].pbn, diff --git a/drivers/gpu/drm/display/drm_dp_mst_topology.c b/drivers/gpu/drm/display/drm_dp_mst_topology.c index 38eecb89e22d..702ff5d9ecc7 100644 --- a/drivers/gpu/drm/display/drm_dp_mst_topology.c +++ b/drivers/gpu/drm/display/drm_dp_mst_topology.c @@ -4304,11 +4304,11 @@ struct edid *drm_dp_mst_get_edid(struct drm_connector *connector, struct drm_dp_ EXPORT_SYMBOL(drm_dp_mst_get_edid); /** - * drm_dp_find_vcpi_slots() - Find VCPI slots for this PBN value + * drm_dp_find_vcpi_slots() - Find time slots for this PBN value * @mgr: manager to use * @pbn: payload bandwidth to convert into slots. * - * Calculate the number of VCPI slots that will be required for the given PBN + * Calculate the number of time slots that will be required for the given PBN * value. 
This function is deprecated, and should not be used in atomic * drivers. * @@ -4345,17 +4345,17 @@ static int drm_dp_init_vcpi(struct drm_dp_mst_topology_mgr *mgr, } /** - * drm_dp_atomic_find_vcpi_slots() - Find and add VCPI slots to the state + * drm_dp_atomic_find_time_slots() - Find and add time slots to the state * @state: global atomic state * @mgr: MST topology manager for the port - * @port: port to find vcpi slots for + * @port: port to find time slots for * @pbn: bandwidth required for the mode in PBN * @pbn_div: divider for DSC mode that takes FEC into account * - * Allocates VCPI slots to @port, replacing any previous VCPI allocations it + * Allocates time slots to @port, replacing any previous timeslot allocations it * may have had. Any atomic drivers which support MST must call this function * in their &drm_encoder_helper_funcs.atomic_check() callback to change the - * current VCPI allocation for the new state, but only when + * current timeslot allocation for the new state, but only when * &drm_crtc_state.mode_changed or &drm_crtc_state.connectors_changed is set * to ensure compatibility with userspace applications that still use the * legacy modesetting UAPI. @@ -4365,17 +4365,17 @@ static int drm_dp_init_vcpi(struct drm_dp_mst_topology_mgr *mgr, * * Additionally, it is OK to call this function multiple times on the same * @port as needed. It is not OK however, to call this function and - * drm_dp_atomic_release_vcpi_slots() in the same atomic check phase. + * drm_dp_atomic_release_time_slots() in the same atomic check phase. * * See also: - * drm_dp_atomic_release_vcpi_slots() + * drm_dp_atomic_release_time_slots() * drm_dp_mst_atomic_check() * * Returns: * Total slots in the atomic state assigned for this port, or a negative error * code if the port no longer exists */ -int drm_dp_atomic_find_vcpi_slots(struct drm_atomic_state *state, +int drm_dp_atomic_find_time_slots(struct drm_atomic_state *state, struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port, int pbn, int pbn_div) @@ -4392,17 +4392,17 @@ int drm_dp_atomic_find_vcpi_slots(struct drm_atomic_state *state, list_for_each_entry(pos, &topology_state->payloads, next) { if (pos->port == port) { payload = pos; - prev_slots = payload->vcpi; + prev_slots = payload->time_slots; prev_bw = payload->pbn; /* * This should never happen, unless the driver tries - * releasing and allocating the same VCPI allocation, + * releasing and allocating the same timeslot allocation, * which is an error */ if (WARN_ON(!prev_slots)) { drm_err(mgr->dev, - "cannot allocate and release VCPI on [MST PORT:%p] in the same state\n", + "cannot allocate and release time slots on [MST PORT:%p] in the same state\n", port); return -EINVAL; } @@ -4420,7 +4420,7 @@ int drm_dp_atomic_find_vcpi_slots(struct drm_atomic_state *state, req_slots = DIV_ROUND_UP(pbn, pbn_div); - drm_dbg_atomic(mgr->dev, "[CONNECTOR:%d:%s] [MST PORT:%p] VCPI %d -> %d\n", + drm_dbg_atomic(mgr->dev, "[CONNECTOR:%d:%s] [MST PORT:%p] TU %d -> %d\n", port->connector->base.id, port->connector->name, port, prev_slots, req_slots); drm_dbg_atomic(mgr->dev, "[CONNECTOR:%d:%s] [MST PORT:%p] PBN %d -> %d\n", @@ -4437,20 +4437,20 @@ int drm_dp_atomic_find_vcpi_slots(struct drm_atomic_state *state, payload->port = port; list_add(&payload->next, &topology_state->payloads); } - payload->vcpi = req_slots; + payload->time_slots = req_slots; payload->pbn = pbn; return req_slots; } -EXPORT_SYMBOL(drm_dp_atomic_find_vcpi_slots); +EXPORT_SYMBOL(drm_dp_atomic_find_time_slots); /** - * 
drm_dp_atomic_release_vcpi_slots() - Release allocated vcpi slots + * drm_dp_atomic_release_time_slots() - Release allocated time slots * @state: global atomic state * @mgr: MST topology manager for the port - * @port: The port to release the VCPI slots from + * @port: The port to release the time slots from * - * Releases any VCPI slots that have been allocated to a port in the atomic + * Releases any time slots that have been allocated to a port in the atomic * state. Any atomic drivers which support MST must call this function in * their &drm_connector_helper_funcs.atomic_check() callback when the * connector will no longer have VCPI allocated (e.g. because its CRTC was @@ -4459,18 +4459,18 @@ EXPORT_SYMBOL(drm_dp_atomic_find_vcpi_slots); * It is OK to call this even if @port has been removed from the system. * Additionally, it is OK to call this function multiple times on the same * @port as needed. It is not OK however, to call this function and - * drm_dp_atomic_find_vcpi_slots() on the same @port in a single atomic check + * drm_dp_atomic_find_time_slots() on the same @port in a single atomic check * phase. * * See also: - * drm_dp_atomic_find_vcpi_slots() + * drm_dp_atomic_find_time_slots() * drm_dp_mst_atomic_check() * * Returns: * 0 if all slots for this port were added back to * &drm_dp_mst_topology_state.avail_slots or negative error code */ -int drm_dp_atomic_release_vcpi_slots(struct drm_atomic_state *state, +int drm_dp_atomic_release_time_slots(struct drm_atomic_state *state, struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port) { @@ -4494,16 +4494,16 @@ int drm_dp_atomic_release_vcpi_slots(struct drm_atomic_state *state, return -EINVAL; } - drm_dbg_atomic(mgr->dev, "[MST PORT:%p] VCPI %d -> 0\n", port, pos->vcpi); - if (pos->vcpi) { + drm_dbg_atomic(mgr->dev, "[MST PORT:%p] TU %d -> 0\n", port, pos->time_slots); + if (pos->time_slots) { drm_dp_mst_put_port_malloc(port); - pos->vcpi = 0; + pos->time_slots = 0; pos->pbn = 0; } return 0; } -EXPORT_SYMBOL(drm_dp_atomic_release_vcpi_slots); +EXPORT_SYMBOL(drm_dp_atomic_release_time_slots); /** * drm_dp_mst_update_slots() - updates the slot info depending on the DP ecoding format @@ -4557,7 +4557,7 @@ bool drm_dp_mst_allocate_vcpi(struct drm_dp_mst_topology_mgr *mgr, ret = drm_dp_init_vcpi(mgr, &port->vcpi, pbn, slots); if (ret) { - drm_dbg_kms(mgr->dev, "failed to init vcpi slots=%d ret=%d\n", + drm_dbg_kms(mgr->dev, "failed to init time slots=%d ret=%d\n", DIV_ROUND_UP(pbn, mgr->pbn_div), ret); drm_dp_mst_topology_put_port(port); goto out; @@ -5083,8 +5083,8 @@ drm_dp_mst_duplicate_state(struct drm_private_obj *obj) INIT_LIST_HEAD(&state->payloads); list_for_each_entry(pos, &old_state->payloads, next) { - /* Prune leftover freed VCPI allocations */ - if (!pos->vcpi) + /* Prune leftover freed timeslot allocations */ + if (!pos->time_slots) continue; payload = kmemdup(pos, sizeof(*payload), GFP_KERNEL); @@ -5116,7 +5116,7 @@ static void drm_dp_mst_destroy_state(struct drm_private_obj *obj, list_for_each_entry_safe(pos, tmp, &mst_state->payloads, next) { /* We only keep references to ports with non-zero VCPIs */ - if (pos->vcpi) + if (pos->time_slots) drm_dp_mst_put_port_malloc(pos->port); kfree(pos); } @@ -5242,28 +5242,28 @@ drm_dp_mst_atomic_check_port_bw_limit(struct drm_dp_mst_port *port, } static inline int -drm_dp_mst_atomic_check_vcpi_alloc_limit(struct drm_dp_mst_topology_mgr *mgr, - struct drm_dp_mst_topology_state *mst_state) +drm_dp_mst_atomic_check_payload_alloc_limits(struct drm_dp_mst_topology_mgr *mgr, 
+ struct drm_dp_mst_topology_state *mst_state) { struct drm_dp_mst_atomic_payload *payload; int avail_slots = mst_state->total_avail_slots, payload_count = 0; list_for_each_entry(payload, &mst_state->payloads, next) { /* Releasing payloads is always OK-even if the port is gone */ - if (!payload->vcpi) { - drm_dbg_atomic(mgr->dev, "[MST PORT:%p] releases all VCPI slots\n", + if (!payload->time_slots) { + drm_dbg_atomic(mgr->dev, "[MST PORT:%p] releases all time slots\n", payload->port); continue; } - drm_dbg_atomic(mgr->dev, "[MST PORT:%p] requires %d vcpi slots\n", - payload->port, payload->vcpi); + drm_dbg_atomic(mgr->dev, "[MST PORT:%p] requires %d time slots\n", + payload->port, payload->time_slots); - avail_slots -= payload->vcpi; + avail_slots -= payload->time_slots; if (avail_slots < 0) { drm_dbg_atomic(mgr->dev, - "[MST PORT:%p] not enough VCPI slots in mst state %p (avail=%d)\n", - payload->port, mst_state, avail_slots + payload->vcpi); + "[MST PORT:%p] not enough time slots in mst state %p (avail=%d)\n", + payload->port, mst_state, avail_slots + payload->time_slots); return -ENOSPC; } @@ -5274,7 +5274,7 @@ drm_dp_mst_atomic_check_vcpi_alloc_limit(struct drm_dp_mst_topology_mgr *mgr, return -EINVAL; } } - drm_dbg_atomic(mgr->dev, "[MST MGR:%p] mst state %p VCPI avail=%d used=%d\n", + drm_dbg_atomic(mgr->dev, "[MST MGR:%p] mst state %p TU avail=%d used=%d\n", mgr, mst_state, avail_slots, mst_state->total_avail_slots - avail_slots); return 0; @@ -5363,7 +5363,7 @@ int drm_dp_mst_atomic_enable_dsc(struct drm_atomic_state *state, struct drm_dp_mst_topology_state *mst_state; struct drm_dp_mst_atomic_payload *pos; bool found = false; - int vcpi = 0; + int time_slots = 0; mst_state = drm_atomic_get_mst_topology_state(state, port->mgr); @@ -5379,30 +5379,30 @@ int drm_dp_mst_atomic_enable_dsc(struct drm_atomic_state *state, if (!found) { drm_dbg_atomic(state->dev, - "[MST PORT:%p] Couldn't find VCPI allocation in mst state %p\n", + "[MST PORT:%p] Couldn't find payload in mst state %p\n", port, mst_state); return -EINVAL; } if (pos->dsc_enabled == enable) { drm_dbg_atomic(state->dev, - "[MST PORT:%p] DSC flag is already set to %d, returning %d VCPI slots\n", - port, enable, pos->vcpi); - vcpi = pos->vcpi; + "[MST PORT:%p] DSC flag is already set to %d, returning %d time slots\n", + port, enable, pos->time_slots); + time_slots = pos->time_slots; } if (enable) { - vcpi = drm_dp_atomic_find_vcpi_slots(state, port->mgr, port, pbn, pbn_div); + time_slots = drm_dp_atomic_find_time_slots(state, port->mgr, port, pbn, pbn_div); drm_dbg_atomic(state->dev, - "[MST PORT:%p] Enabling DSC flag, reallocating %d VCPI slots on the port\n", - port, vcpi); - if (vcpi < 0) + "[MST PORT:%p] Enabling DSC flag, reallocating %d time slots on the port\n", + port, time_slots); + if (time_slots < 0) return -EINVAL; } pos->dsc_enabled = enable; - return vcpi; + return time_slots; } EXPORT_SYMBOL(drm_dp_mst_atomic_enable_dsc); /** @@ -5412,15 +5412,15 @@ EXPORT_SYMBOL(drm_dp_mst_atomic_enable_dsc); * * Checks the given topology state for an atomic update to ensure that it's * valid. This includes checking whether there's enough bandwidth to support - * the new VCPI allocations in the atomic update. + * the new timeslot allocations in the atomic update. * * Any atomic drivers supporting DP MST must make sure to call this after * checking the rest of their state in their * &drm_mode_config_funcs.atomic_check() callback. 
* * See also: - * drm_dp_atomic_find_vcpi_slots() - * drm_dp_atomic_release_vcpi_slots() + * drm_dp_atomic_find_time_slots() + * drm_dp_atomic_release_time_slots() * * Returns: * @@ -5436,7 +5436,7 @@ int drm_dp_mst_atomic_check(struct drm_atomic_state *state) if (!mgr->mst_state) continue; - ret = drm_dp_mst_atomic_check_vcpi_alloc_limit(mgr, mst_state); + ret = drm_dp_mst_atomic_check_payload_alloc_limits(mgr, mst_state); if (ret) break; diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c index 061b277e5ce7..0c922667398a 100644 --- a/drivers/gpu/drm/i915/display/intel_dp_mst.c +++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c @@ -70,7 +70,7 @@ static int intel_dp_mst_compute_link_config(struct intel_encoder *encoder, crtc_state->pipe_bpp, false); - slots = drm_dp_atomic_find_vcpi_slots(state, &intel_dp->mst_mgr, + slots = drm_dp_atomic_find_time_slots(state, &intel_dp->mst_mgr, connector->port, crtc_state->pbn, drm_dp_get_vc_payload_bw(&intel_dp->mst_mgr, @@ -344,8 +344,7 @@ intel_dp_mst_atomic_check(struct drm_connector *connector, } mgr = &enc_to_mst(to_intel_encoder(old_conn_state->best_encoder))->primary->dp.mst_mgr; - ret = drm_dp_atomic_release_vcpi_slots(&state->base, mgr, - intel_connector->port); + ret = drm_dp_atomic_release_time_slots(&state->base, mgr, intel_connector->port); return ret; } diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouveau/dispnv50/disp.c index 4347f0b61797..631dba5a2418 100644 --- a/drivers/gpu/drm/nouveau/dispnv50/disp.c +++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c @@ -1070,7 +1070,7 @@ nv50_msto_atomic_check(struct drm_encoder *encoder, false); } - slots = drm_dp_atomic_find_vcpi_slots(state, &mstm->mgr, mstc->port, + slots = drm_dp_atomic_find_time_slots(state, &mstm->mgr, mstc->port, asyh->dp.pbn, 0); if (slots < 0) return slots; @@ -1282,7 +1282,7 @@ nv50_mstc_atomic_check(struct drm_connector *connector, return 0; } - return drm_dp_atomic_release_vcpi_slots(state, mgr, mstc->port); + return drm_dp_atomic_release_time_slots(state, mgr, mstc->port); } static int diff --git a/include/drm/display/drm_dp_mst_helper.h b/include/drm/display/drm_dp_mst_helper.h index 5671173f9f37..8ab4f14f2344 100644 --- a/include/drm/display/drm_dp_mst_helper.h +++ b/include/drm/display/drm_dp_mst_helper.h @@ -544,7 +544,7 @@ struct drm_dp_payload { struct drm_dp_mst_atomic_payload { struct drm_dp_mst_port *port; - int vcpi; + int time_slots; int pbn; bool dsc_enabled; struct list_head next; @@ -846,7 +846,7 @@ void drm_dp_mst_connector_early_unregister(struct drm_connector *connector, struct drm_dp_mst_topology_state *drm_atomic_get_mst_topology_state(struct drm_atomic_state *state, struct drm_dp_mst_topology_mgr *mgr); int __must_check -drm_dp_atomic_find_vcpi_slots(struct drm_atomic_state *state, +drm_dp_atomic_find_time_slots(struct drm_atomic_state *state, struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port, int pbn, int pbn_div); @@ -858,7 +858,7 @@ int __must_check drm_dp_mst_add_affected_dsc_crtcs(struct drm_atomic_state *state, struct drm_dp_mst_topology_mgr *mgr); int __must_check -drm_dp_atomic_release_vcpi_slots(struct drm_atomic_state *state, +drm_dp_atomic_release_time_slots(struct drm_atomic_state *state, struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port); int drm_dp_send_power_updown_phy(struct drm_dp_mst_topology_mgr *mgr, -- 2.35.3
Lyude Paul
2022-Jun-07 19:29 UTC
[Nouveau] [RESEND RFC 05/18] drm/display/dp_mst: Fix confusing docs for drm_dp_atomic_release_time_slots()
For some reason we mention returning 0 if "slots have been added back to drm_dp_mst_topology_state->avail_slots". This is totally misleading: avail_slots is simply for figuring out the total number of slots available on the topology, and has no relation to the current payload allocations. So, let's get rid of that comment.

Signed-off-by: Lyude Paul <lyude at redhat.com>
Cc: Wayne Lin <Wayne.Lin at amd.com>
Cc: Ville Syrjälä <ville.syrjala at linux.intel.com>
Cc: Fangzhi Zuo <Jerry.Zuo at amd.com>
Cc: Jani Nikula <jani.nikula at intel.com>
Cc: Imre Deak <imre.deak at intel.com>
Cc: Daniel Vetter <daniel.vetter at ffwll.ch>
Cc: Sean Paul <sean at poorly.run>
---
 drivers/gpu/drm/display/drm_dp_mst_topology.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/display/drm_dp_mst_topology.c b/drivers/gpu/drm/display/drm_dp_mst_topology.c
index 702ff5d9ecc7..ec52f91b1f0e 100644
--- a/drivers/gpu/drm/display/drm_dp_mst_topology.c
+++ b/drivers/gpu/drm/display/drm_dp_mst_topology.c
@@ -4467,8 +4467,7 @@ EXPORT_SYMBOL(drm_dp_atomic_find_time_slots);
  * drm_dp_mst_atomic_check()
  *
  * Returns:
- * 0 if all slots for this port were added back to
- * &drm_dp_mst_topology_state.avail_slots or negative error code
+ * 0 on success, negative error code otherwise
  */
 int drm_dp_atomic_release_time_slots(struct drm_atomic_state *state,
 				     struct drm_dp_mst_topology_mgr *mgr,
--
2.35.3
Lyude Paul
2022-Jun-07 19:29 UTC
[Nouveau] [RESEND RFC 06/18] drm/display/dp_mst: Add some missing kdocs for atomic MST structs
Since we're about to start adding some stuff here, we may as well fill in any missing documentation that we forgot to write. Signed-off-by: Lyude Paul <lyude at redhat.com> Cc: Wayne Lin <Wayne.Lin at amd.com> Cc: Ville Syrj?l? <ville.syrjala at linux.intel.com> Cc: Fangzhi Zuo <Jerry.Zuo at amd.com> Cc: Jani Nikula <jani.nikula at intel.com> Cc: Imre Deak <imre.deak at intel.com> Cc: Daniel Vetter <daniel.vetter at ffwll.ch> Cc: Sean Paul <sean at poorly.run> --- include/drm/display/drm_dp_mst_helper.h | 24 ++++++++++++++++++++++++ 1 file changed, 24 insertions(+) diff --git a/include/drm/display/drm_dp_mst_helper.h b/include/drm/display/drm_dp_mst_helper.h index 8ab4f14f2344..eb0ea578b227 100644 --- a/include/drm/display/drm_dp_mst_helper.h +++ b/include/drm/display/drm_dp_mst_helper.h @@ -542,19 +542,43 @@ struct drm_dp_payload { #define to_dp_mst_topology_state(x) container_of(x, struct drm_dp_mst_topology_state, base) +/** + * struct drm_dp_mst_atomic_payload - Atomic state struct for an MST payload + * + * The primary atomic state structure for a given MST payload. Stores information like current + * bandwidth allocation, intended action for this payload, etc. + */ struct drm_dp_mst_atomic_payload { + /** @port: The MST port assigned to this payload */ struct drm_dp_mst_port *port; + /** @time_slots: The number of timeslots allocated to this payload */ int time_slots; + /** @pbn: The payload bandwidth for this payload */ int pbn; + /** @dsc_enabled: Whether or not this payload has DSC enabled */ bool dsc_enabled; + + /** @next: The list node for this payload */ struct list_head next; }; +/** + * struct drm_dp_mst_topology_state - DisplayPort MST topology atomic state + * + * This struct represents the atomic state of the toplevel DisplayPort MST manager + */ struct drm_dp_mst_topology_state { + /** @base: Base private state for atomic */ struct drm_private_state base; + + /** @payloads: The list of payloads being created/destroyed in this state */ struct list_head payloads; + /** @mgr: The topology manager */ struct drm_dp_mst_topology_mgr *mgr; + + /** @total_avail_slots: The total number of slots this topology can handle (63 or 64) */ u8 total_avail_slots; + /** @start_slot: The first usable time slot in this topology (1 or 0) */ u8 start_slot; }; -- 2.35.3
Lyude Paul
2022-Jun-07 19:29 UTC
[Nouveau] [RESEND RFC 07/18] drm/display/dp_mst: Add helper for finding payloads in atomic MST state
We already open-code this quite often, and will be iterating through payloads even more once we've moved all of the payload tracking into the atomic state. So, let's add a helper for doing this. Signed-off-by: Lyude Paul <lyude at redhat.com> Cc: Wayne Lin <Wayne.Lin at amd.com> Cc: Ville Syrj?l? <ville.syrjala at linux.intel.com> Cc: Fangzhi Zuo <Jerry.Zuo at amd.com> Cc: Jani Nikula <jani.nikula at intel.com> Cc: Imre Deak <imre.deak at intel.com> Cc: Daniel Vetter <daniel.vetter at ffwll.ch> Cc: Sean Paul <sean at poorly.run> --- drivers/gpu/drm/display/drm_dp_mst_topology.c | 109 ++++++++---------- 1 file changed, 45 insertions(+), 64 deletions(-) diff --git a/drivers/gpu/drm/display/drm_dp_mst_topology.c b/drivers/gpu/drm/display/drm_dp_mst_topology.c index ec52f91b1f0e..0bc2c7a90c37 100644 --- a/drivers/gpu/drm/display/drm_dp_mst_topology.c +++ b/drivers/gpu/drm/display/drm_dp_mst_topology.c @@ -1737,6 +1737,19 @@ drm_dp_mst_dump_port_topology_history(struct drm_dp_mst_port *port) {} #define save_port_topology_ref(port, type) #endif +static struct drm_dp_mst_atomic_payload * +drm_atomic_get_mst_payload_state(struct drm_dp_mst_topology_state *state, + struct drm_dp_mst_port *port) +{ + struct drm_dp_mst_atomic_payload *payload; + + list_for_each_entry(payload, &state->payloads, next) + if (payload->port == port) + return payload; + + return NULL; +} + static void drm_dp_destroy_mst_branch_device(struct kref *kref) { struct drm_dp_mst_branch *mstb @@ -4381,39 +4394,31 @@ int drm_dp_atomic_find_time_slots(struct drm_atomic_state *state, int pbn_div) { struct drm_dp_mst_topology_state *topology_state; - struct drm_dp_mst_atomic_payload *pos, *payload = NULL; - int prev_slots, prev_bw, req_slots; + struct drm_dp_mst_atomic_payload *payload = NULL; + int prev_slots = 0, prev_bw = 0, req_slots; topology_state = drm_atomic_get_mst_topology_state(state, mgr); if (IS_ERR(topology_state)) return PTR_ERR(topology_state); /* Find the current allocation for this port, if any */ - list_for_each_entry(pos, &topology_state->payloads, next) { - if (pos->port == port) { - payload = pos; - prev_slots = payload->time_slots; - prev_bw = payload->pbn; - - /* - * This should never happen, unless the driver tries - * releasing and allocating the same timeslot allocation, - * which is an error - */ - if (WARN_ON(!prev_slots)) { - drm_err(mgr->dev, - "cannot allocate and release time slots on [MST PORT:%p] in the same state\n", - port); - return -EINVAL; - } + payload = drm_atomic_get_mst_payload_state(topology_state, port); + if (payload) { + prev_slots = payload->time_slots; + prev_bw = payload->pbn; - break; + /* + * This should never happen, unless the driver tries + * releasing and allocating the same timeslot allocation, + * which is an error + */ + if (WARN_ON(!prev_slots)) { + drm_err(mgr->dev, + "cannot allocate and release time slots on [MST PORT:%p] in the same state\n", + port); + return -EINVAL; } } - if (!payload) { - prev_slots = 0; - prev_bw = 0; - } if (pbn_div <= 0) pbn_div = mgr->pbn_div; @@ -4474,30 +4479,24 @@ int drm_dp_atomic_release_time_slots(struct drm_atomic_state *state, struct drm_dp_mst_port *port) { struct drm_dp_mst_topology_state *topology_state; - struct drm_dp_mst_atomic_payload *pos; - bool found = false; + struct drm_dp_mst_atomic_payload *payload; topology_state = drm_atomic_get_mst_topology_state(state, mgr); if (IS_ERR(topology_state)) return PTR_ERR(topology_state); - list_for_each_entry(pos, &topology_state->payloads, next) { - if (pos->port == port) { - found = true; 
- break; - } - } - if (WARN_ON(!found)) { + payload = drm_atomic_get_mst_payload_state(topology_state, port); + if (WARN_ON(!payload)) { drm_err(mgr->dev, "No payload for [MST PORT:%p] found in mst state %p\n", port, &topology_state->base); return -EINVAL; } - drm_dbg_atomic(mgr->dev, "[MST PORT:%p] TU %d -> 0\n", port, pos->time_slots); - if (pos->time_slots) { + drm_dbg_atomic(mgr->dev, "[MST PORT:%p] TU %d -> 0\n", port, payload->time_slots); + if (payload->time_slots) { drm_dp_mst_put_port_malloc(port); - pos->time_slots = 0; - pos->pbn = 0; + payload->time_slots = 0; + payload->pbn = 0; } return 0; @@ -5194,18 +5193,8 @@ drm_dp_mst_atomic_check_port_bw_limit(struct drm_dp_mst_port *port, return 0; if (drm_dp_mst_is_end_device(port->pdt, port->mcs)) { - bool found = false; - - list_for_each_entry(payload, &state->payloads, next) { - if (payload->port != port) - continue; - if (!payload->pbn) - return 0; - - found = true; - break; - } - if (!found) + payload = drm_atomic_get_mst_payload_state(state, port); + if (!payload) return 0; /* @@ -5360,34 +5349,26 @@ int drm_dp_mst_atomic_enable_dsc(struct drm_atomic_state *state, bool enable) { struct drm_dp_mst_topology_state *mst_state; - struct drm_dp_mst_atomic_payload *pos; - bool found = false; + struct drm_dp_mst_atomic_payload *payload; int time_slots = 0; mst_state = drm_atomic_get_mst_topology_state(state, port->mgr); - if (IS_ERR(mst_state)) return PTR_ERR(mst_state); - list_for_each_entry(pos, &mst_state->payloads, next) { - if (pos->port == port) { - found = true; - break; - } - } - - if (!found) { + payload = drm_atomic_get_mst_payload_state(mst_state, port); + if (!payload) { drm_dbg_atomic(state->dev, "[MST PORT:%p] Couldn't find payload in mst state %p\n", port, mst_state); return -EINVAL; } - if (pos->dsc_enabled == enable) { + if (payload->dsc_enabled == enable) { drm_dbg_atomic(state->dev, "[MST PORT:%p] DSC flag is already set to %d, returning %d time slots\n", - port, enable, pos->time_slots); - time_slots = pos->time_slots; + port, enable, payload->time_slots); + time_slots = payload->time_slots; } if (enable) { @@ -5399,7 +5380,7 @@ int drm_dp_mst_atomic_enable_dsc(struct drm_atomic_state *state, return -EINVAL; } - pos->dsc_enabled = enable; + payload->dsc_enabled = enable; return time_slots; } -- 2.35.3
Lyude Paul
2022-Jun-07 19:29 UTC
[Nouveau] [RESEND RFC 08/18] drm/display/dp_mst: Add nonblocking helpers for DP MST
As Daniel Vetter pointed out, if we only use the atomic modesetting locks with MST it's technically possible for a driver with non-blocking modesets to race when it comes to MST displays - as we make the mistake of not doing our own CRTC commit tracking in the topology_state object. This could potentially cause problems if something like this happens: * User starts non-blocking commit to disable CRTC-1 on MST topology 1 * User starts non-blocking commit to enable CRTC-2 on MST topology 1 There's no guarantee here that the commit for disabling CRTC-2 will only occur after CRTC-1 has finished, since neither commit shares a CRTC - only the private modesetting object for MST. Keep in mind this likely isn't a problem for blocking modesets, only non-blocking. So, begin fixing this by keeping track of which CRTCs on a topology have changed by keeping track of which CRTCs we release or allocate timeslots on. As well, add some helpers for: * Setting up the drm_crtc_commit structs in the ->commit_setup hook * Waiting for any CRTC dependencies from the previous topology state Signed-off-by: Lyude Paul <lyude at redhat.com> Cc: Wayne Lin <Wayne.Lin at amd.com> Cc: Ville Syrj?l? <ville.syrjala at linux.intel.com> Cc: Fangzhi Zuo <Jerry.Zuo at amd.com> Cc: Jani Nikula <jani.nikula at intel.com> Cc: Imre Deak <imre.deak at intel.com> Cc: Daniel Vetter <daniel.vetter at ffwll.ch> Cc: Sean Paul <sean at poorly.run> --- .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 9 +- drivers/gpu/drm/display/drm_dp_mst_topology.c | 93 +++++++++++++++++++ drivers/gpu/drm/i915/display/intel_display.c | 11 +++ drivers/gpu/drm/nouveau/dispnv50/disp.c | 12 +++ include/drm/display/drm_dp_mst_helper.h | 15 +++ 5 files changed, 139 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c index f84a4ad736d8..d9c7393ef151 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c @@ -211,6 +211,7 @@ static int amdgpu_dm_encoder_init(struct drm_device *dev, static int amdgpu_dm_connector_get_modes(struct drm_connector *connector); static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state); +static int amdgpu_dm_atomic_setup_commit(struct drm_atomic_state *state); static int amdgpu_dm_atomic_check(struct drm_device *dev, struct drm_atomic_state *state); @@ -2808,7 +2809,8 @@ static const struct drm_mode_config_funcs amdgpu_dm_mode_funcs = { }; static struct drm_mode_config_helper_funcs amdgpu_dm_mode_config_helperfuncs = { - .atomic_commit_tail = amdgpu_dm_atomic_commit_tail + .atomic_commit_tail = amdgpu_dm_atomic_commit_tail, + .atomic_commit_setup = amdgpu_dm_atomic_setup_commit, }; static void update_connector_ext_caps(struct amdgpu_dm_connector *aconnector) @@ -9558,6 +9560,7 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state) DRM_ERROR("Waiting for fences timed out!"); drm_atomic_helper_update_legacy_modeset_state(dev, state); + drm_dp_mst_atomic_wait_for_dependencies(state); dm_state = dm_atomic_get_new_state(state); if (dm_state && dm_state->context) { @@ -9958,6 +9961,10 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state) dc_release_state(dc_state_temp); } +static int amdgpu_dm_atomic_setup_commit(struct drm_atomic_state *state) +{ + return drm_dp_mst_atomic_setup_commit(state); +} static int dm_force_atomic_commit(struct drm_connector *connector) { diff --git a/drivers/gpu/drm/display/drm_dp_mst_topology.c 
b/drivers/gpu/drm/display/drm_dp_mst_topology.c index 0bc2c7a90c37..a0ed29f83556 100644 --- a/drivers/gpu/drm/display/drm_dp_mst_topology.c +++ b/drivers/gpu/drm/display/drm_dp_mst_topology.c @@ -4395,12 +4395,16 @@ int drm_dp_atomic_find_time_slots(struct drm_atomic_state *state, { struct drm_dp_mst_topology_state *topology_state; struct drm_dp_mst_atomic_payload *payload = NULL; + struct drm_connector_state *conn_state; int prev_slots = 0, prev_bw = 0, req_slots; topology_state = drm_atomic_get_mst_topology_state(state, mgr); if (IS_ERR(topology_state)) return PTR_ERR(topology_state); + conn_state = drm_atomic_get_new_connector_state(state, port->connector); + topology_state->pending_crtc_mask |= drm_crtc_mask(conn_state->crtc); + /* Find the current allocation for this port, if any */ payload = drm_atomic_get_mst_payload_state(topology_state, port); if (payload) { @@ -4480,11 +4484,15 @@ int drm_dp_atomic_release_time_slots(struct drm_atomic_state *state, { struct drm_dp_mst_topology_state *topology_state; struct drm_dp_mst_atomic_payload *payload; + struct drm_connector_state *conn_state; topology_state = drm_atomic_get_mst_topology_state(state, mgr); if (IS_ERR(topology_state)) return PTR_ERR(topology_state); + conn_state = drm_atomic_get_old_connector_state(state, port->connector); + topology_state->pending_crtc_mask |= drm_crtc_mask(conn_state->crtc); + payload = drm_atomic_get_mst_payload_state(topology_state, port); if (WARN_ON(!payload)) { drm_err(mgr->dev, "No payload for [MST PORT:%p] found in mst state %p\n", @@ -4503,6 +4511,83 @@ int drm_dp_atomic_release_time_slots(struct drm_atomic_state *state, } EXPORT_SYMBOL(drm_dp_atomic_release_time_slots); +/** + * drm_dp_mst_atomic_setup_commit() - setup_commit hook for MST helpers + * @state: global atomic state + * + * This function saves all of the &drm_crtc_commit structs in an atomic state that touch any CRTCs + * currently assigned to an MST topology. Drivers must call this hook from their + * &drm_mode_config_helper_funcs.atomic_commit_setup hook. + * + * Returns: + * 0 if all CRTC commits were retrieved successfully, negative error code otherwise + */ +int drm_dp_mst_atomic_setup_commit(struct drm_atomic_state *state) +{ + struct drm_dp_mst_topology_mgr *mgr; + struct drm_dp_mst_topology_state *mst_state; + struct drm_crtc *crtc; + struct drm_crtc_state *crtc_state; + int i, j, commit_idx, num_commit_deps; + + for_each_new_mst_mgr_in_state(state, mgr, mst_state, i) { + if (!mst_state->pending_crtc_mask) + continue; + + num_commit_deps = hweight32(mst_state->pending_crtc_mask); + mst_state->commit_deps = kmalloc_array(num_commit_deps, + sizeof(*mst_state->commit_deps), GFP_KERNEL); + if (!mst_state->commit_deps) + return -ENOMEM; + mst_state->num_commit_deps = num_commit_deps; + + commit_idx = 0; + for_each_new_crtc_in_state(state, crtc, crtc_state, j) { + if (mst_state->pending_crtc_mask & drm_crtc_mask(crtc)) { + mst_state->commit_deps[commit_idx++] + drm_crtc_commit_get(crtc_state->commit); + } + } + } + + return 0; +} +EXPORT_SYMBOL(drm_dp_mst_atomic_setup_commit); + +/** + * drm_dp_mst_atomic_wait_for_dependencies() - Wait for all pending commits on MST topologies + * @state: global atomic state + * + * Goes through any MST topologies in this atomic state, and waits for any pending commits which + * touched CRTCs that were/are on an MST topology to be programmed to hardware and flipped to before + * returning. 
This is to prevent multiple non-blocking commits affecting an MST topology from racing + * with eachother by forcing them to be executed sequentially in situations where the only resources + * the modeset objects in these commits share are an MST topology. + * + * This function also prepares the new MST state for commit by performing some state preparation + * which can't be done until this point, such as reading back the final VC start slots (which are + * determined at commit-time) from the previous state. + * + * All MST drivers must call this function after calling drm_atomic_helper_wait_for_dependencies(), + * or whatever their equivalent of that is. + */ +void drm_dp_mst_atomic_wait_for_dependencies(struct drm_atomic_state *state) +{ + struct drm_dp_mst_topology_state *old_mst_state; + struct drm_dp_mst_topology_mgr *mgr; + int i, j, ret; + + for_each_old_mst_mgr_in_state(state, mgr, old_mst_state, i) { + for (j = 0; j < old_mst_state->num_commit_deps; j++) { + ret = drm_crtc_commit_wait(old_mst_state->commit_deps[j]); + if (ret < 0) + drm_err(state->dev, "Failed to wait for %s: %d\n", + old_mst_state->commit_deps[j]->crtc->name, ret); + } + } +} +EXPORT_SYMBOL(drm_dp_mst_atomic_wait_for_dependencies); + /** * drm_dp_mst_update_slots() - updates the slot info depending on the DP ecoding format * @mst_state: mst_state to update @@ -5079,6 +5164,9 @@ drm_dp_mst_duplicate_state(struct drm_private_obj *obj) __drm_atomic_helper_private_obj_duplicate_state(obj, &state->base); INIT_LIST_HEAD(&state->payloads); + state->commit_deps = NULL; + state->num_commit_deps = 0; + state->pending_crtc_mask = 0; list_for_each_entry(pos, &old_state->payloads, next) { /* Prune leftover freed timeslot allocations */ @@ -5111,6 +5199,7 @@ static void drm_dp_mst_destroy_state(struct drm_private_obj *obj, struct drm_dp_mst_topology_state *mst_state to_dp_mst_topology_state(state); struct drm_dp_mst_atomic_payload *pos, *tmp; + int i; list_for_each_entry_safe(pos, tmp, &mst_state->payloads, next) { /* We only keep references to ports with non-zero VCPIs */ @@ -5119,6 +5208,10 @@ static void drm_dp_mst_destroy_state(struct drm_private_obj *obj, kfree(pos); } + for (i = 0; i < mst_state->num_commit_deps; i++) + drm_crtc_commit_put(mst_state->commit_deps[i]); + + kfree(mst_state->commit_deps); kfree(mst_state); } diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c index 186b37925d23..5475f66c0ed8 100644 --- a/drivers/gpu/drm/i915/display/intel_display.c +++ b/drivers/gpu/drm/i915/display/intel_display.c @@ -8446,6 +8446,7 @@ static void intel_atomic_commit_tail(struct intel_atomic_state *state) intel_atomic_commit_fence_wait(state); drm_atomic_helper_wait_for_dependencies(&state->base); + drm_dp_mst_atomic_wait_for_dependencies(&state->base); if (state->modeset) wakeref = intel_display_power_get(dev_priv, POWER_DOMAIN_MODESET); @@ -9514,6 +9515,15 @@ static int intel_initial_commit(struct drm_device *dev) return ret; } +static int intel_atomic_commit_setup(struct drm_atomic_state *state) +{ + return drm_dp_mst_atomic_setup_commit(state); +} + +static const struct drm_mode_config_helper_funcs intel_mode_config_funcs = { + .atomic_commit_setup = intel_atomic_commit_setup, +}; + static void intel_mode_config_init(struct drm_i915_private *i915) { struct drm_mode_config *mode_config = &i915->drm.mode_config; @@ -9528,6 +9538,7 @@ static void intel_mode_config_init(struct drm_i915_private *i915) mode_config->prefer_shadow = 1; mode_config->funcs = &intel_mode_funcs; 
+ mode_config->helper_private = &intel_mode_config_funcs; mode_config->async_page_flip = HAS_ASYNC_FLIPS(i915); diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouveau/dispnv50/disp.c index 631dba5a2418..768312607fdb 100644 --- a/drivers/gpu/drm/nouveau/dispnv50/disp.c +++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c @@ -2134,6 +2134,7 @@ nv50_disp_atomic_commit_tail(struct drm_atomic_state *state) nv50_crc_atomic_stop_reporting(state); drm_atomic_helper_wait_for_fences(dev, state, false); drm_atomic_helper_wait_for_dependencies(state); + drm_dp_mst_atomic_wait_for_dependencies(state); drm_atomic_helper_update_legacy_modeset_state(dev, state); drm_atomic_helper_calc_timestamping_constants(state); @@ -2614,6 +2615,16 @@ nv50_disp_func = { .atomic_state_free = nv50_disp_atomic_state_free, }; +static int nv50_disp_atomic_setup_commit(struct drm_atomic_state *state) +{ + return drm_dp_mst_atomic_setup_commit(state); +} + +static const struct drm_mode_config_helper_funcs +nv50_disp_helper_func = { + .atomic_commit_setup = nv50_disp_atomic_setup_commit, +}; + /****************************************************************************** * Init *****************************************************************************/ @@ -2713,6 +2724,7 @@ nv50_display_create(struct drm_device *dev) nouveau_display(dev)->fini = nv50_display_fini; disp->disp = &nouveau_display(dev)->disp; dev->mode_config.funcs = &nv50_disp_func; + dev->mode_config.helper_private = &nv50_disp_helper_func; dev->mode_config.quirk_addfb_prefer_xbgr_30bpp = true; dev->mode_config.normalize_zpos = true; diff --git a/include/drm/display/drm_dp_mst_helper.h b/include/drm/display/drm_dp_mst_helper.h index eb0ea578b227..dd74afcee888 100644 --- a/include/drm/display/drm_dp_mst_helper.h +++ b/include/drm/display/drm_dp_mst_helper.h @@ -576,6 +576,19 @@ struct drm_dp_mst_topology_state { /** @mgr: The topology manager */ struct drm_dp_mst_topology_mgr *mgr; + /** + * @pending_crtc_mask: A bitmask of all CRTCs this topology state touches, drivers may + * modify this to add additional dependencies if needed. + */ + u32 pending_crtc_mask; + /** + * @commit_deps: A list of all CRTC commits affecting this topology, this field isn't + * populated until drm_dp_mst_atomic_wait_for_dependencies() is called. + */ + struct drm_crtc_commit **commit_deps; + /** @num_commit_deps: The number of CRTC commits in @commit_deps */ + size_t num_commit_deps; + /** @total_avail_slots: The total number of slots this topology can handle (63 or 64) */ u8 total_avail_slots; /** @start_slot: The first usable time slot in this topology (1 or 0) */ @@ -885,6 +898,8 @@ int __must_check drm_dp_atomic_release_time_slots(struct drm_atomic_state *state, struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port); +void drm_dp_mst_atomic_wait_for_dependencies(struct drm_atomic_state *state); +int __must_check drm_dp_mst_atomic_setup_commit(struct drm_atomic_state *state); int drm_dp_send_power_updown_phy(struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port, bool power_up); int drm_dp_send_query_stream_enc_status(struct drm_dp_mst_topology_mgr *mgr, -- 2.35.3
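For drivers beyond the three touched above, the hookup amounts to the same two calls. Below is a minimal sketch of that wiring for a hypothetical atomic driver (the "foo_" names are made up for illustration); the two exported helpers are the ones added by this patch, and the surrounding atomic helpers are the usual commit-tail boilerplate.

/* Needs <drm/drm_atomic_helper.h> and <drm/display/drm_dp_mst_helper.h> */
static void foo_atomic_commit_tail(struct drm_atomic_state *state)
{
	drm_atomic_helper_wait_for_dependencies(state);
	/* Wait on commits from other nonblocking modesets that touched CRTCs on
	 * the same MST topologies, so payload updates can't race. */
	drm_dp_mst_atomic_wait_for_dependencies(state);

	/* ...program the hardware, commit planes, etc... */

	drm_atomic_helper_commit_hw_done(state);
	drm_atomic_helper_cleanup_planes(state->dev, state);
}

static int foo_atomic_commit_setup(struct drm_atomic_state *state)
{
	/* Grabs a reference to the drm_crtc_commit of every CRTC recorded in a
	 * topology state's pending_crtc_mask. */
	return drm_dp_mst_atomic_setup_commit(state);
}

static const struct drm_mode_config_helper_funcs foo_mode_config_helpers = {
	.atomic_commit_setup = foo_atomic_commit_setup,
	.atomic_commit_tail = foo_atomic_commit_tail,
};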
Lyude Paul
2022-Jun-07 19:29 UTC
[Nouveau] [RESEND RFC 09/18] drm/display/dp_mst: Don't open code modeset checks for releasing time slots
I'm not sure why, but at the time I originally wrote the find/release time slot helpers I thought we should avoid keeping modeset tracking out of the MST helpers. In retrospect though there's no actual good reason to do this, and the logic has ended up being identical across all the drivers using the helpers. Also, it needs to be fixed anyway so we don't break things when going atomic-only with MST. So, let's just move this code into drm_dp_atomic_release_time_slots() and stop open coding it. Signed-off-by: Lyude Paul <lyude at redhat.com> Cc: Wayne Lin <Wayne.Lin at amd.com> Cc: Ville Syrj?l? <ville.syrjala at linux.intel.com> Cc: Fangzhi Zuo <Jerry.Zuo at amd.com> Cc: Jani Nikula <jani.nikula at intel.com> Cc: Imre Deak <imre.deak at intel.com> Cc: Daniel Vetter <daniel.vetter at ffwll.ch> Cc: Sean Paul <sean at poorly.run> --- .../display/amdgpu_dm/amdgpu_dm_mst_types.c | 29 +++---------------- drivers/gpu/drm/display/drm_dp_mst_topology.c | 21 ++++++++++++-- drivers/gpu/drm/i915/display/intel_dp_mst.c | 24 +-------------- drivers/gpu/drm/nouveau/dispnv50/disp.c | 21 -------------- 4 files changed, 23 insertions(+), 72 deletions(-) diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c index e40ff51e7be0..b447c453b58d 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c @@ -353,34 +353,13 @@ dm_dp_mst_detect(struct drm_connector *connector, } static int dm_dp_mst_atomic_check(struct drm_connector *connector, - struct drm_atomic_state *state) + struct drm_atomic_state *state) { - struct drm_connector_state *new_conn_state - drm_atomic_get_new_connector_state(state, connector); - struct drm_connector_state *old_conn_state - drm_atomic_get_old_connector_state(state, connector); struct amdgpu_dm_connector *aconnector = to_amdgpu_dm_connector(connector); - struct drm_crtc_state *new_crtc_state; - struct drm_dp_mst_topology_mgr *mst_mgr; - struct drm_dp_mst_port *mst_port; + struct drm_dp_mst_topology_mgr *mst_mgr = &aconnector->mst_port->mst_mgr; + struct drm_dp_mst_port *mst_port = aconnector->port; - mst_port = aconnector->port; - mst_mgr = &aconnector->mst_port->mst_mgr; - - if (!old_conn_state->crtc) - return 0; - - if (new_conn_state->crtc) { - new_crtc_state = drm_atomic_get_new_crtc_state(state, new_conn_state->crtc); - if (!new_crtc_state || - !drm_atomic_crtc_needs_modeset(new_crtc_state) || - new_crtc_state->enable) - return 0; - } - - return drm_dp_atomic_release_time_slots(state, - mst_mgr, - mst_port); + return drm_dp_atomic_release_time_slots(state, mst_mgr, mst_port); } static const struct drm_connector_helper_funcs dm_dp_mst_connector_helper_funcs = { diff --git a/drivers/gpu/drm/display/drm_dp_mst_topology.c b/drivers/gpu/drm/display/drm_dp_mst_topology.c index a0ed29f83556..e73b34c0cc9e 100644 --- a/drivers/gpu/drm/display/drm_dp_mst_topology.c +++ b/drivers/gpu/drm/display/drm_dp_mst_topology.c @@ -4484,14 +4484,29 @@ int drm_dp_atomic_release_time_slots(struct drm_atomic_state *state, { struct drm_dp_mst_topology_state *topology_state; struct drm_dp_mst_atomic_payload *payload; - struct drm_connector_state *conn_state; + struct drm_connector_state *old_conn_state, *new_conn_state; + + old_conn_state = drm_atomic_get_old_connector_state(state, port->connector); + if (!old_conn_state->crtc) + return 0; + + /* If the CRTC isn't disabled by this state, don't release it's payload */ + new_conn_state = 
drm_atomic_get_new_connector_state(state, port->connector); + if (new_conn_state->crtc) { + struct drm_crtc_state *crtc_state + drm_atomic_get_new_crtc_state(state, new_conn_state->crtc); + + if (!crtc_state || + !drm_atomic_crtc_needs_modeset(crtc_state) || + crtc_state->enable) + return 0; + } topology_state = drm_atomic_get_mst_topology_state(state, mgr); if (IS_ERR(topology_state)) return PTR_ERR(topology_state); - conn_state = drm_atomic_get_old_connector_state(state, port->connector); - topology_state->pending_crtc_mask |= drm_crtc_mask(conn_state->crtc); + topology_state->pending_crtc_mask |= drm_crtc_mask(old_conn_state->crtc); payload = drm_atomic_get_mst_payload_state(topology_state, port); if (WARN_ON(!payload)) { diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c index 0c922667398a..4b0af3c26176 100644 --- a/drivers/gpu/drm/i915/display/intel_dp_mst.c +++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c @@ -308,13 +308,10 @@ intel_dp_mst_atomic_check(struct drm_connector *connector, struct drm_atomic_state *_state) { struct intel_atomic_state *state = to_intel_atomic_state(_state); - struct drm_connector_state *new_conn_state - drm_atomic_get_new_connector_state(&state->base, connector); struct drm_connector_state *old_conn_state drm_atomic_get_old_connector_state(&state->base, connector); struct intel_connector *intel_connector to_intel_connector(connector); - struct drm_crtc *new_crtc = new_conn_state->crtc; struct drm_dp_mst_topology_mgr *mgr; int ret; @@ -326,27 +323,8 @@ intel_dp_mst_atomic_check(struct drm_connector *connector, if (ret) return ret; - if (!old_conn_state->crtc) - return 0; - - /* We only want to free VCPI if this state disables the CRTC on this - * connector - */ - if (new_crtc) { - struct intel_crtc *crtc = to_intel_crtc(new_crtc); - struct intel_crtc_state *crtc_state - intel_atomic_get_new_crtc_state(state, crtc); - - if (!crtc_state || - !drm_atomic_crtc_needs_modeset(&crtc_state->uapi) || - crtc_state->uapi.enable) - return 0; - } - mgr = &enc_to_mst(to_intel_encoder(old_conn_state->best_encoder))->primary->dp.mst_mgr; - ret = drm_dp_atomic_release_time_slots(&state->base, mgr, intel_connector->port); - - return ret; + return drm_dp_atomic_release_time_slots(&state->base, mgr, intel_connector->port); } static void clear_act_sent(struct intel_encoder *encoder, diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouveau/dispnv50/disp.c index 768312607fdb..461e5e3345f8 100644 --- a/drivers/gpu/drm/nouveau/dispnv50/disp.c +++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c @@ -1260,27 +1260,6 @@ nv50_mstc_atomic_check(struct drm_connector *connector, { struct nv50_mstc *mstc = nv50_mstc(connector); struct drm_dp_mst_topology_mgr *mgr = &mstc->mstm->mgr; - struct drm_connector_state *new_conn_state - drm_atomic_get_new_connector_state(state, connector); - struct drm_connector_state *old_conn_state - drm_atomic_get_old_connector_state(state, connector); - struct drm_crtc_state *crtc_state; - struct drm_crtc *new_crtc = new_conn_state->crtc; - - if (!old_conn_state->crtc) - return 0; - - /* We only want to free VCPI if this state disables the CRTC on this - * connector - */ - if (new_crtc) { - crtc_state = drm_atomic_get_new_crtc_state(state, new_crtc); - - if (!crtc_state || - !drm_atomic_crtc_needs_modeset(crtc_state) || - crtc_state->enable) - return 0; - } return drm_dp_atomic_release_time_slots(state, mgr, mstc->port); } -- 2.35.3
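With the old/new CRTC checks folded into the helper, an MST connector's ->atomic_check shrinks to a single call, as the amdgpu, i915 and nouveau hunks above show. A rough sketch for a hypothetical driver ("foo_" names and the connector struct are made up; only the helper call is from this patch):

static int foo_mst_connector_atomic_check(struct drm_connector *connector,
					  struct drm_atomic_state *state)
{
	struct foo_mst_connector *conn = to_foo_mst_connector(connector);

	/* The helper now inspects the old/new connector states itself and only
	 * releases the payload when the CRTC is actually being torn down. */
	return drm_dp_atomic_release_time_slots(state, conn->mgr, conn->port);
}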
Lyude Paul
2022-Jun-07 19:29 UTC
[Nouveau] [RESEND RFC 10/18] drm/display/dp_mst: Fix modeset tracking in drm_dp_atomic_release_vcpi_slots()
Currently with the MST helpers we avoid releasing payloads _and_ avoid pulling in the MST state if there aren't any actual payload changes. While we want to keep the first step, we need to now make sure that we're always pulling in the MST state on all modesets that can modify payloads - even if the resulting payloads in the atomic state are identical to the previous ones. This is mainly to make it so that if a CRTC is still assigned to a connector but is set to DPMS off, the CRTC still holds it's payload allocation in the atomic state and still appropriately pulls in the MST state for commit tracking. Otherwise, we'll occasionally forget to update MST payloads from changes caused by non-atomic DPMS changes. Doing this also allows us to track bandwidth limitations in a state correctly even between DPMS changes, so that there's no chance of a simple ->active change being rejected by the atomic check. Signed-off-by: Lyude Paul <lyude at redhat.com> Cc: Wayne Lin <Wayne.Lin at amd.com> Cc: Ville Syrj?l? <ville.syrjala at linux.intel.com> Cc: Fangzhi Zuo <Jerry.Zuo at amd.com> Cc: Jani Nikula <jani.nikula at intel.com> Cc: Imre Deak <imre.deak at intel.com> Cc: Daniel Vetter <daniel.vetter at ffwll.ch> Cc: Sean Paul <sean at poorly.run> --- drivers/gpu/drm/display/drm_dp_mst_topology.c | 11 ++++++++--- 1 file changed, 8 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/display/drm_dp_mst_topology.c b/drivers/gpu/drm/display/drm_dp_mst_topology.c index e73b34c0cc9e..c5edcf2a26c8 100644 --- a/drivers/gpu/drm/display/drm_dp_mst_topology.c +++ b/drivers/gpu/drm/display/drm_dp_mst_topology.c @@ -4485,6 +4485,7 @@ int drm_dp_atomic_release_time_slots(struct drm_atomic_state *state, struct drm_dp_mst_topology_state *topology_state; struct drm_dp_mst_atomic_payload *payload; struct drm_connector_state *old_conn_state, *new_conn_state; + bool update_payload = true; old_conn_state = drm_atomic_get_old_connector_state(state, port->connector); if (!old_conn_state->crtc) @@ -4496,10 +4497,12 @@ int drm_dp_atomic_release_time_slots(struct drm_atomic_state *state, struct drm_crtc_state *crtc_state drm_atomic_get_new_crtc_state(state, new_conn_state->crtc); - if (!crtc_state || - !drm_atomic_crtc_needs_modeset(crtc_state) || - crtc_state->enable) + /* No modeset means no payload changes, so it's safe to not pull in the MST state */ + if (!crtc_state || !drm_atomic_crtc_needs_modeset(crtc_state)) return 0; + + if (!crtc_state->mode_changed && !crtc_state->connectors_changed) + update_payload = false; } topology_state = drm_atomic_get_mst_topology_state(state, mgr); @@ -4507,6 +4510,8 @@ int drm_dp_atomic_release_time_slots(struct drm_atomic_state *state, return PTR_ERR(topology_state); topology_state->pending_crtc_mask |= drm_crtc_mask(old_conn_state->crtc); + if (!update_payload) + return 0; payload = drm_atomic_get_mst_payload_state(topology_state, port); if (WARN_ON(!payload)) { -- 2.35.3
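The resulting behaviour of drm_dp_atomic_release_time_slots() can be summarized as follows; this is only a restatement of the hunk above, not new code:

/*
 * no CRTC in the old connector state              -> nothing to do
 * CRTC kept, no modeset on it                     -> nothing to do, MST state untouched
 * CRTC kept, modeset with only ->active changing  -> MST state pulled in, CRTC added to
 *                                                    pending_crtc_mask, payload kept
 * CRTC kept, mode or connectors changed           -> MST state pulled in, payload released
 * CRTC removed from the connector                 -> MST state pulled in, payload released
 */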
Lyude Paul
2022-Jun-07 19:29 UTC
[Nouveau] [RESEND RFC 11/18] drm/nouveau/kms: Cache DP encoders in nouveau_connector
Post-NV50, the only kind of encoder you'll find for DP connectors on Nvidia GPUs is the SOR (serial output resource). Because SORs have fixed associations with their connectors, we can correctly assume that any DP connector on an Nvidia GPU will have exactly one SOR encoder routed to it for DisplayPort. Since we're going to need to be able to retrieve this fixed SOR DP encoder much more often as a result of hooking up MST helpers for tracking SST<->MST transitions in atomic states, let's simply cache this encoder in nouveau_connector for any DP connectors on the system to avoid looking it up each time. This isn't safe for NV50 since PIORs then come into play; however, there's no code pre-NV50 that would need to look this up anyhow - so it's not really an issue. Signed-off-by: Lyude Paul <lyude at redhat.com> --- drivers/gpu/drm/nouveau/nouveau_connector.c | 4 +++- drivers/gpu/drm/nouveau/nouveau_connector.h | 3 +++ 2 files changed, 6 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/nouveau/nouveau_connector.c b/drivers/gpu/drm/nouveau/nouveau_connector.c index 22b83a6577eb..ffbd8a9cf2af 100644 --- a/drivers/gpu/drm/nouveau/nouveau_connector.c +++ b/drivers/gpu/drm/nouveau/nouveau_connector.c @@ -1368,7 +1368,7 @@ nouveau_connector_create(struct drm_device *dev, kfree(nv_connector); return ERR_PTR(ret); } - fallthrough; + break; default: funcs = &nouveau_connector_funcs; break; @@ -1422,6 +1422,8 @@ nouveau_connector_create(struct drm_device *dev, switch (type) { case DRM_MODE_CONNECTOR_DisplayPort: + nv_connector->dp_encoder = find_encoder(&nv_connector->base, DCB_OUTPUT_DP); + fallthrough; case DRM_MODE_CONNECTOR_eDP: drm_dp_cec_register_connector(&nv_connector->aux, connector); break; diff --git a/drivers/gpu/drm/nouveau/nouveau_connector.h b/drivers/gpu/drm/nouveau/nouveau_connector.h index b0773af5a98f..f468c181d9a3 100644 --- a/drivers/gpu/drm/nouveau/nouveau_connector.h +++ b/drivers/gpu/drm/nouveau/nouveau_connector.h @@ -127,6 +127,9 @@ struct nouveau_connector { struct drm_dp_aux aux; + /* The fixed DP encoder for this connector, if there is one */ + struct nouveau_encoder *dp_encoder; + int dithering_mode; int scaling_mode; -- 2.35.3
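The payoff comes when MST code needs the topology manager for a DP connector: with the cached pointer it's a single dereference rather than an encoder-list walk. A small sketch of the kind of lookup this enables (the helper name is hypothetical, but the dereference chain is the one a later patch in this series uses from nouveau_connector_atomic_check()):

static struct drm_dp_mst_topology_mgr *
nouveau_connector_mst_mgr(struct drm_connector *connector)
{
	struct nouveau_connector *nv_conn = nouveau_connector(connector);

	if (!nv_conn->dp_encoder)
		return NULL;

	/* No find_encoder() walk needed on every atomic check */
	return &nv_conn->dp_encoder->dp.mstm->mgr;
}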
Lyude Paul
2022-Jun-07 19:29 UTC
[Nouveau] [RESEND RFC 12/18] drm/nouveau/kms: Pull mst state in for all modesets
Since we're going to be relying on atomic locking for payloads now (and the MST mgr needs to track CRTCs), pull in the topology state for all modesets in nv50_msto_atomic_check(). Signed-off-by: Lyude Paul <lyude at redhat.com> --- drivers/gpu/drm/nouveau/dispnv50/disp.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouveau/dispnv50/disp.c index 461e5e3345f8..834a5c5b77d5 100644 --- a/drivers/gpu/drm/nouveau/dispnv50/disp.c +++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c @@ -1054,7 +1054,7 @@ nv50_msto_atomic_check(struct drm_encoder *encoder, if (ret) return ret; - if (!crtc_state->mode_changed && !crtc_state->connectors_changed) + if (!drm_atomic_crtc_needs_modeset(crtc_state)) return 0; /* -- 2.35.3
Lyude Paul
2022-Jun-07 19:29 UTC
[Nouveau] [RESEND RFC 13/18] drm/display/dp_mst: Add helpers for serializing SST <-> MST transitions
There's another kind of situation where we could potentially race with nonblocking modesets and MST, especially if we were to only use the locking provided by atomic modesetting: * Display 1 begins as enabled on DP-1 in SST mode * Display 1 switches to MST mode, exposes one sink in MST mode * Userspace does non-blocking modeset to disable the SST display * Userspace does non-blocking modeset to enable the MST display with a different CRTC, but the SST display hasn't been fully taken down yet * Execution order between the last two commits isn't guaranteed since they share no drm resources We can fix this however, by ensuring that we always pull in the atomic topology state whenever a connector capable of driving an MST display performs its atomic check - and then tracking CRTC commits happening on the SST connector in the MST topology state. So, let's add some simple helpers for doing that and hook them up in various drivers. Signed-off-by: Lyude Paul <lyude at redhat.com> Cc: Wayne Lin <Wayne.Lin at amd.com> Cc: Ville Syrj?l? <ville.syrjala at linux.intel.com> Cc: Fangzhi Zuo <Jerry.Zuo at amd.com> Cc: Jani Nikula <jani.nikula at intel.com> Cc: Imre Deak <imre.deak at intel.com> Cc: Daniel Vetter <daniel.vetter at ffwll.ch> Cc: Sean Paul <sean at poorly.run> --- .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 7 +++ drivers/gpu/drm/display/drm_dp_mst_topology.c | 59 +++++++++++++++++++ drivers/gpu/drm/i915/display/intel_dp.c | 9 +++ drivers/gpu/drm/nouveau/dispnv50/disp.c | 2 +- drivers/gpu/drm/nouveau/dispnv50/disp.h | 2 + drivers/gpu/drm/nouveau/nouveau_connector.c | 14 +++++ include/drm/display/drm_dp_mst_helper.h | 2 + 7 files changed, 94 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c index d9c7393ef151..ac8648e3c1c9 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c @@ -7177,10 +7177,17 @@ amdgpu_dm_connector_atomic_check(struct drm_connector *conn, drm_atomic_get_old_connector_state(state, conn); struct drm_crtc *crtc = new_con_state->crtc; struct drm_crtc_state *new_crtc_state; + struct amdgpu_dm_connector *aconn = to_amdgpu_dm_connector(conn); int ret; trace_amdgpu_dm_connector_atomic_check(new_con_state); + if (conn->connector_type == DRM_MODE_CONNECTOR_DisplayPort) { + ret = drm_dp_mst_root_conn_atomic_check(new_con_state, &aconn->mst_mgr); + if (ret < 0) + return ret; + } + if (!crtc) return 0; diff --git a/drivers/gpu/drm/display/drm_dp_mst_topology.c b/drivers/gpu/drm/display/drm_dp_mst_topology.c index c5edcf2a26c8..a775f9437868 100644 --- a/drivers/gpu/drm/display/drm_dp_mst_topology.c +++ b/drivers/gpu/drm/display/drm_dp_mst_topology.c @@ -4608,6 +4608,65 @@ void drm_dp_mst_atomic_wait_for_dependencies(struct drm_atomic_state *state) } EXPORT_SYMBOL(drm_dp_mst_atomic_wait_for_dependencies); +/** + * drm_dp_mst_root_conn_atomic_check() - Serialize CRTC commits on MST-capable connectors operating + * in SST mode + * @new_conn_state: The new connector state of the &drm_connector + * @mgr: The MST topology manager for the &drm_connector + * + * Since MST uses fake &drm_encoder structs, the generic atomic modesetting code isn't able to + * serialize non-blocking commits happening on the real DP connector of an MST topology switching + * into/away from MST mode - as the CRTC on the real DP connector and the CRTCs on the connector's + * MST topology will never share the same &drm_encoder. 
+ * + * This function takes care of this serialization issue, by checking a root MST connector's atomic + * state to determine if it is about to have a modeset - and then pulling in the MST topology state + * if so, along with adding any relevant CRTCs to &drm_dp_mst_topology_state.pending_crtc_mask. + * + * Drivers implementing MST must call this function from the + * &drm_connector_helper_funcs.atomic_check hook of any physical DP &drm_connector capable of + * driving MST sinks. + * + * Returns: + * 0 on success, negative error code otherwise + */ +int drm_dp_mst_root_conn_atomic_check(struct drm_connector_state *new_conn_state, + struct drm_dp_mst_topology_mgr *mgr) +{ + struct drm_atomic_state *state = new_conn_state->state; + struct drm_connector_state *old_conn_state + drm_atomic_get_old_connector_state(state, new_conn_state->connector); + struct drm_crtc_state *crtc_state; + struct drm_dp_mst_topology_state *mst_state = NULL; + + if (new_conn_state->crtc) { + crtc_state = drm_atomic_get_new_crtc_state(state, new_conn_state->crtc); + if (crtc_state && drm_atomic_crtc_needs_modeset(crtc_state)) { + mst_state = drm_atomic_get_mst_topology_state(state, mgr); + if (IS_ERR(mst_state)) + return PTR_ERR(mst_state); + + mst_state->pending_crtc_mask |= drm_crtc_mask(new_conn_state->crtc); + } + } + + if (old_conn_state->crtc) { + crtc_state = drm_atomic_get_new_crtc_state(state, old_conn_state->crtc); + if (crtc_state && drm_atomic_crtc_needs_modeset(crtc_state)) { + if (!mst_state) { + mst_state = drm_atomic_get_mst_topology_state(state, mgr); + if (IS_ERR(mst_state)) + return PTR_ERR(mst_state); + } + + mst_state->pending_crtc_mask |= drm_crtc_mask(old_conn_state->crtc); + } + } + + return 0; +} +EXPORT_SYMBOL(drm_dp_mst_root_conn_atomic_check); + /** * drm_dp_mst_update_slots() - updates the slot info depending on the DP ecoding format * @mst_state: mst_state to update diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c index b8e2d3cd4d68..943b7c0acc04 100644 --- a/drivers/gpu/drm/i915/display/intel_dp.c +++ b/drivers/gpu/drm/i915/display/intel_dp.c @@ -4954,12 +4954,21 @@ static int intel_dp_connector_atomic_check(struct drm_connector *conn, { struct drm_i915_private *dev_priv = to_i915(conn->dev); struct intel_atomic_state *state = to_intel_atomic_state(_state); + struct drm_connector_state *conn_state = drm_atomic_get_new_connector_state(_state, conn); + struct intel_connector *intel_conn = to_intel_connector(conn); + struct intel_dp *intel_dp = enc_to_intel_dp(intel_conn->encoder); int ret; ret = intel_digital_connector_atomic_check(conn, &state->base); if (ret) return ret; + if (HAS_DP_MST(dev_priv) && !intel_dp_is_edp(intel_dp)) { + ret = drm_dp_mst_root_conn_atomic_check(conn_state, &intel_dp->mst_mgr); + if (ret) + return ret; + } + /* * We don't enable port sync on BDW due to missing w/as and * due to not having adjusted the modeset sequence appropriately. 
diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouveau/dispnv50/disp.c index 834a5c5b77d5..57f74cfcdebf 100644 --- a/drivers/gpu/drm/nouveau/dispnv50/disp.c +++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c @@ -1815,7 +1815,7 @@ nv50_sor_func = { .destroy = nv50_sor_destroy, }; -static bool nv50_has_mst(struct nouveau_drm *drm) +bool nv50_has_mst(struct nouveau_drm *drm) { struct nvkm_bios *bios = nvxx_bios(&drm->client.device); u32 data; diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.h b/drivers/gpu/drm/nouveau/dispnv50/disp.h index 38dec11e7dda..9d66c9c726c3 100644 --- a/drivers/gpu/drm/nouveau/dispnv50/disp.h +++ b/drivers/gpu/drm/nouveau/dispnv50/disp.h @@ -106,6 +106,8 @@ void nv50_dmac_destroy(struct nv50_dmac *); */ struct nouveau_encoder *nv50_real_outp(struct drm_encoder *encoder); +bool nv50_has_mst(struct nouveau_drm *drm); + u32 *evo_wait(struct nv50_dmac *, int nr); void evo_kick(u32 *, struct nv50_dmac *); diff --git a/drivers/gpu/drm/nouveau/nouveau_connector.c b/drivers/gpu/drm/nouveau/nouveau_connector.c index ffbd8a9cf2af..1c8235f97737 100644 --- a/drivers/gpu/drm/nouveau/nouveau_connector.c +++ b/drivers/gpu/drm/nouveau/nouveau_connector.c @@ -1104,11 +1104,25 @@ nouveau_connector_best_encoder(struct drm_connector *connector) return NULL; } +static int +nouveau_connector_atomic_check(struct drm_connector *connector, struct drm_atomic_state *state) +{ + struct nouveau_connector *nv_conn = nouveau_connector(connector); + struct drm_connector_state *conn_state + drm_atomic_get_new_connector_state(state, connector); + + if (!nv_conn->dp_encoder || !nv50_has_mst(nouveau_drm(connector->dev))) + return 0; + + return drm_dp_mst_root_conn_atomic_check(conn_state, &nv_conn->dp_encoder->dp.mstm->mgr); +} + static const struct drm_connector_helper_funcs nouveau_connector_helper_funcs = { .get_modes = nouveau_connector_get_modes, .mode_valid = nouveau_connector_mode_valid, .best_encoder = nouveau_connector_best_encoder, + .atomic_check = nouveau_connector_atomic_check, }; static const struct drm_connector_funcs diff --git a/include/drm/display/drm_dp_mst_helper.h b/include/drm/display/drm_dp_mst_helper.h index dd74afcee888..690ebcabb51f 100644 --- a/include/drm/display/drm_dp_mst_helper.h +++ b/include/drm/display/drm_dp_mst_helper.h @@ -906,6 +906,8 @@ int drm_dp_send_query_stream_enc_status(struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port, struct drm_dp_query_stream_enc_status_ack_reply *status); int __must_check drm_dp_mst_atomic_check(struct drm_atomic_state *state); +int __must_check drm_dp_mst_root_conn_atomic_check(struct drm_connector_state *new_conn_state, + struct drm_dp_mst_topology_mgr *mgr); void drm_dp_mst_get_port_malloc(struct drm_dp_mst_port *port); void drm_dp_mst_put_port_malloc(struct drm_dp_mst_port *port); -- 2.35.3
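As with the other helpers in this series, the per-driver hookup is small. A minimal sketch for a hypothetical driver ("foo_" names are made up; only drm_dp_mst_root_conn_atomic_check() and the new-connector-state accessor are real API here):

static int foo_dp_connector_atomic_check(struct drm_connector *connector,
					 struct drm_atomic_state *state)
{
	struct foo_dp *dp = connector_to_foo_dp(connector);
	struct drm_connector_state *new_conn_state =
		drm_atomic_get_new_connector_state(state, connector);

	/* Only needed on physical DP connectors that can drive MST sinks */
	if (!dp->mst_capable)
		return 0;

	/* Pulls in the MST topology state and records this connector's old/new
	 * CRTCs in pending_crtc_mask whenever either is undergoing a modeset. */
	return drm_dp_mst_root_conn_atomic_check(new_conn_state, &dp->mst_mgr);
}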
Lyude Paul
2022-Jun-07 19:29 UTC
[Nouveau] [RESEND RFC 14/18] drm/display/dp_mst: Drop all ports from topology on CSNs before queueing link address work
We want to start cutting down on all of the places that we use port validation, so that ports may be removed from the topology as quickly as possible to minimize the number of errors we run into as a result of being out of sync with the current topology status. This isn't a very typical scenario and I don't think I've ever even run into it - but since the next commit is going to make some changes to payload updates depending on their hotplug status I think it's a probably good idea to take precautions. Let's do this with CSNs by moving some code around so that we only queue link address probing work at the end of handling all CSNs - allowing us to make sure we drop as many topology references as we can beforehand. Signed-off-by: Lyude Paul <lyude at redhat.com> Cc: Wayne Lin <Wayne.Lin at amd.com> Cc: Ville Syrj?l? <ville.syrjala at linux.intel.com> Cc: Fangzhi Zuo <Jerry.Zuo at amd.com> Cc: Jani Nikula <jani.nikula at intel.com> Cc: Imre Deak <imre.deak at intel.com> Cc: Daniel Vetter <daniel.vetter at ffwll.ch> Cc: Sean Paul <sean at poorly.run> --- drivers/gpu/drm/display/drm_dp_mst_topology.c | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-) diff --git a/drivers/gpu/drm/display/drm_dp_mst_topology.c b/drivers/gpu/drm/display/drm_dp_mst_topology.c index a775f9437868..dd314586bac3 100644 --- a/drivers/gpu/drm/display/drm_dp_mst_topology.c +++ b/drivers/gpu/drm/display/drm_dp_mst_topology.c @@ -2508,7 +2508,7 @@ drm_dp_mst_handle_link_address_port(struct drm_dp_mst_branch *mstb, return ret; } -static void +static int drm_dp_mst_handle_conn_stat(struct drm_dp_mst_branch *mstb, struct drm_dp_connection_status_notify *conn_stat) { @@ -2521,7 +2521,7 @@ drm_dp_mst_handle_conn_stat(struct drm_dp_mst_branch *mstb, port = drm_dp_get_port(mstb, conn_stat->port_number); if (!port) - return; + return 0; if (port->connector) { if (!port->input && conn_stat->input_port) { @@ -2574,8 +2574,7 @@ drm_dp_mst_handle_conn_stat(struct drm_dp_mst_branch *mstb, out: drm_dp_mst_topology_put_port(port); - if (dowork) - queue_work(system_long_wq, &mstb->mgr->work); + return dowork; } static struct drm_dp_mst_branch *drm_dp_get_mst_branch_device(struct drm_dp_mst_topology_mgr *mgr, @@ -4071,7 +4070,7 @@ drm_dp_mst_process_up_req(struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_branch *mstb = NULL; struct drm_dp_sideband_msg_req_body *msg = &up_req->msg; struct drm_dp_sideband_msg_hdr *hdr = &up_req->hdr; - bool hotplug = false; + bool hotplug = false, dowork = false; if (hdr->broadcast) { const u8 *guid = NULL; @@ -4094,11 +4093,14 @@ drm_dp_mst_process_up_req(struct drm_dp_mst_topology_mgr *mgr, /* TODO: Add missing handler for DP_RESOURCE_STATUS_NOTIFY events */ if (msg->req_type == DP_CONNECTION_STATUS_NOTIFY) { - drm_dp_mst_handle_conn_stat(mstb, &msg->u.conn_stat); + dowork = drm_dp_mst_handle_conn_stat(mstb, &msg->u.conn_stat); hotplug = true; } drm_dp_mst_topology_put_mstb(mstb); + + if (dowork) + queue_work(system_long_wq, &mgr->work); return hotplug; } -- 2.35.3
Lyude Paul
2022-Jun-07 19:29 UTC
[Nouveau] [RESEND RFC 15/18] drm/display/dp_mst: Skip releasing payloads if last connected port isn't connected
In the past, we've ran into strange issues regarding errors in response to trying to destroy payloads after a port has been unplugged. We fixed this back in: This is intended to replace the workaround that was added here: commit 3769e4c0af5b ("drm/dp_mst: Avoid to mess up payload table by ports in stale topology") which was intended fix to some of the payload leaks that were observed before, where we would attempt to determine if the port was still connected to the topology before updating payloads using drm_dp_mst_port_downstream_of_branch. This wasn't a particularly good solution, since one of the points of still having port and mstb validation is to avoid sending messages to newly disconnected branches wherever possible - thus the required use of drm_dp_mst_port_downstream_of_branch would indicate something may be wrong with said validation. It seems like it may have just been races and luck that made drm_dp_mst_port_downstream_of_branch work however, as while I was trying to figure out the true cause of this issue when removing the legacy MST code - I discovered an important excerpt in section 2.14.2.3.3.6 of the DP 2.0 specs: "BAD_PARAM - This reply is transmitted when a Message Transaction parameter is in error; for example, the next port number is invalid or /no device is connected/ to the port associated with the port number." Sure enough - removing the calls to drm_dp_mst_port_downstream_of_branch() and instead checking the ->ddps field of the parent port to see whether we should release a given payload or not seems to totally fix the issue. This does actually make sense to me, as it seems the implication is that given a topology where an MSTB is removed, the payload for the MST parent's port will be released automatically if that port is also marked as disconnected. However, if there's another parent in the chain after that which is connected - payloads must be released there with an ALLOCATE_PAYLOAD message. So, let's do that! Signed-off-by: Lyude Paul <lyude at redhat.com> Cc: Wayne Lin <Wayne.Lin at amd.com> Cc: Ville Syrj?l? 
<ville.syrjala at linux.intel.com> Cc: Fangzhi Zuo <Jerry.Zuo at amd.com> Cc: Jani Nikula <jani.nikula at intel.com> Cc: Imre Deak <imre.deak at intel.com> Cc: Daniel Vetter <daniel.vetter at ffwll.ch> Cc: Sean Paul <sean at poorly.run> --- drivers/gpu/drm/display/drm_dp_mst_topology.c | 51 +++++++------------ 1 file changed, 17 insertions(+), 34 deletions(-) diff --git a/drivers/gpu/drm/display/drm_dp_mst_topology.c b/drivers/gpu/drm/display/drm_dp_mst_topology.c index dd314586bac3..70adb8db4335 100644 --- a/drivers/gpu/drm/display/drm_dp_mst_topology.c +++ b/drivers/gpu/drm/display/drm_dp_mst_topology.c @@ -3137,7 +3137,7 @@ static struct drm_dp_mst_port *drm_dp_get_last_connected_port_to_mstb(struct drm static struct drm_dp_mst_branch * drm_dp_get_last_connected_port_and_mstb(struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_branch *mstb, - int *port_num) + struct drm_dp_mst_port **last_port) { struct drm_dp_mst_branch *rmstb = NULL; struct drm_dp_mst_port *found_port; @@ -3153,7 +3153,8 @@ drm_dp_get_last_connected_port_and_mstb(struct drm_dp_mst_topology_mgr *mgr, if (drm_dp_mst_topology_try_get_mstb(found_port->parent)) { rmstb = found_port->parent; - *port_num = found_port->port_num; + *last_port = found_port; + drm_dp_mst_get_port_malloc(found_port); } else { /* Search again, starting from this parent */ mstb = found_port->parent; @@ -3170,7 +3171,7 @@ static int drm_dp_payload_send_msg(struct drm_dp_mst_topology_mgr *mgr, int pbn) { struct drm_dp_sideband_msg_tx *txmsg; - struct drm_dp_mst_branch *mstb; + struct drm_dp_mst_branch *mstb = NULL; int ret, port_num; u8 sinks[DRM_DP_MAX_SDP_STREAMS]; int i; @@ -3178,12 +3179,22 @@ static int drm_dp_payload_send_msg(struct drm_dp_mst_topology_mgr *mgr, port_num = port->port_num; mstb = drm_dp_mst_topology_get_mstb_validated(mgr, port->parent); if (!mstb) { - mstb = drm_dp_get_last_connected_port_and_mstb(mgr, - port->parent, - &port_num); + struct drm_dp_mst_port *rport = NULL; + bool ddps; + mstb = drm_dp_get_last_connected_port_and_mstb(mgr, port->parent, &rport); if (!mstb) return -EINVAL; + + ddps = rport->ddps; + port_num = rport->port_num; + drm_dp_mst_put_port_malloc(rport); + + /* If the port is currently marked as disconnected, don't send a payload message */ + if (!ddps) { + ret = -EINVAL; + goto fail_put; + } } txmsg = kzalloc(sizeof(*txmsg), GFP_KERNEL); @@ -3384,7 +3395,6 @@ int drm_dp_update_payload_part1(struct drm_dp_mst_topology_mgr *mgr, int start_s struct drm_dp_mst_port *port; int i, j; int cur_slots = start_slot; - bool skip; mutex_lock(&mgr->payload_lock); for (i = 0; i < mgr->max_payloads; i++) { @@ -3399,16 +3409,6 @@ int drm_dp_update_payload_part1(struct drm_dp_mst_topology_mgr *mgr, int start_s port = container_of(vcpi, struct drm_dp_mst_port, vcpi); - mutex_lock(&mgr->lock); - skip = !drm_dp_mst_port_downstream_of_branch(port, mgr->mst_primary); - mutex_unlock(&mgr->lock); - - if (skip) { - drm_dbg_kms(mgr->dev, - "Virtual channel %d is not in current topology\n", - i); - continue; - } /* Validated ports don't matter if we're releasing * VCPI */ @@ -3509,7 +3509,6 @@ int drm_dp_update_payload_part2(struct drm_dp_mst_topology_mgr *mgr) struct drm_dp_mst_port *port; int i; int ret = 0; - bool skip; mutex_lock(&mgr->payload_lock); for (i = 0; i < mgr->max_payloads; i++) { @@ -3519,13 +3518,6 @@ int drm_dp_update_payload_part2(struct drm_dp_mst_topology_mgr *mgr) port = container_of(mgr->proposed_vcpis[i], struct drm_dp_mst_port, vcpi); - mutex_lock(&mgr->lock); - skip = 
!drm_dp_mst_port_downstream_of_branch(port, mgr->mst_primary); - mutex_unlock(&mgr->lock); - - if (skip) - continue; - drm_dbg_kms(mgr->dev, "payload %d %d\n", i, mgr->payloads[i].payload_state); if (mgr->payloads[i].payload_state == DP_PAYLOAD_LOCAL) { ret = drm_dp_create_payload_step2(mgr, port, mgr->proposed_vcpis[i]->vcpi, &mgr->payloads[i]); @@ -4780,18 +4772,9 @@ EXPORT_SYMBOL(drm_dp_mst_reset_vcpi_slots); void drm_dp_mst_deallocate_vcpi(struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port) { - bool skip; - if (!port->vcpi.vcpi) return; - mutex_lock(&mgr->lock); - skip = !drm_dp_mst_port_downstream_of_branch(port, mgr->mst_primary); - mutex_unlock(&mgr->lock); - - if (skip) - return; - drm_dp_mst_put_payload_id(mgr, port->vcpi.vcpi); port->vcpi.num_slots = 0; port->vcpi.pbn = 0; -- 2.35.3
Lyude Paul
2022-Jun-07 19:29 UTC
[Nouveau] [RESEND RFC 16/18] drm/display/dp_mst: Maintain time slot allocations when deleting payloads
Currently, we set drm_dp_atomic_payload->time_slots to 0 in order to indicate that we're about to delete a payload in the current atomic state. Since we're going to be dropping all of the legacy code for handling the payload table however, we need to be able to ensure that we still keep track of the current time slot allocations for each payload so we can reuse this info when asking the root MST hub to delete payloads. We'll also be using it to recalculate the start slots of each VC. So, let's keep track of the intent of a payload in drm_dp_atomic_payload by adding ->delete, which we set whenever we're planning on deleting a payload during the current atomic commit. Signed-off-by: Lyude Paul <lyude at redhat.com> Cc: Wayne Lin <Wayne.Lin at amd.com> Cc: Ville Syrj?l? <ville.syrjala at linux.intel.com> Cc: Fangzhi Zuo <Jerry.Zuo at amd.com> Cc: Jani Nikula <jani.nikula at intel.com> Cc: Imre Deak <imre.deak at intel.com> Cc: Daniel Vetter <daniel.vetter at ffwll.ch> Cc: Sean Paul <sean at poorly.run> --- drivers/gpu/drm/display/drm_dp_mst_topology.c | 14 +++++++------- include/drm/display/drm_dp_mst_helper.h | 5 ++++- 2 files changed, 11 insertions(+), 8 deletions(-) diff --git a/drivers/gpu/drm/display/drm_dp_mst_topology.c b/drivers/gpu/drm/display/drm_dp_mst_topology.c index 70adb8db4335..10d26a7e028c 100644 --- a/drivers/gpu/drm/display/drm_dp_mst_topology.c +++ b/drivers/gpu/drm/display/drm_dp_mst_topology.c @@ -4410,7 +4410,7 @@ int drm_dp_atomic_find_time_slots(struct drm_atomic_state *state, * releasing and allocating the same timeslot allocation, * which is an error */ - if (WARN_ON(!prev_slots)) { + if (drm_WARN_ON(mgr->dev, payload->delete)) { drm_err(mgr->dev, "cannot allocate and release time slots on [MST PORT:%p] in the same state\n", port); @@ -4515,10 +4515,10 @@ int drm_dp_atomic_release_time_slots(struct drm_atomic_state *state, } drm_dbg_atomic(mgr->dev, "[MST PORT:%p] TU %d -> 0\n", port, payload->time_slots); - if (payload->time_slots) { + if (!payload->delete) { drm_dp_mst_put_port_malloc(port); - payload->time_slots = 0; payload->pbn = 0; + payload->delete = true; } return 0; @@ -5234,7 +5234,7 @@ drm_dp_mst_duplicate_state(struct drm_private_obj *obj) list_for_each_entry(pos, &old_state->payloads, next) { /* Prune leftover freed timeslot allocations */ - if (!pos->time_slots) + if (pos->delete) continue; payload = kmemdup(pos, sizeof(*payload), GFP_KERNEL); @@ -5266,8 +5266,8 @@ static void drm_dp_mst_destroy_state(struct drm_private_obj *obj, int i; list_for_each_entry_safe(pos, tmp, &mst_state->payloads, next) { - /* We only keep references to ports with non-zero VCPIs */ - if (pos->time_slots) + /* We only keep references to ports with active payloads */ + if (!pos->delete) drm_dp_mst_put_port_malloc(pos->port); kfree(pos); } @@ -5395,7 +5395,7 @@ drm_dp_mst_atomic_check_payload_alloc_limits(struct drm_dp_mst_topology_mgr *mgr list_for_each_entry(payload, &mst_state->payloads, next) { /* Releasing payloads is always OK-even if the port is gone */ - if (!payload->time_slots) { + if (payload->delete) { drm_dbg_atomic(mgr->dev, "[MST PORT:%p] releases all time slots\n", payload->port); continue; diff --git a/include/drm/display/drm_dp_mst_helper.h b/include/drm/display/drm_dp_mst_helper.h index 690ebcabb51f..ecd130028337 100644 --- a/include/drm/display/drm_dp_mst_helper.h +++ b/include/drm/display/drm_dp_mst_helper.h @@ -555,8 +555,11 @@ struct drm_dp_mst_atomic_payload { int time_slots; /** @pbn: The payload bandwidth for this payload */ int pbn; + + /** @delete: 
Whether or not we intend to delete this payload during this atomic commit */ + bool delete : 1; /** @dsc_enabled: Whether or not this payload has DSC enabled */ - bool dsc_enabled; + bool dsc_enabled : 1; /** @next: The list node for this payload */ struct list_head next; -- 2.35.3
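A rough sketch, not part of this patch, of the start-slot recalculation the commit message alludes to: because deleted payloads keep their time_slots until the state is pruned, the remaining virtual channels can be repacked deterministically. The "start_slot" member on the payload is a hypothetical field used purely for illustration here.

static void foo_recalc_start_slots(struct drm_dp_mst_topology_state *mst_state)
{
	struct drm_dp_mst_atomic_payload *payload;
	int start = mst_state->start_slot;

	list_for_each_entry(payload, &mst_state->payloads, next) {
		if (payload->delete)
			continue;	/* these slots are being freed at the hub */
		payload->start_slot = start;	/* hypothetical field */
		start += payload->time_slots;
	}
}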
Lyude Paul
2022-Jun-07 19:29 UTC
[Nouveau] [RESEND RFC 17/18] drm/radeon: Drop legacy MST support
Right now, radeon is technically the only non-atomic driver still making use of the MST helpers - and thus the final user of all of the legacy MST helpers. Originally I was going to look into seeing if we could move legacy MST into the radeon driver itself, however: * SI and CIK both can use amdgpu, which still supports MST * It currently doesn't work according to my own testing. I'm sure with some troubleshooting we could likely fix it, but that brings me to point #2: * It was never actually enabled by default, and is still marked as experimental in the module parameter description * If people were using it, someone probably would have probably seen a bug report about how it is currently not functional by now. That certainly doesn't appear to be the case, since before getting access to my own hardware I had to go out of my way to try finding someone to help test whether this legacy MST code even works - even amongst AMD employees. * Getting rid of this code and only having atomic versions of the MST helpers to maintain is likely going to be a lot easier in the long run, and will make it a lot easier for others contributing to this code to follow along with what's happening. FWIW - if anyone still wants this code to be in the tree and has a good idea of how to support this without needing to maintain the legacy MST helpers (trying to move them would probably be acceptable), I'm happy to suggestions. But my hope is that we can just drop this code and forget about it. I've already run this idea by Harry Wentland and Alex Deucher a few times as well. Signed-off-by: Lyude Paul <lyude at redhat.com> Cc: Wayne Lin <Wayne.Lin at amd.com> Cc: Ville Syrj?l? <ville.syrjala at linux.intel.com> Cc: Fangzhi Zuo <Jerry.Zuo at amd.com> Cc: Jani Nikula <jani.nikula at intel.com> Cc: Imre Deak <imre.deak at intel.com> Cc: Daniel Vetter <daniel.vetter at ffwll.ch> Cc: Sean Paul <sean at poorly.run> --- drivers/gpu/drm/radeon/Makefile | 2 +- drivers/gpu/drm/radeon/atombios_crtc.c | 11 +- drivers/gpu/drm/radeon/atombios_encoders.c | 59 -- drivers/gpu/drm/radeon/radeon_atombios.c | 2 - drivers/gpu/drm/radeon/radeon_connectors.c | 61 +- drivers/gpu/drm/radeon/radeon_device.c | 1 - drivers/gpu/drm/radeon/radeon_dp_mst.c | 778 --------------------- drivers/gpu/drm/radeon/radeon_drv.c | 4 - drivers/gpu/drm/radeon/radeon_encoders.c | 14 +- drivers/gpu/drm/radeon/radeon_irq_kms.c | 10 +- drivers/gpu/drm/radeon/radeon_mode.h | 40 -- 11 files changed, 7 insertions(+), 975 deletions(-) delete mode 100644 drivers/gpu/drm/radeon/radeon_dp_mst.c diff --git a/drivers/gpu/drm/radeon/Makefile b/drivers/gpu/drm/radeon/Makefile index ea5380e24c3c..b783ab39a075 100644 --- a/drivers/gpu/drm/radeon/Makefile +++ b/drivers/gpu/drm/radeon/Makefile @@ -49,7 +49,7 @@ radeon-y += radeon_device.o radeon_asic.o radeon_kms.o \ rv770_smc.o cypress_dpm.o btc_dpm.o sumo_dpm.o sumo_smc.o trinity_dpm.o \ trinity_smc.o ni_dpm.o si_smc.o si_dpm.o kv_smc.o kv_dpm.o ci_smc.o \ ci_dpm.o dce6_afmt.o radeon_vm.o radeon_ucode.o radeon_ib.o \ - radeon_sync.o radeon_audio.o radeon_dp_auxch.o radeon_dp_mst.o + radeon_sync.o radeon_audio.o radeon_dp_auxch.o radeon-$(CONFIG_MMU_NOTIFIER) += radeon_mn.o diff --git a/drivers/gpu/drm/radeon/atombios_crtc.c b/drivers/gpu/drm/radeon/atombios_crtc.c index c94e429e75f9..e27da31d4f62 100644 --- a/drivers/gpu/drm/radeon/atombios_crtc.c +++ b/drivers/gpu/drm/radeon/atombios_crtc.c @@ -616,13 +616,6 @@ static u32 atombios_adjust_pll(struct drm_crtc *crtc, } } - if (radeon_encoder->is_mst_encoder) { - struct 
radeon_encoder_mst *mst_enc = radeon_encoder->enc_priv; - struct radeon_connector_atom_dig *dig_connector = mst_enc->connector->con_priv; - - dp_clock = dig_connector->dp_clock; - } - /* use recommended ref_div for ss */ if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) { if (radeon_crtc->ss_enabled) { @@ -971,9 +964,7 @@ static bool atombios_crtc_prepare_pll(struct drm_crtc *crtc, struct drm_display_ radeon_crtc->bpc = 8; radeon_crtc->ss_enabled = false; - if (radeon_encoder->is_mst_encoder) { - radeon_dp_mst_prepare_pll(crtc, mode); - } else if ((radeon_encoder->active_device & (ATOM_DEVICE_LCD_SUPPORT | ATOM_DEVICE_DFP_SUPPORT)) || + if ((radeon_encoder->active_device & (ATOM_DEVICE_LCD_SUPPORT | ATOM_DEVICE_DFP_SUPPORT)) || (radeon_encoder_get_dp_bridge_encoder_id(radeon_crtc->encoder) != ENCODER_OBJECT_ID_NONE)) { struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv; struct drm_connector *connector diff --git a/drivers/gpu/drm/radeon/atombios_encoders.c b/drivers/gpu/drm/radeon/atombios_encoders.c index 70bd84b7ef2b..597446a8df34 100644 --- a/drivers/gpu/drm/radeon/atombios_encoders.c +++ b/drivers/gpu/drm/radeon/atombios_encoders.c @@ -681,15 +681,7 @@ atombios_get_encoder_mode(struct drm_encoder *encoder) struct drm_connector *connector; struct radeon_connector *radeon_connector; struct radeon_connector_atom_dig *dig_connector; - struct radeon_encoder_atom_dig *dig_enc; - if (radeon_encoder_is_digital(encoder)) { - dig_enc = radeon_encoder->enc_priv; - if (dig_enc->active_mst_links) - return ATOM_ENCODER_MODE_DP_MST; - } - if (radeon_encoder->is_mst_encoder || radeon_encoder->offset) - return ATOM_ENCODER_MODE_DP_MST; /* dp bridges are always DP */ if (radeon_encoder_get_dp_bridge_encoder_id(encoder) != ENCODER_OBJECT_ID_NONE) return ATOM_ENCODER_MODE_DP; @@ -1737,10 +1729,6 @@ radeon_atom_encoder_dpms_dig(struct drm_encoder *encoder, int mode) case DRM_MODE_DPMS_SUSPEND: case DRM_MODE_DPMS_OFF: - /* don't power off encoders with active MST links */ - if (dig->active_mst_links) - return; - if (ASIC_IS_DCE4(rdev)) { if (ENCODER_MODE_IS_DP(atombios_get_encoder_mode(encoder)) && connector) atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_DP_VIDEO_OFF, 0); @@ -2006,53 +1994,6 @@ atombios_set_encoder_crtc_source(struct drm_encoder *encoder) radeon_atombios_encoder_crtc_scratch_regs(encoder, radeon_crtc->crtc_id); } -void -atombios_set_mst_encoder_crtc_source(struct drm_encoder *encoder, int fe) -{ - struct drm_device *dev = encoder->dev; - struct radeon_device *rdev = dev->dev_private; - struct radeon_crtc *radeon_crtc = to_radeon_crtc(encoder->crtc); - int index = GetIndexIntoMasterTable(COMMAND, SelectCRTC_Source); - uint8_t frev, crev; - union crtc_source_param args; - - memset(&args, 0, sizeof(args)); - - if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev)) - return; - - if (frev != 1 && crev != 2) - DRM_ERROR("Unknown table for MST %d, %d\n", frev, crev); - - args.v2.ucCRTC = radeon_crtc->crtc_id; - args.v2.ucEncodeMode = ATOM_ENCODER_MODE_DP_MST; - - switch (fe) { - case 0: - args.v2.ucEncoderID = ASIC_INT_DIG1_ENCODER_ID; - break; - case 1: - args.v2.ucEncoderID = ASIC_INT_DIG2_ENCODER_ID; - break; - case 2: - args.v2.ucEncoderID = ASIC_INT_DIG3_ENCODER_ID; - break; - case 3: - args.v2.ucEncoderID = ASIC_INT_DIG4_ENCODER_ID; - break; - case 4: - args.v2.ucEncoderID = ASIC_INT_DIG5_ENCODER_ID; - break; - case 5: - args.v2.ucEncoderID = ASIC_INT_DIG6_ENCODER_ID; - break; - case 6: - args.v2.ucEncoderID = ASIC_INT_DIG7_ENCODER_ID; - 
break; - } - atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); -} - static void atombios_apply_encoder_quirks(struct drm_encoder *encoder, struct drm_display_mode *mode) diff --git a/drivers/gpu/drm/radeon/radeon_atombios.c b/drivers/gpu/drm/radeon/radeon_atombios.c index 28c4413f4dc8..204127bad89c 100644 --- a/drivers/gpu/drm/radeon/radeon_atombios.c +++ b/drivers/gpu/drm/radeon/radeon_atombios.c @@ -826,8 +826,6 @@ bool radeon_get_atom_connector_info_from_object_table(struct drm_device *dev) } radeon_link_encoder_connector(dev); - - radeon_setup_mst_connector(dev); return true; } diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c b/drivers/gpu/drm/radeon/radeon_connectors.c index 58db79921cd3..f7431d224604 100644 --- a/drivers/gpu/drm/radeon/radeon_connectors.c +++ b/drivers/gpu/drm/radeon/radeon_connectors.c @@ -37,33 +37,12 @@ #include <linux/pm_runtime.h> #include <linux/vga_switcheroo.h> -static int radeon_dp_handle_hpd(struct drm_connector *connector) -{ - struct radeon_connector *radeon_connector = to_radeon_connector(connector); - int ret; - - ret = radeon_dp_mst_check_status(radeon_connector); - if (ret == -EINVAL) - return 1; - return 0; -} void radeon_connector_hotplug(struct drm_connector *connector) { struct drm_device *dev = connector->dev; struct radeon_device *rdev = dev->dev_private; struct radeon_connector *radeon_connector = to_radeon_connector(connector); - if (connector->connector_type == DRM_MODE_CONNECTOR_DisplayPort) { - struct radeon_connector_atom_dig *dig_connector - radeon_connector->con_priv; - - if (radeon_connector->is_mst_connector) - return; - if (dig_connector->is_mst) { - radeon_dp_handle_hpd(connector); - return; - } - } /* bail if the connector does not have hpd pin, e.g., * VGA, TV, etc. */ @@ -1664,9 +1643,6 @@ radeon_dp_detect(struct drm_connector *connector, bool force) struct drm_encoder *encoder = radeon_best_single_encoder(connector); int r; - if (radeon_dig_connector->is_mst) - return connector_status_disconnected; - if (!drm_kms_helper_is_poll_worker()) { r = pm_runtime_get_sync(connector->dev->dev); if (r < 0) { @@ -1729,21 +1705,12 @@ radeon_dp_detect(struct drm_connector *connector, bool force) radeon_dig_connector->dp_sink_type = radeon_dp_getsinktype(radeon_connector); if (radeon_hpd_sense(rdev, radeon_connector->hpd.hpd)) { ret = connector_status_connected; - if (radeon_dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT) { + if (radeon_dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT) radeon_dp_getdpcd(radeon_connector); - r = radeon_dp_mst_probe(radeon_connector); - if (r == 1) - ret = connector_status_disconnected; - } } else { if (radeon_dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT) { - if (radeon_dp_getdpcd(radeon_connector)) { - r = radeon_dp_mst_probe(radeon_connector); - if (r == 1) - ret = connector_status_disconnected; - else - ret = connector_status_connected; - } + if (radeon_dp_getdpcd(radeon_connector)) + ret = connector_status_connected; } else { /* try non-aux ddc (DP to DVI/HDMI/etc. 
adapter) */ if (radeon_ddc_probe(radeon_connector, false)) @@ -2561,25 +2528,3 @@ radeon_add_legacy_connector(struct drm_device *dev, connector->display_info.subpixel_order = subpixel_order; drm_connector_register(connector); } - -void radeon_setup_mst_connector(struct drm_device *dev) -{ - struct radeon_device *rdev = dev->dev_private; - struct drm_connector *connector; - struct radeon_connector *radeon_connector; - - if (!ASIC_IS_DCE5(rdev)) - return; - - if (radeon_mst == 0) - return; - - list_for_each_entry(connector, &dev->mode_config.connector_list, head) { - radeon_connector = to_radeon_connector(connector); - - if (connector->connector_type != DRM_MODE_CONNECTOR_DisplayPort) - continue; - - radeon_dp_mst_init(radeon_connector); - } -} diff --git a/drivers/gpu/drm/radeon/radeon_device.c b/drivers/gpu/drm/radeon/radeon_device.c index 15692cb241fc..10548a184cc5 100644 --- a/drivers/gpu/drm/radeon/radeon_device.c +++ b/drivers/gpu/drm/radeon/radeon_device.c @@ -1437,7 +1437,6 @@ int radeon_device_init(struct radeon_device *rdev, goto failed; radeon_gem_debugfs_init(rdev); - radeon_mst_debugfs_init(rdev); if (rdev->flags & RADEON_IS_AGP && !rdev->accel_working) { /* Acceleration not working on AGP card try again diff --git a/drivers/gpu/drm/radeon/radeon_dp_mst.c b/drivers/gpu/drm/radeon/radeon_dp_mst.c deleted file mode 100644 index 54ced1f4ff67..000000000000 --- a/drivers/gpu/drm/radeon/radeon_dp_mst.c +++ /dev/null @@ -1,778 +0,0 @@ -// SPDX-License-Identifier: MIT - -#include <drm/display/drm_dp_mst_helper.h> -#include <drm/drm_fb_helper.h> -#include <drm/drm_file.h> -#include <drm/drm_probe_helper.h> - -#include "atom.h" -#include "ni_reg.h" -#include "radeon.h" - -static struct radeon_encoder *radeon_dp_create_fake_mst_encoder(struct radeon_connector *connector); - -static int radeon_atom_set_enc_offset(int id) -{ - static const int offsets[] = { EVERGREEN_CRTC0_REGISTER_OFFSET, - EVERGREEN_CRTC1_REGISTER_OFFSET, - EVERGREEN_CRTC2_REGISTER_OFFSET, - EVERGREEN_CRTC3_REGISTER_OFFSET, - EVERGREEN_CRTC4_REGISTER_OFFSET, - EVERGREEN_CRTC5_REGISTER_OFFSET, - 0x13830 - 0x7030 }; - - return offsets[id]; -} - -static int radeon_dp_mst_set_be_cntl(struct radeon_encoder *primary, - struct radeon_encoder_mst *mst_enc, - enum radeon_hpd_id hpd, bool enable) -{ - struct drm_device *dev = primary->base.dev; - struct radeon_device *rdev = dev->dev_private; - uint32_t reg; - int retries = 0; - uint32_t temp; - - reg = RREG32(NI_DIG_BE_CNTL + primary->offset); - - /* set MST mode */ - reg &= ~NI_DIG_FE_DIG_MODE(7); - reg |= NI_DIG_FE_DIG_MODE(NI_DIG_MODE_DP_MST); - - if (enable) - reg |= NI_DIG_FE_SOURCE_SELECT(1 << mst_enc->fe); - else - reg &= ~NI_DIG_FE_SOURCE_SELECT(1 << mst_enc->fe); - - reg |= NI_DIG_HPD_SELECT(hpd); - DRM_DEBUG_KMS("writing 0x%08x 0x%08x\n", NI_DIG_BE_CNTL + primary->offset, reg); - WREG32(NI_DIG_BE_CNTL + primary->offset, reg); - - if (enable) { - uint32_t offset = radeon_atom_set_enc_offset(mst_enc->fe); - - do { - temp = RREG32(NI_DIG_FE_CNTL + offset); - } while ((temp & NI_DIG_SYMCLK_FE_ON) && retries++ < 10000); - if (retries == 10000) - DRM_ERROR("timed out waiting for FE %d %d\n", primary->offset, mst_enc->fe); - } - return 0; -} - -static int radeon_dp_mst_set_stream_attrib(struct radeon_encoder *primary, - int stream_number, - int fe, - int slots) -{ - struct drm_device *dev = primary->base.dev; - struct radeon_device *rdev = dev->dev_private; - u32 temp, val; - int retries = 0; - int satreg, satidx; - - satreg = stream_number >> 1; - satidx = stream_number & 1; - 
- temp = RREG32(NI_DP_MSE_SAT0 + satreg + primary->offset); - - val = NI_DP_MSE_SAT_SLOT_COUNT0(slots) | NI_DP_MSE_SAT_SRC0(fe); - - val <<= (16 * satidx); - - temp &= ~(0xffff << (16 * satidx)); - - temp |= val; - - DRM_DEBUG_KMS("writing 0x%08x 0x%08x\n", NI_DP_MSE_SAT0 + satreg + primary->offset, temp); - WREG32(NI_DP_MSE_SAT0 + satreg + primary->offset, temp); - - WREG32(NI_DP_MSE_SAT_UPDATE + primary->offset, 1); - - do { - unsigned value1, value2; - udelay(10); - temp = RREG32(NI_DP_MSE_SAT_UPDATE + primary->offset); - - value1 = temp & NI_DP_MSE_SAT_UPDATE_MASK; - value2 = temp & NI_DP_MSE_16_MTP_KEEPOUT; - - if (!value1 && !value2) - break; - } while (retries++ < 50); - - if (retries == 10000) - DRM_ERROR("timed out waitin for SAT update %d\n", primary->offset); - - /* MTP 16 ? */ - return 0; -} - -static int radeon_dp_mst_update_stream_attribs(struct radeon_connector *mst_conn, - struct radeon_encoder *primary) -{ - struct drm_device *dev = mst_conn->base.dev; - struct stream_attribs new_attribs[6]; - int i; - int idx = 0; - struct radeon_connector *radeon_connector; - struct drm_connector *connector; - - memset(new_attribs, 0, sizeof(new_attribs)); - list_for_each_entry(connector, &dev->mode_config.connector_list, head) { - struct radeon_encoder *subenc; - struct radeon_encoder_mst *mst_enc; - - radeon_connector = to_radeon_connector(connector); - if (!radeon_connector->is_mst_connector) - continue; - - if (radeon_connector->mst_port != mst_conn) - continue; - - subenc = radeon_connector->mst_encoder; - mst_enc = subenc->enc_priv; - - if (!mst_enc->enc_active) - continue; - - new_attribs[idx].fe = mst_enc->fe; - new_attribs[idx].slots = drm_dp_mst_get_vcpi_slots(&mst_conn->mst_mgr, mst_enc->port); - idx++; - } - - for (i = 0; i < idx; i++) { - if (new_attribs[i].fe != mst_conn->cur_stream_attribs[i].fe || - new_attribs[i].slots != mst_conn->cur_stream_attribs[i].slots) { - radeon_dp_mst_set_stream_attrib(primary, i, new_attribs[i].fe, new_attribs[i].slots); - mst_conn->cur_stream_attribs[i].fe = new_attribs[i].fe; - mst_conn->cur_stream_attribs[i].slots = new_attribs[i].slots; - } - } - - for (i = idx; i < mst_conn->enabled_attribs; i++) { - radeon_dp_mst_set_stream_attrib(primary, i, 0, 0); - mst_conn->cur_stream_attribs[i].fe = 0; - mst_conn->cur_stream_attribs[i].slots = 0; - } - mst_conn->enabled_attribs = idx; - return 0; -} - -static int radeon_dp_mst_set_vcp_size(struct radeon_encoder *mst, s64 avg_time_slots_per_mtp) -{ - struct drm_device *dev = mst->base.dev; - struct radeon_device *rdev = dev->dev_private; - struct radeon_encoder_mst *mst_enc = mst->enc_priv; - uint32_t val, temp; - uint32_t offset = radeon_atom_set_enc_offset(mst_enc->fe); - int retries = 0; - uint32_t x = drm_fixp2int(avg_time_slots_per_mtp); - uint32_t y = drm_fixp2int_ceil((avg_time_slots_per_mtp - x) << 26); - - val = NI_DP_MSE_RATE_X(x) | NI_DP_MSE_RATE_Y(y); - - WREG32(NI_DP_MSE_RATE_CNTL + offset, val); - - do { - temp = RREG32(NI_DP_MSE_RATE_UPDATE + offset); - udelay(10); - } while ((temp & 0x1) && (retries++ < 10000)); - - if (retries >= 10000) - DRM_ERROR("timed out wait for rate cntl %d\n", mst_enc->fe); - return 0; -} - -static int radeon_dp_mst_get_ddc_modes(struct drm_connector *connector) -{ - struct radeon_connector *radeon_connector = to_radeon_connector(connector); - struct radeon_connector *master = radeon_connector->mst_port; - struct edid *edid; - int ret = 0; - - edid = drm_dp_mst_get_edid(connector, &master->mst_mgr, radeon_connector->port); - radeon_connector->edid = edid; - 
DRM_DEBUG_KMS("edid retrieved %p\n", edid); - if (radeon_connector->edid) { - drm_connector_update_edid_property(&radeon_connector->base, radeon_connector->edid); - ret = drm_add_edid_modes(&radeon_connector->base, radeon_connector->edid); - return ret; - } - drm_connector_update_edid_property(&radeon_connector->base, NULL); - - return ret; -} - -static int radeon_dp_mst_get_modes(struct drm_connector *connector) -{ - return radeon_dp_mst_get_ddc_modes(connector); -} - -static enum drm_mode_status -radeon_dp_mst_mode_valid(struct drm_connector *connector, - struct drm_display_mode *mode) -{ - /* TODO - validate mode against available PBN for link */ - if (mode->clock < 10000) - return MODE_CLOCK_LOW; - - if (mode->flags & DRM_MODE_FLAG_DBLCLK) - return MODE_H_ILLEGAL; - - return MODE_OK; -} - -static struct -drm_encoder *radeon_mst_best_encoder(struct drm_connector *connector) -{ - struct radeon_connector *radeon_connector = to_radeon_connector(connector); - - return &radeon_connector->mst_encoder->base; -} - -static int -radeon_dp_mst_detect(struct drm_connector *connector, - struct drm_modeset_acquire_ctx *ctx, - bool force) -{ - struct radeon_connector *radeon_connector - to_radeon_connector(connector); - struct radeon_connector *master = radeon_connector->mst_port; - - if (drm_connector_is_unregistered(connector)) - return connector_status_disconnected; - - return drm_dp_mst_detect_port(connector, ctx, &master->mst_mgr, - radeon_connector->port); -} - -static const struct drm_connector_helper_funcs radeon_dp_mst_connector_helper_funcs = { - .get_modes = radeon_dp_mst_get_modes, - .mode_valid = radeon_dp_mst_mode_valid, - .best_encoder = radeon_mst_best_encoder, - .detect_ctx = radeon_dp_mst_detect, -}; - -static void -radeon_dp_mst_connector_destroy(struct drm_connector *connector) -{ - struct radeon_connector *radeon_connector = to_radeon_connector(connector); - struct radeon_encoder *radeon_encoder = radeon_connector->mst_encoder; - - drm_encoder_cleanup(&radeon_encoder->base); - kfree(radeon_encoder); - drm_connector_cleanup(connector); - kfree(radeon_connector); -} - -static const struct drm_connector_funcs radeon_dp_mst_connector_funcs = { - .dpms = drm_helper_connector_dpms, - .fill_modes = drm_helper_probe_single_connector_modes, - .destroy = radeon_dp_mst_connector_destroy, -}; - -static struct drm_connector *radeon_dp_add_mst_connector(struct drm_dp_mst_topology_mgr *mgr, - struct drm_dp_mst_port *port, - const char *pathprop) -{ - struct radeon_connector *master = container_of(mgr, struct radeon_connector, mst_mgr); - struct drm_device *dev = master->base.dev; - struct radeon_connector *radeon_connector; - struct drm_connector *connector; - - radeon_connector = kzalloc(sizeof(*radeon_connector), GFP_KERNEL); - if (!radeon_connector) - return NULL; - - radeon_connector->is_mst_connector = true; - connector = &radeon_connector->base; - radeon_connector->port = port; - radeon_connector->mst_port = master; - DRM_DEBUG_KMS("\n"); - - drm_connector_init(dev, connector, &radeon_dp_mst_connector_funcs, DRM_MODE_CONNECTOR_DisplayPort); - drm_connector_helper_add(connector, &radeon_dp_mst_connector_helper_funcs); - radeon_connector->mst_encoder = radeon_dp_create_fake_mst_encoder(master); - - drm_object_attach_property(&connector->base, dev->mode_config.path_property, 0); - drm_object_attach_property(&connector->base, dev->mode_config.tile_property, 0); - drm_connector_set_path_property(connector, pathprop); - - return connector; -} - -static const struct drm_dp_mst_topology_cbs 
mst_cbs = { - .add_connector = radeon_dp_add_mst_connector, -}; - -static struct -radeon_connector *radeon_mst_find_connector(struct drm_encoder *encoder) -{ - struct drm_device *dev = encoder->dev; - struct drm_connector *connector; - - list_for_each_entry(connector, &dev->mode_config.connector_list, head) { - struct radeon_connector *radeon_connector = to_radeon_connector(connector); - if (!connector->encoder) - continue; - if (!radeon_connector->is_mst_connector) - continue; - - DRM_DEBUG_KMS("checking %p vs %p\n", connector->encoder, encoder); - if (connector->encoder == encoder) - return radeon_connector; - } - return NULL; -} - -void radeon_dp_mst_prepare_pll(struct drm_crtc *crtc, struct drm_display_mode *mode) -{ - struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc); - struct drm_device *dev = crtc->dev; - struct radeon_device *rdev = dev->dev_private; - struct radeon_encoder *radeon_encoder = to_radeon_encoder(radeon_crtc->encoder); - struct radeon_encoder_mst *mst_enc = radeon_encoder->enc_priv; - struct radeon_connector *radeon_connector = radeon_mst_find_connector(&radeon_encoder->base); - int dp_clock; - struct radeon_connector_atom_dig *dig_connector = mst_enc->connector->con_priv; - - if (radeon_connector) { - radeon_connector->pixelclock_for_modeset = mode->clock; - if (radeon_connector->base.display_info.bpc) - radeon_crtc->bpc = radeon_connector->base.display_info.bpc; - else - radeon_crtc->bpc = 8; - } - - DRM_DEBUG_KMS("dp_clock %p %d\n", dig_connector, dig_connector->dp_clock); - dp_clock = dig_connector->dp_clock; - radeon_crtc->ss_enabled - radeon_atombios_get_asic_ss_info(rdev, &radeon_crtc->ss, - ASIC_INTERNAL_SS_ON_DP, - dp_clock); -} - -static void -radeon_mst_encoder_dpms(struct drm_encoder *encoder, int mode) -{ - struct drm_device *dev = encoder->dev; - struct radeon_device *rdev = dev->dev_private; - struct radeon_encoder *radeon_encoder, *primary; - struct radeon_encoder_mst *mst_enc; - struct radeon_encoder_atom_dig *dig_enc; - struct radeon_connector *radeon_connector; - struct drm_crtc *crtc; - struct radeon_crtc *radeon_crtc; - int slots; - s64 fixed_pbn, fixed_pbn_per_slot, avg_time_slots_per_mtp; - if (!ASIC_IS_DCE5(rdev)) { - DRM_ERROR("got mst dpms on non-DCE5\n"); - return; - } - - radeon_connector = radeon_mst_find_connector(encoder); - if (!radeon_connector) - return; - - radeon_encoder = to_radeon_encoder(encoder); - - mst_enc = radeon_encoder->enc_priv; - - primary = mst_enc->primary; - - dig_enc = primary->enc_priv; - - crtc = encoder->crtc; - DRM_DEBUG_KMS("got connector %d\n", dig_enc->active_mst_links); - - switch (mode) { - case DRM_MODE_DPMS_ON: - dig_enc->active_mst_links++; - - radeon_crtc = to_radeon_crtc(crtc); - - if (dig_enc->active_mst_links == 1) { - mst_enc->fe = dig_enc->dig_encoder; - mst_enc->fe_from_be = true; - atombios_set_mst_encoder_crtc_source(encoder, mst_enc->fe); - - atombios_dig_encoder_setup(&primary->base, ATOM_ENCODER_CMD_SETUP, 0); - atombios_dig_transmitter_setup2(&primary->base, ATOM_TRANSMITTER_ACTION_ENABLE, - 0, 0, dig_enc->dig_encoder); - - if (radeon_dp_needs_link_train(mst_enc->connector) || - dig_enc->active_mst_links == 1) { - radeon_dp_link_train(&primary->base, &mst_enc->connector->base); - } - - } else { - mst_enc->fe = radeon_atom_pick_dig_encoder(encoder, radeon_crtc->crtc_id); - if (mst_enc->fe == -1) - DRM_ERROR("failed to get frontend for dig encoder\n"); - mst_enc->fe_from_be = false; - atombios_set_mst_encoder_crtc_source(encoder, mst_enc->fe); - } - - DRM_DEBUG_KMS("dig encoder is %d %d 
%d\n", dig_enc->dig_encoder, - dig_enc->linkb, radeon_crtc->crtc_id); - - slots = drm_dp_find_vcpi_slots(&radeon_connector->mst_port->mst_mgr, - mst_enc->pbn); - drm_dp_mst_allocate_vcpi(&radeon_connector->mst_port->mst_mgr, - radeon_connector->port, - mst_enc->pbn, slots); - drm_dp_update_payload_part1(&radeon_connector->mst_port->mst_mgr, 1); - - radeon_dp_mst_set_be_cntl(primary, mst_enc, - radeon_connector->mst_port->hpd.hpd, true); - - mst_enc->enc_active = true; - radeon_dp_mst_update_stream_attribs(radeon_connector->mst_port, primary); - - fixed_pbn = drm_int2fixp(mst_enc->pbn); - fixed_pbn_per_slot = drm_int2fixp(radeon_connector->mst_port->mst_mgr.pbn_div); - avg_time_slots_per_mtp = drm_fixp_div(fixed_pbn, fixed_pbn_per_slot); - radeon_dp_mst_set_vcp_size(radeon_encoder, avg_time_slots_per_mtp); - - atombios_dig_encoder_setup2(&primary->base, ATOM_ENCODER_CMD_DP_VIDEO_ON, 0, - mst_enc->fe); - drm_dp_check_act_status(&radeon_connector->mst_port->mst_mgr); - - drm_dp_update_payload_part2(&radeon_connector->mst_port->mst_mgr); - - break; - case DRM_MODE_DPMS_STANDBY: - case DRM_MODE_DPMS_SUSPEND: - case DRM_MODE_DPMS_OFF: - DRM_ERROR("DPMS OFF %d\n", dig_enc->active_mst_links); - - if (!mst_enc->enc_active) - return; - - drm_dp_mst_reset_vcpi_slots(&radeon_connector->mst_port->mst_mgr, mst_enc->port); - drm_dp_update_payload_part1(&radeon_connector->mst_port->mst_mgr, 1); - - drm_dp_check_act_status(&radeon_connector->mst_port->mst_mgr); - /* and this can also fail */ - drm_dp_update_payload_part2(&radeon_connector->mst_port->mst_mgr); - - drm_dp_mst_deallocate_vcpi(&radeon_connector->mst_port->mst_mgr, mst_enc->port); - - mst_enc->enc_active = false; - radeon_dp_mst_update_stream_attribs(radeon_connector->mst_port, primary); - - radeon_dp_mst_set_be_cntl(primary, mst_enc, - radeon_connector->mst_port->hpd.hpd, false); - atombios_dig_encoder_setup2(&primary->base, ATOM_ENCODER_CMD_DP_VIDEO_OFF, 0, - mst_enc->fe); - - if (!mst_enc->fe_from_be) - radeon_atom_release_dig_encoder(rdev, mst_enc->fe); - - mst_enc->fe_from_be = false; - dig_enc->active_mst_links--; - if (dig_enc->active_mst_links == 0) { - /* drop link */ - } - - break; - } - -} - -static bool radeon_mst_mode_fixup(struct drm_encoder *encoder, - const struct drm_display_mode *mode, - struct drm_display_mode *adjusted_mode) -{ - struct radeon_encoder_mst *mst_enc; - struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); - struct radeon_connector_atom_dig *dig_connector; - int bpp = 24; - - mst_enc = radeon_encoder->enc_priv; - - mst_enc->pbn = drm_dp_calc_pbn_mode(adjusted_mode->clock, bpp, false); - - mst_enc->primary->active_device = mst_enc->primary->devices & mst_enc->connector->devices; - DRM_DEBUG_KMS("setting active device to %08x from %08x %08x for encoder %d\n", - mst_enc->primary->active_device, mst_enc->primary->devices, - mst_enc->connector->devices, mst_enc->primary->base.encoder_type); - - - drm_mode_set_crtcinfo(adjusted_mode, 0); - dig_connector = mst_enc->connector->con_priv; - dig_connector->dp_lane_count = drm_dp_max_lane_count(dig_connector->dpcd); - dig_connector->dp_clock = drm_dp_max_link_rate(dig_connector->dpcd); - DRM_DEBUG_KMS("dig clock %p %d %d\n", dig_connector, - dig_connector->dp_lane_count, dig_connector->dp_clock); - return true; -} - -static void radeon_mst_encoder_prepare(struct drm_encoder *encoder) -{ - struct radeon_connector *radeon_connector; - struct radeon_encoder *radeon_encoder, *primary; - struct radeon_encoder_mst *mst_enc; - struct radeon_encoder_atom_dig 
*dig_enc; - - radeon_connector = radeon_mst_find_connector(encoder); - if (!radeon_connector) { - DRM_DEBUG_KMS("failed to find connector %p\n", encoder); - return; - } - radeon_encoder = to_radeon_encoder(encoder); - - radeon_mst_encoder_dpms(encoder, DRM_MODE_DPMS_OFF); - - mst_enc = radeon_encoder->enc_priv; - - primary = mst_enc->primary; - - dig_enc = primary->enc_priv; - - mst_enc->port = radeon_connector->port; - - if (dig_enc->dig_encoder == -1) { - dig_enc->dig_encoder = radeon_atom_pick_dig_encoder(&primary->base, -1); - primary->offset = radeon_atom_set_enc_offset(dig_enc->dig_encoder); - atombios_set_mst_encoder_crtc_source(encoder, dig_enc->dig_encoder); - - - } - DRM_DEBUG_KMS("%d %d\n", dig_enc->dig_encoder, primary->offset); -} - -static void -radeon_mst_encoder_mode_set(struct drm_encoder *encoder, - struct drm_display_mode *mode, - struct drm_display_mode *adjusted_mode) -{ - DRM_DEBUG_KMS("\n"); -} - -static void radeon_mst_encoder_commit(struct drm_encoder *encoder) -{ - radeon_mst_encoder_dpms(encoder, DRM_MODE_DPMS_ON); - DRM_DEBUG_KMS("\n"); -} - -static const struct drm_encoder_helper_funcs radeon_mst_helper_funcs = { - .dpms = radeon_mst_encoder_dpms, - .mode_fixup = radeon_mst_mode_fixup, - .prepare = radeon_mst_encoder_prepare, - .mode_set = radeon_mst_encoder_mode_set, - .commit = radeon_mst_encoder_commit, -}; - -static void radeon_dp_mst_encoder_destroy(struct drm_encoder *encoder) -{ - drm_encoder_cleanup(encoder); - kfree(encoder); -} - -static const struct drm_encoder_funcs radeon_dp_mst_enc_funcs = { - .destroy = radeon_dp_mst_encoder_destroy, -}; - -static struct radeon_encoder * -radeon_dp_create_fake_mst_encoder(struct radeon_connector *connector) -{ - struct drm_device *dev = connector->base.dev; - struct radeon_device *rdev = dev->dev_private; - struct radeon_encoder *radeon_encoder; - struct radeon_encoder_mst *mst_enc; - struct drm_encoder *encoder; - const struct drm_connector_helper_funcs *connector_funcs = connector->base.helper_private; - struct drm_encoder *enc_master = connector_funcs->best_encoder(&connector->base); - - DRM_DEBUG_KMS("enc master is %p\n", enc_master); - radeon_encoder = kzalloc(sizeof(*radeon_encoder), GFP_KERNEL); - if (!radeon_encoder) - return NULL; - - radeon_encoder->enc_priv = kzalloc(sizeof(*mst_enc), GFP_KERNEL); - if (!radeon_encoder->enc_priv) { - kfree(radeon_encoder); - return NULL; - } - encoder = &radeon_encoder->base; - switch (rdev->num_crtc) { - case 1: - encoder->possible_crtcs = 0x1; - break; - case 2: - default: - encoder->possible_crtcs = 0x3; - break; - case 4: - encoder->possible_crtcs = 0xf; - break; - case 6: - encoder->possible_crtcs = 0x3f; - break; - } - - drm_encoder_init(dev, &radeon_encoder->base, &radeon_dp_mst_enc_funcs, - DRM_MODE_ENCODER_DPMST, NULL); - drm_encoder_helper_add(encoder, &radeon_mst_helper_funcs); - - mst_enc = radeon_encoder->enc_priv; - mst_enc->connector = connector; - mst_enc->primary = to_radeon_encoder(enc_master); - radeon_encoder->is_mst_encoder = true; - return radeon_encoder; -} - -int -radeon_dp_mst_init(struct radeon_connector *radeon_connector) -{ - struct drm_device *dev = radeon_connector->base.dev; - int max_link_rate; - - if (!radeon_connector->ddc_bus->has_aux) - return 0; - - if (radeon_connector_is_dp12_capable(&radeon_connector->base)) - max_link_rate = 0x14; - else - max_link_rate = 0x0a; - - radeon_connector->mst_mgr.cbs = &mst_cbs; - return drm_dp_mst_topology_mgr_init(&radeon_connector->mst_mgr, dev, - &radeon_connector->ddc_bus->aux, 16, 6, - 4, 
drm_dp_bw_code_to_link_rate(max_link_rate), - radeon_connector->base.base.id); -} - -int -radeon_dp_mst_probe(struct radeon_connector *radeon_connector) -{ - struct radeon_connector_atom_dig *dig_connector = radeon_connector->con_priv; - struct drm_device *dev = radeon_connector->base.dev; - struct radeon_device *rdev = dev->dev_private; - int ret; - u8 msg[1]; - - if (!radeon_mst) - return 0; - - if (!ASIC_IS_DCE5(rdev)) - return 0; - - if (dig_connector->dpcd[DP_DPCD_REV] < 0x12) - return 0; - - ret = drm_dp_dpcd_read(&radeon_connector->ddc_bus->aux, DP_MSTM_CAP, msg, - 1); - if (ret) { - if (msg[0] & DP_MST_CAP) { - DRM_DEBUG_KMS("Sink is MST capable\n"); - dig_connector->is_mst = true; - } else { - DRM_DEBUG_KMS("Sink is not MST capable\n"); - dig_connector->is_mst = false; - } - - } - drm_dp_mst_topology_mgr_set_mst(&radeon_connector->mst_mgr, - dig_connector->is_mst); - return dig_connector->is_mst; -} - -int -radeon_dp_mst_check_status(struct radeon_connector *radeon_connector) -{ - struct radeon_connector_atom_dig *dig_connector = radeon_connector->con_priv; - int retry; - - if (dig_connector->is_mst) { - u8 esi[16] = { 0 }; - int dret; - int ret = 0; - bool handled; - - dret = drm_dp_dpcd_read(&radeon_connector->ddc_bus->aux, - DP_SINK_COUNT_ESI, esi, 8); -go_again: - if (dret == 8) { - DRM_DEBUG_KMS("got esi %3ph\n", esi); - ret = drm_dp_mst_hpd_irq(&radeon_connector->mst_mgr, esi, &handled); - - if (handled) { - for (retry = 0; retry < 3; retry++) { - int wret; - wret = drm_dp_dpcd_write(&radeon_connector->ddc_bus->aux, - DP_SINK_COUNT_ESI + 1, &esi[1], 3); - if (wret == 3) - break; - } - - dret = drm_dp_dpcd_read(&radeon_connector->ddc_bus->aux, - DP_SINK_COUNT_ESI, esi, 8); - if (dret == 8) { - DRM_DEBUG_KMS("got esi2 %3ph\n", esi); - goto go_again; - } - } else - ret = 0; - - return ret; - } else { - DRM_DEBUG_KMS("failed to get ESI - device may have failed %d\n", ret); - dig_connector->is_mst = false; - drm_dp_mst_topology_mgr_set_mst(&radeon_connector->mst_mgr, - dig_connector->is_mst); - /* send a hotplug event */ - } - } - return -EINVAL; -} - -#if defined(CONFIG_DEBUG_FS) - -static int radeon_debugfs_mst_info_show(struct seq_file *m, void *unused) -{ - struct radeon_device *rdev = (struct radeon_device *)m->private; - struct drm_device *dev = rdev->ddev; - struct drm_connector *connector; - struct radeon_connector *radeon_connector; - struct radeon_connector_atom_dig *dig_connector; - int i; - - drm_modeset_lock_all(dev); - list_for_each_entry(connector, &dev->mode_config.connector_list, head) { - if (connector->connector_type != DRM_MODE_CONNECTOR_DisplayPort) - continue; - - radeon_connector = to_radeon_connector(connector); - dig_connector = radeon_connector->con_priv; - if (radeon_connector->is_mst_connector) - continue; - if (!dig_connector->is_mst) - continue; - drm_dp_mst_dump_topology(m, &radeon_connector->mst_mgr); - - for (i = 0; i < radeon_connector->enabled_attribs; i++) - seq_printf(m, "attrib %d: %d %d\n", i, - radeon_connector->cur_stream_attribs[i].fe, - radeon_connector->cur_stream_attribs[i].slots); - } - drm_modeset_unlock_all(dev); - return 0; -} - -DEFINE_SHOW_ATTRIBUTE(radeon_debugfs_mst_info); -#endif - -void radeon_mst_debugfs_init(struct radeon_device *rdev) -{ -#if defined(CONFIG_DEBUG_FS) - struct dentry *root = rdev->ddev->primary->debugfs_root; - - debugfs_create_file("radeon_mst_info", 0444, root, rdev, - &radeon_debugfs_mst_info_fops); - -#endif -} diff --git a/drivers/gpu/drm/radeon/radeon_drv.c b/drivers/gpu/drm/radeon/radeon_drv.c index 
956c72b5aa33..a28d5ceab628 100644 --- a/drivers/gpu/drm/radeon/radeon_drv.c +++ b/drivers/gpu/drm/radeon/radeon_drv.c @@ -172,7 +172,6 @@ int radeon_use_pflipirq = 2; int radeon_bapm = -1; int radeon_backlight = -1; int radeon_auxch = -1; -int radeon_mst = 0; int radeon_uvd = 1; int radeon_vce = 1; @@ -263,9 +262,6 @@ module_param_named(backlight, radeon_backlight, int, 0444); MODULE_PARM_DESC(auxch, "Use native auxch experimental support (1 = enable, 0 = disable, -1 = auto)"); module_param_named(auxch, radeon_auxch, int, 0444); -MODULE_PARM_DESC(mst, "DisplayPort MST experimental support (1 = enable, 0 = disable)"); -module_param_named(mst, radeon_mst, int, 0444); - MODULE_PARM_DESC(uvd, "uvd enable/disable uvd support (1 = enable, 0 = disable)"); module_param_named(uvd, radeon_uvd, int, 0444); diff --git a/drivers/gpu/drm/radeon/radeon_encoders.c b/drivers/gpu/drm/radeon/radeon_encoders.c index 46549d5179ee..35c535e48b8d 100644 --- a/drivers/gpu/drm/radeon/radeon_encoders.c +++ b/drivers/gpu/drm/radeon/radeon_encoders.c @@ -244,16 +244,7 @@ radeon_get_connector_for_encoder(struct drm_encoder *encoder) list_for_each_entry(connector, &dev->mode_config.connector_list, head) { radeon_connector = to_radeon_connector(connector); - if (radeon_encoder->is_mst_encoder) { - struct radeon_encoder_mst *mst_enc; - - if (!radeon_connector->is_mst_connector) - continue; - - mst_enc = radeon_encoder->enc_priv; - if (mst_enc->connector == radeon_connector->mst_port) - return connector; - } else if (radeon_encoder->active_device & radeon_connector->devices) + if (radeon_encoder->active_device & radeon_connector->devices) return connector; } return NULL; @@ -399,9 +390,6 @@ bool radeon_dig_monitor_is_duallink(struct drm_encoder *encoder, case DRM_MODE_CONNECTOR_DVID: case DRM_MODE_CONNECTOR_HDMIA: case DRM_MODE_CONNECTOR_DisplayPort: - if (radeon_connector->is_mst_connector) - return false; - dig_connector = radeon_connector->con_priv; if ((dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT) || (dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_eDP)) diff --git a/drivers/gpu/drm/radeon/radeon_irq_kms.c b/drivers/gpu/drm/radeon/radeon_irq_kms.c index 3907785d0798..da2173435edd 100644 --- a/drivers/gpu/drm/radeon/radeon_irq_kms.c +++ b/drivers/gpu/drm/radeon/radeon_irq_kms.c @@ -100,16 +100,8 @@ static void radeon_hotplug_work_func(struct work_struct *work) static void radeon_dp_work_func(struct work_struct *work) { - struct radeon_device *rdev = container_of(work, struct radeon_device, - dp_work); - struct drm_device *dev = rdev->ddev; - struct drm_mode_config *mode_config = &dev->mode_config; - struct drm_connector *connector; - - /* this should take a mutex */ - list_for_each_entry(connector, &mode_config->connector_list, head) - radeon_connector_hotplug(connector); } + /** * radeon_driver_irq_preinstall_kms - drm irq preinstall callback * diff --git a/drivers/gpu/drm/radeon/radeon_mode.h b/drivers/gpu/drm/radeon/radeon_mode.h index 3485e7f142e9..fc98361bd7a3 100644 --- a/drivers/gpu/drm/radeon/radeon_mode.h +++ b/drivers/gpu/drm/radeon/radeon_mode.h @@ -31,7 +31,6 @@ #define RADEON_MODE_H #include <drm/display/drm_dp_helper.h> -#include <drm/display/drm_dp_mst_helper.h> #include <drm/drm_crtc.h> #include <drm/drm_edid.h> #include <drm/drm_encoder.h> @@ -440,24 +439,12 @@ struct radeon_encoder_atom_dig { int panel_mode; struct radeon_afmt *afmt; struct r600_audio_pin *pin; - int active_mst_links; }; struct radeon_encoder_atom_dac { enum radeon_tv_std tv_std; }; -struct radeon_encoder_mst { - 
int crtc; - struct radeon_encoder *primary; - struct radeon_connector *connector; - struct drm_dp_mst_port *port; - int pbn; - int fe; - bool fe_from_be; - bool enc_active; -}; - struct radeon_encoder { struct drm_encoder base; uint32_t encoder_enum; @@ -479,8 +466,6 @@ struct radeon_encoder { enum radeon_output_csc output_csc; bool can_mst; uint32_t offset; - bool is_mst_encoder; - /* front end for this mst encoder */ }; struct radeon_connector_atom_dig { @@ -491,7 +476,6 @@ struct radeon_connector_atom_dig { int dp_clock; int dp_lane_count; bool edp_on; - bool is_mst; }; struct radeon_gpio_rec { @@ -535,11 +519,6 @@ enum radeon_connector_dither { RADEON_FMT_DITHER_ENABLE = 1, }; -struct stream_attribs { - uint16_t fe; - uint16_t slots; -}; - struct radeon_connector { struct drm_connector base; uint32_t connector_id; @@ -562,14 +541,6 @@ struct radeon_connector { enum radeon_connector_audio audio; enum radeon_connector_dither dither; int pixelclock_for_modeset; - bool is_mst_connector; - struct radeon_connector *mst_port; - struct drm_dp_mst_port *port; - struct drm_dp_mst_topology_mgr mst_mgr; - - struct radeon_encoder *mst_encoder; - struct stream_attribs cur_stream_attribs[6]; - int enabled_attribs; }; #define ENCODER_MODE_IS_DP(em) (((em) == ATOM_ENCODER_MODE_DP) || \ @@ -771,8 +742,6 @@ extern void atombios_dig_transmitter_setup(struct drm_encoder *encoder, extern void atombios_dig_transmitter_setup2(struct drm_encoder *encoder, int action, uint8_t lane_num, uint8_t lane_set, int fe); -extern void atombios_set_mst_encoder_crtc_source(struct drm_encoder *encoder, - int fe); extern void radeon_atom_ext_encoder_setup_ddc(struct drm_encoder *encoder); extern struct drm_encoder *radeon_get_external_encoder(struct drm_encoder *encoder); void radeon_atom_copy_swap(u8 *dst, u8 *src, u8 num_bytes, bool to_le); @@ -990,15 +959,6 @@ void radeon_crtc_handle_flip(struct radeon_device *rdev, int crtc_id); int radeon_align_pitch(struct radeon_device *rdev, int width, int bpp, bool tiled); -/* mst */ -int radeon_dp_mst_init(struct radeon_connector *radeon_connector); -int radeon_dp_mst_probe(struct radeon_connector *radeon_connector); -int radeon_dp_mst_check_status(struct radeon_connector *radeon_connector); -void radeon_mst_debugfs_init(struct radeon_device *rdev); -void radeon_dp_mst_prepare_pll(struct drm_crtc *crtc, struct drm_display_mode *mode); - -void radeon_setup_mst_connector(struct drm_device *dev); - int radeon_atom_pick_dig_encoder(struct drm_encoder *encoder, int fe_idx); void radeon_atom_release_dig_encoder(struct radeon_device *rdev, int enc_idx); #endif -- 2.35.3
Lyude Paul
2022-Jun-07 19:29 UTC
[Nouveau] [RESEND RFC 18/18] drm/display/dp_mst: Move all payload info into the atomic state
Now that we've finally gotten rid of the non-atomic MST users leftover in the kernel, we can finally get rid of all of the legacy payload code we have and move as much as possible into the MST atomic state structs. The main purpose of this is to make the MST code a lot less confusing to work on, as there's a lot of duplicated logic that doesn't really need to be here. As well, this should make introducing features like fallback link retraining and DSC support far easier.

Since the old payload code was pretty gnarly and there are a lot of changes here, I expect this might be a bit difficult to review. So to make things as easy as possible for reviewers, I'll sum up how both the old and new code work (it took me a while to figure this out too!).

The old MST code basically worked by maintaining two different payload tables - proposed_vcpis and payloads. proposed_vcpis would hold the modified payloads we wanted to push to the topology, while payloads held the payload table that was currently programmed in hardware. Modifications to proposed_vcpis would be handled through drm_dp_mst_allocate_vcpi(), drm_dp_mst_deallocate_vcpi(), and drm_dp_mst_reset_vcpi_slots(). Then, they would be pushed via drm_dp_update_payload_part1() and drm_dp_update_payload_part2().

Furthermore, it's important to note how adding and removing VC payloads actually worked with drm_dp_update_payload_part1(). When a VC payload is removed from the VC table, all VC payloads which come after the removed VC payload's slots must have their time slots shifted towards the start of the table. The old code handled this by looping through the entire payload table and recomputing the start slot for every payload in the topology from scratch. While very much overkill, this ends up doing the right thing because we always order the VCPIs for payloads from first to last starting time slot.

It's also important to note that drm_dp_update_payload_part1() isn't actually limited to updating a single payload - the driver can use it to queue up multiple payload changes so that as many of them can be sent as possible before waiting for the ACT. drm_dp_update_payload_part2() is pretty self-explanatory and basically the same between the old and new code, save for the fact that we don't have a second step for deleting payloads anymore - and thus it gets renamed to drm_dp_add_payload_part2().

The new payload code stores all of the current payload info within the MST atomic state and computes as much of the state as possible ahead of time. This has the one exception of the starting time slots for payloads, which can't be determined at atomic check time since the starting time slots will vary depending on what order CRTCs are enabled in the atomic state - which varies from driver to driver. These are still stored in the atomic MST state, but are only copied from the old MST state during atomic commit time. Likewise, this is when new start slots are determined.

Adding/removing payloads now works much more closely to how things are described in the spec. When we delete a payload, we loop through the current list of payloads and update the start slots for any payloads whose time slots came after the payload we just deleted. Determining the starting time slots for new payloads being added is done by simply keeping track of where the end of the VC table is in drm_dp_mst_topology_mgr->next_start_slot. Additionally, it's worth noting that we no longer have a single update_payload() function. Instead, we now have drm_dp_add_payload_part1|2() and drm_dp_remove_payload().
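In case it helps with review, here's a rough sketch of how a driver's atomic check and commit hooks end up using the new helpers. This is not part of the patch - the foo_* names are made up and most error handling is elided - but it loosely mirrors the i915 and nouveau changes further down:

/*
 * Rough sketch only: hypothetical driver hooks using the reworked helpers.
 * foo_connector, foo_mst_atomic_check(), foo_mst_enable() and
 * foo_mst_disable() are placeholder names.
 */
#include <linux/err.h>
#include <drm/drm_atomic.h>
#include <drm/display/drm_dp_mst_helper.h>

struct foo_connector {
	struct drm_connector base;
	struct drm_dp_mst_topology_mgr *mgr;	/* topology the stream lives on */
	struct drm_dp_mst_port *port;		/* port the stream is routed to */
};

/* Atomic check: reserve time slots for the stream in the MST atomic state. */
static int foo_mst_atomic_check(struct foo_connector *conn,
				struct drm_atomic_state *state,
				const struct drm_display_mode *mode, int bpp)
{
	int pbn, slots;

	pbn = drm_dp_calc_pbn_mode(mode->clock, bpp, false);
	slots = drm_dp_atomic_find_time_slots(state, conn->mgr, conn->port, pbn);
	if (slots < 0)
		return slots;

	/* The driver's global atomic check still calls drm_dp_mst_atomic_check()
	 * to validate the final allocation. */
	return 0;
}

/* Atomic commit, enable path: start slots are only assigned here. */
static void foo_mst_enable(struct foo_connector *conn, struct drm_atomic_state *state)
{
	struct drm_dp_mst_topology_state *mst_state =
		drm_atomic_get_mst_topology_state(state, conn->mgr);
	struct drm_dp_mst_atomic_payload *payload =
		drm_atomic_get_mst_payload_state(mst_state, conn->port);

	/* Part 1: pick the start slot (mgr->next_start_slot) and write the
	 * payload table entry to the DPCD. */
	drm_dp_add_payload_part1(conn->mgr, mst_state, payload);

	/* ...enable the stream in hardware, then trigger and wait for ACT... */
	drm_dp_check_act_status(conn->mgr);

	/* Part 2: send the ALLOCATE_PAYLOAD sideband message. */
	drm_dp_add_payload_part2(conn->mgr, state, payload);
}

/* Atomic commit, disable path: a single call now removes the payload. */
static void foo_mst_disable(struct foo_connector *conn, struct drm_atomic_state *state)
{
	struct drm_dp_mst_topology_state *mst_state =
		drm_atomic_get_mst_topology_state(state, conn->mgr);

	drm_dp_remove_payload(conn->mgr, mst_state,
			      drm_atomic_get_mst_payload_state(mst_state, conn->port));
	/* ...then disable the stream and wait for ACT as before... */
}

As before, a driver that enables several streams in one commit can call drm_dp_add_payload_part1() for each of them, generate a single ACT, and only then call drm_dp_add_payload_part2() for each payload.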
It's now left up to the driver to figure out when to add or remove payloads. The driver already knows when it's disabling/enabling CRTCs, so it also already knows when payloads should be added or removed. And, this doesn't interfere with the ability to queue up multiple payload changes before waiting for the ACT.

Signed-off-by: Lyude Paul <lyude at redhat.com> Cc: Wayne Lin <Wayne.Lin at amd.com> Cc: Ville Syrjälä <ville.syrjala at linux.intel.com> Cc: Fangzhi Zuo <Jerry.Zuo at amd.com> Cc: Jani Nikula <jani.nikula at intel.com> Cc: Imre Deak <imre.deak at intel.com> Cc: Daniel Vetter <daniel.vetter at ffwll.ch> Cc: Sean Paul <sean at poorly.run> --- .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 56 +- .../amd/display/amdgpu_dm/amdgpu_dm_helpers.c | 107 +-- .../display/amdgpu_dm/amdgpu_dm_mst_types.c | 85 +-- .../amd/display/include/link_service_types.h | 7 + drivers/gpu/drm/display/drm_dp_mst_topology.c | 699 ++++++------------ drivers/gpu/drm/i915/display/intel_dp_mst.c | 64 +- drivers/gpu/drm/i915/display/intel_hdcp.c | 24 +- drivers/gpu/drm/nouveau/dispnv50/disp.c | 163 ++-- include/drm/display/drm_dp_mst_helper.h | 178 ++--- 9 files changed, 536 insertions(+), 847 deletions(-) diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c index ac8648e3c1c9..93d572ea3c48 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c @@ -7378,6 +7378,7 @@ static int dm_encoder_helper_atomic_check(struct drm_encoder *encoder, const struct drm_display_mode *adjusted_mode = &crtc_state->adjusted_mode; struct drm_dp_mst_topology_mgr *mst_mgr; struct drm_dp_mst_port *mst_port; + struct drm_dp_mst_topology_state *mst_state; enum dc_color_depth color_depth; int clock, bpp = 0; bool is_y420 = false; @@ -7391,6 +7392,13 @@ static int dm_encoder_helper_atomic_check(struct drm_encoder *encoder, if (!crtc_state->connectors_changed && !crtc_state->mode_changed) return 0; + mst_state = drm_atomic_get_mst_topology_state(state, mst_mgr); + if (IS_ERR(mst_state)) + return PTR_ERR(mst_state); + + if (!mst_state->pbn_div) + mst_state->pbn_div = dm_mst_get_pbn_divider(aconnector->mst_port->dc_link); + if (!state->duplicated) { int max_bpc = conn_state->max_requested_bpc; is_y420 = drm_mode_is_420_also(&connector->display_info, adjusted_mode) && @@ -7402,11 +7410,10 @@ static int dm_encoder_helper_atomic_check(struct drm_encoder *encoder, clock = adjusted_mode->clock; dm_new_connector_state->pbn = drm_dp_calc_pbn_mode(clock, bpp, false); } - dm_new_connector_state->vcpi_slots = drm_dp_atomic_find_time_slots(state, - mst_mgr, - mst_port, - dm_new_connector_state->pbn, - dm_mst_get_pbn_divider(aconnector->dc_link)); + + dm_new_connector_state->vcpi_slots + drm_dp_atomic_find_time_slots(state, mst_mgr, mst_port, + dm_new_connector_state->pbn); if (dm_new_connector_state->vcpi_slots < 0) { DRM_DEBUG_ATOMIC("failed finding vcpi slots: %d\n", (int)dm_new_connector_state->vcpi_slots); return dm_new_connector_state->vcpi_slots; @@ -7476,18 +7483,12 @@ static int dm_update_mst_vcpi_slots_for_dsc(struct drm_atomic_state *state, dm_conn_state->pbn = pbn; dm_conn_state->vcpi_slots = slot_num; - drm_dp_mst_atomic_enable_dsc(state, - aconnector->port, - dm_conn_state->pbn, - 0, + drm_dp_mst_atomic_enable_dsc(state, aconnector->port, dm_conn_state->pbn, false); continue; } - vcpi = drm_dp_mst_atomic_enable_dsc(state,
- aconnector->port, - pbn, pbn_div, - true); + vcpi = drm_dp_mst_atomic_enable_dsc(state, aconnector->port, pbn, true); if (vcpi < 0) return vcpi; @@ -10943,8 +10944,6 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev, struct dm_crtc_state *dm_old_crtc_state, *dm_new_crtc_state; #if defined(CONFIG_DRM_AMD_DC_DCN) struct dsc_mst_fairness_vars vars[MAX_PIPES]; - struct drm_dp_mst_topology_state *mst_state; - struct drm_dp_mst_topology_mgr *mgr; #endif trace_amdgpu_dm_atomic_check_begin(state); @@ -11180,33 +11179,6 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev, lock_and_validation_needed = true; } -#if defined(CONFIG_DRM_AMD_DC_DCN) - /* set the slot info for each mst_state based on the link encoding format */ - for_each_new_mst_mgr_in_state(state, mgr, mst_state, i) { - struct amdgpu_dm_connector *aconnector; - struct drm_connector *connector; - struct drm_connector_list_iter iter; - u8 link_coding_cap; - - if (!mgr->mst_state ) - continue; - - drm_connector_list_iter_begin(dev, &iter); - drm_for_each_connector_iter(connector, &iter) { - int id = connector->index; - - if (id == mst_state->mgr->conn_base_id) { - aconnector = to_amdgpu_dm_connector(connector); - link_coding_cap = dc_link_dp_mst_decide_link_encoding_format(aconnector->dc_link); - drm_dp_mst_update_slots(mst_state, link_coding_cap); - - break; - } - } - drm_connector_list_iter_end(&iter); - - } -#endif /** * Streams and planes are reset when there are changes that affect * bandwidth. Anything that affects bandwidth needs to go through diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c index 1eaacab0334b..f843fd86787f 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c @@ -27,6 +27,7 @@ #include <linux/acpi.h> #include <linux/i2c.h> +#include <drm/drm_atomic.h> #include <drm/drm_probe_helper.h> #include <drm/amdgpu_drm.h> #include <drm/drm_edid.h> @@ -154,40 +155,32 @@ enum dc_edid_status dm_helpers_parse_edid_caps( } static void -fill_dc_mst_payload_table_from_drm(struct amdgpu_dm_connector *aconnector, - struct dc_dp_mst_stream_allocation_table *proposed_table) +fill_dc_mst_payload_table_from_drm(struct drm_dp_mst_topology_state *mst_state, + struct amdgpu_dm_connector *aconnector, + struct dc_dp_mst_stream_allocation_table *table) { + struct dc_dp_mst_stream_allocation_table new_table = { 0 }; + struct dc_dp_mst_stream_allocation *sa; + struct drm_dp_mst_atomic_payload *payload; int i; - struct drm_dp_mst_topology_mgr *mst_mgr - &aconnector->mst_port->mst_mgr; - mutex_lock(&mst_mgr->payload_lock); - - proposed_table->stream_count = 0; - - /* number of active streams */ - for (i = 0; i < mst_mgr->max_payloads; i++) { - if (mst_mgr->payloads[i].num_slots == 0) - break; /* end of vcp_id table */ - - ASSERT(mst_mgr->payloads[i].payload_state !- DP_PAYLOAD_DELETE_LOCAL); - - if (mst_mgr->payloads[i].payload_state == DP_PAYLOAD_LOCAL || - mst_mgr->payloads[i].payload_state =- DP_PAYLOAD_REMOTE) { - - struct dc_dp_mst_stream_allocation *sa - &proposed_table->stream_allocations[ - proposed_table->stream_count]; - - sa->slot_count = mst_mgr->payloads[i].num_slots; - sa->vcp_id = mst_mgr->proposed_vcpis[i]->vcpi; - proposed_table->stream_count++; - } + /* Copy over payloads */ + list_for_each_entry(payload, &mst_state->payloads, next) { + if (payload->delete) + continue; + + sa = &new_table.stream_allocations[new_table.stream_count]; + sa->slot_count = 
payload->time_slots; + sa->vcp_id = payload->vcpi; + sa->port = payload->port; + drm_dp_mst_get_port_malloc(sa->port); + new_table.stream_count++; } - mutex_unlock(&mst_mgr->payload_lock); + /* Release the old table, and copy over the new one */ + for (i = 0; i < table->stream_count; i++) + drm_dp_mst_put_port_malloc(table->stream_allocations[i].port); + *table = new_table; } void dm_helpers_dp_update_branch_info( @@ -205,11 +198,9 @@ bool dm_helpers_dp_mst_write_payload_allocation_table( bool enable) { struct amdgpu_dm_connector *aconnector; - struct dm_connector_state *dm_conn_state; + struct drm_dp_mst_topology_state *mst_state; + struct drm_dp_mst_atomic_payload *payload; struct drm_dp_mst_topology_mgr *mst_mgr; - struct drm_dp_mst_port *mst_port; - bool ret; - u8 link_coding_cap = DP_8b_10b_ENCODING; aconnector = (struct amdgpu_dm_connector *)stream->dm_stream_context; /* Accessing the connector state is required for vcpi_slots allocation @@ -220,40 +211,21 @@ bool dm_helpers_dp_mst_write_payload_allocation_table( if (!aconnector || !aconnector->mst_port) return false; - dm_conn_state = to_dm_connector_state(aconnector->base.state); - mst_mgr = &aconnector->mst_port->mst_mgr; - - if (!mst_mgr->mst_state) - return false; - - mst_port = aconnector->port; - -#if defined(CONFIG_DRM_AMD_DC_DCN) - link_coding_cap = dc_link_dp_mst_decide_link_encoding_format(aconnector->dc_link); -#endif - - if (enable) { - - ret = drm_dp_mst_allocate_vcpi(mst_mgr, mst_port, - dm_conn_state->pbn, - dm_conn_state->vcpi_slots); - if (!ret) - return false; - - } else { - drm_dp_mst_reset_vcpi_slots(mst_mgr, mst_port); - } + mst_state = to_drm_dp_mst_topology_state(mst_mgr->base.state); /* It's OK for this to fail */ - drm_dp_update_payload_part1(mst_mgr, (link_coding_cap == DP_CAP_ANSI_128B132B) ? 0:1); + payload = drm_atomic_get_mst_payload_state(mst_state, aconnector->port); + if (enable) + drm_dp_add_payload_part1(mst_mgr, mst_state, payload); + else + drm_dp_remove_payload(mst_mgr, mst_state, payload); /* mst_mgr->->payloads are VC payload notify MST branch using DPCD or * AUX message. The sequence is slot 1-63 allocated sequence for each * stream. AMD ASIC stream slot allocation should follow the same * sequence. 
copy DRM MST allocation to dc */ - - fill_dc_mst_payload_table_from_drm(aconnector, proposed_table); + fill_dc_mst_payload_table_from_drm(mst_state, aconnector, proposed_table); return true; } @@ -310,26 +282,23 @@ bool dm_helpers_dp_mst_send_payload_allocation( bool enable) { struct amdgpu_dm_connector *aconnector; + struct drm_dp_mst_topology_state *mst_state; struct drm_dp_mst_topology_mgr *mst_mgr; - struct drm_dp_mst_port *mst_port; + struct drm_dp_mst_atomic_payload *payload; aconnector = (struct amdgpu_dm_connector *)stream->dm_stream_context; if (!aconnector || !aconnector->mst_port) return false; - mst_port = aconnector->port; - mst_mgr = &aconnector->mst_port->mst_mgr; - - if (!mst_mgr->mst_state) - return false; + mst_state = to_drm_dp_mst_topology_state(mst_mgr->base.state); /* It's OK for this to fail */ - drm_dp_update_payload_part2(mst_mgr); - - if (!enable) - drm_dp_mst_deallocate_vcpi(mst_mgr, mst_port); + if (enable) { + payload = drm_atomic_get_mst_payload_state(mst_state, aconnector->port); + drm_dp_add_payload_part2(mst_mgr, mst_state->base.state, payload); + } return true; } diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c index b447c453b58d..18de4a98df40 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c @@ -501,15 +501,8 @@ void amdgpu_dm_initialize_dp_connector(struct amdgpu_display_manager *dm, dc_link_dp_get_max_link_enc_cap(aconnector->dc_link, &max_link_enc_cap); aconnector->mst_mgr.cbs = &dm_mst_cbs; - drm_dp_mst_topology_mgr_init( - &aconnector->mst_mgr, - adev_to_drm(dm->adev), - &aconnector->dm_dp_aux.aux, - 16, - 4, - max_link_enc_cap.lane_count, - drm_dp_bw_code_to_link_rate(max_link_enc_cap.link_rate), - aconnector->connector_id); + drm_dp_mst_topology_mgr_init(&aconnector->mst_mgr, adev_to_drm(dm->adev), + &aconnector->dm_dp_aux.aux, 16, 4, aconnector->connector_id); drm_connector_attach_dp_subconnector_property(&aconnector->base); } @@ -614,6 +607,7 @@ static int bpp_x16_from_pbn(struct dsc_mst_fairness_params param, int pbn) } static void increase_dsc_bpp(struct drm_atomic_state *state, + struct drm_dp_mst_topology_state *mst_state, struct dc_link *dc_link, struct dsc_mst_fairness_params *params, struct dsc_mst_fairness_vars *vars, @@ -626,12 +620,9 @@ static void increase_dsc_bpp(struct drm_atomic_state *state, int min_initial_slack; int next_index; int remaining_to_increase = 0; - int pbn_per_timeslot; int link_timeslots_used; int fair_pbn_alloc; - pbn_per_timeslot = dm_mst_get_pbn_divider(dc_link); - for (i = 0; i < count; i++) { if (vars[i + k].dsc_enabled) { initial_slack[i] @@ -662,17 +653,17 @@ static void increase_dsc_bpp(struct drm_atomic_state *state, link_timeslots_used = 0; for (i = 0; i < count; i++) - link_timeslots_used += DIV_ROUND_UP(vars[i + k].pbn, pbn_per_timeslot); + link_timeslots_used += DIV_ROUND_UP(vars[i + k].pbn, mst_state->pbn_div); - fair_pbn_alloc = (63 - link_timeslots_used) / remaining_to_increase * pbn_per_timeslot; + fair_pbn_alloc + (63 - link_timeslots_used) / remaining_to_increase * mst_state->pbn_div; if (initial_slack[next_index] > fair_pbn_alloc) { vars[next_index].pbn += fair_pbn_alloc; if (drm_dp_atomic_find_time_slots(state, params[next_index].port->mgr, params[next_index].port, - vars[next_index].pbn, - pbn_per_timeslot) < 0) + vars[next_index].pbn) < 0) return; if (!drm_dp_mst_atomic_check(state)) { vars[next_index].bpp_x16 = 
bpp_x16_from_pbn(params[next_index], vars[next_index].pbn); @@ -681,8 +672,7 @@ static void increase_dsc_bpp(struct drm_atomic_state *state, if (drm_dp_atomic_find_time_slots(state, params[next_index].port->mgr, params[next_index].port, - vars[next_index].pbn, - pbn_per_timeslot) < 0) + vars[next_index].pbn) < 0) return; } } else { @@ -690,8 +680,7 @@ static void increase_dsc_bpp(struct drm_atomic_state *state, if (drm_dp_atomic_find_time_slots(state, params[next_index].port->mgr, params[next_index].port, - vars[next_index].pbn, - pbn_per_timeslot) < 0) + vars[next_index].pbn) < 0) return; if (!drm_dp_mst_atomic_check(state)) { vars[next_index].bpp_x16 = params[next_index].bw_range.max_target_bpp_x16; @@ -700,8 +689,7 @@ static void increase_dsc_bpp(struct drm_atomic_state *state, if (drm_dp_atomic_find_time_slots(state, params[next_index].port->mgr, params[next_index].port, - vars[next_index].pbn, - pbn_per_timeslot) < 0) + vars[next_index].pbn) < 0) return; } } @@ -757,8 +745,7 @@ static void try_disable_dsc(struct drm_atomic_state *state, if (drm_dp_atomic_find_time_slots(state, params[next_index].port->mgr, params[next_index].port, - vars[next_index].pbn, - dm_mst_get_pbn_divider(dc_link)) < 0) + vars[next_index].pbn) < 0) return; if (!drm_dp_mst_atomic_check(state)) { @@ -769,8 +756,7 @@ static void try_disable_dsc(struct drm_atomic_state *state, if (drm_dp_atomic_find_time_slots(state, params[next_index].port->mgr, params[next_index].port, - vars[next_index].pbn, - dm_mst_get_pbn_divider(dc_link)) < 0) + vars[next_index].pbn) < 0) return; } @@ -783,17 +769,27 @@ static bool compute_mst_dsc_configs_for_link(struct drm_atomic_state *state, struct dc_state *dc_state, struct dc_link *dc_link, struct dsc_mst_fairness_vars *vars, + struct drm_dp_mst_topology_mgr *mgr, int *link_vars_start_index) { - int i, k; struct dc_stream_state *stream; struct dsc_mst_fairness_params params[MAX_PIPES]; struct amdgpu_dm_connector *aconnector; + struct drm_dp_mst_topology_state *mst_state = drm_atomic_get_mst_topology_state(state, mgr); int count = 0; + int i, k; bool debugfs_overwrite = false; memset(params, 0, sizeof(params)); + if (IS_ERR(mst_state)) + return false; + + mst_state->pbn_div = dm_mst_get_pbn_divider(dc_link); +#if defined(CONFIG_DRM_AMD_DC_DCN) + drm_dp_mst_update_slots(mst_state, dc_link_dp_mst_decide_link_encoding_format(dc_link)); +#endif + /* Set up params */ for (i = 0; i < dc_state->stream_count; i++) { struct dc_dsc_policy dsc_policy = {0}; @@ -852,11 +848,8 @@ static bool compute_mst_dsc_configs_for_link(struct drm_atomic_state *state, vars[i + k].pbn = kbps_to_peak_pbn(params[i].bw_range.stream_kbps); vars[i + k].dsc_enabled = false; vars[i + k].bpp_x16 = 0; - if (drm_dp_atomic_find_time_slots(state, - params[i].port->mgr, - params[i].port, - vars[i + k].pbn, - dm_mst_get_pbn_divider(dc_link)) < 0) + if (drm_dp_atomic_find_time_slots(state, params[i].port->mgr, params[i].port, + vars[i + k].pbn) < 0) return false; } if (!drm_dp_mst_atomic_check(state) && !debugfs_overwrite) { @@ -870,21 +863,15 @@ static bool compute_mst_dsc_configs_for_link(struct drm_atomic_state *state, vars[i + k].pbn = kbps_to_peak_pbn(params[i].bw_range.min_kbps); vars[i + k].dsc_enabled = true; vars[i + k].bpp_x16 = params[i].bw_range.min_target_bpp_x16; - if (drm_dp_atomic_find_time_slots(state, - params[i].port->mgr, - params[i].port, - vars[i + k].pbn, - dm_mst_get_pbn_divider(dc_link)) < 0) + if (drm_dp_atomic_find_time_slots(state, params[i].port->mgr, + params[i].port, vars[i + k].pbn) < 0) return 
false; } else { vars[i + k].pbn = kbps_to_peak_pbn(params[i].bw_range.stream_kbps); vars[i + k].dsc_enabled = false; vars[i + k].bpp_x16 = 0; - if (drm_dp_atomic_find_time_slots(state, - params[i].port->mgr, - params[i].port, - vars[i + k].pbn, - dm_mst_get_pbn_divider(dc_link)) < 0) + if (drm_dp_atomic_find_time_slots(state, params[i].port->mgr, + params[i].port, vars[i + k].pbn) < 0) return false; } } @@ -892,7 +879,7 @@ static bool compute_mst_dsc_configs_for_link(struct drm_atomic_state *state, return false; /* Optimize degree of compression */ - increase_dsc_bpp(state, dc_link, params, vars, count, k); + increase_dsc_bpp(state, mst_state, dc_link, params, vars, count, k); try_disable_dsc(state, dc_link, params, vars, count, k); @@ -1036,9 +1023,9 @@ bool compute_mst_dsc_configs_for_state(struct drm_atomic_state *state, continue; mutex_lock(&aconnector->mst_mgr.lock); - if (!compute_mst_dsc_configs_for_link(state, dc_state, stream->link, - vars, &link_vars_start_index)) { - mutex_unlock(&aconnector->mst_mgr.lock); + if (!compute_mst_dsc_configs_for_link(state, dc_state, stream->link, vars, + &aconnector->mst_mgr, + &link_vars_start_index)) { return false; } mutex_unlock(&aconnector->mst_mgr.lock); @@ -1095,10 +1082,8 @@ static bool continue; mutex_lock(&aconnector->mst_mgr.lock); - if (!compute_mst_dsc_configs_for_link(state, - dc_state, - stream->link, - vars, + if (!compute_mst_dsc_configs_for_link(state, dc_state, stream->link, vars, + &aconnector->mst_mgr, &link_vars_start_index)) { mutex_unlock(&aconnector->mst_mgr.lock); return false; diff --git a/drivers/gpu/drm/amd/display/include/link_service_types.h b/drivers/gpu/drm/amd/display/include/link_service_types.h index 91bffc5bf52c..143baab54b41 100644 --- a/drivers/gpu/drm/amd/display/include/link_service_types.h +++ b/drivers/gpu/drm/amd/display/include/link_service_types.h @@ -250,12 +250,19 @@ union dpcd_training_lane_set { * _ONLY_ be filled out from DM and then passed to DC, do NOT use these for _any_ kind of atomic * state calculations in DM, or you will break something. */ + +struct drm_dp_mst_port; + /* DP MST stream allocation (payload bandwidth number) */ struct dc_dp_mst_stream_allocation { uint8_t vcp_id; /* number of slots required for the DP stream in * transport packet */ uint8_t slot_count; + /* The MST port this is on, this is used to associate DC MST payloads with their + * respective DRM payloads allocations, and can be ignored on non-Linux. 
+ */ + struct drm_dp_mst_port *port; }; /* DP MST stream allocation table */ diff --git a/drivers/gpu/drm/display/drm_dp_mst_topology.c b/drivers/gpu/drm/display/drm_dp_mst_topology.c index 10d26a7e028c..d8e32269585b 100644 --- a/drivers/gpu/drm/display/drm_dp_mst_topology.c +++ b/drivers/gpu/drm/display/drm_dp_mst_topology.c @@ -67,8 +67,7 @@ static bool dump_dp_payload_table(struct drm_dp_mst_topology_mgr *mgr, static void drm_dp_mst_topology_put_port(struct drm_dp_mst_port *port); static int drm_dp_dpcd_write_payload(struct drm_dp_mst_topology_mgr *mgr, - int id, - struct drm_dp_payload *payload); + int id, u8 start_slot, u8 num_slots); static int drm_dp_send_dpcd_read(struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port, @@ -1234,57 +1233,6 @@ build_query_stream_enc_status(struct drm_dp_sideband_msg_tx *msg, u8 stream_id, return 0; } -static int drm_dp_mst_assign_payload_id(struct drm_dp_mst_topology_mgr *mgr, - struct drm_dp_vcpi *vcpi) -{ - int ret, vcpi_ret; - - mutex_lock(&mgr->payload_lock); - ret = find_first_zero_bit(&mgr->payload_mask, mgr->max_payloads + 1); - if (ret > mgr->max_payloads) { - ret = -EINVAL; - drm_dbg_kms(mgr->dev, "out of payload ids %d\n", ret); - goto out_unlock; - } - - vcpi_ret = find_first_zero_bit(&mgr->vcpi_mask, mgr->max_payloads + 1); - if (vcpi_ret > mgr->max_payloads) { - ret = -EINVAL; - drm_dbg_kms(mgr->dev, "out of vcpi ids %d\n", ret); - goto out_unlock; - } - - set_bit(ret, &mgr->payload_mask); - set_bit(vcpi_ret, &mgr->vcpi_mask); - vcpi->vcpi = vcpi_ret + 1; - mgr->proposed_vcpis[ret - 1] = vcpi; -out_unlock: - mutex_unlock(&mgr->payload_lock); - return ret; -} - -static void drm_dp_mst_put_payload_id(struct drm_dp_mst_topology_mgr *mgr, - int vcpi) -{ - int i; - - if (vcpi == 0) - return; - - mutex_lock(&mgr->payload_lock); - drm_dbg_kms(mgr->dev, "putting payload %d\n", vcpi); - clear_bit(vcpi - 1, &mgr->vcpi_mask); - - for (i = 0; i < mgr->max_payloads; i++) { - if (mgr->proposed_vcpis[i] && - mgr->proposed_vcpis[i]->vcpi == vcpi) { - mgr->proposed_vcpis[i] = NULL; - clear_bit(i + 1, &mgr->payload_mask); - } - } - mutex_unlock(&mgr->payload_lock); -} - static bool check_txmsg_state(struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_sideband_msg_tx *txmsg) { @@ -1737,7 +1685,7 @@ drm_dp_mst_dump_port_topology_history(struct drm_dp_mst_port *port) {} #define save_port_topology_ref(port, type) #endif -static struct drm_dp_mst_atomic_payload * +struct drm_dp_mst_atomic_payload * drm_atomic_get_mst_payload_state(struct drm_dp_mst_topology_state *state, struct drm_dp_mst_port *port) { @@ -1749,6 +1697,7 @@ drm_atomic_get_mst_payload_state(struct drm_dp_mst_topology_state *state, return NULL; } +EXPORT_SYMBOL(drm_atomic_get_mst_payload_state); static void drm_dp_destroy_mst_branch_device(struct kref *kref) { @@ -3272,6 +3221,8 @@ int drm_dp_send_query_stream_enc_status(struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port, struct drm_dp_query_stream_enc_status_ack_reply *status) { + struct drm_dp_mst_topology_state *state; + struct drm_dp_mst_atomic_payload *payload; struct drm_dp_sideband_msg_tx *txmsg; u8 nonce[7]; int ret; @@ -3288,6 +3239,10 @@ int drm_dp_send_query_stream_enc_status(struct drm_dp_mst_topology_mgr *mgr, get_random_bytes(nonce, sizeof(nonce)); + drm_modeset_lock(&mgr->base.lock, NULL); + state = to_drm_dp_mst_topology_state(mgr->base.state); + payload = drm_atomic_get_mst_payload_state(state, port); + /* * "Source device targets the QUERY_STREAM_ENCRYPTION_STATUS message * transaction at the MST Branch 
device directly connected to the @@ -3295,7 +3250,7 @@ int drm_dp_send_query_stream_enc_status(struct drm_dp_mst_topology_mgr *mgr, */ txmsg->dst = mgr->mst_primary; - build_query_stream_enc_status(txmsg, port->vcpi.vcpi, nonce); + build_query_stream_enc_status(txmsg, payload->vcpi, nonce); drm_dp_queue_down_tx(mgr, txmsg); @@ -3312,6 +3267,7 @@ int drm_dp_send_query_stream_enc_status(struct drm_dp_mst_topology_mgr *mgr, memcpy(status, &txmsg->reply.u.enc_status, sizeof(*status)); out: + drm_modeset_unlock(&mgr->base.lock); drm_dp_mst_topology_put_port(port); out_get_port: kfree(txmsg); @@ -3320,219 +3276,161 @@ int drm_dp_send_query_stream_enc_status(struct drm_dp_mst_topology_mgr *mgr, EXPORT_SYMBOL(drm_dp_send_query_stream_enc_status); static int drm_dp_create_payload_step1(struct drm_dp_mst_topology_mgr *mgr, - int id, - struct drm_dp_payload *payload) + struct drm_dp_mst_atomic_payload *payload) { - int ret; - - ret = drm_dp_dpcd_write_payload(mgr, id, payload); - if (ret < 0) { - payload->payload_state = 0; - return ret; - } - payload->payload_state = DP_PAYLOAD_LOCAL; - return 0; + return drm_dp_dpcd_write_payload(mgr, payload->vcpi, payload->vc_start_slot, + payload->time_slots); } static int drm_dp_create_payload_step2(struct drm_dp_mst_topology_mgr *mgr, - struct drm_dp_mst_port *port, - int id, - struct drm_dp_payload *payload) + struct drm_dp_mst_atomic_payload *payload) { int ret; + struct drm_dp_mst_port *port = drm_dp_mst_topology_get_port_validated(mgr, payload->port); - ret = drm_dp_payload_send_msg(mgr, port, id, port->vcpi.pbn); - if (ret < 0) - return ret; - payload->payload_state = DP_PAYLOAD_REMOTE; + if (!port) + return -EIO; + + ret = drm_dp_payload_send_msg(mgr, port, payload->vcpi, payload->pbn); + drm_dp_mst_topology_put_port(port); return ret; } static int drm_dp_destroy_payload_step1(struct drm_dp_mst_topology_mgr *mgr, - struct drm_dp_mst_port *port, - int id, - struct drm_dp_payload *payload) + struct drm_dp_mst_topology_state *mst_state, + struct drm_dp_mst_atomic_payload *payload) { + drm_dbg_kms(mgr->dev, "\n"); - /* it's okay for these to fail */ - if (port) { - drm_dp_payload_send_msg(mgr, port, id, 0); - } - drm_dp_dpcd_write_payload(mgr, id, payload); - payload->payload_state = DP_PAYLOAD_DELETE_LOCAL; - return 0; -} + /* it's okay for these to fail */ + drm_dp_payload_send_msg(mgr, payload->port, payload->vcpi, 0); + drm_dp_dpcd_write_payload(mgr, payload->vcpi, payload->vc_start_slot, 0); -static int drm_dp_destroy_payload_step2(struct drm_dp_mst_topology_mgr *mgr, - int id, - struct drm_dp_payload *payload) -{ - payload->payload_state = 0; return 0; } /** - * drm_dp_update_payload_part1() - Execute payload update part 1 - * @mgr: manager to use. - * @start_slot: this is the cur slot + * drm_dp_add_payload_part1() - Execute payload update part 1 + * @mgr: Manager to use. + * @mst_state: The MST atomic state + * @payload: The payload to write * - * NOTE: start_slot is a temporary workaround for non-atomic drivers, - * this will be removed when non-atomic mst helpers are moved out of the helper + * Determines the starting time slot for the given payload, and programs the VCPI for this payload + * into hardware. * - * This iterates over all proposed virtual channels, and tries to - * allocate space in the link for them. For 0->slots transitions, - * this step just writes the VCPI to the MST device. For slots->0 - * transitions, this writes the updated VCPIs and removes the - * remote VC payloads. 
+ * After calling this, the driver should generate ACT and payload packets. Note that the driver may + * generate ACT once per payload change, or only once after all new payloads in a given atomic state + * have been added. * - * after calling this the driver should generate ACT and payload - * packets. + * Returns: 0 on success, error code on failure. In the event that this fails, + * @payload.vc_start_slot will also be set to -1. */ -int drm_dp_update_payload_part1(struct drm_dp_mst_topology_mgr *mgr, int start_slot) +int drm_dp_add_payload_part1(struct drm_dp_mst_topology_mgr *mgr, + struct drm_dp_mst_topology_state *mst_state, + struct drm_dp_mst_atomic_payload *payload) { - struct drm_dp_payload req_payload; struct drm_dp_mst_port *port; - int i, j; - int cur_slots = start_slot; - - mutex_lock(&mgr->payload_lock); - for (i = 0; i < mgr->max_payloads; i++) { - struct drm_dp_vcpi *vcpi = mgr->proposed_vcpis[i]; - struct drm_dp_payload *payload = &mgr->payloads[i]; - bool put_port = false; - - /* solve the current payloads - compare to the hw ones - - update the hw view */ - req_payload.start_slot = cur_slots; - if (vcpi) { - port = container_of(vcpi, struct drm_dp_mst_port, - vcpi); - - /* Validated ports don't matter if we're releasing - * VCPI - */ - if (vcpi->num_slots) { - port = drm_dp_mst_topology_get_port_validated( - mgr, port); - if (!port) { - if (vcpi->num_slots == payload->num_slots) { - cur_slots += vcpi->num_slots; - payload->start_slot = req_payload.start_slot; - continue; - } else { - drm_dbg_kms(mgr->dev, - "Fail:set payload to invalid sink"); - mutex_unlock(&mgr->payload_lock); - return -EINVAL; - } - } - put_port = true; - } + int ret; - req_payload.num_slots = vcpi->num_slots; - req_payload.vcpi = vcpi->vcpi; - } else { - port = NULL; - req_payload.num_slots = 0; - } + port = drm_dp_mst_topology_get_port_validated(mgr, payload->port); + if (!port) + return 0; - payload->start_slot = req_payload.start_slot; - /* work out what is required to happen with this payload */ - if (payload->num_slots != req_payload.num_slots) { - - /* need to push an update for this payload */ - if (req_payload.num_slots) { - drm_dp_create_payload_step1(mgr, vcpi->vcpi, - &req_payload); - payload->num_slots = req_payload.num_slots; - payload->vcpi = req_payload.vcpi; - - } else if (payload->num_slots) { - payload->num_slots = 0; - drm_dp_destroy_payload_step1(mgr, port, - payload->vcpi, - payload); - req_payload.payload_state - payload->payload_state; - payload->start_slot = 0; - } - payload->payload_state = req_payload.payload_state; - } - cur_slots += req_payload.num_slots; + if (mgr->payload_count == 0) + mgr->next_start_slot = mst_state->start_slot; - if (put_port) - drm_dp_mst_topology_put_port(port); + payload->vc_start_slot = mgr->next_start_slot; + ret = drm_dp_create_payload_step1(mgr, payload); + drm_dp_mst_topology_put_port(port); + if (ret < 0) { + drm_warn(mgr->dev, "Failed to create MST payload for port %p: %d\n", + payload->port, ret); + payload->vc_start_slot = -1; + return ret; } - for (i = 0; i < mgr->max_payloads; /* do nothing */) { - if (mgr->payloads[i].payload_state != DP_PAYLOAD_DELETE_LOCAL) { - i++; - continue; - } + mgr->payload_count++; + mgr->next_start_slot += payload->time_slots; - drm_dbg_kms(mgr->dev, "removing payload %d\n", i); - for (j = i; j < mgr->max_payloads - 1; j++) { - mgr->payloads[j] = mgr->payloads[j + 1]; - mgr->proposed_vcpis[j] = mgr->proposed_vcpis[j + 1]; + return 0; +} +EXPORT_SYMBOL(drm_dp_add_payload_part1); - if (mgr->proposed_vcpis[j] && 
- mgr->proposed_vcpis[j]->num_slots) { - set_bit(j + 1, &mgr->payload_mask); - } else { - clear_bit(j + 1, &mgr->payload_mask); - } - } +/** + * drm_dp_remove_payload() - Remove an MST payload + * @mgr: Manager to use. + * @mst_state: The MST atomic state + * @payload: The payload to write + * + * Removes a payload from an MST topology if it was successfully assigned a start slot. Also updates + * the starting time slots of all other payloads which would have been shifted towards the start of + * the VC table as a result. + * + * After calling this, the driver should generate ACT and payload packets. Note that the driver may + * generate ACT once per payload change, or only once after all payloads for a given atomic state + * have been removed. + */ +void drm_dp_remove_payload(struct drm_dp_mst_topology_mgr *mgr, + struct drm_dp_mst_topology_state *mst_state, + struct drm_dp_mst_atomic_payload *payload) +{ + struct drm_dp_mst_atomic_payload *pos; - memset(&mgr->payloads[mgr->max_payloads - 1], 0, - sizeof(struct drm_dp_payload)); - mgr->proposed_vcpis[mgr->max_payloads - 1] = NULL; - clear_bit(mgr->max_payloads, &mgr->payload_mask); + /* We failed to make the payload, so nothing to do */ + if (payload->vc_start_slot == -1) + return; + + drm_dp_destroy_payload_step1(mgr, mst_state, payload); + + list_for_each_entry(pos, &mst_state->payloads, next) { + if (pos != payload && pos->vc_start_slot > payload->vc_start_slot) + pos->vc_start_slot -= payload->time_slots; } - mutex_unlock(&mgr->payload_lock); + payload->vc_start_slot = -1; - return 0; + mgr->payload_count--; + mgr->next_start_slot -= payload->time_slots; } -EXPORT_SYMBOL(drm_dp_update_payload_part1); +EXPORT_SYMBOL(drm_dp_remove_payload); /** - * drm_dp_update_payload_part2() - Execute payload update part 2 - * @mgr: manager to use. + * drm_dp_add_payload_part2() - Execute payload update part 2 + * @mgr: Manager to use. + * @state: The global atomic state + * @payload: The payload to update + * + * If @payload was successfully assigned a starting time slot by drm_dp_add_payload_part1(), this + * function will send the sideband messages to finish allocating this payload. * - * This iterates over all proposed virtual channels, and tries to - * allocate space in the link for them. For 0->slots transitions, - * this step writes the remote VC payload commands. For slots->0 - * this just resets some internal state. + * Returns: 0 on success, negative error code on failure. 
*/ -int drm_dp_update_payload_part2(struct drm_dp_mst_topology_mgr *mgr) +int drm_dp_add_payload_part2(struct drm_dp_mst_topology_mgr *mgr, + struct drm_atomic_state *state, + struct drm_dp_mst_atomic_payload *payload) { - struct drm_dp_mst_port *port; - int i; int ret = 0; - mutex_lock(&mgr->payload_lock); - for (i = 0; i < mgr->max_payloads; i++) { - - if (!mgr->proposed_vcpis[i]) - continue; - - port = container_of(mgr->proposed_vcpis[i], struct drm_dp_mst_port, vcpi); + /* Skip failed payloads */ + if (payload->vc_start_slot == -1) { + drm_dbg_kms(state->dev, "Part 1 of payload creation for %s failed, skipping part 2\n", + payload->port->connector->name); + return -EIO; + } - drm_dbg_kms(mgr->dev, "payload %d %d\n", i, mgr->payloads[i].payload_state); - if (mgr->payloads[i].payload_state == DP_PAYLOAD_LOCAL) { - ret = drm_dp_create_payload_step2(mgr, port, mgr->proposed_vcpis[i]->vcpi, &mgr->payloads[i]); - } else if (mgr->payloads[i].payload_state == DP_PAYLOAD_DELETE_LOCAL) { - ret = drm_dp_destroy_payload_step2(mgr, mgr->proposed_vcpis[i]->vcpi, &mgr->payloads[i]); - } - if (ret) { - mutex_unlock(&mgr->payload_lock); - return ret; - } + ret = drm_dp_create_payload_step2(mgr, payload); + if (ret < 0) { + if (!payload->delete) + drm_err(mgr->dev, "Step 2 of creating MST payload for %p failed: %d\n", + payload->port, ret); + else + drm_dbg_kms(mgr->dev, "Step 2 of removing MST payload for %p failed: %d\n", + payload->port, ret); } - mutex_unlock(&mgr->payload_lock); - return 0; + + return ret; } -EXPORT_SYMBOL(drm_dp_update_payload_part2); +EXPORT_SYMBOL(drm_dp_add_payload_part2); static int drm_dp_send_dpcd_read(struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port, @@ -3712,7 +3610,6 @@ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool ms int ret = 0; struct drm_dp_mst_branch *mstb = NULL; - mutex_lock(&mgr->payload_lock); mutex_lock(&mgr->lock); if (mst_state == mgr->mst_state) goto out_unlock; @@ -3720,10 +3617,6 @@ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool ms mgr->mst_state = mst_state; /* set the device into MST mode */ if (mst_state) { - struct drm_dp_payload reset_pay; - int lane_count; - int link_rate; - WARN_ON(mgr->mst_primary); /* get dpcd info */ @@ -3734,16 +3627,6 @@ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool ms goto out_unlock; } - lane_count = min_t(int, mgr->dpcd[2] & DP_MAX_LANE_COUNT_MASK, mgr->max_lane_count); - link_rate = min_t(int, drm_dp_bw_code_to_link_rate(mgr->dpcd[1]), mgr->max_link_rate); - mgr->pbn_div = drm_dp_get_vc_payload_bw(mgr, - link_rate, - lane_count); - if (mgr->pbn_div == 0) { - ret = -EINVAL; - goto out_unlock; - } - /* add initial branch device at LCT 1 */ mstb = drm_dp_add_mst_branch_device(1, NULL); if (mstb == NULL) { @@ -3763,9 +3646,8 @@ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool ms if (ret < 0) goto out_unlock; - reset_pay.start_slot = 0; - reset_pay.num_slots = 0x3f; - drm_dp_dpcd_write_payload(mgr, 0, &reset_pay); + /* Write reset payload */ + drm_dp_dpcd_write_payload(mgr, 0, 0, 0x3f); queue_work(system_long_wq, &mgr->work); @@ -3777,19 +3659,11 @@ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool ms /* this can fail if the device is gone */ drm_dp_dpcd_writeb(mgr->aux, DP_MSTM_CTRL, 0); ret = 0; - memset(mgr->payloads, 0, - mgr->max_payloads * sizeof(mgr->payloads[0])); - memset(mgr->proposed_vcpis, 0, - mgr->max_payloads * sizeof(mgr->proposed_vcpis[0])); - 
mgr->payload_mask = 0; - set_bit(0, &mgr->payload_mask); - mgr->vcpi_mask = 0; mgr->payload_id_table_cleared = false; } out_unlock: mutex_unlock(&mgr->lock); - mutex_unlock(&mgr->payload_lock); if (mstb) drm_dp_mst_topology_put_mstb(mstb); return ret; @@ -4310,62 +4184,18 @@ struct edid *drm_dp_mst_get_edid(struct drm_connector *connector, struct drm_dp_ } EXPORT_SYMBOL(drm_dp_mst_get_edid); -/** - * drm_dp_find_vcpi_slots() - Find time slots for this PBN value - * @mgr: manager to use - * @pbn: payload bandwidth to convert into slots. - * - * Calculate the number of time slots that will be required for the given PBN - * value. This function is deprecated, and should not be used in atomic - * drivers. - * - * RETURNS: - * The total slots required for this port, or error. - */ -int drm_dp_find_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr, - int pbn) -{ - int num_slots; - - num_slots = DIV_ROUND_UP(pbn, mgr->pbn_div); - - /* max. time slots - one slot for MTP header */ - if (num_slots > 63) - return -ENOSPC; - return num_slots; -} -EXPORT_SYMBOL(drm_dp_find_vcpi_slots); - -static int drm_dp_init_vcpi(struct drm_dp_mst_topology_mgr *mgr, - struct drm_dp_vcpi *vcpi, int pbn, int slots) -{ - int ret; - - vcpi->pbn = pbn; - vcpi->aligned_pbn = slots * mgr->pbn_div; - vcpi->num_slots = slots; - - ret = drm_dp_mst_assign_payload_id(mgr, vcpi); - if (ret < 0) - return ret; - return 0; -} - /** * drm_dp_atomic_find_time_slots() - Find and add time slots to the state * @state: global atomic state * @mgr: MST topology manager for the port * @port: port to find time slots for * @pbn: bandwidth required for the mode in PBN - * @pbn_div: divider for DSC mode that takes FEC into account * - * Allocates time slots to @port, replacing any previous timeslot allocations it - * may have had. Any atomic drivers which support MST must call this function - * in their &drm_encoder_helper_funcs.atomic_check() callback to change the - * current timeslot allocation for the new state, but only when - * &drm_crtc_state.mode_changed or &drm_crtc_state.connectors_changed is set - * to ensure compatibility with userspace applications that still use the - * legacy modesetting UAPI. + * Allocates time slots to @port, replacing any previous time slot allocations it may + * have had. Any atomic drivers which support MST must call this function in + * their &drm_encoder_helper_funcs.atomic_check() callback unconditionally to + * change the current time slot allocation for the new state, and ensure the MST + * atomic state is added whenever the state of payloads in the topology changes. * * Allocations set by this function are not checked against the bandwidth * restraints of @mgr until the driver calls drm_dp_mst_atomic_check(). 
@@ -4384,8 +4214,7 @@ static int drm_dp_init_vcpi(struct drm_dp_mst_topology_mgr *mgr, */ int drm_dp_atomic_find_time_slots(struct drm_atomic_state *state, struct drm_dp_mst_topology_mgr *mgr, - struct drm_dp_mst_port *port, int pbn, - int pbn_div) + struct drm_dp_mst_port *port, int pbn) { struct drm_dp_mst_topology_state *topology_state; struct drm_dp_mst_atomic_payload *payload = NULL; @@ -4418,10 +4247,7 @@ int drm_dp_atomic_find_time_slots(struct drm_atomic_state *state, } } - if (pbn_div <= 0) - pbn_div = mgr->pbn_div; - - req_slots = DIV_ROUND_UP(pbn, pbn_div); + req_slots = DIV_ROUND_UP(pbn, topology_state->pbn_div); drm_dbg_atomic(mgr->dev, "[CONNECTOR:%d:%s] [MST PORT:%p] TU %d -> %d\n", port->connector->base.id, port->connector->name, @@ -4430,7 +4256,7 @@ int drm_dp_atomic_find_time_slots(struct drm_atomic_state *state, port->connector->base.id, port->connector->name, port, prev_bw, pbn); - /* Add the new allocation to the state */ + /* Add the new allocation to the state, note the VCPI isn't assigned until the end */ if (!payload) { payload = kzalloc(sizeof(*payload), GFP_KERNEL); if (!payload) @@ -4438,6 +4264,7 @@ int drm_dp_atomic_find_time_slots(struct drm_atomic_state *state, drm_dp_mst_get_port_malloc(port); payload->port = port; + payload->vc_start_slot = -1; list_add(&payload->next, &topology_state->payloads); } payload->time_slots = req_slots; @@ -4454,10 +4281,12 @@ EXPORT_SYMBOL(drm_dp_atomic_find_time_slots); * @port: The port to release the time slots from * * Releases any time slots that have been allocated to a port in the atomic - * state. Any atomic drivers which support MST must call this function in - * their &drm_connector_helper_funcs.atomic_check() callback when the - * connector will no longer have VCPI allocated (e.g. because its CRTC was - * removed) when it had VCPI allocated in the previous atomic state. + * state. Any atomic drivers which support MST must call this function + * unconditionally in their &drm_connector_helper_funcs.atomic_check() callback. + * This helper will check whether time slots would be released by the new state and + * respond accordingly, along with ensuring the MST state is always added to the + * atomic state whenever a new state would modify the state of payloads on the + * topology. * * It is OK to call this even if @port has been removed from the system. 
* Additionally, it is OK to call this function multiple times on the same @@ -4519,6 +4348,7 @@ int drm_dp_atomic_release_time_slots(struct drm_atomic_state *state, drm_dp_mst_put_port_malloc(port); payload->pbn = 0; payload->delete = true; + topology_state->payload_mask &= ~BIT(payload->vcpi - 1); } return 0; @@ -4569,7 +4399,8 @@ int drm_dp_mst_atomic_setup_commit(struct drm_atomic_state *state) EXPORT_SYMBOL(drm_dp_mst_atomic_setup_commit); /** - * drm_dp_mst_atomic_wait_for_dependencies() - Wait for all pending commits on MST topologies + * drm_dp_mst_atomic_wait_for_dependencies() - Wait for all pending commits on MST topologies, + * prepare new MST state for commit * @state: global atomic state * * Goes through any MST topologies in this atomic state, and waits for any pending commits which @@ -4587,17 +4418,30 @@ EXPORT_SYMBOL(drm_dp_mst_atomic_setup_commit); */ void drm_dp_mst_atomic_wait_for_dependencies(struct drm_atomic_state *state) { - struct drm_dp_mst_topology_state *old_mst_state; + struct drm_dp_mst_topology_state *old_mst_state, *new_mst_state; struct drm_dp_mst_topology_mgr *mgr; + struct drm_dp_mst_atomic_payload *old_payload, *new_payload; int i, j, ret; - for_each_old_mst_mgr_in_state(state, mgr, old_mst_state, i) { + for_each_oldnew_mst_mgr_in_state(state, mgr, old_mst_state, new_mst_state, i) { for (j = 0; j < old_mst_state->num_commit_deps; j++) { ret = drm_crtc_commit_wait(old_mst_state->commit_deps[j]); if (ret < 0) drm_err(state->dev, "Failed to wait for %s: %d\n", old_mst_state->commit_deps[j]->crtc->name, ret); } + + /* Now that previous state is committed, it's safe to copy over the start slot + * assignments + */ + list_for_each_entry(old_payload, &old_mst_state->payloads, next) { + if (old_payload->delete) + continue; + + new_payload = drm_atomic_get_mst_payload_state(new_mst_state, + old_payload->port); + new_payload->vc_start_slot = old_payload->vc_start_slot; + } } } EXPORT_SYMBOL(drm_dp_mst_atomic_wait_for_dependencies); @@ -4682,110 +4526,8 @@ void drm_dp_mst_update_slots(struct drm_dp_mst_topology_state *mst_state, uint8_ } EXPORT_SYMBOL(drm_dp_mst_update_slots); -/** - * drm_dp_mst_allocate_vcpi() - Allocate a virtual channel - * @mgr: manager for this port - * @port: port to allocate a virtual channel for. - * @pbn: payload bandwidth number to request - * @slots: returned number of slots for this PBN. 
- */ -bool drm_dp_mst_allocate_vcpi(struct drm_dp_mst_topology_mgr *mgr, - struct drm_dp_mst_port *port, int pbn, int slots) -{ - int ret; - - if (slots < 0) - return false; - - port = drm_dp_mst_topology_get_port_validated(mgr, port); - if (!port) - return false; - - if (port->vcpi.vcpi > 0) { - drm_dbg_kms(mgr->dev, - "payload: vcpi %d already allocated for pbn %d - requested pbn %d\n", - port->vcpi.vcpi, port->vcpi.pbn, pbn); - if (pbn == port->vcpi.pbn) { - drm_dp_mst_topology_put_port(port); - return true; - } - } - - ret = drm_dp_init_vcpi(mgr, &port->vcpi, pbn, slots); - if (ret) { - drm_dbg_kms(mgr->dev, "failed to init time slots=%d ret=%d\n", - DIV_ROUND_UP(pbn, mgr->pbn_div), ret); - drm_dp_mst_topology_put_port(port); - goto out; - } - drm_dbg_kms(mgr->dev, "initing vcpi for pbn=%d slots=%d\n", pbn, port->vcpi.num_slots); - - /* Keep port allocated until its payload has been removed */ - drm_dp_mst_get_port_malloc(port); - drm_dp_mst_topology_put_port(port); - return true; -out: - return false; -} -EXPORT_SYMBOL(drm_dp_mst_allocate_vcpi); - -int drm_dp_mst_get_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port) -{ - int slots = 0; - - port = drm_dp_mst_topology_get_port_validated(mgr, port); - if (!port) - return slots; - - slots = port->vcpi.num_slots; - drm_dp_mst_topology_put_port(port); - return slots; -} -EXPORT_SYMBOL(drm_dp_mst_get_vcpi_slots); - -/** - * drm_dp_mst_reset_vcpi_slots() - Reset number of slots to 0 for VCPI - * @mgr: manager for this port - * @port: unverified pointer to a port. - * - * This just resets the number of slots for the ports VCPI for later programming. - */ -void drm_dp_mst_reset_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port) -{ - /* - * A port with VCPI will remain allocated until its VCPI is - * released, no verified ref needed - */ - - port->vcpi.num_slots = 0; -} -EXPORT_SYMBOL(drm_dp_mst_reset_vcpi_slots); - -/** - * drm_dp_mst_deallocate_vcpi() - deallocate a VCPI - * @mgr: manager for this port - * @port: port to deallocate vcpi for - * - * This can be called unconditionally, regardless of whether - * drm_dp_mst_allocate_vcpi() succeeded or not. 
- */ -void drm_dp_mst_deallocate_vcpi(struct drm_dp_mst_topology_mgr *mgr, - struct drm_dp_mst_port *port) -{ - if (!port->vcpi.vcpi) - return; - - drm_dp_mst_put_payload_id(mgr, port->vcpi.vcpi); - port->vcpi.num_slots = 0; - port->vcpi.pbn = 0; - port->vcpi.aligned_pbn = 0; - port->vcpi.vcpi = 0; - drm_dp_mst_put_port_malloc(port); -} -EXPORT_SYMBOL(drm_dp_mst_deallocate_vcpi); - static int drm_dp_dpcd_write_payload(struct drm_dp_mst_topology_mgr *mgr, - int id, struct drm_dp_payload *payload) + int id, u8 start_slot, u8 num_slots) { u8 payload_alloc[3], status; int ret; @@ -4795,8 +4537,8 @@ static int drm_dp_dpcd_write_payload(struct drm_dp_mst_topology_mgr *mgr, DP_PAYLOAD_TABLE_UPDATED); payload_alloc[0] = id; - payload_alloc[1] = payload->start_slot; - payload_alloc[2] = payload->num_slots; + payload_alloc[1] = start_slot; + payload_alloc[2] = num_slots; ret = drm_dp_dpcd_write(mgr->aux, DP_PAYLOAD_ALLOCATE_SET, payload_alloc, 3); if (ret != 3) { @@ -5011,8 +4753,9 @@ static void fetch_monitor_name(struct drm_dp_mst_topology_mgr *mgr, void drm_dp_mst_dump_topology(struct seq_file *m, struct drm_dp_mst_topology_mgr *mgr) { - int i; - struct drm_dp_mst_port *port; + struct drm_dp_mst_topology_state *state; + struct drm_dp_mst_atomic_payload *payload; + int i, ret; mutex_lock(&mgr->lock); if (mgr->mst_primary) @@ -5021,36 +4764,35 @@ void drm_dp_mst_dump_topology(struct seq_file *m, /* dump VCPIs */ mutex_unlock(&mgr->lock); - mutex_lock(&mgr->payload_lock); - seq_printf(m, "\n*** VCPI Info ***\n"); - seq_printf(m, "payload_mask: %lx, vcpi_mask: %lx, max_payloads: %d\n", mgr->payload_mask, mgr->vcpi_mask, mgr->max_payloads); + ret = drm_modeset_lock_single_interruptible(&mgr->base.lock); + if (ret < 0) + return; + + state = to_drm_dp_mst_topology_state(mgr->base.state); + seq_printf(m, "\n*** Atomic state info ***\n"); + seq_printf(m, "payload_mask: %x, max_payloads: %d, start_slot: %u, pbn_div: %d\n", + state->payload_mask, mgr->max_payloads, state->start_slot, state->pbn_div); - seq_printf(m, "\n| idx | port # | vcp_id | # slots | sink name |\n"); + seq_printf(m, "\n| idx | port | vcpi | slots | pbn | dsc | sink name |\n"); for (i = 0; i < mgr->max_payloads; i++) { - if (mgr->proposed_vcpis[i]) { + list_for_each_entry(payload, &state->payloads, next) { char name[14]; - port = container_of(mgr->proposed_vcpis[i], struct drm_dp_mst_port, vcpi); - fetch_monitor_name(mgr, port, name, sizeof(name)); - seq_printf(m, "%10d%10d%10d%10d%20s\n", + if (payload->vcpi != i || payload->delete) + continue; + + fetch_monitor_name(mgr, payload->port, name, sizeof(name)); + seq_printf(m, " %5d %6d %6d %02d - %02d %5d %5s %19s\n", i, - port->port_num, - port->vcpi.vcpi, - port->vcpi.num_slots, + payload->port->port_num, + payload->vcpi, + payload->vc_start_slot, + payload->vc_start_slot + payload->time_slots - 1, + payload->pbn, + payload->dsc_enabled ? "Y" : "N", (*name != 0) ? 
name : "Unknown"); - } else - seq_printf(m, "%6d - Unused\n", i); - } - seq_printf(m, "\n*** Payload Info ***\n"); - seq_printf(m, "| idx | state | start slot | # slots |\n"); - for (i = 0; i < mgr->max_payloads; i++) { - seq_printf(m, "%10d%10d%15d%10d\n", - i, - mgr->payloads[i].payload_state, - mgr->payloads[i].start_slot, - mgr->payloads[i].num_slots); + } } - mutex_unlock(&mgr->payload_lock); seq_printf(m, "\n*** DPCD Info ***\n"); mutex_lock(&mgr->lock); @@ -5097,7 +4839,7 @@ void drm_dp_mst_dump_topology(struct seq_file *m, out: mutex_unlock(&mgr->lock); - + drm_modeset_unlock(&mgr->base.lock); } EXPORT_SYMBOL(drm_dp_mst_dump_topology); @@ -5418,9 +5160,22 @@ drm_dp_mst_atomic_check_payload_alloc_limits(struct drm_dp_mst_topology_mgr *mgr mgr, mst_state, mgr->max_payloads); return -EINVAL; } + + /* Assign a VCPI */ + if (!payload->vcpi) { + payload->vcpi = ffz(mst_state->payload_mask) + 1; + drm_dbg_atomic(mgr->dev, "[MST PORT:%p] assigned VCPI #%d\n", + payload->port, payload->vcpi); + mst_state->payload_mask |= BIT(payload->vcpi - 1); + } } - drm_dbg_atomic(mgr->dev, "[MST MGR:%p] mst state %p TU avail=%d used=%d\n", - mgr, mst_state, avail_slots, mst_state->total_avail_slots - avail_slots); + + if (!payload_count) + mst_state->pbn_div = 0; + + drm_dbg_atomic(mgr->dev, "[MST MGR:%p] mst state %p TU pbn_div=%d avail=%d used=%d\n", + mgr, mst_state, mst_state->pbn_div, avail_slots, + mst_state->total_avail_slots - avail_slots); return 0; } @@ -5491,7 +5246,6 @@ EXPORT_SYMBOL(drm_dp_mst_add_affected_dsc_crtcs); * @state: Pointer to the new drm_atomic_state * @port: Pointer to the affected MST Port * @pbn: Newly recalculated bw required for link with DSC enabled - * @pbn_div: Divider to calculate correct number of pbn per slot * @enable: Boolean flag to enable or disable DSC on the port * * This function enables DSC on the given Port @@ -5502,8 +5256,7 @@ EXPORT_SYMBOL(drm_dp_mst_add_affected_dsc_crtcs); */ int drm_dp_mst_atomic_enable_dsc(struct drm_atomic_state *state, struct drm_dp_mst_port *port, - int pbn, int pbn_div, - bool enable) + int pbn, bool enable) { struct drm_dp_mst_topology_state *mst_state; struct drm_dp_mst_atomic_payload *payload; @@ -5529,7 +5282,7 @@ int drm_dp_mst_atomic_enable_dsc(struct drm_atomic_state *state, } if (enable) { - time_slots = drm_dp_atomic_find_time_slots(state, port->mgr, port, pbn, pbn_div); + time_slots = drm_dp_atomic_find_time_slots(state, port->mgr, port, pbn); drm_dbg_atomic(state->dev, "[MST PORT:%p] Enabling DSC flag, reallocating %d time slots on the port\n", port, time_slots); @@ -5542,6 +5295,7 @@ int drm_dp_mst_atomic_enable_dsc(struct drm_atomic_state *state, return time_slots; } EXPORT_SYMBOL(drm_dp_mst_atomic_enable_dsc); + /** * drm_dp_mst_atomic_check - Check that the new state of an MST topology in an * atomic update is valid @@ -5599,7 +5353,6 @@ EXPORT_SYMBOL(drm_dp_mst_topology_state_funcs); /** * drm_atomic_get_mst_topology_state: get MST topology state - * * @state: global atomic state * @mgr: MST topology manager, also the private object in this case * @@ -5619,6 +5372,31 @@ struct drm_dp_mst_topology_state *drm_atomic_get_mst_topology_state(struct drm_a } EXPORT_SYMBOL(drm_atomic_get_mst_topology_state); +/** + * drm_atomic_get_new_mst_topology_state: get new MST topology state in atomic state, if any + * @state: global atomic state + * @mgr: MST topology manager, also the private object in this case + * + * This function wraps drm_atomic_get_priv_obj_state() passing in the MST atomic + * state vtable so that the 
private object state returned is that of a MST + * topology object. + * + * Returns: + * + * The MST topology state, or NULL if there's no topology state for this MST mgr + * in the global atomic state + */ +struct drm_dp_mst_topology_state * +drm_atomic_get_new_mst_topology_state(struct drm_atomic_state *state, + struct drm_dp_mst_topology_mgr *mgr) +{ + struct drm_private_state *priv_state + drm_atomic_get_new_private_obj_state(state, &mgr->base); + + return priv_state ? to_dp_mst_topology_state(priv_state) : NULL; +} +EXPORT_SYMBOL(drm_atomic_get_new_mst_topology_state); + /** * drm_dp_mst_topology_mgr_init - initialise a topology manager * @mgr: manager struct to initialise @@ -5626,8 +5404,6 @@ EXPORT_SYMBOL(drm_atomic_get_mst_topology_state); * @aux: DP helper aux channel to talk to this device * @max_dpcd_transaction_bytes: hw specific DPCD transaction limit * @max_payloads: maximum number of payloads this GPU can source - * @max_lane_count: maximum number of lanes this GPU supports - * @max_link_rate: maximum link rate per lane this GPU supports in kHz * @conn_base_id: the connector object ID the MST device is connected to. * * Return 0 for success, or negative error code on failure @@ -5635,14 +5411,12 @@ EXPORT_SYMBOL(drm_atomic_get_mst_topology_state); int drm_dp_mst_topology_mgr_init(struct drm_dp_mst_topology_mgr *mgr, struct drm_device *dev, struct drm_dp_aux *aux, int max_dpcd_transaction_bytes, int max_payloads, - int max_lane_count, int max_link_rate, int conn_base_id) { struct drm_dp_mst_topology_state *mst_state; mutex_init(&mgr->lock); mutex_init(&mgr->qlock); - mutex_init(&mgr->payload_lock); mutex_init(&mgr->delayed_destroy_lock); mutex_init(&mgr->up_req_lock); mutex_init(&mgr->probe_lock); @@ -5672,19 +5446,7 @@ int drm_dp_mst_topology_mgr_init(struct drm_dp_mst_topology_mgr *mgr, mgr->aux = aux; mgr->max_dpcd_transaction_bytes = max_dpcd_transaction_bytes; mgr->max_payloads = max_payloads; - mgr->max_lane_count = max_lane_count; - mgr->max_link_rate = max_link_rate; mgr->conn_base_id = conn_base_id; - if (max_payloads + 1 > sizeof(mgr->payload_mask) * 8 || - max_payloads + 1 > sizeof(mgr->vcpi_mask) * 8) - return -EINVAL; - mgr->payloads = kcalloc(max_payloads, sizeof(struct drm_dp_payload), GFP_KERNEL); - if (!mgr->payloads) - return -ENOMEM; - mgr->proposed_vcpis = kcalloc(max_payloads, sizeof(struct drm_dp_vcpi *), GFP_KERNEL); - if (!mgr->proposed_vcpis) - return -ENOMEM; - set_bit(0, &mgr->payload_mask); mst_state = kzalloc(sizeof(*mst_state), GFP_KERNEL); if (mst_state == NULL) @@ -5717,19 +5479,12 @@ void drm_dp_mst_topology_mgr_destroy(struct drm_dp_mst_topology_mgr *mgr) destroy_workqueue(mgr->delayed_destroy_wq); mgr->delayed_destroy_wq = NULL; } - mutex_lock(&mgr->payload_lock); - kfree(mgr->payloads); - mgr->payloads = NULL; - kfree(mgr->proposed_vcpis); - mgr->proposed_vcpis = NULL; - mutex_unlock(&mgr->payload_lock); mgr->dev = NULL; mgr->aux = NULL; drm_atomic_private_obj_fini(&mgr->base); mgr->funcs = NULL; mutex_destroy(&mgr->delayed_destroy_lock); - mutex_destroy(&mgr->payload_lock); mutex_destroy(&mgr->qlock); mutex_destroy(&mgr->lock); mutex_destroy(&mgr->up_req_lock); diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c index 4b0af3c26176..ec389e3caf24 100644 --- a/drivers/gpu/drm/i915/display/intel_dp_mst.c +++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c @@ -52,6 +52,7 @@ static int intel_dp_mst_compute_link_config(struct intel_encoder *encoder, struct drm_atomic_state *state = 
crtc_state->uapi.state; struct intel_dp_mst_encoder *intel_mst = enc_to_mst(encoder); struct intel_dp *intel_dp = &intel_mst->primary->dp; + struct drm_dp_mst_topology_state *mst_state; struct intel_connector *connector to_intel_connector(conn_state->connector); struct drm_i915_private *i915 = to_i915(connector->base.dev); @@ -60,22 +61,28 @@ static int intel_dp_mst_compute_link_config(struct intel_encoder *encoder, bool constant_n = drm_dp_has_quirk(&intel_dp->desc, DP_DPCD_QUIRK_CONSTANT_N); int bpp, slots = -EINVAL; + mst_state = drm_atomic_get_mst_topology_state(state, &intel_dp->mst_mgr); + if (IS_ERR(mst_state)) + return PTR_ERR(mst_state); + crtc_state->lane_count = limits->max_lane_count; crtc_state->port_clock = limits->max_rate; + // TODO: Handle pbn_div changes by adding a new MST helper + if (!mst_state->pbn_div) { + mst_state->pbn_div = drm_dp_get_vc_payload_bw(&intel_dp->mst_mgr, + limits->max_rate, + limits->max_lane_count); + } + for (bpp = limits->max_bpp; bpp >= limits->min_bpp; bpp -= 2 * 3) { crtc_state->pipe_bpp = bpp; crtc_state->pbn = drm_dp_calc_pbn_mode(adjusted_mode->crtc_clock, crtc_state->pipe_bpp, false); - slots = drm_dp_atomic_find_time_slots(state, &intel_dp->mst_mgr, - connector->port, - crtc_state->pbn, - drm_dp_get_vc_payload_bw(&intel_dp->mst_mgr, - crtc_state->port_clock, - crtc_state->lane_count)); + connector->port, crtc_state->pbn); if (slots == -EDEADLK) return slots; if (slots >= 0) @@ -360,21 +367,17 @@ static void intel_mst_disable_dp(struct intel_atomic_state *state, struct intel_dp *intel_dp = &dig_port->dp; struct intel_connector *connector to_intel_connector(old_conn_state->connector); + struct drm_dp_mst_topology_state *mst_state + drm_atomic_get_mst_topology_state(&state->base, &intel_dp->mst_mgr); struct drm_i915_private *i915 = to_i915(connector->base.dev); - int start_slot = intel_dp_is_uhbr(old_crtc_state) ? 0 : 1; - int ret; drm_dbg_kms(&i915->drm, "active links %d\n", intel_dp->active_mst_links); intel_hdcp_disable(intel_mst->connector); - drm_dp_mst_reset_vcpi_slots(&intel_dp->mst_mgr, connector->port); - - ret = drm_dp_update_payload_part1(&intel_dp->mst_mgr, start_slot); - if (ret) { - drm_dbg_kms(&i915->drm, "failed to update payload %d\n", ret); - } + drm_dp_remove_payload(&intel_dp->mst_mgr, mst_state, + drm_atomic_get_mst_payload_state(mst_state, connector->port)); intel_audio_codec_disable(encoder, old_crtc_state, old_conn_state); } @@ -402,8 +405,6 @@ static void intel_mst_post_disable_dp(struct intel_atomic_state *state, intel_disable_transcoder(old_crtc_state); - drm_dp_update_payload_part2(&intel_dp->mst_mgr); - clear_act_sent(encoder, old_crtc_state); intel_de_rmw(dev_priv, TRANS_DDI_FUNC_CTL(old_crtc_state->cpu_transcoder), @@ -411,8 +412,6 @@ static void intel_mst_post_disable_dp(struct intel_atomic_state *state, wait_for_act_sent(encoder, old_crtc_state); - drm_dp_mst_deallocate_vcpi(&intel_dp->mst_mgr, connector->port); - intel_ddi_disable_transcoder_func(old_crtc_state); if (DISPLAY_VER(dev_priv) >= 9) @@ -479,7 +478,8 @@ static void intel_mst_pre_enable_dp(struct intel_atomic_state *state, struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); struct intel_connector *connector to_intel_connector(conn_state->connector); - int start_slot = intel_dp_is_uhbr(pipe_config) ? 
0 : 1; + struct drm_dp_mst_topology_state *mst_state + drm_atomic_get_new_mst_topology_state(&state->base, &intel_dp->mst_mgr); int ret; bool first_mst_stream; @@ -505,16 +505,13 @@ static void intel_mst_pre_enable_dp(struct intel_atomic_state *state, dig_port->base.pre_enable(state, &dig_port->base, pipe_config, NULL); - ret = drm_dp_mst_allocate_vcpi(&intel_dp->mst_mgr, - connector->port, - pipe_config->pbn, - pipe_config->dp_m_n.tu); - if (!ret) - drm_err(&dev_priv->drm, "failed to allocate vcpi\n"); - intel_dp->active_mst_links++; - ret = drm_dp_update_payload_part1(&intel_dp->mst_mgr, start_slot); + ret = drm_dp_add_payload_part1(&intel_dp->mst_mgr, mst_state, + drm_atomic_get_mst_payload_state(mst_state, connector->port)); + if (ret < 0) + drm_err(&dev_priv->drm, "Failed to create MST payload for %s: %d\n", + connector->base.name, ret); /* * Before Gen 12 this is not done as part of @@ -537,7 +534,10 @@ static void intel_mst_enable_dp(struct intel_atomic_state *state, struct intel_dp_mst_encoder *intel_mst = enc_to_mst(encoder); struct intel_digital_port *dig_port = intel_mst->primary; struct intel_dp *intel_dp = &dig_port->dp; + struct intel_connector *connector = to_intel_connector(conn_state->connector); struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); + struct drm_dp_mst_topology_state *mst_state + drm_atomic_get_new_mst_topology_state(&state->base, &intel_dp->mst_mgr); enum transcoder trans = pipe_config->cpu_transcoder; drm_WARN_ON(&dev_priv->drm, pipe_config->has_pch_encoder); @@ -565,7 +565,8 @@ static void intel_mst_enable_dp(struct intel_atomic_state *state, wait_for_act_sent(encoder, pipe_config); - drm_dp_update_payload_part2(&intel_dp->mst_mgr); + drm_dp_add_payload_part2(&intel_dp->mst_mgr, &state->base, + drm_atomic_get_mst_payload_state(mst_state, connector->port)); if (DISPLAY_VER(dev_priv) >= 12 && pipe_config->fec_enable) intel_de_rmw(dev_priv, CHICKEN_TRANS(trans), 0, @@ -948,8 +949,6 @@ intel_dp_mst_encoder_init(struct intel_digital_port *dig_port, int conn_base_id) struct intel_dp *intel_dp = &dig_port->dp; enum port port = dig_port->base.port; int ret; - int max_source_rate - intel_dp->source_rates[intel_dp->num_source_rates - 1]; if (!HAS_DP_MST(i915) || intel_dp_is_edp(intel_dp)) return 0; @@ -965,10 +964,7 @@ intel_dp_mst_encoder_init(struct intel_digital_port *dig_port, int conn_base_id) /* create encoders */ intel_dp_create_fake_mst_encoders(dig_port); ret = drm_dp_mst_topology_mgr_init(&intel_dp->mst_mgr, &i915->drm, - &intel_dp->aux, 16, 3, - dig_port->max_lanes, - max_source_rate, - conn_base_id); + &intel_dp->aux, 16, 3, conn_base_id); if (ret) { intel_dp->mst_mgr.cbs = NULL; return ret; diff --git a/drivers/gpu/drm/i915/display/intel_hdcp.c b/drivers/gpu/drm/i915/display/intel_hdcp.c index 8ea66a2e1b09..7dbc9f0bb24f 100644 --- a/drivers/gpu/drm/i915/display/intel_hdcp.c +++ b/drivers/gpu/drm/i915/display/intel_hdcp.c @@ -30,8 +30,30 @@ static int intel_conn_to_vcpi(struct intel_connector *connector) { + struct drm_dp_mst_topology_mgr *mgr; + struct drm_dp_mst_atomic_payload *payload; + struct drm_dp_mst_topology_state *mst_state; + int vcpi = 0; + /* For HDMI this is forced to be 0x0. For DP SST also this is 0x0. */ - return connector->port ? 
connector->port->vcpi.vcpi : 0; + if (!connector->port) + return 0; + mgr = connector->port->mgr; + + drm_modeset_lock(&mgr->base.lock, NULL); + mst_state = to_drm_dp_mst_topology_state(mgr->base.state); + payload = drm_atomic_get_mst_payload_state(mst_state, connector->port); + if (drm_WARN_ON(mgr->dev, !payload)) + goto out; + + vcpi = payload->vcpi; + if (drm_WARN_ON(mgr->dev, vcpi < 0)) { + vcpi = 0; + goto out; + } +out: + drm_modeset_unlock(&mgr->base.lock); + return vcpi; } /* diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouveau/dispnv50/disp.c index 57f74cfcdebf..e8f4c806fa39 100644 --- a/drivers/gpu/drm/nouveau/dispnv50/disp.c +++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c @@ -937,6 +937,7 @@ struct nv50_msto { struct nv50_head *head; struct nv50_mstc *mstc; bool disabled; + bool enabled; }; struct nouveau_encoder *nv50_real_outp(struct drm_encoder *encoder) @@ -952,57 +953,37 @@ struct nouveau_encoder *nv50_real_outp(struct drm_encoder *encoder) return msto->mstc->mstm->outp; } -static struct drm_dp_payload * -nv50_msto_payload(struct nv50_msto *msto) -{ - struct nouveau_drm *drm = nouveau_drm(msto->encoder.dev); - struct nv50_mstc *mstc = msto->mstc; - struct nv50_mstm *mstm = mstc->mstm; - int vcpi = mstc->port->vcpi.vcpi, i; - - WARN_ON(!mutex_is_locked(&mstm->mgr.payload_lock)); - - NV_ATOMIC(drm, "%s: vcpi %d\n", msto->encoder.name, vcpi); - for (i = 0; i < mstm->mgr.max_payloads; i++) { - struct drm_dp_payload *payload = &mstm->mgr.payloads[i]; - NV_ATOMIC(drm, "%s: %d: vcpi %d start 0x%02x slots 0x%02x\n", - mstm->outp->base.base.name, i, payload->vcpi, - payload->start_slot, payload->num_slots); - } - - for (i = 0; i < mstm->mgr.max_payloads; i++) { - struct drm_dp_payload *payload = &mstm->mgr.payloads[i]; - if (payload->vcpi == vcpi) - return payload; - } - - return NULL; -} - static void -nv50_msto_cleanup(struct nv50_msto *msto) +nv50_msto_cleanup(struct drm_atomic_state *state, + struct drm_dp_mst_topology_state *mst_state, + struct drm_dp_mst_topology_mgr *mgr, + struct nv50_msto *msto) { struct nouveau_drm *drm = nouveau_drm(msto->encoder.dev); - struct nv50_mstc *mstc = msto->mstc; - struct nv50_mstm *mstm = mstc->mstm; - - if (!msto->disabled) - return; + struct drm_dp_mst_atomic_payload *payload + drm_atomic_get_mst_payload_state(mst_state, msto->mstc->port); NV_ATOMIC(drm, "%s: msto cleanup\n", msto->encoder.name); - drm_dp_mst_deallocate_vcpi(&mstm->mgr, mstc->port); - - msto->mstc = NULL; - msto->disabled = false; + if (msto->disabled) { + msto->mstc = NULL; + msto->disabled = false; + } else if (msto->enabled) { + drm_dp_add_payload_part2(mgr, state, payload); + msto->enabled = false; + } } static void -nv50_msto_prepare(struct nv50_msto *msto) +nv50_msto_prepare(struct drm_atomic_state *state, + struct drm_dp_mst_topology_state *mst_state, + struct drm_dp_mst_topology_mgr *mgr, + struct nv50_msto *msto) { struct nouveau_drm *drm = nouveau_drm(msto->encoder.dev); struct nv50_mstc *mstc = msto->mstc; struct nv50_mstm *mstm = mstc->mstm; + struct drm_dp_mst_atomic_payload *payload; struct { struct nv50_disp_mthd_v1 base; struct nv50_disp_sor_dp_mst_vcpi_v0 vcpi; @@ -1014,17 +995,21 @@ nv50_msto_prepare(struct nv50_msto *msto) (0x0100 << msto->head->base.index), }; - mutex_lock(&mstm->mgr.payload_lock); - NV_ATOMIC(drm, "%s: msto prepare\n", msto->encoder.name); - if (mstc->port->vcpi.vcpi > 0) { - struct drm_dp_payload *payload = nv50_msto_payload(msto); - if (payload) { - args.vcpi.start_slot = payload->start_slot; - 
args.vcpi.num_slots = payload->num_slots; - args.vcpi.pbn = mstc->port->vcpi.pbn; - args.vcpi.aligned_pbn = mstc->port->vcpi.aligned_pbn; - } + + payload = drm_atomic_get_mst_payload_state(mst_state, mstc->port); + + // TODO: Figure out if we want to do a better job of handling VCPI allocation failures here? + if (msto->disabled) { + drm_dp_remove_payload(mgr, mst_state, payload); + } else { + if (msto->enabled) + drm_dp_add_payload_part1(mgr, mst_state, payload); + + args.vcpi.start_slot = payload->vc_start_slot; + args.vcpi.num_slots = payload->time_slots; + args.vcpi.pbn = payload->pbn; + args.vcpi.aligned_pbn = payload->time_slots * mst_state->pbn_div; } NV_ATOMIC(drm, "%s: %s: %02x %02x %04x %04x\n", @@ -1033,7 +1018,6 @@ nv50_msto_prepare(struct nv50_msto *msto) args.vcpi.pbn, args.vcpi.aligned_pbn); nvif_mthd(&drm->display->disp.object, 0, &args, sizeof(args)); - mutex_unlock(&mstm->mgr.payload_lock); } static int @@ -1043,6 +1027,7 @@ nv50_msto_atomic_check(struct drm_encoder *encoder, { struct drm_atomic_state *state = crtc_state->state; struct drm_connector *connector = conn_state->connector; + struct drm_dp_mst_topology_state *mst_state; struct nv50_mstc *mstc = nv50_mstc(connector); struct nv50_mstm *mstm = mstc->mstm; struct nv50_head_atom *asyh = nv50_head_atom(crtc_state); @@ -1070,8 +1055,18 @@ nv50_msto_atomic_check(struct drm_encoder *encoder, false); } - slots = drm_dp_atomic_find_time_slots(state, &mstm->mgr, mstc->port, - asyh->dp.pbn, 0); + mst_state = drm_atomic_get_mst_topology_state(state, &mstm->mgr); + if (IS_ERR(mst_state)) + return PTR_ERR(mst_state); + + if (!mst_state->pbn_div) { + struct nouveau_encoder *outp = mstc->mstm->outp; + + mst_state->pbn_div = drm_dp_get_vc_payload_bw(&mstm->mgr, + outp->dp.link_bw, outp->dp.link_nr); + } + + slots = drm_dp_atomic_find_time_slots(state, &mstm->mgr, mstc->port, asyh->dp.pbn); if (slots < 0) return slots; @@ -1103,7 +1098,6 @@ nv50_msto_atomic_enable(struct drm_encoder *encoder, struct drm_atomic_state *st struct drm_connector *connector; struct drm_connector_list_iter conn_iter; u8 proto; - bool r; drm_connector_list_iter_begin(encoder->dev, &conn_iter); drm_for_each_connector_iter(connector, &conn_iter) { @@ -1118,10 +1112,6 @@ nv50_msto_atomic_enable(struct drm_encoder *encoder, struct drm_atomic_state *st if (WARN_ON(!mstc)) return; - r = drm_dp_mst_allocate_vcpi(&mstm->mgr, mstc->port, asyh->dp.pbn, asyh->dp.tu); - if (!r) - DRM_DEBUG_KMS("Failed to allocate VCPI\n"); - if (!mstm->links++) nv50_outp_acquire(mstm->outp, false /*XXX: MST audio.*/); @@ -1134,6 +1124,7 @@ nv50_msto_atomic_enable(struct drm_encoder *encoder, struct drm_atomic_state *st nv50_dp_bpc_to_depth(asyh->or.bpc)); msto->mstc = mstc; + msto->enabled = true; mstm->modified = true; } @@ -1144,8 +1135,6 @@ nv50_msto_atomic_disable(struct drm_encoder *encoder, struct drm_atomic_state *s struct nv50_mstc *mstc = msto->mstc; struct nv50_mstm *mstm = mstc->mstm; - drm_dp_mst_reset_vcpi_slots(&mstm->mgr, mstc->port); - mstm->outp->update(mstm->outp, msto->head->base.index, NULL, 0, 0); mstm->modified = true; if (!--mstm->links) @@ -1365,7 +1354,9 @@ nv50_mstc_new(struct nv50_mstm *mstm, struct drm_dp_mst_port *port, } static void -nv50_mstm_cleanup(struct nv50_mstm *mstm) +nv50_mstm_cleanup(struct drm_atomic_state *state, + struct drm_dp_mst_topology_state *mst_state, + struct nv50_mstm *mstm) { struct nouveau_drm *drm = nouveau_drm(mstm->outp->base.base.dev); struct drm_encoder *encoder; @@ -1373,14 +1364,12 @@ nv50_mstm_cleanup(struct nv50_mstm 
*mstm) NV_ATOMIC(drm, "%s: mstm cleanup\n", mstm->outp->base.base.name); drm_dp_check_act_status(&mstm->mgr); - drm_dp_update_payload_part2(&mstm->mgr); - drm_for_each_encoder(encoder, mstm->outp->base.base.dev) { if (encoder->encoder_type == DRM_MODE_ENCODER_DPMST) { struct nv50_msto *msto = nv50_msto(encoder); struct nv50_mstc *mstc = msto->mstc; if (mstc && mstc->mstm == mstm) - nv50_msto_cleanup(msto); + nv50_msto_cleanup(state, mst_state, &mstm->mgr, msto); } } @@ -1388,20 +1377,34 @@ nv50_mstm_cleanup(struct nv50_mstm *mstm) } static void -nv50_mstm_prepare(struct nv50_mstm *mstm) +nv50_mstm_prepare(struct drm_atomic_state *state, + struct drm_dp_mst_topology_state *mst_state, + struct nv50_mstm *mstm) { struct nouveau_drm *drm = nouveau_drm(mstm->outp->base.base.dev); struct drm_encoder *encoder; NV_ATOMIC(drm, "%s: mstm prepare\n", mstm->outp->base.base.name); - drm_dp_update_payload_part1(&mstm->mgr, 1); + /* Disable payloads first */ drm_for_each_encoder(encoder, mstm->outp->base.base.dev) { if (encoder->encoder_type == DRM_MODE_ENCODER_DPMST) { struct nv50_msto *msto = nv50_msto(encoder); struct nv50_mstc *mstc = msto->mstc; - if (mstc && mstc->mstm == mstm) - nv50_msto_prepare(msto); + if (mstc && mstc->mstm == mstm && msto->disabled) + nv50_msto_prepare(state, mst_state, &mstm->mgr, msto); + } + } + + /* Add payloads for new heads, while also updating the start slots of any unmodified (but + * active) heads that may have had their VC slots shifted left after the previous step + */ + drm_for_each_encoder(encoder, mstm->outp->base.base.dev) { + if (encoder->encoder_type == DRM_MODE_ENCODER_DPMST) { + struct nv50_msto *msto = nv50_msto(encoder); + struct nv50_mstc *mstc = msto->mstc; + if (mstc && mstc->mstm == mstm && !msto->disabled) + nv50_msto_prepare(state, mst_state, &mstm->mgr, msto); } } @@ -1598,9 +1601,7 @@ nv50_mstm_new(struct nouveau_encoder *outp, struct drm_dp_aux *aux, int aux_max, mstm->mgr.cbs = &nv50_mstm; ret = drm_dp_mst_topology_mgr_init(&mstm->mgr, dev, aux, aux_max, - max_payloads, outp->dcb->dpconf.link_nr, - drm_dp_bw_code_to_link_rate(outp->dcb->dpconf.link_bw), - conn_base_id); + max_payloads, conn_base_id); if (ret) return ret; @@ -2045,20 +2046,20 @@ nv50_pior_create(struct drm_connector *connector, struct dcb_output *dcbe) static void nv50_disp_atomic_commit_core(struct drm_atomic_state *state, u32 *interlock) { + struct drm_dp_mst_topology_mgr *mgr; + struct drm_dp_mst_topology_state *mst_state; struct nouveau_drm *drm = nouveau_drm(state->dev); struct nv50_disp *disp = nv50_disp(drm->dev); struct nv50_core *core = disp->core; struct nv50_mstm *mstm; - struct drm_encoder *encoder; + int i; NV_ATOMIC(drm, "commit core %08x\n", interlock[NV50_DISP_INTERLOCK_BASE]); - drm_for_each_encoder(encoder, drm->dev) { - if (encoder->encoder_type != DRM_MODE_ENCODER_DPMST) { - mstm = nouveau_encoder(encoder)->dp.mstm; - if (mstm && mstm->modified) - nv50_mstm_prepare(mstm); - } + for_each_new_mst_mgr_in_state(state, mgr, mst_state, i) { + mstm = nv50_mstm(mgr); + if (mstm->modified) + nv50_mstm_prepare(state, mst_state, mstm); } core->func->ntfy_init(disp->sync, NV50_DISP_CORE_NTFY); @@ -2067,12 +2068,10 @@ nv50_disp_atomic_commit_core(struct drm_atomic_state *state, u32 *interlock) disp->core->chan.base.device)) NV_ERROR(drm, "core notifier timeout\n"); - drm_for_each_encoder(encoder, drm->dev) { - if (encoder->encoder_type != DRM_MODE_ENCODER_DPMST) { - mstm = nouveau_encoder(encoder)->dp.mstm; - if (mstm && mstm->modified) - nv50_mstm_cleanup(mstm); - } + 
for_each_new_mst_mgr_in_state(state, mgr, mst_state, i) { + mstm = nv50_mstm(mgr); + if (mstm->modified) + nv50_mstm_cleanup(state, mst_state, mstm); } } diff --git a/include/drm/display/drm_dp_mst_helper.h b/include/drm/display/drm_dp_mst_helper.h index ecd130028337..6317831705ab 100644 --- a/include/drm/display/drm_dp_mst_helper.h +++ b/include/drm/display/drm_dp_mst_helper.h @@ -48,20 +48,6 @@ struct drm_dp_mst_topology_ref_history { struct drm_dp_mst_branch; -/** - * struct drm_dp_vcpi - Virtual Channel Payload Identifier - * @vcpi: Virtual channel ID. - * @pbn: Payload Bandwidth Number for this channel - * @aligned_pbn: PBN aligned with slot size - * @num_slots: number of slots for this PBN - */ -struct drm_dp_vcpi { - int vcpi; - int pbn; - int aligned_pbn; - int num_slots; -}; - /** * struct drm_dp_mst_port - MST port * @port_num: port number @@ -142,7 +128,6 @@ struct drm_dp_mst_port { struct drm_dp_aux aux; /* i2c bus for this port? */ struct drm_dp_mst_branch *parent; - struct drm_dp_vcpi vcpi; struct drm_connector *connector; struct drm_dp_mst_topology_mgr *mgr; @@ -527,19 +512,6 @@ struct drm_dp_mst_topology_cbs { void (*poll_hpd_irq)(struct drm_dp_mst_topology_mgr *mgr); }; -#define DP_MAX_PAYLOAD (sizeof(unsigned long) * 8) - -#define DP_PAYLOAD_LOCAL 1 -#define DP_PAYLOAD_REMOTE 2 -#define DP_PAYLOAD_DELETE_LOCAL 3 - -struct drm_dp_payload { - int payload_state; - int start_slot; - int num_slots; - int vcpi; -}; - #define to_dp_mst_topology_state(x) container_of(x, struct drm_dp_mst_topology_state, base) /** @@ -551,6 +523,35 @@ struct drm_dp_payload { struct drm_dp_mst_atomic_payload { /** @port: The MST port assigned to this payload */ struct drm_dp_mst_port *port; + + /** + * @vc_start_slot: The time slot that this payload starts on. Because payload start slots + * can't be determined ahead of time, the contents of this value are UNDEFINED at atomic + * check time. This shouldn't usually matter, as the start slot should never be relevant for + * atomic state computations. + * + * Since this value is determined at commit time instead of check time, this value is + * protected by the MST helpers ensuring that async commits operating on the given topology + * never run in parallel. In the event that a driver does need to read this value (e.g. to + * inform hardware of the starting timeslot for a payload), the driver may either: + * + * * Read this field during the atomic commit after + * drm_dp_mst_atomic_wait_for_dependencies() has been called, which will ensure the + * previous MST states payload start slots have been copied over to the new state. Note + * that a new start slot won't be assigned/removed from this payload until + * drm_dp_add_payload_part1()/drm_dp_remove_payload() have been called. + * * Acquire the MST modesetting lock, and then wait for any pending MST-related commits to + * get committed to hardware by calling drm_crtc_commit_wait() on each of the + * &drm_crtc_commit structs in &drm_dp_mst_topology_state.commit_deps. + * + * If neither of the two above solutions suffice (e.g. the driver needs to read the start + * slot in the middle of an atomic commit without waiting for some reason), then drivers + * should cache this value themselves after changing payloads. 
+	 */
+	s8 vc_start_slot;
+
+	/** @vcpi: The Virtual Channel Payload Identifier */
+	u8 vcpi;
 	/** @time_slots: The number of timeslots allocated to this payload */
 	int time_slots;
 	/** @pbn: The payload bandwidth for this payload */
@@ -574,8 +575,6 @@ struct drm_dp_mst_topology_state {
 	/** @base: Base private state for atomic */
 	struct drm_private_state base;
 
-	/** @payloads: The list of payloads being created/destroyed in this state */
-	struct list_head payloads;
 	/** @mgr: The topology manager */
 	struct drm_dp_mst_topology_mgr *mgr;
 
@@ -592,10 +591,21 @@ struct drm_dp_mst_topology_state {
 	/** @num_commit_deps: The number of CRTC commits in @commit_deps */
 	size_t num_commit_deps;
 
+	/** @payload_mask: A bitmask of allocated VCPIs, used for VCPI assignments */
+	u32 payload_mask;
+	/** @payloads: The list of payloads being created/destroyed in this state */
+	struct list_head payloads;
+
 	/** @total_avail_slots: The total number of slots this topology can handle (63 or 64) */
 	u8 total_avail_slots;
 	/** @start_slot: The first usable time slot in this topology (1 or 0) */
 	u8 start_slot;
+
+	/**
+	 * @pbn_div: The current PBN divisor for this topology. The driver is expected to fill this
+	 * out itself.
+	 */
+	int pbn_div;
 };
 
 #define to_dp_mst_topology_mgr(x) container_of(x, struct drm_dp_mst_topology_mgr, base)
@@ -635,14 +645,6 @@ struct drm_dp_mst_topology_mgr {
 	 * @max_payloads: maximum number of payloads the GPU can generate.
 	 */
 	int max_payloads;
-	/**
-	 * @max_lane_count: maximum number of lanes the GPU can drive.
-	 */
-	int max_lane_count;
-	/**
-	 * @max_link_rate: maximum link rate per lane GPU can output, in kHz.
-	 */
-	int max_link_rate;
 	/**
 	 * @conn_base_id: DRM connector ID this mgr is connected to. Only used
 	 * to build the MST connector path value.
@@ -685,6 +687,20 @@ struct drm_dp_mst_topology_mgr {
 	 */
 	bool payload_id_table_cleared : 1;
 
+	/**
+	 * @payload_count: The number of currently active payloads in hardware. This value is only
+	 * intended to be used internally by MST helpers for payload tracking, and is only safe to
+	 * read/write from the atomic commit (not check) context.
+	 */
+	u8 payload_count;
+
+	/**
+	 * @next_start_slot: The starting timeslot to use for new VC payloads. This value is used
+	 * internally by MST helpers for payload tracking, and is only safe to read/write from the
+	 * atomic commit (not check) context.
+	 */
+	u8 next_start_slot;
+
 	/**
 	 * @mst_primary: Pointer to the primary/first branch device.
 	 */
@@ -698,10 +714,6 @@ struct drm_dp_mst_topology_mgr {
 	 * @sink_count: Sink count from DEVICE_SERVICE_IRQ_VECTOR_ESI0.
 	 */
 	u8 sink_count;
-	/**
-	 * @pbn_div: PBN to slots divisor.
-	 */
-	int pbn_div;
 
 	/**
 	 * @funcs: Atomic helper callbacks
@@ -718,32 +730,6 @@ struct drm_dp_mst_topology_mgr {
 	 */
 	struct list_head tx_msg_downq;
 
-	/**
-	 * @payload_lock: Protect payload information.
-	 */
-	struct mutex payload_lock;
-	/**
-	 * @proposed_vcpis: Array of pointers for the new VCPI allocation. The
-	 * VCPI structure itself is &drm_dp_mst_port.vcpi, and the size of
-	 * this array is determined by @max_payloads.
-	 */
-	struct drm_dp_vcpi **proposed_vcpis;
-	/**
-	 * @payloads: Array of payloads. The size of this array is determined
-	 * by @max_payloads.
-	 */
-	struct drm_dp_payload *payloads;
-	/**
-	 * @payload_mask: Elements of @payloads actually in use. Since
-	 * reallocation of active outputs isn't possible gaps can be created by
-	 * disabling outputs out of order compared to how they've been enabled.
-	 */
-	unsigned long payload_mask;
-	/**
-	 * @vcpi_mask: Similar to @payload_mask, but for @proposed_vcpis.
-	 */
-	unsigned long vcpi_mask;
-
 	/**
 	 * @tx_waitq: Wait to queue stall for the tx worker.
 	 */
@@ -815,9 +801,7 @@ struct drm_dp_mst_topology_mgr {
 int drm_dp_mst_topology_mgr_init(struct drm_dp_mst_topology_mgr *mgr,
 				 struct drm_device *dev, struct drm_dp_aux *aux,
 				 int max_dpcd_transaction_bytes,
-				 int max_payloads,
-				 int max_lane_count, int max_link_rate,
-				 int conn_base_id);
+				 int max_payloads, int conn_base_id);
 
 void drm_dp_mst_topology_mgr_destroy(struct drm_dp_mst_topology_mgr *mgr);
 
@@ -840,28 +824,17 @@ int drm_dp_get_vc_payload_bw(const struct drm_dp_mst_topology_mgr *mgr,
 
 int drm_dp_calc_pbn_mode(int clock, int bpp, bool dsc);
 
-bool drm_dp_mst_allocate_vcpi(struct drm_dp_mst_topology_mgr *mgr,
-			      struct drm_dp_mst_port *port, int pbn, int slots);
-
-int drm_dp_mst_get_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port);
-
-
-void drm_dp_mst_reset_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port);
-
 void drm_dp_mst_update_slots(struct drm_dp_mst_topology_state *mst_state, uint8_t link_encoding_cap);
 
-void drm_dp_mst_deallocate_vcpi(struct drm_dp_mst_topology_mgr *mgr,
-				struct drm_dp_mst_port *port);
-
-
-int drm_dp_find_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr,
-			   int pbn);
-
-
-int drm_dp_update_payload_part1(struct drm_dp_mst_topology_mgr *mgr, int start_slot);
-
-
-int drm_dp_update_payload_part2(struct drm_dp_mst_topology_mgr *mgr);
+int drm_dp_add_payload_part1(struct drm_dp_mst_topology_mgr *mgr,
+			     struct drm_dp_mst_topology_state *mst_state,
+			     struct drm_dp_mst_atomic_payload *payload);
+int drm_dp_add_payload_part2(struct drm_dp_mst_topology_mgr *mgr,
+			     struct drm_atomic_state *state,
+			     struct drm_dp_mst_atomic_payload *payload);
+void drm_dp_remove_payload(struct drm_dp_mst_topology_mgr *mgr,
+			   struct drm_dp_mst_topology_state *mst_state,
+			   struct drm_dp_mst_atomic_payload *payload);
 
 int drm_dp_check_act_status(struct drm_dp_mst_topology_mgr *mgr);
 
@@ -883,17 +856,22 @@ int drm_dp_mst_connector_late_register(struct drm_connector *connector,
 void drm_dp_mst_connector_early_unregister(struct drm_connector *connector,
 					   struct drm_dp_mst_port *port);
 
-struct drm_dp_mst_topology_state *drm_atomic_get_mst_topology_state(struct drm_atomic_state *state,
-								    struct drm_dp_mst_topology_mgr *mgr);
+struct drm_dp_mst_topology_state *
+drm_atomic_get_mst_topology_state(struct drm_atomic_state *state,
+				  struct drm_dp_mst_topology_mgr *mgr);
+struct drm_dp_mst_topology_state *
+drm_atomic_get_new_mst_topology_state(struct drm_atomic_state *state,
+				      struct drm_dp_mst_topology_mgr *mgr);
+struct drm_dp_mst_atomic_payload *
+drm_atomic_get_mst_payload_state(struct drm_dp_mst_topology_state *state,
+				 struct drm_dp_mst_port *port);
 int __must_check
 drm_dp_atomic_find_time_slots(struct drm_atomic_state *state,
 			      struct drm_dp_mst_topology_mgr *mgr,
-			      struct drm_dp_mst_port *port, int pbn,
-			      int pbn_div);
+			      struct drm_dp_mst_port *port, int pbn);
 int drm_dp_mst_atomic_enable_dsc(struct drm_atomic_state *state,
 				 struct drm_dp_mst_port *port,
-				 int pbn, int pbn_div,
-				 bool enable);
+				 int pbn, bool enable);
 int __must_check
 drm_dp_mst_add_affected_dsc_crtcs(struct drm_atomic_state *state,
 				  struct drm_dp_mst_topology_mgr *mgr);
@@ -917,6 +895,12 @@ void drm_dp_mst_put_port_malloc(struct drm_dp_mst_port *port);
 
 struct drm_dp_aux *drm_dp_mst_dsc_aux_for_port(struct drm_dp_mst_port *port);
 
+static inline struct drm_dp_mst_topology_state *
+to_drm_dp_mst_topology_state(struct drm_private_state *state)
+{
+	return container_of(state, struct drm_dp_mst_topology_state, base);
+}
+
 extern const struct drm_private_state_funcs drm_dp_mst_topology_state_funcs;
 
 /**
-- 
2.35.3
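[Editor's note: to illustrate the reworked interface in the header changes above, here is a minimal sketch of how a driver's MST encoder ->atomic_check() might look after this series. The example_mst_connector type, its fields, and to_example_mst_connector() are hypothetical placeholders; only the drm_dp_* calls are taken from the declarations in the patch. Note that drm_dp_atomic_find_time_slots() no longer takes a pbn_div argument, since the divisor now lives in drm_dp_mst_topology_state.]

#include <linux/err.h>
#include <drm/drm_atomic.h>
#include <drm/display/drm_dp_mst_helper.h>

/*
 * Sketch only: example_mst_connector and its fields are hypothetical;
 * the helper calls match the prototypes added/changed by this series.
 */
static int example_mst_encoder_atomic_check(struct drm_encoder *encoder,
					    struct drm_crtc_state *crtc_state,
					    struct drm_connector_state *conn_state)
{
	struct drm_atomic_state *state = crtc_state->state;
	struct example_mst_connector *mstc =
		to_example_mst_connector(conn_state->connector);
	struct drm_dp_mst_topology_state *mst_state;
	int pbn, slots;

	/* Pull the MST topology state into this atomic transaction */
	mst_state = drm_atomic_get_mst_topology_state(state, mstc->mgr);
	if (IS_ERR(mst_state))
		return PTR_ERR(mst_state);

	/* PBN for the requested mode; bpp/DSC handling is driver specific */
	pbn = drm_dp_calc_pbn_mode(crtc_state->adjusted_mode.clock, 24, false);

	/*
	 * Request time slots in the atomic state. There is no pbn_div
	 * argument any more - the divisor is tracked in mst_state->pbn_div.
	 */
	slots = drm_dp_atomic_find_time_slots(state, mstc->mgr, mstc->port, pbn);
	if (slots < 0)
		return slots;

	return 0;
}

The commit side would then pair drm_dp_add_payload_part1()/part2() and drm_dp_remove_payload() against the payloads stored in the topology state, rather than the old drm_dp_update_payload_part1()/part2() calls driven off mgr->payloads.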
Jani Nikula
2022-Jun-29 13:33 UTC
[Nouveau] [RESEND RFC 00/18] drm/display/dp_mst: Drop Radeon MST support, make MST atomic-only
On Tue, 07 Jun 2022, Lyude Paul <lyude at redhat.com> wrote:
> [cover letter snipped]

I admit and regret I haven't had the time to go through this in detail
and thought. I've glanced over it on a few occasions, and I don't have
anything against it really.

Acked-by: Jani Nikula <jani.nikula at intel.com>

Before merging, please ensure you've sent the entire series Cc'd
intel-gfx, and got a green light from CI.
-- 
Jani Nikula, Intel Open Source Graphics Center
Lyude Paul
2022-Jul-28 22:21 UTC
[Nouveau] [RESEND RFC 00/18] drm/display/dp_mst: Drop Radeon MST support, make MST atomic-only
Sorry for taking a while on respinning this! I've been busy trying to
review as many nouveau patches as possible before the deadline for
getting pulled into the kernel, since we've got quite a lot of pending
patches coming up. The pull deadline we had from Dave has passed at this
point though, so I should have a chance to respin this in the next few
business days.

On Tue, 2022-06-07 at 15:29 -0400, Lyude Paul wrote:
> [cover letter snipped]
-- 
Cheers,
 Lyude Paul (she/her)

Software Engineer at Red Hat