search for: drm_gpu_schedul

Displaying 15 results from an estimated 15 matches for "drm_gpu_schedul".

2025 Jan 22
5
[PATCH] drm/sched: Use struct for drm_sched_init() params
...agination/pvr_queue.c index c4f08432882b..03a2ce1a88e7 100644 --- a/drivers/gpu/drm/imagination/pvr_queue.c +++ b/drivers/gpu/drm/imagination/pvr_queue.c @@ -1211,10 +1211,13 @@ struct pvr_queue *pvr_queue_create(struct pvr_context *ctx, }; struct pvr_device *pvr_dev = ctx->pvr_dev; struct drm_gpu_scheduler *sched; + struct drm_sched_init_params sched_params; struct pvr_queue *queue; int ctx_state_size, err; void *cpu_map; + memset(&sched_params, 0, sizeof(struct drm_sched_init_params)); + if (WARN_ON(type >= sizeof(props))) return ERR_PTR(-EINVAL); @@ -1282,12 +1285,18 @@ stru...
2024 Jul 12
1
[PATCH v2] drm/nouveau: Improve variable names in nouveau_sched_init()
...veau/nouveau_sched.c b/drivers/gpu/drm/nouveau/nouveau_sched.c index 32fa2e273965..ba4139288a6d 100644 --- a/drivers/gpu/drm/nouveau/nouveau_sched.c +++ b/drivers/gpu/drm/nouveau/nouveau_sched.c @@ -404,7 +404,7 @@ nouveau_sched_init(struct nouveau_sched *sched, struct nouveau_drm *drm, { struct drm_gpu_scheduler *drm_sched = &sched->base; struct drm_sched_entity *entity = &sched->entity; - long job_hang_limit = msecs_to_jiffies(NOUVEAU_SCHED_JOB_TIMEOUT_MS); + const long timeout = msecs_to_jiffies(NOUVEAU_SCHED_JOB_TIMEOUT_MS); int ret; if (!wq) { @@ -418,7 +418,7 @@ nouveau_sched_i...
2024 Jul 11
1
[PATCH] drm/nouveau: Improve variable names in nouveau_sched_init()
...veau/nouveau_sched.c b/drivers/gpu/drm/nouveau/nouveau_sched.c index 32fa2e273965..ee1f49056737 100644 --- a/drivers/gpu/drm/nouveau/nouveau_sched.c +++ b/drivers/gpu/drm/nouveau/nouveau_sched.c @@ -404,7 +404,8 @@ nouveau_sched_init(struct nouveau_sched *sched, struct nouveau_drm *drm, { struct drm_gpu_scheduler *drm_sched = &sched->base; struct drm_sched_entity *entity = &sched->entity; - long job_hang_limit = msecs_to_jiffies(NOUVEAU_SCHED_JOB_TIMEOUT_MS); + const long timeout = msecs_to_jiffies(NOUVEAU_SCHED_JOB_TIMEOUT_MS); + const unsigned int hang_limit = 0; int ret; if (!wq)...
2023 Jul 25
1
[PATCH drm-misc-next v8 11/12] drm/nouveau: implement new VM_BIND uAPI
...EC to scale without it. > > From #nouveau: > > <gfxstrand> CTSing now > <gfxstrand> It looks like it's still going to take 1.5 hours. > > I may have an idea what could be the issue, let me explain. > > Currently, there is a single drm_gpu_scheduler having a > drm_sched_entity > per client (for VM_BIND jobs) and a drm_sched_entity per channel (for > EXEC jobs). > > For VM_BIND jobs the corresponding PT[E]s are allocated before the job > is pushed to the corresponding drm_sched_entity. The PT[E]s are ...
2025 Jan 22
1
[PATCH] drm/sched: Use struct for drm_sched_init() params
...; --- a/drivers/gpu/drm/panthor/panthor_sched.c > >> +++ b/drivers/gpu/drm/panthor/panthor_sched.c > >> @@ -3272,6 +3272,7 @@ group_create_queue(struct panthor_group *group, > >> const struct drm_panthor_queue_create *args) > >> { > >> struct drm_gpu_scheduler *drm_sched; > >> + struct drm_sched_init_params sched_params; > > > > nit: Could we use a struct initializer instead of a > > memset(0)+field-assignment? > > > > struct drm_sched_init_params sched_params = { Actually, you can even make it const if it...
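
The nit raised in this entry is to replace the memset(0)-plus-assignments pattern from the pvr_queue hunk above with a designated initializer. A minimal sketch of that idea, assuming struct drm_sched_init_params carries the same fields as the old drm_sched_init() parameter list quoted further down in these results; the ops table, workqueue, and device pointers are placeholder names, not the real Imagination driver symbols:

/*
 * Sketch only: designated initializer instead of memset() + field
 * assignments. Field names mirror the old drm_sched_init() parameters
 * (ops, submit_wq, num_rqs, credit_limit, hang_limit, timeout,
 * timeout_wq, score, name, dev); the real struct layout may differ.
 */
const struct drm_sched_init_params sched_params = {
	.ops          = &pvr_queue_sched_ops,     /* placeholder backend ops */
	.submit_wq    = pvr_dev->sched_wq,        /* placeholder workqueue */
	.num_rqs      = DRM_SCHED_PRIORITY_COUNT,
	.credit_limit = 64,                       /* illustrative value */
	.hang_limit   = 0,
	.timeout      = msecs_to_jiffies(500),    /* illustrative value */
	.timeout_wq   = NULL,
	.score        = NULL,
	.name         = "pvr-queue",
	.dev          = pvr_dev->base.dev,        /* placeholder device */
};

err = drm_sched_init(sched, &sched_params);

A designated initializer zero-fills every unnamed member, and, as noted in the follow-up, allows the params to be declared const.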
2025 Jan 23
1
[PATCH] drm/sched: Use struct for drm_sched_init() params
...anthor/panthor_sched.c > > >> +++ b/drivers/gpu/drm/panthor/panthor_sched.c > > >> @@ -3272,6 +3272,7 @@ group_create_queue(struct panthor_group *group, > > >> const struct drm_panthor_queue_create *args) > > >> { > > >> struct drm_gpu_scheduler *drm_sched; > > >> + struct drm_sched_init_params sched_params; > > > > > > nit: Could we use a struct initializer instead of a > > > memset(0)+field-assignment? > > > > > > struct drm_sched_init_params sched_params = { > > Actu...
2023 Jul 25
1
[PATCH drm-misc-next v8 11/12] drm/nouveau: implement new VM_BIND uAPI
...; > that we can get EXEC to scale without it. > > From #nouveau: > > <gfxstrand> CTSing now > <gfxstrand> It looks like it's still going to take 1.5 hours. > > I may have an idea what could be the issue, let me explain. > > Currently, there is a single drm_gpu_scheduler having a drm_sched_entity > per client (for VM_BIND jobs) and a drm_sched_entity per channel (for > EXEC jobs). > > For VM_BIND jobs the corresponding PT[E]s are allocated before the job > is pushed to the corresponding drm_sched_entity. The PT[E]s are freed by > the schedulers...
2023 Jul 25
1
[PATCH drm-misc-next v8 11/12] drm/nouveau: implement new VM_BIND uAPI
...tself. It would still be good to see if we can find a way to >> reduce the cross-process drag in the implementation but that's a perf >> optimization we can do later. > > From the kernel side I think the only thing we could really do is to > temporarily run a secondary drm_gpu_scheduler instance, one for VM_BINDs > and one for EXECs until we got the new page table handling in place. > > However, the UMD could avoid such conditions more effectively, since it > controls the address space. Namely, avoid re-using the same region of > the address space right away i...
2023 Jul 25
1
[PATCH drm-misc-next v8 11/12] drm/nouveau: implement new VM_BIND uAPI
...ve, we need to either add that or prove > that we can get EXEC to scale without it. From #nouveau: <gfxstrand> CTSing now <gfxstrand> It looks like it's still going to take 1.5 hours. I may have an idea what could be the issue, let me explain. Currently, there is a single drm_gpu_scheduler having a drm_sched_entity per client (for VM_BIND jobs) and a drm_sched_entity per channel (for EXEC jobs). For VM_BIND jobs the corresponding PT[E]s are allocated before the job is pushed to the corresponding drm_sched_entity. The PT[E]s are freed by the schedulers free() callback pushing w...
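
For reference, a rough sketch of the topology described in this thread: a single drm_gpu_scheduler shared by a VM_BIND entity per client and an EXEC entity per channel. The struct and field names (drm->sched.base, client->bind_entity, chan->exec_entity) are hypothetical, not the actual nouveau layout, and the drm_sched_entity_init() priority/guilty arguments are left at their simplest values:

/* One shared scheduler instance; both entities below point at it. */
struct drm_gpu_scheduler *sched_list[] = { &drm->sched.base };
int ret;

/* Per-client entity, used only for VM_BIND (page table) jobs. */
ret = drm_sched_entity_init(&client->bind_entity,
			    DRM_SCHED_PRIORITY_NORMAL,
			    sched_list, ARRAY_SIZE(sched_list), NULL);
if (ret)
	return ret;

/* Per-channel entity, used for EXEC (command submission) jobs. */
ret = drm_sched_entity_init(&chan->exec_entity,
			    DRM_SCHED_PRIORITY_NORMAL,
			    sched_list, ARRAY_SIZE(sched_list), NULL);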
2023 Jul 31
1
[PATCH drm-misc-next v8 11/12] drm/nouveau: implement new VM_BIND uAPI
...be good to see if we can find a way to > >> reduce the cross-process drag in the implementation but that's a perf > >> optimization we can do later. > > > > From the kernel side I think the only thing we could really do is to > > temporarily run a secondary drm_gpu_scheduler instance, one for VM_BINDs > > and one for EXECs until we got the new page table handling in place. > > > > However, the UMD could avoid such conditions more effectively, since it > > controls the address space. Namely, avoid re-using the same region of > > the addre...
2025 Jan 23
0
[PATCH] drm/sched: Use struct for drm_sched_init() params
...2025 08:33:01 +0100 Philipp Stanner <phasta at mailbox.org> wrote: > On Wed, 2025-01-22 at 18:16 +0100, Boris Brezillon wrote: > > On Wed, 22 Jan 2025 15:08:20 +0100 > > Philipp Stanner <phasta at kernel.org> wrote: > > > > > int drm_sched_init(struct drm_gpu_scheduler *sched, > > > - const struct drm_sched_backend_ops *ops, > > > - struct workqueue_struct *submit_wq, > > > - u32 num_rqs, u32 credit_limit, unsigned int hang_limit, > > > - long timeout, struct workqueue_struct *timeout_wq, > > > - atomi...
2025 Jan 23
0
[PATCH] drm/sched: Use struct for drm_sched_init() params
On Thu, Jan 23, 2025 at 08:33:01AM +0100, Philipp Stanner wrote: > On Wed, 2025-01-22 at 18:16 +0100, Boris Brezillon wrote: > > On Wed, 22 Jan 2025 15:08:20 +0100 > > Philipp Stanner <phasta at kernel.org> wrote: > > > > > int drm_sched_init(struct drm_gpu_scheduler *sched, > > > - const struct drm_sched_backend_ops *ops, > > > - struct workqueue_struct *submit_wq, > > > - u32 num_rqs, u32 credit_limit, unsigned int hang_limit, > > > - long timeout, struct workqueue_struct *timeout_wq, > > > - atomi...
2025 Jan 23
0
[PATCH] drm/sched: Use struct for drm_sched_init() params
...00, Philipp Stanner wrote: > > > On Wed, 2025-01-22 at 18:16 +0100, Boris Brezillon wrote: > > > > On Wed, 22 Jan 2025 15:08:20 +0100 > > > > Philipp Stanner <phasta at kernel.org> wrote: > > > > > > > > > int drm_sched_init(struct drm_gpu_scheduler *sched, > > > > > - const struct drm_sched_backend_ops *ops, > > > > > - struct workqueue_struct *submit_wq, > > > > > - u32 num_rqs, u32 credit_limit, unsigned int hang_limit, > > > > > - long timeout, struct workqueue_struc...
2024 Feb 16
1
[PATCH] nouveau: offload fence uevents work to workqueue
...; > I think it'd be safer to just establish not to use the kernel global wq for executing > > > work in the fence signalling critical path. > > > > > > We could also run into similar problems with a dedicated wq, e.g. when drivers share > > > a wq between drm_gpu_scheduler instances (see [1]), however, I'm not sure we can catch > > > that with lockdep. > > > > I think if you want to fix it perfectly you'd need to set the max number > > of wq to the number of engines (or for dynamic/fw scheduled engines to the > > number of...
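
A minimal sketch of the direction argued for here, namely keeping fence-signalling work off the kernel-global workqueue and off workqueues shared between drm_gpu_scheduler instances by allocating a dedicated one; the workqueue name and the work item are illustrative, not the actual nouveau symbols:

/* Dedicated, ordered workqueue so this work item cannot be blocked
 * behind unrelated work from the system wq or another scheduler. */
struct workqueue_struct *fence_wq;

fence_wq = alloc_ordered_workqueue("nouveau-fence-uevent", 0);
if (!fence_wq)
	return -ENOMEM;

/* Runs in the fence-signalling critical path. */
queue_work(fence_wq, &fence_uevent_work);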
2024 Feb 02
3
[PATCH 1/2] drm/nouveau: don't fini scheduler if not initialized
...ed = kzalloc(sizeof(*sched), GFP_KERNEL); + if (!sched) + return -ENOMEM; + + ret = nouveau_sched_init(sched, drm, wq, credit_limit); + if (ret) { + kfree(sched); + return ret; + } + + *psched = sched; + + return 0; +} + + +static void nouveau_sched_fini(struct nouveau_sched *sched) { struct drm_gpu_scheduler *drm_sched = &sched->base; @@ -471,3 +494,14 @@ nouveau_sched_fini(struct nouveau_sched *sched) if (sched->wq) destroy_workqueue(sched->wq); } + +void +nouveau_sched_destroy(struct nouveau_sched **psched) +{ + struct nouveau_sched *sched = *psched; + + nouveau_sched_fini(sched)...
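
Usage sketch of the create/destroy pair this patch introduces: the caller holds only a pointer, and the destroy helper undoes both the initialization and the allocation. The create helper's name and signature are inferred from the hunk above and may differ from the final patch:

struct nouveau_sched *sched;
int ret;

/* Allocates the scheduler wrapper and runs nouveau_sched_init() on it
 * (name and parameters inferred from the excerpt above). */
ret = nouveau_sched_create(&sched, drm, wq, credit_limit);
if (ret)
	return ret;

/* ... submit jobs through sched ... */

/* Tears down the scheduler and frees the allocation. */
nouveau_sched_destroy(&sched);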