Displaying 6 results from an estimated 6 matches for "drm_sched_entity".
2024 Jul 12
1
[PATCH v2] drm/nouveau: Improve variable names in nouveau_sched_init()
...c
index 32fa2e273965..ba4139288a6d 100644
--- a/drivers/gpu/drm/nouveau/nouveau_sched.c
+++ b/drivers/gpu/drm/nouveau/nouveau_sched.c
@@ -404,7 +404,7 @@ nouveau_sched_init(struct nouveau_sched *sched, struct nouveau_drm *drm,
{
struct drm_gpu_scheduler *drm_sched = &sched->base;
struct drm_sched_entity *entity = &sched->entity;
- long job_hang_limit = msecs_to_jiffies(NOUVEAU_SCHED_JOB_TIMEOUT_MS);
+ const long timeout = msecs_to_jiffies(NOUVEAU_SCHED_JOB_TIMEOUT_MS);
int ret;
if (!wq) {
@@ -418,7 +418,7 @@ nouveau_sched_init(struct nouveau_sched *sched, struct nouveau_drm *drm,...
2024 Jul 11
1
[PATCH] drm/nouveau: Improve variable names in nouveau_sched_init()
...c
index 32fa2e273965..ee1f49056737 100644
--- a/drivers/gpu/drm/nouveau/nouveau_sched.c
+++ b/drivers/gpu/drm/nouveau/nouveau_sched.c
@@ -404,7 +404,8 @@ nouveau_sched_init(struct nouveau_sched *sched, struct nouveau_drm *drm,
{
struct drm_gpu_scheduler *drm_sched = &sched->base;
struct drm_sched_entity *entity = &sched->entity;
- long job_hang_limit = msecs_to_jiffies(NOUVEAU_SCHED_JOB_TIMEOUT_MS);
+ const long timeout = msecs_to_jiffies(NOUVEAU_SCHED_JOB_TIMEOUT_MS);
+ const unsigned int hang_limit = 0;
int ret;
if (!wq) {
@@ -418,7 +419,7 @@ nouveau_sched_init(struct nouveau_sched...
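Both revisions rename job_hang_limit to timeout, presumably because the
jiffies value is what nouveau passes as the timeout argument of
drm_sched_init() rather than as a hang limit; v1 additionally makes the
unused hang limit explicit as a const 0. A minimal sketch of the
resulting call site, assuming the drm_sched_init() signature of the
kernels this patch targets; nouveau_sched_ops, credit_limit, and
NOUVEAU_SCHED_PRIORITY_COUNT are taken from the surrounding nouveau code
and may differ in your tree:

/* Illustrative only: verify drm_sched_init()'s argument order against
 * the kernel you build; it has changed across releases.
 */
const long timeout = msecs_to_jiffies(NOUVEAU_SCHED_JOB_TIMEOUT_MS);

ret = drm_sched_init(drm_sched, &nouveau_sched_ops, wq,
                     NOUVEAU_SCHED_PRIORITY_COUNT,
                     credit_limit,
                     0,        /* hang_limit: no job-hang budget */
                     timeout,  /* formerly misnamed job_hang_limit */
                     NULL, NULL, "nouveau_sched", drm->dev);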
2023 Jul 25
1
[PATCH drm-misc-next v8 11/12] drm/nouveau: implement new VM_BIND uAPI
...hat or prove
> that we can get EXEC to scale without it.
From #nouveau:
<gfxstrand> CTSing now
<gfxstrand> It looks like it's still going to take 1.5 hours.
I may have an idea of what the issue could be; let me explain.
Currently, there is a single drm_gpu_scheduler with a drm_sched_entity
per client (for VM_BIND jobs) and a drm_sched_entity per channel (for
EXEC jobs).
For VM_BIND jobs the corresponding PT[E]s are allocated before the job
is pushed to the corresponding drm_sched_entity. The PT[E]s are freed by
the scheduler's free() callback, which pushes work to a single-threaded
wo...
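The topology described above, one drm_gpu_scheduler shared by per-client
and per-channel entities, amounts to initializing every entity against
the same one-element scheduler list. A minimal sketch, assuming the
drm_sched_entity_init() signature of that era; the cli/chan containers
and entity field names are hypothetical:

struct drm_gpu_scheduler *sched_list[] = { &drm->sched };

/* one entity per client, feeding VM_BIND jobs to the shared scheduler */
ret = drm_sched_entity_init(&cli->vm_bind_entity,
                            DRM_SCHED_PRIORITY_NORMAL,
                            sched_list, ARRAY_SIZE(sched_list),
                            NULL /* guilty */);

/* one entity per channel, feeding EXEC jobs to the same scheduler */
ret = drm_sched_entity_init(&chan->exec_entity,
                            DRM_SCHED_PRIORITY_NORMAL,
                            sched_list, ARRAY_SIZE(sched_list),
                            NULL /* guilty */);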
2023 Jul 25
1
[PATCH drm-misc-next v8 11/12] drm/nouveau: implement new VM_BIND uAPI
...to scale without it.
>
> From #nouveau:
>
> <gfxstrand> CTSing now
> <gfxstrand> It looks like it's still going to take 1.5 hours.
>
> I may have an idea of what the issue could be; let me explain.
>
> Currently, there is a single drm_gpu_scheduler with a drm_sched_entity
> per client (for VM_BIND jobs) and a drm_sched_entity per channel (for
> EXEC jobs).
>
> For VM_BIND jobs the corresponding PT[E]s are allocated before the job
> is pushed to the corresponding drm_sched_entity. The PT[E]s are freed by
> the scheduler's free() callback, which pushes work...
2023 Jul 25
1
[PATCH drm-misc-next v8 11/12] drm/nouveau: implement new VM_BIND uAPI
...From #nouveau:
>
> <gfxstrand> CTSing now
> <gfxstrand> It looks like it's still going to take 1.5 hours.
>
> I may have an idea of what the issue could be; let me explain.
>
> Currently, there is a single drm_gpu_scheduler with a drm_sched_entity
> per client (for VM_BIND jobs) and a drm_sched_entity per channel (for
> EXEC jobs).
>
> For VM_BIND jobs the corresponding PT[E]s are allocated before the job
> is pushed to the corresponding drm_sched_entity. The PT[E]s are freed by
> the scheduler...
2020 Feb 07
11
[PATCH 0/6] drm: Provide a simple encoder
Many DRM drivers implement an encoder whose callbacks are all empty. This
patchset adds drm_simple_encoder_init() and drm_simple_encoder_create(),
which drivers can use instead. Except for the destroy callback, the simple
encoder's implementation is empty.
The patchset also converts 4 encoder instances to use the simple-encoder
helpers. But there are at least 11 other drivers which can
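As a hedged sketch of such a conversion, a driver drops its empty
drm_encoder_funcs implementation and lets the helper supply one; the
priv->encoder field and the encoder type are illustrative:

#include <drm/drm_simple_kms_helper.h>

/* Initializes the encoder with the helper's stub callbacks; per the
 * cover letter, only the destroy callback is non-empty.
 */
ret = drm_simple_encoder_init(dev, &priv->encoder,
                              DRM_MODE_ENCODER_NONE);
if (ret)
        return ret;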