Daniel Vetter
2014-Jul-23 09:30 UTC
[Nouveau] [PATCH 09/17] drm/radeon: use common fence implementation for fences
On Wed, Jul 23, 2014 at 11:27 AM, Christian König
<christian.koenig at amd.com> wrote:
> You submit a job to the hardware and then block the job to wait for radeon
> to be finished? Well then this would indeed require a hardware reset, but
> wouldn't that make the whole problem even worse?
>
> I mean currently we block one userspace process to wait for other hardware
> to be finished with a buffer, but what you are describing here blocks the
> whole hardware to wait for other hardware, which in the end blocks all
> userspace processes accessing the hardware.

There is nothing new here with prime - if one context hangs the gpu it
blocks everyone else on i915.

> Talking about alternative approaches, wouldn't it be simpler to just offload
> the waiting to a different kernel or userspace thread?

Well, this is exactly what we'll do once we have the scheduler. But
this is an orthogonal issue imo.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
Christian König
2014-Jul-23 09:36 UTC
[Nouveau] [PATCH 09/17] drm/radeon: use common fence implementation for fences
On 23.07.2014 11:30, Daniel Vetter wrote:
> On Wed, Jul 23, 2014 at 11:27 AM, Christian König
> <christian.koenig at amd.com> wrote:
>> You submit a job to the hardware and then block the job to wait for radeon
>> to be finished? Well then this would indeed require a hardware reset, but
>> wouldn't that make the whole problem even worse?
>>
>> I mean currently we block one userspace process to wait for other hardware
>> to be finished with a buffer, but what you are describing here blocks the
>> whole hardware to wait for other hardware, which in the end blocks all
>> userspace processes accessing the hardware.
> There is nothing new here with prime - if one context hangs the gpu it
> blocks everyone else on i915.
>
>> Talking about alternative approaches, wouldn't it be simpler to just offload
>> the waiting to a different kernel or userspace thread?
> Well this is exactly what we'll do once we have the scheduler. But
> this is an orthogonal issue imo.

Mhm, couldn't we have the scheduler first?

Because that sounds like reducing the necessary fence interface to just a
fence->wait function.

Christian.

> -Daniel
Maarten Lankhorst
2014-Jul-23 09:38 UTC
[Nouveau] [PATCH 09/17] drm/radeon: use common fence implementation for fences
On 23-07-14 11:36, Christian König wrote:
> On 23.07.2014 11:30, Daniel Vetter wrote:
>> On Wed, Jul 23, 2014 at 11:27 AM, Christian König
>> <christian.koenig at amd.com> wrote:
>>> You submit a job to the hardware and then block the job to wait for radeon
>>> to be finished? Well then this would indeed require a hardware reset, but
>>> wouldn't that make the whole problem even worse?
>>>
>>> I mean currently we block one userspace process to wait for other hardware
>>> to be finished with a buffer, but what you are describing here blocks the
>>> whole hardware to wait for other hardware, which in the end blocks all
>>> userspace processes accessing the hardware.
>> There is nothing new here with prime - if one context hangs the gpu it
>> blocks everyone else on i915.
>>
>>> Talking about alternative approaches, wouldn't it be simpler to just offload
>>> the waiting to a different kernel or userspace thread?
>> Well this is exactly what we'll do once we have the scheduler. But
>> this is an orthogonal issue imo.
>
> Mhm, couldn't we have the scheduler first?
>
> Because that sounds like reducing the necessary fence interface to just
> a fence->wait function.

You would also lose benefits like having a 'perf timechart' for gpus.

~Maarten
Daniel Vetter
2014-Jul-23 09:39 UTC
[Nouveau] [PATCH 09/17] drm/radeon: use common fence implementation for fences
On Wed, Jul 23, 2014 at 11:36 AM, Christian König
<christian.koenig at amd.com> wrote:
> On 23.07.2014 11:30, Daniel Vetter wrote:
>
>> On Wed, Jul 23, 2014 at 11:27 AM, Christian König
>> <christian.koenig at amd.com> wrote:
>>>
>>> You submit a job to the hardware and then block the job to wait for
>>> radeon to be finished? Well then this would indeed require a hardware
>>> reset, but wouldn't that make the whole problem even worse?
>>>
>>> I mean currently we block one userspace process to wait for other
>>> hardware to be finished with a buffer, but what you are describing here
>>> blocks the whole hardware to wait for other hardware, which in the end
>>> blocks all userspace processes accessing the hardware.
>>
>> There is nothing new here with prime - if one context hangs the gpu it
>> blocks everyone else on i915.
>>
>>> Talking about alternative approaches, wouldn't it be simpler to just
>>> offload the waiting to a different kernel or userspace thread?
>>
>> Well this is exactly what we'll do once we have the scheduler. But
>> this is an orthogonal issue imo.
>
> Mhm, couldn't we have the scheduler first?
>
> Because that sounds like reducing the necessary fence interface to just
> a fence->wait function.

The scheduler needs to keep track of a lot of fences, so I think we'll
have to register callbacks, not rely on a simple wait function. We must
keep track of all the non-i915 fences for all outstanding batches. Also,
the scheduler doesn't eliminate the hw queue, it only keeps it much
shorter so that we can sneak in higher-priority things.

Really, scheduler or not is orthogonal.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch