similar to: [PATCH 25/65] drm/nouveau: Drop mutex_lock_nested for atomic

Displaying 20 results from an estimated 1000 matches similar to: "[PATCH 25/65] drm/nouveau: Drop mutex_lock_nested for atomic"

2020 Aug 02
2
[PATCH] drm/nouveau: Drop mutex_lock_nested for atomic
Purely conjecture, but I think the original lock inversion between the legacy page flip code and ttm's bo move function shouldn't exist anymore with atomic: with atomic, the bo pinning and the actual modeset commit are completely separated in the code paths. This annotation was originally added in commit 060810d7abaabcab282e062c595871d661561400 Author: Ben Skeggs <bskeggs at
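For context, a minimal sketch of the kind of change the patch describes, using illustrative names rather than the actual nouveau structures (assume a simplified bo-move helper guarded by a per-client mutex):

#include <linux/mutex.h>

/* Illustrative stand-in for the per-client state; not the real nouveau type. */
struct demo_cli {
	struct mutex mutex;
};

static int demo_bo_move(struct demo_cli *cli)
{
	/*
	 * Before: the acquisition carried a lockdep subclass to silence a
	 * reported inversion against the legacy page-flip path:
	 *
	 *     mutex_lock_nested(&cli->mutex, SINGLE_DEPTH_NESTING);
	 *
	 * After: a plain lock, so lockdep validates the real ordering again.
	 */
	mutex_lock(&cli->mutex);

	/* ... queue the buffer copy on the channel ... */

	mutex_unlock(&cli->mutex);
	return 0;
}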
2020 Aug 03
0
[PATCH] drm/nouveau: Drop mutex_lock_nested for atomic
On 02-08-2020 at 20:18, Daniel Vetter wrote: > Purely conjecture, but I think the original lock inversion between the > legacy page flip code and ttm's bo move function > shouldn't exist anymore with atomic: with atomic, the bo pinning and > the actual modeset commit are completely separated in the code paths. > > This annotation was originally added in >
2020 Sep 29
0
[PATCH] drm/nouveau: Drop mutex_lock_nested for atomic
On Thu, Sep 17, 2020 at 3:15 PM Daniel Vetter <daniel.vetter at ffwll.ch> wrote: > > Ben, did you have a chance to look at this? Ping -Daniel > On Mon, Aug 3, 2020 at 1:22 PM Maarten Lankhorst > <maarten.lankhorst at linux.intel.com> wrote: > > > > On 02-08-2020 at 20:18, Daniel Vetter wrote: > > > Purely conjecture, but I think the original locking
2020 Sep 17
2
[PATCH] drm/nouveau: Drop mutex_lock_nested for atomic
Ben, did you have a chance to look at this? -Daniel On Mon, Aug 3, 2020 at 1:22 PM Maarten Lankhorst <maarten.lankhorst at linux.intel.com> wrote: > > On 02-08-2020 at 20:18, Daniel Vetter wrote: > > Purely conjecture, but I think the original lock inversion between the > > legacy page flip code and ttm's bo move function > > shouldn't exist
2020 Sep 30
0
[PATCH] drm/nouveau: Drop mutex_lock_nested for atomic
On Wed, Sep 30, 2020 at 10:45:05AM +1000, Ben Skeggs wrote: > On Wed, 30 Sep 2020 at 00:52, Daniel Vetter <daniel.vetter at ffwll.ch> wrote: > > > > On Thu, Sep 17, 2020 at 3:15 PM Daniel Vetter <daniel.vetter at ffwll.ch> wrote: > > > > > > Ben, did you have a chance to look at this? > > > > Ping > > -Daniel > > > > >
2020 Sep 30
1
[PATCH] drm/nouveau: Drop mutex_lock_nested for atomic
On Wed, 30 Sep 2020 at 19:37, Daniel Vetter <daniel at ffwll.ch> wrote: > > On Wed, Sep 30, 2020 at 10:45:05AM +1000, Ben Skeggs wrote: > > On Wed, 30 Sep 2020 at 00:52, Daniel Vetter <daniel.vetter at ffwll.ch> wrote: > > > > > > On Thu, Sep 17, 2020 at 3:15 PM Daniel Vetter <daniel.vetter at ffwll.ch> wrote: > > > > > > > >
2020 Sep 30
2
[PATCH] drm/nouveau: Drop mutex_lock_nested for atomic
On Wed, 30 Sep 2020 at 00:52, Daniel Vetter <daniel.vetter at ffwll.ch> wrote: > > On Thu, Sep 17, 2020 at 3:15 PM Daniel Vetter <daniel.vetter at ffwll.ch> wrote: > > > > Ben, did you have a chance to look at this? > > Ping > -Daniel > > > On Mon, Aug 3, 2020 at 1:22 PM Maarten Lankhorst > > <maarten.lankhorst at linux.intel.com> wrote:
2013 Jul 01
1
[PATCH] drm/nouveau: fix locking in nouveau_crtc_page_flip
This is a bit messed up because chan->cli->mutex is a different class, depending on whether it is the global drm client or not. This is because the global cli->mutex lock can be taken for eviction, so locking it before pinning the buffer objects may result in a deadlock. The locking order from outer to inner is: - &cli->mutex - ttm_bo - &drm_client_lock (global
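As a rough sketch of the nesting the message describes (hypothetical names, not the actual nouveau fix): the caller already holds one client's mutex when the global client's mutex, which shares the same lock class, has to be taken for eviction, so the inner acquisition gets a lockdep subclass:

#include <linux/mutex.h>

struct demo_cli {
	struct mutex mutex;	/* same lock class for every client */
};

static void demo_lock_for_eviction(struct demo_cli *user, struct demo_cli *global)
{
	mutex_lock(&user->mutex);
	/* Same class as user->mutex, so annotate the nesting for lockdep. */
	mutex_lock_nested(&global->mutex, SINGLE_DEPTH_NESTING);

	/* ... evict or move buffer objects under the global client ... */

	mutex_unlock(&global->mutex);
	mutex_unlock(&user->mutex);
}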
2018 Jan 23
5
[PATCH net 1/2] vhost: use mutex_lock_nested() in vhost_dev_lock_vqs()
We used to call mutex_lock() in vhost_dev_lock_vqs(), which tries to hold the mutexes of all virtqueues. This may confuse lockdep into reporting a possible deadlock because it sees several locks of the same class being held. Switch to mutex_lock_nested() to avoid the false positive. Fixes: 6b1e6cc7855b0 ("vhost: new device IOTLB API") Reported-by: syzbot+dbb7c1161485e61b0241 at
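The usual pattern here, sketched below with hypothetical types, is to take each virtqueue mutex with its index as the lockdep subclass (the subclass has to stay below lockdep's limit of 8):

#include <linux/mutex.h>

#define DEMO_NVQS 8	/* keep subclasses below MAX_LOCKDEP_SUBCLASSES */

struct demo_vq {
	struct mutex mutex;	/* all vq mutexes share one lock class */
};

struct demo_dev {
	struct demo_vq vqs[DEMO_NVQS];
};

static void demo_dev_lock_vqs(struct demo_dev *d)
{
	int i;

	/* A per-index subclass tells lockdep the nesting is intentional. */
	for (i = 0; i < DEMO_NVQS; i++)
		mutex_lock_nested(&d->vqs[i].mutex, i);
}

static void demo_dev_unlock_vqs(struct demo_dev *d)
{
	int i;

	for (i = DEMO_NVQS - 1; i >= 0; i--)
		mutex_unlock(&d->vqs[i].mutex);
}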
2018 Jan 24
1
[PATCH net 1/2] vhost: use mutex_lock_nested() in vhost_dev_lock_vqs()
On Wed, Jan 24, 2018 at 04:38:30PM -0500, David Miller wrote: > From: Jason Wang <jasowang at redhat.com> > Date: Tue, 23 Jan 2018 17:27:25 +0800 > > > We used to call mutex_lock() in vhost_dev_lock_vqs(), which tries to > > hold the mutexes of all virtqueues. This may confuse lockdep into reporting a > > possible deadlock because it sees several locks of the same >
2018 Jan 24
0
[PATCH net 1/2] vhost: use mutex_lock_nested() in vhost_dev_lock_vqs()
From: Jason Wang <jasowang at redhat.com> Date: Tue, 23 Jan 2018 17:27:25 +0800 > We used to call mutex_lock() in vhost_dev_lock_vqs(), which tries to > hold the mutexes of all virtqueues. This may confuse lockdep into reporting a > possible deadlock because it sees several locks of the same > class being held. Switch to mutex_lock_nested() to avoid the false positive. > > Fixes:
2014 May 14
0
[RFC PATCH v1 07/16] drm/nouveau: rework to new fence interface
From: Maarten Lankhorst <maarten.lankhorst at ubuntu.com> Signed-off-by: Maarten Lankhorst <maarten.lankhorst at canonical.com> --- drivers/gpu/drm/nouveau/core/core/event.c | 4 drivers/gpu/drm/nouveau/nouveau_bo.c | 6 drivers/gpu/drm/nouveau/nouveau_display.c | 4 drivers/gpu/drm/nouveau/nouveau_fence.c | 434 ++++++++++++++++++++---------
2019 Oct 01
0
[PATCH net v3] vsock: Fix a lockdep warning in __vsock_release()
On Mon, Sep 30, 2019 at 06:43:50PM +0000, Dexuan Cui wrote: > Lockdep is unhappy if two locks from the same class are held. > > Fix the below warning for hyperv and virtio sockets (vmci socket code > doesn't have the issue) by using lock_sock_nested() when __vsock_release() > is called recursively: > > ============================================ > WARNING: possible
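The fix hinges on lock_sock_nested(): when the release path recurses into a listener's pending child socket, the child's lock (same class as the listener's) is taken with a nested annotation. A simplified sketch, with an illustrative function name:

#include <net/sock.h>

static void demo_release(struct sock *sk, int level)
{
	/*
	 * level is 0 for the top-level socket and SINGLE_DEPTH_NESTING when
	 * called recursively for a pending child of a listener.
	 */
	lock_sock_nested(sk, level);

	/* ... tear down state; recurse into pending children with
	 * demo_release(child, SINGLE_DEPTH_NESTING) ... */

	release_sock(sk);
}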
2019 Sep 26
0
[PATCH net v2] vsock: Fix a lockdep warning in __vsock_release()
Hi Dexuan, On Thu, Sep 26, 2019 at 01:11:27AM +0000, Dexuan Cui wrote: > Lockdep is unhappy if two locks from the same class are held. > > Fix the below warning for hyperv and virtio sockets (vmci socket code > doesn't have the issue) by using lock_sock_nested() when __vsock_release() > is called recursively: > > ============================================ >
2023 Jan 28
1
[PATCH] nouveau: explicitly wait on the fence in nouveau_bo_move_m2mf
Hi Greg, I'm not the reporter, so I would like him to confirm explicitly, but I believe I can give some context: On Sat, Jan 28, 2023 at 06:51:08PM +0100, Greg KH wrote: > On Sat, Jan 28, 2023 at 03:49:59PM +0100, Computer Enthusiastic wrote: > > Hello, > > > > The patch "[Nouveau] [PATCH] nouveau: explicitly wait on the fence in > > nouveau_bo_move_m2mf"
2023 Jan 30
1
[PATCH] nouveau: explicitly wait on the fence in nouveau_bo_move_m2mf
On Sun, Jan 29, 2023 at 10:36:31PM +0100, Computer Enthusiastic wrote: > Hello Greg, > Hello Salvatore, > > On 28/01/2023 20:49, Salvatore Bonaccorso wrote: > > Hi Greg, > > > > I'm not the reporter, so I would like him to confirm explicitly, but I > > believe I can give some context: > > > > On Sat, Jan 28, 2023 at 06:51:08PM +0100, Greg KH
2018 Jun 30
0
[PATCH net-next v3 2/4] net: vhost: replace magic number of lock annotation
From: Tonghao Zhang <xiangxia.m.yue at gmail.com> Use the VHOST_NET_VQ_XXX as a subclass for mutex_lock_nested. Signed-off-by: Tonghao Zhang <zhangtonghao at didichuxing.com> --- drivers/vhost/net.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index e7cf7d2..62bb8e8 100644 --- a/drivers/vhost/net.c +++
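In effect the patch replaces bare subclass numbers with the driver's existing queue-index enumerators so the annotation documents itself; a sketch of the idea with illustrative names (in drivers/vhost/net.c the enumerators are VHOST_NET_VQ_RX and VHOST_NET_VQ_TX):

#include <linux/mutex.h>

/* Queue indices, reused as lockdep subclasses instead of bare 0/1. */
enum { DEMO_VQ_RX = 0, DEMO_VQ_TX = 1 };

struct demo_vq {
	struct mutex mutex;
};

static void demo_handle_rx(struct demo_vq *rx)
{
	/* The named enumerator records which queue this subclass stands for. */
	mutex_lock_nested(&rx->mutex, DEMO_VQ_RX);

	/* ... process received buffers ... */

	mutex_unlock(&rx->mutex);
}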
2019 Sep 27
0
[PATCH net v2] vsock: Fix a lockdep warning in __vsock_release()
On Fri, Sep 27, 2019 at 05:37:20AM +0000, Dexuan Cui wrote: > > From: linux-hyperv-owner at vger.kernel.org > > <linux-hyperv-owner at vger.kernel.org> On Behalf Of Stefano Garzarella > > Sent: Thursday, September 26, 2019 12:48 AM > > > > Hi Dexuan, > > > > On Thu, Sep 26, 2019 at 01:11:27AM +0000, Dexuan Cui wrote: > > > ... > >
2011 Sep 01
3
DOM0 Hang on a large box....
Hi, I'm looking at a system hang on a large box: 160 cpus, 2TB. Dom0 is booted with 160 vcpus (don't ask me why :)), and an HVM guest is started with over 1.5T RAM and 128 vcpus. The system hangs without much activity after a couple of hours. Xen 4.0.2 and a 2.6.32-based 64-bit dom0. During the hang I discovered: most of dom0's vcpus are in double_lock_balance, spinning on one of the locks: