search for: mutexing

Displaying 13 results from an estimated 3417 matches for "mutexing".

2003 Jul 31
3
Mutex problem in sip?
Hello, CVS 07/31/03. Test with 130+ PSTN-to-SIP calls. Asterisk gets locked ... grep -e "Error" -e "eventually" p-console chan_sip.c line 1453 (sip_alloc): Error obtaining mutex: Device or resource busy chan_sip.c line 1453 (sip_alloc): Got it eventually... chan_sip.c line 1453 (sip_alloc): Error obtaining mutex: Device or resource busy chan_sip.c line 1453 (sip_alloc): Got
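
The two log lines above come from a lock call that retries rather than blocking. A minimal sketch of that pattern, with hypothetical names (this is not Asterisk's actual lock.h):

    #include <errno.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Retry pthread_mutex_trylock() and log the two messages seen in
     * the excerpt: one when the lock is first found busy, one once it
     * is finally acquired. Illustrative only. */
    static int log_mutex_lock(pthread_mutex_t *m, const char *file,
                              int line, const char *func)
    {
        int res, retries = 0;

        while ((res = pthread_mutex_trylock(m)) == EBUSY) {
            if (retries++ == 0)
                fprintf(stderr, "%s line %d (%s): Error obtaining mutex: "
                        "Device or resource busy\n", file, line, func);
            usleep(200);        /* back off briefly before retrying */
        }
        if (retries)
            fprintf(stderr, "%s line %d (%s): Got it eventually...\n",
                    file, line, func);
        return res;
    }
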
2023 Mar 02
1
[PATCH v2 7/8] vdpa_sim: replace the spinlock with a mutex to protect the state
The spinlock we use to protect the state of the simulator is sometimes held for a long time (for example, when devices handle requests). It also prevents us from calling functions that might sleep (such as kthread_flush_work() in the next patch) without first releasing and retaking the lock. For these reasons, let's replace the spinlock with a mutex, which gives us more flexibility.
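
A before/after sketch of the trade-off described here, with hypothetical names rather than the actual vdpa_sim code (lock initialization omitted):

    #include <linux/mutex.h>
    #include <linux/spinlock.h>

    struct sim_before { spinlock_t lock; int state; };

    static void handle_req_before(struct sim_before *s)
    {
        spin_lock(&s->lock);
        /* long request handling; must NOT sleep under a spinlock */
        spin_unlock(&s->lock);
    }

    struct sim_after { struct mutex lock; int state; };

    static void handle_req_after(struct sim_after *s)
    {
        mutex_lock(&s->lock);
        /* may now call functions that sleep, e.g. kthread_flush_work() */
        mutex_unlock(&s->lock);
    }
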
2009 May 22
0
[LLVMdev] CMake build maturity [was: Re: Arm port]
Hi, just chiming in here... Óscar Fuentes wrote: [...] This is a simple guide for using cmake with LLVM: http://www.llvm.org/docs/CMake.html The makefiles distributed with LLVM have nothing to do with cmake. From the few times I tried building LLVM with CMake I got the impression that it wasn't completely mature yet (the "TODO" sections in
2018 Nov 30
3
[PATCH] vhost: fix IOTLB locking
Commit 78139c94dc8c ("net: vhost: lock the vqs one by one") moved the vq lock to improve scalability, but introduced a possible deadlock in vhost-iotlb. vhost_iotlb_notify_vq() now takes vq->mutex while holding the device's IOTLB spinlock. And on the vhost_iotlb_miss() path, the spinlock is taken while holding vq->mutex. As long as we hold dev->mutex to prevent an ioctl
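
The bug described is the classic AB-BA lock-order inversion. A minimal pthread sketch of the two paths, using hypothetical names rather than the actual vhost functions:

    #include <pthread.h>

    pthread_mutex_t iotlb_lock = PTHREAD_MUTEX_INITIALIZER; /* the IOTLB spinlock */
    pthread_mutex_t vq_mutex   = PTHREAD_MUTEX_INITIALIZER; /* vq->mutex */

    void notify_vq(void)   /* path 1: iotlb_lock, then vq_mutex */
    {
        pthread_mutex_lock(&iotlb_lock);
        pthread_mutex_lock(&vq_mutex);   /* blocks if path 2 holds it */
        pthread_mutex_unlock(&vq_mutex);
        pthread_mutex_unlock(&iotlb_lock);
    }

    void iotlb_miss(void)  /* path 2: vq_mutex, then iotlb_lock */
    {
        pthread_mutex_lock(&vq_mutex);
        pthread_mutex_lock(&iotlb_lock); /* blocks if path 1 holds it */
        pthread_mutex_unlock(&iotlb_lock);
        pthread_mutex_unlock(&vq_mutex);
    }

Once each thread has taken its first lock, both block forever on the second.
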
2014 Jun 05
2
[PATCH 1/2] vhost: move acked_features to VQs
Refactor code to make sure features are only accessed under VQ mutex. This makes everything simpler, no need for RCU here anymore. Signed-off-by: Michael S. Tsirkin <mst at redhat.com> --- This is on top of the recent pull request that I sent.
 drivers/vhost/vhost.h | 11 +++--------
 drivers/vhost/net.c   |  8 +++-----
 drivers/vhost/scsi.c  | 22 +++++++++++++---------
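
The access pattern the patch describes, as a sketch with illustrative names (not the actual vhost structures): the feature bits live in each VQ and are only touched while that VQ's mutex is held, so no RCU grace period is needed.

    #include <linux/mutex.h>
    #include <linux/types.h>

    struct vq {
        struct mutex mutex;
        u64 acked_features;      /* protected by the mutex above */
    };

    static bool vq_has_feature(struct vq *vq, int bit)
    {
        /* caller must hold vq->mutex */
        return vq->acked_features & (1ULL << bit);
    }
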
2013 Feb 18
4
[PATCH for discussion only 0/3] Implement mutexes to limit number of concurrent instances of libguestfs.
These three patches (for discussion only, NOT to be applied) implement a mutex system that lets the user limit the number of libguestfs instances that can be launched per host. There are two uses that I have identified for this: firstly, so we can enable parallel-tests (the default in automake >= 1.13) without blowing up the host; secondly, oVirt has raised concerns about how to limit the
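
One simple way to cap concurrent instances per host is a named semaphore shared by all processes. This is only an assumed sketch of the general idea, not the mechanism these patches actually implement:

    #include <fcntl.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define MAX_INSTANCES 4   /* hypothetical per-host limit */

    int main(void)
    {
        sem_t *sem = sem_open("/guestfs-limit", O_CREAT, 0644, MAX_INSTANCES);
        if (sem == SEM_FAILED) { perror("sem_open"); return 1; }

        sem_wait(sem);        /* blocks once MAX_INSTANCES are running */
        /* ... launch and use the libguestfs appliance ... */
        sem_post(sem);        /* release the slot on shutdown */

        sem_close(sem);
        return 0;
    }
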
2018 Nov 29
2
[REBASE PATCH net-next v9 1/4] net: vhost: lock the vqs one by one
Hi, On 25/09/2018 13:36, xiangxia.m.yue at gmail.com wrote: From: Tonghao Zhang <xiangxia.m.yue at gmail.com> This patch changes the way that we lock all vqs at the same time, locking them one by one instead. It will be used by the next patch to avoid the deadlock. Signed-off-by: Tonghao Zhang <xiangxia.m.yue at gmail.com> Acked-by: Jason Wang <jasowang at
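
A sketch of the before/after locking scheme the commit message describes, with hypothetical names:

    #include <linux/mutex.h>

    /* Before: hold every vq mutex at once. */
    static void process_all_before(struct mutex *vq_mutex, int nvqs)
    {
        int i;
        for (i = 0; i < nvqs; i++)
            mutex_lock(&vq_mutex[i]);
        /* ... work touching all vqs ... */
        for (i = nvqs - 1; i >= 0; i--)
            mutex_unlock(&vq_mutex[i]);
    }

    /* After: at most one vq mutex is held at any time. */
    static void process_all_after(struct mutex *vq_mutex, int nvqs)
    {
        int i;
        for (i = 0; i < nvqs; i++) {
            mutex_lock(&vq_mutex[i]);
            /* ... per-vq work ... */
            mutex_unlock(&vq_mutex[i]);
        }
    }
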
2013 May 07
5
[PATCH 0/4] vhost private_data rcu removal
Asias He (4):
  vhost-net: Always access vq->private_data under vq mutex
  vhost-test: Always access vq->private_data under vq mutex
  vhost-scsi: Always access vq->private_data under vq mutex
  vhost: Remove custom vhost rcu usage

 drivers/vhost/net.c  | 37 ++++++++++++++++---------------------
 drivers/vhost/scsi.c | 17 ++++++-----------
 drivers/vhost/test.c | 20
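
The pattern named in the series title, sketched with illustrative types: the backend pointer is read directly under vq->mutex instead of through rcu_dereference().

    #include <linux/mutex.h>

    struct vq { struct mutex mutex; void *private_data; };

    static void *vq_get_backend(struct vq *vq)
    {
        /* caller must hold vq->mutex; no rcu_read_lock() needed */
        return vq->private_data;
    }
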
2018 Dec 11
2
[PATCH net 2/4] vhost_net: rework on the lock ordering for busy polling
On Mon, Dec 10, 2018 at 05:44:52PM +0800, Jason Wang wrote: When we try to do rx busy polling in the tx path in commit 441abde4cd84 ("net: vhost: add rx busy polling in tx path"), we lock the rx vq mutex after the tx vq mutex is held. This may lead to deadlock, so we try to lock the vqs one by one in commit 78139c94dc8c ("net: vhost: lock the vqs one by one"). With this
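
One standard cure for this kind of inversion is a fixed global lock order, e.g. always taking the lower-indexed vq first. A hypothetical sketch, not the actual vhost_net fix:

    #include <linux/mutex.h>

    static void lock_vq_pair(struct mutex *a, int ia, struct mutex *b, int ib)
    {
        /* acquire in index order so tx-then-rx and rx-then-tx
         * callers can never deadlock against each other */
        if (ia < ib) {
            mutex_lock(a);
            mutex_lock(b);
        } else {
            mutex_lock(b);
            mutex_lock(a);
        }
    }
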
2006 Sep 04
2
The real reason why Sync and Mutex behave differently
As I've mentioned before, Sync and Mutex are very similar, and Mutex is very simple. Their locking algorithm (for exclusive locking) is essentially identical. And in some detailed examinations of Mutex's behavior, there's nothing superficially wrong with it. It's pure ruby, so there are no funny memory allocations at the C level, and it essentially operates
2014 Jun 02
4
[PULL 2/2] vhost: replace rcu with mutex
On Tue, 2014-06-03 at 00:30 +0300, Michael S. Tsirkin wrote: All memory accesses are done under some VQ mutex. So lock/unlock all VQs is a faster equivalent of synchronize_rcu() for memory access changes. Some guests cause a lot of these changes, so it's helpful to make them faster. Reported-by: "Gonglei (Arei)" <arei.gonglei at huawei.com>
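
A sketch of the equivalence being claimed, with hypothetical names: if every reader of the memory table holds some vq mutex while reading, then taking and dropping each vq mutex once waits out all readers of the old table, which is exactly the guarantee synchronize_rcu() provides.

    #include <linux/mutex.h>

    static void wait_for_memory_readers(struct mutex *vq_mutex, int nvqs)
    {
        int i;
        for (i = 0; i < nvqs; i++) {
            mutex_lock(&vq_mutex[i]);   /* waits out any current reader */
            mutex_unlock(&vq_mutex[i]);
        }
    }
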
2014 Jun 07
4
[LLVMdev] Multi-threading and mutexes in LLVM
On Fri, Jun 6, 2014 at 10:57 PM, Kostya Serebryany <kcc at google.com> wrote: As for the deadlocks, indeed it is possible to add deadlock detection directly to std::mutex and std::spinlock code. It may even end up being more efficient than a standalone deadlock detector -- but only if we can add an extra word to the mutex/spinlock object. The deadlock
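
A sketch of how one extra word per lock can drive deadlock detection, assuming hypothetical structures (this is not LLVM code): each lock records its owner, each thread records the lock it is waiting for, and a waiter walks owner -> wanted-lock edges looking for itself.

    #include <pthread.h>

    struct thread_info;

    struct dbg_mutex {
        pthread_mutex_t m;
        struct thread_info *owner;   /* the extra word */
    };

    struct thread_info {
        struct dbg_mutex *wants;     /* lock this thread is blocked on */
    };

    static int would_deadlock(struct thread_info *self, struct dbg_mutex *lk)
    {
        int depth;
        /* bound the walk so an unrelated cycle cannot loop forever */
        for (depth = 0; lk && lk->owner && depth < 64; depth++) {
            if (lk->owner == self)
                return 1;            /* cycle leads back to us: deadlock */
            lk = lk->owner->wants;
        }
        return 0;
    }
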
2014 Jun 05
1
[PATCH v2 1/2] vhost: move acked_features to VQs
Refactor code to make sure features are only accessed under VQ mutex. This makes everything simpler, no need for RCU here anymore. Signed-off-by: Michael S. Tsirkin <mst at redhat.com> --- Note: this is on top of my last pull request
 drivers/vhost/vhost.h | 11 +++--------
 drivers/vhost/net.c   |  8 +++-----
 drivers/vhost/scsi.c  | 22 +++++++++++++---------
 drivers/vhost/test.c  |  9