similar to: [LLVMdev] Multi-threading and mutexes in LLVM

Displaying 18 results from an estimated 10000 matches similar to: "[LLVMdev] Multi-threading and mutexes in LLVM"

2020 Apr 12
7
LLVM multithreading support
Hi all, I was looking at the profile for a tool I’m working on, and noticed that it is spending 10% of its time doing locking-related stuff. The structure of the tool is that it reads in a ton of stuff (e.g. one moderate example I’m working with is 40M of input) into MLIR, then uses its multithreaded pass manager to do transformations. As it happens, the structure of this is that the parsing
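A minimal sketch of the structure that message describes (a single-threaded parse phase followed by parallel transformation), using plain std::thread rather than MLIR's actual pass manager; the Function type and runPasses pass here are hypothetical stand-ins:

    // Sketch only: parse single-threaded (no locks needed), then
    // transform each unit on its own thread, as a multithreaded
    // pass manager would.
    #include <functional>
    #include <string>
    #include <thread>
    #include <vector>

    struct Function { std::string body; };

    // Hypothetical stand-in for a transformation pass.
    void runPasses(Function &f) { f.body += " (optimized)"; }

    int main() {
      // Phase 1: single-threaded parsing.
      std::vector<Function> funcs(8, Function{"..."});

      // Phase 2: one worker per function, joined at the end.
      std::vector<std::thread> workers;
      for (Function &f : funcs)
        workers.emplace_back(runPasses, std::ref(f));
      for (std::thread &t : workers)
        t.join();
    }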
2015 Sep 08
2
[ThreadSanitizer] Get deadlocks working
+thread-sanitizer mailing list On Mon, Sep 7, 2015 at 6:41 PM, Vaivaswatha Nagaraj <vn at compilertree.com> wrote: > Hi, > > I am interested in understanding the compiler-rt thread sanitizer tool and have > recently started experimenting with it. In particular, I'm interested in the > deadlock detector. > > I see that deadlock detection currently doesn't work. (I
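For context, the classic lock-order inversion that tsan's deadlock detector is meant to flag; a hedged sketch, noting that the detect_deadlocks runtime flag has been off by default in some releases:

    // Build: clang++ -fsanitize=thread -g inversion.cpp
    // Run:   TSAN_OPTIONS=detect_deadlocks=1 ./a.out
    #include <mutex>
    #include <thread>

    std::mutex a, b;

    int main() {
      // First thread establishes the order a -> b.
      std::thread t1([] {
        std::lock_guard<std::mutex> l1(a);
        std::lock_guard<std::mutex> l2(b);
      });
      t1.join();

      // Second thread takes the same locks in the opposite order:
      // a potential deadlock, even though this run completes.
      std::thread t2([] {
        std::lock_guard<std::mutex> l1(b);
        std::lock_guard<std::mutex> l2(a);
      });
      t2.join();
    }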
2020 Apr 12
3
LLVM multithreading support
On Apr 12, 2020, at 2:23 PM, Eli Friedman <efriedma at quicinc.com> wrote: > > Yes, the llvm::Smart* family of locks still exist. But very few places are using them outside of MLIR; it’s more common to just use plain std::mutex. > > That said, I don’t think it’s really a good idea to use them, even if they were fixed to work as designed. It’s not composable: the boolean
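A sketch of the idea behind the llvm::Smart* locks referenced above (skip locking while the process is known to be single-threaded); this is not LLVM's implementation, and the comment notes the pairing hazard the message alludes to:

    // The "smart" mutex idea: skip locking while single-threaded.
    // Hazard: if the flag flips while a lock is logically held,
    // lock() and unlock() stop pairing up, so the flag may only
    // change at a point where no lock is held.
    #include <atomic>
    #include <mutex>

    std::atomic<bool> MultithreadingEnabled{false};

    class SmartishMutex {
      std::mutex M;
    public:
      void lock()   { if (MultithreadingEnabled.load()) M.lock(); }
      void unlock() { if (MultithreadingEnabled.load()) M.unlock(); }
    };

    SmartishMutex GlobalLock;

    int main() {
      GlobalLock.lock();    // no-op: still single-threaded
      GlobalLock.unlock();
      MultithreadingEnabled = true;
      GlobalLock.lock();    // now a real std::mutex acquisition
      GlobalLock.unlock();
    }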
2014 Jun 07
4
[LLVMdev] Multi-threading and mutexes in LLVM
On Fri, Jun 6, 2014 at 10:57 PM, Kostya Serebryany <kcc at google.com> wrote: > As for the deadlocks, indeed it is possible to add deadlock detection > directly to std::mutex and std::spinlock code. > It may even end up being more efficient than a standalone deadlock > detector -- > but only if we can add an extra word to the mutex/spinlock object. > The deadlock
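A sketch of what that one extra word in the mutex buys: storing the owner's thread id catches self-deadlock in O(1), whereas full lock-order tracking needs a graph kept outside the object, which is what a standalone detector maintains. The CheckedMutex wrapper is hypothetical:

    #include <atomic>
    #include <cassert>
    #include <mutex>
    #include <thread>

    class CheckedMutex {
      std::mutex M;
      std::atomic<std::thread::id> Owner{};  // the "extra word"
    public:
      void lock() {
        assert(Owner.load() != std::this_thread::get_id() &&
               "self-deadlock: thread already holds this mutex");
        M.lock();
        Owner.store(std::this_thread::get_id());
      }
      void unlock() {
        Owner.store(std::thread::id());
        M.unlock();
      }
    };

    int main() {
      CheckedMutex CM;
      CM.lock();
      // CM.lock();  // would assert: self-deadlock caught in O(1)
      CM.unlock();
    }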
2014 Jun 20
4
[LLVMdev] [PATCH] Replace the Execution Engine's mutex with std::recursive_mutex
OK, sounds like we're screwed. There are two options: 1. Revert and give up on C++11 threading libraries for now. 2. Do what Eric suggests: move all the mutex usage under #ifdef LLVM_ENABLE_THREADS, and disable LLVM_ENABLE_THREADS by default on MinGW. MinGW plus LLVM_ENABLE_THREADS would become unsupported. Do people have objections to 2? I don't really like it either. On Fri, Jun
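Option 2 would look roughly like the following; a hypothetical illustration, not the actual patch:

    // Compile the mutex out entirely when threads are disabled.
    #ifdef LLVM_ENABLE_THREADS
    #include <mutex>
    static std::recursive_mutex EngineLock;
    #define ENGINE_GUARD() \
      std::lock_guard<std::recursive_mutex> Guard(EngineLock)
    #else
    #define ENGINE_GUARD() do { } while (0)
    #endif

    void runJITWork() {
      ENGINE_GUARD();  // a no-op in single-threaded MinGW builds
      // ... engine work ...
    }

    int main() { runJITWork(); }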
2014 Jun 20
3
[LLVMdev] [PATCH] Replace the Execution Engine's mutex with std::recursive_mutex
Sorry, I meant only disable this for threads-win32, not threads-posix. On Fri, Jun 20, 2014 at 11:14 AM, Zachary Turner <zturner at google.com> wrote: > #2 is better if we can detect threads-win32 vs threads-posix on MinGW, and > only disable this for threads-posix. We can check for > _GLIBCXX_HAS_GTHREADS, but that seems somewhat hackish, so I wonder if > there's a better
2011 Mar 22
2
[LLVMdev] LLVM optimization passes crash when running on second thread
Hello, I am trying to modify my LLVM-based compiler to perform an initial, no-optimization compilation synchronously on startup and then perform an asynchronous, optimized recompilation in the background, and I am getting a crash in one of the optimization passes. - I am using the official release of LLVM 2.8 - I have compiled LLVM with threading enabled; I am running llvm::llvm_start_multithreaded() on
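The initialization order being described, sketched against the LLVM 2.x-era API (llvm_start_multithreaded() and llvm_stop_multithreaded() were later removed, as the 2014 thread further down discusses, so this is a historical illustration that will not build against modern LLVM):

    #include "llvm/Support/Threading.h"
    #include <thread>

    int main() {
      // Must run before any second thread touches LLVM.
      llvm::llvm_start_multithreaded();

      // Foreground: synchronous -O0 compile at startup (elided).
      // Background: optimized recompilation on a second thread.
      std::thread bg([] { /* run the optimization passes here */ });
      bg.join();

      llvm::llvm_stop_multithreaded();
    }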
2009 Jun 16
3
[LLVMdev] UPCOMING API CHANGE: Threads and LLVM
Hey folks, As you may be aware if you've been watching llvm-commits, I've been working recently on improving the ability to use LLVM across multiple threads. While the goal for now is to be able to hack on multiple Modules in parallel, this has necessitated a larger review of how LLVM interacts with threads. In a recent(-ish) patch, I added a new API:
2014 Jun 20
3
[LLVMdev] [PATCH] Replace the Execution Engine's mutex with std::recursive_mutex
It sounds like this version of libstdc++ doesn't support std::recursive_mutex from C++11. This is really unfortunate, because we were hoping that moving to C++11 would allow us to use standard, portable threading primitives. Does this version of MinGW have any C++11 threading support? Is it just recursive_mutex that is missing, or do we have to avoid std::mutex, std::call_once, etc? lld
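For readers unfamiliar with the distinction: a recursive mutex lets the owning thread re-lock, which is what the Execution Engine relies on; with a plain std::mutex the nested acquisition below would be undefined behavior (typically a deadlock). A minimal demo:

    #include <iostream>
    #include <mutex>

    std::recursive_mutex Lock;

    void inner() {
      std::lock_guard<std::recursive_mutex> G(Lock);  // re-entry: OK
      std::cout << "inner\n";
    }

    void outer() {
      std::lock_guard<std::recursive_mutex> G(Lock);
      inner();  // called while the lock is already held
    }

    int main() { outer(); }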
2011 Mar 22
0
[LLVMdev] LLVM optimization passes crash when running on second thread
On Tue, Mar 22, 2011 at 11:51 AM, Peter Zion <peter.zion at fabric-engine.com> wrote: > Hello, > > I am trying to modify my LLVM-based compiler to perform an initial, no-optimization compilation synchronously on startup and then perform an asynchronous, optimized recompilation in the background, and I am getting a crash in one of the optimization passes. > > - I am using the official
2009 Jun 16
0
[LLVMdev] UPCOMING API CHANGE: Threads and LLVM
This question is a bit of a far-future thought: There's traditionally been a fundamental assumption that static compilers are single-threaded. Many build systems assume this and assign compilation jobs one per processor. If the compiler becomes multi-threaded internally, how should the build system best schedule compilation jobs? deep On Mon, Jun 15, 2009 at 6:16 PM,
2014 Jun 09
2
[LLVMdev] Use of statics and ManagedStatics in LLVM
Based on a recent discussion[1], I started trying to remove the functions llvm_start_multithreaded() and llvm_stop_multithreaded() from the codebase. It turns out this is a little bit tricky. Consider the following scenario: During program initialization, a global static object's constructor dereferences a ManagedStatic. During dereferencing of the ManagedStatic, it needs to know whether
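One reason removing those functions became feasible: C++11 guarantees race-free initialization of function-local statics, which sidesteps the question of whether multithreading is enabled during static construction. A hypothetical sketch, not LLVM's ManagedStatic implementation:

    // The runtime serializes first use of a function-local static,
    // so no "is multithreading on yet?" check is needed, even when
    // the call comes from another global's constructor.
    struct Registry { /* ... */ };

    Registry &getRegistry() {
      static Registry R;  // initialized once, race-free, on first use
      return R;
    }

    struct EarlyUser {
      EarlyUser() { (void)getRegistry(); }  // safe during static init
    } earlyUser;

    int main() {}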
2014 Jun 09
2
[LLVMdev] Multi-threading and mutexes in LLVM
On Mon, Jun 9, 2014 at 1:21 PM, David Chisnall <David.Chisnall at cl.cam.ac.uk> wrote: > On 9 Jun 2014, at 10:19, Kostya Serebryany <kcc at google.com> wrote: > > > tsan's deadlock detector (as well as helgrind and many other similar > tools) detects lock order inversion, i.e. a situation which may potentially > lead to a deadlock. > > Yes, that's what
2006 May 17
1
Deadlocks in 1.2.7.1
Hello! Unfortunately we are seeing lately (2-3 times during a day) that Asterisk seems to hang somehow: no new calls can be made, and "sip show peers" and other commands show no obvious problem. We then recompiled 1.2.7.1 with all the DEBUG_ flags turned on in the Makefile and now we see the following messages: May 17 06:46:05 ERROR[8606]: ../include/asterisk/lock.h:236
2017 Jan 12
2
[PATCH v2 1/2] drm/nouveau: Don't enable polling twice on runtime resume
As it turns out, on cards that actually have CRTCs on them we're already calling drm_kms_helper_poll_enable(drm_dev) from nouveau_display_resume() before we call it in nouveau_pmops_runtime_resume(). This leads us to accidentally trying to enable polling twice, which results in a potential deadlock between the RPM locks and drm_dev->mode_config.mutex if we end up trying to enable polling
2014 Jun 09
2
[LLVMdev] Multi-threading and mutexes in LLVM
> > > On FreeBSD and OS X, the underlying pthread_mutex can already do deadlock > detection, so I don't see why you'd need to add another word. The > PTHREAD_MUTEX_ERRORCHECK attribute has been part of POSIX since 1997, so > I'd expect it to be supported everywhere. > PTHREAD_MUTEX_ERRORCHECK detects the deadlock that already happened. tsan's deadlock
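The distinction drawn above, in code: an ERRORCHECK mutex reports a deadlock only at the moment it occurs (EDEADLK on the second lock), whereas tsan warns about inversions that merely could deadlock. A minimal pthreads demo, valid C++:

    #include <cassert>
    #include <cerrno>
    #include <pthread.h>

    int main() {
      pthread_mutexattr_t attr;
      pthread_mutexattr_init(&attr);
      pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);

      pthread_mutex_t m;
      pthread_mutex_init(&m, &attr);

      assert(pthread_mutex_lock(&m) == 0);
      assert(pthread_mutex_lock(&m) == EDEADLK);  // caught, not hung

      pthread_mutex_unlock(&m);
      pthread_mutex_destroy(&m);
      pthread_mutexattr_destroy(&attr);
    }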
2017 Oct 13
4
[PATCH] virtio_balloon: fix deadlock on OOM
fill_balloon doing memory allocations under balloon_lock can cause a deadlock when leak_balloon is called from virtballoon_oom_notify and tries to take the same lock. To fix, split page allocation from enqueueing and do the allocations outside the lock. Here's a detailed analysis of the deadlock by Tetsuo Handa: In leak_balloon(), mutex_lock(&vb->balloon_lock) is called in order to serialize
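The fix pattern from that commit message, shown generically in user-space C++ (the kernel code itself is C; Page, fillBalloon, and balloonLock here are illustrative stand-ins):

    // If the allocator can re-enter code that takes the same lock
    // (as the OOM notifier does here), allocating under the lock can
    // deadlock. Fix: allocate first, lock only to enqueue.
    #include <list>
    #include <memory>
    #include <mutex>

    struct Page {};

    std::mutex balloonLock;
    std::list<std::unique_ptr<Page>> pages;

    void fillBalloon(int n) {
      // Step 1: allocation happens with no lock held.
      std::list<std::unique_ptr<Page>> fresh;
      for (int i = 0; i < n; ++i)
        fresh.push_back(std::make_unique<Page>());

      // Step 2: only the cheap enqueue runs under the lock.
      std::lock_guard<std::mutex> g(balloonLock);
      pages.splice(pages.end(), fresh);
    }

    int main() { fillBalloon(16); }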
2017 Oct 13
2
[PATCH] virtio: avoid possible OOM lockup at virtballoon_oom_notify()
On Tue, Oct 10, 2017 at 07:47:37PM +0900, Tetsuo Handa wrote: > In leak_balloon(), mutex_lock(&vb->balloon_lock) is called in order to > serialize against fill_balloon(). But in fill_balloon(), > alloc_page(GFP_HIGHUSER[_MOVABLE] | __GFP_NOMEMALLOC | __GFP_NORETRY) is > called with vb->balloon_lock mutex held. Since GFP_HIGHUSER[_MOVABLE] > implies __GFP_DIRECT_RECLAIM |