search for: try_lock

Displaying 20 results from an estimated 20 matches for "try_lock".

2020 Feb 25
2
phab unit tests + libcxx tests w/concurrency
...se library concurrency features that aren't designed with some kind of thresholds. I have an "innocuous" change that was automagically tested (pre-merge!) via phabricator -- https://reviews.llvm.org/D75085 -- but it triggered a test failure in one of the "thread_mutex_class::try_lock.pass.cpp" tests. It's great and super convenient to have this test facility and I'm pretty sure I opted-in to it. I think it would be/would have been nice for it to be integrated with github PRs, but this seems functionally pretty close. Having this pre-merge check helps buoy conf...
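The failing test exercises std::mutex::try_lock under contention. A minimal sketch of the kind of check such a test performs (not the actual libcxx test, which is stricter about timing; that timing sensitivity is presumably why concurrent load on the builder matters):

#include <cassert>
#include <chrono>
#include <mutex>
#include <thread>

std::mutex m;

int main() {
    m.lock();
    std::thread t([] {
        // Runs while main() still holds the mutex. This is a timing
        // assumption: the line must execute within main()'s sleep window,
        // which is exactly the kind of thing that gets shaky on a loaded
        // builder.
        assert(!m.try_lock());          // must fail without blocking
        while (!m.try_lock())           // spin until main() unlocks;
            std::this_thread::yield();  // try_lock may also fail spuriously
        m.unlock();
    });
    std::this_thread::sleep_for(std::chrono::milliseconds(250));
    m.unlock();
    t.join();
    return 0;
}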
2011 Aug 01
0
[LLVMdev] Reviving the new LLVM concurrency model
C++ and Java memory models impose restrictions on locks and unlocks, such as that a thread that releases a lock must have acquired the lock, or that the number of locks must be larger than the number of unlocks in the same thread... for enabling some optimizations, for example, simplifying trylocks (http://www.hpl.hp.com/techreports/2008/HPL-2008-56.html), and moving some instructions inside lock acquires
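On the trylock point: the cited report argues that if try_lock() is allowed to fail spuriously, a program cannot use a failed try_lock() to observe another thread's progress, and the implementation is then free to simplify and reorder around it. A small illustrative sketch (variable and function names are invented, not from the thread):

#include <mutex>
#include <thread>

std::mutex m;
int x = 0;  // ordinary (non-atomic) shared variable, illustrative only

void writer() {
    x = 1;
    m.lock();    // from here on, the mutex is observably held
    /* ... critical section ... */
    m.unlock();
}

void reader() {
    if (!m.try_lock()) {
        // Tempting inference: "the writer must already hold m, so x == 1".
        // A specification that lets try_lock() fail even when m is free
        // invalidates this reasoning, which is what gives the compiler
        // room to simplify trylock-based code.
    } else {
        m.unlock();
    }
}

int main() {
    std::thread a(writer), b(reader);
    a.join();
    b.join();
    return 0;
}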
2017 Oct 02
2
[RFC] [PATCH] mm,oom: Offload OOM notify callback to a kernel thread.
...an and should do that. Although I would prefer to simply > document this API as deprecated. Care to send a patch? I am quite busy > with other stuff. > > > > I do not think that making oom notifier API more complex is the way to > > > go. Can we simply change the lock to try_lock? > > > > Using mutex_trylock(&vb->balloon_lock) alone is not sufficient. Inside the > > mutex, __GFP_DIRECT_RECLAIM && !__GFP_NORETRY allocation attempt is used > > which will fail to make progress due to oom_lock already held. Therefore, > > virtballoo...
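The deadlock shape under discussion, reduced to a user-space analogue (this is not the kernel code; the names are stand-ins): the inflate path allocates memory while holding a lock, that allocation can end up waiting on the OOM machinery, and the OOM notifier in turn needs the same lock.

#include <cstddef>
#include <mutex>

std::mutex balloon_lock;  // stand-in for vb->balloon_lock

// Inflate path: allocates while holding the lock. If that allocation
// waits on the OOM machinery, and the OOM notifier below blocks on
// balloon_lock, neither side can make progress.
void fill_balloon_analogue() {
    std::lock_guard<std::mutex> g(balloon_lock);
    // ... allocate pages; may trigger the OOM path ...
}

// OOM notifier path: a plain lock() deadlocks against the inflate path;
// try_lock() backs off instead. Per the thread, this alone is not
// sufficient, because the allocation inside the lock can still retry
// forever while oom_lock is held, but it does remove the lock-on-lock
// deadlock itself.
std::size_t oom_notify_analogue() {
    if (!balloon_lock.try_lock())
        return 0;              // back off rather than deadlock
    std::size_t freed = 0;
    // ... deflate: give pages back to the host ...
    balloon_lock.unlock();
    return freed;
}

int main() {
    fill_balloon_analogue();
    return static_cast<int>(oom_notify_analogue());
}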
2017 Oct 02
1
[RFC] [PATCH] mm,oom: Offload OOM notify callback to a kernel thread.
...e it won't ever allocate - that path is never taken > > with add_outbuf - it is for add_sgs only. > > > > IMHO the issue is balloon inflation which needs to allocate > > memory. It does it under a mutex, and the oom handler tries to take the > > same mutex. > > try_lock for the oom notifier path should heal the problem then, right? > At least as a quick fix. IMHO it definitely fixes the deadlock. But it does not fix the bug that the balloon sometimes isn't deflated on oom even though the deflate-on-oom flag is set. > -- > Michal Hocko > SUSE Labs
2017 Sep 11
6
mm, virtio: possible OOM lockup at virtballoon_oom_notify()
Hello. I noticed that virtio_balloon is using register_oom_notifier(), and that leak_balloon(), called from virtballoon_oom_notify(), might depend on __GFP_DIRECT_RECLAIM memory allocation. In leak_balloon(), mutex_lock(&vb->balloon_lock) is called in order to serialize against fill_balloon(). But in fill_balloon(), alloc_page(GFP_HIGHUSER[_MOVABLE] | __GFP_NOMEMALLOC | __GFP_NORETRY) is called with
2017 Oct 02
0
[RFC] [PATCH] mm,oom: Offload OOM notify callback to a kernel thread.
...p this? > > Yes - in practice it won't ever allocate - that path is never taken > with add_outbuf - it is for add_sgs only. > > IMHO the issue is balloon inflation which needs to allocate > memory. It does it under a mutex, and the oom handler tries to take the > same mutex. try_lock for the oom notifier path should heal the problem then, right? At least as a quick fix. -- Michal Hocko SUSE Labs
2008 Sep 10
1
Updated version of patch
>Why would you keep a non-thread-safe API? I do not want to touch libshout; I am only concerned with the Python bindings. Probably Brendan can answer this better, but I think libshout is not thread safe for simultaneous accesses to a shout_t object. This is not a problem, as the POSIX locking primitives are trivial to use. For Python, APIs should in principle be absolutely thread safe.
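The approach described, keeping libshout single-threaded per handle and serializing in the bindings, amounts to wrapping every call on the handle in one lock. A sketch of that wrapper pattern (RawHandle and raw_send are placeholders, not the real libshout API):

#include <cstddef>
#include <mutex>

// Placeholders for a non-thread-safe C handle and one of its functions.
struct RawHandle { int fd; };
static int raw_send(RawHandle*, const char*, std::size_t) { return 0; }

// Binding-side wrapper: one mutex per handle, every call serialized.
class GuardedHandle {
public:
    explicit GuardedHandle(RawHandle* h) : h_(h) {}

    int send(const char* data, std::size_t len) {
        std::lock_guard<std::mutex> g(m_);  // lock around the raw call
        return raw_send(h_, data, len);
    }

private:
    std::mutex m_;
    RawHandle* h_;
};

int main() {
    RawHandle h{0};
    GuardedHandle g(&h);
    return g.send("data", 4);
}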
2004 Jul 22
0
Re: FXRuby and Threads (was: radiobuttons are weird in 1.2.x series)
...k.readMsg @queueMutex.synchronize { @messageQueue.push message } end end During the FXRuby chore the queue is checked and a single message is processed, if available: def nextMessage # Pull off the first message (if there is one) if @queueMutex.try_lock message = @messageQueue.shift @queueMutex.unlock else message = nil end return message end I checked it again, and with threadsDisabled the FXDataTargets/statusbar work perfectly and communication between clients is consistently instantaneous as far as my eyes can...
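The pattern in this excerpt, a worker thread pushing onto a queue under a mutex while the GUI-side chore polls with try_lock so it never blocks the event loop, looks roughly like this outside Ruby (a C++ sketch with illustrative names; unlike the Ruby version it also checks for an empty queue):

#include <mutex>
#include <optional>
#include <queue>
#include <string>

std::mutex queue_mutex;
std::queue<std::string> message_queue;

// Worker thread: block briefly to enqueue (mirrors the synchronize block).
void push_message(std::string msg) {
    std::lock_guard<std::mutex> g(queue_mutex);
    message_queue.push(std::move(msg));
}

// GUI-side periodic chore: never block. If the lock is busy (or try_lock
// fails spuriously), report "no message" and try again next tick.
std::optional<std::string> next_message() {
    std::unique_lock<std::mutex> g(queue_mutex, std::try_to_lock);
    if (!g.owns_lock() || message_queue.empty())
        return std::nullopt;
    std::string msg = std::move(message_queue.front());
    message_queue.pop();
    return msg;
}

int main() {
    push_message("hello");
    auto msg = next_message();  // non-blocking; may be empty this tick
    return msg ? 0 : 1;
}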
2015 Feb 10
2
[PATCH] x86 spinlock: Fix memory corruption on completing completions
On Tue, Feb 10, 2015 at 10:30 AM, Raghavendra K T <raghavendra.kt at linux.vnet.ibm.com> wrote: > On 02/10/2015 06:23 AM, Linus Torvalds wrote: >> add_smp(&lock->tickets.head, TICKET_LOCK_INC); >> if (READ_ONCE(lock->tickets.tail) & TICKET_SLOWPATH_FLAG) .. >> >> into something like >> >> val =
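For readers without the patch at hand: a ticket spinlock hands out a ticket (tail) on lock and serves tickets in order (head) on unlock; the thread concerns the unlock side racing with the paravirt TICKET_SLOWPATH_FLAG checked in the tail. A minimal user-space sketch of the head/tail mechanics only, deliberately omitting the slowpath flag that the fix is actually about:

#include <atomic>
#include <cstdint>
#include <thread>

// Minimal ticket spinlock: 'tail' is the next ticket to hand out,
// 'head' is the ticket currently being served. No paravirt slowpath.
class TicketLock {
public:
    void lock() {
        uint16_t my = tail_.fetch_add(1, std::memory_order_relaxed);
        while (head_.load(std::memory_order_acquire) != my)
            std::this_thread::yield();   // spin until our ticket is served
    }
    void unlock() {
        head_.fetch_add(1, std::memory_order_release);
    }
private:
    std::atomic<uint16_t> head_{0};
    std::atomic<uint16_t> tail_{0};
};

int main() {
    TicketLock lk;
    long counter = 0;
    auto work = [&] {
        for (int i = 0; i < 100000; ++i) {
            lk.lock();
            ++counter;                   // protected by the ticket lock
            lk.unlock();
        }
    };
    std::thread a(work), b(work);
    a.join();
    b.join();
    return counter == 200000 ? 0 : 1;
}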
2017 Oct 02
0
[RFC] [PATCH] mm,oom: Offload OOM notify callback to a kernel thread.
...t; definition. Yes, we can and should do that. Although I would prefer to simply document this API as deprecated. Care to send a patch? I am quite busy with other stuff. > > I do not think that making oom notifier API more complex is the way to > > go. Can we simply change the lock to try_lock? > > Using mutex_trylock(&vb->balloon_lock) alone is not sufficient. Inside the > mutex, __GFP_DIRECT_RECLAIM && !__GFP_NORETRY allocation attempt is used > which will fail to make progress due to oom_lock already held. Therefore, > virtballoon_oom_notify() needs to g...
2010 Mar 13
2
Design: Asynchronous I/O for single/multi-dbox
...many temp files in a single transaction and only at commit stage it locks the index files and knows what the filenames will be. rollback(handle) - rollback all previous writes. close(handle) - if file was created and not committed, the temp file will be deleted - does implicit rollback ret = try_lock(handle) - this isn't an asynchronous operation! it assumes that locking state is kept in memory, so that the operation will be fast. if backend doesn't support locking or it's slow, single-dbox should be used (instead of multi-dbox), because it doesn't need locking. - returns succ...
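The constraint spelled out for try_lock(handle), that it is synchronous and therefore only acceptable when locking state is kept in memory, translates into an interface along these lines (invented names, not Dovecot code):

#include <mutex>
#include <string>
#include <unordered_set>

// Sketch of a backend whose lock state is purely in-memory, so
// try_lock() can answer immediately without any I/O.
class InMemoryLockTable {
public:
    // Returns true if the lock was taken; never blocks, never does I/O.
    bool try_lock(const std::string& handle_id) {
        std::lock_guard<std::mutex> g(m_);
        return locked_.insert(handle_id).second;  // false if already held
    }
    void unlock(const std::string& handle_id) {
        std::lock_guard<std::mutex> g(m_);
        locked_.erase(handle_id);
    }
private:
    std::mutex m_;
    std::unordered_set<std::string> locked_;
};

int main() {
    InMemoryLockTable locks;
    bool first  = locks.try_lock("mbox/INBOX");   // true: lock taken
    bool second = locks.try_lock("mbox/INBOX");   // false: already held
    locks.unlock("mbox/INBOX");
    return (first && !second) ? 0 : 1;
}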
2011 Jul 19
8
[LLVMdev] Reviving the new LLVM concurrency model
There was some discussion a while back about adding a C++0x-style memory model and atomics for LLVM (http://thread.gmane.org/gmane.comp.compilers.llvm.devel/31295), but it got stalled. I'm going to try to restart progress on it. Attached are two patches; the first adds a section to LangRef with just the memory model, without directly changing the documentation or implementation
2003 Apr 16
1
pop3 coredump
...callback_context); 267 } 268 } (gdb) p index $1 = (struct mail_index *) 0x808c200 (gdb) p *index $2 = {open = 0x8054590 <maildir_index_open>, free = 0x8054abc <maildir_index_free>, set_lock = 0x805c874 <mail_index_set_lock>, try_lock = 0x805c88c <mail_index_try_lock>, set_lock_notify_callback = 0x805c8a4 <mail_index_set_lock_notify_callback>, rebuild = 0x8054f20 <maildir_index_rebuild>, fsck = 0x805de50 <mail_index_fsck>, sync_and_lock = 0x8055db4 <maildir_index_sync>, get_header = 0...
2015 Jul 04
1
[RFCv2 4/5] mm/compaction: compaction calls generic migration
On Fri, Jun 26, 2015 at 12:58 PM, Gioh Kim <gioh.kim at lge.com> wrote: > Compaction calls interfaces of driver page migration > instead of calling balloon migration directly. > > Signed-off-by: Gioh Kim <gioh.kim at lge.com> > --- > drivers/virtio/virtio_balloon.c | 1 + > mm/compaction.c | 9 +++++---- > mm/migrate.c | 21
2008 Jun 05
14
Why not ignore stale PID files?
Hi, I have an application which is dying horrible deaths (i.e. segmentation faults) in mid-flight, in production... And of course, I should fix it. But while I find and fix the bugs, I found something I think should be different - I can work on submitting a patch, as it is quite simple, but I might be missing something in my rationale. When Mongrel segfaults, it does not -obviously- get to clean