similar to: [LLVMdev] Implementing sizeof

Displaying 20 results from an estimated 1000 matches similar to: "[LLVMdev] Implementing sizeof"

2007 Jul 27
0
[LLVMdev] Implementing sizeof
Check out http://nondot.org/sabre/LLVMNotes -Chris http://nondot.org/sabre http://llvm.org On Jul 27, 2007, at 12:00 PM, Sarah Thompson <thompson at email.arc.nasa.gov> wrote: > Hi folks, > > Assuming that I'm writing a pass and that for bizarre reasons I need > to > programmatically do the equivalent of a C/C++ sizeof on a Value (or a > Type, it doesn't
2007 Jul 27
2
[LLVMdev] Forcing JIT of all functions before execution starts (was: Implementing sizeof)
Chris Lattner wrote: > Check out http://nondot.org/sabre/LLVMNotes > > %Size = getelementptr %T* null, int 1 %SizeI = cast %T* %Size to uint How incredibly cunning. :-) Thanks for that. Next stupid question. I've put together a simple coroutine/fibre style threading system on top of the Linux setcontext/getcontext stuff, which surprisingly enough seems to work *almost*
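A minimal sketch of the programmatic equivalent being discussed, assuming the present-day LLVM C++ API (DataLayout::getTypeAllocSize) rather than the hand-built getelementptr-on-null IR quoted above; the helper name is illustrative, not from the thread:

  #include "llvm/IR/DataLayout.h"
  #include "llvm/IR/Type.h"
  #include "llvm/IR/Value.h"

  using namespace llvm;

  // Size in bytes that this value's type occupies for the target, the same
  // number the null-GEP trick computes at the IR level.
  uint64_t sizeOfType(const Value *V, const DataLayout &DL) {
    return DL.getTypeAllocSize(V->getType());
  }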
2014 Jun 19
2
About memory index/search in multithread program
Hi, why doesn't Xapian support in-memory index/search? I know there is a method to create an in-memory database, like this: Xapian::WritableDatabase db(Xapian::InMemory::open()); But if I use this in a multithreaded program, I need to create many databases! Xapian::WritableDatabase db1(Xapian::InMemory::open()); //used in thread1 Xapian::WritableDatabase db2(Xapian::InMemory::open()); //used in
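A rough sketch of the per-thread pattern described above, assuming each thread simply owns its own in-memory WritableDatabase (Xapian database objects are not safe to share between threads); error handling omitted:

  #include <xapian.h>

  void index_in_thread() {
    // One writable in-memory database per thread, as in the question above.
    Xapian::WritableDatabase db(Xapian::InMemory::open());

    Xapian::Document doc;
    doc.set_data("example document");
    doc.add_term("example");
    db.add_document(doc);
  }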
2008 Jun 26
4
Pfilestat vs. prstat
[Just starting out with DTrace and was hoping to get some guidance.] I have a "benchmark" program that I monitored with both prstat (prstat -mL -P <PID>) and pfilestat (from the DTrace toolkit). Prstat reports LAT values in the 0.1-0.2% range, but pfilestat reports "waitcpu" values in the 6-10% range. Since those two numbers supposedly represent time waiting for the CPU,
2014 Apr 18
4
[LLVMdev] multithreaded performance disaster with -fprofile-instr-generate (contention on profile counters)
On Fri, Apr 18, 2014 at 12:13 AM, Dmitry Vyukov <dvyukov at google.com> wrote: > Hi, > > This is long thread, so I will combine several comments into single email. > > > >> - 8-bit per-thread counters, dumping into central counters on overflow. > >The overflow will happen very quickly with 8bit counter. > > Yes, but it reduces contention by 256x (a thread
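Not the Clang instrumentation itself, just a sketch of the scheme being discussed: a per-thread 8-bit counter that credits 256 hits to a shared atomic counter each time it wraps, so the contended update happens 256x less often. Names are illustrative.

  #include <atomic>
  #include <cstdint>

  std::atomic<uint64_t> central_counter{0};  // shared, touched only on overflow
  thread_local uint8_t local_counter = 0;    // private, touched on every hit

  inline void count_hit() {
    if (++local_counter == 0)                // wrapped: 256 hits accumulated
      central_counter.fetch_add(256, std::memory_order_relaxed);
  }

  inline void flush_residue() {              // call at thread exit / dump time
    central_counter.fetch_add(local_counter, std::memory_order_relaxed);
    local_counter = 0;
  }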
2014 Apr 25
2
[LLVMdev] multithreaded performance disaster with -fprofile-instr-generate (contention on profile counters)
On Apr 24, 2014, at 1:33 AM, Dmitry Vyukov <dvyukov at google.com> wrote: >> >> I can see that the behavior of our current instrumentation is going to be a >> problem for the kinds of applications that you’re looking at. If you can >> find a way to get the overhead down without losing accuracy > > What are your requirements for accuracy? > Current
2008 Jun 04
1
mystery: lock up after fs dump
I wouldn't report this if not for one coincidence (which is described below). I have too few facts, so this is more of a mystery problem tale than a real problem report. There are two systems: 1. old, slow, i386, UP, 7-STABLE 2. new, fast, amd64, MP, 6.3-RELEASE Systems are located at different physical locations. What is common between them: 1. they both have the same backup strategy
2014 Apr 23
4
[LLVMdev] multithreaded performance disaster with -fprofile-instr-generate (contention on profile counters)
On Apr 23, 2014, at 7:31 AM, Kostya Serebryany <kcc at google.com> wrote: > I've run one proprietary benchmark that reflects a large portion of the google's server side code. > -fprofile-instr-generate leads to 14x slowdown due to counter contention. That's serious. > Admittedly, there is a single hot function that accounts for half of that slowdown, > but even if
2017 Oct 13
2
[PATCH] virtio: avoid possible OOM lockup at virtballoon_oom_notify()
On Tue, Oct 10, 2017 at 07:47:37PM +0900, Tetsuo Handa wrote: > In leak_balloon(), mutex_lock(&vb->balloon_lock) is called in order to > serialize against fill_balloon(). But in fill_balloon(), > alloc_page(GFP_HIGHUSER[_MOVABLE] | __GFP_NOMEMALLOC | __GFP_NORETRY) is > called with vb->balloon_lock mutex held. Since GFP_HIGHUSER[_MOVABLE] > implies __GFP_DIRECT_RECLAIM |
2017 Oct 13
2
[PATCH] virtio: avoid possible OOM lockup at virtballoon_oom_notify()
On Tue, Oct 10, 2017 at 07:47:37PM +0900, Tetsuo Handa wrote: > In leak_balloon(), mutex_lock(&vb->balloon_lock) is called in order to > serialize against fill_balloon(). But in fill_balloon(), > alloc_page(GFP_HIGHUSER[_MOVABLE] | __GFP_NOMEMALLOC | __GFP_NORETRY) is > called with vb->balloon_lock mutex held. Since GFP_HIGHUSER[_MOVABLE] > implies __GFP_DIRECT_RECLAIM |
2010 Apr 12
0
[LLVMdev] Proposal: stack/context switching within a thread
I created a wiki at http://code.google.com/p/llvm-stack-switch/ Right now I just copied and formatted the document as-is... I'll go back over it with your comments in mind soon. One more question, which you can answer here or there: > Point 4 is a bit confusing. Normally, it's fine for a thread to share > some of its stack space with another thread, but your wording seems to >
2017 Oct 13
4
[PATCH] virtio_balloon: fix deadlock on OOM
fill_balloon doing memory allocations under balloon_lock can cause a deadlock when leak_balloon is called from virtballoon_oom_notify and tries to take same lock. To fix, split page allocation and enqueue and do allocations outside the lock. Here's a detailed analysis of the deadlock by Tetsuo Handa: In leak_balloon(), mutex_lock(&vb->balloon_lock) is called in order to serialize
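A userspace illustration of the fix summarised above, not the kernel patch itself: do the potentially blocking allocations with no lock held, then take the lock only for the cheap enqueue, so an allocation stuck in reclaim can never be holding the mutex the OOM path needs.

  #include <cstddef>
  #include <mutex>
  #include <vector>

  struct Balloon {
    std::mutex lock;             // stands in for vb->balloon_lock
    std::vector<void *> pages;   // pages the balloon currently owns
  };

  void fill(Balloon &b, std::size_t n) {
    std::vector<void *> fresh;
    fresh.reserve(n);
    for (std::size_t i = 0; i < n; ++i)
      fresh.push_back(::operator new(4096));  // may block; no lock held here

    std::lock_guard<std::mutex> g(b.lock);    // lock only to publish the pages
    b.pages.insert(b.pages.end(), fresh.begin(), fresh.end());
  }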
2017 Oct 13
4
[PATCH] virtio_balloon: fix deadlock on OOM
fill_balloon doing memory allocations under balloon_lock can cause a deadlock when leak_balloon is called from virtballoon_oom_notify and tries to take same lock. To fix, split page allocation and enqueue and do allocations outside the lock. Here's a detailed analysis of the deadlock by Tetsuo Handa: In leak_balloon(), mutex_lock(&vb->balloon_lock) is called in order to serialize
2017 Nov 08
2
[PATCH v3] virtio_balloon: fix deadlock on OOM
fill_balloon doing memory allocations under balloon_lock can cause a deadlock when leak_balloon is called from virtballoon_oom_notify and tries to take same lock. To fix, split page allocation and enqueue and do allocations outside the lock. Here's a detailed analysis of the deadlock by Tetsuo Handa: In leak_balloon(), mutex_lock(&vb->balloon_lock) is called in order to serialize
2017 Nov 08
2
[PATCH v3] virtio_balloon: fix deadlock on OOM
fill_balloon doing memory allocations under balloon_lock can cause a deadlock when leak_balloon is called from virtballoon_oom_notify and tries to take same lock. To fix, split page allocation and enqueue and do allocations outside the lock. Here's a detailed analysis of the deadlock by Tetsuo Handa: In leak_balloon(), mutex_lock(&vb->balloon_lock) is called in order to serialize
2009 Oct 20
3
[LLVMdev] Dereference PointerType?
Hello, I'm wondering if it's possible to dereference a PointerType. I have an AllocaInst and although I can find the number of elements allocated, (using Instruction::getOperand(0)), I can't find a way to get the size of each element. What I'd like to do is: AllocaInst *alloca; PointerType *ptr_type = dynamic_cast<PointerType*>(alloca); assert(ptr_type); Type
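A present-day sketch of what the poster seems to be after: the size of each allocated element, taken from the AllocaInst directly via getAllocatedType() plus DataLayout, using LLVM's dyn_cast<> machinery rather than dynamic_cast<> if a cast is ever needed. The function name is illustrative.

  #include "llvm/IR/DataLayout.h"
  #include "llvm/IR/Instructions.h"

  using namespace llvm;

  uint64_t allocaElementSize(const AllocaInst *AI, const DataLayout &DL) {
    Type *ElemTy = AI->getAllocatedType();  // the type of each element
    return DL.getTypeAllocSize(ElemTy);     // its size in bytes
  }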
2018 Jun 08
2
XRay FDR mode doesn’t log main thread calls
Hello, I am initializing FDR mode and finalizing/flushing the buffers manually. XRay does not log calls from the main thread unless there is a function call after __xray_log_finalize(). This behavior is abnormal since one would expect the trace file to contain all function calls made up to the point when __xray_log_finalize() is called. To demonstrate this behavior, I have taken the test case
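A sketch of the manual finalize/flush sequence the report describes; the FDR-mode initialization call is left out on purpose because its exact signature has varied across releases, so only the teardown ordering relevant to the bug is shown.

  #include "xray/xray_interface.h"
  #include "xray/xray_log_interface.h"

  void run_traced_workload() {
    __xray_patch();          // enable the instrumented entry/exit sleds
    // ... workload under test runs here, including main-thread calls ...
    __xray_log_finalize();   // stop accepting new records
    __xray_log_flushLog();   // flush buffered records to the trace file
  }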
2010 Apr 12
4
[LLVMdev] Proposal: stack/context switching within a thread
On Sun, Apr 11, 2010 at 2:41 PM, Kenneth Uildriks <kennethuil at gmail.com> wrote: > On Sun, Apr 11, 2010 at 4:09 PM, Jeffrey Yasskin <jyasskin at google.com> wrote: >> Kenneth Uildriks <kennethuil at gmail.com> wrote: >>> As I see it, the context switching mechanism itself needs to know >>> where to point the stack register when switching.  The C
2014 Nov 05
3
[LLVMdev] How to lower the intrinsic function 'llvm.objectsize'?
The documentation of LLVM says that "The llvm.objectsize intrinsic is lowered to a constant representing the size of the object concerned". I'm attempting to lower this intrinsic function to a constant in a pass. Below is the code snippet that I wrote: for (BasicBlock::iterator i = b.begin(), ie = b.end(); (i != ie) && (block_split == false);) { IntrinsicInst *ii =
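One hedged way to do what the snippet above attempts, assuming current LLVM headers: walk the instructions and fold each llvm.objectsize call with the lowerObjectSizeCall() helper from MemoryBuiltins.h. The wrapper function and loop structure are illustrative, not the original poster's pass.

  #include "llvm/ADT/STLExtras.h"
  #include "llvm/Analysis/MemoryBuiltins.h"
  #include "llvm/Analysis/TargetLibraryInfo.h"
  #include "llvm/IR/Function.h"
  #include "llvm/IR/IntrinsicInst.h"

  using namespace llvm;

  void lowerObjectSizes(Function &F, const TargetLibraryInfo *TLI) {
    const DataLayout &DL = F.getParent()->getDataLayout();
    for (BasicBlock &BB : F)
      for (Instruction &I : make_early_inc_range(BB))
        if (auto *II = dyn_cast<IntrinsicInst>(&I))
          if (II->getIntrinsicID() == Intrinsic::objectsize) {
            // Fold the call to a constant (or the "unknown" fallback when
            // MustSucceed is true) and remove the intrinsic.
            Value *Lowered =
                lowerObjectSizeCall(II, DL, TLI, /*MustSucceed=*/true);
            II->replaceAllUsesWith(Lowered);
            II->eraseFromParent();
          }
  }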
2009 Oct 20
0
[LLVMdev] Dereference PointerType?
2009/10/20 Daniel Waterworth <da.waterworth at googlemail.com> > Hello, > > I'm wondering if it's possible to dereference a PointerType. I have an > AllocaInst and although I can find the number of elements allocated, (using > Instruction::getOperand(0)), I can't find a way to get the size of each > element. What I'd like to do is: > > AllocaInst