similar to: [LLVMdev] Dynamic typing

Displaying 20 results from an estimated 2000 matches similar to: "[LLVMdev] Dynamic typing"

2005 Mar 15
2
[LLVMdev] Dynamic Creation of a simple program
Thanks for the information. I am trying to use one of your examples for recursive data structures: ========================= PATypeHolder StructTy = OpaqueType::get(); std::vector<const Type*> Elts; Elts.push_back(PointerType::get(StructTy)); Elts.push_back(PointerType::get(Type::SByteTy)); StructType *NewSTy = StructType::get(Elts); // At this point, NewSTy = "{ opaque*, sbyte*
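For context, the recursive-type recipe this excerpt quotes continues roughly as follows in the pre-3.0 type system; a sketch of the documented pattern, not drop-in code for current LLVM, where OpaqueType and PATypeHolder no longer exist:

    PATypeHolder StructTy = OpaqueType::get();
    std::vector<const Type*> Elts;
    Elts.push_back(PointerType::get(StructTy));
    Elts.push_back(PointerType::get(Type::SByteTy));
    StructType *NewSTy = StructType::get(Elts);

    // At this point, NewSTy = "{ opaque*, sbyte* }". Tell VMCore that the
    // opaque placeholder and the new struct are really the same type:
    cast<OpaqueType>(StructTy.get())->refineAbstractTypeTo(NewSTy);

    // NewSTy may have been invalidated by the refinement, but the
    // PATypeHolder tracks the up-to-date type:
    NewSTy = cast<StructType>(StructTy.get());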
2005 Mar 15
0
[LLVMdev] Dynamic Creation of a simple program
On Tue, 15 Mar 2005, xavier wrote: > Thanks for the information > I am trying to use one of your examples for recursive data structures: > > ========================= > PATypeHolder StructTy = OpaqueType::get(); > std::vector<const Type*> Elts; > Elts.push_back(PointerType::get(StructTy)); > Elts.push_back(PointerType::get(Type::SByteTy)); > StructType *NewSTy =
2005 Mar 15
1
[LLVMdev] Dynamic Creation of a simple program
Hi, I would like to dynamically create a program at run time using the LLVM classes. I am wondering if there are some examples available. For example, for a very simple "Hello World" program, I will have something like this (pseudo code): Module M = new Module(); Function FMain = new Function("main"); M.addFunction(FMain); BasicBlock B = new BasicBlock(); FMain.add(B); //
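With today's C++ API the pseudocode above maps onto Module, Function::Create, BasicBlock::Create and IRBuilder. A minimal, self-contained sketch, building "int main() { return 0; }" rather than a full hello-world with printf:

    #include "llvm/IR/IRBuilder.h"
    #include "llvm/IR/LLVMContext.h"
    #include "llvm/IR/Module.h"
    #include "llvm/IR/Verifier.h"
    #include "llvm/Support/raw_ostream.h"
    #include <memory>

    using namespace llvm;

    int main() {
      LLVMContext Ctx;
      auto M = std::make_unique<Module>("dynamic", Ctx);

      // int main()
      FunctionType *FT = FunctionType::get(Type::getInt32Ty(Ctx), /*isVarArg=*/false);
      Function *FMain = Function::Create(FT, Function::ExternalLinkage, "main", M.get());

      // a single basic block that returns 0
      BasicBlock *BB = BasicBlock::Create(Ctx, "entry", FMain);
      IRBuilder<> B(BB);
      B.CreateRet(B.getInt32(0));

      verifyModule(*M, &errs());  // sanity-check the generated IR
      M->print(outs(), nullptr);  // dump the textual IR
      return 0;
    }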
2012 Nov 13
3
[LLVMdev] Using LLVM to serialize object state -- and performance
Switching to CodeGenOpt::None reduced the execution time from 5.74s to 0.84s. By just tweaking things randomly, changing to CodeModel::Small reduced it further to 0.22s. We have some old, ugly, pure C++ code that we're trying to replace (both because it's ugly and because it's slow). Its execution time is about 0.089s, so that's the time to beat. Hence, I'd like to
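The two knobs mentioned above are set on EngineBuilder when the execution engine is created. A sketch; newer releases take the module as a std::unique_ptr, the 2012-era API took a raw pointer:

    #include "llvm/ExecutionEngine/ExecutionEngine.h"
    #include "llvm/ExecutionEngine/MCJIT.h"
    #include <memory>
    #include <string>

    using namespace llvm;

    ExecutionEngine *createJIT(std::unique_ptr<Module> M, std::string &Err) {
      return EngineBuilder(std::move(M))
          .setErrorStr(&Err)
          .setOptLevel(CodeGenOpt::None)    // skip codegen optimization
          .setCodeModel(CodeModel::Small)   // smaller, cheaper code model
          .create();
    }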
2012 Nov 14
0
[LLVMdev] Using LLVM to serialize object state -- and performance
I've been profiling more; see <https://dl.dropbox.com/u/46791180/perf.png>. One thing I'm a bit confused about is why I see a FunctionPassManager there. I use a FunctionPassManager at the end of LLVM IR code generation, write the IR to disk, then read it back later. Why is apparently another FunctionPassManager being used during the JIT'ing of the IR code? And how do I
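For reference, the "first" FunctionPassManager, the one run at the end of IR generation before writing to disk, is typically driven like this 2012-era pattern (a sketch with the legacy pass manager; header locations and the availability of these legacy pass wrappers vary by release). The second instance in the profile is plausibly the pass manager the JIT itself uses internally to drive code generation:

    #include "llvm/IR/LegacyPassManager.h"
    #include "llvm/IR/Module.h"
    #include "llvm/Transforms/Scalar.h"

    using namespace llvm;

    void optimizeFunctions(Module &M) {
      legacy::FunctionPassManager FPM(&M);
      FPM.add(createInstructionCombiningPass());
      FPM.add(createReassociatePass());
      FPM.add(createCFGSimplificationPass());
      FPM.doInitialization();
      for (Function &F : M)
        FPM.run(F);
      FPM.doFinalization();
    }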
2013 Oct 29
1
[LLVMdev] JIT'ing 2 functions with inter-dependencies
I am having problems JIT'ing 2 functions where one of them calls the other. (I am using the old JIT interface). Here is the setup: define void @func1() { entrypoint: call void @func2() ret void } define void @func2() { entrypoint: ret void } (I omit the arguments and function bodies for simplicity.) It's 'func1' that would be called from host code,
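A minimal sketch of driving the legacy JIT for this setup, assuming both functions are defined in the same module (function names as in the post; error handling omitted; legacy-JIT-era API with a raw Module*):

    #include "llvm/ExecutionEngine/ExecutionEngine.h"
    #include "llvm/ExecutionEngine/JIT.h"
    #include <string>

    using namespace llvm;

    void runFunc1(Module *M) {
      std::string Err;
      ExecutionEngine *EE = EngineBuilder(M).setErrorStr(&Err).create();
      Function *F1 = M->getFunction("func1");
      // getPointerToFunction() compiles func1; its call to func2 is resolved
      // against the definition in the same module, eagerly or via a lazy stub.
      void *Addr = EE->getPointerToFunction(F1);
      reinterpret_cast<void (*)()>(Addr)();
    }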
2010 Jun 18
2
[LLVMdev] Catching Signals While JIT'ing Code
I'm trying to figure out the best way to handle signals raised during the execution of LLVM's optimization passes or the JIT'ing of code prior to running it. In particular, LLVM raises unix signals instead of throwing C++ exceptions, and the header ErrorHandling.h contains the following warning (the last paragraph in particular): /// llvm_install_error_handler - Installs a new error handler
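One common workaround for the abort-style failures (as opposed to genuine SIGSEGVs) is to install LLVM's fatal-error handler and turn the report into a C++ exception. A sketch, assuming an LLVM built with exceptions enabled and a recent ErrorHandling.h; older releases spelled the hook llvm_install_error_handler and passed the reason as a std::string:

    #include "llvm/Support/ErrorHandling.h"
    #include <stdexcept>
    #include <string>

    // The handler must not return normally; throwing satisfies that.
    static void FatalHandler(void *UserData, const char *Reason, bool GenCrashDiag) {
      throw std::runtime_error(std::string("LLVM fatal error: ") + Reason);
    }

    void installHandler() {
      llvm::install_fatal_error_handler(FatalHandler, /*user_data=*/nullptr);
    }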
2007 Jan 31
4
possible spam alert
The last two times I have originated message threads on R or Bioconductor I have received the message included below from someone named Patrick Connolly. Both times I was the originator of the message thread and used what I thought was a unique subject line that explained as best I could what my question was. Patrick seems to be implying that I am abusing the R and BioC help newsgroups in this
2016 Feb 05
2
MCJit Runtime Performance
On 4 February 2016 at 22:48, Morten Brodersen via llvm-dev <llvm-dev at lists.llvm.org> wrote: > Hi Rafael, > > Not easily (llc). > > Is there a way to make MCJit not use the large code model when JIT'ing? > I think Davide started adding support for the small code model. Cheers, Rafael
2013 Nov 10
2
[LLVMdev] loop vectorizer: JIT + AVX segfaults
Is it possible that the AVX support in the JIT engine or x86-64 backend is not mature? I am getting segfaults when switching from vector length 4 to 8 in my application. I isolated the barfing function and it still segfaults in the minimal setup: The IR attached implements the following simple function: void bar(int start, int end, int ignore, bool add, bool addme, float* out, float* in) {
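One thing worth checking in a setup like this is whether the JIT's target machine was actually created with AVX enabled. A sketch of forcing the host CPU and the "+avx" feature string through EngineBuilder (whether that addresses this particular segfault is a separate question):

    #include "llvm/ExecutionEngine/ExecutionEngine.h"
    #include "llvm/ExecutionEngine/MCJIT.h"
    #include "llvm/Support/Host.h"
    #include <memory>
    #include <string>
    #include <vector>

    using namespace llvm;

    ExecutionEngine *createAVXJIT(std::unique_ptr<Module> M) {
      std::vector<std::string> Attrs;
      Attrs.push_back("+avx");                 // request 256-bit vector support
      return EngineBuilder(std::move(M))
          .setMCPU(sys::getHostCPUName())      // match the host CPU
          .setMAttrs(Attrs)
          .create();
    }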
2009 Jan 04
2
[LLVMdev] Suggestion: Support union types in IR
On Jan 2, 2009, at 2:29 PM, Jon Harrop wrote: >>> I don't think you would want to build discriminated unions on top of >>> C-style unions though. >> >> Why? > > Uniformity when nesting and space efficiency. Users of a language > front-end > will want to nest discriminated unions, e.g. to manipulate trees. Okay, so you're just talking about boxed
2013 Nov 11
0
[LLVMdev] loop vectorizer: JIT + AVX segfaults
Do you have a stack trace of the segfault? We have two different code emitters for X86 in LLVM: one used by the normal compiler and MCJIT, and the other used by the legacy JIT. All of the test cases for AVX support go through the first one, so it gets the most attention. We try to keep the legacy JIT in sync with it, but have a history of failing at that. The stack trace of the segfault may
2007 Nov 29
2
[LLVMdev] Boxing and vectors
So I now have a working first-order language that uses conventional boxing to handle polymorphism and with ints, floats and ('a -> 'a) functions. After a huge amount of detailed benchmarking in OCaml and F# I have decided that it is very important to be able to unbox complex numbers but no other compound types. As LLVM provides a vector type for power-of-two dimensionalities that
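The unboxed-complex representation the post is after maps naturally onto a two-lane double vector. A sketch with the current C++ API (FixedVectorType is the modern spelling; the 2007 API used VectorType::get directly):

    #include "llvm/IR/Constants.h"
    #include "llvm/IR/DerivedTypes.h"
    #include "llvm/IR/IRBuilder.h"

    using namespace llvm;

    // Build a <2 x double> value holding {re, im}.
    Value *makeComplex(IRBuilder<> &B, Value *Re, Value *Im) {
      VectorType *CplxTy = FixedVectorType::get(B.getDoubleTy(), 2);
      Value *V = UndefValue::get(CplxTy);
      V = B.CreateInsertElement(V, Re, B.getInt32(0), "re");
      V = B.CreateInsertElement(V, Im, B.getInt32(1), "im");
      return V;
    }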
2020 Aug 07
2
JIT interaction with linkonce_odr global variables
Hello, I recently hit an issue when JIT'ing my generated IR using llvm::orc::LLJIT. My IR contains the following definition of a global variable: > $_ZZ23TestStaticVarInFunctionbE1x = comdat any > @_ZZ23TestStaticVarInFunctionbE1x = linkonce_odr dso_local global i32 123, > comdat, align 4 > And in my host process, there exists the same symbol. I would expect LLJIT to resolve the
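For the symbol-resolution half of this, the usual LLJIT wiring is to add a generator that searches the host process, so definitions already present in the process can be found by name. A sketch; whether LLJIT then prefers the host copy of a linkonce_odr/comdat symbol over re-emitting it is exactly the question the thread is about:

    #include "llvm/ExecutionEngine/Orc/ExecutionUtils.h"
    #include "llvm/ExecutionEngine/Orc/LLJIT.h"
    #include <memory>

    using namespace llvm;
    using namespace llvm::orc;

    Expected<std::unique_ptr<LLJIT>> makeJIT() {
      auto J = LLJITBuilder().create();
      if (!J)
        return J.takeError();

      // Allow lookups to fall back to symbols exported by the host process.
      auto Gen = DynamicLibrarySearchGenerator::GetForCurrentProcess(
          (*J)->getDataLayout().getGlobalPrefix());
      if (!Gen)
        return Gen.takeError();
      (*J)->getMainJITDylib().addGenerator(std::move(*Gen));
      return J;
    }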
2007 Sep 28
2
[LLVMdev] Accounting for code size
In my quest to account for memory, I've now come to the in-memory IR, and the generated code. I want to book the generated code memory against the agent that is generating the code. I see that LLVM's Function class [1] has a size function; what does this represent and can I use it to account for the space used by the in-memory IR? As for generated code, the JIT [2] class simply returns a
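On the specific question about Function's size(): it returns the number of basic blocks, not a byte count. A rough, admittedly approximate way to size up the in-memory IR is to count instructions; a sketch:

    #include "llvm/IR/Function.h"
    #include "llvm/IR/Module.h"

    using namespace llvm;

    // F.size() == number of basic blocks; BB.size() == number of instructions.
    // This ignores per-Value/Use bookkeeping overhead, so treat the count as
    // a proxy rather than a byte-accurate figure.
    unsigned countInstructions(const Module &M) {
      unsigned N = 0;
      for (const Function &F : M)
        for (const BasicBlock &BB : F)
          N += BB.size();
      return N;
    }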
2009 Jan 02
2
[LLVMdev] Suggestion: Support union types in IR
On Jan 1, 2009, at 6:25 AM, Jon Harrop wrote: >> Exactly. I'm not especially interested in C-style unions, I'm >> interested >> in discriminated unions. But the actual discriminator field is easily >> represented in LLVM IR already, so there's no need to extend the IR >> to >> support them. That's why I am only asking for C-style union
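The point about the discriminator being representable already boils down to lowering the tagged value as an ordinary struct. A sketch of one conventional shape, { i32 tag, [N x i8] payload }, with the payload sized for the largest arm and bitcast per-arm at use sites:

    #include "llvm/IR/DerivedTypes.h"
    #include "llvm/IR/LLVMContext.h"

    using namespace llvm;

    StructType *makeTaggedUnionTy(LLVMContext &Ctx, uint64_t PayloadBytes) {
      Type *TagTy = Type::getInt32Ty(Ctx);                          // discriminator
      Type *PayloadTy = ArrayType::get(Type::getInt8Ty(Ctx),
                                       PayloadBytes);               // raw storage
      return StructType::get(Ctx, {TagTy, PayloadTy});
    }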
2004 Dec 13
2
[LLVMdev] FP Constants spilling to memory in x86 code generation
Chris Lattner wrote: > On Mon, 6 Dec 2004, Morten Ofstad wrote: >> I guess what I'd like to know is if the process of spilling constants >> to memory could be a bit more controlled, maybe using the JIT memory >> manager and putting it in with the function stubs? > > Yes, this can and should definitely be improved. If you look at >
2012 Oct 26
3
[LLVMdev] Using LLVM to serialize object state -- and performance
I have a legacy C++ application that constructs a tree of C++ objects (an iterator tree to implement a query language). I am trying to use LLVM to "serialize" the state of this tree to disk for later loading and execution (or "compile" it to disk, if you prefer). Each of the C++ iterator objects now has a codegen() member function that adds to the LLVM code of an
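A hypothetical shape for the codegen() member described above (class and parameter names invented for illustration; the post does not show its actual interface): each node appends the IR that reproduces it into the module being written out, and recursive calls handle child iterators.

    #include "llvm/IR/IRBuilder.h"
    #include "llvm/IR/Module.h"

    // Hypothetical node interface; not the poster's actual classes.
    class IteratorNode {
    public:
      virtual ~IteratorNode() = default;
      // Emit the IR that rebuilds/executes this node; children emit theirs
      // through recursive codegen() calls.
      virtual llvm::Value *codegen(llvm::IRBuilder<> &Builder, llvm::Module &M) = 0;
    };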
2016 Feb 05
2
MCJit Runtime Performance
Hi Keno, I am talking about runtime. The performance of the generated machine code. Not the time it takes to lower the IR to machine code. We typically only JIT once (taking a few secs) and then run the generated machine code for hours. So the JIT time (IR -> machine code) doesn't impact us. Cheers Morten On 05/02/16 15:58, Keno Fischer wrote: > Actually, reading over all of this
2012 Oct 27
0
[LLVMdev] Using LLVM to serialize object state -- and performance
I'm not sure I have a clear picture of what you're JIT'ing. If any of the JIT'ed functions call other JIT'ed functions, it may be difficult to find all the dependencies of a function and recreate them correctly on a subsequent load. Even if the JIT'ed functions only call non-JIT'ed functions, I think you'd need some confidence that the address of the called