search for: uselists

Displaying 20 results from an estimated 29 matches for "uselists".

Did you mean: uselist
2019 Jun 26
2
Representations of IR in the output of opt
I finally got back to this. It is a known and endemic issue that pops up from time to time. The issues I'm aware of so far are related to random sets being used where strict order is required. This may result in non-deterministic uselists issued by the bitcode/assembly writers. There is no great way to do proactive testing for this. Collecting the tests so far and running them as regression tests occasionally might serve as a feel-better bandage. Nor can I think of good checks in a verifier. These bugs show up from time...
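For context: use-list order is just the order in which a Value's uses are linked in memory, which is what the writers serialize. A minimal sketch for inspecting it, assuming you already hold a Value *V; the helper name dumpUseOrder is invented for illustration:

#include "llvm/IR/User.h"
#include "llvm/IR/Value.h"
#include "llvm/Support/raw_ostream.h"

// Print the users of V in use-list order. If two runs of the same writer
// see different orderings here, the emitted bitcode/assembly will differ
// even though the IR is semantically identical.
static void dumpUseOrder(const llvm::Value *V) {
  unsigned Idx = 0;
  for (const llvm::User *U : V->users()) {
    llvm::errs() << Idx++ << ": ";
    U->print(llvm::errs());
    llvm::errs() << "\n";
  }
}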
2015 May 18
2
[LLVMdev] [LSR] hoisting loop invariants in reverse order
It's not caused by "the insertion point is set to the default after". I should have mentioned the reason earlier: "Reversing the order of arg0~3 is not intentional. The user list of pixel_idx happens to have pixel_idx+3, pixel_idx+2, and pixel_idx+1 in this order, so LSR simply follows this order when collecting the LSRFixups." I'm not an expert on uselist orders,
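A minimal sketch of the reversal being described, assuming a recent LLVM and the usual behavior that a new use is linked at the head of a Value's use-list, so users() visits the most recently created user first (illustrative code, not the poster's actual test case):

#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/raw_ostream.h"

int main() {
  llvm::LLVMContext Ctx;
  llvm::Module M("uselist-order-demo", Ctx);
  auto *I64 = llvm::Type::getInt64Ty(Ctx);
  auto *FT =
      llvm::FunctionType::get(llvm::Type::getVoidTy(Ctx), {I64}, false);
  auto *F = llvm::Function::Create(FT, llvm::Function::ExternalLinkage,
                                   "f", &M);
  auto *BB = llvm::BasicBlock::Create(Ctx, "entry", F);
  llvm::IRBuilder<> B(BB);

  llvm::Value *PixelIdx = F->getArg(0);
  // Create pixel_idx+1, pixel_idx+2, pixel_idx+3, in that order.
  B.CreateAdd(PixelIdx, B.getInt64(1), "p1");
  B.CreateAdd(PixelIdx, B.getInt64(2), "p2");
  B.CreateAdd(PixelIdx, B.getInt64(3), "p3");
  B.CreateRetVoid();

  // Walking the use-list typically prints p3, p2, p1: the most recently
  // created user comes first, which is the reversal LSR then follows
  // when collecting its fixups.
  for (llvm::User *U : PixelIdx->users()) {
    U->print(llvm::errs());
    llvm::errs() << "\n";
  }
  return 0;
}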
2011 Feb 26
1
[LLVMdev] Removing Instructions
I am trying to write a pass that clones a function and then removes an instruction from the clone; the clone is then added to the module (the parent of the source function). I call removeFromParent() on the appropriate instruction and it is actually removed (I can see that in the module's dump). However, I get a failed assertion from the Module Verifier which says: --- Instruction referencing instruction not
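For what it's worth, removeFromParent() only unlinks the instruction from its basic block; any remaining users still reference it, which is exactly what the verifier is complaining about. A minimal sketch of the conventional fix, assuming I points at the instruction being dropped:

#include "llvm/IR/Constants.h"
#include "llvm/IR/Instruction.h"

void dropInstruction(llvm::Instruction *I) {
  // Rewire anything that still uses I before deleting it; otherwise the
  // verifier reports "Instruction referencing instruction not ...".
  if (!I->use_empty())
    I->replaceAllUsesWith(llvm::UndefValue::get(I->getType()));
  // eraseFromParent() both unlinks and deletes; removeFromParent() only
  // unlinks, leaving a dangling value behind.
  I->eraseFromParent();
}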
2019 May 30
2
Representations of IR in the output of opt
Hello again, > It may be desirable to sort the table before writing the bitcode out, > adding Peter to the thread for his opinion. Thanks for this! It now seems I was too optimistic about this result. I have instrumented the test suite to check it on a wider set of files and quickly discovered that it fails for larger optimization sequences. In particular, the default -O3 set
2017 Nov 22
2
Retrieving DbgInfoIntrinsics for a given value
Hi LLVM, If I have an llvm value "<16 x float> addrspace(1)* %in", and in the LLVM IR there is an @llvm.dbg.value like: call void @llvm.dbg.value(metadata <16 x float> addrspace(1)* %in, i64 0, metadata !216, metadata !28), !dbg !217 How can I retrieve this @llvm.dbg.value when I have "%in"? Since Metadata is not a part of the uselist anymore, is there some way
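One plausible answer, hedged because the debug-intrinsic API has moved around between releases: llvm/IR/DebugInfo.h provides a findDbgValues helper that collects the llvm.dbg.value intrinsics describing a given Value, replacing the old use-list walk. A minimal sketch:

#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/DebugInfo.h"
#include "llvm/IR/IntrinsicInst.h"
#include "llvm/Support/raw_ostream.h"

// Collect and print the @llvm.dbg.value calls that describe V.
void printDbgValuesFor(llvm::Value *V) {
  llvm::SmallVector<llvm::DbgValueInst *, 4> DbgValues;
  llvm::findDbgValues(DbgValues, V); // declared in llvm/IR/DebugInfo.h
  for (llvm::DbgValueInst *DVI : DbgValues) {
    DVI->print(llvm::errs());
    llvm::errs() << "\n";
  }
}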
2015 Jan 31
2
[LLVMdev] debug info for llvm::IntrinsicInst ???
When trying to display and do anything with a variable of type IntrinsicInst, gdb thinks that it's an incomplete type and can't find any member functions or even display the class. (gdb) list 1337 1332 1333 // Finish off the call including any return values. 1334 return finishCall(CLI, RetVT, NumBytes); 1335 } 1336 1337 bool MipsFastISel::fastLowerIntrinsicCall(const
2018 May 02
0
Generating function definition for function that's only called during unwinding
Hmmm... It seems like I should check out how the UseList on Value (and its child BasicBlock) work. On Tue, May 1, 2018 at 8:34 PM, Keith Wyss <wyssman at gmail.com> wrote: > Hi, > > I'm trying to understand how clang keeps track of which declarations are > called within a translation unit and decides to codegen their definitions. > > DeclBase.h has a markUsed to keep
2018 May 02
3
Generating function definition for function that's only called during unwinding
Hi, I'm trying to understand how clang keeps track of which declarations are called within a translation unit and decides to codegen their definitions. DeclBase.h has a markUsed to keep track of ODR use, and I think that the decl can be found from the symbol table via ASTContext.h (for example looking up a template via GetQualifiedTemplateName -> getAsTemplateDecl -> setIsUsed ). This
2015 Jan 31
2
[LLVMdev] debug info for llvm::IntrinsicInst ???
Ok. I'm basically just following the model of the other fast-isel ports. On 01/30/2015 09:12 PM, David Blaikie wrote: > (I'm assuming you're building LLVM with clang, in this case?) > > Looks like IntrinsicInst is one of those "lies" in LLVM that works via > type punning that's undefined behavior in C++, so it's code that > should be fixed anyway.
2015 Jan 31
0
[LLVMdev] debug info for llvm::IntrinsicInst ???
(I'm assuming you're building LLVM with clang, in this case?) Looks like IntrinsicInst is one of those "lies" in LLVM that works via type punning that's undefined behavior in C++, so it's code that should be fixed anyway. In any case, the reason this produces the debugging experience you're seeing is that LLVM (& GCC, for that matter) assumes that if your class
2013 Jul 16
0
[LLVMdev] [LLVM Dev] [Discussion] Function-based parallel LLVM backend code generation
Hi, community: For the sake of our business needs, I want to enable "function-based parallel code generation" to speed up compilation of a single module. Please see the details of the design and provide your feedback on the aspects below, thanks: 1. Is this idea the proper solution for my requirement? 2. This new feature will be enabled by llc -thd=N and has no impact on
2015 Jan 31
0
[LLVMdev] debug info for llvm::IntrinsicInst ???
On Fri, Jan 30, 2015 at 10:37 PM, reed kotler <rkotler at mips.com> wrote: > Ok. > > I'm basically just following the model of the other fast-isel ports. > Yeah, not your fault - just an architectural quirk. It's possible we could work around the debug info side of this by declaring the virtual dtor (or some other virtual function - even an explicit anchor as we have
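A minimal sketch of the anchor idiom being suggested here (class and file names are illustrative, not the real LLVM sources): one out-of-line virtual function pins the vtable, and with it the class's full debug info, to a single translation unit, so debuggers stop treating the type as incomplete.

// IntrinsicInstLike.h
class IntrinsicInstLike {
public:
  virtual ~IntrinsicInstLike() = default;
  // Out-of-line virtual "anchor": its definition below becomes the key
  // function, forcing vtable and debug info emission in one .cpp file.
  virtual void anchor();
};

// IntrinsicInstLike.cpp
void IntrinsicInstLike::anchor() {}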
2020 May 06
2
Unexpected behavior found in Stack Coloring pass, need clarification
Hello, I have come across an unusual behavior where the instruction domination rule is violated: "Instruction does not dominate all its uses." It concerns the StackColoring pass at llvm/lib/CodeGen/StackColoring.cpp. I am reaching out to the LLVM community to help me understand the cause of this issue and the workings of the pass. The IR produced at the end of the pass seems to be
2015 Oct 16
2
[RFC] Clean up the way we store optional Function data
Here is a WIP patch as promised: http://reviews.llvm.org/D13829 It uses a hungoff uselist to store optional data as needed. Some early objections from Duncan: - An extra one-time malloc() is required to set personality functions. - We get and set personality functions frequently. This patch introduces a level of indirection which slows the common case down. Is this overhead
2011 Jan 05
0
[LLVMdev] Printing error with Value objects
Hi. The platform is an x86 32-bit machine running LLVM 2.4. I am trying to analyze Alias Analysis queries, and towards this end, I am trying to print out the "Value"s that form the queries. While trying to print these queries, llvm hits a segmentation fault. The fault is due to a Value which does not have its module set properly. I am using the operator<< to call the
2015 Oct 21
2
[RFC] Clean up the way we store optional Function data
I've done some measurements on this. The test program I have just calls Function::Create(), F->setPersonalityFn(), and then F->eraseFromParent() in a loop 2^20 times. Results: pre-patch --- min: 1.10s max: 1.13s avg: 1.11s post-patch --- min: 1.26s max: 1.35s avg: 1.29s So we expect to lose 0.2 seconds per 1 million functions (with personality functions) in a
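For reference, a reconstruction of the kind of micro-benchmark described above; this is a sketch under stated assumptions (modern C++ API, a stand-in personality routine), not the original test program:

#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"

int main() {
  llvm::LLVMContext Ctx;
  llvm::Module M("persfn-bench", Ctx);
  auto *FT = llvm::FunctionType::get(llvm::Type::getVoidTy(Ctx), false);
  // Any function constant works as a stand-in personality here.
  auto *Pers = llvm::Function::Create(FT, llvm::Function::ExternalLinkage,
                                      "__gxx_personality_v0", &M);

  for (unsigned i = 0; i < (1u << 20); ++i) {
    auto *F = llvm::Function::Create(FT, llvm::Function::ExternalLinkage,
                                     "f", &M);
    F->setPersonalityFn(Pers); // the operation the patch moves to hung-off storage
    F->eraseFromParent();
  }
  return 0;
}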
2017 Sep 08
5
Performance of large llvm::ConstantDataArrays
I'm running into some pretty bad performance in llc.exe when compiling some large neural networks into code that contains some very large llvm::ConstantDataArrays; some are { size=102,760,448 }. There's a small amount of actual code for processing the network, but the assembly is mostly global data. I'm finding that llc.exe memory spikes up to around 30 gigabytes and the job takes 20-30
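For readers who haven't met the construct: a ConstantDataArray of that size is a single flat blob of constant data used as the initializer of a global. A minimal sketch of how such a global ends up in a module (illustrative size and names, not the poster's actual network):

#include "llvm/ADT/ArrayRef.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/GlobalVariable.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include <vector>

int main() {
  llvm::LLVMContext Ctx;
  llvm::Module M("weights", Ctx);

  // A large flat weight blob; the arrays in the thread are ~10^8 elements.
  std::vector<float> Weights(1u << 20, 0.5f);
  llvm::Constant *Init =
      llvm::ConstantDataArray::get(Ctx, llvm::ArrayRef<float>(Weights));

  // The global registers itself with the module, so the raw `new` is idiomatic.
  new llvm::GlobalVariable(M, Init->getType(), /*isConstant=*/true,
                           llvm::GlobalValue::InternalLinkage, Init,
                           "weights");
  return 0;
}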
2015 Jun 25
2
[LLVMdev] Are Module / Function / Instruction iteration orders stable?
Hi guys, Suppose I have an IR file on disk, and I access it via a Module pass. Also suppose that the bitcode file hasn't changed, and no transformation passes have run. Then can I safely assume that every time my Module pass executes code like the following, it will always visit the Module's Functions, BasicBlocks, and Instructions in the same order? for (auto const & Fn : Module)
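As a point of reference, the loop in question just walks the module's intrusive lists; a sketch of the full nesting (assuming nothing mutates the module between runs, which is the condition the thread is really asking about):

#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/raw_ostream.h"

// Visit every instruction: functions in module order, blocks in function
// order, instructions in block order. The iteration simply reflects the
// in-memory list order.
void visitAll(const llvm::Module &M) {
  for (const llvm::Function &Fn : M)
    for (const llvm::BasicBlock &BB : Fn)
      for (const llvm::Instruction &I : BB)
        llvm::errs() << Fn.getName() << ": " << I << "\n";
}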
2017 Aug 06
2
Compile issues with LLVM ORC JIT
...b/gcc/x86_64-linux-gnu/6 Found candidate GCC installation: /usr/lib/gcc/x86_64-linux-gnu/6.3.0 Selected GCC installation: /usr/lib/gcc/x86_64-linux-gnu/6.3.0 Candidate multilib: .;@m64 Selected multilib: .;@m64 "/usr/local/bin/clang-4.0" -cc1 -triple x86_64-unknown-linux-gnu -emit-llvm-bc -emit-llvm-uselists -disable-free -main-file-name contribJIT.cpp -mrelocation-model pic -pic-level 2 -mthread-model posix -mdisable-fp-elim -fmath-errno -masm-verbose -mconstructor-aliases -munwind-tables -fuse-init-array -target-cpu x86-64 -v -dwarf-column-info -debugger-tuning=gdb -coverage-notes-file /home/ikue...
2014 Sep 09
2
[LLVMdev] VMKit is retired (but you can help if you want!)
Oops, sorry for the mistake, llcj (not llc :)) is no longer maintained! Gaël On 10 Sept. 2014 at 00:27, "Gaël Thomas" <gael.thomas00 at gmail.com> wrote: > Hi Brian, > > So, I confirm, llc is no longer maintained. And using vmjc is probably > a good starting point for translating Java bytecode into llvm bitcode. > > However, I think that your hack (changing the way