On Wed, Oct 20, 2004 at 11:59:45AM -0700, Yiping Fan wrote:
> Yeah. We need to have extra fields in the instruction. For example,
> during high-level synthesis, we must schedule an instruction to a
> certain control step (or cycle), bind it to be executed on a certain
> functional unit, etc.

Since we're talking about "execution" and "scheduling", you are creating a back-end for LLVM, correct? In that case, we're talking about code generation.

LLVM is a target-independent representation. Note that with LLVM, we are trying to separate WHAT the program is doing (the meaning of the program) from HOW it does it (which specific instructions get executed and when, and this includes scheduling). What you are trying to add to it is target-dependent (e.g. scheduling). That is not advisable on several levels, one of which is that it breaks the target abstraction we have tried hard to maintain.

Take a look at the X86, PowerPC, and Sparc target code generators (llvm/lib/Target/*). They use a different representation, specifically the MachineInstr, MachineBasicBlock, and MachineFunction classes, which are target-dependent (for example, they include asm opcodes and target registers). Something target-dependent, such as scheduling and assignment to functional units, would be done in this representation, after code generation (LLVM -> machine code).

Presumably, this (e.g. scheduling) information is not provided by the C/C++ front-end, but computed by a pass that you would write, correct? Then you can always compute this information on the fly, before any pass that needs it uses it. As Reid mentioned, take a look at the Analysis interfaces and see if you can implement this as an Analysis that could be required by a pass and transparently run for you by the PassManager.

> Besides the in-memory exchange of the information, we also want
> on-disk exchange. That introduces the write-out/parse-in problem.

Again, if this is information that's computable from bytecode alone, you do not need to store it every time -- an Analysis pass can compute it dynamically. Also, as a reminder, if you change the LLVM representation, your new version may or may not be able to use the current set of analyses and optimizations, thus forcing you to "reinvent the wheel" in that respect.

--
Misha Brukman :: http://misha.brukman.net :: http://llvm.cs.uiuc.edu
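To make the compute-on-the-fly idea concrete, here is a minimal, self-contained sketch of the kind of computation such an Analysis could perform: a resource-constrained list scheduler that assigns each operation a control step. All names here are invented, and the code deliberately does not use the real LLVM Analysis API; it only illustrates that a schedule is derivable on demand from the dependence structure alone, so it need not be stored in the instructions.

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical illustration: assign each operation a control step (cycle),
// honoring data dependences and a fixed number of ALUs per step. Assumes
// the dependence graph is a DAG and numALUs >= 1, so every cycle schedules
// at least one ready operation and the loop terminates.
struct Op {
    std::string name;
    std::vector<int> deps;  // indices of operations this op depends on
};

std::vector<int> schedule(const std::vector<Op> &ops, int numALUs) {
    std::vector<int> step(ops.size(), -1);  // -1 means "not yet scheduled"
    std::size_t placed = 0;
    for (int cycle = 0; placed < ops.size(); ++cycle) {
        int used = 0;  // ALUs consumed in this control step
        for (std::size_t i = 0; i < ops.size() && used < numALUs; ++i) {
            if (step[i] != -1)
                continue;  // already scheduled
            bool ready = true;  // all deps finished in an earlier cycle?
            for (int d : ops[i].deps)
                ready = ready && step[d] != -1 && step[d] < cycle;
            if (ready) {
                step[i] = cycle;
                ++used;
                ++placed;
            }
        }
    }
    return step;
}
```

Note how Yiping's resource-exploration scenario falls out of this: the same dependence graph scheduled with 2 ALUs and with 3 ALUs yields different bindings, which is exactly why recomputing after each transformation is cheaper than serializing the result.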
Yiping,

Could you describe in a little more detail what your goals are? I agree with Reid and Misha that modifying the instruction definition is usually not advisable, but to suggest alternatives, we would need to know more. Also, for some projects it could make sense to change the instruction set.

--Vikram
http://www.cs.uiuc.edu/~vadve
http://llvm.cs.uiuc.edu/

On Oct 20, 2004, at 2:41 PM, Misha Brukman wrote:
> [snip]
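The require-an-analysis pattern Misha describes above (a pass declares what it needs, and the manager runs and caches the analysis transparently) can be caricatured in a few lines. This is a toy with invented names, not the real LLVM PassManager interface:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>

// Toy caricature of "an Analysis that could be required by a pass and
// transparently run by the PassManager": the first require() runs the
// analysis and caches the result; later passes get the cached copy; a
// transform pass invalidates the cache so the analysis reruns on demand.
class ToyAnalysisManager {
    std::map<std::string, int> cache;                      // cached results
    std::map<std::string, std::function<int()>> analyses;  // how to compute them
    int runs = 0;                                          // actual executions
public:
    void registerAnalysis(const std::string &name, std::function<int()> compute) {
        analyses[name] = compute;
    }
    int require(const std::string &name) {
        auto hit = cache.find(name);
        if (hit != cache.end())
            return hit->second;  // transparent reuse, no recomputation
        ++runs;
        return cache[name] = analyses.at(name)();  // compute on first demand
    }
    void invalidate() { cache.clear(); }  // e.g. after a transform pass
    int timesRun() const { return runs; }
};
```

The point of the sketch is the economics Misha argues for: as long as the result is derivable from the IR, recomputation after invalidation replaces any write-out/parse-in machinery.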
Vikram,

I also agree with you. I understand that a target-independent representation is very valuable and important for software compilation. However, when we are doing high-level synthesis (also called behavioral/architectural synthesis), the target architecture is also changing. That is, we need to do architecture exploration and IR transformation simultaneously. For example, after a particular pass, we may need 4 ALUs to execute the program under a certain latency constraint; then, after another optimization pass, we may end up with only 3 ALUs. The instruction-to-ALU binding will be different after this pass. The same thing could happen for register allocation and binding, and other synthesis passes. Also, there are many synthesis passes, such as scheduling, functional unit binding, register binding, (FU) port binding, re-scheduling and re-binding, interconnect reduction, DSP-specific optimization, ... Therefore, we need some architectural information (both on-disk and in-memory) associated with the instructions after every synthesis pass.

In addition, in hardware description there are lots of bit-vector manipulations (e.g., bit extraction, concatenation, ...), which are not represented in LLVM. We are thinking about adding some intrinsic functions to LLVM, so that the original instruction set can be left untouched.

Currently we use a simple CDFG (a CFG whose every node is a DFG) representation to do our synthesis. Of course, it is not as powerful as LLVM, and we also want to use many of your transformation/analysis passes. That is why we want to move our project on top of your IR.

Thanks,
-Yiping

----- Original Message -----
From: Vikram Adve
To: LLVM Developers Mailing List
Cc: 'Zhiru Zhang'; Guoling Han; Yiping Fan
Sent: Wednesday, October 20, 2004 3:08 PM
Subject: Re: [LLVMdev] Re: LLVM Compiler Infrastructure Tutorial

> [snip]
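As a rough illustration of the bit-vector manipulations Yiping mentions (bit extraction and concatenation), here is what the semantics of such operations might look like on 64-bit values. The function names are invented for this sketch; no actual LLVM intrinsic is implied:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical semantics for two common hardware-description operations.
// Names are invented; these are not real LLVM intrinsics.

// Extract bits [lo, hi] (inclusive, 0-indexed from the LSB) of v,
// returning them right-justified.
uint64_t extractBits(uint64_t v, unsigned hi, unsigned lo) {
    unsigned width = hi - lo + 1;
    // Guard against undefined behavior when the full 64-bit width is asked for.
    uint64_t mask = (width >= 64) ? ~0ULL : ((1ULL << width) - 1);
    return (v >> lo) & mask;
}

// Concatenate a (as the high part) with the low `widthB` bits of b.
uint64_t concatBits(uint64_t a, uint64_t b, unsigned widthB) {
    return (a << widthB) | extractBits(b, widthB - 1, 0);
}
```

For example, extractBits(0xABCD, 15, 8) yields 0xAB, and concatBits(0xA, 0xB, 4) yields 0xAB; wrapping operations like these as intrinsics would indeed leave the core instruction set untouched, as Yiping proposes.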