Displaying 20 results from an estimated 50000 matches similar to: "[LLVMdev] LLVM IR is a compiler IR"
2011 Oct 06
0
[LLVMdev] FW: LLVM IR is a compiler IR
Sorry for the noise, but this is the message I meant to send to the list rather than replying to David directly. Unfortunately, I had just sent it to him directly before.
From: mclagett at hotmail.com
To: greened at obbligato.org
Subject: RE: [LLVMdev] LLVM IR is a compiler IR
Date: Thu, 6 Oct 2011 19:44:11 +0000
Thanks for your prompt reply. My answers are below at the end of your message.
2011 Oct 06
0
[LLVMdev] LLVM IR is a compiler IR
Michael Clagett <mclagett at hotmail.com> writes:
> There are about 32 core opcodes that constitute the basic instruction
> set and I can envision mapping each of these to some sequence of LLVM
> IR. There's also a whole lot more "extended opcodes" that are
> executed by the same core instruction execution loop but which are
> coded using the built-in Intel
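To make the opcode-to-IR mapping concrete, here is a minimal C++ sketch against LLVM's IRBuilder API. The opcode names (VM_PUSH, VM_ADD, VM_MUL) and the SSA-stack model are invented for illustration; they are not the poster's actual instruction set.

#include "llvm/IR/IRBuilder.h"
#include <cstdint>
#include <vector>

using namespace llvm;

// Hypothetical core opcodes, stand-ins for the ~32 described in the thread.
enum VMOp : uint8_t { VM_PUSH, VM_ADD, VM_MUL };

// Lowers one opcode, modelling the VM's operand stack as SSA values.
static void emitOp(IRBuilder<> &B, std::vector<Value *> &Stack,
                   VMOp Op, int64_t Imm = 0) {
  switch (Op) {
  case VM_PUSH:
    Stack.push_back(B.getInt64(Imm));
    break;
  case VM_ADD: {
    Value *R = Stack.back(); Stack.pop_back();
    Value *L = Stack.back(); Stack.pop_back();
    Stack.push_back(B.CreateAdd(L, R, "add"));
    break;
  }
  case VM_MUL: {
    Value *R = Stack.back(); Stack.pop_back();
    Value *L = Stack.back(); Stack.pop_back();
    Stack.push_back(B.CreateMul(L, R, "mul"));
    break;
  }
  }
}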
2011 Sep 13
4
[LLVMdev] Strategy for leveraging llvm optimizations in vm
Hi --
I'm still very much a newbie with LLVM, but am hoping to use it to compile to native Intel code a set of sources that combines byte codes for my own custom VM with Intel code written directly in assembly language.
In an earlier exchange, I already discovered that LLVM does not do any optimizations on Intel assembly language code. This would be an
2011 Sep 13
0
[LLVMdev] Strategy for leveraging llvm optimizations in vm
If your x86 assembly is sufficiently simple, I don't see any reason why
you couldn't programmatically raise it back up to LLVM IR. People
have tried this in the past (qemu, I think? I can't remember), and it
usually results in some considerable slowdowns. I'd imagine that if
your asm is sufficiently restricted, such as not needing to worry
about arithmetic flags, the x87 FPU
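As a toy illustration of such raising, the sketch below (C++, IRBuilder API) lifts a hypothetical flag-free mov/add fragment into LLVM IR by modelling each guest register as a stack slot. This is a deliberate simplification, not how qemu-style binary translators actually work.

#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/Verifier.h"
#include "llvm/Support/raw_ostream.h"
#include <map>
#include <string>

using namespace llvm;

int main() {
  LLVMContext Ctx;
  Module M("raised_asm", Ctx);
  IRBuilder<> B(Ctx);

  Function *F = Function::Create(FunctionType::get(B.getInt32Ty(), false),
                                 Function::ExternalLinkage, "fragment", &M);
  B.SetInsertPoint(BasicBlock::Create(Ctx, "entry", F));

  // Model each guest register as a stack slot; mem2reg can later promote
  // these to SSA values.
  std::map<std::string, Value *> Regs;
  for (const char *R : {"eax", "ebx"})
    Regs[R] = B.CreateAlloca(B.getInt32Ty(), nullptr, R);

  B.CreateStore(B.getInt32(1), Regs["eax"]);   // mov eax, 1
  B.CreateStore(B.getInt32(41), Regs["ebx"]);  // mov ebx, 41
  // add eax, ebx -- EFLAGS deliberately ignored, as the post suggests.
  Value *A = B.CreateLoad(B.getInt32Ty(), Regs["eax"], "a");
  Value *Bv = B.CreateLoad(B.getInt32Ty(), Regs["ebx"], "b");
  B.CreateStore(B.CreateAdd(A, Bv, "sum"), Regs["eax"]);

  B.CreateRet(B.CreateLoad(B.getInt32Ty(), Regs["eax"], "result"));
  verifyFunction(*F, &errs());
  M.print(outs(), nullptr);
  return 0;
}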
2011 Sep 02
1
[LLVMdev] Best way to use LLVM with byte code vm
Hi --
I'm wondering if some of you old hands might offer me some guidance. I've been working for some time with a virtual machine platform that is based loosely on the instruction set of a Forth hardware processor that Charles Moore designed sometime back. He used what he called a "MISC" or "Minimal Instruction Set Computer" and the original instruction set I was
2010 Dec 02
1
[LLVMdev] Is anyone working on a Windows VmKit port?
Curious as to whether any work is in progress on this? Thanks.
2011 Apr 21
1
[LLVMdev] Sources on optimization and debugging
Hi Everyone --
I'm planning on using LLVM to add optimizing compilation capability to a byte-code-driven virtual machine that is part of my foundation platform for a series of tool products I am building. I'm still pretty new to this whole arena and am particularly curious about one important aspect: it strikes me that the more optimizations applied to code (whether at the source
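For reference (and using today's new pass manager, which did not exist when this was written), the sketch below runs LLVM's standard pipeline over a module at a chosen optimization level; the higher the level, the harder it generally becomes to map the optimized code back to the original byte code when debugging.

#include "llvm/IR/Module.h"
#include "llvm/Passes/PassBuilder.h"

using namespace llvm;

// Runs the default module pipeline at the given level (O2 by default).
void optimizeModule(Module &M,
                    OptimizationLevel Level = OptimizationLevel::O2) {
  LoopAnalysisManager LAM;
  FunctionAnalysisManager FAM;
  CGSCCAnalysisManager CGAM;
  ModuleAnalysisManager MAM;

  PassBuilder PB;
  PB.registerModuleAnalyses(MAM);
  PB.registerCGSCCAnalyses(CGAM);
  PB.registerFunctionAnalyses(FAM);
  PB.registerLoopAnalyses(LAM);
  PB.crossRegisterProxies(LAM, FAM, CGAM, MAM);

  ModulePassManager MPM = PB.buildPerModuleDefaultPipeline(Level);
  MPM.run(M, MAM);
}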
2018 Mar 15
5
[RFC] llvm-exegesis: Automatic Measurement of Instruction Latency/Uops
[You can find an easier-to-read and more complete version of this RFC here:
<https://docs.google.com/document/d/1QidaJMJUyQdRrFKD66vE1_N55whe0coQ3h1GpFzz27M/edit?ts=5aaa84ee#>.]
Knowing instruction scheduling properties (latency, uops) is the basis for
all scheduling work done by LLVM.
Unfortunately, vendors usually release only partial (and sometimes
incorrect) information. Updating the
2004 Oct 22
6
[LLVMdev] Some question on LLVM design
Hi everybody,
I'm currently looking at LLVM as a possible back-end to a dynamic
programming system (in the tradition of Smalltalk) we are developing. I
have read most of the llvmdev archives, and I'm aware that some things
are 'planned' but not implemented yet. We are willing to contribute the
code we'll need for our project, but before I can start coding I'll have
to
2019 Jun 02
3
[PATCH 0/2] drm/nouveau/bios/init: Improve pre-PMU devinit opcode coverage
NVIDIA GPUs include a common scripting language (devinit) that can be
interpreted by a number of "engines", e.g. within a kernel-mode software
driver, the VGA BIOS, or a small on-board microcontroller that provides
certain security assertions (the 'PMU').
This system allows a GPU programming sequence to be shared by multiple
entities that would not otherwise be able to execute
2011 Dec 02
18
[LLVMdev] RFC: Machine Instruction Bundle
Machine Instruction Bundle in LLVM
Hi all,
There has been quite a bit of discussion about adding machine instruction bundles to support VLIW targets. I have been pondering what the right representation should be and what kind of impact it might have on the LLVM code generator. I believe I have a fairly good plan now and would like to share it with the LLVM community.
Design Criteria
1. The
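For readers coming to this thread later: the bundle support that eventually landed can be driven with the finalizeBundle helper from llvm/CodeGen/MachineInstrBundle.h. The fragment below is only a sketch of gluing two adjacent machine instructions into one bundle, assuming the target has already decided they may issue together.

#include "llvm/CodeGen/MachineBasicBlock.h"
#include "llvm/CodeGen/MachineInstrBundle.h"
#include <iterator>

using namespace llvm;

// Bundles the instruction at First with its immediate successor.
// finalizeBundle marks [First, Last) as one bundle and inserts the BUNDLE
// header instruction carrying the merged operand information.
static void bundlePair(MachineBasicBlock &MBB,
                       MachineBasicBlock::instr_iterator First) {
  MachineBasicBlock::instr_iterator Last = std::next(First, 2);
  finalizeBundle(MBB, First, Last);
}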
2016 Nov 29
5
[RFC] Parallelizing (Target-Independent) Instruction Selection
Hi,
Though there exists a lot of research on parallelizing or scheduling
optimization passes, if you open up the timing metrics of codegen (llc
-time-passes), you'll find that the most time-consuming task is actually
instruction selection (40~50% of the time) instead of the optimization
passes (10~0%). That's why we're trying to parallelize the
(target-independent) instruction selection process
2017 Feb 01
2
RFC: Generic IR reductions
> My proposal was to have a reduction intrinsic that can infer the type by the predecessors.
> For example:
> @llvm.reduce(ext <N x double> ( add <N x float> %a, %b))
And if we don't have %b? We just want to sum all elements of %a? Something like @llvm.reduce(ext <N x double> ( add <N x float> %a, zeroinitializer))
Don't we have a problem with constant
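For context, this discussion eventually converged on dedicated reduction intrinsics (llvm.vector.reduce.*). A minimal sketch with today's IRBuilder, where the explicit start value plays the role of the zeroinitializer in the quoted snippet:

#include "llvm/IR/Constants.h"
#include "llvm/IR/IRBuilder.h"

using namespace llvm;

// Sums all elements of a vector of floats via llvm.vector.reduce.fadd.
// The explicit accumulator (0.0 here) answers the "what if we don't have
// %b" question above: %a is reduced against a neutral start value.
static Value *sumElements(IRBuilder<> &B, Value *VecOfFloats) {
  Value *Zero = ConstantFP::get(B.getFloatTy(), 0.0);
  return B.CreateFAddReduce(Zero, VecOfFloats);
}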
2016 Nov 29
2
[RFC] Parallelizing (Target-Independent) Instruction Selection
> On Nov 29, 2016, at 1:14 PM, Mehdi Amini <mehdi.amini at apple.com> wrote:
>
>
>> On Nov 29, 2016, at 4:02 AM, Bekket McClane via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>>
>> Hi,
>> Though there exists a lot of research on parallelizing or scheduling optimization passes, if you open up the time
2009 Apr 05
2
Problem with Dynamo-Package
Good day,
I am facing a problem when installing and loading the dynamo package. After installing the package, I received the following warning message:
"In file.create(f.tg) :
cannot create file 'C:\PROGRA~2\R\R-28~1.1/doc/html/packages.html', reason 'Permission denied'"
and when I load the package, an error message pops up saying that "the application
2016 Nov 30
4
[RFC] Parallelizing (Target-Independent) Instruction Selection
> On Nov 30, 2016, at 5:14 AM, Mehdi Amini <mehdi.amini at apple.com> wrote:
>
>>
>> On Nov 29, 2016, at 4:02 AM, Bekket McClane via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>>
>> Hi,
>> Though there exists a lot of research on parallelizing or scheduling optimization passes, if you open up the timing metrics of
2014 Jan 21
2
[LLVMdev] How to force a MachineFunctionPass to be the last one ?
Hi,
I would like to execute a MachineFunctionPass after all other passes
that modify the machine code.
In other words, if we call llc to generate an assembly file, that pass
should run right before the "Assembly Printer" pass.
Is there any official way to enforce this?
Best regards,
Sebastien
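One common in-tree answer, sketched below against the pre-LLVM-20 TargetPassConfig interface, is to add the pass from the target's addPreEmitPass()/addPreEmitPass2() hook, which runs after all other machine passes and immediately before the AsmPrinter. createMyLatePass() is a placeholder for the pass in question.

#include "llvm/CodeGen/TargetPassConfig.h"
#include "llvm/Pass.h"

using namespace llvm;

// Placeholder factory for the MachineFunctionPass that must run last.
FunctionPass *createMyLatePass();

namespace {
class MyTargetPassConfig : public TargetPassConfig {
public:
  MyTargetPassConfig(LLVMTargetMachine &TM, PassManagerBase &PM)
      : TargetPassConfig(TM, PM) {}

  // Passes added here run after all other machine-code passes,
  // right before assembly/object emission.
  void addPreEmitPass2() override {
    addPass(createMyLatePass());
  }
};
} // end anonymous namespace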
2012 Jan 11
0
[LLVMdev] RFC: Machine Instruction Bundle
Hi Evan,
I just read your proposal and the discussion that followed about VLIW support, and I want to share my experience of writing a VLIW back-end for LLVM.
I would not integrate the packetizer into the register allocator superclass, since it would reduce the back-end developer's flexibility to add optimization passes after the packetizer. Instead, I would add the packetizer as a separate
2018 Mar 15
0
[RFC] llvm-exegesis: Automatic Measurement of Instruction Latency/Uops
Sounds like a very useful tool. Thank you for contributing.
Taking a step back and looking at the big picture, combining this with
the recently contributed llvm-mca dramatically improves our scheduling
and performance analysis story. Being able to take a snippet of code on
a particular machine, measure latency/throughput/ports for each
instruction (this tool), and then analyze the entire
2017 Feb 27
3
How can I get the opcode length of an IR instruction in LLVM?
I need to get the offset and the exact length of opcode corresponding to
a particular LLVM IR instruction in x86 architecture. I believe for this
I must hack in backends.
I assume there is a way when the opcodes are being generated in x86
backend to dump their offsets and sizes. However, considering
optimizations and translation of one IR instruction to multiple
operations, I'm not sure
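One hedged way to approximate this late in the backend is a MachineFunctionPass that walks the generated MachineInstrs and asks the target for their encoded sizes, as sketched below. Because one IR instruction may become several machine instructions (or none), mapping the sizes back to IR requires debug locations, and on x86 the exact length is only final at MC encoding time; the pass is illustrative only.

#include "llvm/CodeGen/MachineFunctionPass.h"
#include "llvm/CodeGen/TargetInstrInfo.h"
#include "llvm/CodeGen/TargetSubtargetInfo.h"
#include "llvm/Support/raw_ostream.h"
#include <cstdint>

using namespace llvm;

namespace {
// Dumps an estimated offset and size for every machine instruction.
struct InstSizeDumper : MachineFunctionPass {
  static char ID;
  InstSizeDumper() : MachineFunctionPass(ID) {}

  bool runOnMachineFunction(MachineFunction &MF) override {
    const TargetInstrInfo *TII = MF.getSubtarget().getInstrInfo();
    uint64_t Offset = 0;
    for (const MachineBasicBlock &MBB : MF)
      for (const MachineInstr &MI : MBB) {
        unsigned Size = TII->getInstSizeInBytes(MI);
        errs() << "offset " << Offset << ", size " << Size << ": ";
        MI.print(errs());
        Offset += Size;
      }
    return false; // analysis only; the code is not modified
  }
};
} // end anonymous namespace

char InstSizeDumper::ID = 0;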