Sounds like a good idea to me. But one of the current issues with
back-patching in LLVM is that the patching is not done atomically on some
architectures, e.g. Intel x86, and this makes the LLVM JIT non-thread-safe
in lazy compilation mode. What we need to make sure is that the "updating
the resolution for a given symbol" you mentioned is done in an atomic
fashion.
Also, how much more overhead does "updating the resolution for a given
symbol, and asking rt-dyld to re-link the executable code" incur compared
to simply overwriting an instruction?
Xin
On Mon, Apr 4, 2011 at 1:50 PM, <llvmdev-request at cs.uiuc.edu> wrote:
> Send LLVMdev mailing list submissions to
> llvmdev at cs.uiuc.edu
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev
> or, via email, send a message with subject or body 'help' to
> llvmdev-request at cs.uiuc.edu
>
> You can reach the person managing the list at
> llvmdev-owner at cs.uiuc.edu
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of LLVMdev digest..."
>
>
> Today's Topics:
>
> 1. Re: GSOC Adaptive Compilation Framework for LLVM JIT Compiler
> (Stephen Kyle)
> 2. Re: GSOC Adaptive Compilation Framework for LLVM JIT Compiler
> (Xin Tong Utoronto)
> 3. Re: GSOC Adaptive Compilation Framework for LLVM JIT Compiler
> (Owen Anderson)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Mon, 4 Apr 2011 18:19:01 +0100
> From: Stephen Kyle <s.kyle at ed.ac.uk>
> Subject: Re: [LLVMdev] GSOC Adaptive Compilation Framework for LLVM
> JIT Compiler
> To: Xin Tong Utoronto <x.tong at utoronto.ca>
> Cc: Xin Tong <xerox.time at gmail.com>, llvmdev at cs.uiuc.edu
> Message-ID:
> <AANLkTi=z_W2Q+fRTFEf+Z9R2axfO7pDrn2zu5djiYCiZ at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> On 29 March 2011 12:35, Xin Tong Utoronto <x.tong at utoronto.ca> wrote:
>
> > *Project Description:*
> >
> > *
> > *
> >
> > LLVM has gained much popularity in the programming languages and
> > compiler industry since it was developed. Many researchers have used
> > LLVM as the framework for their research, and many languages have been
> > ported to LLVM IR and interpreted, just-in-time compiled, or statically
> > compiled to native code. One of the current drawbacks of the LLVM JIT
> > is the lack of an adaptive compilation system. All the non-adaptive
> > bits are already there in LLVM: an optimizing compiler with different
> > types of instruction selectors, register allocators, pre-RA schedulers,
> > etc., and a full set of optimizations changeable at runtime. What's
> > left is a system that can keep track of and dynamically look up the
> > hotness of methods and recompile with more expensive optimizations as
> > the methods are executed over and over. This should improve program
> > startup time and execution time, and will bring great benefits to all
> > ported languages that intend to use the LLVM JIT as one of their
> > execution methods.
> >
> >
> > *Project Outline:*
> >
> > *
> > *
> >
> > Currently, the LLVM JIT serves as a management layer for the executed
> > LLVM IR: it manages the compiled code and calls the LLVM code generator
> > to do the real work. There are levels of optimization for the LLVM code
> > generator, and depending on how much optimization the code generator is
> > asked to do, the time taken may vary significantly. The adaptive
> > compilation mechanism should be able to detect when a method is getting
> > hot, compiling or recompiling it at the appropriate optimization level.
> > Moreover, this should happen transparently to the running application.
> > In order to keep track of how many times a JITed function is called,
> > instrumentation code is inserted into the function's LLVM bitcode
> > before it is sent to the code generator. This code increments a counter
> > when the function is called, and when the counter reaches a threshold,
> > the function gives control back to the LLVM JIT. The JIT then looks at
> > the hotness of all the methods and finds the one that triggered the
> > recompilation threshold. The JIT can then choose to raise the level of
> > optimization based on the algorithm below or some other algorithm
> > developed later.
> >
> >
> > IF (getCompilationCount(method) > 50 in the last 100 samples) =>
> >     Recompile at Aggressive
> > ELSE
> >     Recompile at the next optimization level.
> >
> >
> > Even though the invocation counting introduces a few extra lines of
> > binary, the advantages of adaptive optimization should far outweigh
> > them. Note that the adaptive compilation framework I propose here is
> > orthogonal to LLVM's profile-guided optimizations. Profile-guided
> > optimization is a technique for optimizing code using profiling or
> > other external information, whereas the adaptive compilation framework
> > is concerned with the level of optimization rather than with how the
> > optimizations are to be performed.
> >
> >
> > *Project Timeline:*
> >
> > *
> > *
> >
> > This is a relatively small project and does not involve a lot of
> > coding, but a good portion of the time will be spent benchmarking,
> > tuning, and experimenting with different algorithms, e.g. what the
> > algorithm should be for raising the compilation level when a method's
> > recompilation threshold is reached, whether we can make that algorithm
> > adaptive too, etc. Therefore, my timeline for the project is as
> > follows:
> >
> >
> > Week 1
> > Benchmarking the current LLVM JIT compiler, measuring compilation-speed
> > differences between the different levels of compilation. This
> > information is required to understand why one heuristic will outperform
> > others.
> >
> >
> > Week 2
> > Reading LLVM Execution Engine and Code Generator code. Design the LLVM
> > adaptive compilation framework
> >
> >
> > Week 3 - 9
> > Implementing and testing the LLVM adaptive compilation framework. The
> > general idea of the compilation framework is described in the project
> > outline.
> >
> >
> > Week 10 - 13
> > Benchmarking, tuning and experimenting with different recompilation
> > algorithms. Typically benchmarking test cases would be
> >
> >
> > Week 14
> > Test and organize code. Documentation
> >
> >
> > *Overall Goals:*
> >
> >
> >
> > My main goal at the end of the summer is to have an automated profiling
> > and adaptive compilation framework for LLVM. Even though the
> > performance improvements are still unclear at this point, I believe
> > that this adaptive compilation framework will give noticeable
> > performance benefits, as the current JIT compilation is either too
> > simple to give reasonably fast code or too expensive to apply to all
> > functions.
> >
> >
> >
> > *Background:*
> >
> >
> >
> > I have some experience with the Java just-in-time compiler and some
> > experience with LLVM. I have included my CV for your reference. I don't
> > have a specific mentor in mind, but I imagine that the existing mentors
> > from LLVM would be extremely helpful.
> >
> >
> >
> >
> >
> >
> > Xin Tong
> >
> > * *
> >
> > *Email:* x.tong at utoronto.ca
> >
> >
> >
> >
> >
> >
> >
> > Creative, quality-focused computer engineering student with a strong
> > blend of programming, design, and analysis skills. Offers a solid
> > understanding of best practices at each stage of the software
> > development lifecycle. Skilled at recognizing and resolving design
> > flaws that have the potential to create downstream maintenance,
> > scalability, and functionality issues. Adept at optimizing complex
> > system processes and dataflows, taking the initiative to identify and
> > recommend design and coding modifications to improve overall system
> > performance. Excels in dynamic, deadline-sensitive environments that
> > demand resourcefulness, astute judgement, and self-motivated
> > quick-study talents. Utilizes excellent time-management skills to
> > balance a demanding academic course of studies with employment and
> > volunteer pursuits, achieving excellent results in all endeavours.
> >
> >
> > STRENGTHS & EXPERTISE
> >
> >
> >
> > *Compiler Construction · Compiler Optimization · Computer Architecture ·
> > Bottleneck Analysis & Solutions*
> >
> > *Coding & Debugging · Workload Prioritization · Team Collaboration &
> > Leadership*
> >
> > *Software Testing & Integration · Test-Driven Development*
> >
> >
> > EDUCATION & CREDENTIALS
> >
> > * *
> >
> > *BACHELOR OF COMPUTER ENGINEERING*
> >
> > *University of Toronto, Toronto, ON, Expected Completion 2011*
> >
> > Compiler *·* Operating Systems *·* Computer Architecture
> >
> >
> >
> >
> >
> > *Cisco Certified Networking Associate*, July 2009
> >
> >
> > PROFESSIONAL EXPERIENCE
> >
> > * *
> >
> > *Java Virtual Machine JIT Developer
> > **Aug 2010-May 2011*
> >
> > *IBM, Toronto, Canada*
> >
> > * *
> >
> > - Working on the PowerPC code generator of the IBM just-in-time
> > compiler for the Java Virtual Machine.
> > - Benchmarking just-in-time compiler performance, analyzing and fixing
> > possible regressions.
> > - Triaging and fixing defects in the just-in-time compiler.
> > - Acquiring hands-on experience with PowerPC assembly and PowerPC
> > binary debugging with gdb and other related tools.
> >
> > * *
> >
> > * *
> >
> > *Java Virtual Machine Developer, Extreme Blue
> >
> > **May 2010-Aug 2010***
> >
> > *IBM, Ottawa, Canada*
> >
> > - Architected a multi-tenancy solution for IBM J9 Java Virtual
Machine
> > for hosting multiple applications within one Java Virtual Machine.
> Designed
> > solutions to provide good tenant isolation and resource control for
> all
> > tenants running in the same Java Virtual Machine.
> > - Worked on Java class libraries and different components of J9
Java
> > Virtual Machine, including threading library, garbage collector,
> > interpreter, etc.
> >
> >
> >
> > * *
> >
> > * *
> >
> > *Continued…*
> >
> > *Xin Tong
> > ** **page 2*
> >
> > * *
> >
> > *Graphics Compiler Developer
> > **May 2009-May 2010*
> >
> > *Qualcomm, San Diego, USA*
> >
> > - Recruited for an internship position with this multinational
> > telecommunications company to work on their C++ compiler project.
> > - Developed a static verifier program which automatically generates
> > and adds intermediate-language code to test programs to make them
> > self-verifying. The test programs are then used to test the C++
> > compiler, ensuring that it compiles code correctly.
> > - Utilized in-depth knowledge of LLVM systems and algorithms to
> > generate elegant and robust code.
> >
> >
> >
> > * *
> >
> > * *
> > ACADEMIC PROJECTS
> >
> > * *
> >
> > *COMPILER OPTIMIZER IMPLEMENTATION (Dec. 2010 - Apr. 2011):*
> > Implemented a compiler optimizer on the SUIF framework. Implemented
> > control-flow analysis, data-flow analysis, loop-invariant code motion,
> > global value numbering, loop unrolling, and various other local
> > optimizations.
> >
> >
> >
> > *GPU COMPILER IMPLEMENTATION (Sept. - Dec. 2010):* Implemented a GPU
> > compiler that compiles a subset of the GLSL language to the ARB
> > language, which can then be executed on a GPU. Wrote the scanner and
> > parser using Lex and Yacc, and a code generator in an OOP fashion.
> >
> >
> >
> > *Malloc Library Implementation (Oct.-Nov. 2008):* Leveraged a solid
> > understanding of the best-fit algorithm and the linked-list data
> > structure to design a malloc library for dynamic memory allocation.
> > Implemented the library in the C programming language, keeping the
> > roughly 1,000 lines of code robust and clear. Optimized the library at
> > the code level to obtain a 6% increase in allocation throughput.
> > Harnessed knowledge of trace files and drivers to test and evaluate the
> > malloc library's throughput and memory utilization.
> >
> >
> >
> >
> >
> >
> > COMPUTER SKILLS
> >
> >
> >
> > *Programming Languages*
> >
> > C *·* C++ *·* Java
> >
> > *Operating Systems*
> >
> > Linux
> >
> > *Software Tools*
> >
> > GDB *·* GCC
> >
> > * *
> >
> >
> > Extracurricular Activities
> >
> > * *
> >
> > *Elected Officer**, *Institute of Electrical & Electronics Engineers,
> > University of Toronto Branch,* Since May 2009*
> >
> > *Member**, *Institute of Electrical & Electronics Engineers,* Since 2007*
> >
> > *Member**, *University of Toronto E-Sports Club*, 2007*
> >
> > *Member**, *University of Toronto Engineering Chinese Culture Club*, 2007*
> > *Member**, *University of Toronto Robotics Club*, 2007*
> >
> > --
> > Kind Regards
> >
> > Xin Tong
> >
> > _______________________________________________
> > LLVM Developers mailing list
> > LLVMdev at cs.uiuc.edu http://llvm.cs.uiuc.edu
> > http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev
> >
> >
> Hi Xin,
>
> If I understand the above correctly, this basically means that whenever an
> application calls a function it's been given by getPointerToFunction(),
> there's a possibility the function is recompiled with more aggressive
> optimisations, should that function meet some hotness threshold. Does the
> application have to wait while this compilation takes place, before the
> function it called is actually executed?
>
> If so, it's nice that recompilation is transparent to the application,
and
> so functions just magically become faster over time, but stalling the
> application like this may not be desirable.
>
> I've added an adaptive optimisation system to an instruction set
simulator
> developed at my university which heavily relies on LLVM for JIT
> compilation.
> It performs all the compilation in a separate thread from where the
> interpretation of the simulated program is taking place, meaning it never
> needs to wait for any compilation. Adaptive reoptimisation also takes place
> in a separate thread, and this has caused me a multitude of headaches, but
> I
> digress...
>
> Basically: if the initial compilation is done in a separate thread, can you
> ensure that any adaptive reoptimisation also happens asynchronously, or
> will
> such use cases have to do without your system?
>
> Cheers,
> Stephen
>
> ------------------------------
>
> Message: 2
> Date: Mon, 4 Apr 2011 13:42:49 -0400
> From: Xin Tong Utoronto <x.tong at utoronto.ca>
> Subject: Re: [LLVMdev] GSOC Adaptive Compilation Framework for LLVM
> JIT Compiler
> To: Stephen Kyle <s.kyle at ed.ac.uk>
> Cc: llvmdev at cs.uiuc.edu
> Message-ID: <BANLkTi=QCsGwYG3Y-ckf_YJGPdDFKdX0Pw at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> On Mon, Apr 4, 2011 at 1:19 PM, Stephen Kyle <s.kyle at ed.ac.uk>
wrote:
>
> > <snip quoted proposal and reply, identical to Message 1 above>
>
> Functions will have to meet some hotness threshold before they are
> recompiled at a higher optimization level. The application does not have
> to wait for the compilation to finish, as the compilation will be done
> asynchronously in a different thread; the application uses the current
> (less optimized) copy for now and the more optimized copy later. Thank
> you for the suggestion.
>
> Xin
>
>
>
> --
> Kind Regards
>
> Xin Tong
>
> ------------------------------
>
> Message: 3
> Date: Mon, 04 Apr 2011 10:49:53 -0700
> From: Owen Anderson <resistor at mac.com>
> Subject: Re: [LLVMdev] GSOC Adaptive Compilation Framework for LLVM
> JIT Compiler
> To: LLVMdev List <llvmdev at cs.uiuc.edu>
> Message-ID: <D7B4856C-5708-47B8-A379-66A7ECDFE047 at mac.com>
> Content-Type: text/plain; CHARSET=US-ASCII
>
>
> On Apr 3, 2011, at 12:11 PM, Eric Christopher wrote:
>
> > <snip conversation about call patching>
>
> It seems to me that there's a general feature here that LLVM is
> lacking, one that would be useful in a number of JIT-compilation
> contexts: the ability to mark certain instructions (direct calls,
> perhaps branches too) as back-patchable.
>
> The thing that stands out to me is that back-patching a call or branch
> in a JIT'd program is very much like the relocation resolution that a
> dynamic linker does at launch time for a statically compiled program.
> It seems like it might be possible to build on top of the runtime-dyld
> work that Jim has been doing for the MC-JIT to facilitate this. Here's
> the idea:
>
> Suppose we had a means of tagging certain calls (and maybe branches) as
> explicitly requiring relocations. Any back-patchable call would have a
> relocation in the generated code, the MC-JIT would be aware of the
> location and type of the relocations, and rt-dyld would handle the
> upfront resolution. Back-patching, then, is just a matter of updating
> the resolution for a given symbol and asking rt-dyld to re-link the
> executable code.
>
> Thoughts?
>
> --Owen
>
>
>
> ------------------------------
>
> _______________________________________________
> LLVMdev mailing list
> LLVMdev at cs.uiuc.edu
> http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev
>
>
> End of LLVMdev Digest, Vol 82, Issue 7
> **************************************
>
--
Kind Regards
Xin Tong