Andrew Trick via llvm-dev
2016-Feb-15 23:12 UTC
[llvm-dev] WebKit B3 (was LLVM Weekly - #110, Feb 8th 2016)
> On Feb 9, 2016, at 9:55 AM, Rafael Espíndola via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>
> > JavaScriptCore's [FTL JIT](https://trac.webkit.org/wiki/FTLJIT) is moving away
> > from using LLVM as its backend, towards [B3 (Bare Bones
> > Backend)](https://webkit.org/docs/b3/). This includes its own [SSA
> > IR](https://webkit.org/docs/b3/intermediate-representation.html),
> > optimisations, and instruction selection backend.
>
> In the end, what was the main motivation for creating a new IR?

I can't speak to the motivation of the WebKit team. Those are outlined in https://webkit.org/blog/5852/introducing-the-b3-jit-compiler/. I'll give you my personal perspective on using LLVM for JITs, which may be interesting to the LLVM community.

Most of the payoff for high-level languages comes from the language-specific optimizer. It was simpler for JavaScriptCore to perform loop optimization at that level, so it doesn't even make use of LLVM's most powerful optimizations, particularly SCEV-based optimization. There is a relatively small, finite amount of low-level optimization that is going to be important for JavaScript benchmarks (most of InstCombine is not relevant).

SelectionDAG ISel's compile time makes it a very poor choice for a JIT. We never put the effort into making x86 FastISel competitive for WebKit's needs. The focus now is on GlobalISel, but that won't be ready for a while.

Even when LLVM's compile time problems are largely solved, and I believe they can be, there will always be systemic compile-time and memory overhead from design decisions that achieve generality, flexibility, and layering. These are software engineering tradeoffs.

It is possible to design an extremely lightweight SSA IR that works well in a carefully controlled, fixed optimization pipeline. You then benefit from basic SSA optimizations, which are not hard to write. You end up working with an IR of arrays, where identifiers are indices into the array. It's a different way of writing passes, but very efficient. It's probably worth it for WebKit, but not for LLVM.

LLVM's patchpoint and stackmap features are critical for managed runtimes. However, directly supporting these features in a custom IR is simply more convenient. It takes more time to make design changes to LLVM IR than to a custom IR. For example, LLVM does not yet support TBAA on calls, which would be very useful for optimizing around patchpoints and runtime calls.

Prior to FTL, JavaScriptCore had no dependence on the LLVM project. Maintaining a dependence on an external project naturally has integration overhead.

So, while LLVM is not the perfect JIT IR, it is very useful for JIT developers who want a quick solution for low-level optimization and retargetable codegen. WebKit FTL was a great example of using it to bootstrap a higher-tier JIT. To that end, I think it is important for LLVM to have a well-supported -Ojit pipeline (compile fast) with the right set of passes for higher-level languages (e.g. tail duplication).

-Andy
Philip Reames via llvm-dev
2016-Feb-16 00:25 UTC
[llvm-dev] WebKit B3 (was LLVM Weekly - #110, Feb 8th 2016)
After reading https://webkit.org/blog/5852/introducing-the-b3-jit-compiler/, I jotted down a couple of thoughts of my own here: http://www.philipreames.com/Blog/2016/02/15/quick-thoughts-on-webkits-b3/

Philip

On 02/15/2016 03:12 PM, Andrew Trick via llvm-dev wrote:
> [...]
Andrew Trick via llvm-dev
2016-Feb-16 00:57 UTC
[llvm-dev] WebKit B3 (was LLVM Weekly - #110, Feb 8th 2016)
> On Feb 15, 2016, at 4:25 PM, Philip Reames <listmail at philipreames.com> wrote:
>
> After reading https://webkit.org/blog/5852/introducing-the-b3-jit-compiler/, I jotted down a couple of thoughts of my own here: http://www.philipreames.com/Blog/2016/02/15/quick-thoughts-on-webkits-b3/

Thanks for sharing. I think it's worth noting that what you are doing would be considered a 5th tier for WebKit, since you already had a decent optimizing backend without LLVM. You also have more room for background compilation threads and aren't benchmarking on a MacBook Air.

Andy
David Chisnall via llvm-dev
2016-Feb-16 09:14 UTC
[llvm-dev] WebKit B3 (was LLVM Weekly - #110, Feb 8th 2016)
On 15 Feb 2016, at 23:12, Andrew Trick via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> Prior to FTL, JavaScriptCore had no dependence on the LLVM project. Maintaining a dependence on an external project naturally has integration overhead.

And the fact that a company with as much in-house LLVM expertise as Apple decided that this was a significant burden is something that we should take note of. LLVM is particularly unfriendly to out-of-tree developers, with no attempt made to provide API compatibility between releases.

I maintain several out-of-tree projects that use LLVM, and the effort involved in moving between major releases is significant (and not much more than the effort involved in moving between svn head revisions, so, like most other projects, I don't test with head until there's a release candidate - or often after the release, if I don't have a few days to update to the new APIs - which means that we lose out on a load of testing that other library projects get for free). Methods are removed or renamed with no deprecation warnings and often without any documentation indicating what their usage should be replaced with. Even for a fairly small project, upgrading between point releases of LLVM is typically a few days of effort.

David
Andrew Trick via llvm-dev
2016-Feb-16 10:31 UTC
[llvm-dev] WebKit B3 (was LLVM Weekly - #110, Feb 8th 2016)
> On Feb 16, 2016, at 1:14 AM, David Chisnall <David.Chisnall at cl.cam.ac.uk> wrote:
>
> And the fact that a company with as much in-house LLVM expertise as Apple decided that this was a significant burden is something that we should take note of. LLVM is particularly unfriendly to out-of-tree developers, with no attempt made to provide API compatibility between releases. [...]

Thanks David,

The integration burden is something to raise awareness of. I thought failing to mention it would be disingenuous. It needs to factor into anyone's plans to integrate LLVM into their runtime.

I'll reiterate that I do not speak for the WebKit team or their motivation. I don't think the integration burden is any less whether you work for one company or another, or have "in-house" expertise, and I know that API breakage can't be blamed on a particular company.

Bottom line (to risk stating the obvious):

- runtime compiler integration is even harder than static compiler integration
- don't expect to piggyback on LLVM's continual advances without continually engaging the LLVM open source community

I think either of these topics, MCJIT design and general API migration, would be great to discuss in separate threads.

Andy
Rafael Espíndola via llvm-dev
2016-Feb-16 13:26 UTC
[llvm-dev] WebKit B3 (was LLVM Weekly - #110, Feb 8th 2016)
On 15 February 2016 at 18:12, Andrew Trick <atrick at apple.com> wrote:
> I can't speak to the motivation of the WebKit team. Those are outlined in
> https://webkit.org/blog/5852/introducing-the-b3-jit-compiler/.
> I'll give you my personal perspective on using LLVM for JITs, which may be
> interesting to the LLVM community.

Thanks! I found that during the weekend and it was a very nice read.

I find it quite impressive what you guys managed to do in such a short time. Hope to see LLVM catch up some day.

Cheers,
Rafael