Alex Bradbury via llvm-dev
2016-Feb-08 16:40 UTC
[llvm-dev] LLVM Weekly - #110, Feb 8th 2016
LLVM Weekly - #110, Feb 8th 2016
================================

If you prefer, you can read an HTML version of this email at <http://llvmweekly.org/issue/110>.

Welcome to the one hundred and tenth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by [Alex Bradbury](http://asbradbury.org). Subscribe to future issues at <http://llvmweekly.org> and pass it on to anyone else you think may be interested. Please send any tips or feedback to <asb at asbradbury.org>, or @llvmweekly or @asbradbury on Twitter.

## News and articles from around the web

Slides from the LLVM devroom at FOSDEM last weekend are [now available online](http://llvm.org/devmtg/2016-01/). Unfortunately there was an issue with the recording of the talks, so videos will not be available.

JavaScriptCore's [FTL JIT](https://trac.webkit.org/wiki/FTLJIT) is moving away from using LLVM as its backend, towards [B3 (Bare Bones Backend)](https://webkit.org/docs/b3/). This includes its own [SSA IR](https://webkit.org/docs/b3/intermediate-representation.html), optimisations, and instruction selection backend.

Source tarballs and binaries are [now available](http://lists.llvm.org/pipermail/llvm-dev/2016-February/094987.html) for LLVM and Clang 3.8-RC2.

The Zurich LLVM Social [is coming up this Thursday](http://lists.llvm.org/pipermail/llvm-dev/2016-February/094759.html), February 11th at 7pm.

Jeremy Bennett has written up a [comparison of the Clang and GCC command-line flags](http://www.embecosm.com/2016/02/05/how-similar-are-gcc-and-llvm-the-user-perspective/). The headline summary is that 397 flags work in both GCC and LLVM, 433 are LLVM-only, and 598 are GCC-only.

[vim-llvmcov](https://github.com/alepez/vim-llvmcov) has been released. It is a vim plugin to show code coverage using the llvm-cov tool.

## On the mailing lists

* Mehdi Amini has posted an [RFC on floating point environment and rounding mode handling in LLVM](http://lists.llvm.org/pipermail/llvm-dev/2016-February/094869.html). The work started all the way back in 2014 and has a whole bunch of patches up for review. Chandler Carruth has responded with a [detailed description of his concerns about the current design](http://lists.llvm.org/pipermail/llvm-dev/2016-February/094916.html), and his proposed alternative seems to be getting a lot of positive feedback. A small example of what is at stake appears after this list.

* Morten Brodersen has recently upgraded a number of applications from the old JIT to the new MCJIT under LLVM 3.7.1 but has [found significant performance regressions](http://lists.llvm.org/pipermail/llvm-dev/2016-February/094908.html). Some other respondents have seen similar issues, either in compilation time or in reduced quality of the generated code. Some of the thread participants will be providing specific examples so they can be investigated. It's possible the issue is something as simple as a different default somewhere. Benoit Belley noted [they saw regressions due to their frontend's use of allocas in 3.7](http://lists.llvm.org/pipermail/llvm-dev/2016-February/094946.html).

* Lang Hames kicked off a long discussion about [error handling in LLVM libraries](http://lists.llvm.org/pipermail/llvm-dev/2016-February/094804.html). Lang has implemented a new scheme and is seeking feedback on it. There's a lot of discussion that unfortunately I haven't had time to summarise properly. If error handling design interests you, do get stuck in.
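Here is the small example promised above for the rounding-mode RFC. It is a hand-written C++ sketch, not code from the RFC itself: if the optimiser folds or merges the two divisions without regard for the dynamic rounding mode, the program observes the wrong results.

    #include <cfenv>
    #include <cstdio>

    int main() {
      // volatile keeps the divisions as runtime operations in this sketch;
      // the RFC is about letting the optimiser honour the rounding mode
      // without such workarounds.
      volatile double n = 1.0, d = 3.0;

      std::fesetround(FE_DOWNWARD);
      double down = n / d;            // rounded towards -infinity
      std::fesetround(FE_UPWARD);
      double up = n / d;              // rounded towards +infinity

      // The two results differ in the last bit, so the divisions must not be
      // constant-folded or merged into one.
      std::printf("%a\n%a\n", down, up);
      return 0;
    }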
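And a sketch of the thread-local hazard from the Crystal thread (a hypothetical example of mine, not Juan's code): the source re-reads the thread-local after the suspension point, but the optimiser may reuse the address it computed before it.

    #include <cstdio>

    thread_local int tls_value;

    // Placeholder for a coroutine suspension point; in the scenario described
    // above, execution may resume on a *different* OS thread after this call.
    void suspend() { /* yield to the scheduler */ }

    int read_twice() {
      int before = tls_value;  // computes the address of this thread's TLS slot
      suspend();               // the coroutine may migrate to another thread here
      int after = tls_value;   // LLVM may reuse the cached address rather than
                               // re-computing it, reading the old thread's slot
      return before + after;
    }

    int main() { std::printf("%d\n", read_twice()); }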
* Adrian McCarthy has written up details on the [recent addition of minidump support to LLDB](http://lists.llvm.org/pipermail/lldb-dev/2016-February/009533.html). Minidumps are the Windows equivalent of a core file.

* Juan Wajnerman is looking at adding support for multithreading to the Crystal language, and has a [question about thread local variables](http://lists.llvm.org/pipermail/llvm-dev/2016-February/094736.html). LLVM won't re-load the thread local address, which causes issues when a thread local variable is read in a coroutine running on one thread which is then suspended and continued on a different thread. This is apparently a known issue, covered by [PR19177](https://llvm.org/bugs/show_bug.cgi?id=19177). A sketch of the hazard appears after this list.

* Steven Wu has posted an [RFC on embedding bitcode in object files](http://lists.llvm.org/pipermail/llvm-dev/2016-February/094851.html). The intent is to upstream support that already exists in Apple's fork. Understandably, some of the respondents asked how this relates to the .llvmbc section that the Thin-LTO work is introducing. Steven indicates it's pretty much the same, but for Mach-O rather than ELF, and that he hopes to unify them during the upstreaming.

## LLVM commits

* LLVM now has a memory SSA form. This isn't yet used by anything in-tree, but should form a very useful basis for a variety of analyses and transformations. This patch has been baking for a long time, first being submitted for initial feedback in April last year. [r259595](http://reviews.llvm.org/rL259595).

* A new loop versioning loop-invariant code motion (LICM) pass was introduced. This enables more opportunities for LICM by creating a new version of the loop, guarded by runtime checks, for potential aliases that can't be ruled out at compile time (a hand-written illustration follows the Clang commits list below). [r259986](http://reviews.llvm.org/rL259986).

* LazyValueInfo gained an intersect operation on lattice values, which can be used to exploit multiple sources of facts at once. The intent is to make greater use of it, but already it is able to remove a half range-check when performing jump-threading. [r259461](http://reviews.llvm.org/rL259461).

* The SmallSet and SmallPtrSet templates will now error out if created with a size greater than 32. [r259419](http://reviews.llvm.org/rL259419).

* The ability to emit errors from the backend for unsupported features has been refactored, so the BPF, WebAssembly, and AMDGPU backends can all share the same implementation. [r259498](http://reviews.llvm.org/rL259498).

* A simple pass using LoopVersioning has been added, primarily for testing. The new pass will fully disambiguate all may-aliasing memory accesses no matter how many runtime checks are required. [r259610](http://reviews.llvm.org/rL259610).

* The way bitsets are used to encode type information has now been documented. [r259619](http://reviews.llvm.org/rL259619).

* You can now use the flag `-DLLVM_ENABLE_LTO` with CMake to build LLVM with link-time optimisation. [r259766](http://reviews.llvm.org/rL259766).

* TableGen's AsmOperandClass gained the `IsOptional` field. Setting this to 1 means the operand is optional and the AsmParser will not emit an error if the operand isn't present. [r259913](http://reviews.llvm.org/rL259913).

* There is now a scheduling model for the Exynos-M1. [r259958](http://reviews.llvm.org/rL259958).

## Clang commits

* Clang now has builtins for the bitreverse intrinsic (see the example after this list). [r259671](http://reviews.llvm.org/rL259671).

* The option names for profile-guided optimisations with the cc1 driver have been modified. [r259811](http://reviews.llvm.org/rL259811).
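A quick example of the new bitreverse builtins, assuming they follow Clang's `__builtin_bitreverse<width>` naming (8/16/32/64-bit variants):

    #include <cstdint>
    #include <cstdio>

    int main() {
      std::uint32_t x = 0x00000001u;
      // Reverses the order of the 32 bits; with Clang this lowers to the
      // llvm.bitreverse.i32 intrinsic.
      std::printf("%#010x\n", static_cast<unsigned>(__builtin_bitreverse32(x)));  // 0x80000000
      return 0;
    }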
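And here is a hand-written C++ illustration (not compiler output) of what the new loop-versioning LICM pass does: `*b` can only be hoisted if `a` and `b` do not alias, which in general is only known at run time, so the pass emits a checked fast version of the loop alongside the original.

    #include <cstdint>

    void scale(double *a, const double *b, int n) {
      auto lo = reinterpret_cast<std::uintptr_t>(a);
      auto hi = reinterpret_cast<std::uintptr_t>(a + n);
      auto p  = reinterpret_cast<std::uintptr_t>(b);

      if (p < lo || p >= hi) {    // runtime no-alias check guards the fast version
        double t = *b;            // the load of *b is hoisted out of the loop
        for (int i = 0; i < n; ++i)
          a[i] += t;
      } else {                    // original loop, kept for the aliasing case
        for (int i = 0; i < n; ++i)
          a[i] += *b;             // *b may change as a[] is written, so reload it
      }
    }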
## Other project commits

* AddressSanitizer now supports iOS. [r259451](http://reviews.llvm.org/rL259451).

* The current policy for using the new ELF LLD as a library has been documented. [r259606](http://reviews.llvm.org/rL259606).

* Polly's new Sphinx documentation gained a guide on using Polly with Clang. [r259767](http://reviews.llvm.org/rL259767).
Rafael Espíndola via llvm-dev
2016-Feb-09 17:55 UTC
[llvm-dev] LLVM Weekly - #110, Feb 8th 2016
> JavaScriptCore's [FTL JIT](https://trac.webkit.org/wiki/FTLJIT) is moving away
> from using LLVM as its backend, towards [B3 (Bare Bones
> Backend)](https://webkit.org/docs/b3/). This includes its own [SSA
> IR](https://webkit.org/docs/b3/intermediate-representation.html),
> optimisations, and instruction selection backend.

In the end, what was the main motivation for creating a new IR?

Cheers,
Rafael
Andrew Trick via llvm-dev
2016-Feb-15 23:12 UTC
[llvm-dev] WebKit B3 (was LLVM Weekly - #110, Feb 8th 2016)
> On Feb 9, 2016, at 9:55 AM, Rafael Espíndola via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>
> > JavaScriptCore's [FTL JIT](https://trac.webkit.org/wiki/FTLJIT) is moving away
> > from using LLVM as its backend, towards [B3 (Bare Bones
> > Backend)](https://webkit.org/docs/b3/). This includes its own [SSA
> > IR](https://webkit.org/docs/b3/intermediate-representation.html),
> > optimisations, and instruction selection backend.
>
> In the end, what was the main motivation for creating a new IR?

I can't speak to the motivation of the WebKit team; their reasons are outlined in https://webkit.org/blog/5852/introducing-the-b3-jit-compiler/. I'll give you my personal perspective on using LLVM for JITs, which may be interesting to the LLVM community.

Most of the payoff for high-level languages comes from the language-specific optimizer. It was simpler for JavaScriptCore to perform loop optimization at that level, so it doesn't even make use of LLVM's most powerful optimizations, particularly SCEV-based optimization. There is a relatively small, finite amount of low-level optimization that is going to be important for JavaScript benchmarks (most of InstCombine is not relevant).

SelectionDAG ISel's compile time makes it a very poor choice for a JIT. We never put the effort into making x86 FastISel competitive for WebKit's needs. The focus now is on GlobalISel, but that won't be ready for a while.

Even when LLVM's compile-time problems are largely solved, and I believe they can be, there will always be systemic compile time and memory overhead from design decisions that achieve generality, flexibility, and layering. These are software engineering tradeoffs.

It is possible to design an extremely lightweight SSA IR that works well in a carefully controlled, fixed optimization pipeline. You then benefit from basic SSA optimizations, which are not hard to write. You end up working with an IR of arrays, where identifiers are indices into the array (see the toy sketch at the end of this mail). It's a different way of writing passes, but very efficient. It's probably worth it for WebKit, but not LLVM.

LLVM's patchpoints and stackmaps features are critical for managed runtimes. However, directly supporting these features in a custom IR is simply more convenient. It takes more time to make design changes to LLVM IR vs. a custom IR. For example, LLVM does not yet support TBAA on calls, which would be very useful for optimizing around patchpoints and runtime calls.

Prior to FTL, JavaScriptCore had no dependence on the LLVM project. Maintaining a dependence on an external project naturally has integration overhead.

So, while LLVM is not the perfect JIT IR, it is very useful for JIT developers who want a quick solution for low-level optimization and retargetable codegen. WebKit FTL was a great example of using it to bootstrap a higher-tier JIT. To that end, I think it is important for LLVM to have a well-supported -Ojit pipeline (compile fast) with the right set of passes for higher-level languages (e.g. Tail Duplication).

-Andy
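As a companion to the "IR of arrays" paragraph above, here is a toy C++ sketch (an illustration only, not B3's actual data structures): every value lives in one contiguous vector and a reference to a value is just its index, so passes iterate over plain arrays instead of chasing pointers.

    #include <cstdint>
    #include <vector>

    using ValueId = std::uint32_t;   // a value is identified by its index

    enum class Opcode : std::uint8_t { Const, Add, Return };

    struct Value {
      Opcode op;
      std::int64_t constant;         // payload for Opcode::Const
      ValueId operands[2];           // operand indices (unused slots stay 0)
    };

    struct Function {
      std::vector<Value> values;     // the whole IR is one contiguous array

      ValueId emit(Value v) {
        values.push_back(v);
        return static_cast<ValueId>(values.size() - 1);
      }
    };

    int main() {
      Function f;
      ValueId a   = f.emit({Opcode::Const, 2, {}});
      ValueId b   = f.emit({Opcode::Const, 3, {}});
      ValueId sum = f.emit({Opcode::Add, 0, {a, b}});
      f.emit({Opcode::Return, 0, {sum, 0}});
      // A pass is just a loop over f.values; replacing a use is an integer store.
      return 0;
    }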