Jakub (Kuba) Kuderski via llvm-dev
2021-Feb-19 16:41 UTC
[llvm-dev] LLVM GPU News Issue #6, February 19 2021
Hi folks,

The sixth issue of LLVM GPU News, a bi-weekly newsletter on all the GPU things under the LLVM umbrella, is now available at:
https://llvm-gpu-news.github.io/2021/02/19/issue-6.html

I'm also pasting the content below, in case you prefer to read it in your email client.

-Jakub

=====================================================================

# LLVM GPU News Issue #6, February 19 2021

Authors: Jakub Kuderski, Lei Zhang, Johannes Doerfert

Welcome to the sixth issue of LLVM GPU News, a bi-weekly newsletter on all the GPU things under the LLVM umbrella. This issue covers the period from February 5 to February 18, 2021.

We welcome your feedback and suggestions. Let us know if we missed anything interesting, or if you want us to bring attention to your (sub)project, revisions under review, or proposals. Please see the bottom of the page for details on how to submit suggestions and contribute.

## Industry News and Conference Talks

* Vulkan, a cross-platform graphics API, is [five years old now](https://www.phoronix.com/scan.php?page=news_item&px=Vulkan-Turns-Five-Years-Old).
* In another Apple M1 GPU tinkering effort, Dougall Johnson published an in-progress document attempting to [explain the M1 GPU architecture](https://dougallj.github.io/applegpu/docs.html). The project [repository contains various tools](https://github.com/dougallj/applegpu), including an assembler, disassembler, emulator, and a test suite.

## LLVM and Clang

### Discussions

* David Blaikie is [looking for volunteers](https://lists.llvm.org/pipermail/llvm-dev/2021-February/148467.html) with a GPU and/or LLVM middle-end background to help review the ["Abstracting over SSA form IRs to implement generic analyses"](https://lists.llvm.org/pipermail/llvm-dev/2020-December/147433.html) proposal. One of the main intended uses of the proposed abstractions is Divergence Analysis.
* Sameer Sahasrabuddhe continues the effort to [enable Divergence Analysis](https://reviews.llvm.org/D96615) with the New Pass Manager. Alina Sbirlea [pointed out that there are two feasible ways](https://lists.llvm.org/pipermail/llvm-dev/2021-February/148600.html) to make the `SimpleLoopUnswitch` pass work: either disable non-trivial unswitching for targets with divergence, or compute Divergence Analysis results within the pass. (A small CUDA sketch at the end of this section illustrates the kind of loop-invariant but divergent condition at stake.)

### Commits

* Fixes to [AMDGPU maximum memory scope for scratch, LDS, and GDS](https://reviews.llvm.org/D96643) address spaces.
* [Support for the AMDGPU `gfx90a` target](https://reviews.llvm.org/D96906) was posted, but may have been committed prematurely.
* CUDA/HIP gained an [option for specifying the compilation unit ID](https://reviews.llvm.org/D95007), `-fuse-cuid`.
* (In review) A HIP option to enable [sanitizer support for the AMDGPU target](https://reviews.llvm.org/D96835), `-fgpu-sanitize`. This is experimental and off by default.
* (In review) A new [clspv target for libclc](https://reviews.llvm.org/D94013). [clspv](https://github.com/google/clspv) is an open-source OpenCL C to Vulkan SPIR-V compiler.
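To make the loop-unswitching concern above a bit more concrete, here is a minimal, hypothetical CUDA sketch (not taken from the thread; the kernel and all names are made up). The branch condition is loop-invariant but divergent, so non-trivially unswitching the loop on it would only trade a divergent branch inside the loop for a divergent branch selecting between two copies of the whole loop:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel: `tid % 2 == 0` is loop-invariant but *divergent*,
// since it depends on the thread ID. Non-trivial unswitching would hoist
// this branch out of the loop and duplicate the loop body under a
// thread-dependent condition; within a warp both copies then execute,
// which is the situation a divergence-aware SimpleLoopUnswitch wants to skip.
__global__ void scale_even_threads(float a, const float *x, float *y, int n) {
  int tid = blockIdx.x * blockDim.x + threadIdx.x;
  int stride = blockDim.x * gridDim.x;
  for (int i = tid; i < n; i += stride) {
    if (tid % 2 == 0)   // loop-invariant, thread-dependent (divergent) condition
      y[i] = a * x[i];
    else
      y[i] = x[i];
  }
}

int main() {
  const int n = 1 << 10;
  float *x = nullptr, *y = nullptr;
  cudaMallocManaged(&x, n * sizeof(float));
  cudaMallocManaged(&y, n * sizeof(float));
  for (int i = 0; i < n; ++i) x[i] = 1.0f;

  scale_even_threads<<<4, 256>>>(2.0f, x, y, n);
  cudaDeviceSynchronize();

  std::printf("y[0] = %.1f, y[1] = %.1f\n", y[0], y[1]);  // expect 2.0 and 1.0
  cudaFree(x);
  cudaFree(y);
  return 0;
}
```

Either of the options mentioned above (disabling non-trivial unswitching on divergence-affected targets, or computing Divergence Analysis inside the pass) would keep `SimpleLoopUnswitch` from performing that unswitch here.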
## MLIR

### Discussions

### Commits

* NVVM/ROCDL kernel function conversions [now rely on](https://reviews.llvm.org/D96591) target-specific attributes for better control.
* NVVM/ROCDL to LLVM IR conversions [now adopt](https://reviews.llvm.org/D96592) the interface-based LLVM translation.
* In the SPIR-V dialect, more [types](https://reviews.llvm.org/D96169) and [ops](https://reviews.llvm.org/D96527) were defined to support graphics use cases.
* More [patterns](https://reviews.llvm.org/D96042) were added to convert vector ops to SPIR-V ops.

## OpenMP (Target Offloading)

### Discussions

* Konstantin Sidorov is interested in Google Summer of Code [project ideas related to Machine Learning-assisted compiler optimizations](https://lists.llvm.org/pipermail/llvm-dev/2021-January/147908.html). Johannes Doerfert [suggested a predictor](https://lists.llvm.org/pipermail/llvm-dev/2021-February/148386.html) for the grid/block/thread block size of OpenMP GPU kernels.

### Commits

* Offloading to NVIDIA devices now [requires CUDA 9.0 or higher](https://reviews.llvm.org/D97003).
* CUDA 11.1 and 11.2 are now [natively supported](https://reviews.llvm.org/D97004).
* All target directives, not only target regions, now utilize [asynchronous actions](https://reviews.llvm.org/D96379) if the plugin supports them (which includes the CUDA plugin).
* The [NVIDIA device runtime](https://reviews.llvm.org/D94745) and the [AMDGPU device runtime](https://reviews.llvm.org/D96533) are now built as C++ with OpenMP code, not as CUDA/HIP anymore.
* The CUDA plugin can now be built [without having CUDA installed](https://reviews.llvm.org/D95155) on the system (or known to clang); this should make it easier to distribute LLVM with OpenMP offload support.
* Various bugs have been fixed, including but not limited to:
  - [PR49158](https://llvm.org/PR49158), fixed by allowing unused functions in declare target regions if they are [not emitted](https://reviews.llvm.org/D95928),
  - [PR49207](https://llvm.org/PR49207), fixed by avoiding stack locations in [asynchronous actions](https://reviews.llvm.org/D96667).

## External Compilers

### LLPC

### Mesa

* llvmpipe, a CPU OpenGL implementation, landed [support for more SPIR-V extensions](https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/8972), bringing it closer to full OpenGL 4.6 support.

### SYCL

* Khronos released the final version of the [SYCL 2020 spec](https://www.khronos.org/blog/sycl-2020-what-do-you-need-to-know). SYCL 2020 is based on C++17 and contains [over 40 new features](https://www.khronos.org/news/press/khronos-releases-sycl-2020-final-specification), including Unified Shared Memory, built-in parallel reduction operations, and atomic operations with C++ atomics semantics.

--
Jakub Kuderski