The Illinois LLVM compiler research group is excited to announce the open-source release of HPVM<http://hpvm.cs.illinois.edu> (version 1.0). HPVM is a retargetable compiler infrastructure that targets CPUs, GPUs, FPGAs, and accelerators (this release does not include FPGA and accelerator support) [1]. HPVM uses a target-independent compiler IR that extends the LLVM 9.0.0 IR with an explicit, hierarchical data flow representation that captures task, data, and pipelined parallelism.

This release is a major update over our first release (version 0.5), adding support for linear algebra tensor operations, PyTorch and Keras frontends, approximations for convolution operators, and an efficient and flexible framework for approximation tuning. Our novel approximation tuner [2] automatically selects approximation knobs for individual tensor operations, choosing configurations that maximize a (configurable) performance objective (a small, purely illustrative sketch of this knob-per-operation idea appears near the end of this message). HPVM includes backends for CPUs and NVIDIA GPUs (using cuDNN for tensor operations and OpenCL for non-tensor computations).

HPVM comes with an easy-to-use install script that automates installing and patching LLVM 9.0 and installs the necessary Python packages. The release includes multiple benchmarks (10 CNN benchmarks and 5 non-tensor benchmarks) as well as unit tests and regression tests.

HPVM can be downloaded from our public GitLab repository<https://gitlab.engr.illinois.edu/llvm/hpvm-release>. Read our online documentation<https://hpvm.readthedocs.io/en/latest/> for how to build, install, and use HPVM. HPVM is provided under the Apache 2.0 License with LLVM Extensions (the same license used by the LLVM infrastructure). Any questions or suggestions can be directed to hpvm-dev at lists.cs.illinois.edu<mailto:hpvm-dev at lists.cs.illinois.edu>.

The intended audience for HPVM is researchers and developers interested in heterogeneous parallel computing, including those working on compilers, programming languages, approximate computing, software optimization, static and dynamic program analysis, autotuning, and systems for machine learning.

The following people led the effort in creating this release:

* Hashim Sharif<https://www.hashimsharif.com/> (hsharif3 at illinois.edu)
* Yifan Zhao<https://evzh.net/> (yifanz16 at illinois.edu)
* Akash Kothari<https://www.linkedin.com/in/akash-kothari-007/> (akashk4 at illinois.edu)
* Abdul Rafae Noor<https://github.com/RafaeNoor> (arnoor2 at illinois.edu)
* Nathan Zhao<https://www.linkedin.com/in/nathan-zhao-58410917a?trk=public_profile_samename_mini-profile_title> (nz11 at illinois.edu)
* Peter Pao-Huang (ytp2 at illinois.edu)
* Leon Medvinsky (leonkm2 at illinois.edu)
* Adel Ejjeh (aejjeh at illinois.edu)
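To make the knob-per-operation idea above concrete, here is a small, purely illustrative Python sketch. It uses none of HPVM's APIs; the knob names, speedup numbers, and accuracy losses are invented for the example, and the exhaustive search stands in for ApproxTuner's actual tuning strategy, which is described in [2]. The only point is the structure of the problem: one approximation knob per tensor operation, a configuration-level performance objective, and an accuracy budget.

# Toy illustration only; this is not HPVM or ApproxTuner code and uses none of
# HPVM's APIs. Knob names, speedups, and accuracy losses are invented.
from itertools import product

# Hypothetical knobs: (name, estimated speedup, estimated accuracy loss in %).
KNOBS = [
    ("fp32_baseline",   1.0, 0.0),
    ("fp16",            1.5, 0.2),
    ("perforated_conv", 2.1, 0.9),
    ("filter_sampling", 2.6, 1.6),
]

OPS = ["conv1", "conv2", "conv3", "dense1"]  # tensor operations in a tiny CNN
ACCURACY_BUDGET = 3.0                        # max total accuracy loss, in %

def mean_speedup(choice):
    """Crude whole-network estimate: average of the per-operation speedups."""
    return sum(knob[1] for knob in choice) / len(choice)

best = None
# Exhaustive search over one knob per operation (4**4 = 256 configurations is
# fine for a toy; a real tuner searches far larger spaces more cleverly).
for choice in product(KNOBS, repeat=len(OPS)):
    loss = sum(knob[2] for knob in choice)
    if loss > ACCURACY_BUDGET:
        continue  # configuration exceeds the accuracy budget
    speedup = mean_speedup(choice)
    if best is None or speedup > best[0]:
        best = (speedup, loss, {op: knob[0] for op, knob in zip(OPS, choice)})

speedup, loss, config = best
print("best configuration:", config)
print(f"estimated speedup {speedup:.2f}x, estimated accuracy loss {loss:.1f}%")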
[1] Maria Kotsifakou, Prakalp Srivastava, Matthew D. Sinclair, Rakesh Komuravelli, Vikram Adve, and Sarita Adve. 2018. "HPVM: Heterogeneous Parallel Virtual Machine"<https://hpvm.cs.illinois.edu/publications/>. In Proceedings of the 23rd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP '18). Association for Computing Machinery, New York, NY, USA.

[2] Hashim Sharif, Yifan Zhao, Maria Kotsifakou, Akash Kothari, Benjamin Schreiber, Elizabeth Wang, Yasmin Sarita, Nathan Zhao, Keyur Joshi, Vikram Adve, Sasa Misailovic, and Sarita Adve. 2021. "ApproxTuner: A Compiler and Runtime System for Adaptive Approximations"<https://hpvm.cs.illinois.edu/publications/>. In Proceedings of the ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP '21). Virtual Conference, Seoul, South Korea.

Thanks,
Hashim Sharif
University of Illinois