Star Tan
2013-Mar-18 15:31 UTC
[LLVMdev] [Polly]GSoC Proposal: Reducing LLVM-Polly Compiling overhead
Dear Tobias Grosser,

Thank you so much for your kind reply. Your advice is very helpful and inspiring.

At 2013-03-18 20:40:50, "Tobias Grosser" <tobias at grosser.es> wrote:
>On 03/17/2013 11:54 PM, Star Tan wrote:
>> Hello Tobi,
>>
>> I am interested in the Polly project. Polly seems to be a very promising tool for finding program parallelism on top of the LLVM infrastructure. However, I find that Polly analysis and optimization can consume significant compile time, so I propose a GSoC project to reduce Polly's compile time, and I hope my work can make Polly more applicable for all LLVM users.
>
>Dear Star Tan,
>
>the project you propose sounds both interesting and important. As you
>correctly pointed out, maintaining fast compile times when Polly is
>enabled is a very important point, especially if we want to think of
>enabling Polly by default.

Yes, it is very important to reduce Polly's compile-time overhead, especially if Polly is enabled by default. That is the motivation for this GSoC project, and I hope my work can help the further development of Polly.

>> I have done some preliminary experiments. My experimental environment is built as follows:
>> First, I build LLVM, Clang and Polly with the -O3 option, so all of these tools run in the best case;
>> Second, I evaluate the compile time of Polybench using the following options:
>> Clang-O3: clang -O3 (plain clang without Polly)
>> Polly-basic: clang -Xclang -load -Xclang LLVMPolly.so -O3 (load Polly, but do not run the Polly optimizations)
>> Polly-optimize: clang -Xclang -load -Xclang LLVMPolly.so -mllvm -polly -O3 (run the Polly optimizations)
>
>I believe this setup is a very good start. However, to draw useful
>conclusions we probably need more information:
>
>1) Did you build LLVM/Polly in release mode?
>
>The compile times with and without Polly look very high. Compiling 2mm
>with a current version of clang takes 0.076 seconds for me, whereas you
>report a compile time of 0.780 seconds. This may be caused by performance
>differences of our machines or, what I believe is more likely, that you
>did not compile LLVM and Polly in Release mode. Profiling of debug
>builds may give a wrong idea of where compile time is spent. Can you
>verify you built in Release mode?

Thank you so much for your advice. You are right: I used the polly.sh script (provided on the Polly website) to build LLVM-Polly, and it builds LLVM-Polly in Debug mode by default. I will rebuild LLVM-Polly in Release mode by configuring with the --enable-optimized option, and I will report the new data after I redo these experiments in the new environment.

>2) Compile time impact when Polly does not optimize the program code
>
>The benchmarks you have chosen can be optimized well by Polly. This is
>obviously a nice thing, but it also implies that Polly transforms and
>optimizes a large percentage of this code. In cases where Polly can
>create an optimized version of the code, (limited) increases in compile
>time are often acceptable. However, one area where we should be super
>critical with compile time is the case when Polly is enabled, but
>cannot perform any useful transformation.
>
>For polybench, you can get a first idea of the current overhead by
>timing only the polly-detect part, while disabling the polly
>optimizations and the polly code generation. However, to get a more
>realistic picture, you probably want to run Polly on the LLVM
>test-suite. I know of some in-house nightly testers that track Polly
>performance on the LLVM test suite. Your work would be yet another
>reason to get a public version of such a tester. I will see if I can get
>hardware to do so. Meanwhile, you can run it on your own machine.

That is OK. I will evaluate Polly using both the LLVM test-suite and Polybench in the following experiments.

>> The preliminary experimental results are as follows (benchmarks are collected from Polybench):
>>
>> |               | Clang (seconds) | Polly-basic (seconds) | Polly-optimize (seconds) | Polly-load overhead | Polly-optimize overhead |
>> | 2mm.c         | 0.786 | 0.802 | 1.944 | 2.0% | 147.3% |
>> | correlation.c | 0.782 | 0.792 | 2.005 | 1.3% | 156.4% |
>> | gesummv.c     | 0.583 | 0.603 | 1.08  | 3.4% | 85.2%  |
>> | ludcmp.c      | 0.787 | 0.806 | 2.475 | 2.4% | 214.5% |
>> | 3mm.c         | 0.786 | 0.811 | 2.617 | 3.2% | 233.0% |
>> | covariance.c  | 0.73  | 0.74  | 2.294 | 1.4% | 214.2% |
>> | gramschmidt.c | 0.63  | 0.643 | 1.134 | 2.1% | 80.0%  |
>> | seidel.c      | 0.632 | 0.645 | 2.036 | 2.1% | 222.2% |
>> | adi.c         | 0.8   | 0.811 | 3.044 | 1.4% | 280.5% |
>> | doitgen.c     | 0.742 | 0.752 | 2.32  | 1.3% | 212.7% |
>> | instrument.c  | 0.445 | 0.45  | 0.495 | 1.1% | 11.2%  |
>
>It is interesting to see that the only file that does not contain a
>kernel optimized by Polly, but just some auxiliary functions, has
>a very low compile time overhead. This may imply that most of the
>compile time overhead is due to optimizations Polly can perform.
>However, we should only draw conclusions from this after we have verified
>those numbers are from a Release build.

Yes, I will report the new data from the Release build in the next few days and then we can discuss the results.

>> Experimental results show that Polly analysis and optimization can lead to 142% extra compile-time overhead, which may be unacceptable when building large software. As a result, reducing the compile time of Polly analysis and optimization is an urgent task.
>>
>> My plan for this proposal is to reduce Polly's compile-time overhead step by step:
>> 1) Investigate the details of Polly; find out how much time is spent on analysis and how much on optimization. Of course, it is very important to distinguish the overhead on code where Polly is applicable from code where it is not.
>> 2) Profile Polly to find the hot code and the sources of compile-time overhead; based on the profiling, I will try to rewrite the hot code to improve the compilation process.
>> 3) Remove expensive analysis passes. For example, the scop detection currently requires both the post-dominance analysis
>> and the dominance frontier analysis. Not requiring these up front (at all) would be great.
>> 4) Reduce the canonicalization passes scheduled before Polly. Before running Polly, we currently schedule a set of passes to canonicalize the LLVM-IR on which the scop detection runs. If I can reduce the number of preparation passes, the compile-time overhead will shrink.
>> 5) Find other ways to reduce compile-time overhead.
>
>Very good points.
>
>I believe the general direction is great. As a next step I propose to
>continue your profiling work to get a better idea of the performance
>characteristics of Polly and to understand where compile time is spent.
>I have some ideas myself, but I would prefer that you do this analysis
>independently of my suggestions to double check my own findings. I invite
>you to regularly report your findings. I should be able both to help you
>ensure that the numbers you get allow us to draw correct conclusions
>and to point you to source code changes that you could perform to
>reduce the compile time.
>
>Thanks again for this interesting proposal.
>
>All the best,
>Tobias

Thank you so much for your kind help. I will try my best to contribute to LLVM and Polly, and I hope this project will be useful for Polly development.

Best regards,
Star Tan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.llvm.org/pipermail/llvm-dev/attachments/20130318/53b12f2d/attachment.html>
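The measurement methodology described in this message (three clang configurations, overhead expressed relative to plain -O3) can be sketched as a small Python harness. Only the clang command lines come from the thread; the helper names, the run count, and the use of the median are my own assumptions:

```python
import statistics
import subprocess
import time

# The three configurations from the mail; the flags are quoted from the
# thread, while the harness around them is a sketch.
CONFIGS = {
    "Clang-O3": ["clang", "-O3"],
    "Polly-basic": ["clang", "-Xclang", "-load", "-Xclang", "LLVMPolly.so", "-O3"],
    "Polly-optimize": ["clang", "-Xclang", "-load", "-Xclang", "LLVMPolly.so",
                       "-mllvm", "-polly", "-O3"],
}

def measure(cmd, runs=5):
    """Median wall-clock time of running `cmd` several times."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

def overhead_pct(baseline, variant):
    """Extra compile time of `variant` relative to `baseline`, in percent."""
    return (variant - baseline) / baseline * 100
```

As a sanity check against the table above: for 2mm.c, overhead_pct(0.786, 1.944) gives about 147.3%, matching the reported Polly-optimize overhead, and overhead_pct(0.786, 0.802) gives about 2.0% for Polly-basic.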
Star Tan
2013-Mar-19 16:02 UTC
[LLVMdev] [Polly]GSoC Proposal: Reducing LLVM-Polly Compiling overhead
Dear Tobias Grosser,

Today I have rebuilt LLVM-Polly in Release mode. The configuration of my testing machine is: Intel Pentium Dual CPU T2390 (1.86GHz) with 2GB DDR2 memory.
I evaluated Polly using PolyBench and Mediabench. It takes too long to evaluate the whole LLVM test-suite, so I chose just the Mediabench part of it.

The preliminary results on Polly's compile-time overhead are as follows:

Table 1: Compile-time overhead of Polly for PolyBench.

|                   | Clang (seconds) | Polly-load (seconds) | Polly-optimize (seconds) | Polly-load penalty | Polly-optimize penalty |
| 2mm.c             | 0.155  | 0.158  | 0.75   | 1.9%  | 383.9%  |
| correlation.c     | 0.132  | 0.133  | 0.319  | 0.8%  | 141.7%  |
| gesummv.c         | 0.152  | 0.157  | 0.794  | 3.3%  | 422.4%  |
| ludcmp.c          | 0.157  | 0.159  | 0.391  | 1.3%  | 149.0%  |
| 3mm.c             | 0.103  | 0.109  | 0.122  | 5.8%  | 18.4%   |
| covariance.c      | 0.16   | 0.163  | 1.346  | 1.9%  | 741.3%  |
| gramschmidt.c     | 0.159  | 0.167  | 1.023  | 5.0%  | 543.4%  |
| seidel.c          | 0.125  | 0.13   | 0.285  | 4.0%  | 128.0%  |
| adi.c             | 0.155  | 0.156  | 0.953  | 0.6%  | 514.8%  |
| doitgen.c         | 0.124  | 0.128  | 0.298  | 3.2%  | 140.3%  |
| instrument.c      | 0.149  | 0.151  | 0.837  | 1.3%  | 461.7%  |
| atax.c            | 0.135  | 0.136  | 0.917  | 0.7%  | 579.3%  |
| gemm.c            | 0.161  | 0.162  | 1.839  | 0.6%  | 1042.2% |
| jacobi-2d-imper.c | 0.16   | 0.161  | 0.649  | 0.6%  | 305.6%  |
| bicg.c            | 0.149  | 0.152  | 0.444  | 2.0%  | 198.0%  |
| gemver.c          | 0.135  | 0.136  | 0.416  | 0.7%  | 208.1%  |
| lu.c              | 0.143  | 0.148  | 0.398  | 3.5%  | 178.3%  |
| Average           |        |        |        | 2.20% | 362.15% |

Table 2: Compile-time overhead of Polly for Mediabench (selected from the LLVM test-suite).

|         | Clang (seconds) | Polly-load (seconds) | Polly-optimize (seconds) | Polly-load penalty | Polly-optimize penalty |
| adpcm   | 0.18   | 0.187  | 0.218  | 3.9%  | 21.1%  |
| g721    | 0.538  | 0.538  | 0.803  | 0.0%  | 49.3%  |
| gsm     | 2.869  | 2.936  | 4.789  | 2.3%  | 66.9%  |
| mpeg2   | 3.026  | 3.072  | 4.662  | 1.5%  | 54.1%  |
| jpeg    | 13.083 | 13.248 | 22.488 | 1.3%  | 71.9%  |
| Average |        |        |        | 1.80% | 52.65% |

As shown in these two tables, Polly significantly increases the compile time when it actually transforms the code: on average, it increases the compile time of Polybench by 4.5X. Even for Mediabench, where Polly does not improve the efficiency of the generated code, it still increases the compile time by 1.5X.
Based on this observation, I think we should not only reduce the Polly analysis and optimization time, but also make Polly bail out early when it cannot improve the efficiency of the generated code. That is very important if Polly is to be enabled by default for LLVM users.

Thanks again for Tobi's kind help.

Best Regards,
Star Tan

At 2013-03-18 23:31:29, "Star Tan" <tanmx_star at yeah.net> wrote:
> [full quote of the previous message snipped]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.llvm.org/pipermail/llvm-dev/attachments/20130320/a522a8bb/attachment.html>
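The averages in Table 2 can be reproduced from the raw times. The data below is taken from the table; the script itself is just a sketch to show how the penalty columns are derived:

```python
# Mediabench times from Table 2: (Clang, Polly-load, Polly-optimize) seconds.
TIMES = {
    "adpcm": (0.18, 0.187, 0.218),
    "g721":  (0.538, 0.538, 0.803),
    "gsm":   (2.869, 2.936, 4.789),
    "mpeg2": (3.026, 3.072, 4.662),
    "jpeg":  (13.083, 13.248, 22.488),
}

def penalty(base, variant):
    """Percent increase of `variant` over `base`."""
    return (variant - base) / base * 100

load_avg = sum(penalty(c, l) for c, l, _ in TIMES.values()) / len(TIMES)
opt_avg = sum(penalty(c, o) for c, _, o in TIMES.values()) / len(TIMES)

print(f"average Polly-load penalty:     {load_avg:.2f}%")
print(f"average Polly-optimize penalty: {opt_avg:.2f}%")
```

Computed this way, the averages come out to 1.80% and 52.65%, matching the Table 2 summary row.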
Tobias Grosser
2013-Mar-20 13:06 UTC
[LLVMdev] [Polly]GSoC Proposal: Reducing LLVM-Polly Compiling overhead
On 03/19/2013 11:02 AM, Star Tan wrote:
> Dear Tobias Grosser,
>
> Today I have rebuilt LLVM-Polly in Release mode. The configuration of my testing machine is: Intel Pentium Dual CPU T2390 (1.86GHz) with 2GB DDR2 memory.
> I evaluated Polly using PolyBench and Mediabench. It takes too long to evaluate the whole LLVM test-suite, so I chose just the Mediabench part of it.

OK. This is a good baseline.

> The preliminary results on Polly's compile-time overhead are as follows:
>
> Table 1: Compile-time overhead of Polly for PolyBench.
>
> |               | Clang (seconds) | Polly-load (seconds) | Polly-optimize (seconds) | Polly-load penalty | Polly-optimize penalty |
> | 2mm.c         | 0.155 | 0.158 | 0.75  | 1.9% | 383.9% |
> | correlation.c | 0.132 | 0.133 | 0.319 | 0.8% | 141.7% |
> | gesummv.c     | 0.152 | 0.157 | 0.794 | 3.3% | 422.4% |
> | ludcmp.c      | 0.157 | 0.159 | 0.391 | 1.3% | 149.0% |
> | 3mm.c         | 0.103 | 0.109 | 0.122 | 5.8% | 18.4%  |
> | covariance.c  | 0.16  | 0.163 | 1.346 | 1.9% | 741.3% |

This is a very large slowdown. On my system I get

0.06 sec for Polly-load
0.09 sec for Polly-optimize

What exact version of Polybench did you use? What compiler flags did you use to compile the benchmark?

Also, did you run the executables several times? How large is the standard deviation of the results? (You can use a tool like ministat to calculate these values [1].)

> | gramschmidt.c | 0.159 | 0.167 | 1.023 | 5.0% | 543.4% |
> | seidel.c      | 0.125 | 0.13  | 0.285 | 4.0% | 128.0% |
> | adi.c         | 0.155 | 0.156 | 0.953 | 0.6% | 514.8% |
> | doitgen.c     | 0.124 | 0.128 | 0.298 | 3.2% | 140.3% |
> | instrument.c  | 0.149 | 0.151 | 0.837 | 1.3% | 461.7% |

This number is surprising. In your last numbers you reported Polly-optimize as taking 0.495 sec in debug mode. The time you now report for release mode is almost twice as much. Can you verify this number please?

> | atax.c        | 0.135 | 0.136 | 0.917 | 0.7% | 579.3%  |
> | gemm.c        | 0.161 | 0.162 | 1.839 | 0.6% | 1042.2% |

This number also looks fishy. In debug mode you reported 1.327 seconds for Polly-optimize. This is again faster than in release mode.

> | jacobi-2d-imper.c | 0.16  | 0.161 | 0.649 | 0.6%  | 305.6%  |
> | bicg.c            | 0.149 | 0.152 | 0.444 | 2.0%  | 198.0%  |
> | gemver.c          | 0.135 | 0.136 | 0.416 | 0.7%  | 208.1%  |
> | lu.c              | 0.143 | 0.148 | 0.398 | 3.5%  | 178.3%  |
> | Average           |       |       |       | 2.20% | 362.15% |

Otherwise, those numbers look like a good start. Maybe you can put them on some website/wiki/document where you can extend them as you proceed with benchmarking.

> Table 2: Compile-time overhead of Polly for Mediabench (selected from the LLVM test-suite).
>
> |       | Clang (seconds) | Polly-load (seconds) | Polly-optimize (seconds) | Polly-load penalty | Polly-optimize penalty |
> | adpcm | 0.18   | 0.187  | 0.218  | 3.9% | 21.1% |
> | g721  | 0.538  | 0.538  | 0.803  | 0.0% | 49.3% |
> | gsm   | 2.869  | 2.936  | 4.789  | 2.3% | 66.9% |
> | mpeg2 | 3.026  | 3.072  | 4.662  | 1.5% | 54.1% |
> | jpeg  | 13.083 | 13.248 | 22.488 | 1.3% | 71.9% |
> | Average |      |        |        | 1.80% | 52.65% |

I ran jpeg myself to verify these numbers on my machine. I got:

A: -O3
B: -O3 -load LLVMPolly.so
C: -O3 -load LLVMPolly.so -mllvm -polly
D: -O3 -load LLVMPolly.so -mllvm -polly -mllvm -polly-optimizer=none
E: -O3 -load LLVMPolly.so -mllvm -polly -mllvm -polly-optimizer=none -mllvm -polly-code-generator=none

         A     B     C     D     E
| jpeg | 5.1 | 5.2 | 8.0 | 7.9 | 5.5 |

The overhead between A and C is similar to the one you report. Hence, the numbers seem to be correct. I also added two more runs, D and E, to figure out where the slowdown comes from. As you can see, most of the slowdown disappears when we do not run code generation. This either means that the Polly code generation itself is slow, or that the LLVM passes afterwards need more time due to the code we generated (it contains many opportunities for scalar simplifications). It would be interesting to see if this holds for the other benchmarks, and to investigate the actual reasons for the slowdown.

It is also interesting to see that just running Polly, but without applying optimizations, does not slow down the compilation a lot. Does this also hold for other benchmarks?

> As shown in these two tables, Polly significantly increases the compile time when it actually transforms the code: on average, it increases the compile time of Polybench by 4.5X. Even for Mediabench, where Polly does not improve the efficiency of the generated code, it still increases the compile time by 1.5X.
> Based on this observation, I think we should not only reduce the Polly analysis and optimization time, but also make Polly bail out early when it cannot improve the efficiency of the generated code. That is very important if Polly is to be enabled by default for LLVM users.

Bailing out early is definitely something we can think about. To get started here, you could e.g. look into the jpeg benchmark and investigate on which files Polly is spending a lot of time, where exactly the time is spent, and what kind of SCoPs Polly is optimizing. In case we do not expect any benefit, we may skip code generation entirely.

Thanks again for your interesting analysis.

Cheers,
Tobi

[1] https://github.com/codahale/ministat
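The A-E breakdown above attributes most of the jpeg slowdown to code generation. With the times from the mail, that attribution can be made explicit; the numbers are Tobias's, while the decomposition into parts is my own reading of them:

```python
# Wall-clock compile times for jpeg (seconds), from the A-E runs in the mail.
A = 5.1  # -O3
B = 5.2  # + load LLVMPolly.so
C = 8.0  # + -polly (full Polly)
D = 7.9  # + -polly-optimizer=none
E = 5.5  # + -polly-code-generator=none as well

total_overhead = C - A   # full Polly overhead over plain -O3
codegen_part = D - E     # overhead that vanishes once codegen is disabled
analysis_part = E - B    # scop detection/analysis alone, on top of loading

print(f"total Polly overhead:   {total_overhead:.1f}s")
print(f"codegen (and cleanup):  {codegen_part:.1f}s")
print(f"analysis without codegen: {analysis_part:.1f}s")
```

Roughly 2.4s of the 2.9s total overhead disappears once code generation is off, while analysis alone costs only about 0.3s, which supports both observations in the mail: code generation (or the scalar cleanup it triggers) dominates, and Polly without optimization is comparatively cheap.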