Hi David,

I am the one who was responsible for CFLAA's refactoring over the summer.
I've sent out another email on llvm-dev, and you can find more about my
work in my GSoC final report.

I think it is fantastic that you have done such interesting work. I'll
definitely try to help get the code reviewed and merged into the current
tree. After a quick glance at your patch, it seems that what you are
trying to do there is an optimized version of CFL-Steens, with a custom
way of handling context-sensitivity. I'd be happy if we end up integrating
it into the existing CFL-Steens pass.

Regarding the benchmark numbers, I'm very interested in what kind of test
files you were running the experiments on. Would it be possible to share
them?

> On Wed, Aug 24, 2016 at 2:56 PM, David Callahan <dcallahan at fb.com> wrote:
>
> Hi Greg,
>
> I see there is ongoing work with alias analysis, and it appears the
> prior CFLAA has been abandoned.
>
> I have a variant of it where I reworked how compression was done to be
> less conservative, reworked the interprocedural analysis to do simulated
> but bounded inlining, and added code to do on-demand testing of CFL
> paths on both compressed and full graphs.
>
> I reached a point where the ahead-of-time compression was linear but
> still very accurate compared to on-demand path search, and there were
> noticeable improvements in the alias analysis results and the impacted
> transformations. Happy to share the patch with you if you are
> interested, as well as some data collected.
>
> However, I was not able to see any performance improvements in the code.
> In fact, on various benchmarks there were noticeable regressions in
> measured performance of the generated code. Have you noticed any similar
> problems?
>
> --david

-- 
Best Regards,

--
Jia Chen
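The "on-demand testing of CFL paths" David mentions is a Dyck-CFL
reachability query: two values may alias if some path between their graph
nodes spells a balanced string of matched edge labels. A minimal worklist
sketch of that closure is below; the node and edge representation is an
illustrative assumption, not code from David's patch, and an on-demand
variant would specialize the same rules to a single (src, dst) query
instead of computing all pairs.

  #include <set>
  #include <utility>
  #include <vector>

  // An edge u -> v labeled with an "open" or "close" bracket of some
  // kind k (e.g., address-of/dereference pairs in CFL-AA). Plain
  // assignment edges could be seeded directly as S facts.
  struct Edge { int from, to, kind; };

  // Returns all pairs (u, v) such that some u -> v path spells a
  // balanced bracket string (grammar: S -> (k S )k | S S | epsilon).
  std::set<std::pair<int, int>> dyckReachable(int n,
                                              const std::vector<Edge> &opens,
                                              const std::vector<Edge> &closes) {
    std::set<std::pair<int, int>> S;     // derived facts S(u, v)
    std::vector<std::pair<int, int>> work;

    auto add = [&](int u, int v) {
      if (S.insert({u, v}).second)       // enqueue each new fact once
        work.push_back({u, v});
    };

    for (int u = 0; u < n; ++u)
      add(u, u);                         // S -> epsilon

    while (!work.empty()) {
      auto [w, x] = work.back();
      work.pop_back();

      // S -> (k S )k : matching open(u, w, k) and close(x, v, k) edges
      // wrap the popped fact S(w, x) into a new fact S(u, v).
      for (const Edge &o : opens)
        if (o.to == w)
          for (const Edge &c : closes)
            if (c.from == x && c.kind == o.kind)
              add(o.from, c.to);

      // S -> S S : combine the popped fact with every known fact; pairs
      // involving later facts are combined when those facts are popped.
      std::vector<std::pair<int, int>> snapshot(S.begin(), S.end());
      for (auto &[a, b] : snapshot) {
        if (b == w) add(a, x);           // S(a, w) + S(w, x)
        if (a == x) add(w, b);           // S(w, x) + S(x, b)
      }
    }
    return S;
  }

This is the textbook cubic closure; the compression David describes would
shrink the graph before running it, and the on-demand mode presumably
prunes the closure to facts that can contribute to the queried pair.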
Hi Jia, nice to meet you,

On 8/25/16, 6:22 PM, "Jia Chen" <jchen at cs.utexas.edu> wrote:

> Hi David,
>
> I am the one who was responsible for CFLAA's refactoring over the summer.
> I've sent out another email on llvm-dev, and you can find more about my
> work in my GSoC final report.

Is this report available?

> I think it is fantastic that you have done such interesting work. I'll
> definitely try to help get the code reviewed and merged into the current
> tree. After a quick glance at your patch, it seems that what you are
> trying to do there is an optimized version of CFL-Steens, with a custom
> way of handling context-sensitivity. I'd be happy if we end up
> integrating it into the existing CFL-Steens pass.

The work was more about improving the accuracy of the equivalencing step
than it is about context sensitivity. In fact, it is only context-sensitive
to the extent there is simulated inlining. There is no downward propagation
of facts into called functions.

I wanted to share it in case there were lessons of value. It is not in a
very clean state at the moment, but I can clean it up. Let me know how I
can help.
Sorry, I forgot your last question.

The benchmarks were a rather arbitrarily selected set of files out of
Facebook's codebase, so not really suitable to share.

On 8/25/16, 6:34 PM, "David Callahan" <dcallahan at fb.com> wrote:

>> Regarding the benchmark numbers, I'm very interested in what kind of
>> test files you were running the experiments on. Would it be possible
>> to share them?
On 08/25/2016 08:34 PM, David Callahan wrote:

> Hi Jia, nice to meet you,
>
> On 8/25/16, 6:22 PM, "Jia Chen" <jchen at cs.utexas.edu> wrote:
>
>> Hi David,
>>
>> I am the one who was responsible for CFLAA's refactoring over the summer.
>> I've sent out another email on llvm-dev, and you can find more about my
>> work in my GSoC final report.
>
> Is this report available?

Yes. You can find the PDF in my github repository:
https://github.com/grievejia/GSoC2016

> Sorry, I forgot your last question.
>
> The benchmarks were a rather arbitrarily selected set of files out of
> Facebook's codebase, so not really suitable to share.

Thanks for the info! I asked the question with the intention of better
understanding the results you posted. In my own experience, how the
benchmarks are written sometimes has a noticeable impact on the
effectiveness of cfl-aa. For example, if the code is written in such a
way that a large buffer gets allocated first and then the majority of the
program logic deals with pointers obtained by offsetting into that
buffer, it is unlikely that the current implementation of cfl-aa will
produce any useful results, given its field-insensitive nature.
Identifying common program idioms like this and adapting cfl-aa and its
clients to handle them better is one of the things I'd be interested in
looking into.

-- 
Best Regards,

--
Jia Chen
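The buffer-offsetting idiom Jia describes can be made concrete with a
small, self-contained example (hypothetical code, not taken from the
thread or from any benchmark). A field-insensitive analysis collapses
every pointer derived from the same allocation into one abstract object,
so the two stores below cannot be disambiguated even though they never
overlap:

  #include <cstdlib>

  int main() {
    // One large arena; all "objects" are carved out of it by offsetting.
    char *arena = static_cast<char *>(std::malloc(1 << 20));

    int *counter = reinterpret_cast<int *>(arena);        // bytes [0, 4)
    int *total   = reinterpret_cast<int *>(arena + 4096); // bytes [4096, 4100)

    // A field-insensitive cfl-aa records only "points into arena" for
    // both pointers, so it must conservatively answer MayAlias(counter,
    // total), blocking reordering or vectorization that would be legal.
    *counter = 1;
    *total = 2;

    std::free(arena);
    return 0;
  }

A field-sensitive analysis, or a client taught to reason about constant
offsets the way BasicAA does, could distinguish the two accesses; that gap
is what makes idioms like this a natural target for the adaptation Jia
proposes.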