Hi Sudakshina,

Glad it helped :)

> I did not find any 'print-changed' option for llvm.

Hmm, I can't reproduce it myself right now either. Anyway, let's go with
what works for sure: `-print-after-all`. This prints the IR after every
(middle-end) pass, no matter whether the pass made any changes or not.

Alright, now to use that: this is _not_ an option of Clang (or the Clang
driver; i.e., the command `clang test.c -print-after-all` won't work), but
an option of opt. opt, in case you're not familiar with it, is basically
the middle-end optimizer of LLVM, i.e., it's supposed to be doing
target-independent optimizations, which, from what I understand, is what
you're interested in. If you also want to print back-end passes (register
allocation etc.), that's another story.

Anyway, when you type e.g. `clang test.c -o test`, 3 high-level steps happen:
1) Clang parses, type-checks etc. the C source and generates (very trivial)
LLVM IR
2) This is then passed to opt, which runs a bunch of passes (again,
depending on whether you used -O1, -O2 etc.). Each pass takes IR and
outputs IR
3) When opt is done, it passes the result to the back-end, which is another
story and uses another IR.

Now, what you want is to get the IR after the first step, so that you can
pass it _yourself_ to opt, with any options _you_ want (one of them being
`-print-after-all`). To do that, you type e.g.:
clang test.c -o test.ll -S -emit-llvm

- -emit-llvm tells Clang to stop at step 1) and output the IR in a text
file (note that we used no -O1, -O2 options because we want the fully
unoptimized, trivial IR at this step)
- -S tells Clang to print in textual format (let's not go now into what
the other format is)

If your file can't be linked into an executable (e.g., it doesn't have a
main()), you should add a -c there.

Alright, now we have our IR, but there's a problem: our functions have the
`optnone` attribute, which tells the optimizer not to touch them (that's
because we used no optimization options).
We don't want that, so we add another option to clang: `-Xclang
-disable-O0-optnone`.

So, all in all, it looks something like this:
clang test.c -o test.ll -c -emit-llvm -S -Xclang -disable-O0-optnone

Now we have the LLVM IR in test.ll, and we can pass it to opt. You can now
say:
opt test.ll -O3 -print-after-all

Let me show you the steps in Godbolt:
- Generate (unoptimized) IR from Clang: https://godbolt.org/z/rY3Thx (note
that I also added -g0 to avoid printing debug info, which is probably not
helpful here)
- Copy this exact IR and pass it to opt: https://godbolt.org/z/7cjdcf (you
can see in the right window that SROA changed the code a lot)

As you can imagine, you can automate all of that with a script.

Final comment: after step 1), it's useful to pass your IR through opt with
the option -metarenamer or -instnamer. This message has already become way
too big, so let me not explain now why that's useful, but trust me, it is.

Best,
Stefanos

On Tue, Jan 26, 2021 at 4:18 AM, Sudakshina Dutta
<sudakshina at iitgoa.ac.in> wrote:

> Dear Stefanos,
>
> Thank you for your reply. It helped me to understand the optimization
> phase of LLVM. However, I did not find any 'print-changed' option for
> llvm. Can you kindly help me in this regard ? I want to generate the IRs
> after each optimization pass.
>
> Regards,
> Sudakshina
>
> On Sun, Jan 24, 2021 at 7:13 PM Stefanos Baziotis
> <stefanos.baziotis at gmail.com> wrote:
>
>> Hi Sudakshina,
>>
>> > The optimization applied in the optimization pass depends on the
>> > source program; hence, the number of optimizations applied differs
>> > from source program to source program.
>>
>> "applied" is still ambiguous, at least to me. If by "applied" you mean
>> "attempted", then no, that does not depend on the source program. It
>> depends on the optimization level (e.g., O1, O2, ...) or the individual
>> passes that you may request yourself.
>> That is, for -O1 for example, there is a predetermined sequence of
>> passes that _attempt_ to optimize the program, and you can see that with
>> the options I mentioned above (e.g., `-mllvm -opt-bisect-limit=-1`).
>>
>> If by "applied" you mean "actually changed the code", then yes, this
>> differs from program to program. You can see that with `print-changed`;
>> it'll show you the IR after every transformation that changed your
>> program.
>>
>> Finally, if you want to see why a transformation could or could not
>> change the code, you can use the related comments above about remarks.
>>
>> Best,
>> Stefanos
>>
>> On Sun, Jan 24, 2021 at 7:24 AM, Sudakshina Dutta
>> <sudakshina at iitgoa.ac.in> wrote:
>>
>>> Dear all,
>>>
>>> In the optimization phase, the compiler applies some optimizations to
>>> generate an optimized program. The optimizations applied in the
>>> optimization pass depend on the source program; hence, the number of
>>> optimizations applied differs from source program to source program. By
>>> mentioning "applied" transformations, I wanted to know what all
>>> transformations are applied to a specific input program when subjected
>>> to the LLVM optimizer.
>>>
>>> Thanks,
>>> Sudakshina
>>>
>>> On Sun, 24 Jan 2021, 09:27 Stefanos Baziotis
>>> <stefanos.baziotis at gmail.com> wrote:
>>>
>>>> Hi Sudakshina,
>>>>
>>>> Not really sure what you mean by "applied", so let me offer some more
>>>> ideas other than Brian's and Adrian's great suggestions. First, there
>>>> are some diagnostics/remarks flags in Clang, like the -R family [1] or
>>>> some -f flags about printing optimization reports [2]. They can be
>>>> useful or useless depending on your case. They can also be parsed
>>>> relatively easily.
>>>>
>>>> If you just want to see a list of the passes that were attempted on
>>>> your code, you can do it with `-mllvm -opt-bisect-limit=-1`.
>>>> You can also use `-mllvm -debug-pass=Arguments` to see the arguments
>>>> that were passed.
>>>>
>>>> Moving on to opt, you can use something like `-print-after-all`, which
>>>> was already mentioned. In case you don't know what these flags do:
>>>> they show you the IR at different stages of the pipeline (e.g.,
>>>> `-print-after-all` shows you each pass attempted and what the IR looks
>>>> like after it).
>>>>
>>>> Hope it helps,
>>>> Stefanos
>>>>
>>>> [1]
>>>> https://clang.llvm.org/docs/ClangCommandLineReference.html#diagnostic-flags
>>>> [2]
>>>> https://clang.llvm.org/docs/UsersManual.html#cmdoption-f-no-save-optimization-record
>>>>
>>>> On Sun, Jan 24, 2021 at 5:47 AM, Adrian Vogelsgesang via llvm-dev
>>>> <llvm-dev at lists.llvm.org> wrote:
>>>>
>>>>> I used "-print-changed", "-print-before-all", "-print-after-all" last
>>>>> time I wanted to see the passes together with their input/output IR
>>>>> modules.
>>>>>
>>>>> In my case, I used them through "clang++", i.e., I had to prefix them
>>>>> with "-mllvm":
>>>>> > clang++ test_file.cpp -mllvm -print-after-all
>>>>>
>>>>> *From: *llvm-dev <llvm-dev-bounces at lists.llvm.org> on behalf of
>>>>> Brian Cain via llvm-dev <llvm-dev at lists.llvm.org>
>>>>> *Date: *Sunday, 24. January 2021 at 04:40
>>>>> *To: *Sudakshina Dutta <sudakshina at iitgoa.ac.in>
>>>>> *Cc: *LLVM Development List <llvm-dev at lists.llvm.org>
>>>>> *Subject: *Re: [llvm-dev] LLVM log file
>>>>>
>>>>> I don't know if it's exhaustive, but there's the "remarks" feature:
>>>>>
>>>>> https://llvm.org/docs/Remarks.html#introduction-to-the-llvm-remark-diagnostics
>>>>>
>>>>> On Sat, Jan 23, 2021 at 9:20 PM Sudakshina Dutta via llvm-dev
>>>>> <llvm-dev at lists.llvm.org> wrote:
>>>>>
>>>>> Dear all,
>>>>>
>>>>> Good morning. I want to know whether LLVM creates any log file
>>>>> consisting of the optimizations applied in the optimization phase. It
>>>>> would be really useful for researchers who work on compilers, formal
>>>>> methods, etc.
>>>>>
>>>>> Thanks,
>>>>> Sudakshina
>>>>>
>>>>> _______________________________________________
>>>>> LLVM Developers mailing list
>>>>> llvm-dev at lists.llvm.org
>>>>> https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
>>>>>
>>>>> --
>>>>> -Brian
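The two-step workflow described in the message above can be collected into one script. A sketch, under some assumptions: the file names are placeholders, clang and opt from the same LLVM installation are on PATH, and it uses the newer `-passes=` spelling that recent opt versions expect in place of the bare `-instnamer`/`-O3` flags from the 2021-era commands.

```shell
# A toy input file, just so the commands below are self-contained.
cat > test.c <<'EOF'
int square(int x) { return x * x; }
EOF

# Step 1: fully unoptimized textual IR, without the `optnone` attribute
# and without debug info.
clang test.c -S -emit-llvm -g0 -Xclang -disable-O0-optnone -o test.ll

# Optional: give anonymous values readable names before reading the IR
# (the -instnamer tip from the message, in -passes= form).
opt test.ll -S -passes=instnamer -o test.named.ll

# Step 2: run the O3 middle-end pipeline, dumping the IR after every
# pass; the dumps go to stderr, so capture them in a log file.
opt test.named.ll -S -passes='default<O3>' -print-after-all \
    -o test.opt.ll 2> passes.log
```

Each dump in passes.log is introduced by an "IR Dump After <pass>" banner, so the log splits cleanly per pass if you want to post-process it.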
On Mon, Jan 25, 2021 at 6:53 PM Stefanos Baziotis via llvm-dev
<llvm-dev at lists.llvm.org> wrote:

> Alright, now to use that: This is _not_ an option of Clang (or the Clang
> driver; i.e., the command: clang test.c -print-after-all won't work), but
> an option of opt.

These debug options are available from clang when prefixed with `-mllvm`
(so `-mllvm --print-after-all` here).

--
Mehdi
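Mehdi's point in concrete form (the file name is a placeholder; assumes clang is on PATH): any debug flag registered by the LLVM libraries can be reached through the clang driver by prefixing it with -mllvm, so the pass dumps come straight out of a normal compile.

```shell
# Placeholder input.
cat > test.c <<'EOF'
int add(int a, int b) { return a + b; }
EOF

# The same -print-after-all that opt understands, reached through the
# clang driver via -mllvm. The dumps are written to stderr.
clang -O2 -c test.c -o test.o -mllvm -print-after-all 2> passes.log

# Each dump is introduced by an "IR Dump After <pass>" banner.
grep -c 'IR Dump After' passes.log
```

Note that, unlike running opt on hand-fed IR, this route also dumps whatever the back-end runs after the middle-end pipeline.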
On 26/01/2021 02:53, Stefanos Baziotis via llvm-dev wrote:

> Alright, now to use that: This is _not_ an option of Clang (or the Clang
> driver; i.e., the command: clang test.c -print-after-all won't work),
> but an option of opt. opt, in case you're not familiar with it, is
> basically the middle-end optimizer of LLVM

I think this is sufficiently close to being true that it ends up being
very misleading. I've seen a lot of posts on the mailing lists from people
who have a mental model of LLVM like this.

The opt tool is a thin wrapper around the LLVM pass pipeline
infrastructure. Most of the command-line flags for opt are not specific to
opt; they are exposed by the LLVM libraries. opt passes all of its
arguments on to LLVM, while clang passes on only the ones prefixed with
-mllvm, but they are both handled by the same logic.

opt has some default pipelines with names such as -O1 and -O3, but these
are *not* the same as the pipelines of the same names in clang (or other
compilers that use LLVM). This is a common source of confusion for people
wondering why clang and opt give different output at -O2 (for example).

The opt tool is primarily intended for unit testing. It is a convenient
way of running a single pass or a sequence of passes (which is also useful
for producing reduced test cases when a long pass pipeline generates a
miscompile). Almost none of the logic, including most of the command-line
handling, actually lives in opt.

David
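David's "unit testing" description is the typical use: feed opt a snippet of IR and exactly one pass, with no default pipeline implied. A sketch under stated assumptions: the function is made up, the IR uses opaque pointers (so it needs a reasonably recent LLVM), and opt is on PATH.

```shell
# A tiny function where the store/load round-trip through an alloca is
# redundant.
cat > input.ll <<'EOF'
define i32 @f(i32 %x) {
  %p = alloca i32
  store i32 %x, ptr %p
  %v = load i32, ptr %p
  ret i32 %v
}
EOF

# Run exactly one pass (new pass manager syntax). Nothing else runs,
# which is what makes opt handy for unit tests and for reducing a long
# miscompiling pipeline down to a single transformation.
opt -S -passes=sroa input.ll -o output.ll
```

After SROA, the alloca, store, and load should all be gone and @f should simply return %x, which is exactly the kind of before/after diff LLVM's own regression tests check with FileCheck.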