Hi Sudakshina,
> the loop to be optimized has to be enclosed between #pragma scop and
> #pragma endscop
No, it doesn't :) Let me explain. There is a class of loop optimizations,
called polyhedral optimization, that is based on a mathematical framework.
Why do I mention this? Because polyhedral optimization uses the term SCoP
(Static Control Part) for the regions of code where polyhedral optimization
shines.
The fact that you mentioned this pragma suggests that you're probably trying
to do polyhedral optimization.
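Just for context (this is only a sketch, not something LLVM itself needs): in
source-to-source polyhedral tools that do use these pragmas (Pluto-style
tools, if I remember correctly), a marked region typically looks like this,
where N, A, B, and C are just placeholders:

    /* sketch: the tool extracts and transforms the loop nest between
       the pragmas; N, A, B, C are placeholders */
    #pragma scop
    for (int i = 0; i < N; i++)
      for (int j = 0; j < N; j++)
        C[i][j] = A[i][j] + B[i][j];
    #pragma endscop
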
Now, LLVM has Polly, an infrastructure based on polyhedral optimization.
Unfortunately, I'm not familiar with Polly, so if you really want to use it,
I could CC people who are more familiar with it.
_However_, you don't _have_ to use Polly to do loop optimizations. In fact,
Polly is not even enabled by default in LLVM (i.e., when you compile with
-O3, Polly doesn't run).
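(If I remember correctly from the Polly docs, and assuming your clang was
built with Polly, you can enable it explicitly along these lines; test.c is
just a placeholder:

    # rough sketch: run clang's -O3 pipeline with Polly enabled
    clang -O3 -mllvm -polly -c test.c

But as I said, I'm not the right person to ask about Polly.)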
Classic loop optimizations implemented in LLVM, like loop unrolling,
loop-invariant code motion, and so on, are not based on polyhedral
optimization.
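If you just want to experiment with those, you can run them individually
through opt. A rough sketch, assuming test.ll is an LLVM IR file you already
have (e.g. from clang -O1 -S -emit-llvm test.c):

    # run loop rotation, LICM and loop unrolling on each function,
    # using the new pass manager syntax
    opt -passes='function(loop-rotate,licm,loop-unroll)' -S test.ll -o test.opt.ll
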
So, it depends on what you're trying to do. Also, which log are you looking
at? A couple of ways to obtain logs have been mentioned in this thread; it
would be good to know which one you're using.
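One thing that may already answer your question about which loop got
optimized (just a sketch, in case this is the kind of log you mean; test.c
is a placeholder): clang's optimization remarks come with source locations,
so they tell you which loop a transformation applied to, e.g.:

    # print remarks from the unroller; -gline-tables-only helps clang
    # attach accurate file:line:column info to each remark
    clang -O2 -gline-tables-only -Rpass=loop-unroll -c test.c

You can change the -Rpass regex to see remarks from other passes (e.g.
-Rpass=loop-vectorize), or use -Rpass-missed= to see what was *not* done.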
Best,
Stefanos
On Fri, Jan 29, 2021 at 2:18 PM, Sudakshina Dutta <
sudakshina at iitgoa.ac.in> wrote:
> Dear Stefanos,
>
> Thanks for all your replies. I have one simple question to ask. I am new to
> llvm. I know that the loop to be optimized has to be enclosed between
> #pragma scop and #pragma endscop. I have seen the llvm log. It mentions the
> attempted optimizations. My question is: does the log specify the loop
> which is optimized? More precisely, if there are multiple loops, does the
> log specify which loop has been optimized? I request you to kindly answer
> the above question or give me pointers to the answers.
>
> Thanks.
> Sudakshina
>
> On Tue, Jan 26, 2021 at 8:10 PM Stefanos Baziotis via llvm-dev <
> llvm-dev at lists.llvm.org> wrote:
>
>> Hi David,
>>
>> Sure, I agree in part, but "very misleading" is a strong statement, and I
>> think we might want to consider a different perspective. People present
>> this mental model in conferences [1]:
>> "If you want to optimize, you use opt, which takes LLVM IR and generates
>> LLVM IR". We could say that this is misleading too, as it seems that opt
>> does the optimization, but is it really?
>> This is an "Introduction to LLVM"; going into specifics at this point
>> would probably confuse people a lot more than it would help them.
>>
>> Same here [2]. Chandler was using clang -O2, then he used opt -O2,
>> mentioning it as the "default" pipeline. Wow, that should be super
>> misleading, but again, is it really? Would it help if Chandler stopped
>> and said, "Oh, by the way... let me digress here and explain a thing
>> about libraries and opt, etc."?
>>
>> By the same logic, when I said "target-independent" optimizations, that
>> was misleading too.
>>
>> But my message was already about a page long, and I think spending
>> another page (or more) to explain such things would not help. And I think
>> the same holds for talks at conferences and LLVM posts.
>>
>> Anyway, it's obvious that your message came with good intentions, and I
>> appreciate that. But I think it should be mentioned that, for any
>> beginner trying to understand the beast called LLVM, it's not super
>> important to know this right now, at least IMHO. It is good to mention,
>> though, that it's not the whole truth, so that they can come back to it
>> when they're more comfortable with the "approximation".
>>
>> Best,
>> Stefanos
>>
>> [1] https://youtu.be/J5xExRGaIIY?t=429
>> [2] https://youtu.be/s4wnuiCwTGU?t=433
>>
>> On Tue, Jan 26, 2021 at 11:47 AM, David Chisnall via llvm-dev <
>> llvm-dev at lists.llvm.org> wrote:
>>
>>> On 26/01/2021 02:53, Stefanos Baziotis via llvm-dev wrote:
>>> > Alright, now to use that: This is _not_ an option of Clang (or the
>>> > Clang driver; i.e., the command: clang test.c -print-after-all won't
>>> > work), but an option of opt. opt, in case you're not familiar with it,
>>> > is basically the middle-end optimizer of LLVM
>>>
>>> I think this is sufficiently close to being true that it ends up being
>>> very misleading. I've seen a lot of posts on the mailing lists from
>>> people who have a mental model of LLVM like this.
>>>
>>> The opt tool is a thin wrapper around the LLVM pass pipeline
>>> infrastructure. Most of the command-line flags for opt are not specific
>>> to opt; they are exposed by LLVM libraries. Opt passes all of its
>>> arguments to LLVM, while clang passes only the ones prefixed with
>>> -mllvm, but they are both handled by the same logic.
>>>
>>> Opt has some default pipelines with names such as -O1 and -O3, but these
>>> are *not* the same as the pipelines of the same names in clang (or other
>>> compilers that use LLVM). This is a common source of confusion for
>>> people wondering why clang and opt give different output at -O2 (for
>>> example).
>>>
>>> The opt tool is primarily intended for unit testing. It is a convenient
>>> way of running a single pass or sequence of passes (which is also useful
>>> for producing reduced test cases when a long pass pipeline generates a
>>> miscompile). Almost none of the logic, including most of the
>>> command-line handling, is actually present in opt.
>>>
>>> David
>>>
>