On 03/11/2011 12:42 PM, Renato Golin wrote:
> On 11 March 2011 14:53, Duncan Sands <baldrick at free.fr> wrote:
>> There's no magic bullet. The things to improve that would give you the
>> most bang for your buck are probably the code generator and
>> auto-vectorization. Increasing the number of developers would be helpful.
>
> I'm not a GCC expert, but their auto-vectorization is not that great.
> It may be simple to do basic loop transformations and some stupid
> vectorization, but having a really good vectoriser is a lot of work.
>
> I personally think that the biggest difference is the number of people
> that have contributed over the years on very specific optimizations.
> There are as many corner cases as there are particles in the universe
> (maybe more), and implementing each one of them requires time and
> willing people. LLVM has the latter but, for now, lacks the former.
>
> Spending a full year on a vectoriser prototype might bring less value
> than the same year optimizing micro-benchmarks against GCC...
>
> Not that I don't think we should have a vectoriser, Polly is going to
> be great!
Hi,
in case you are referring to Polly*, thanks for this nice comment. We
can already do some basic vectorization and are currently working on
increasing coverage and robustness. I have already seen some nice
speedups on some micro-kernels, but I need to gain more confidence
before I present them. I will also talk about Polly at IMPACT/CGO
2011**, in case someone is around.
> But until it's there (and it's going to take some time), we had
> better focus on some magic, as GCC did over the decades.
Yes. Also, for a vectorizer to be effective, a lot of magic and
canonicalization needs to happen beforehand to enable it to do a decent
job. LLVM is actually pretty good in this respect.
Cheers,
Tobi
* Like Polly the parrot
** impact2011.inrialpes.fr