Hello, folks!

I need help with the LSR (Loop Strength Reduction) optimization. I've spent quite a while trying to understand what's going on here, but I still don't understand some things.

*First*, what is Scale in a Formula? Consider *for (i = 0; i < n; i++)*. The initial formula for the ICMP use will be:

  (1) reg({%n,+,-1}<nw><%for.body>)

Next we build other forms:

  (2) reg(%n) + 1*reg({0,+,-1})
  (3) reg({(-1 * %n),+,1})
  (4) reg((-1 * %n)) + 1*reg({0,+,1})
  (5) reg(%n) + -1*reg({0,+,1})
  (6) reg((-1 * %n)) + -1*reg({0,+,-1})

I'm wondering about (4) and (6). What is a negative scale? It affects the formula's cost, so, for example, (4) loses to (5).

*Second*, we change the ICMP to compare against zero, so we get *for (i = n; i > 0; i--)* and (1) as the initial formula. But we are not considering

  (7) reg((-1 * %n)) + reg({0,+,1})

which corresponds to our original *for (i = 0; i < n; i++)*. Again, I don't know what the difference is between this and (4) and (6), but it will beat (5) at least.

Now, I also want (7) to beat (1) in cases like this:

  void foo(int n, int red[], int green[], int blue[], int alpha[],
           int rdest[], int gdest[], int bdest[], int adest[]) {
    for (int i = 0; i < n; i++) {
      red[i]   = rdest[i];
      green[i] = gdest[i];
      blue[i]  = bdest[i];
      alpha[i] = adest[i];
    }
  }

Just compile it with GCC and Clang and see how bad things are (the .text section is 64 bytes for GCC and 184 bytes for Clang). I'm talking about x86; I'm not sure about other architectures. The problem is that we shouldn't change the loop direction here, i.e. apply (1). If you replace *n* in the example above with a constant, everything is fine, because *-1024 + reg({0,+,1})* wins over *reg({1024,+,-1})*. But even with (7) available, it will always lose to (1), because (1) uses fewer registers.

So, any ideas on this?

P.S. I was thinking about a hack where, under certain conditions, we just explicitly lower a formula's cost (NumRegs -= 1), but this is probably not the best way.

Thanks,
Anton
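
To make the two loop directions concrete, here is a minimal source-level sketch in C of the two shapes discussed above: the up-counting loop that formula (7) corresponds to, and the down-counting, compare-against-zero loop that the initial formula (1) corresponds to. Function and variable names are purely illustrative; this is just a source-level picture, not LSR's actual output.

  /* Minimal sketch; names are illustrative. */

  void copy_up(int n, int dst[], const int src[]) {
    /* Induction variable {0,+,1}, exit test i < n -- the up-counting
       shape that formula (7) corresponds to. */
    for (int i = 0; i < n; i++)
      dst[i] = src[i];
  }

  void copy_down(int n, int dst[], const int src[]) {
    /* Induction variable {n,+,-1}, exit test i > 0 -- the shape the
       initial formula (1) corresponds to after the compare-against-zero
       rewrite. */
    for (int i = n; i > 0; i--)
      dst[i - 1] = src[i - 1];
  }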
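
And for the constant-trip-count case mentioned in the message, here is a sketch of the foo example with n replaced by a constant (1024, matching the constant in the quoted formulas; the function name is illustrative):

  /* foo with a constant trip count.  According to the message, the
     up-counting shape is kept in this case, because -1024 + reg({0,+,1})
     wins over reg({1024,+,-1}). */
  void foo_const(int red[], int green[], int blue[], int alpha[],
                 int rdest[], int gdest[], int bdest[], int adest[]) {
    for (int i = 0; i < 1024; i++) {
      red[i]   = rdest[i];
      green[i] = gdest[i];
      blue[i]  = bdest[i];
      alpha[i] = adest[i];
    }
  }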