On Jun 28, 2017 10:32 PM, "Peter Lawrence" <peterl95124 at sbcglobal.net> wrote:
Sean,
Many thanks for taking the time to respond. I didn’t make myself clear; I
will try to be brief...
On Jun 28, 2017, at 7:48 PM, Sean Silva <chisophugis at gmail.com> wrote:
On Wed, Jun 28, 2017 at 3:33 PM, Peter Lawrence via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> Chandler,
> Where we disagree is in whether the current project is moving the issue
> forward. It is not. It is making the compiler more complex for no
> additional value.
>
> The current project is not based in evidence. I have asked for any SPEC
> benchmark that shows a performance gain from the compiler taking advantage
> of “undefined behavior”, and no one can show one.
>
One easy way to measure this is to enable -fwrapv/-ftrapv on the compiler
command line, which impede the compiler's usual exploitation of signed wrap
UB in C. Last I heard, those options lead to substantial performance
regressions.
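To make that concrete, here is a minimal sketch (an illustration, not a
measurement) of the kind of fold those flags disable:

    /* With default C semantics a compiler may fold this to "return 1",
     * because x + 1 can only wrap via undefined behavior.
     * With -fwrapv, x == INT_MAX wraps to INT_MIN, so the comparison must
     * actually be evaluated; with -ftrapv the addition is overflow-checked. */
    int increment_is_greater(int x) {
        return x + 1 > x;
    }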
In my other emails I point out that Dan achieved the goal of hoisting
sign-extension out of loops for LP64 targets by exploiting “+nsw”, and that
this appears to be the only example of a performance benefit. I am not
suggesting we remove this optimization, and my emails point out that keeping
this optimization does not require that the compiler model “undefined
behavior”.
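For reference, a sketch of the pattern in question (the function is
illustrative, not taken from Dan’s write-up): on an LP64 target the 32-bit
index must be sign-extended to 64 bits to form each address, and the nsw on
the increment is what justifies widening the induction variable and hoisting
the extension out of the loop.

    /* Illustrative only: a[i] needs a 64-bit address on LP64, so the i32
     * value of i is sign-extended every iteration. Because the i++ carries
     * "nsw" in the IR (signed overflow is UB in C), the compiler may keep a
     * 64-bit copy of i instead and drop the per-iteration sign extension. */
    long sum(const int *a, int n) {
        long s = 0;
        for (int i = 0; i < n; ++i)
            s += a[i];
        return s;
    }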
Here’s what gcc says about those options:
-ftrapv: This option generates traps for signed overflow on addition,
subtraction, multiplication operations.
-fwrapv: This option instructs the compiler to assume that signed arithmetic
overflow of addition, subtraction and multiplication wraps around using
twos-complement representation. This flag enables some optimizations and
disables others. This option is enabled by default for the Java front end,
as required by the Java language specification.
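Concretely, the difference described there looks like this (a hedged sketch;
the function and values are illustrative):

    #include <limits.h>
    /* Default C rules: the overflow below is undefined behavior.
     * -fwrapv: the result is defined to wrap around to INT_MIN.
     * -ftrapv: the overflow is detected and the program traps at run time. */
    int overflow_example(void) {
        int x = INT_MAX;
        return x + 1;
    }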
It sounds like -fwrapv has equivocal results, at least for gcc, and my guess
is that the same applies to llvm. If anyone can show that -fwrapv causes a
significant drop in SPEC-INT performance on a plain old 32-bit machine, then
we need to look into it, because that is hard to believe and doesn’t sound
consistent with gcc.
I would do the tests myself, but all I have is my Mac mini, which is an LP64
machine (and I have no license for the SPEC sources).
Is it possible to compile 32-bit programs on Mac? I know that on Linux you
can typically install 32-bit libraries and run 32-bit programs. This is a
bit extreme, but if that option isn't available on Mac, you could dual-boot
Linux on your machine (at least temporarily) to perform the measurements.
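If -m32 builds are an option (on Linux that usually means installing the
32-bit multilib packages; the command and file name below are only
illustrative), a quick sanity check that a build really is 32-bit:

    /* Compile with something like "clang -m32 -O2 check.c", adding -fwrapv
     * or -ftrapv when measuring; prints 4/4 for an ILP32 build, 8/8 for LP64. */
    #include <stdio.h>
    int main(void) {
        printf("sizeof(void *) = %zu, sizeof(long) = %zu\n",
               sizeof(void *), sizeof(long));
        return 0;
    }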
If that isn't possible, you might consider spinning up a Linux machine in
the cloud (Amazon, Google Cloud, etc.) and doing the measurements there.
Cloud machines aren't the most stable for benchmarking, but with a decent
number of runs and appropriate data analysis you should be able to get
reliable results.
Not having access to SPEC isn't an insurmountable hurdle either.
LLVM's test-suite has a number of interesting benchmarks and is freely
available and easy to set up:
http://llvm.org/docs/TestSuiteMakefileGuide.html
If you ever acquire access to the SPEC sources, they are easy to drop in
later if you need to.
-- Sean Silva
On an LP64 machine we would need a flag to disable all but Dan’s
optimization to do a comparison. It would take some time, but it is doable.
Well, I guess that wasn’t brief, sorry!
Peter Lawrence.
It sounds like you are implicitly claiming that there would be no
performance regression. If the -fwrapv/-ftrapv measurements are in line
with this, that will be useful new information for the discussion. But I
think that you need to do the work here (and it isn't that much work
compared to writing out emails; or perhaps I'm just a slow email writer).
I don't think anybody has really up-to-date numbers on the performance
effect of those options on SPEC. Could you please get some up-to-date
numbers before continuing this discussion? If the results are as you seem
to imply, then that will be very convincing support for your point.
If you can't be bothered to do that, I have to question how invested you
are in pushing the LLVM project toward a better solution to its current
woes with the poison/undef situation.
-- Sean Silva
> The current project doesn’t even address some of the bugs described in the
> paper, in particular those derived from the incorrect definition of “undef”
> in the LangRef.
>
> The current project perpetuates the myth that “poison” is somehow required.
> It isn’t, and when I show proof of that you reply with “it’s in bug
> reports, etc.”; that’s BS and you know it, because this hasn’t been
> explored. The same thing happened before when I said “nsw” shouldn’t be
> hoisted: folks replied with “that’s already been studied and rejected”,
> but I did the research and found out that no, it had not been fully
> explored; Dan discarded the idea based on incorrect analysis and people
> never questioned it. Dan created “poison” on a whim, and people picked up
> on it too, without question. We’ve been stuck with this self-inflicted
> wound ever since, and it is time to heal it.
>
> The entire situation here can be summarized as incorrect analysis, and a
> failure to fully root-cause problems, because people didn’t question their
> assumptions. And we should not be surprised; it is one of the most common
> problems in software engineering. Haven’t you ever gone in to fix a bug
> only to find that what you are doing is correcting someone else’s bug fix
> that didn’t actually fix anything, because the person doing the fix didn’t
> fully understand the problem? Happens to me all the time.
>
> The correct software engineering decision here is to fix the definition of
> “undef”, delete “poison”, and not hoist “nsw” attributes. That is a
> no-brainer. There is nothing to try out, or test, or measure. That is
> simply the way it has to be to avoid the current set of problems.
>
> I cannot emphasize that last point enough: fixing the definition of
> “undef”, deleting “poison”, and not allowing “nsw” attributes to be
> hoisted fixes all known problems, even including ones that weren’t thought
> of before these discussions started. I don’t think there is any technical
> disagreement here, not even from John, Sanjoy, or Nuno. This is a
> no-brainer.
>
> John and I do not have a technical disagreement; John is having an
> emotional reaction to the fact that he doesn’t have an answer to the
> function-inlining question. We can’t disagree until he actually has an
> opinion to disagree with, but he is not willing to express one.
>
> The work on “poison” and “freeze” should be in new analysis and transform
> passes in which folks can do whatever they want to propagate “undefined
> behavior” around in a function (but “undefined behavior” should be an
> analysis attribute, not an IR instruction). Then folks can try to see if
> they can actually measure a performance gain on any SPEC benchmarks. I
> think we all know this is going to turn out negative, but that’s beside
> the point; the point is that “poison” and “freeze” are an experiment and
> need to be treated as such, not simply forced into the trunk because
> someone likes them.
>
> What you are saying about “the llvm way” goes against everything we know
> about “software engineering”: that things have to be evidence-based, have
> utility, and be the least complex solution, to avoid confusion. And we do
> engineering reviews and take review feedback seriously. I suggest you take
> a step back and think about that, because it sure seems to me like you’re
> advocating that we don’t do reviews and we don’t base decisions on
> evidence.
>
>
> Peter Lawrence.
>
>
>
> On Jun 28, 2017, at 12:16 PM, Chandler Carruth <chandlerc at gmail.com>
> wrote:
>
> On Wed, Jun 28, 2017 at 9:39 AM Peter Lawrence <peterl95124 at
> sbcglobal.net> wrote:
>
>>
>>
>> Part I.
>>
>> The original LangRef appeared to be “nice and pretty”
>> and originally ‘undef’ did not seem to stick out.
>>
>> Then evidence came to light in the form of bug reports, and
>> in the form of Dan Gohman’s email “the nsw story”, that all
>> was not good for ‘undef’ [1,2].
>>
>> A proposal was made based on that evidence.
>> A presentation was made at an llvm gathering.
>> A paper was written. The proposal has even been
>> implemented and tested.
>>
>> The IR might not be so “nice and pretty” anymore,
>> but at least all the known bugs could be closed.
>> I think everyone agreed that the case could be closed
>> based on the evidence at hand.
>>
>> However new evidence has come to light,
>> the function-inlining example for one,
>> which the proposal does not address.
>>
>> This means the case must be re-opened.
>>
>
> Peter,
>
> People have been continuing to work on these issues for years. This is not
> new, and it is not only now being reopened.
>
>
> Unfortunately, at this point I think you are repeating well known and well
> understood information in email after email. I don't think that is a
> productive way to discuss this. However, I don't want to dissuade you from
> contributing to the project. But I don't think new emails on this subject
> will be a good use of anyone's time.
>
> Instead, someone needs to go do the very hard work of building, testing,
> and understanding solutions to some of these problems. In fact, a few
> others are already doing exactly this.
>
> I understand you disagree with the approach others are taking, and that is
> perfectly fine, even good! You have explained your concern, and there
> remains a technical disagreement. This is OK. Repeating your position
> won't really help move forward.
>
> Contributing technical perspectives (especially different ones!) is always
> valuable, and I don't want to ever discourage it. But when there remains a
> strong technical disagreement, we have to find some way to make progress.
> Typically, LLVM lends weight towards those who have the most significant
> contributions to LLVM in the area *and* are actually doing the work to
> realize a path forward. This doesn't make them any more "right" or their
> opinions "better". It is just about having a path forward.
>
> But this should also show you how to make progress. Continuing to debate
> in email probably won't help, as you're relatively new to the LLVM project.
> Instead, write the code, get the data and benchmark results to support your
> position and approach, and come back to us. I think you will have to do the
> engineering work of building the solution you want (and others disagree
> with) and showing why it is objectively better.
>
>
>