Xinliang David Li
2015-Apr-24 19:28 UTC
[LLVMdev] Loss of precision with very large branch weights
yes -- for count representation, 64 bit is needed. The branch weight
here is different and does not need to be 64 bits to represent branch
probability precisely.

David

On Fri, Apr 24, 2015 at 12:21 PM, Smith, Kevin B <kevin.b.smith at intel.com> wrote:
> FWIW, the Intel compiler's profile instrumentation uses 64 bit integer
> counters. We wrestled with similar problems for a long time before biting
> the bullet and switching to 64 bit counters.
>
> For 32 bit architectures this is definitely not ideal, as the code must
> now use multi-instruction sequences to perform the counter increments.
>
> With 64 bits, even at 1 billion counter increments/second, the counter
> still wouldn't overflow for something like 800 years. So I think a 64 bit
> scale of values is definitely enough.
>
> Kevin Smith
>
> -----Original Message-----
> From: llvmdev-bounces at cs.uiuc.edu [mailto:llvmdev-bounces at cs.uiuc.edu] On Behalf Of Diego Novillo
> Sent: Friday, April 24, 2015 11:47 AM
> To: Xinliang David Li
> Cc: LLVM Developers Mailing List
> Subject: Re: [LLVMdev] Loss of precision with very large branch weights
>
> On 04/24/15 14:44, Xinliang David Li wrote:
>> Isn't that the direct result of the branch weights not being scaled
>> (after reaching the cap) -- thus leading to wrong branch probability
>> (computed from weights)? Wrong probability leads to wrong frequency
>> propagation.
>
> Yup, I'm trying to see if we couldn't just use 64 bit values throughout
> to make things easier. The drawback would be that we are just punting the
> problem to a different scale of values (but it may be enough).
>
> Diego.
> _______________________________________________
> LLVM Developers mailing list
> LLVMdev at cs.uiuc.edu http://llvm.cs.uiuc.edu
> http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev
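Kevin's back-of-the-envelope overflow estimate is easy to check. The sketch below (hypothetical helper names, not any compiler's actual code) computes the overflow horizon for 32- and 64-bit counters at the stated 1 billion increments per second; the exact year count depends on assumptions, but the conclusion -- seconds for 32 bits, centuries for 64 -- holds either way.

```cpp
#include <cstdint>

// Worst-case assumption from the thread: the counter is bumped 1 billion
// times per second (roughly one increment per cycle on a 1 GHz core).
constexpr double kIncrementsPerSecond = 1e9;

// Seconds until a counter with the given maximum value wraps around.
inline double secondsToOverflow(double maxCounterValue) {
  return maxCounterValue / kIncrementsPerSecond;
}

// The same horizon expressed in years.
inline double yearsToOverflow(double maxCounterValue) {
  const double secondsPerYear = 365.25 * 24 * 3600;
  return secondsToOverflow(maxCounterValue) / secondsPerYear;
}
```

Plugging in the limits: a 32-bit counter (max 4294967295) wraps after about 4.3 seconds of sustained counting, while a 64-bit counter lasts on the order of 600 years under the same load -- the same order of magnitude as the figure quoted above.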
Diego Novillo
2015-Apr-24 19:29 UTC
[LLVMdev] Loss of precision with very large branch weights
On Fri, Apr 24, 2015 at 3:28 PM, Xinliang David Li <davidxl at google.com> wrote:
> yes -- for count representation, 64 bit is needed. The branch weight
> here is different and does not need to be 64 bits to represent branch
> probability precisely.

Actually, the branch weights are really counts. They get converted to
frequencies. For frequencies, we don't really need 64 bits, as they're
just comparative values that can be squished into 32 bits. It's the
branch weights being 32 bit quantities that are throwing off the
calculations.

Diego.
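The "squishing" Diego mentions -- mapping large 64-bit counts into a bounded 32-bit range while preserving the ratios between branches -- could be sketched as follows. This is a hypothetical illustration of the idea, not the scaling code LLVM actually uses.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Scale raw 64-bit profile counts so the largest fits in 32 bits while
// keeping the ratios between branches roughly intact. A sketch only.
std::vector<uint32_t> squishCounts(const std::vector<uint64_t> &counts) {
  const uint64_t maxWeight = UINT32_MAX;
  uint64_t biggest = 0;
  for (uint64_t c : counts)
    biggest = std::max(biggest, c);

  // Divisor that maps `biggest` into [0, maxWeight]; 1 means no scaling.
  const uint64_t scale = biggest / maxWeight + 1;

  std::vector<uint32_t> weights;
  for (uint64_t c : counts) {
    uint64_t w = c / scale;
    // Keep nonzero counts at >= 1 so a taken edge never looks dead.
    if (c != 0 && w == 0)
      w = 1;
    weights.push_back(static_cast<uint32_t>(w));
  }
  return weights;
}
```

For example, counts of 6 billion and 3 billion both exceed UINT32_MAX as a pair, so they are divided by a common scale of 2, yielding 32-bit weights that still express the same 2:1 ratio.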
Xinliang David Li
2015-Apr-24 19:44 UTC
[LLVMdev] Loss of precision with very large branch weights
On Fri, Apr 24, 2015 at 12:29 PM, Diego Novillo <dnovillo at google.com> wrote:
> On Fri, Apr 24, 2015 at 3:28 PM, Xinliang David Li <davidxl at google.com>
> wrote:
>> yes -- for count representation, 64 bit is needed. The branch weight
>> here is different and does not need to be 64 bits to represent branch
>> probability precisely.
>
> Actually, the branch weights are really counts.

No -- I think that was our original proposal (including changing the
meaning of the MD_prof metadata) :). After many rounds of discussion,
what we eventually settled on is to
1) use a 64 bit value to represent the function entry count;
2) keep the branch weight representation and meaning as they are.

Changing weights to 64 bits can slightly increase memory usage. In fact,
what we want longer term is to get rid of 'weights' and just use a fixed
point representation for branch probability. For blocks with 2 targets,
such info can be attached at the block (source) level, further saving
memory.

> They get converted to frequencies. For frequencies, we don't really
> need 64 bits, as they're just comparative values that can be squished
> into 32 bits. It's the branch weights being 32 bit quantities that are
> throwing off the calculations.

Do you still see the issue after fixing the bug (limit without scaling)
in BranchProbabilityInfo::calcMetadataWeights?

David

> Diego.
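One possible shape for the fixed-point branch probability David describes is a 32-bit numerator over an implicit power-of-two denominator. The sketch below is hypothetical -- not necessarily the exact scheme LLVM adopted, though the BranchProbability class that LLVM later settled on uses a similar numerator-over-2^31 encoding.

```cpp
#include <cstdint>

// A fixed-point branch probability: N / 2^31, stored in 32 bits.
// Hypothetical sketch of the representation discussed in the thread.
class FixedProbability {
  static constexpr uint32_t Denominator = 1u << 31;
  uint32_t N; // numerator, in [0, 2^31]

public:
  // Build from 64-bit edge counts (assumes taken <= total). long double
  // sidesteps 64-bit overflow in taken * 2^31; fine for a sketch.
  static FixedProbability fromCounts(uint64_t taken, uint64_t total) {
    FixedProbability P;
    if (total == 0)
      P.N = Denominator / 2; // no data: assume 50/50
    else
      P.N = (uint32_t)((long double)taken * Denominator / total);
    return P;
  }

  // Scale a 64-bit block frequency by this probability.
  uint64_t scale(uint64_t freq) const {
    return (uint64_t)((long double)freq * N / Denominator);
  }
};
```

The denominator never needs to be stored, so one 32-bit field per edge suffices -- and for two-target blocks only one probability need be kept, since the other is its complement, which is the memory saving mentioned above.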
Philip Reames
2015-Apr-25 18:03 UTC
[LLVMdev] Loss of precision with very large branch weights
On 04/24/2015 12:29 PM, Diego Novillo wrote:
> On Fri, Apr 24, 2015 at 3:28 PM, Xinliang David Li <davidxl at google.com
> <mailto:davidxl at google.com>> wrote:
>
>     yes -- for count representation, 64 bit is needed. The branch weight
>     here is different and does not need to be 64 bits to represent branch
>     probability precisely.
>
> Actually, the branch weights are really counts. They get converted to
> frequencies. For frequencies, we don't really need 64 bits, as they're
> just comparative values that can be squished into 32 bits. It's the
> branch weights being 32 bit quantities that are throwing off the
> calculations.

Having branch weights as 64 bit values seems entirely reasonable to me.
Increasing the range of the stored value doesn't change the semantics no
matter how you interpret them. It does change the calculated frequencies,
but only to make them more accurate. I don't see any problem with that.

Philip