Hi,

I noticed that when dividing signed 8-bit values the IR uses a 32-bit signed divide; when unsigned 8-bit values are used, however, the IR uses an 8-bit unsigned divide. Why not use an 8-bit signed divide for 8-bit signed values? Here is the C code and IR:

char idiv8(char a, char b) {
  char c = a / b;
  return c;
}

define signext i8 @idiv8(i8 signext %a, i8 signext %b) nounwind readnone {
entry:
  %conv = sext i8 %a to i32
  %conv1 = sext i8 %b to i32
  %div = sdiv i32 %conv, %conv1
  %conv2 = trunc i32 %div to i8
  ret i8 %conv2
}

unsigned char div8(unsigned char a, unsigned char b) {
  unsigned char c = a / b;
  return c;
}

define zeroext i8 @div8(i8 zeroext %a, i8 zeroext %b) nounwind readnone {
entry:
  %div3 = udiv i8 %a, %b
  ret i8 %div3
}

I noticed the same behavior at -O3. The command line arguments I'm using for clang are: -O2 -emit-llvm -S.

Tyler Nowicki
Intel
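P.S. For reference, the full invocation looks like this (the file names here are just placeholders):

clang -O2 -emit-llvm -S div8.c -o div8.ll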
On Wed, Jun 27, 2012 at 4:02 PM, Nowicki, Tyler <tyler.nowicki at intel.com> wrote:
> Hi,
>
> I noticed that when dividing signed 8-bit values the IR uses a 32-bit
> signed divide; when unsigned 8-bit values are used, however, the IR
> uses an 8-bit unsigned divide. Why not use an 8-bit signed divide for
> 8-bit signed values?

"sdiv i8 -128, -1" has undefined behavior; "sdiv i32 -128, -1" is well-defined.

-Eli
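P.S. To make the hazard concrete: the one problem case for an 8-bit signed divide is -128 / -1, whose mathematical result, 128, is not representable in i8; computed in i32 it is perfectly representable. A minimal illustration (my own, not clang output; note that narrowing 128 back to signed char is implementation-defined, though it wraps to -128 on typical two's-complement targets):

#include <cstdio>

int main() {
  signed char a = -128, b = -1;
  // The usual arithmetic conversions promote both operands to int,
  // so the quotient is computed in 32 bits: 128, well-defined.
  int wide = a / b;
  // Narrowing 128 back to signed char is implementation-defined
  // (it wraps to -128 on typical two's-complement implementations).
  signed char narrow = static_cast<signed char>(wide);
  std::printf("wide = %d, narrow = %d\n", wide, static_cast<int>(narrow));
  return 0;
}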
I understand, but this sounds like legalization. Does every architecture trigger an overflow exception on this case, as opposed to setting a bit? Perhaps it makes more sense to do this only in the backends for targets that trigger an overflow exception?

I'm working on a modification for DIV right now in the x86 backend for Intel Atom that will improve performance; however, because the *actual* operation has been replaced with a 32-bit operation, it becomes much more difficult to detect when a real 32-bit divide is happening. If someone can point me to where the 8-bit DIV is being handled in the IR, I could look into this change.

Tyler
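P.S. Something along these lines is what I have in mind for the detection side: an untested sketch based on my reading of the SelectionDAG headers, with helper names of my own invention.

#include "llvm/CodeGen/SelectionDAG.h"

using namespace llvm;

// Untested sketch: true if V is (sign_extend x) with x of type i8.
static bool isSExtFromI8(SDValue V) {
  return V.getOpcode() == ISD::SIGN_EXTEND &&
         V.getOperand(0).getValueType() == MVT::i8;
}

// Untested sketch: recognize a 32-bit sdiv whose operands were both
// sign-extended from i8 (the pattern the front end emits for
// char / char). A target combine could lower this to an 8-bit divide,
// as long as it still guards the -128 / -1 case.
static bool isSDivNarrowableToI8(SDNode *N) {
  return N->getOpcode() == ISD::SDIV &&
         N->getValueType(0) == MVT::i32 &&
         isSExtFromI8(N->getOperand(0)) &&
         isSExtFromI8(N->getOperand(1));
}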