I understand, but this sounds like legalization. Does every architecture trigger an overflow exception, as opposed to setting a bit? Perhaps it makes more sense to do this in the backends that trigger an overflow exception?

I'm working on a modification for DIV right now in the x86 backend for Intel Atom that will improve performance. However, because the *actual* operation has been replaced with a 32-bit operation, it becomes much more difficult to detect when a real 32-bit divide is happening. If someone knows where the 8-bit DIV is being handled in the IR, I could look into this change?

Tyler

-----Original Message-----
From: Eli Friedman [mailto:eli.friedman at gmail.com]
Sent: Wednesday, June 27, 2012 19:07
To: Nowicki, Tyler
Cc: llvmdev at cs.uiuc.edu
Subject: Re: [LLVMdev] 8-bit DIV IR irregularities

On Wed, Jun 27, 2012 at 4:02 PM, Nowicki, Tyler <tyler.nowicki at intel.com> wrote:
> Hi,
>
> I noticed that when dividing with signed 8-bit values the IR uses a
> 32-bit signed divide; however, when unsigned 8-bit values are used the
> IR uses an 8-bit unsigned divide. Why not use an 8-bit signed divide
> when using 8-bit signed values?

"sdiv i8 -128, -1" has undefined behavior; "sdiv i32 -128, -1" is well-defined.

-Eli
On Wed, Jun 27, 2012 at 5:22 PM, Nowicki, Tyler <tyler.nowicki at intel.com> wrote:
> I understand, but this sounds like legalization. Does every architecture
> trigger an overflow exception, as opposed to setting a bit? Perhaps it
> makes more sense to do this in the backends that trigger an overflow
> exception?

The IR instruction has undefined behavior on overflow. This has nothing to do with legalization.

> I'm working on a modification for DIV right now in the x86 backend for
> Intel Atom that will improve performance; however, because the *actual*
> operation has been replaced with a 32-bit operation it becomes much more
> difficult to detect when a real 32-bit divide is happening.

There is no way to write an 8-bit divide in C; it's a 32-bit divide where each operand happens to be a sign extension from an 8-bit type.

> If someone knows where the 8-bit DIV is being handled in the IR I could
> look into this change?

For your div8 testcase, instcombine transforms from a "udiv i32" to a "udiv i8". instcombine isn't allowed to do that for "sdiv i32" because it potentially introduces undefined behavior.

-Eli
On Jun 27, 2012, at 6:26 PM, Eli Friedman wrote:
> On Wed, Jun 27, 2012 at 5:22 PM, Nowicki, Tyler <tyler.nowicki at intel.com> wrote:
>> I understand, but this sounds like legalization. Does every architecture
>> trigger an overflow exception, as opposed to setting a bit? Perhaps it
>> makes more sense to do this in the backends that trigger an overflow
>> exception?
>
> The IR instruction has undefined behavior on overflow. This has
> nothing to do with legalization.
>
>> I'm working on a modification for DIV right now in the x86 backend for
>> Intel Atom that will improve performance; however, because the *actual*
>> operation has been replaced with a 32-bit operation it becomes much more
>> difficult to detect when a real 32-bit divide is happening.
>
> There is no way to write an 8-bit divide in C; it's a 32-bit divide
> where each operand happens to be a sign extension from an 8-bit type.

<nitpick>As I recall, it's a divide of 'int' type, which is typically 32-bit but is only mandated to be at least 16-bit (or, more precisely, a representation able to contain the values between -32768 and 32767).</nitpick>

>> If someone knows where the 8-bit DIV is being handled in the IR I could
>> look into this change?
>
> For your div8 testcase, instcombine transforms from a "udiv i32" to a
> "udiv i8". instcombine isn't allowed to do that for "sdiv i32"
> because it potentially introduces undefined behavior.
>
> -Eli
>
> _______________________________________________
> LLVM Developers mailing list
> LLVMdev at cs.uiuc.edu         http://llvm.cs.uiuc.edu
> http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev