Hello,

I was looking at the following test case, which is very relevant in imaging applications:

  int sad(unsigned char *pix1, unsigned char *pix2)
  {
    int sum = 0;
    for (int x = 0; x < 16; x++)
      sum += abs(pix1[x] - pix2[x]);
    return sum;
  }

The LLVM IR generated after all the IR-level optimizations is:

  .....
  %9 = bitcast i8* %8 to <4 x i8>*
  %wide.load.1 = load <4 x i8>* %9, align 1
  %10 = zext <4 x i8> %wide.load.1 to <4 x i32>
  %11 = getelementptr inbounds i8* %pix2, i64 4
  %12 = bitcast i8* %11 to <4 x i8>*
  %wide.load7.1 = load <4 x i8>* %12, align 1
  %13 = zext <4 x i8> %wide.load7.1 to <4 x i32>
  %14 = sub nsw <4 x i32> %10, %13
  %15 = icmp uge <4 x i8> %wide.load.1, %wide.load7.1
  %16 = sub nsw <4 x i32> zeroinitializer, %14
  %17 = select <4 x i1> %15, <4 x i32> %14, <4 x i32> %16
  %18 = add nsw <4 x i32> %17, %7
  %19 = getelementptr inbounds i8* %pix1, i64 8
  .....

This test case is a perfect candidate for generating the Sum of Absolute Differences (SAD) instruction, which is present on most targets. The proposed generated IR is shown below (the x86 intrinsic is called directly just to demonstrate the difference):

  .....
  %6 = bitcast i8* %5 to x86_mmx*
  %wide.load8.1 = load x86_mmx* %6, align 1
  %7 = getelementptr inbounds i8* %pix2, i64 8
  %8 = bitcast i8* %7 to x86_mmx*
  %wide.load79.1 = load x86_mmx* %8, align 1
  %9 = call x86_mmx @llvm.x86.mmx.psad.bw(x86_mmx %wide.load8.1, x86_mmx %wide.load79.1)
  %10 = bitcast x86_mmx %9 to i64
  %11 = trunc i64 %10 to i32
  %12 = add i32 %11, %4
  .....

This loop optimization can reduce the loop body to a single instruction for every 4, 8, or 16 iterations of the loop (depending on the target's SAD instruction).

Proposed Solution

The SAD pattern shown above, i.e. the abs pattern applied to a reduction variable (a sequence of sub, icmp, sub, select, add), can be replaced with an intrinsic call. For a non-unrolled loop we can do this in the LoopVectorizer, where all the infrastructure for identifying reduction variables already exists.
We just need to identify the SAD pattern, remove those instructions, and emit an intrinsic call instead. This can be done in the loop body that the vectorizer creates (vector.body) without disturbing the scalar body of the loop.

For this to happen in the LoopVectorizer, we also need to influence its cost modeling. When computing the cost for different vectorization factors, we should remove the cost of the SAD pattern and add the cost of the intrinsic. If the selected vectorization factor equals the operand vector width of the target's basic SAD instruction (8 for x86), we can go ahead and replace the pattern with the intrinsic.

This can be a call to a target-independent LLVM intrinsic, which is then lowered to a different target intrinsic depending on the selected VF.

For unrolled loops, we can target the pattern in the SLPVectorizer.

The LoopVectorizer work using an LLVM intrinsic call is ready for x86 codegen; I can send the patch after this discussion. (The attachments contain the complete IR, with and without 8-byte SAD, for the above test case after optimizations.)

Please provide your thoughts.

Thank you,
Vijender
Hi Vijender,

Thanks for posting this. There is wide support here for improving our handling of reductions of various kinds, both in flavor and in robustness. I've cc'd some others who have previously discussed this.

James has advocated in the past for an intrinsic for horizontal reductions, which I support. We also need better support for vectorization cost modeling of combinations of IR-level instructions that have efficient target representations. Taken together, I think this addresses the use case you're highlighting here (and more).

 -Hal

----- Original Message -----
> From: "Vj Muthyala" <Vj.Muthyala at amd.com>
> To: llvmdev at cs.uiuc.edu
> Sent: Tuesday, January 27, 2015 10:14:44 PM
> Subject: [LLVMdev] RFC: generation of PSAD instruction
>
> [...]

--
Hal Finkel
Assistant Computational Scientist
Leadership Computing Facility
Argonne National Laboratory
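To make the target-independent intrinsic idea concrete, a possible shape for the replacement in vector.body is sketched below. The @llvm.sad.* name and signature are purely hypothetical (no such intrinsic exists in the tree); lowering would then select psadbw on x86, or a target's equivalent, based on the chosen VF:

```llvm
; Hypothetical target-independent SAD intrinsic, for illustration only.
declare i32 @llvm.sad.i32.v8i8(<8 x i8>, <8 x i8>)

vector.body:
  ...
  %a = load <8 x i8>* %p1, align 1
  %b = load <8 x i8>* %p2, align 1
  %partial  = call i32 @llvm.sad.i32.v8i8(<8 x i8> %a, <8 x i8> %b)
  %sum.next = add i32 %partial, %sum
  ...
```

Keeping the accumulator scalar (i32) means the intrinsic absorbs both the absolute-difference and the horizontal-add halves of the pattern, so the per-VF lowering is free to decide how wide its internal partial sums are.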
On Wed, Jan 28, 2015 at 7:50 AM, Hal Finkel <hfinkel at anl.gov> wrote:
> Hi Vijender,
>
> Thanks for posting this, there is wide support here for improving our support for reductions of various kinds, both in flavor and robustness. I've cc'd some others who have previously discussed this.
>
> James has advocated in the past for an intrinsic for horizontal reductions, which I support. We also need better support for vectorization cost modeling of combinations of IR-level instructions that have efficient target representations. Taken together, I think this addresses the use case you're highlighting here (and more).

The second part (multi-instruction cost modeling) is on my todo list, with vectorizing saturation code as the motivating use case. I created a thread on that recently, and indeed people mentioned SAD as another potential user. I need to look into how the cost model interacts with reduction variable identification (it doesn't?), and will come up with a patch soon!

-Ahmed

> ----- Original Message -----
>> From: "Vj Muthyala" <Vj.Muthyala at amd.com>
>> To: llvmdev at cs.uiuc.edu
>> Sent: Tuesday, January 27, 2015 10:14:44 PM
>> Subject: [LLVMdev] RFC: generation of PSAD instruction
>>
>> [...]
>
> --
> Hal Finkel
> Assistant Computational Scientist
> Leadership Computing Facility
> Argonne National Laboratory

> _______________________________________________
> LLVM Developers mailing list
> LLVMdev at cs.uiuc.edu http://llvm.cs.uiuc.edu
> http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev