Hey folks,

I am hoping that someone might be able to help me better understand this design decision: on a 64-bit target, indices into GEPs of arrays are zext'd up to 64-bit integers, even if the index variable itself is an alloca of only i32. Similarly, on a 32-bit target, indices into GEPs are trunc'd down to 32 bits even if they ultimately reference an alloca'd variable of type i64.

My guess is that in the latter case there is a 2^32 limit on the number of elements in an array on 32-bit systems. That reasoning doesn't apply to the former, though, since we're casting up to 64 bits from the smaller 32. So, does anyone have any insight, comments, or thoughts on why we zext up to a 64-bit representation?

Take Care,
~Jon
On Nov 2, 2009, at 3:30 PM, Jon McLachlan wrote:

> Hey folks,
>
> I am hoping that someone might be able to help me better understand
> this design decision: on a 64-bit target, indices into GEPs of
> arrays are zext'd up to 64-bit integers, even if the index variable
> itself is an alloca of only i32. Similarly, on a 32-bit target,
> indices into GEPs are trunc'd down to 32 bits even if they
> ultimately reference an alloca'd variable of type i64.
>
> My guess is that in the latter case there is a 2^32 limit on the
> number of elements in an array on 32-bit systems. That reasoning
> doesn't apply to the former, though, since we're casting up to 64
> bits from the smaller 32. So, does anyone have any insight,
> comments, or thoughts on why we zext up to a 64-bit representation?

GEP uses sext, not zext (though a sext can be replaced by a zext in some cases by a crafty optimizer).

GEP indices are normalized to be pointer-sized integers. This is done to aid optimization: GEPs are ultimately lowered to pointer-sized integer arithmetic on most targets, and exposing the casts to the optimizer earlier rather than later gives it more flexibility to fold them or rearrange the surrounding code.

Dan
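To make Dan's point concrete, here is a rough sketch (using current LLVM IR syntax; the function name and array type are made up for illustration) of what a front end typically emits on a 64-bit target when an i32 value is used to index an array: the index is sign-extended to the pointer width before feeding the GEP.

```llvm
; Hypothetical example: indexing a [16 x i32] array with an i32 index
; on a 64-bit target. The i32 index is sext'd to i64 (the pointer
; width) so the GEP operates on a pointer-sized integer.
define i32 @get_element(ptr %arr, i32 %i) {
entry:
  %idx = sext i32 %i to i64                 ; index normalized to pointer size
  %elt = getelementptr inbounds [16 x i32], ptr %arr, i64 0, i64 %idx
  %val = load i32, ptr %elt
  ret i32 %val
}
```

Because the sext is explicit in the IR rather than hidden inside address computation at lowering time, passes like instcombine can fold it or reason about the index's range (e.g. proving the sext can become a zext when the index is known non-negative).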
Ahh, many thx for the insight - it makes perfect sense :)

On Nov 2, 2009, at 7:17 PM, Dan Gohman wrote:

> GEP uses sext, not zext (though a sext can be replaced by a zext
> in some cases by a crafty optimizer).
>
> GEP indices are normalized to be pointer-sized integers. This
> is done to aid optimization, as GEPs are ultimately lowered to
> pointer-sized integer arithmetic on most targets, and exposing
> the casts to the optimizer earlier rather than later gives it more
> flexibility to fold them or rearrange the surrounding code.
>
> Dan