We use aliases extensively in our library to support OpenCL, generating code
for both our CPUs and GPUs. During the transition to LLVM 3.0, with its new
type system, we're seeing two problems. Both involve type conversions
occurring across an alias.
In one case, one of the types is a pointer to an opaque type, and we end up
hitting an assert in the verifier where it checks that the argument types
passed match the parameter types expected.
In the other case, we're seeing a type conversion inserted where previously
no conversion (or an implicit bitcast conversion, if you prefer) was done.
In thinking about this, it feels like there are two possible interpretations
of an alias, but the LLVM IR Language Reference leaves a lot to one's
preconceived notions.
The first interpretation (and, we believe, the long-standing one) is that an
alias is merely another name for a block of target machine instructions. We
call this a "bit-preserving alias", or "late alias", since we're not aware of
any well-known name for this kind of alias beyond plain "alias".
The second interpretation might be called a "value-preserving" alias, or
"early alias". In this version, the alias is conceptually a "call", with all
the arguments converted (as if by assignment) to the types expected by the
callee.
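As a rough OpenCL C analogy (F and the g_* wrappers are hypothetical names;
the as_typen and convert_typen built-ins stand in for the machine-level
behavior):

static uint2 F(uint2 x); /* some callee taking and returning uint2 */

/* Bit-preserving ("late") alias: the caller's bits reach F's machine
   code unchanged, as if reinterpreted. */
static float2 g_late(float2 a) { return as_float2(F(as_uint2(a))); }

/* Value-preserving ("early") alias: the argument is converted as if by
   assignment on the way in, and the result is converted back on the
   way out. */
static float2 g_early(float2 a) { return convert_float2(F(convert_uint2(a))); }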
It seems LLVM 3.0 implements value-preserving alias, while previous LLVM
versions implemented (perhaps by accident) bit-preserving alias.
It's worth noting that in many cases both kinds of alias produce the same
results... for example, in OpenCL:
static uint2 __SH1I422(uint2 foo, uint2 bar) { ... }
extern __attribute__((overloadable, weak, alias("__SH1I422")))
uint2 shuffle(uint2, uint2);
extern __attribute__((overloadable, weak, alias("__SH1I422")))
int2 shuffle(int2, uint2);
Both value-preserving and bit-preserving alias do the same thing for the above
two cases.
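A sketch of why they agree for the int2 overload (assuming a two's-complement
target, which covers all of ours):

uint2 u = (uint2)(7u, 0xFFFFFFFFu);
int2 by_value = convert_int2(u); /* value conversion: (7, -1) */
int2 by_bits = as_int2(u);       /* bit reinterpretation: (7, -1) */

Converting between same-width integer types changes neither the bits nor the
generated code, so both alias semantics yield the same shuffle.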
But here's an example of an alias where the results differ. It used to work
with LLVM 2.9, but does not with LLVM 3.0...
extern __attribute__((overloadable, weak, alias("__SH1I422")))
float2 shuffle(float2, uint2);
In LLVM 2.9 and LLVM 3.0, our front-end generates:
@__shuffle_2f32_2u32 = alias weak <2 x i32> (<2 x i32>, <2 x i32>)* @4
And the calls, before linking, look like:
%call9 = call <2 x float> @__shuffle_2f32_2u32(<2 x float> %tmp7, <2 x i32> %tmp8) nounwind
After linking with LLVM 3.0, the call looks like:
%call9 = call <2 x float> bitcast (<2 x i32> (<2 x i32>, <2 x i32>)* @__shuffle_2f32_2u32
         to <2 x float> (<2 x float>, <2 x i32>)*)(<2 x float> %tmp7, <2 x i32> %tmp8) nounwind
After optimization, including inlining, LLVM 3.0 ends up converting the
shuffle(float2, uint2) caller's first argument to uint2 and then converting
__SH1I422's return value from uint2 back to float2...
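To make the difference concrete, suppose (purely hypothetically) that
__SH1I422 returned its first argument unchanged; in OpenCL terms the two
behaviors would be:

float2 x = (float2)(1.5f, 2.25f);

/* LLVM 2.9 (bit-preserving): the reinterpretations cancel out. */
float2 by_bits = as_float2(as_uint2(x));            /* (1.5f, 2.25f) */

/* LLVM 3.0 (value-preserving): genuine float<->uint conversions. */
float2 by_value = convert_float2(convert_uint2(x)); /* (1.0f, 2.0f) */

The fractional bits are lost in the second form, which is the kind of change
in results we're seeing.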
We're pretty intent on getting the old "bit-preserving" semantics back
somehow, and we're looking for suggestions on how to do this while keeping
the code-size savings that we use aliases for.
And keep in mind the first case, where the types being passed in are pointers
to opaque types... the target function ends up casting the pointer to a type
whose definition it does know, and that type turns out to be the same for all
the callers, even though the callers' opaque pointer types are all distinct.
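A minimal C sketch of that shape (every name here is hypothetical; our real
code differs):

/* Each caller holds its own distinct opaque handle type... */
typedef struct _handleA *handleA_t;
typedef struct _handleB *handleB_t;

/* ...but only the aliasee knows the one concrete layout behind them. */
struct impl { int state; };

static int __get_state(struct impl *p) { return p->state; }

/* The aliased entry points take the opaque pointer types; across the
   alias they all arrive at __get_state as struct impl *, the same type
   for every caller, even though handleA_t and handleB_t are distinct
   types under the LLVM 3.0 type system. */
extern __attribute__((weak, alias("__get_state"))) int state_of_A(handleA_t);
extern __attribute__((weak, alias("__get_state"))) int state_of_B(handleB_t);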
Thanks,
Richard