After updating from ~October 2015 I noticed there were changes in how i1 works. This method used to print "1", now "254":

define i32 @"main"() {
BasicBlock0:
  %x = alloca i1
  store i1 true, i1* %x
  %5 = load i1, i1* %x
  %6 = xor i1 %5, true
  call void (%._Foundation.NSString*, ...) @NSLog(%NSString* bitcast (%NSConstantString* @_unnamed_cfstring_1 to %NSString*), i1 %6)
  ret i32 0
}

The "%6 = xor i1 %5, true" results in the value 254 being stored, while it used to be 0. Is this as designed, and some undefined behavior I was depending on, or a bug? When running it through the optimizer it does work (becomes i1 false).

-- 
Carlo Kok
Hi Carlo,

On 7 March 2016 at 10:14, Carlo Kok via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> call void (%._Foundation.NSString*, ...) @NSLog(%NSString* bitcast (%NSConstantString* @_unnamed_cfstring_1 to %NSString*), i1 %6)

I think this is incorrect. Per C rules, types smaller than int get promoted to int for varargs calls, so the standard library will be expecting an i32 here (which you can obtain by either sign or zero extension, depending on preference).

I also wouldn't necessarily want to rely on being able to pass an i1 even if you control both the caller and callee; it's likely fairly untested territory at best.

Cheers.

Tim.