Anyone run across __fixunsdfti relocation problems while building the llvm gcc frontend on an AMD-64 box under linux? For some reason, TImode is turned on but the bootstrap xgcc compiler complains that 128-bit integers are not supported. Any clues on a workaround?
This is not yet supported by the llvm x86 backend. Perhaps there is a configuration option (or you may have to hack up some files in config/i386) to disable gcc long double support?

Evan

On Oct 9, 2006, at 11:50 AM, Scott Michel wrote:

> Anyone run across __fixunsdfti relocation problems while building the
> llvm gcc frontend on an AMD-64 box under linux? For some reason, TImode
> is turned on but the bootstrap xgcc compiler complains that 128-bit
> integers are not supported.
>
> Any clues on a workaround?
> _______________________________________________
> LLVM Developers mailing list
> LLVMdev at cs.uiuc.edu  http://llvm.cs.uiuc.edu
> http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev
Evan Cheng wrote:

> This is not yet supported by the llvm x86 backend. Perhaps there is a
> configuration option (or you may have to hack up some files in
> config/i386) to disable gcc long double support?

Tracked it down. More precisely, llvm doesn't support 128-bit integer (TImode) arithmetic yet. The "128-bit integers not supported" error actually comes from llvm-types.cc. I wonder how x86_64-darwin-* gets built, since it would seem to have the same issue. In any case, I think I have a minor patch that will detect llvm and work around this problem until TImode support is added to llvm.