Andrew Kelley via llvm-dev
2017-Sep-30 19:33 UTC
[llvm-dev] invalid code generated on Windows x86_64 using skylake-specific features
I have this code, which works fine on macOS and Linux hosts:

const char *target_specific_cpu_args;
const char *target_specific_features;
if (g->is_native_target) {
    target_specific_cpu_args = ZigLLVMGetHostCPUName();
    target_specific_features = ZigLLVMGetNativeFeatures();
} else {
    target_specific_cpu_args = "";
    target_specific_features = "";
}

g->target_machine = LLVMCreateTargetMachine(target_ref, buf_ptr(&g->triple_str),
        target_specific_cpu_args, target_specific_features, opt_level,
        reloc_mode, LLVMCodeModelDefault);

char *ZigLLVMGetHostCPUName(void) {
    std::string str = sys::getHostCPUName();
    return strdup(str.c_str());
}

char *ZigLLVMGetNativeFeatures(void) {
    SubtargetFeatures features;

    StringMap<bool> host_features;
    if (sys::getHostCPUFeatures(host_features)) {
        for (auto &F : host_features)
            features.AddFeature(F.first(), F.second);
    }

    return strdup(features.getString().c_str());
}

On the Windows laptop I am testing on, I get these values:

target_specific_cpu_args: skylake

target_specific_features: +sse2,+cx16,-tbm,-avx512ifma,-avx512dq,-fma4,+prfchw,+bmi2,+xsavec,+fsgsbase,+popcnt,+aes,+xsaves,-avx512er,-avx512vpopcntdq,-clwb,-avx512f,-clzero,-pku,+mmx,-lwp,-xop,+rdseed,-sse4a,-avx512bw,+clflushopt,+xsave,-avx512vl,-avx512cd,+avx,-rtm,+fma,+bmi,+rdrnd,-mwaitx,+sse4.1,+sse4.2,+avx2,+sse,+lzcnt,+pclmul,-prefetchwt1,+f16c,+ssse3,+sgx,+cmov,-avx512vbmi,+movbe,+xsaveopt,-sha,+adx,-avx512pf,+sse3

It successfully creates a binary, but when run the binary crashes with:

Unhandled exception at 0x00007FF7C9913BA7 in test.exe: 0xC0000005: Access violation reading location 0xFFFFFFFFFFFFFFFF.

The disassembly of the faulting instruction is:

00007FF7C9913BA7  vmovdqa xmmword ptr [rbp-20h],xmm0

There is no call stack or source in the MSVC debugger. The .pdb produced is exactly 64 KB. The file was linked with:

lld -NOLOGO -DEBUG -MACHINE:X64 /SUBSYSTEM:console -OUT:.\test.exe -NODEFAULTLIB -ENTRY:_start ./zig-cache/test.obj ./zig-cache/builtin.obj ./zig-cache/compiler_rt.obj ./zig-cache/kernel32.lib

When I change the call to LLVMCreateTargetMachine so that both target_specific_cpu_args and target_specific_features are the empty string, the produced binary is valid and runs successfully.

Is this an LLVM bug? Am I using the API incorrectly? Is there more information I can provide to the llvm-dev mailing list that would make it easier to help me?
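One experiment I plan to try is bisecting the two inputs: pass the host CPU name with an empty feature string, and then the reverse, to see which half triggers the bad code generation. A rough sketch, using the same fields and helper functions as above (untested):

const char *cpu = g->is_native_target ? ZigLLVMGetHostCPUName() : "";
const char *features = "";  // deliberately empty for this experiment
g->target_machine = LLVMCreateTargetMachine(target_ref, buf_ptr(&g->triple_str),
        cpu, features, opt_level, reloc_mode, LLVMCodeModelDefault);

and then a second run with the CPU string empty and the full host feature string passed instead.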
Andrew Kelley via llvm-dev
2017-Oct-01 01:27 UTC
[llvm-dev] invalid code generated on Windows x86_64 using skylake-specific features
I suspect that there are 2 issues here:

 * I have incorrect alignment somewhere.
 * MSVC / .pdb / CodeView debugging is not working correctly.

I think fixing the latter would help solve the former.

I will send out a new email later about the issues I'm having debugging LLVM-generated binaries with MSVC.

On Sat, Sep 30, 2017 at 3:33 PM, Andrew Kelley <superjoe30 at gmail.com> wrote:

> I have this code, which works fine on macOS and Linux hosts:
> [...]
> When I change the call to LLVMCreateTargetMachine so that both
> target_specific_cpu_args and target_specific_features are the empty
> string, the produced binary is valid and runs successfully.
>
> Is this an LLVM bug? Am I using the API incorrectly? Is there more
> information I can provide to the llvm-dev mailing list that would make
> it easier to help me?
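P.S. For reference, here is a minimal standalone illustration of the alignment failure mode I suspect (not code from the Zig compiler): _mm_store_si128 is an aligned 16-byte store (movdqa/vmovdqa), and with optimizations disabled a store through an address that is not 16-byte aligned faults at runtime, which on Windows shows up as an access violation:

#include <immintrin.h>

int main() {
    alignas(16) unsigned char buf[64] = {};
    __m128i v = _mm_set1_epi8(0x7f);

    // Unaligned store: permitted at any address.
    _mm_storeu_si128(reinterpret_cast<__m128i *>(buf + 1), v);

    // Aligned store through a pointer that is NOT 16-byte aligned:
    // compiles to movdqa/vmovdqa and faults at runtime, the same
    // class of failure as the vmovdqa instruction above.
    _mm_store_si128(reinterpret_cast<__m128i *>(buf + 1), v);
    return 0;
}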
Reid Kleckner via llvm-dev
2017-Oct-02 17:37 UTC
[llvm-dev] invalid code generated on Windows x86_64 using skylake-specific features
Can you post test.obj somewhere, and maybe the LLVM IR if you can get it?

If it really was reading address 0xFFFFFFFFFFFFFFFF, then RBP must have been completely corrupted, probably by the prologue.

On Sat, Sep 30, 2017 at 6:27 PM, Andrew Kelley via llvm-dev <llvm-dev at lists.llvm.org> wrote:

> I suspect that there are 2 issues here:
>
>  * I have incorrect alignment somewhere.
>  * MSVC / .pdb / CodeView debugging is not working correctly.
>
> I think fixing the latter would help solve the former.
>
> I will send out a new email later about the issues I'm having debugging
> LLVM-generated binaries with MSVC.
> [...]
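If the compiler doesn't already have a flag to dump the module, one quick way to get the IR is LLVMPrintModuleToFile from the C API (llvm-c/Core.h, plus <cstdio> for fprintf). Something like the following, where g->module is only my guess at the name of the module handle in the Zig codegen:

char *error = nullptr;
// Returns nonzero on failure and fills in an error message that must
// be released with LLVMDisposeMessage.
if (LLVMPrintModuleToFile(g->module, "test.ll", &error)) {
    fprintf(stderr, "failed to write IR: %s\n", error);
    LLVMDisposeMessage(error);
}

Running llvm-objdump -d ./zig-cache/test.obj would also show whether the prologue of the crashing function sets up RBP sanely.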