similar to: Correct way to pass int128 from LLVM to C++ function (MSVC)

Displaying 20 results from an estimated 1000 matches similar to: "Correct way to pass int128 from LLVM to C++ function (MSVC)"

2016 Dec 21
0
Correct way to pass int128 from LLVM to C++ function (MSVC)
The Windows x64 ABI rules say that anything larger than 8 bytes is passed by reference.[1] Because MSVC doesn't support the __int128 type on x64, nobody has made sure that the LLVM i128 type is passed in a way that follows the local ABI rules. I think LLVM should probably pass i128 the same way it passes <2 x i64> on Win64, which is indirectly in memory. Until LLVM is fixed, you can
2016 Dec 21
2
Correct way to pass int128 from LLVM to C++ function (MSVC)
Thanks for the quick reply. Yes, passing it as int128* is a workaround that obviously works. Still, that leaves me with the return values. Or are you suggesting that I rewrite int128 Modify(int128& tmp) { … } to void Modify(int128& result, int128& tmp) { … } Obviously that will work, it just feels… dirty and wrong… :-) I’ve also attempted to bit-cast i128’s to <2 x i64> in
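A minimal sketch of the rewrite being discussed in this thread: returning the 128-bit value through an out-parameter so that, on Win64, both operands travel behind references and no i128 ever crosses the call boundary by value. The int128 struct below is a stand-in for the poster's type (an assumption; the real definition is not shown in the thread).

#include <cstdint>

// Stand-in 128-bit type; the thread's own int128 is not shown in full.
struct int128 { uint64_t lo, hi; };

// Instead of: int128 Modify(int128& tmp) { ... }
void Modify(int128& result, int128& tmp) {
    // Placeholder body: compute from tmp and write the answer into result.
    result.lo = tmp.lo;
    result.hi = tmp.hi;
}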
2016 Dec 21
0
Correct way to pass int128 from LLVM to C++ function (MSVC)
On Wed, Dec 21, 2016 at 11:18 AM, Stefan de Bruijn <stefan at nubilosoft.com> wrote:
> Thanks for the quick reply. Yes, passing it as int128* is a workaround
> that obviously works. Still, that leaves me with the return values. Or are
> you suggesting that I rewrite
>
> int128 Modify(int128& tmp) { … }
>
> to
>
> void
2018 Dec 29
2
Portable multiplication 64 x 64 -> 128 for int128 reimplementation
Hi, For some (maybe dumb) reasons I am trying to write a portable version of int128. What is very valuable for this implementation is access to the MUL instruction on x86, which provides a full 64 x 64 -> 128 bit multiplication. An equally useful instruction on ARM would be UMULH. The way you can access this on clang / GCC is to use the __int128 type or inline assembly. MSVC provides an intrinsic for
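The message is cut off here. As an illustration only, a portable 64 x 64 -> 128 wrapper along the lines being described could look like the sketch below, using __int128 on GCC/Clang and the _umul128 intrinsic from <intrin.h> on MSVC x64 (my assumption about which intrinsic the truncated sentence refers to).

#include <cstdint>
#if defined(_MSC_VER) && defined(_M_X64)
#include <intrin.h>
#endif

// Returns the low 64 bits of a*b and stores the high 64 bits in *hi.
static inline uint64_t mul_64x64_128(uint64_t a, uint64_t b, uint64_t *hi) {
#if defined(_MSC_VER) && defined(_M_X64)
    return _umul128(a, b, hi);                    // a single MUL on x86-64
#else
    unsigned __int128 p = (unsigned __int128)a * b;
    *hi = (uint64_t)(p >> 64);
    return (uint64_t)p;
#endif
}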
2018 Dec 30
3
[cfe-dev] Portable multiplication 64 x 64 -> 128 for int128 reimplementation
_mulx_u64 only exists when the target is x86_64. That's still not very portable. I'm not opposed to removing the bmi2 check, but gcc also has the same check so it doesn't improve portability much.
~Craig

On Sat, Dec 29, 2018 at 4:44 PM Arthur O'Dwyer via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> Hi Pawel,
>
> There is the _mulx_u64 intrinsic, but it
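For reference, a sketch of the _mulx_u64 call under discussion (illustrative, not taken from the thread); it is only declared for x86-64 targets, and GCC/Clang additionally require building with -mbmi2, which is the portability complaint above.

#include <immintrin.h>

// The low 64 bits are returned; the high 64 bits come back through *hi.
unsigned long long mul_with_mulx(unsigned long long a, unsigned long long b,
                                 unsigned long long *hi) {
    return _mulx_u64(a, b, hi);
}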
2011 Feb 21
2
[LLVMdev] Passing structures as pointers, MSVC x64 style
The MS x64 ABI calling convention (http://msdn.microsoft.com/en-us/library/ms235286(VS.80).aspx) says: Any argument that doesn’t fit in 8 bytes, or is not 1, 2, 4, or 8 bytes, must be passed by reference. Clang isn't doing that for us when passing our triple, x86_64-pc-win32-macho. Here's a simple example program: struct Guid { unsigned int Data1; unsigned
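The example is truncated above. A hypothetical completion using the standard Windows GUID layout (an assumption; the original message is cut off) makes the size rule concrete: at 16 bytes the struct is larger than 8 bytes, so the MS x64 convention wants the caller to pass a pointer to a temporary rather than the bytes themselves.

struct Guid {
    unsigned int   Data1;
    unsigned short Data2;
    unsigned short Data3;
    unsigned char  Data4[8];
};

static_assert(sizeof(Guid) == 16, "16 bytes: must be passed by reference on Win64");

// On Win64 the argument should arrive as a hidden pointer, not by value in registers.
void TakesGuid(Guid g);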
2023 Sep 05
1
[PATCH nbdkit] server: Move size parsing code (nbdkit_parse_size) to common/include
On Tue, Sep 05, 2023 at 11:09:02AM +0100, Richard W.M. Jones wrote:
> > > +static inline int64_t
> > > +human_size_parse (const char *str,
> > > +                  const char **error, const char **pstr)
> > > +{
> > > +  int64_t size;
> > > +  char *end;
> > > +  uint64_t scale = 1;
> > > +
> > > +  /* XXX Should we
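As context, a rough sketch (not nbdkit's actual implementation) of the kind of suffix-scaled parsing the quoted human_size_parse helper performs: read a number, then multiply by a scale chosen from a trailing unit letter.

#include <cerrno>
#include <cstdint>
#include <cstdlib>

static int64_t parse_size_sketch(const char *str) {
    errno = 0;
    char *end;
    int64_t size = strtoll(str, &end, 10);
    if (errno != 0 || end == str || size < 0)
        return -1;                                  /* parse error */

    uint64_t scale = 1;
    switch (*end) {
    case '\0':                       return size;   /* plain byte count */
    case 'k': case 'K': scale = UINT64_C(1) << 10; break;
    case 'm': case 'M': scale = UINT64_C(1) << 20; break;
    case 'g': case 'G': scale = UINT64_C(1) << 30; break;
    default:                         return -1;     /* unknown suffix */
    }
    return size * (int64_t)scale;    /* overflow check omitted for brevity */
}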
2010 Jun 11
3
[LLVMdev] Bignum development
On Fri, Jun 11, 2010 at 3:28 PM, Bill Hart <goodwillhart at googlemail.com> wrote:
> Hi Eli,
>
> On 11 June 2010 22:44, Eli Friedman <eli.friedman at gmail.com> wrote:
>> On Fri, Jun 11, 2010 at 10:37 AM, Bill Hart <goodwillhart at googlemail.com> wrote:
>>> a) What plans are there to support addition, subtraction,
>>> multiplication, division,
2018 Dec 31
0
[cfe-dev] Portable multiplication 64 x 64 -> 128 for int128 reimplementation
On trunk we never generate MULX. We used to blindly use it any time bmi2 was enabled, but it's a longer encoding and isn't a guaranteed register allocation improvement, so I took it out a few weeks ago. We need a more precise heuristic for when to use it. LLVM trunk will never generate ADCX/ADOX either; this was removed in September. We used to inconsistently generate them when adx was enabled
2011 Feb 22
0
[LLVMdev] Passing structures as pointers, MSVC x64 style
Carl,
See clang/lib/CodeGen/TargetInfo.cpp.

  // FIXME: mingw64-gcc emits 128-bit struct as i128
  if (Size <= 128 && (Size & (Size - 1)) == 0)
    return ABIArgInfo::getDirect(llvm::IntegerType::get(getVMContext(), Size));

It was my workaround, sorry. Please check to tweak the clause (128 to 64) and lemme
2010 Jun 11
4
[LLVMdev] Bignum development
Hi all, After searching for a decent compiler backend for ages (google sometimes isn't helpful), I recently stumbled upon LLVM. Woot!! I work on bignum arithmetic (I'm a professional mathematician) and have recently decided to switch from developing GPL'd bignum code to BSD licensed code. (See http://www.mpir.org/ which I contributed to for a while - a fork of GMP). Please bear with
2014 Sep 23
3
[LLVMdev] compiler-rt with MSVC?
I’m trying to figure out how to build compiler-rt 3.5.0 with Visual Studio 2013. In an autotools build or cmake on Linux, I believe putting the compiler-rt sources under llvm/projects is enough to build them automatically. Do I need to do anything specific to get the same with MSVC? I've tried setting -DLLVM_BUILD_EXTERNAL_COMPILER_RT either ON or OFF, but can't find any evidence of
2018 Apr 26
2
windows ABI problem with i128?
I'm trying to use LLVM to create compiler-rt.o on Windows. I use this command from the compiler-rt project:

[nix-shell:~/downloads/llvm-project/compiler-rt]$ clang -nostdlib -S -emit-llvm lib/builtins/udivti3.c -g -target x86_64-windows -DCRT_HAS_128BIT

The resulting LLVM IR is:

; ModuleID = 'lib/builtins/udivti3.c'
2018 Apr 26
0
windows ABI problem with i128?
Most probably you need to properly specify the calling convention the backend is using for calling the runtime functions, or implement a stub for udivti3 that performs the necessary argument lifting. I guess there is no standard ABI document describing the intended calling convention here, so I'd just do what mingw64 does and make everything compatible with that. On Thu, Apr 26, 2018 at
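A heavily hedged sketch of the "argument lifting" stub idea (the name, signature and convention below are my assumptions, not an established ABI): if the front end lowers i128 operands to pointers, a small clang-compiled bridge can forward them to compiler-rt's __udivti3, which takes the values directly.

// clang supports __int128 on x86-64, including Windows targets; MSVC itself does not.
extern "C" unsigned __int128 __udivti3(unsigned __int128 a, unsigned __int128 b);

extern "C" void udivti3_indirect(unsigned __int128 *result,
                                 const unsigned __int128 *a,
                                 const unsigned __int128 *b) {
    *result = __udivti3(*a, *b);
}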
2018 Apr 26
1
windows ABI problem with i128?
On Thu, Apr 26, 2018 at 3:44 AM, Anton Korobeynikov <anton at korobeynikov.info> wrote:
> Most probably you need to properly specify the calling convention the
> backend is using for calling the runtime functions.

Thanks for the tip. Can you be more specific? Are you suggesting there is some config parameter I can set before running TargetMachineEmitToFile? Do you know what
2010 Jun 13
1
[LLVMdev] Bignum development
I think from the C compiler's point of view, it is going to want it to work for any size above an i64, i.e. all the way up to an i128, so that if the user of the C compiler does this computation with __uint128_t's then it will Do The Right Thing TM. Basically, you want:

unsigned long a, b, c, d;
....
const __uint128_t u = (__uint128_t) a + b;
const unsigned long v = u >> 64;
const
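A minimal sketch of that idiom, assuming a GCC/Clang target where __uint128_t is available: add two 2-limb numbers and let the compiler turn the high-half extraction into an add-with-carry chain.

#include <cstdint>

void add_2_limbs(uint64_t a1, uint64_t a0, uint64_t b1, uint64_t b0,
                 uint64_t *r1, uint64_t *r0) {
    __uint128_t lo = (__uint128_t)a0 + b0;   // low limbs plus carry out
    uint64_t carry = (uint64_t)(lo >> 64);
    *r0 = (uint64_t)lo;
    *r1 = a1 + b1 + carry;                   // ideally lowered to ADC
}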
2010 Jun 13
0
[LLVMdev] Bignum development
> Yeah I had a think about it, and I think intrinsics are the wrong way > to do it. So I'd say you are likely right. For this to work well, the way the code generators handle flags will need to be improved: currently it is suboptimal, in fact kind of a hack. Ciao, Duncan.
2015 Sep 14
2
JIT: Mapping global variable in JIT'ted code to variable in running program
Hi, I think this is probably easiest to explain with code (I only provided the essentials for clarity):

// begin file jit.cpp
int myglobal;

void printMyGlobal() {
  printf("myglobal: %d\n", myglobal);
}

int main(int argc, char *argv[]) {
  // This file, jit.cpp has been compiled to bitcode (clang -S -emit-llvm jit.cpp)
  // and is read into Module M here
  Module *M = ...
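For readers landing here, a hedged sketch (not from the thread itself) of two common MCJIT-era answers to this question: pre-register the host symbol so resolution finds it by name, or map the module's GlobalVariable to the host address explicitly. Assumes the module declares myglobal as an external global; error handling and the usual EngineBuilder setup are omitted.

#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/DynamicLibrary.h"

int myglobal;   // the host-process variable the JIT'ted code should see

void bindMyGlobal(llvm::ExecutionEngine *EE, llvm::Module *M) {
    // Option 1: let the JIT's symbol resolution find the host variable by name.
    llvm::sys::DynamicLibrary::AddSymbol("myglobal", &myglobal);

    // Option 2: map the module's GlobalVariable directly to the host address.
    if (llvm::GlobalVariable *GV = M->getNamedGlobal("myglobal"))
        EE->addGlobalMapping(GV, &myglobal);
}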
2008 Feb 27
6
[LLVMdev] ABI for i128 on x86-32?
Hello, Does anyone know of any precedent for handling i128 in the calling convention on x86-32? I'm trying to write a testcase that returns an i128 value, and LLVM currently has only two 32-bit GPRs designated for returning integer values on x86-32. Dan
2009 Aug 06
4
[LLVMdev] i128 backend or frontend lowering
I am seeing i128 from llvm-gcc on Alpha. I know the calling convention for them: they are split into two registers. But I don't know whether that should be handled in the frontend or the backend. I would just as soon do it in the backend, but I didn't see any support in the new calling convention work for automatically splitting an argument into multiple registers. Is the backend the best