Displaying 20 results from an estimated 5000 matches similar to: "[LLVMdev] Bignum development"

2010 Jun 11
3
[LLVMdev] Bignum development
On Fri, Jun 11, 2010 at 3:28 PM, Bill Hart <goodwillhart at googlemail.com> wrote: > Hi Eli, > > On 11 June 2010 22:44, Eli Friedman <eli.friedman at gmail.com> wrote: >> On Fri, Jun 11, 2010 at 10:37 AM, Bill Hart <goodwillhart at googlemail.com> wrote: >>> a) What plans are there to support addition, subtraction, >>> multiplication, division,
2010 Jun 13
2
[LLVMdev] Bignum development
I was able to get the loop to increment from -999 to 0 using IR directly. That got rid of the cmpq. The carry I was after could be obtained using the intrinsic @llvm.uadd.with.overflow.i64, however there is no way to add with carry and have it realise that the resulting *carry out* cannot exceed 1. It actually writes the carry to a byte, and then uses logical operations on it, which slows
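The C-level pattern that clang maps onto @llvm.uadd.with.overflow.i64 is the checked-add builtin; a minimal sketch of a carry-propagating limb add written that way (function and variable names are illustrative, not from the thread):

    #include <stdbool.h>

    /* Sketch: chain carries through the overflow builtin that lowers to
       @llvm.uadd.with.overflow.i64.  At most one of c1/c2 can be set in a
       given iteration, which is exactly the fact the poster wants the
       backend to exploit. */
    void add_n(unsigned long long *r, const unsigned long long *a,
               const unsigned long long *b, int n)
    {
        bool carry = false;
        for (int i = 0; i < n; i++) {
            unsigned long long s, t;
            bool c1 = __builtin_uaddll_overflow(a[i], b[i], &s);
            bool c2 = __builtin_uaddll_overflow(s, (unsigned long long) carry, &t);
            r[i] = t;
            carry = c1 | c2;
        }
    }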
2010 Jun 12
0
[LLVMdev] Bignum development
On 12 June 2010 00:51, Eli Friedman <eli.friedman at gmail.com> wrote: > On Fri, Jun 11, 2010 at 3:28 PM, Bill Hart <goodwillhart at googlemail.com> wrote: >> Hi Eli, >> >> On 11 June 2010 22:44, Eli Friedman <eli.friedman at gmail.com> wrote: >>> On Fri, Jun 11, 2010 at 10:37 AM, Bill Hart <goodwillhart at googlemail.com> wrote:
2010 Jun 13
0
[LLVMdev] Bignum development
Hi Bill- I think, ideally, the backend would be able to match arbitrary-precision arithmetic to add-with-carry or subtract-with-borrow through i65/i33. That would remove the need for the overflow intrinsics entirely. Alistair On 13 Jun 2010, at 02:27, Bill Hart wrote: > I was able to get the loop to increment from -999 to 0 using IR > directly. That got rid of the cmpq. > > The
2010 Jun 13
1
[LLVMdev] Bignum development
I think from the C compiler's point of view, it is going to want it to work for any size above an i64, i.e. all the way up to an i128 so that if the user of the C compiler does this computation with __uint128_t's then it will Do The Right Thing TM. Basically, you want unsigned long a, b, c, d; .... const __uint128_t u = (__uint128_t) a + b; const unsigned long v = u >> 64; const
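A completed, self-contained version of the idiom the snippet starts to spell out; the function name, the second limb pair, and the carry into the high limb are assumptions about where the example was heading, and LP64 (64-bit unsigned long) is assumed:

    /* Add the 2-limb numbers (c:a) and (d:b), with a and b the low limbs,
       letting a 128-bit temporary expose the carry to the optimizer.  The
       carry out of the high limb is dropped in this sketch. */
    void add2(unsigned long a, unsigned long b, unsigned long c, unsigned long d,
              unsigned long *lo, unsigned long *hi)
    {
        const __uint128_t u = (__uint128_t) a + b;          /* low-limb add     */
        const unsigned long v = (unsigned long) (u >> 64);  /* carry out        */
        const __uint128_t w = (__uint128_t) c + d + v;      /* high limbs + cy  */
        *lo = (unsigned long) u;
        *hi = (unsigned long) w;
    }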
2010 Jun 13
2
[LLVMdev] Bignum development
Yeah I had a think about it, and I think intrinsics are the wrong way to do it. So I'd say you are likely right. Bill. On 13 June 2010 04:33, Alistair Lynn <arplynn at gmail.com> wrote: > Hi Bill- > > I think, ideally, the backend would be able to match arbitrary-precision arithmetic to add-with-carry or subtract-with-borrow through i65/i33. That would remove the need for the
2015 Nov 10
2
Generating Big Num addition code which uses ADC (add with carry) instructions
I'm trying to work out LLVM code which generates something similar to the following when adding large multiword numbers stored as separate words: ADD x1 y1 ADC x2 y2 ADC x3 y3 etc, where such a three-argument add like ADC on x86 (which includes a carry in the addition) is available as a machine op. The background to this is that I'm trying to implement fast multiword addition in
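One C-level way to express that ADC chain, as a sketch rather than the poster's actual code; __builtin_addcll is a clang-specific builtin and the function name is made up:

    /* r = x + y over n 64-bit limbs, threading the carry through clang's
       __builtin_addcll so the backend can select ADD/ADC chains. */
    void add_words(unsigned long long *r, const unsigned long long *x,
                   const unsigned long long *y, unsigned n)
    {
        unsigned long long carry = 0;
        for (unsigned i = 0; i < n; i++)
            r[i] = __builtin_addcll(x[i], y[i], carry, &carry);
    }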
2010 Jun 13
0
[LLVMdev] Bignum development
> Yeah I had a think about it, and I think intrinsics are the wrong way > to do it. So I'd say you are likely right. For this to work well, the way the code generators handle flags will need to be improved: currently it is suboptimal, in fact kind of a hack. Ciao, Duncan.
2008 Sep 08
2
[LLVMdev] Integer questions
On Sep 5, 2008, at 3:07 PM, Duncan Sands wrote: > The current maximum the code generators support is i256. If you try > to > use bigger integers it will work fine in the bitcode, but if you try > to do code generation the compiler will crash. FYI, there is one other issue here, PR2660. While codegen in general can handle types like i256, individual targets don't always have
2014 Feb 18
4
[LLVMdev] Optimizing math code
Hello LLVM-dev, I’m writing some crypto math code in C, and trying to make it as simple and portable as possible, so that there won’t be any unforeseen platform bugs. Ideally, I’d like the compiler to do as much work as possible for me, and I’m wondering if you know how to make clang (or gcc, for that matter) do some of this. So I have a couple of questions. If this is the wrong list to ask for
2010 Jun 12
3
[LLVMdev] Bignum development
On Fri, Jun 11, 2010 at 5:41 PM, Bill Hart <goodwillhart at googlemail.com> wrote: > What is i1? Sorry for the really daft question. These short > abbreviations are sometimes hard to look up..... 1-bit integer; in practice, usually used like a boolean, although it supports the full complement of integer operations. >> There have really only been sporadic queries about practical
2008 Sep 08
0
[LLVMdev] Integer questions
On Mon, Sep 8, 2008 at 12:08 PM, Dan Gohman <gohman at apple.com> wrote: > FYI, there is one other issue here, PR2660. While codegen in > general can handle types like i256, individual targets don't always > have calling convention rules to cover them. For example, returning > an i128 on x86-32 or an i256 on x86-64 doesn't fit in the > registers designated
2010 Jun 12
3
[LLVMdev] Bignum development
On Fri, Jun 11, 2010 at 7:09 PM, Bill Hart <goodwillhart at googlemail.com> wrote: >>> There are obviously numerous ways we might use LLVM to aid development >>> of "bsdnt". I'll keep exploring those options. It sounds like, for the >>> time being, analysing existing code output and looking for ways to >>> improve it on certain arches is
2014 Feb 18
2
[LLVMdev] Optimizing math code
On Feb 17, 2014, at 6:38 PM, Stephen Checkoway <s at pahtak.org> wrote: > > On Feb 17, 2014, at 8:10 PM, Michael Hamburg <mike at shiftleft.org> wrote: > >> First, addition. I have multiprecision integer objects, and I’d like to add them component-wise (likewise, subtract, negate, mask…). For example: >> >> struct mp { >> int limb[8]; >> }
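A minimal sketch of the component-wise addition being described; the struct layout comes from the snippet, while the function name and the assumption that carries are resolved elsewhere (a redundant representation) are mine:

    struct mp { int limb[8]; };

    /* Limb-wise add with no carry propagation; the hope is that the
       compiler vectorizes this loop. */
    static void mp_add(struct mp *out, const struct mp *a, const struct mp *b)
    {
        for (int i = 0; i < 8; i++)
            out->limb[i] = a->limb[i] + b->limb[i];
    }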
2011 Mar 30
1
[LLVMdev] Bignums
Hello all! I'm working on a library with bignum support, and I wanted to try LLVM as an apparently simpler and more portable system to my current design (a Haskell script which spits out mixed C and assembly). Porting the script to use the LLVM bindings instead of the current hack was pretty easy. But I have a few remaining questions: (1) Are bignums exposed to any higher-level
2010 Jun 12
0
[LLVMdev] Bignum development
On 12 June 2010 03:24, Eli Friedman <eli.friedman at gmail.com> wrote: > On Fri, Jun 11, 2010 at 7:09 PM, Bill Hart <goodwillhart at googlemail.com> wrote: >>>> There are obviously numerous ways we might use LLVM to aid development >>>> of "bsdnt". I'll keep exploring those options. It sounds like, for the >>>> time being, analysing
2015 Feb 02
3
[LLVMdev] LLVM IR i128
For 64-bit X86 code we have had good success with using up to 128-bit integers (this includes, say, 36-bit or even 2-bit integers). On Mon, Feb 2, 2015 at 4:03 PM, Alejandro Velasco <gollumdelperdiguero at gmail.com> wrote: > I asked a similar question last year here. Operations on types iN with no > direct translation into one assembly instruction seem to be translated into >
2017 Feb 16
2
multiprecision add/sub
Stephen Canon via llvm-dev <llvm-dev at lists.llvm.org> wrote: > > Why do you think this requires new intrinsics instead of teaching the optimizer what to do with the existing intrinsics? IMO, as a multiprecision math library maker, the "teaching the optimizer what to do with the existing intrinsics" approach is much better as long as it can be made to work. If one is
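For reference, the kind of portable-C carry idiom such a library might write today and hope the optimizer matches to add-with-carry; purely illustrative, names not from the thread:

    /* r = x + y over n limbs using only portable C; c1 and c2 can never
       both be set, so carry stays 0 or 1. */
    void add_n_portable(unsigned long long *r, const unsigned long long *x,
                        const unsigned long long *y, unsigned n)
    {
        unsigned long long carry = 0;
        for (unsigned i = 0; i < n; i++) {
            unsigned long long s = x[i] + carry;
            unsigned long long c1 = s < carry;      /* overflow of first add  */
            unsigned long long t = s + y[i];
            unsigned long long c2 = t < y[i];       /* overflow of second add */
            r[i] = t;
            carry = c1 | c2;
        }
    }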
2008 Sep 05
3
[LLVMdev] Integer questions
First off, most of my information about the integer representation in LLVM is from http://llvm.org/docs/LangRef.html#t_integer and I could use some things cleared up. First, I guess that smaller integer sizes, say, i1 (boolean) are stuffed into a full word size for the CPU it is compiled on (so 8 bits, or 32 bits or whatever). What if someone made an i4 and compiled it on 32/64 bit windows/nix/bsd