search for: _float16

Displaying 16 results from an estimated 16 matches for "_float16".

2019 Jan 22
4
_Float16 support
I'd like to start a discussion about how clang supports _Float16 for target architectures that don't have direct support for 16-bit floating point arithmetic. The current clang language extensions documentation says, "If half-precision instructions are unavailable, values will be promoted to single-precision, similar to the semantics of __fp16 except t...
2019 Jan 24
4
[cfe-dev] _Float16 support
On 24 Jan 2019, at 4:46, Sjoerd Meijer wrote: > Hello, > > I added _Float16 support to Clang and codegen support in the AArch64 > and ARM backends, but have not looked into x86. Ahmed is right: > AArch64 is fine, only a few ACLE intrinsics are missing. ARM has rough > edges: scalar codegen should be mostly fine, vector codegen needs some > more work. > ...
2019 Jan 24
2
[cfe-dev] _Float16 support
....com> Sent: Wednesday, January 23, 2019 3:30 PM To: Kaylor, Andrew <andrew.kaylor at intel.com> Cc: cfe-dev at lists.llvm.org; llvm-dev <llvm-dev at lists.llvm.org>; Craig Topper <craig.topper at gmail.com>; Richard Smith <richard at metafoo.co.uk> Subject: Re: [cfe-dev] _Float16 support Hey Andy, On Tue, Jan 22, 2019 at 10:38 AM Kaylor, Andrew via cfe-dev <cfe-dev at lists.llvm.org> wrote: > I'd like to start a discussion about how clang supports _Float16 for target architectures that don't have direct support for 16-bit floating point arithmetic. Thank...
2019 Feb 11
2
[cfe-dev] [8.0.0 Release] rc2 has been tagged
...ithm:640: > In file included from > (...)/sysroot-x86_64-linux-gnu/usr/include/c++/v1/initializer_list:47: > In file included from > (...)/sysroot-x86_64-linux-gnu/usr/include/c++/v1/cstddef:110: > > (...)/sysroot-x86_64-linux-gnu/usr/include/c++/v1/type_traits:740:56: > error: _Float16 is not supported on this target > template <> struct __libcpp_is_floating_point<_Float16> : > public true_type {}; > This is quite odd, because libc++ trunk has no mentions of `_Float16`, and I see none here: https://llvm.org/svn/llvm-project/libcxx/branches/relea...
2019 Feb 07
9
[8.0.0 Release] rc2 has been tagged
Dear testers, 8.0.0-rc2 has been tagged from the release_80 branch at r353413. Please run the test script, share your results, and upload binaries. I'll get the source tarballs and docs published as soon as possible, and binaries as they become available. Thanks, Hans
2019 Apr 12
5
LLVM 7.1.0-final has been tagged
Hi, I've just tagged LLVM 7.1.0-final. Testers, please upload the final binaries. Thanks, Tom
2017 Dec 04
2
[RFC] Half-Precision Support in the Arm Backends
Hi, I am working on C/C++ language support for the Armv8.2-A half-precision instructions. I've added support for _Float16 as a new source language type to Clang. _Float16 is a C11 extension type for which arithmetic is well defined, as opposed to e.g. __fp16 which is a storage-only type. I then fixed up the AArch64 backend, which was mostly straightforward: this involved making operations on f16 legal when FullFP16 is...
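A small illustration of the distinction drawn here (a sketch, not from the original thread): __fp16 is storage-only, so its arithmetic is defined by promotion to single precision, whereas _Float16 arithmetic is genuinely half precision.

    /* Sketch: __fp16 operands are promoted to float, so the addition in f()
       is a single-precision operation; _Float16 is an arithmetic type, so
       the addition in g() is a half-precision operation rounded to half. */
    __fp16   s1, s2;   /* storage-only type */
    _Float16 h1, h2;   /* arithmetic type */

    float    f(void) { return s1 + s2; }   /* (float)s1 + (float)s2 */
    _Float16 g(void) { return h1 + h2; }   /* half-precision add */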
2020 Jan 29
3
Floating point semantic modes
...ng the library does the right thing), but we're translating __builtin_isnan() to x!=x. That's not what we should be doing if except_behavior isn't "ignore". > excess precision handling is missing from this list which matters for x87 and m68k fpu support and may matter for _Float16 implementations that fall back to _Float32 arithmetic. Yeah, we don't currently have any support for controlling that, at least in the x87 case. I think our current strategy is nothing more than setting FLT_EVAL_METHOD to reflect that we might not be using source precision for intermediate res...
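A sketch of the __builtin_isnan() concern mentioned above (assumptions: a signaling NaN operand and a target where fenv access behaves as expected): an ordinary x != x comparison may raise FE_INVALID, while isnan() is required not to raise exceptions, which is why lowering it to x != x is wrong when exception behavior is not "ignore".

    #include <fenv.h>
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        volatile double snan = __builtin_nans("");   /* signaling NaN */

        feclearexcept(FE_ALL_EXCEPT);
        volatile int neq = (snan != snan);           /* comparison may raise FE_INVALID */
        printf("x != x: %d, FE_INVALID: %d\n", neq, fetestexcept(FE_INVALID) != 0);

        feclearexcept(FE_ALL_EXCEPT);
        int isn = isnan(snan);                       /* must not raise per the C standard */
        printf("isnan : %d, FE_INVALID: %d\n", isn, fetestexcept(FE_INVALID) != 0);
        return 0;
    }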
2019 Dec 10
2
TypePromoteFloat loses intermediate rounding operations
...ke > the obvious shortcut for fast-math. > > > > The “promote-to-larger” strategy doesn’t really round correctly in > general, but it works for specific pairs of operator/operation. For > example, for f16 fadd in the default rounding mode, “a+b” is exactly > equivalent to “(_Float16)((float)a+(float)b)”. Not sure if this works for > all f16 operations, and not sure how much we care if it doesn’t. > > There aren’t any calling convention implications here for ARM targets; not > sure about other targets. On 32-bit ARM, clang explicitly coerces half > values to a...
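The equivalence quoted above, written out as C (a sketch assuming the default rounding mode; as noted, it holds for a single operation but not necessarily once operations are chained without intermediate rounding):

    /* For one f16 add in round-to-nearest, promoting to float, adding, and
       rounding back to half yields the correctly rounded half result, so
       these two functions should agree. */
    _Float16 add_native(_Float16 a, _Float16 b) {
        return a + b;                             /* direct half-precision add */
    }

    _Float16 add_promoted(_Float16 a, _Float16 b) {
        return (_Float16)((float)a + (float)b);   /* promote, add, round back */
    }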
2019 Dec 10
2
TypePromoteFloat loses intermediate rounding operations
For the following C code

  __fp16 x, y, z, w;
  void foo() {
    x = y + z;
    x = x + w;
  }

clang produces IR that extends each operand to float and then truncates to half before assigning to x. Like this

  define dso_local void @foo() #0 !dbg !18 {
    %1 = load half, half* @y, align 2, !dbg !21
    %2 = fpext half %1 to float, !dbg !21
    %3 = load half, half* @z, align 2, !dbg !22
    %4 = fpext half %3 to float, !dbg
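For reference, a source-level sketch of what that IR is meant to compute (illustrative only): each half-precision add is performed in float and rounded back to half before feeding the next operation, which is exactly the intermediate rounding the thread title says gets lost.

    /* Sketch: the semantics clang's IR expresses for the two assignments. */
    __fp16 x, y, z, w;

    void foo(void) {
        x = (__fp16)((float)y + (float)z);   /* first sum rounded to half */
        x = (__fp16)((float)x + (float)w);   /* second add uses the rounded value */
    }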
2018 Jan 18
0
[RFC] Half-Precision Support in the Arm Backends
.... use SPR:$a and change it in (f32 SPR:$a): -- this will be a massive patch, -- but hopefully will not have a problem in the rewrite rules. - Then, I also need to modify a clang workaround, which should insert the proper up- and down-converts for half types that are generated from the usage of _Float16, similarly to what we already do for __fp16. This workaround can then later be replaced by the CCState implementation. Alternative 3: ---------------- This is the best approach, where I start looking into the AAPCS compliance issues first: - Fix the AAPCS compliance issues by implementing our o...
2017 Dec 06
2
[RFC] Half-Precision Support in the Arm Backends
Thanks a lot for the suggestions! I will look into using vld1/vst1; that sounds good. I am custom-lowering the bitcasts; that's now the only place where FP_TO_FP16 and FP16_TO_FP nodes are created, to avoid inefficient code generation. I will double-check whether I can achieve the same without using these nodes (because I really would like to get rid of them completely). Cheers, Sjoerd.
2018 Jan 18
1
[RFC] Half-Precision Support in the Arm Backends
.... use SPR:$a and change it in (f32 SPR:$a): -- this will be a massive patch, -- but hopefully will not have a problem in the rewrite rules. - Then, I also need to modify a clang workaround, which should insert the proper up- and down-converts for half types that are generated from the usage of _Float16, similarly to what we already do for __fp16. This workaround can then later be replaced by the CCState implementation. Alternative 3: ---------------- This is the best approach, where I start looking into the AAPCS compliance issues first: - Fix the AAPCS compliance issues by implementing our o...
2020 Jan 27
11
Floating point semantic modes
Hi all, I'm trying to put together a set of rules for how the various floating point semantic modes should be handled in clang. A lot of this information will be relevant to other front ends, but the details are necessarily bound to a front end implementation so I'm framing the discussion here in terms of clang. Other front ends can choose to follow clang or not. The existence of this set
2020 Jan 28
3
Floating point semantic modes
...off } > > no_signed_zeros { on, off } > > allow_reciprocal { on, off } > > allow_approximate_fns { on, off } > > allow_reassociation { on, off } > > excess precision handling is missing from this list which matters for x87 and > m68k fpu support and may matter for _Float16 implementations that fall back > to _Float32 arithmetic. > > the granularity of these knobs is also interesting (expression, code block, > function or translation unit), iso c pragmas work on code block level. > > > ====================== > > FP models > > =======...
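On the granularity point, a sketch of block-level control using the ISO C and clang fp pragmas (pragma spellings as documented in clang's language extensions; treat the example as illustrative):

    float f(float a, float b, float c, float d) {
        float x;
        {
            #pragma STDC FP_CONTRACT ON        /* allow fused multiply-add in this block */
            #pragma clang fp reassociate(on)   /* allow reassociation in this block */
            x = a * b + c * d;
        }
        /* outside the block, the translation-unit defaults apply again */
        return x + a * d;
    }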
2019 Feb 04
7
[RFC] Vector Predication
On Mon, 4 Feb 2019 at 22:04, Simon Moll <moll at cs.uni-saarland.de> wrote: > On 2/4/19 9:18 PM, Robin Kruppe wrote: > > > > On Mon, 4 Feb 2019 at 18:15, David Greene via llvm-dev < > llvm-dev at lists.llvm.org> wrote: > >> Simon Moll <moll at cs.uni-saarland.de> writes: >> >> > You are referring to the sub-vector sizes, if i am