Hi everyone!

The fptoui/fptosi instructions are currently specified to return a poison value if the rounded-towards-zero floating point number cannot be represented by the target integer type. The motivation for this behavior is that overflowing float-to-int casts in C are undefined behavior.

However, many newer languages prefer a float-to-integer cast that is well-defined for all input values. A commonly chosen semantics is to saturate towards the minimum and maximum values of the integer type, and to map NaN values to zero. An extensive discussion of this issue for the Rust language can be found at https://github.com/rust-lang/rust/issues/10184.

Unfortunately, implementing this behavior efficiently is not easy right now, because different instruction sequences need to be generated depending on the target architecture. On ARM the vcvt instruction directly exposes the desired saturation behavior. On X86 good instruction sequences vary depending on the size of the floating point number and on the size and signedness of the target integer type.

I think there are broadly three ways in which the current situation can be improved:

1. Provide a fptoui/fptosi variant that produces target-specific values instead of a poison value for unrepresentable inputs. The result would be whatever is fastest for the given target.

2. Provide an intrinsic for saturating floating point to int conversions, as described above.

3. Provide an intrinsic for floating point to int conversions which additionally indicates whether the value was representable, similarly to the existing XXX.with.overflow family of intrinsics.

I think that point 1 is both the most pressing and the easiest to realize. This would resolve the immediate soundness problem in Rust (if not in a great way). Even if Rust specifies that float-to-int conversions are saturating, we'd still want to support this kind of operation for performance reasons, and it would be preferable if performing a fast float-to-int conversion did not require dropping into unsafe code.

The way I would imagine this to work is that fptoui/fptosi gain a flag similar to add nsw/nuw -- let's call it "fptoui representable" for now. If the flag is not specified, the return value for unrepresentable inputs is target-specific. If it is specified, the return value is poison. (Alternatively, the meaning of the flag could be inverted.)

From a cursory inspection of the code, there should not be too many places that care about the presence of this flag. The main one is of course constant folding, but there are probably others. (I could imagine that the Float2Int pass makes assumptions here, but I haven't looked too carefully.)

Point 2 is also important, because specifying saturation as the default behavior for float-to-int casts is becoming increasingly common. This would need two new intrinsics, such as:

iYY llvm.fptoui.sat.fXX.iYY(fXX %a)
iYY llvm.fptosi.sat.fXX.iYY(fXX %a)

There is some precedent here with the recently introduced llvm.sadd.sat and llvm.uadd.sat intrinsics for saturating integer addition. The wasm backend also has custom llvm.wasm.trunc.saturate intrinsics for this purpose.

These intrinsics would also need corresponding SelectionDAG nodes. A generic lowering would use a number of comparison (or min/max) instructions, while target-specific lowerings will be able to do better (e.g. a single instruction on ARM or wasm).
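To make the intended semantics concrete, here is a rough sketch in plain Rust (not tied to any particular target or intrinsic) of what such a generic lowering would compute for a signed f64 to i32 conversion; the function name and the f64/i32 pairing are chosen purely for illustration:

fn fptosi_sat_f64_i32(x: f64) -> i32 {
    // Saturating semantics: NaN maps to zero, values below i32::MIN
    // saturate to i32::MIN, values above i32::MAX saturate to i32::MAX.
    if x.is_nan() {
        0
    } else if x <= i32::MIN as f64 {
        i32::MIN
    } else if x >= i32::MAX as f64 {
        i32::MAX
    } else {
        // In-range values use the ordinary truncating (round-towards-zero) cast.
        x as i32
    }
}

fn main() {
    assert_eq!(fptosi_sat_f64_i32(1e10), i32::MAX);
    assert_eq!(fptosi_sat_f64_i32(-1e10), i32::MIN);
    assert_eq!(fptosi_sat_f64_i32(f64::NAN), 0);
    assert_eq!(fptosi_sat_f64_i32(-1.9), -1); // truncation towards zero
}

A target-specific lowering would replace this whole comparison sequence with, e.g., a single vcvt on ARM.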
Point 3 is less important. Having a "with overflow" intrinsic would make it easy to implement custom handling of unrepresentable values, e.g. to generate an error in debug builds. The intrinsics would go something like this:

{iYY, i1} llvm.fptoui.with.overflow.fXX.iYY(fXX %a)
{iYY, i1} llvm.fptosi.with.overflow.fXX.iYY(fXX %a)

If the overflow flag is true, the result could be specified to be either target-specific or undef.
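To illustrate the intended result/flag pairing, here is a similarly rough Rust sketch of the equivalent check; again the name and the f64 to i32 choice are only for illustration, and the value returned in the overflow case is a placeholder, since the proposal leaves it either target-specific or undef:

fn fptosi_with_overflow_f64_i32(x: f64) -> (i32, bool) {
    // The flag reports whether the rounded-towards-zero value is
    // representable in i32; NaN is always reported as overflow.
    if x.is_nan() {
        return (0, true);
    }
    let t = x.trunc(); // round towards zero, still as a float
    if t < i32::MIN as f64 || t > i32::MAX as f64 {
        (0, true) // unrepresentable; the placeholder result is arbitrary
    } else {
        (t as i32, false)
    }
}

fn main() {
    assert_eq!(fptosi_with_overflow_f64_i32(3.7), (3, false));
    assert_eq!(fptosi_with_overflow_f64_i32(1e10), (0, true));
    assert_eq!(fptosi_with_overflow_f64_i32(f64::NAN), (0, true));
}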
---

I would like to have some feedback on whether there is interest in improving this area, and in particular:

a) Whether introducing a flag to control poison vs. target-specific values for fptoui/fptosi is reasonable. Looking through the language reference, it is somewhat unusual to have target-specific behavior for a fundamental instruction.

b) Whether introducing first-class saturating float-to-int cast intrinsics is reasonable.

Regards,
Nikita


I don't think it's reasonable to have a variant of fptoui/fptosi that produces a target-specific result. It's not good for the language definition, and I wouldn't think it is good for the front end using it. If the actual results you get from a program depend on where you run it, that sounds like undefined behavior. Have I misunderstood your proposal?

I do think adding intrinsics to allow representation of the saturating behavior is a reasonable approach.

-Andy
On 11/05/2018 07:26 AM, Nikita Popov via llvm-dev wrote:
> A commonly chosen semantics is to saturate towards the minimum and maximum values of the integer type, and to map NaN values to zero.

I think that this is fine, motivationally, and we might even want a dedicated intrinsic if the IR needed to represent the lowering will, later in the pipeline, be difficult to pattern match. However, if you want the casts to be well defined, then you should define their behavior. "Do some fast thing" is not really a definition, and I don't believe that we should give target-independent constructs target-dependent behavior.

-Hal
--
Hal Finkel
Lead, Compiler Technology and Programming Languages
Leadership Computing Facility
Argonne National Laboratory
I would be interested in learning what set of semantics is actually in use for float-to-int conversion. If the only two are 1) undefined behavior if unrepresentable and 2) saturate to int_{min,max} with NaN going to zero, then I think it makes sense to expose both of those natively in the IR. If the set is much larger, I think separate intrinsics for each behavior would make sense.

It would be nice to get rid of the wasm-specific intrinsic for behavior (2) and replace it with a target-independent intrinsic or IR, since this behavior is not actually particular to WebAssembly.