Hi,
I am working on x86-64 with a C frontend that folds sizeof(long double)
to 12. For now, I am trying to avoid modifying that part of the frontend
by finding a way to tell LLVM that 12 bytes are allocated for x86_fp80.
To explore this, I tried an experiment with llvm-gcc:
% llvm-gcc --version | head -1
llvm-gcc (GCC) 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2.8)
% llvmc --version
Low Level Virtual Machine (http://llvm.org/):
llvm version 2.8
Optimized build.
Built Oct 25 2010 (10:39:46).
Host: x86_64-unknown-linux-gnu
Host CPU: corei7
Registered Targets:
(none)
% cat > test.c
#include <stdio.h>
#include <stdlib.h>
#define SIZE 5
int
main(void)
{
long double *a = malloc(sizeof(long double) * SIZE);
for (int i = 0; i < SIZE; ++i)
a[i] = i+1;
for (int i = 0; i < SIZE; ++i)
printf ("a[%d] = %Lf\n", i, a[i]);
free (a);
return 0;
}
% llvm-gcc -std=c99 -m96bit-long-double -emit-llvm -S -o test.ll test.c
% llvmc test.ll
% valgrind ./a.out |& grep Invalid
==3882== Invalid write of size 4
==3882== Invalid write of size 2
==3882== Invalid read of size 4
==3882== Invalid read of size 2
Looking inside test.ll, I see f80:128:128, but I also see sizeof(long
double)*5 folded into 60. Changing f80:128:128 to f80:96:96 does not fix
the errors reported by valgrind. If I instead fix the folded constant,
the errors go away, of course.
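To make this concrete, the relevant pieces of test.ll look roughly like
the sketch below (I have shortened the datalayout string and invented the
value names; the real llvm-gcc output differs in detail):

; partial datalayout: f80 stored/aligned on 128 bits = 16 bytes
target datalayout = "e-f80:128:128-n8:16:32:64"

declare i8* @malloc(i64)

define x86_fp80* @alloc_a() {
entry:
  ; sizeof(long double)*5 was already folded to 60 by the frontend...
  %call = call i8* @malloc(i64 60)
  %a = bitcast i8* %call to x86_fp80*
  ; ...but with f80:128:128 each element occupies a 16-byte slot, so
  ; a[4] starts at byte offset 64, beyond the 60-byte allocation.
  ret x86_fp80* %a
}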
Does llvm-gcc not intend to support -m96bit-long-double?
Is there any way to instruct LLVM to assume 12 bytes are allocated for
x86_fp80? I suppose I could use [12 x i8] and then bitcast to x86_fp80
when I want to access the value. Is that the best way to handle this
(other than changing the way my frontend folds sizeof)?
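Concretely, the workaround I have in mind would look something like the
sketch below (function and value names are invented, and I assume align 4
on the store, matching what -m96bit-long-double would imply):

declare i8* @malloc(i64)

define void @store_one() {
entry:
  ; allocate five 12-byte slots: the size comes from [12 x i8],
  ; so nothing depends on the f80 entry in the datalayout
  %raw = call i8* @malloc(i64 60)
  %a = bitcast i8* %raw to [12 x i8]*
  ; index with a 12-byte stride, then bitcast the element pointer
  ; to x86_fp80* only for the actual load/store
  %slot = getelementptr [12 x i8]* %a, i64 4
  %p = bitcast [12 x i8]* %slot to x86_fp80*
  ; an x86_fp80 store writes 10 bytes, which fits in the 12-byte slot
  store x86_fp80 0xK3FFF8000000000000000, x86_fp80* %p, align 4  ; 1.0
  ret void
}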
Thanks.
Hi Joel,

> I am working on x86-64 with a C frontend that folds sizeof(long double)
> to 12. For now, I am trying to avoid modifying that part of the frontend
> by finding a way to tell LLVM that 12 bytes are allocated for x86_fp80.

I think the "fact" that x86 long double is 16-byte aligned on x86-64 is
hard-wired in. I'm not sure how hard this would be to control via a
command line option (i.e. -m96bit-long-double).

> Looking inside test.ll, I see f80:128:128, but I also see sizeof(long
> double)*5 folded into 60. Changing f80:128:128 to f80:96:96 does not fix
> the errors reported by valgrind. If I instead fix the folded constant,
> the errors go away, of course.

If you had built llvm-gcc with checking enabled, it would have aborted.

> Does llvm-gcc not intend to support -m96bit-long-double?

Please open a bug report asking for support for -m96bit-long-double, to
ensure this issue is not forgotten.

Ciao,

Duncan.
On 01 Nov 2010, at 18:37, Duncan Sands wrote:

> I think the "fact" that x86 long double is 16-byte aligned on x86-64 is
> hard-wired in.

Note that it's not just about alignment, but mainly about the reserved
storage size.

> I'm not sure how hard this would be to control via a
> command line option (i.e. -m96bit-long-double).

Is there no different way to go about this? Our compiler currently
supports the x87 long double type as

a) 10 bytes, for Turbo Pascal and Delphi compatibility
b) 12 bytes, for non-Darwin x86 ABI/C compatibility
c) 16 bytes, for Darwin x86 and x86-64 ABI/C compatibility

Long doubles of type a) and c), or of type b) and c), can occur in the
same compilation unit. A command line switch does not offer sufficient
flexibility in this case.

Jonas
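P.S. In IR terms, I picture the three variants as distinct storage types
along these lines (a sketch; the type names are invented):

%tp_extended = type [10 x i8]  ; a) Turbo Pascal / Delphi
%c_x86_ld    = type [12 x i8]  ; b) non-Darwin x86 C
%c_x64_ld    = type [16 x i8]  ; c) Darwin x86 and x86-64 C

with x86_fp80 used only for the values loaded from or stored into these
slots. A single per-module or command-line setting cannot express this.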
Hi Duncan,

On Mon, 1 Nov 2010, Duncan Sands wrote:

>> Does llvm-gcc not intend to support -m96bit-long-double?
>
> Please open a bug report asking for support for -m96bit-long-double,
> to ensure this issue is not forgotten.

Given the thread that followed this post, I'm not sure whether you still
want a bug report. Is the plan to fix -m96bit-long-double, or just to
remove it and expect users to bitcast a [12 x i8] instead?

Thanks.