Jon Chesterfield via llvm-dev
2017-Sep-15 21:28 UTC
[llvm-dev] What should a truncating store do?
They are starting to look complicated. The patch linked is interesting; perhaps v1 vectors are special-cased. It shouldn't be too onerous to work out what one or two in-tree back ends do by experimentation. Thanks again, it's great to have context beyond the source.

On Fri, Sep 15, 2017 at 9:41 PM, Friedman, Eli <efriedma at codeaurora.org> wrote:

> On 9/15/2017 12:10 PM, Jon Chesterfield wrote:
>
>> OK, I'm clear on scalars. Data races are thankfully OK in this context.
>>
>> Densely packing vectors sounds efficient and is clear in the case where
>> lanes * width is a multiple of 8 bits. I don't think I understand how it
>> works in other cases.
>>
>> If we could take store <4 x i8> truncating to <4 x i7> as an example.
>> This can be converted into four scalar i8 -> i7 stores with corresponding
>> increments to the address, in which case the final layout in memory
>> is 0b01111111011111110111111101111111. Or it can be written as a packed
>> vector, which I think would resemble 0b00001111111111111111111111111111.
>>
>> This would mean the memory layout changes depending on how/whether the
>> legaliser breaks large vectors down into smaller types. Is this the case?
>> For example, <4 x i32> => <4 x i31> converts to two <2 x i32> => <2 x i31>
>> stores on a target with <2 x i32> legal, but would not be split if
>> <4 x i32> were declared legal.
>
> Vectors get complicated; I don't recall all the details of what the code
> generator currently does/is supposed to do. See also
> https://bugs.llvm.org/show_bug.cgi?id=31265 .
>
> -Eli
>
> --
> Employee of Qualcomm Innovation Center, Inc.
> Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux
> Foundation Collaborative Project
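To make the two candidate layouts in Jon's <4 x i8> -> <4 x i7> example concrete, here is a small Python model (not LLVM code; the function names and the lane-0-in-the-low-bits convention are illustrative assumptions) that computes both memory images:

```python
def per_element_layout(values, elem_bits, byte_bits=8):
    """Each truncated element zero-extended into its own byte
    (what four scalarized i8 -> i7 stores would produce)."""
    out = 0
    for i, v in enumerate(values):
        out |= (v & ((1 << elem_bits) - 1)) << (i * byte_bits)
    return out

def bitpacked_layout(values, elem_bits):
    """Elements packed densely with no padding between lanes."""
    out = 0
    for i, v in enumerate(values):
        out |= (v & ((1 << elem_bits) - 1)) << (i * elem_bits)
    return out

# Four i8 lanes of 0xff, each truncated to i7 (0b1111111):
vals = [0x7F] * 4
print(bin(per_element_layout(vals, 7)))  # 0b1111111011111110111111101111111
print(bin(bitpacked_layout(vals, 7)))    # 28 set bits, densely packed
```

The first result matches the 0b01111111011111110111111101111111 image from scalarized stores; the second is the 28-bit packed image. The two differ, which is exactly the ambiguity being discussed.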
Björn Pettersson A via llvm-dev
2017-Sep-25 16:14 UTC
[llvm-dev] What should a truncating store do?
(Not sure if this exactly maps to “truncating store”, but I think it at least touches some of the subjects discussed in this thread.)

Our out-of-tree target needs several patches to get things working correctly for us.

We have introduced i24 and i40 types in ValueTypes/MachineValueTypes (in addition to the normal pow-of-2 types). And we have vectors of those (v2i40, v4i40). And the byte size in our target is 16 bits. When storing an i40 we need to store it as three 16-bit bytes, i.e. 48 bits. When storing a v4i40 vector it will be stored as 4x48 bits.

One thing that we have had to patch is the getStoreSize() method in ValueTypes/MachineValueTypes, where we assume that vectors are bitpacked when the element size is smaller than the byte size (“BitsPerByte”):

   /// Return the number of bytes overwritten by a store of the specified value
   /// type.
   unsigned getStoreSize() const {
 -    return (getSizeInBits() + 7) / 8;
 +    // We assume that vectors with elements smaller than the byte size are
 +    // bitpacked, and that elements larger than the byte size should be padded
 +    // (e.g. the i40 type for Phoenix is stored using 3 bytes (48 bits)).
 +    bool PadElementsToByteSize =
 +        isVector() && getScalarSizeInBits() >= BitsPerByte;
 +    if (PadElementsToByteSize)
 +      return getVectorNumElements() * getScalarType().getStoreSize();
 +    return (getSizeInBits() + (BitsPerByte - 1)) / BitsPerByte;
   }

The patch seems to work for in-tree-target tests as well as our out-of-tree target. Whether it is a correct assumption for all targets is beyond my knowledge. Maybe only i1 vectors should be bitpacked?

Anyway, I think the bitpacked case is very special (we do not use it for our target…). AFAIK bitcast is defined as writing to memory followed by a load using a different type. And I think that doing several scalar operations should give the same result as when using vectors. So bitcast of bitpacked vectors should probably be avoided?
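As a sanity check, the patched logic can be modeled in a few lines of Python (a sketch only; BITS_PER_BYTE, the function name, and the example types are illustrative, not the actual LLVM API):

```python
# Model of the patched getStoreSize() for a target with 16-bit bytes.
BITS_PER_BYTE = 16

def store_size_in_bytes(elem_bits, num_elems=None):
    """Store size of a scalar (num_elems=None) or vector type."""
    is_vector = num_elems is not None
    if is_vector and elem_bits >= BITS_PER_BYTE:
        # Elements at least a byte wide: pad each element to whole bytes.
        return num_elems * store_size_in_bytes(elem_bits)
    # Scalars, and vectors of sub-byte elements (assumed bitpacked):
    total_bits = elem_bits * (num_elems or 1)
    return (total_bits + BITS_PER_BYTE - 1) // BITS_PER_BYTE

print(store_size_in_bytes(40))      # i40   -> 3 bytes (48 bits)
print(store_size_in_bytes(40, 4))   # v4i40 -> 12 bytes (4 x 48 bits)
print(store_size_in_bytes(1, 16))   # v16i1 -> 1 byte (bitpacked)
```

This reproduces the sizes described above: an i40 occupies three 16-bit bytes, and a v4i40 occupies 4x48 bits rather than a densely packed 160 bits.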
This also reminded me of the following test case that is in trunk: test/CodeGen/X86/pr20011.ll

  %destTy = type { i2, i2 }

  define void @crash(i64 %x0, i64 %y0, %destTy* nocapture %dest) nounwind {
  ; X64-LABEL: crash:
  ; X64:       # BB#0:
  ; X64-NEXT:    andl $3, %esi
  ; X64-NEXT:    movb %sil, (%rdx)
  ; X64-NEXT:    andl $3, %edi
  ; X64-NEXT:    movb %dil, (%rdx)
  ; X64-NEXT:    retq
    %x1 = trunc i64 %x0 to i2
    %y1 = trunc i64 %y0 to i2
    %1 = bitcast %destTy* %dest to <2 x i2>*
    %2 = insertelement <2 x i2> undef, i2 %x1, i32 0
    %3 = insertelement <2 x i2> %2, i2 %y1, i32 1
    store <2 x i2> %3, <2 x i2>* %1, align 1
    ret void
  }

As you can see by the “X64” checks, the behavior is quite weird. Both movb instructions write to the same address, so the result of the store <2 x i2> will be the same as when only storing one of the elements. Is this really expected?

We have emailed Simon Pilgrim, who added the test case to show that we no longer crash on it (see https://bugs.llvm.org/show_bug.cgi?id=20011). But even if the compiler doesn't crash, the behavior seems wrong to me.

Regards,
Björn Pettersson
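For the <2 x i2> test case, a bitpacked interpretation would put both lanes in the low four bits of a single byte, whereas the generated x86 stores both lanes to the same byte so the last store wins. A Python sketch of the two outcomes (hypothetical semantics, since the correct behavior is precisely what is in question here):

```python
def packed_2xi2(x, y):
    """Bitpacked store of <2 x i2>: lane 0 in bits 0-1, lane 1 in bits 2-3."""
    return (x & 3) | ((y & 3) << 2)

def observed_lowering(x, y):
    """What the generated x86 actually does: both movb instructions
    write the same byte, so only the second store (of lane 0) survives."""
    return x & 3

print(packed_2xi2(0b10, 0b01))       # 0b0110 -> 6: both lanes preserved
print(observed_lowering(0b10, 0b01)) # 2: lane 1 is lost entirely
```

The two results differ for any nonzero second lane, which demonstrates the lost-store problem independent of which packing convention is ultimately correct.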
Friedman, Eli via llvm-dev
2017-Sep-25 18:08 UTC
[llvm-dev] What should a truncating store do?
On 9/25/2017 9:14 AM, Björn Pettersson A wrote:

> (Not sure if this exactly maps to “truncating store”, but I think it
> at least touches some of the subjects discussed in this thread.)
>
> Our out-of-tree target needs several patches to get things working
> correctly for us.
>
> We have introduced i24 and i40 types in ValueTypes/MachineValueTypes
> (in addition to the normal pow-of-2 types). And we have vectors of
> those (v2i40, v4i40). And the byte size in our target is 16 bits.
>
> When storing an i40 we need to store it as three 16-bit bytes, i.e.
> 48 bits. When storing a v4i40 vector it will be stored as 4x48 bits.
>
> One thing that we have had to patch is the getStoreSize() method in
> ValueTypes/MachineValueTypes, where we assume that vectors are
> bitpacked when the element size is smaller than the byte size
> (“BitsPerByte”):
>
>    /// Return the number of bytes overwritten by a store of the
>    /// specified value type.
>    unsigned getStoreSize() const {
>  -    return (getSizeInBits() + 7) / 8;
>  +    // We assume that vectors with elements smaller than the byte size
>  +    // are bitpacked, and that elements larger than the byte size should
>  +    // be padded (e.g. the i40 type for Phoenix is stored using 3 bytes
>  +    // (48 bits)).
>  +    bool PadElementsToByteSize =
>  +        isVector() && getScalarSizeInBits() >= BitsPerByte;
>  +    if (PadElementsToByteSize)
>  +      return getVectorNumElements() * getScalarType().getStoreSize();
>  +    return (getSizeInBits() + (BitsPerByte - 1)) / BitsPerByte;
>    }
>
> The patch seems to work for in-tree-target tests as well as our
> out-of-tree target. Whether it is a correct assumption for all targets
> is beyond my knowledge. Maybe only i1 vectors should be bitpacked?
>
> Anyway, I think the bitpacked case is very special (we do not use it
> for our target…). AFAIK bitcast is defined as writing to memory
> followed by a load using a different type.
> And I think that doing several scalar operations should give the same
> result as when using vectors. So bitcast of bitpacked vectors should
> probably be avoided?

Yes, store+load is the right definition of bitcast. And in fact, the backend will lower a bitcast to a store+load to a stack temporary in cases where there isn't some other lowering specified. The end result is probably going to be pretty inefficient unless your target has a special instruction to handle it (x86 has pmovmskb for i1 vector bitcasts, but otherwise you probably end up with some terrible lowering involving a lot of shifts).

> This also reminded me of the following test case that is in trunk:
> test/CodeGen/X86/pr20011.ll
>
>   %destTy = type { i2, i2 }
>
>   define void @crash(i64 %x0, i64 %y0, %destTy* nocapture %dest) nounwind {
>   ; X64-LABEL: crash:
>   ; X64:       # BB#0:
>   ; X64-NEXT:    andl $3, %esi
>   ; X64-NEXT:    movb %sil, (%rdx)
>   ; X64-NEXT:    andl $3, %edi
>   ; X64-NEXT:    movb %dil, (%rdx)
>   ; X64-NEXT:    retq
>     %x1 = trunc i64 %x0 to i2
>     %y1 = trunc i64 %y0 to i2
>     %1 = bitcast %destTy* %dest to <2 x i2>*
>     %2 = insertelement <2 x i2> undef, i2 %x1, i32 0
>     %3 = insertelement <2 x i2> %2, i2 %y1, i32 1
>     store <2 x i2> %3, <2 x i2>* %1, align 1
>     ret void
>   }
>
> As you can see by the “X64” checks, the behavior is quite weird.
> Both movb instructions write to the same address, so the result of
> the store <2 x i2> will be the same as when only storing one of the
> elements. Is this really expected?
>
> We have emailed Simon Pilgrim, who added the test case to show that
> we no longer crash on it (see
> https://bugs.llvm.org/show_bug.cgi?id=20011). But even if the
> compiler doesn't crash, the behavior seems wrong to me.

Yes, the behavior here is wrong. DAGTypeLegalizer::SplitVecOp_STORE/DAGTypeLegalizer::SplitVecRes_LOAD/etc. assume the element size is a multiple of 8.
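Eli's point that bitcast is equivalent to a store followed by a load can be illustrated for the i1-vector case. This Python sketch (illustrative only, not the actual lowering) computes the value that a pmovmskb-style bitcast of <8 x i1> to i8 must produce under a bitpacked in-memory layout:

```python
def bitcast_v8i1_to_i8(lanes):
    """Model of bitcast <8 x i1> to i8 under a bitpacked layout:
    lane 0 lands in bit 0. A shift-and-or expansion (or a mask-move
    style instruction) has to produce exactly this value."""
    assert len(lanes) == 8
    out = 0
    for i, bit in enumerate(lanes):
        out |= (bit & 1) << i
    return out

print(bitcast_v8i1_to_i8([1, 0, 1, 0, 0, 0, 0, 1]))  # 0b10000101 -> 133
```

Without hardware support, the generic store+load-through-a-stack-temporary lowering still has to materialize this packed byte, which is where the chain of shifts Eli mentions comes from.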
I'm sure this has been discussed before, but I guess nobody ever wrote a patch to fix it...?

-Eli

--
Employee of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project