similar to: Handling post-inc users in LSR

Displaying 20 results from an estimated 1000 matches similar to: "Handling post-inc users in LSR"

2016 May 27
0
Handling post-inc users in LSR
> On May 27, 2016, at 2:50 PM, via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>
> Hello,
>
> For a very simple loop where all IV users are post-inc users, I observed redundant add instructions in AArch64.
>
> From LSR debug, I can see the initial formula for the icmp is the one that is transformed to a post-inc form in OptimizeLoopTermCond() and later expanded in post-inc
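For illustration, a minimal loop of the kind described (a hypothetical sketch; the original test case is not shown in the snippet), where the induction variable's only users are its own increment and the loop-termination compare:

    /* Both users of the pointer IV can be rewritten against the
       post-incremented value; rewriting the exit compare that way is
       what OptimizeLoopTermCond() arranges. */
    void zero(int *p, int *end) {
      while (p != end)
        *p++ = 0;
    }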
2020 Jun 01
3
Aarch64: unaligned access despite -mstrict-align
Hi, I experienced a crash in code compiled with Clang 10.0.0 due to a misaligned 64-bit data access. The (ARMv8) CPU is configured with SCTLR.A == 1 (alignment check enabled). With SCTLR.A == 0 the code runs as expected. After some investigation I came up with the following reproducer:

---8<-------8<-------8<-------8<-------8<-------8<-------8<-------
$ cat test.c
extern char
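The reproducer is cut off above; purely as a hypothetical reconstruction (the assembly in the follow-up message later in this list suggests a 16-byte memcmp through an external char pointer named g), it may have looked something like:

    #include <string.h>

    extern char *g;

    int f(char *p) {
      /* With only char alignment known, -mstrict-align must not let an
         inlined memcmp expansion use aligned 64-bit loads. */
      return memcmp(g, p, 16);
    }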
2014 Jun 27
3
[LLVMdev] Contributing the Apple ARM64 compiler backend
AArch64AddressTypePromotion.cpp does a fair bit of work to help make these things work out well. It could probably be generalized for non-AArch64 targets as per the comment in the file header.

> On Jun 26, 2014, at 10:42 AM, Sanjay Patel <spatel at rotateright.com> wrote:
>
> Cool HW trick. :)
> Are those 'sxtw' ops free?
> That'll depend on the details of the
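As background, a minimal example of the pattern this promotion targets (a sketch, not from the original thread): a 32-bit index must be sign-extended to 64 bits to form an address, and AArch64 can fold that extension into the load's addressing mode (e.g. ldr w0, [x0, w1, sxtw #2]) instead of emitting a separate sxtw:

    /* i is sign-extended to 64 bits before being scaled into the
       address; whether the folded sxtw is free depends on the CPU. */
    int get(int *a, int i) {
      return a[i];
    }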
2014 Jun 26
2
[LLVMdev] Contributing the Apple ARM64 compiler backend
Hi Sanjay,

The behaviour I'm talking about I've actually pinned down to CodeGenPrepare not working too well with ISAs that don't have a good scaled load. I have a patch to fix it that is going through performance testing now. Your testcase seems specific to x86 – for aarch64 we get the rather spiffy:

_Z3fooPii:                              // @_Z3fooPii
// BB#0:
2014 Sep 02
3
[LLVMdev] LICM promoting memory to scalar
All,

If we can speculatively execute a load instruction, why isn't it safe to hoist it out by promoting it to a scalar in the LICM pass? There is a comment in the LICM pass that if a load/store is conditional then it is not safe, because it would break the LLVM concurrency model (see commit 73bfa4a). There is an IR test checking this in test/Transforms/LICM/scalar-promote-memmodel.ll. However, I have
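For context, a minimal sketch of the pattern in question (illustrative only; the actual regression test lives in test/Transforms/LICM/scalar-promote-memmodel.ll):

    /* The store to *p is conditional. Promoting *p to a register and
       storing it back unconditionally after the loop would introduce a
       write the original program might never perform, which another
       thread could observe under the LLVM concurrency model. */
    void f(int *p, int n) {
      for (int i = 0; i < n; i++)
        if (i & 1)
          *p += 1;
    }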
2017 May 30
3
[atomics][AArch64] Possible bug in cmpxchg lowering
Currently the AtomicExpandPass will lower the following IR:

define i1 @foo(i32* %obj, i32 %old, i32 %new) {
entry:
  %v0 = cmpxchg weak volatile i32* %obj, i32 %old, i32 %new release acquire
  %v1 = extractvalue { i32, i1 } %v0, 1
  ret i1 %v1
}

to the equivalent of the following on AArch64:

  ldxr w8, [x0]
  cmp  w8, w1
  b.ne .LBB0_3
// BB#1:
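For reference, a C11 equivalent of the IR above (a sketch; the thread itself works directly on the IR, and the function name here is illustrative):

    #include <stdatomic.h>
    #include <stdbool.h>

    bool try_update(volatile atomic_int *obj, int *expected, int desired) {
      /* weak cmpxchg with release on success and acquire on failure,
         matching the "release acquire" ordering pair in the IR */
      return atomic_compare_exchange_weak_explicit(
          obj, expected, desired,
          memory_order_release, memory_order_acquire);
    }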
2014 Sep 02
2
[LLVMdev] LICM promoting memory to scalar
I think gcc is right. It inserted a branch for n == 0 (the cbz at the top), so that's not a problem. In all other regards, this is safe: if you examine the sequence of loads and stores, it eliminated all but the first load and all but the last store. How's that unsafe? If I had to guess, the bug here is that LLVM doesn't want to hoist the load over the condition (which it is right
2014 Sep 03
3
[LLVMdev] LICM promoting memory to scalar
Thanks for the background on the concurrent memory model. So, is it sufficient that the loop entry is guarded by a condition (the cbz at the top) to prevent the race? The loop entry will be guarded by a condition if the loop has been rotated by the loop rotate pass. Since LICM runs after loop rotate, we can use ScalarEvolution::isLoopEntryGuardedByCond to check if we can speculatively execute the load without
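To illustrate, a sketch of the guarded, rotated loop shape being discussed (hypothetical code, not from the thread):

    /* After loop rotation, the body is only reachable when n != 0 (the
       cbz guard), so a load of *p hoisted into the preheader executes
       only on paths where the original loop would have loaded anyway. */
    void f(int *p, int n) {
      if (n != 0) {
        int i = 0;
        do {
          *p += 1;
        } while (++i < n);
      }
    }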
2015 Feb 19
2
[LLVMdev] ScheduleDAGInstrs computes deps using IR Values that may be invalid
Hi All,

I've encountered an issue where tail merging MIs is causing a problem with the post-RA MI scheduler dependency analysis, and I'm not sure of the best way to address it. In my case, the branch folding pass (lib/CodeGen/BranchFolding.cpp) is merging common code from BB#14 and BB#15 into BB#16. It's clear that there are 4 common instructions (marked with an *) in BB#14
2020 Jun 01
2
Aarch64: unaligned access despite -mstrict-align
Sorry, quick message to ignore what I wrote before; I got myself confused (probably you too). With a recent trunk build I get this:

f:
        adrp    x8, g
        ldr     x8, [x8, :lo12:g]
        mov     w2, #16
        mov     x1, x0
        mov     x0, x8
        b       memcmp

This looks more correct, and I need to look a bit more into this (and how clang 10.0.0 behaves).
2014 Jun 26
2
[LLVMdev] Contributing the Apple ARM64 compiler backend
Hi James,

Thanks for your reply and hints on what can be done for the AArch64 backend optimization in LLVM. We have a SPEC license and v8 hardware, so I will start looking into it.

warm regards
Manjunath

On Wed, Jun 25, 2014 at 8:42 PM, James Molloy <james.molloy at arm.com> wrote:
> Hi Manjunath,
>
> At the time of writing that status we had only done our initial analysis.
>
2016 Mar 04
2
[PATCH v3 0/2] v2v: Copy *.dll files since they can be part of the
v2 -> v3

- Don't make a special case for WdfCoInstaller* files. There is a
  difference of opinion about whether copying these is necessary, but
  it seems like it is not harmful.

Rich.
2015 Nov 17
4
Re: Fwd: [PATCH] v2v: virtio-win: include *.dll too
I think one - maybe final? - problem. How can I tell the difference between drivers for "client" versions of Windows (eg. Windows 7) and server versions of Windows (eg. Windows 2008 Server)? It seems in many or most cases the drivers are identical, eg:

$ md5sum viostor/2k12/amd64/* viostor/w8/amd64/*
bbe250c13bf891fd7292ccab9908a63a  viostor/2k12/amd64/viostor.cat
2016 Mar 04
2
[PATCH v2 0/2] v2v: Copy *.dll files since they can be part of the driver (RHBZ#1311373).
Since v1:

- Fix a bug in the calculation of lc_basename. By luck this doesn't
  affect anything given the contents of the current ISO.
- Don't copy the WdfCoInstaller*.dll files.

Rich.
2015 Nov 17
8
[PATCH 0/3] v2v: windows: Use '*.inf' files to control how Windows drivers are installed.
https://github.com/rwmjones/libguestfs/tree/rewrite-virtio-copy-drivers

Instead of trying to split and parse elements from virtio-win paths, use the '*.inf' files supplied with the drivers to control how Windows drivers are installed. The following emails best explain how this works:

https://www.redhat.com/archives/libguestfs/2015-October/msg00352.html
2015 Nov 05
6
[PATCH 0/4] Provide better fake virtio-* test data for virt-v2v.
Patch 1 moves the v2v/fake-virtio-win and v2v/fake-virt-tools directories to the recently created test-data/ hierarchy. This is just refactoring with no functional change at all.

Patches 2-4 then extend the available (fake) virtio-win drivers:

- Patch 2 adds all of the drivers from the virtio-win RPM.
- Patch 3 adds all of the drivers from the virtio-win ISO (which are different from the
2015 Sep 28
4
varargs, the x86, and clang
When I use clang on an x86-64 to spit out the LLVM, like this

    clang -O -S -emit-llvm varargstest.c

where varargstest.c looks like this

    #include <stdarg.h>

    int add_em_up(int count, ...)
    {
      va_list ap;
      int i, sum;

      va_start(ap, count);
      sum = 0;
      for (i = 0; i < count; i++)
        sum += va_arg(ap, int);
      va_end(ap);
      return sum;
    }

I see LLVM that looks like it's been customized for the x86-64, versus
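The target-specific look of that IR comes from the x86-64 SysV ABI, which defines va_list not as a simple pointer but as a one-element array of a struct (shown here as a sketch of the ABI-defined layout; the real type is the compiler-internal __va_list_tag, and the name below is illustrative):

    typedef struct {
      unsigned int gp_offset;    /* byte offset to next saved GP register */
      unsigned int fp_offset;    /* byte offset to next saved FP register */
      void *overflow_arg_area;   /* arguments passed on the stack */
      void *reg_save_area;       /* register save area in the frame */
    } va_list_x86_64[1];

va_start, va_arg, and va_end lower to IR that manipulates these fields directly, which is why the emitted LLVM looks customized for the target.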
2015 Nov 18
2
Re: [PATCH] v2v: virtio-win: include *.dll too
+Li Jin

----- Original Message -----
> From: "Vadim Rozenfeld" <vrozenfe@redhat.com>
> To: "Richard W.M. Jones" <rjones@redhat.com>
> Cc: "Roman Kagan" <rkagan@virtuozzo.com>, libguestfs@redhat.com, "Amnon Ilan" <ailan@redhat.com>, "Jeff Nelson"
> <jenelson@redhat.com>, "Yan Vugenfirer"
2017 Dec 19
4
MemorySSA question
Hi,

I am new to MemorySSA and wanted to understand its capabilities. Hence I wrote the following program (test.c):

    int N;

    void test(int *restrict a, int *restrict b, int *restrict c,
              int *restrict d, int *restrict e) {
      int i;
      for (i = 0; i < N; i = i + 5) {
        a[i] = b[i] + c[i];
      }
      for (i = 0; i < N - 5; i = i + 5) {
        e[i] = a[i] * d[i];
      }
    }

I compiled this program using
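The compile command is cut off above; a typical way to dump the analysis with a current opt (an assumption, not taken from the original mail) is:

    clang -O1 -S -emit-llvm test.c -o test.ll
    opt -passes='print<memoryssa>' -disable-output test.ll

This prints the MemoryDef/MemoryUse annotations interleaved with the IR, which makes it easy to see whether the loads of a[i] in the second loop are tied to the stores in the first.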
2014 May 15
2
convert physical windows 8 machine to virtual machine
I dual boot w8 and Arch Linux. Both are on the same ssd drive (each OS of course has its own partition). Now I would like to virtualize w8 and run it inside Arch Linux using KVM/QEMU and Libvirt. w8 has already been installed on an ntfs partition. As it is brand new, it will not be difficult to reinstall it on an image.raw. I have been reading some articles and found myself a little bit confused.

1-
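For what it's worth, the usual way to capture an existing partition into a raw image for QEMU is a plain block copy (a sketch; /dev/sdXN stands in for the actual w8 partition, and a partition image typically needs extra work before it will boot in a VM):

    $ sudo dd if=/dev/sdXN of=image.raw bs=4M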