
Displaying 20 results from an estimated 5000 matches similar to: "[LLVMdev] Question about lli"

2018 Jan 28
0
[lldb-dev] Trying out lld to link windows binaries (using msvc as a compiler)
This part only applies to objects with /Z7 debug information in them, right? I think most of the third parties are either .lib/.obj files without debug information, or the same with the information in PDB files. Rewriting all .lib/.obj files with /Z7 information seems doable with a small Python script; the PDB one is going to be more work, but I always wanted to know how a PDB file is structured, so "fun" times
2018 Jan 28
2
[lldb-dev] Trying out lld to link windows binaries (using msvc as a compiler)
Look for this code in lld/coff/pdb.cpp:

    if (Config->DebugGHashes) {
      ArrayRef<GloballyHashedType> Hashes;
      std::vector<GloballyHashedType> OwnedHashes;
      if (Optional<ArrayRef<uint8_t>> DebugH = getDebugH(File))
        Hashes = getHashesFromDebugH(*DebugH);
      else {
        OwnedHashes = GloballyHashedType::hashTypes(Types);
        Hashes = OwnedHashes;
      }

In the else block there, add a log
2018 Jan 28
0
[lldb-dev] Trying out lld to link windows binaries (using msvc as a compiler)
OK, I went for a kind of middle-ground solution: I patch the .obj files, but as adding a new section didn't seem to work, I add a "shadow" section by editing the pointer to line numbers and the virtual size in the .debug$T section header. Although technically broken, both link.exe and lld-link.exe don't seem to mind the alterations, and as the shadow .debug$H is not really a section
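The "shadow section" trick described above can be sketched roughly like this. It assumes the standard 40-byte COFF section header layout, and it repurposes the VirtualSize and PointerToLinenumbers fields as the post describes; the real patcher isn't shown in the thread, so treat this as an illustrative reconstruction only:

```python
import struct

# COFF section header layout (40 bytes), per the PE/COFF spec:
# Name(8s) VirtualSize(I) VirtualAddress(I) SizeOfRawData(I)
# PointerToRawData(I) PointerToRelocations(I) PointerToLinenumbers(I)
# NumberOfRelocations(H) NumberOfLinenumbers(H) Characteristics(I)
SECTION_HDR = struct.Struct("<8sIIIIIIHHI")

def patch_shadow_debug_h(header, blob_offset, blob_size):
    """Point a section's normally unused line-number fields at a blob
    appended to the file, making a 'shadow' .debug$H without adding a
    real section table entry."""
    fields = list(SECTION_HDR.unpack(header))
    fields[1] = blob_size      # VirtualSize -> size of the shadow payload
    fields[6] = blob_offset    # PointerToLinenumbers -> file offset of payload
    return SECTION_HDR.pack(*fields)

# Synthetic .debug$T header, just for demonstration.
hdr = SECTION_HDR.pack(b".debug$T", 0, 0, 0x100, 0x400, 0, 0, 0, 0, 0)
patched = patch_shadow_debug_h(hdr, 0x2000, 0x40)
name, vsize, *rest = SECTION_HDR.unpack(patched)
print(name.rstrip(b"\x00"), hex(vsize), hex(rest[4]))
```

As the post notes, this produces a header that is technically out of spec; it only works because neither linker validates those fields for a debug section.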
2018 Jan 29
0
[lldb-dev] Trying out lld to link windows binaries (using msvc as a compiler)
About incremental linking: the only thing from my benchmark that needs to be incremental is the PDB patching, as generating the binary seems faster than incremental linking with link.exe. So, did anyone propose renaming the current binary, writing a new one, then diffing the COFF objects and using that info to rewrite just that part of the PDB? Or another idea is making the build system feed into the
2018 Jan 29
0
[lldb-dev] Trying out lld to link windows binaries (using msvc as a compiler)
I cleaned up my tests and figured out that the problematic .obj files were only generated with MSVC 2015, so trying again with MSVC 2017 I get:

    lld-link:              4s
    lld-link /debug:       1m30s and ~20 GB of RAM
    lld-link /debug:ghash: 59s and ~20 GB of RAM
    link:                  13s
    link /debug:fastlink:  43s and 1 GB of RAM
    link specialpdb:       1m10s and 4 GB of RAM
    link /debug:           9m16s and >14 GB of RAM
    link incremental:      8s when it
2018 Jan 28
2
[lldb-dev] Trying out lld to link windows binaries (using msvc as a compiler)
I don't have PGO numbers. When I build using -flto=thin, the link time is significantly faster than msvc /ltcg and the runtime is slightly faster, but I haven't tested on a large variety of different workloads, so YMMV. Link time will definitely be faster, though. On Sun, Jan 28, 2018 at 2:20 PM Leonardo Santagada <santagada at gmail.com> wrote: > This part is only for objects with /Z7 debug
2018 Jan 29
0
[lldb-dev] Trying out lld to link windows binaries (using msvc as a compiler)
Yeah, true. Are there any switches to profile the linker? On 29 Jan 2018 18:43, "Zachary Turner" <zturner at google.com> wrote: > Part of the reason why lld is so fast is because we map every input file > into memory up front and rely on the virtual memory manager in the kernel > to make this fast. Generally speaking, this is a lot faster than opening a > file, reading
2018 Jan 31
1
[lldb-dev] Trying out lld to link windows binaries (using msvc as a compiler)
The quickest route would be for you to upload the patches for review and then go through a couple of iterations until they're cleaned up. Do you want to go that route? If not, you can upload them anyway, and whenever I get some spare cycles I can take them over and get them committed. But that might be slower, since I have other things on my plate at the moment, so I'm not sure when I'll be
2018 Jan 29
2
[lldb-dev] Trying out lld to link windows binaries (using msvc as a compiler)
Part of the reason why lld is so fast is because we map every input file into memory up front and rely on the virtual memory manager in the kernel to make this fast. Generally speaking, this is a lot faster than opening a file, reading and processing it, and closing it. The downside, as you note, is that it uses a lot of memory. But there's a catch. The kernel is smart enough
2018 Jan 26
2
[lldb-dev] Trying out lld to link windows binaries (using msvc as a compiler)
We don't generate any .lib files, as those don't work well with incremental linking (and give zero advantage when linking, AFAIK), and it would be pretty easy to have a modern format for holding a .ghash for multiple files: something simple like a size-prefixed name followed by size-prefixed ghash blobs. On Fri, Jan 26, 2018 at 8:44 PM, Zachary Turner <zturner at google.com> wrote: > We
2018 Jan 29
2
[lldb-dev] Trying out lld to link windows binaries (using msvc as a compiler)
Not a lot. /TIME will show high-level timing of the various phases (this is the same option MSVC uses). If you want anything more detailed than that, VTune or ETW+WPA ( https://github.com/google/UIforETW/releases) are probably what you'll need. (We'd definitely love patches to improve performance, or even just ideas about how to make things faster. Improving link speed is one of
2018 Jan 31
0
[lldb-dev] Trying out lld to link windows binaries (using msvc as a compiler)
Hmm, I changed only the type hashing... OK, back to trying it again. Let me find where it is looking at 20 bytes instead of using the size of the global type hash. On 30 Jan 2018 21:33, "Zachary Turner" <zturner at google.com> wrote: > Did you change both the compiler and linker (or make sure that your > objcopy was updated to write your 64 bit hashes)? > > The linker is
2018 Jan 30
0
[lldb-dev] Trying out lld to link windows binaries (using msvc as a compiler)
Does packing .obj files into a .lib help linking in any way? My understanding is that there would be no difference. It could help if I could make a PDB per lib, but there is no way to do so... Maybe we could implement this in lld? On 29 Jan 2018 22:14, "Zachary Turner" <zturner at google.com> wrote: > Yes we've discussed many different ideas for incremental linking, but our
2018 Jan 30
0
[lldb-dev] Trying out lld to link windows binaries (using msvc as a compiler)
Nice. And why are you trying BLAKE2 instead of a faster hash algorithm? And do you have any guess as to why xxHash64 wasn't faster than SHA-1? I still have to see how many collisions I get with it, but it seems so improbable that collisions on 64-bit hashes would be the problem. On 30 Jan 2018 18:39, "Zachary Turner" <zturner at google.com> wrote: It turns out there were some
2018 Jan 30
0
[lldb-dev] Trying out lld to link windows binaries (using msvc as a compiler)
Today I played around with replacing the SHA-1 with xxHash64, and the results so far are bad. Linking times almost doubled and I can't really explain why; the only thing that comes to mind is hash collisions, but on type names there should be very few with 64-bit hashes. Any reason why you are trying BLAKE2 and not MurmurHash3 or xxHash64? About creating a PDB per lib, you can tell msvc to put the pdb
2018 Jan 26
2
[lldb-dev] Trying out lld to link windows binaries (using msvc as a compiler)
it does. I just had an epiphany: why not just write a .ghash file and have lld read those if they exist for an .obj file? Seems much simpler than trying to wire up a 20-year-old file format. I will try to do this; is something like this acceptable for LLD? The cool thing is that I can generate .ghash files for .lib files or any .obj lying around (maybe even for PDBs in the future). On Fri, Jan 26, 2018 at
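The sidecar .ghash format floated in the thread (a size-prefixed name followed by size-prefixed ghash blobs) could look something like the sketch below. The exact layout is an assumption; the thread never pins one down:

```python
import io
import struct

def write_ghash(out, obj_name, hashes):
    # u32 name length + name, u32 hash count, then u32 size + blob per hash.
    out.write(struct.pack("<I", len(obj_name)) + obj_name)
    out.write(struct.pack("<I", len(hashes)))
    for h in hashes:
        out.write(struct.pack("<I", len(h)) + h)

def read_ghash(inp):
    (n,) = struct.unpack("<I", inp.read(4))
    name = inp.read(n)
    (count,) = struct.unpack("<I", inp.read(4))
    hashes = []
    for _ in range(count):
        (size,) = struct.unpack("<I", inp.read(4))
        hashes.append(inp.read(size))
    return name, hashes

buf = io.BytesIO()
write_ghash(buf, b"foo.obj", [b"\x01" * 8, b"\x02" * 8])
buf.seek(0)
print(read_ghash(buf))  # round-trips the name and both hash blobs
```

Because the blobs are size-prefixed rather than fixed at 20 bytes, the same container works whether the hashes are SHA-1 digests or the 8-byte xxHash64 values discussed later in the thread.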
2018 Jan 31
2
[lldb-dev] Trying out lld to link windows binaries (using msvc as a compiler)
So I found all the hardcoded 20-byte sizes and changed them to GHASH_SIZE (a constant I defined in typehashing.h) and finished the switch to xxHash64; that saved me around 50 seconds, bringing it to 56s. Then I changed it to a uint64_t instead of an 8-byte uint8_t array, and that gave me 48s. With a release config and a PGO pass I'm now linking in 38s... so faster than link.exe in VS 2017 (which is faster than VS 2015) doing fastlink.
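The change being described amounts to replacing a 20-byte digest per type record with a single 64-bit integer. A rough illustration, truncating SHA-1 from the standard library since xxHash64 isn't in Python's stdlib (the record contents here are made up):

```python
import hashlib
import struct

def ghash64(record):
    # Truncate a strong 20-byte digest to its first 8 bytes and carry it
    # around as a plain uint64_t instead of a byte array: smaller hash
    # tables and cheaper comparisons, at a (tiny) extra collision risk.
    digest = hashlib.sha1(record).digest()       # 20 bytes
    (value,) = struct.unpack("<Q", digest[:8])   # first 8 bytes as a u64
    return value

h1 = ghash64(b"struct Foo { int x; };")
h2 = ghash64(b"struct Bar { int y; };")
print(h1 != h2, struct.calcsize("<Q"))  # distinct records, 8 bytes per hash
```

Storing the value as a native integer rather than an 8-byte array is what made the second difference in the post: equality checks and hash-table probes become single-word operations.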
2018 Jan 29
2
[lldb-dev] Trying out lld to link windows binaries (using msvc as a compiler)
Yes, we've discussed many different ideas for incremental linking, but our conclusion is that you can only get one of Fast|Simple. If you want it to be fast it has to be complicated, and if you want it to be simple then it's going to be slow. Consider the case where you edit one .cpp file and change this:

    int x = 0, y = 7;

to this:

    int x = 0;
    short y = 7;

Because different instructions
2018 Jan 26
0
[lldb-dev] Trying out lld to link windows binaries (using msvc as a compiler)
We considered that early on, but most object files actually end up in .lib files, so unless there were a way to connect the objects in the .lib to the corresponding .ghash files, this would disable ghash usage for a large number of inputs. Supporting both is an option, but it adds a bit of complexity and I'm not totally convinced it's worth it. On Fri, Jan 26, 2018 at 11:38 AM Leonardo Santagada
2018 Jan 30
2
[lldb-dev] Trying out lld to link windows binaries (using msvc as a compiler)
Did you change both the compiler and linker (or make sure that your objcopy was updated to write your 64-bit hashes)? The linker is hardcoded to expect 20-byte SHA-1s; anything else and it will recompute them serially. On Tue, Jan 30, 2018 at 12:28 PM Leonardo Santagada <santagada at gmail.com> wrote: > Nice and why are you trying blake2 instead of a faster hash algorithm? And > do