Pavel Labath via llvm-dev
2017-May-31 11:06 UTC
[llvm-dev] Running lit (googletest) tests remotely
Thank you all for the pointers. I am going to look at these to see if
there is anything that we could reuse, and come back. In the mean time,
I'll reply to Matthias's comments:

On 26 May 2017 at 19:11, Matthias Braun <mbraun at apple.com> wrote:
>> Based on a not-too-detailed examination of the lit codebase, it does
>> not seem that it would be too difficult to add this capability: During
>> the test discovery phase, we could copy the required files to the remote
>> host. Then, when we run the test, we could just prefix the run command
>> similarly to how it is done for running the tests under valgrind. It
>> would be up to the user to provide a suitable command for copying and
>> running files on the remote host (using rsync, ssh, telnet or any
>> other transport he chooses).
>
> This seems to be the crux to me: What does "required files" mean?
> - All the executables mentioned in the RUN line? When llvm was compiled
>   as a library, will we copy those too?

For executables, I was considering just listing them explicitly (in
lit.local.cfg, I guess), although parsing the RUN line should be
possible as well. Even with RUN parsing, I expect we would need some way
to explicitly add files to the copy list (e.g. for lldb tests we also
need to copy the program we are going to debug).

As for libraries, I see a couple of solutions:
- declare these configurations unsupported for remote execution
- copy over ALL shared libraries
- have automatic tracking of runtime dependencies - all of this
  information should pass through the llvm_add_library macro, so it
  should be mostly a matter of exporting this information out of cmake.
These can be combined in the sense that we can start in the
"unsupported" state, and then add some support for it once there is a
need for it (we don't need it right now).

> - Can tests include other files? Do they need special annotations for that?

My initial idea was to just copy over all files in the Inputs folder.
Do you know of any other dependencies that I should consider?

> As another example: the llvm test-suite can perform remote runs
> (test-suite/litsupport/remote.py if you want to see the implementation).
> That code makes the assumption that the remote device has an NFS mount,
> so the relevant parts of the filesystem look alike on the host and the
> remote device. I'm not sure that is the best solution, as NFS introduces
> its own sort of flakiness and potential skew in I/O-heavy benchmarks,
> but it avoids the question of what to copy to the device.

Requiring an NFS mount is a non-starter for us (no way to get an
android device to create one), although if we were able to hook in a
custom script which does a copy to simulate the "mount", we might be
able to work with it. Presently I am mostly thinking about correctness
tests, and I am not worried about benchmark skew.

regards,
pl
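To make the lit.local.cfg idea above a bit more concrete, a per-directory
configuration along the following lines could list the extra files to copy
and prefix each command with a user-provided remote runner. This is only a
sketch under assumptions: "remote_copy_files" and the "%remote_run"
substitution are hypothetical names used for illustration, not existing lit
options.

    # lit.local.cfg (sketch; "remote_copy_files" and "%remote_run" are
    # hypothetical, illustration-only names, not existing lit options)

    # Files that tests in this directory need on the remote device, in
    # addition to the test file itself (e.g. the program lldb will debug).
    config.remote_copy_files = ['Inputs/a.out']

    # Prefix used to run each test command on the device, in the same
    # spirit as the existing valgrind prefixing. The transport (ssh, adb,
    # telnet, ...) is entirely up to the user.
    config.substitutions.append(('%remote_run', 'ssh mydevice'))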
Matthias Braun via llvm-dev
2017-May-31 17:44 UTC
[llvm-dev] Running lit (googletest) tests remotely
> On May 31, 2017, at 4:06 AM, Pavel Labath <labath at google.com> wrote:
>
> Thank you all for the pointers. I am going to look at these to see if
> there is anything that we could reuse, and come back. In the mean
> time, I'll reply to Matthias's comments:
>
> On 26 May 2017 at 19:11, Matthias Braun <mbraun at apple.com> wrote:
>>> Based on a not-too-detailed examination of the lit codebase, it does
>>> not seem that it would be too difficult to add this capability: During
>>> the test discovery phase, we could copy the required files to the remote
>>> host. Then, when we run the test, we could just prefix the run command
>>> similarly to how it is done for running the tests under valgrind. It
>>> would be up to the user to provide a suitable command for copying and
>>> running files on the remote host (using rsync, ssh, telnet or any
>>> other transport he chooses).
>>
>> This seems to be the crux to me: What does "required files" mean?
>> - All the executables mentioned in the RUN line? When llvm was compiled
>>   as a library, will we copy those too?
>
> For executables, I was considering just listing them explicitly (in
> lit.local.cfg, I guess), although parsing the RUN line should be
> possible as well. Even with RUN parsing, I expect we would need some way
> to explicitly add files to the copy list (e.g. for lldb tests we also
> need to copy the program we are going to debug).
>
> As for libraries, I see a couple of solutions:
> - declare these configurations unsupported for remote execution
> - copy over ALL shared libraries
> - have automatic tracking of runtime dependencies - all of this
>   information should pass through the llvm_add_library macro, so it
>   should be mostly a matter of exporting this information out of cmake.
> These can be combined in the sense that we can start in the
> "unsupported" state, and then add some support for it once there is a
> need for it (we don't need it right now).

Sounds good. An actively managed list of files to copy in the lit
configuration is a nice, simple solution, provided we have some regularly
running public bot so we can catch missing things. But I assume setting
up a bot was your plan anyway.

>> - Can tests include other files? Do they need special annotations for
>>   that?
>
> My initial idea was to just copy over all files in the Inputs folder.
> Do you know of any other dependencies that I should consider?

I didn't notice that we had already developed a convention with the
"Inputs" folders, so I guess all that is left to do is making sure all
tests actually follow that convention.

>> As another example: the llvm test-suite can perform remote runs
>> (test-suite/litsupport/remote.py if you want to see the implementation).
>> That code makes the assumption that the remote device has an NFS mount,
>> so the relevant parts of the filesystem look alike on the host and the
>> remote device. I'm not sure that is the best solution, as NFS introduces
>> its own sort of flakiness and potential skew in I/O-heavy benchmarks,
>> but it avoids the question of what to copy to the device.
>
> Requiring an NFS mount is a non-starter for us (no way to get an
> android device to create one), although if we were able to hook in a
> custom script which does a copy to simulate the "mount", we might be
> able to work with it. Presently I am mostly thinking about correctness
> tests, and I am not worried about benchmark skew.

Sure, I don't think I would end up with an NFS mount strategy if I were
starting fresh today. Also, the test-suite benchmarks (esp. the SPEC ones)
tend to have more complicated, harder-to-track inputs.

- Matthias
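The copy-instead-of-mount approach discussed above could, for example, be
hooked in through a small wrapper in the spirit of
test-suite/litsupport/remote.py. The following is only an illustrative
sketch; the host name, scratch directory and the way lit would invoke the
script are assumptions, not existing test-suite or lit code.

    #!/usr/bin/env python
    # remote_run.py - sketch of a copy-based alternative to the NFS mount
    # used by test-suite/litsupport/remote.py. Host and paths are placeholders.
    import subprocess
    import sys

    HOST = 'mydevice'                    # ssh-reachable device (placeholder)
    REMOTE_DIR = '/data/local/tmp/lit'   # scratch directory on the device

    def main():
        test_dir = sys.argv[1]   # directory holding the test and its Inputs/
        cmd = sys.argv[2:]       # command line produced by lit

        # Mirror the test directory (including Inputs/) onto the device,
        # simulating the "mount" with an explicit copy.
        subprocess.check_call(['rsync', '-a', test_dir + '/',
                               '%s:%s/' % (HOST, REMOTE_DIR)])

        # Run the command in the mirrored directory and propagate its exit code.
        remote_cmd = 'cd %s && %s' % (REMOTE_DIR, ' '.join(cmd))
        return subprocess.call(['ssh', HOST, remote_cmd])

    if __name__ == '__main__':
        sys.exit(main())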
David Blaikie via llvm-dev
2017-Jun-01 17:40 UTC
[llvm-dev] Running lit (googletest) tests remotely
On Wed, May 31, 2017 at 10:44 AM Matthias Braun via llvm-dev
<llvm-dev at lists.llvm.org> wrote:

> > On May 31, 2017, at 4:06 AM, Pavel Labath <labath at google.com> wrote:
> >
> > Thank you all for the pointers. I am going to look at these to see if
> > there is anything that we could reuse, and come back. In the mean
> > time, I'll reply to Matthias's comments:
> >
> > On 26 May 2017 at 19:11, Matthias Braun <mbraun at apple.com> wrote:
> >>> Based on a not-too-detailed examination of the lit codebase, it does
> >>> not seem that it would be too difficult to add this capability: During
> >>> the test discovery phase, we could copy the required files to the
> >>> remote host. Then, when we run the test, we could just prefix the run
> >>> command similarly to how it is done for running the tests under
> >>> valgrind. It would be up to the user to provide a suitable command for
> >>> copying and running files on the remote host (using rsync, ssh, telnet
> >>> or any other transport he chooses).
> >>
> >> This seems to be the crux to me: What does "required files" mean?
> >> - All the executables mentioned in the RUN line? When llvm was compiled
> >>   as a library, will we copy those too?
> >
> > For executables, I was considering just listing them explicitly (in
> > lit.local.cfg, I guess), although parsing the RUN line should be
> > possible as well. Even with RUN parsing, I expect we would need some way
> > to explicitly add files to the copy list (e.g. for lldb tests we also
> > need to copy the program we are going to debug).
> >
> > As for libraries, I see a couple of solutions:
> > - declare these configurations unsupported for remote execution
> > - copy over ALL shared libraries
> > - have automatic tracking of runtime dependencies - all of this
> >   information should pass through the llvm_add_library macro, so it
> >   should be mostly a matter of exporting this information out of cmake.
> > These can be combined in the sense that we can start in the
> > "unsupported" state, and then add some support for it once there is a
> > need for it (we don't need it right now).
>
> Sounds good. An actively managed list of files to copy in the lit
> configuration is a nice, simple solution, provided we have some regularly
> running public bot so we can catch missing things. But I assume setting
> up a bot was your plan anyway.
>
> >> - Can tests include other files? Do they need special annotations for
> >>   that?
> >
> > My initial idea was to just copy over all files in the Inputs folder.
> > Do you know of any other dependencies that I should consider?
>
> I didn't notice that we had already developed a convention with the
> "Inputs" folders, so I guess all that is left to do is making sure all
> tests actually follow that convention.

The Google-internal execution of LLVM's tests relies on this property - so
at least for the common tests and the targets Google cares about, this
property is pretty well enforced.

> >> As another example: the llvm test-suite can perform remote runs
> >> (test-suite/litsupport/remote.py if you want to see the implementation).
> >> That code makes the assumption that the remote device has an NFS mount,
> >> so the relevant parts of the filesystem look alike on the host and the
> >> remote device. I'm not sure that is the best solution, as NFS introduces
> >> its own sort of flakiness and potential skew in I/O-heavy benchmarks,
> >> but it avoids the question of what to copy to the device.
> >
> > Requiring an NFS mount is a non-starter for us (no way to get an
> > android device to create one), although if we were able to hook in a
> > custom script which does a copy to simulate the "mount", we might be
> > able to work with it. Presently I am mostly thinking about correctness
> > tests, and I am not worried about benchmark skew.
>
> Sure, I don't think I would end up with an NFS mount strategy if I were
> starting fresh today. Also, the test-suite benchmarks (esp. the SPEC ones)
> tend to have more complicated, harder-to-track inputs.
>
> - Matthias
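Because of that property, a remote runner can derive a per-test copy set
mechanically from the test's location. A minimal sketch, assuming nothing
beyond the Inputs convention itself (the function name is made up for
illustration):

    import os

    def files_to_copy(test_path):
        """Return the test file plus everything under its Inputs/ directory,
        relying only on the Inputs convention discussed above (sketch)."""
        deps = [test_path]
        inputs = os.path.join(os.path.dirname(test_path), 'Inputs')
        if os.path.isdir(inputs):
            for root, _, names in os.walk(inputs):
                deps.extend(os.path.join(root, n) for n in names)
        return deps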