Displaying 20 results from an estimated 500 matches similar to: "[LLVMdev] LNT compile-time performance testing"
2012 Dec 19
0
[LLVMdev] LNT compile-time performance testing
There is currently no one publicly using the compile tests. We use them internally around the clock. I am not sure what Daniel's vision is for external use (i.e. how beta/non-beta he thinks this is), but regardless, the way to use it is:
1. Create a directory.
2. Put the tarballs you want to test into that directory.
3. Create a project_list.json file and write a
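The snippet above is cut off before it shows what goes into project_list.json. As a rough sketch only (the key names and layout here are assumptions, not LNT's documented schema), the file might be generated like this:

```python
import json

# Hypothetical project_list.json layout -- the exact schema LNT's compile
# tests expect is not shown above, so treat every key here as a placeholder.
projects = {
    "projects": [
        {"name": "sqlite3", "archive": "sqlite3.tar.gz", "build": "make"},
        {"name": "zlib", "archive": "zlib.tar.gz", "build": "make"},
    ]
}

with open("project_list.json", "w") as f:
    json.dump(projects, f, indent=2)

# Sanity-check that the file round-trips as valid JSON.
with open("project_list.json") as f:
    loaded = json.load(f)
```

The point is only that each tarball dropped into the directory gets a corresponding entry in the JSON file; consult the LNT sources for the real field names.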
2012 Jun 18
6
Trying to speed up an if/else statement in simulations
Dear R-help,
I am trying to write a function to simulate datasets of size n which contain
two time-to-event outcome variables with associated 'Event'/'Censored'
indicator variables (flag1 and flag2 respectively). One of these indicator
variables needs to be dependent on the other, so I am creating the first and
trying to use this to create the second using an if/else statement.
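The dependence structure described above can be sketched without a per-row if/else. This is Python rather than R, purely for illustration, and all probabilities are made up:

```python
import random

def simulate_flags(n, p1=0.6, p2_given_1=0.8, p2_given_0=0.3, seed=42):
    """Simulate two Event(1)/Censored(0) indicators where the second
    depends on the first. The probabilities are illustrative only."""
    rng = random.Random(seed)
    flag1 = [1 if rng.random() < p1 else 0 for _ in range(n)]
    # flag2's event probability is chosen per-row from flag1,
    # replacing the branching if/else with a conditional expression.
    flag2 = [1 if rng.random() < (p2_given_1 if f else p2_given_0) else 0
             for f in flag1]
    return flag1, flag2

f1, f2 = simulate_flags(1000)
```

The same shape translates directly to vectorized R with `ifelse(flag1 == 1, p2_given_1, p2_given_0)`, which is usually the cure for slow element-wise if/else in simulations.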
2017 Aug 24
2
llvm-mc-[dis]assemble-fuzzer status?
>
>
> I'd like llvm-isel-fuzzer to be added once it's committed
consider it done (once it's there)
> (which should
> be as soon as LLVM fuzzers work in release builds again). One potential
> issue is that llvm-isel-fuzzer is more of a collection of fuzzers, and
> it needs some arguments to run (i.e., to choose the backend).
>
I have the same problem with
2007 May 03
7
How to create a drop-down list with Markaby?
Hi
I couldn't figure out how to create a drop-down list with Markaby. How
would I create something like this:
<select name="character">
<option value="marvin">Marvin the paranoid Android</option>
<option value="arthur">Arthur Dent</option>
<option value="zaphod">Zaphod
2017 Aug 24
2
llvm-mc-[dis]assemble-fuzzer status?
On Tue, Aug 22, 2017 at 4:34 PM, Kostya Serebryany <kcc at google.com> wrote:
>
>
> On Tue, Aug 22, 2017 at 4:21 PM, George Karpenkov <ekarpenkov at apple.com>
> wrote:
>
>> Hi,
>>
>> As a part of a recent move of libFuzzer from LLVM to compiler-rt I am
>> looking into updating the build code
>> for the libraries which use libFuzzer.
2007 Dec 30
1
ReOrdering Wx::TreeCtrl Items
Given the following hash:
#...
@project_list = {
  'Contract0' => nil,
  'Contract1' => {
    'Project1' => nil,
    'Project2' => nil,
    'Project3' => {
      'task1' =>
2008 Feb 03
5
[PATCH] Simplify paging_invlpg when flush is not required.
Simplify paging_invlpg when a flush is not required.
A new 'flush' parameter is added to paging_invlpg, allowing the
caller to indicate whether a flush check is required. It is
wasteful to always validate the shadow linear mapping if the caller
doesn't check the return value at all.
Signed-off-by: Kevin Tian <kevin.tian@intel.com>
Thanks,
Kevin
2012 Apr 14
0
[LLVMdev] Is there an LNT server that I can submit LNT results to?
Hi all,
I am looking for a public server to which I can submit LNT performance
results. Does anyone happen to know of one?
Thanks.
Regards,
chenwj
--
Wei-Ren Chen (陳韋任)
Computer Systems Lab, Institute of Information Science,
Academia Sinica, Taiwan (R.O.C.)
Tel:886-2-2788-3799 #1667
Homepage: http://people.cs.nctu.edu.tw/~chenwj
2013 Jun 28
0
[LLVMdev] [LNT] Question about results reliability in LNT infrastructure
On 28 June 2013 10:28, David Tweed <david.tweed at arm.com> wrote:
> (Incidentally, responding to the earlier email below, I think you don't
> really want to compare moving averages but use some statistical test to
> quantify if the separation between the set of points within the "earlier
> window" are statistically significantly higher than the "later
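The statistical test being asked for above could be as simple as Welch's t statistic between the two sample windows. A minimal stdlib-only sketch (a full test would also compare |t| against the t distribution's critical value for the relevant degrees of freedom):

```python
from statistics import mean, variance

def welch_t(earlier, later):
    """Welch's t statistic for two independent sample windows.
    A large |t| suggests the windows genuinely differ."""
    va, vb = variance(earlier), variance(later)
    return (mean(later) - mean(earlier)) / \
           (va / len(earlier) + vb / len(later)) ** 0.5

# Two hypothetical 5-sample windows around a ~5% runtime regression.
before = [10.00, 10.10, 9.90, 10.05, 9.95]
after = [10.50, 10.60, 10.40, 10.55, 10.45]
t = welch_t(before, after)
```

With tight windows like these, the separation between the "earlier" and "later" points dwarfs the within-window noise, which is exactly the question the quoted mail wants quantified.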
2013 Jul 01
1
[LLVMdev] [LNT] Question about results reliability in LNT infrastructure
On Jun 30, 2013, at 6:02 PM, Chris Matthews <chris.matthews at apple.com> wrote:
> This is probably another area where a bit of dynamic behavior could help. When we find a regression, kick off some runs to bisect back to where it manifests. This is what we would be doing manually anyway. We could just search back with the set of regressing benchmarks, meaning the whole suite does not
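The search-back idea above is a standard binary search over revisions. A sketch, where `is_regressed` stands in for "build this revision and run only the regressing benchmarks" (the predicate and revision numbers are hypothetical):

```python
def bisect_regression(revisions, is_regressed):
    """Find the first revision at which the regression manifests.
    Assumes revisions[-1] is known-bad and the predicate is monotonic."""
    lo, hi = 0, len(revisions) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_regressed(revisions[mid]):
            hi = mid          # regression already present at mid
        else:
            lo = mid + 1      # still good; the offender is later
    return revisions[lo]

revs = list(range(100, 120))   # hypothetical revision numbers
culprit = bisect_regression(revs, lambda r: r >= 113)
```

Because only the regressing subset of benchmarks is rerun at each probe, each bisection step is far cheaper than a full-suite run.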
2013 Jul 25
0
[LLVMdev] [LNT][Patch] Bug 16261 - lnt incorrectly builds timeit-target when one is using a simulator
Okay to push this change?
On 07/23/2013 05:17 PM, reed kotler wrote:
> Hi Daniel,
>
> In this case we are not using lnt under Qemu user mode for benchmarking;
> just as a way to run test-suite to test whether the code is correct.
>
> Qemu user mode emulates target instructions, but when it gets a Unix
> Kernel trap, it uses the host to emulate those.
>
> For example,
2014 Aug 12
2
[LLVMdev] [LNT] running LNT in 'the cloud'
In terms of cost, I thought an LNT instance would exhaust the free database rows much faster than the free dynos. The price for the small database was only 8 dollars a month, though.
> On Aug 12, 2014, at 1:23 PM, Renato Golin <renato.golin at linaro.org> wrote:
>
> Hi Chris,
>
> Nice setup!
>
>
> On 12 August 2014 19:01, Chris Matthews <chris.matthews at
2013 Jun 28
0
[LLVMdev] [LNT] Question about results reliability in LNT infrastructure
On 28 June 2013 14:06, David Tweed <david.tweed at arm.com> wrote:
> That's a viewpoint; another one is that statisticians might well have very
> good reasons why they spend so long coming up with statistical tests in
> order to create the most powerful tests so they can deal with marginal
> quantities of data.
>
87.35% of all statistics are made up, 55.12% of them could
2013 Jun 30
0
[LLVMdev] [LNT] Question about results reliability in LNT infrastructure
Hi Tobias,
> I trust your knowledge about statistics, but am wondering why ministat (and
> its t-test) is promoted as a statistically sane tool for benchmarking
> results.
I do not know... Ask the author of ministat?
> Is the use of the t-test for benchmark results a bad idea in
> general?
Not in general. But one should be aware of the assumptions of the
underlying theory.
2013 Jun 30
0
[LLVMdev] [LNT] Question about results reliability in LNT infrastructure
> Getting 10 samples at different commits will give you similar accuracy if
> behaviour doesn't change, and you can rely on 10-point blocks before and
> after each change to have the same result.
Right. But this way you will have a 10-commit delay. So you will need
3-4 additional test runs to pinpoint the offending commit in the worst
case.
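The 3-4 figure checks out: pinpointing one offender inside a 10-commit window by binary search needs at most ceil(log2(10)) probes.

```python
import math

window = 10  # commits between the last known-good and first known-bad run
worst_case_probes = math.ceil(math.log2(window))
```

So the delayed-detection scheme trades one window of latency for about four extra targeted runs, rather than ten.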
> This is why I proposed something like
2017 Aug 02
2
[LNT] new server instance http://lnt.llvm.org seems unstable
> On Aug 2, 2017, at 12:43 PM, Tobias Grosser <tobias.grosser at inf.ethz.ch> wrote:
>
> On Tue, Aug 1, 2017, at 00:33, Matthias Braun via llvm-dev wrote:
>> The run page problem were triggered by one of my commits (sorry) and
>> should be mitigated now, see the thread at
>> http://lists.llvm.org/pipermail/llvm-dev/2017-July/115971.html
>>
2013 Jul 01
0
[LLVMdev] [LNT] Question about results reliability in LNT infrastructure
On 06/23/2013 11:12 PM, Star Tan wrote:
> Hi all,
>
>
> When we compare two test runs, each of which has three samples, how would LNT show whether the comparison is reliable or not?
>
>
> I have seen that the function get_value_status in reporting/analysis.py uses a very simple algorithm to infer data status. For example, if abs(self.delta) <= (self.stddev *
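The quoted condition is cut off before the multiplier, so the following is only a sketch of the threshold logic in the spirit of get_value_status, with a made-up multiplier `k` standing in for whatever analysis.py actually uses:

```python
def value_status(delta, stddev, k=2.0):
    """Crude change detection: a delta within k standard deviations
    of the samples is treated as noise. k=2.0 is a placeholder; the
    real multiplier is elided in the quoted code above."""
    if abs(delta) <= stddev * k:
        return "unchanged"
    return "regressed" if delta > 0 else "improved"
```

With only three samples per run the stddev estimate itself is very noisy, which is precisely why the thread questions how reliable such a simple rule can be.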
2013 Jun 27
0
[LLVMdev] [LNT] Question about results reliability in LNT infrastructure
Hi Chris,
Amazing that someone is finally looking at that with a proper background.
You're much better equipped than I am to deal with that, so I'll trust your
judgement, as I haven't paid much attention to benchmarks, more to
correctness. Some comments inline.
On 27 June 2013 19:14, Chris Matthews <chris.matthews at apple.com> wrote:
> 1) Some benchmarks are bi-modal
2013 Jul 02
0
[LLVMdev] [LNT] Question about results reliability in LNT infrastructure
On 07/01/2013 09:41 AM, Renato Golin wrote:
> On 1 July 2013 02:02, Chris Matthews <chris.matthews at apple.com> wrote:
>
>> One thing that LNT does to help “smooth” the results for you is
>> presenting the min of the data at a particular revision, which (hopefully)
>> approximates the actual runtime without noise.
>>
>
> That's an
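The min-of-samples smoothing described above is trivial to sketch (the revision names and timings here are invented):

```python
def min_per_revision(samples_by_rev):
    """LNT-style smoothing: report the fastest sample per revision,
    on the theory that timing noise only ever adds time, so the
    minimum is the best estimate of the true runtime."""
    return {rev: min(samples) for rev, samples in samples_by_rev.items()}

runs = {"r1": [10.2, 10.0, 10.5], "r2": [10.1, 10.3, 10.0]}
smoothed = min_per_revision(runs)
```

The one-sided-noise assumption is the crux: it holds well for CPU-time measurements on a quiet machine, but less well when anything can make a run spuriously fast (e.g. a short-circuited benchmark).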
2013 Jun 30
0
[LLVMdev] [LNT] Question about results reliability in LNT infrastructure
Hi Tobi,
First of all, all this is http://llvm.org/bugs/show_bug.cgi?id=1367 :)
> The statistical test ministat is performing seems simple and pretty
> standard. Is there any reason we could not do something similar? Or are we
> doing it already and it just does not work as expected?
The main problem with this sort of test is that we cannot trust it, unless:
1. The data has the