Displaying 20 results from an estimated 30 matches for "0.0081".
2017 Mar 17
7
Saving Compile Time in InstCombine
Hi,
One of the most time-consuming passes in the LLVM middle-end is InstCombine (see e.g. [1]). It is a very powerful pass capable of doing all the crazy stuff, and new patterns are constantly being introduced there. The problem is that we often use it just as a clean-up pass: it's scheduled 6 times in the current pass pipeline, and each time it's invoked it checks all known patterns. It
2011 Nov 24
2
Question on density values obtained from kde2d() from package MASS
Hello,
I am a little bit confused regarding the density values obtained from the function kde2d() from the package MASS, because they are not in the interval [0,1] as I would expect them to be. Here is an example:
x <- c(0.0036,0.0088,0.0042,0.0022,-0.0013,0.0007,0.0028,-0.0028,0.0019,0.0026,-0.0029,-0.0081,-0.0024,0.0090,0.0088,0.0038,0.0022,0.0068,0.0089,-0.0015,-0.0062,0.0066)
y <-
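The y vector is cut off above, but the answer does not depend on it: kde2d() returns kernel density estimates, not probabilities, so individual z values can be far larger than 1; only the integral of the density over the plane is 1. A minimal sketch, with a made-up y standing in for the missing one:

  library(MASS)
  x <- c(0.0036, 0.0088, 0.0042, 0.0022, -0.0013, 0.0007, 0.0028, -0.0028,
         0.0019, 0.0026, -0.0029, -0.0081, -0.0024, 0.0090, 0.0088, 0.0038,
         0.0022, 0.0068, 0.0089, -0.0015, -0.0062, 0.0066)
  set.seed(1)
  y <- rnorm(length(x), sd = 0.005)          # hypothetical y; the original post is truncated
  d <- kde2d(x, y, n = 100,
             lims = c(range(x) + c(-0.02, 0.02), range(y) + c(-0.02, 0.02)))
  max(d$z)                                   # much larger than 1: these are densities
  dx <- diff(d$x[1:2]); dy <- diff(d$y[1:2])
  sum(d$z) * dx * dy                         # close to 1 once the grid covers the support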
2017 Mar 18
4
Saving Compile Time in InstCombine
On 03/17/2017 04:30 PM, Mehdi Amini via llvm-dev wrote:
>
>> On Mar 17, 2017, at 11:50 AM, Mikhail Zolotukhin via llvm-dev
>> <llvm-dev at lists.llvm.org <mailto:llvm-dev at lists.llvm.org>> wrote:
>>
>> Hi,
>>
>> One of the most time-consuming passes in the LLVM middle-end is
>> InstCombine (see e.g. [1]). It is a very powerful pass capable
2012 Jun 20
2
[LLVMdev] Exception handling slowdown?
Did something change with exception handling recently? A bunch of lit bots are
showing slower compile times for many tests.
Ciao, Duncan.
On 20/06/12 07:53, llvm-testresults at cs.uiuc.edu wrote:
>
> lab-mini-03__O0-g__clang_DEV__x86_64 test results
> <http://llvm.org/perf/db_default/v4/nts/1283?compare_to=1278&baseline=999>
>
> Run Order Start Time Duration
>
2017 Mar 20
2
Saving Compile Time in InstCombine
> On Mar 17, 2017, at 6:12 PM, David Majnemer <david.majnemer at gmail.com> wrote:
>
> Honestly, I'm not a huge fan of this change as-is. The set of transforms that were added behind ExpensiveChecks seems awfully strange and many would not lead the reader to believe that they are expensive at all (the SimplifyDemandedInstructionBits and foldICmpUsingKnownBits calls being the
2012 Jun 25
0
[LLVMdev] Exception handling slowdown?
Nothing that I'm aware of has changed with EH. Is it possible to bisect the problem?
-bw
On Jun 20, 2012, at 12:38 AM, Duncan Sands <baldrick at free.fr> wrote:
> Did something change with exception handling recently? A bunch of lit bots are
> showing slower compile times for many tests.
>
> Ciao, Duncan.
>
> On 20/06/12 07:53, llvm-testresults at cs.uiuc.edu
2012 Jul 05
2
[LLVMdev] Exception handling slowdown?
Hi Bill,
> Nothing that I'm aware of has changed with EH. Is it possible to bisect the problem?
I don't see any relevant LLVM changes, so I guess clang C++ compilation slowed
down due to some clang changes. I'm not going to investigate this.
Ciao, Duncan.
>
> -bw
>
> On Jun 20, 2012, at 12:38 AM, Duncan Sands <baldrick at free.fr> wrote:
>
>> Did
2005 Dec 06
3
reading in data with variable length
I have very large CSV files (up to 1 GB each of ASCII text). I'd like to be able to read them directly into R. The problem I am having is with the variable length of the data in each record.
Here's a (simplified) example:
$ cat foo.csv
Name,Start Month,Data
Foo,10,-0.5615,2.3065,0.1589,-0.3649,1.5955
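Because the record lengths vary, read.csv() alone will complain about rows with differing numbers of fields. One possible approach (a sketch against the foo.csv example above, not from the original thread) is to find the widest record and let read.table() pad the shorter ones with NA:

  path <- "foo.csv"                                          # the example file shown above
  n_fields <- max(count.fields(path, sep = ",", skip = 1))   # widest data record
  dat <- read.table(path, sep = ",", skip = 1, fill = TRUE,
                    col.names = c("Name", "Start.Month",
                                  paste0("D", seq_len(n_fields - 2))),
                    stringsAsFactors = FALSE)

count.fields() makes an extra pass over the file, which is slow for 1 GB of text but only needs to be done once per file.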
2002 Jun 21
1
lme: anova vs. intervals
Windows 2000 (v.5.00.2195), R 1.5.1
I have an lme object, fm0, which I examine with anova() and intervals().
The anova output indicates one of the interaction terms is significant, but
the intervals output shows that the single parameter for that term includes
0.0 in its 95% CI. I believe that the anova is a conditional (sequential)
test; is intervals marginal or approximate? Which should I
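anova.lme() in nlme can report both flavours, which makes the comparison explicit; a minimal sketch using the built-in Orthodont data rather than the poster's fm0:

  library(nlme)
  fm0 <- lme(distance ~ age * Sex, random = ~ 1 | Subject, data = Orthodont)
  anova(fm0)                       # sequential (conditional) tests -- the default
  anova(fm0, type = "marginal")    # marginal tests
  intervals(fm0, which = "fixed")  # Wald-type 95% CIs, in line with the marginal view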
2009 Feb 08
0
Initial values of the parameters of a garch-Model
Dear all,
I'm using R 2.8.1 under Windows Vista on a dual-core 2.4 GHz machine with 4 GB
of RAM.
I'm trying to reproduce a result out of "Analysis of Financial Time
Series" by Ruey Tsay.
In R I'm using the fGarch library.
After fitting a ar(3)-garch(1,1)-model
> model<-garchFit(~arma(3,0)+garch(1,1), analyse)
I'm saving the results via
> result<-model
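The message is cut off here. For context, a self-contained version of the fitting call it quotes, with simulated data standing in for the book's series 'analyse':

  library(fGarch)
  set.seed(1)
  analyse <- rnorm(1000, sd = 0.01)        # hypothetical data; not Tsay's series
  model <- garchFit(~ arma(3, 0) + garch(1, 1), data = analyse, trace = FALSE)
  result <- model
  coef(result)                             # fitted AR and GARCH parameters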
2017 Mar 21
2
Saving Compile Time in InstCombine
> On Mar 17, 2017, at 6:12 PM, David Majnemer via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>
> Honestly, I'm not a huge fan of this change as-is. The set of transforms that were added behind ExpensiveChecks seems awfully strange and many would not lead the reader to believe that they are expensive at all (the SimplifyDemandedInstructionBits and foldICmpUsingKnownBits calls
2007 Sep 18
0
[LLVMdev] 2.1 Pre-Release Available (testers needed)
On Fri, Sep 14, 2007 at 11:42:18PM -0700, Tanya Lattner wrote:
> The 2.1 pre-release (version 1) is available for testing:
> http://llvm.org/prereleases/2.1/version1/
>
> [...]
>
> 2) Download llvm-2.1, llvm-test-2.1, and the llvm-gcc4.0 source.
> Compile everything. Run "make check" and the full llvm-test suite
> (make TEST=nightly report).
>
> Send
2012 Jul 06
0
[LLVMdev] Exception handling slowdown?
On Jul 5, 2012, at 1:33 AM, Duncan Sands wrote:
> Hi Bill,
>
>> Nothing that I'm aware of has changed with EH. Is it possible to bisect the problem?
>
> I don't see any relevant LLVM changes, so I guess clang C++ compilation slowed
> down due to some clang changes. I'm not going to investigate this.
>
Crumbs.
John, do you know of anything that went into
2017 Mar 22
3
Saving Compile Time in InstCombine
> To (hopefully) make it easier to answer this question, I've posted my
> (work-in-progress) patch which adds a known-bits cache to InstCombine.
> I rebased it yesterday, so it should be fairly easy to apply:
> https://reviews.llvm.org/D31239 - Seeing what this does to the performance
> of the benchmarks mentioned in this thread (among others) would certainly
> be interesting.
Thanks! I
2005 Jul 01
0
[LLVMdev] execution time of bytecode and native
On Thu, 30 Jun 2005, Tanu Sharma wrote:
> I am compiling SPEC 2000 benchmarks with llvm. Got stuck with
> calculating "execution time" of all the .bc and native files.
>
> The log for the nightly test itself gives execution times, but I am passing
> the bytecode files to my pass, which gives another bytecode file. I have
> to calculate execution time of such bytecode and
2005 Jul 01
1
[LLVMdev] execution time of bytecode and native
Hello ,
I am compiling SPEC 2000 benchmarks with llvm. Got stuck with calculating "execution time" of all the .bc and native files.
The log for the nightly test itself gives execution times, but I am passing the bytecode files to my pass, which gives another bytecode file. I have to calculate execution time of such bytecode and native files as well. If I simply do this:
time lli
2005 Jul 21
1
[LLVMdev] execution time of bytecode and native
Hello All,
Thanks for the reply. I can generate the reports by compiling SPEC through llvm, but that didn't resolve my problem.
I'm trying to determine the execution time of the bytecode and native files which are obtained as a result of running my pass over the original bytecode. I am running these experiments on the SPEC benchmarks.
In SPEC we have command line tools such as runspec where
2002 Jul 23
0
Comparing slopes of several linear models
Dear all
I have the following data (a shortened extract
shown; some replicates of time deleted) to which I
fitted the linear model given below:
time group mass
11 control 0.019
11 control 0.014
14 control 0.0306
14 control 0.0289
14 control 0.0236
17 control 0.0469
17 control 0.0709
11 five 0.0077
11 five
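The extract stops mid-row and the fitted model itself is cut off, but the usual way to compare slopes across groups is a time:group interaction; a sketch with made-up data of the same shape as the extract:

  set.seed(1)
  dat <- data.frame(
    time  = rep(c(11, 14, 17), each = 3, times = 2),
    group = rep(c("control", "five"), each = 9),
    mass  = c(rnorm(9, mean = 0.03, sd = 0.01), rnorm(9, mean = 0.01, sd = 0.005))
  )
  fit <- lm(mass ~ time * group, data = dat)
  anova(fit)                                       # the time:group row tests equality of slopes
  summary(fit)$coefficients["time:groupfive", ]    # estimated difference in slope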
2011 Jul 24
2
[LLVMdev] [llvm-testresults] bwilson__llvm-gcc_PROD__i386 nightly tester results
A big compile time regression. Any ideas?
Ciao, Duncan.
On 22/07/11 19:13, llvm-testresults at cs.uiuc.edu wrote:
>
> bwilson__llvm-gcc_PROD__i386 nightly tester results
>
> URL http://llvm.org/perf/db_default/simple/nts/253/
> Nickname bwilson__llvm-gcc_PROD__i386:4
> Name curlew.apple.com
>
> Run ID Order Start Time End Time
> Current 253 0 2011-07-22 16:22:04
2003 Apr 21
2
Anyone Familiar with Using arima function with exogenous variables?
I've posted this before but have not been able to locate what I'm doing
wrong. I cannot determine how the forecast is made using the estimated
coefficients from a simple AR(2) model when there is an exogenous
variable. Does anyone know what the problem is? The help file for arima
doesn't show the model with any exogenous variables. I haven't been able
to locate any documents
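A frequent source of confusion here is that arima() with xreg fits a regression with ARMA errors rather than a true ARMAX, so the forecast is the regression part plus an AR(2) forecast of the error term. A minimal sketch with simulated data (not the poster's series):

  set.seed(1)
  xreg <- rnorm(105)
  y    <- arima.sim(list(ar = c(0.5, -0.2)), n = 100) + 2 * xreg[1:100]
  fit  <- arima(y, order = c(2, 0, 0), xreg = xreg[1:100])
  coef(fit)                                        # ar1, ar2, intercept, xreg coefficient
  # forecast = intercept + xreg coefficient * new x + AR(2) forecast of the error
  predict(fit, n.ahead = 5, newxreg = xreg[101:105])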