Displaying 20 results from an estimated 36 matches for "0.0022".
2020 Aug 23
2
sum() vs cumsum() implicit type coercion
Hi
I noticed a small inconsistency when using sum() vs cumsum()
I have a char-based series
> tryjpy$long
[1] "0.0022" "-0.0002" "-0.0149" "-0.0023" "-0.0342" "-0.0245" "-0.0022"
[8] "0.0003" "-0.0001" "-0.0004" "-0.0036" "-0.001" "-0.0011"
2020 Aug 25
1
sum() vs cumsum() implicit type coercion
>>>>> Tomas Kalibera
>>>>> on Tue, 25 Aug 2020 09:29:05 +0200 writes:
> On 8/23/20 5:02 PM, Rory Winston wrote:
>> Hi
>>
>> I noticed a small inconsistency when using sum() vs cumsum()
>>
>> I have a char-based series
>>
>> > tryjpy$long
>>
>> [1]
2010 Jun 18
1
12th Root of a Square (Transition) Matrix
Dear R-tisans,
I am trying to calculate the 12th root of a transition (square) matrix, but can't seem to obtain an accurate result. I realize that this post is laced with intimations of quantitative finance, but the question is both R-related and broadly mathematical. That said, I'm happy to post this to R-SIG-Finance if I've erred in posting this to the general list.
I've
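One standard approach (a sketch, not necessarily what the poster tried; it assumes the matrix is diagonalizable) is to take the root through an eigendecomposition, P^(1/12) = V D^(1/12) V^(-1), and then check the result by raising it back to the 12th power.

# Sketch: k-th root of a transition matrix P via eigendecomposition.
# Complex eigenvalues can occur, so work in complex arithmetic and
# drop the (ideally tiny) imaginary parts at the end.
mat_root <- function(P, k = 12) {
  e <- eigen(P)
  root <- e$vectors %*% diag(as.complex(e$values)^(1 / k)) %*% solve(e$vectors)
  Re(root)
}

P <- matrix(c(0.9, 0.1,
              0.2, 0.8), nrow = 2, byrow = TRUE)   # toy transition matrix
P12 <- mat_root(P, 12)
max(abs(Reduce(`%*%`, replicate(12, P12, simplify = FALSE)) - P))  # should be ~ 0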
2020 Aug 25
0
sum() vs cumsum() implicit type coercion
On 8/23/20 5:02 PM, Rory Winston wrote:
> Hi
>
> I noticed a small inconsistency when using sum() vs cumsum()
>
> I have a char-based series
>
> > tryjpy$long
>
> [1] "0.0022" "-0.0002" "-0.0149" "-0.0023" "-0.0342" "-0.0245" "-0.0022"
>
> [8] "0.0003" "-0.0001"
2008 Feb 03
0
[LLVMdev] 2.2 Prerelease available for testing
Target: FreeBSD 6.2-STABLE on i386
autoconf says:
configure:2122: checking build system type
configure:2140: result: i386-unknown-freebsd6.2
[...]
configure:2721: gcc -v >&5
Using built-in specs.
Configured with: FreeBSD/i386 system compiler
Thread model: posix
gcc version 3.4.6 [FreeBSD] 20060305
[...]
objdir != srcdir, for both llvm and gcc.
Release build.
llvm-gcc 4.2 from source.
2008 Jan 24
6
[LLVMdev] 2.2 Prerelease available for testing
LLVMers,
The 2.2 prerelease is now available for testing:
http://llvm.org/prereleases/2.2/
If anyone can help test this release, I ask that you do the following:
1) Build llvm and llvm-gcc (or use a binary). You may build release
(default) or debug. You may pick llvm-gcc-4.0, llvm-gcc-4.2, or both.
2) Run 'make check'.
3) In llvm-test, run 'make TEST=nightly report'.
4) When
2010 Mar 09
0
Tukey test for Mixed Effects Model with more than 1 fixed effect?
I am trying to determine, via a post hoc (Tukey) test, which of my sites
differ from each other. I have 4 sites: 2 sets of In vs Out (MPA) in
separate Regions. Therefore my mixed effects model code has 2 fixed
effects:
CB.lme <- lme(AsinCB ~ In_Out * Region, random = ~1 | site.trans/Quadrat,
              data = Subsampled_props,
              control = lmeControl(maxIter = 500, msMaxIter = 500, msMaxEval = 500))
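One route that is often suggested for this kind of question (a sketch, not taken from the thread; it reuses the object names from the post and assumes the multcomp package is available) is to collapse the two fixed effects into a single four-level "site" factor and ask glht() for Tukey contrasts on it:

library(nlme)
library(multcomp)  # for glht() and mcp()

# Sketch: one factor with four levels (In/Out crossed with Region),
# so Tukey contrasts compare the four sites directly.
Subsampled_props$site4 <- interaction(Subsampled_props$In_Out,
                                      Subsampled_props$Region)

CB.lme2 <- lme(AsinCB ~ site4, random = ~1 | site.trans/Quadrat,
               data = Subsampled_props,
               control = lmeControl(maxIter = 500, msMaxIter = 500,
                                    msMaxEval = 500))

summary(glht(CB.lme2, linfct = mcp(site4 = "Tukey")))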
2008 Jan 28
0
[LLVMdev] 2.2 Prerelease available for testing
Target: FreeBSD 7.0-RC1 on amd64.
autoconf says:
configure:2122: checking build system type
configure:2140: result: x86_64-unknown-freebsd7.0
[...]
configure:2721: gcc -v >&5
Using built-in specs.
Target: amd64-undermydesk-freebsd
Configured with: FreeBSD/amd64 system compiler
Thread model: posix
gcc version 4.2.1 20070719 [FreeBSD]
[...]
objdir != srcdir, for both llvm and gcc.
Release
2011 Jun 22
1
caret's Kappa for categorical resampling
Hello,
When evaluating different learning methods for a categorization problem with
the (really useful!) caret package, I'm getting confusing results from the
Kappa computation. The data is about 20,000 rows and a few dozen columns,
and the categories are quite asymmetrical, 4.1% in one category and 95.9% in
the other. When I train a ctree model as:
model <- train(dat.dts,
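A sketch of one way to set that up (dat.dts is from the post; dat.class, the outcome vector, is a placeholder name): tell train() to optimise Kappa rather than Accuracy and look at the per-resample statistics, since with a 4%/96% split Accuracy alone is dominated by the majority class.

library(caret)

ctrl <- trainControl(method = "cv", number = 10)

model <- train(x = dat.dts, y = dat.class,   # dat.class: two-level factor outcome (placeholder)
               method = "ctree",
               metric = "Kappa",
               trControl = ctrl)

model$resample          # per-fold Accuracy and Kappa
confusionMatrix(model)  # resampled confusion matrix, with Kappa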
2009 Feb 08
0
Initial values of the parameters of a garch-Model
Dear all,
I'm using R 2.8.1 under Windows Vista on a dual-core 2.4 GHz machine with
4 GB of RAM.
I'm trying to reproduce a result out of "Analysis of Financial Time
Series" by Ruey Tsay.
In R I'm using the fGarch library.
After fitting an AR(3)-GARCH(1,1) model
> model<-garchFit(~arma(3,0)+garch(1,1), analyse)
I'm saving the results via
> result<-model
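Written out runnably (a sketch; 'analyse' is assumed to be the numeric return series used in the post), the fit and the pieces usually compared against Tsay's published estimates are:

library(fGarch)

model <- garchFit(~ arma(3, 0) + garch(1, 1), data = analyse, trace = FALSE)
result <- model

coef(result)     # estimated AR and GARCH parameters
summary(result)  # parameter estimates with standard errors and tests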
2008 Jul 30
1
Mixed effects model where nested factor is not repeated across treatments (lme)?
Hi,
I have searched the archives and can't quite confirm the answer to this.
I appreciate your time...
I have 4 treatments (fixed) and I would like to know if there is a
significant difference in metal volume (metal) between the treatments.
The experiment has 5 blocks (random) in each treatment and no block is
repeated across treatments. Within each plot there are varying numbers
of
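A minimal sketch of the model described (column names metal, treatment, and block are assumptions based on the wording above): since each block occurs in only one treatment, blocks are nested within treatments, and a simple random intercept for block captures that structure.

library(nlme)

fit <- lme(metal ~ treatment, random = ~1 | block, data = mydata)  # mydata: placeholder data frame
anova(fit)     # F-test for the treatment effect
summary(fit)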
2017 Dec 20
2
outlining (highlighting) pixels in ggplot2
Using the small reproducible example below, I'd like to know if one can
somehow use the matrix "sig" (defined below) to add a black outline (with
lwd=2) to all pixels with a corresponding value of 1 in the matrix 'sig'?
So for example, in the ggplot2 plot below, the pixel located at [1,3] would
be outlined by a black square since the value at sig[1,3] == 1. This is my
first
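One way to get that effect (a sketch; the small matrices below stand in for the post's reproducible example) is to draw the heat map with geom_tile() and overlay a second geom_tile() layer, restricted to the cells where sig == 1, with no fill and a black border.

library(ggplot2)

# Stand-ins for the post's objects: a value matrix and a 0/1 matrix 'sig'
vals <- matrix(runif(12), nrow = 3)
sig  <- matrix(0, nrow = 3, ncol = 4); sig[1, 3] <- 1

d <- expand.grid(row = 1:nrow(vals), col = 1:ncol(vals))
d$value <- as.vector(vals)
d$sig   <- as.vector(sig)

ggplot(d, aes(col, row, fill = value)) +
  geom_tile() +
  geom_tile(data = subset(d, sig == 1),
            fill = NA, colour = "black", linewidth = 1)  # 'size' in older ggplot2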
2012 Nov 23
2
[LLVMdev] [cfe-dev] costing optimisations
On 23.11.2012, at 15:12, john skaller <skaller at users.sourceforge.net> wrote:
>
> On 23/11/2012, at 5:46 PM, Sean Silva wrote:
>
>> Adding LLVMdev, since this is intimately related to the optimization passes.
>>
>>> I think this is roughly because some function level optimisations are
>>> worse than O(N) in the number of instructions.
>>
2015 Feb 26
5
[LLVMdev] [RFC] AArch64: Should we disable GlobalMerge?
Hi all,
I've started looking at the GlobalMerge pass, enabled by default on
ARM and AArch64. I think we should reconsider that, at least for
AArch64.
As is, the pass just merges all globals together, in groups of 4KB
(AArch64, 128B on ARM).
At the time it was enabled, the general thinking was "it's almost
free, it doesn't affect performance much, we might as well use it".
2005 Jul 01
0
[LLVMdev] execution time of bytecode and native
On Thu, 30 Jun 2005, Tanu Sharma wrote:
> I am compiling SPEC 2000 benchmarks with LLVM. I got stuck with
> calculating the "execution time" of all the .bc and native files.
>
> The log for the nightly test itself gives execution times, but I am passing
> the bytecode files to my pass, which produces another bytecode file. I have
> to calculate the execution time of such bytecode and
2011 Apr 09
2
[LLVMdev] dragonegg/llvm-gfortran/gfortran benchmarks
With the case-insensitive file system patch from http://llvm.org/bugs/show_bug.cgi?id=9656#c15
applied to dragonegg 2.9, the following Polyhedron 2005 benchmarks are seen on x86_64-apple-darwin10
under gcc 4.5.3svn using the dragonegg plugin...
================================================================================
Date & Time : 8 Apr 2011 19:52:56
Test Name :
2005 Jul 01
1
[LLVMdev] execution time of bytecode and native
Hello,
I am compiling SPEC 2000 benchmarks with LLVM. I got stuck with calculating the "execution time" of all the .bc and native files.
The log for the nightly test itself gives execution times, but I am passing the bytecode files to my pass, which produces another bytecode file. I have to calculate the execution time of such bytecode and native files as well. If I simply do this:
time lli
2005 Jul 21
1
[LLVMdev] execution time of bytecode and native
Hello All,
Thanks for the reply. I can generate the reports by compiling SPEC through LLVM, but that does not resolve my problem.
I am trying to determine the execution time for the bytecode and native files, which are obtained as a result of running my pass over the original bytecode. I am running these experiments on the SPEC benchmark.
In SPEC we have command line tools such as runspec where
2009 Nov 20
1
Help with multiple comparisons on a 2-way repeated measures ANOVA
Hi everyone,
I'm trying to do a 2-way repeated measures ANOVA with data that looks like
this:
subject block rep day light response
1 1 1 one L1 5.5
2 1 2 one L1 4.5
3 1 1 one L2 4
4 1 2 one L2 5.1
5 2 1 one L1 5.3
6 2 2 one L1
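For data in that shape, one standard formulation (a sketch; it assumes the data frame is called dat and that day and light are the two within-subject factors) is aov() with an Error() term for subject:

# Sketch: 2-way repeated measures ANOVA with subject as the error stratum
dat$subject <- factor(dat$subject)
dat$day     <- factor(dat$day)
dat$light   <- factor(dat$light)

fit <- aov(response ~ day * light + Error(subject/(day * light)), data = dat)
summary(fit)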
2011 Apr 30
0
bootcov or robcov for odds ratio?
Dear list,
I made a logistic regression model (MyModel) using lrm with penalization
by pentrace, on data from 104 patients consisting of 5 explanatory
variables and one binary outcome (poor/good). Then I found the bootcov and
robcov functions in the rms package for calculating confidence intervals
for the coefficients and odds ratios via a bootstrap covariance matrix and
the Huber-White sandwich method,
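A sketch of how those two functions are typically called (the predictor names and the data frame are placeholders; the key points are that the model must be fitted with x = TRUE, y = TRUE for bootcov() and robcov() to work, and that summary() needs a datadist for the odds-ratio table):

library(rms)

dd <- datadist(patients); options(datadist = "dd")   # 'patients': placeholder data frame

MyModel <- lrm(outcome ~ x1 + x2 + x3 + x4 + x5,
               data = patients, x = TRUE, y = TRUE)

boot.fit <- bootcov(MyModel, B = 200)  # bootstrap covariance matrix
rob.fit  <- robcov(MyModel)            # Huber-White sandwich estimator

summary(boot.fit)  # effects and odds ratios with bootstrap-based CIs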