search for: 0.0111

Displaying 19 results from an estimated 19 matches for "0.0111".

2011 Jan 07
1
Currency return calculations
Dear sir, I am extremely sorry for messing up the logic while asking for help in my earlier mails. I have tried to explain below what I am looking for. I have a database (say, currency_rates) storing datewise currency exchange rates against a base currency XYZ.   currency_rates <- data.frame(date = c("12/31/2010", "12/30/2010", "12/29/2010",
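A minimal sketch of one way to compute day-over-day returns from such a table; the column name rate, the example values, and the ordering step are assumptions for illustration, not the poster's actual data.

# hypothetical datewise exchange rates against a base currency
currency_rates <- data.frame(
  date = as.Date(c("12/31/2010", "12/30/2010", "12/29/2010"), format = "%m/%d/%Y"),
  rate = c(44.71, 44.93, 44.81)
)
currency_rates <- currency_rates[order(currency_rates$date), ]   # oldest first
# simple returns: rate_t / rate_{t-1} - 1
currency_rates$return <- c(NA, diff(currency_rates$rate) / head(currency_rates$rate, -1))
currency_rates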
2011 Mar 16
2
Removing Bad Data
I created a couple of timeSeries objects. When I was merging them, I got an error. Looking at the data, I see that one of the time series has:
06/30/2007  0.0028  0.0183  0.0122  0.0042  0.0095  -
07/31/2007 -0.0111  0.0255  0.0096 -0.0069 -0.0024  0.0043
08/31/2007 -0.0108 -0.0237 -0.0062 -0.0138 -0.0173 -0.0065
09/30/2007
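If the stray "-" entries are what break the merge, a rough sketch of one way to clean them before merging; the vector below is made up and stands in for one problem column.

# coerce a column containing "-" placeholders to numeric; "-" becomes NA
raw  <- c("0.0028", "-0.0111", "-", "-0.0108")
vals <- suppressWarnings(as.numeric(raw))
vals[!is.na(vals)]   # keep only the usable values (or use na.omit on the series)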
2011 Jan 07
0
Odp: Currency return calculations
My mistake sir. I was literally engrossed in my stupid logic, and while doing so, overlooked the simple and very effective solution you had offered. Sorry once again sir and will certainly try to be very careful in future. Thanks again and have a great weekend sir. Regards Amelia --- On Fri, 7/1/11, Petr PIKAL <petr.pikal@precheza.cz> wrote: From: Petr PIKAL
2006 Sep 22
0
$theta of frailty in coxph
Dear all, Does frailty.object$history[[1]]$theta return the variance of the random effect? Why is the value different? Here is an example with the kidney data:
> library(survival)
> data(kidney)
> frailty.object <- coxph(Surv(time, status) ~ age + sex + disease + frailty(id), kidney)
> frailty.object
Call: coxph(formula = Surv(time, status) ~ age + sex + disease + frailty(id), data
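Not asserting why the two numbers differ, but a hedged sketch of how to look at both quantities side by side, reusing the code already in the post:

library(survival)
data(kidney)
fit <- coxph(Surv(time, status) ~ age + sex + disease + frailty(id), data = kidney)
fit                        # printed output reports "Variance of random effect"
fit$history[[1]]$theta     # value stored in the frailty iteration history
str(fit$history[[1]])      # inspect everything kept for the frailty term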
2012 Aug 09
1
Factor moderators in metafor
I'm puzzled by the behaviour of factors in rma models; see the example and comments below. I'm sure there's a simple explanation but I can't see it... Thanks for any input. John Hodgson
------------------------------------- code/selected output -----------------
library(metafor)
## Set up data (from Lenters et al A Meta-analysis of Asbestos and Lung Cancer... ##
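For context, a hedged toy example of how a factor moderator is usually passed to rma(); the data below are invented and unrelated to the Lenters et al. meta-analysis.

library(metafor)
dat <- data.frame(yi  = c(0.2, 0.5, 0.1, 0.4, 0.3, 0.6),
                  vi  = c(0.02, 0.03, 0.02, 0.04, 0.03, 0.02),
                  grp = factor(c("a", "a", "b", "b", "c", "c")))
rma(yi, vi, mods = ~ grp, data = dat)       # intercept plus contrasts against the reference level
rma(yi, vi, mods = ~ grp - 1, data = dat)   # one estimate per level, no intercept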
2007 Sep 18
0
[LLVMdev] 2.1 Pre-Release Available (testers needed)
On Fri, Sep 14, 2007 at 11:42:18PM -0700, Tanya Lattner wrote:
> The 2.1 pre-release (version 1) is available for testing:
> http://llvm.org/prereleases/2.1/version1/
> [...]
> 2) Download llvm-2.1, llvm-test-2.1, and the llvm-gcc4.0 source. Compile everything. Run "make check" and the full llvm-test suite (make TEST=nightly report).
> Send
2008 Oct 07
0
Algorithm = "port" convergence codes
Hello all, I am fitting a Gamma distribution to some data I have using nls(). The function obviously runs into issues when a parameter value hits 0. I understand algorithm = "port" can be used together with lower bounds to prevent this from happening. When I run the code I get the following: Algorithm "port", convergence message: relative convergence (4). I have
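A hedged sketch of the algorithm = "port" pattern with explicit lower bounds so parameters stay strictly positive; the model and data below are a toy stand-in, not the poster's Gamma fit.

set.seed(1)
x <- 1:50
y <- 5 * exp(-0.1 * x) + rnorm(50, sd = 0.2)
fit <- nls(y ~ a * exp(-b * x),
           start = list(a = 4, b = 0.05),
           algorithm = "port",
           lower = c(a = 1e-6, b = 1e-6))   # keep a and b away from 0
summary(fit)
fit$convInfo   # port convergence code and message, e.g. "relative convergence (4)"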
2006 Jul 04
0
Who can explain the difference between R and SAS in the results of GLM?
Dear friends, I used R and SAS to analyze my data with a generalized linear model, and there is some difference between them. Results from R:
glm(formula = snail ~ grass + gheight + humidity + altitude + soiltemr + airtemr, family = Gamma)
Deviance Residuals:
     Min       1Q   Median       3Q      Max
-1.23873 -0.41123 -0.08703  0.24339  1.21435
Coefficients:
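One frequent source of R-versus-SAS differences with a Gamma GLM is the link function (R's Gamma() defaults to the inverse link), so it can help to state it explicitly in both packages. A hedged sketch on made-up data:

set.seed(1)
d <- data.frame(gheight = runif(100, 5, 50), humidity = runif(100, 30, 90))
d$snail <- rgamma(100, shape = 2, rate = 2 / (0.1 * d$gheight + 0.02 * d$humidity))
fit_inv <- glm(snail ~ gheight + humidity, family = Gamma(link = "inverse"), data = d)
fit_log <- glm(snail ~ gheight + humidity, family = Gamma(link = "log"), data = d)
coef(fit_inv)
coef(fit_log)   # coefficients on a different scale than the inverse-link fit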
2002 Jul 11
1
nls() singular gradient matrix error
R-helpers: I used Proc Model in SAS to fit the following model to data:
proc model data = dbsmv;
  a = a1*F**2;
  b = b1*F + b2*T + b3*F*T;
  tph2 = tph1 * ((1 - exp(-a*age2)) / (1 - exp(-a*age)))**-b;
  fit tph2;
which yielded the following estimated parameters after iterations: a1 = -0.15943, a2 = -1.8177, b1 = -0.01911, b2
2013 Feb 15
0
CVlim
Can anyone help explain to me why the two pieces of code below give different results? I thought I could use log(time) ~ . to replace log(time) ~ dist + climb + timef. I am using CVlm from the DAAG package. I think nihills is preloaded with the package. Thanks in advance.
> CVlm(df = nihills, form.lm = formula(log(time) ~ .), plotit = "Observed", m = 2)
Analysis of Variance Table
Response: log(time)
          Df Sum Sq
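A hedged way to check whether the two formulas really describe the same model is to see how the dot gets expanded against the data; nihills and CVlm come with DAAG, as in the post.

library(DAAG)
formula(terms(log(time) ~ ., data = nihills))   # what "." expands to for this data frame
log(time) ~ dist + climb + timef                # the explicit formula from the post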
2006 Jun 13
2
Garch Warning
Dear all R-users, I wanted to fit a GARCH(1,1) model to a dataset with:
> garch1 = garch(na.omit(dat))
But I got a warning message while executing it:
> Warning message:
> NaNs produced in: sqrt(pred$e)
The garch parameters that I got are:
> garch1
Call: garch(x = na.omit(dat))
Coefficient(s):
        a0        a1        b1
 1.212e-04 1.001e+00 1.111e-14
Can anyone
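The garch() here looks like tseries::garch(), whose default order is already c(1, 1); a minimal hedged sketch on a simulated series with the order written out, since the post's dat is not available.

library(tseries)
set.seed(1)
x <- rnorm(500)                             # placeholder series
fit <- garch(na.omit(x), order = c(1, 1))   # explicit GARCH(1,1); c(1, 1) is also the default
summary(fit)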
2008 Sep 19
1
Type I SS and Type III SS problem
Dear all: I am new to R. I have some problems when I use the anova function. I use the anova function to get Type I SS results, but I also need Type III SS results. However, in my code, there is some difference between the result of Type I SS and that of Type III SS. I don't know why the "seqe" factor disappeared in the result of Type III SS. What can I do? Here is my example and result.
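A hedged sketch of one common route to Type III sums of squares in R: sum-to-zero contrasts plus car::Anova(). The toy data below reuse the post's factor name seqe but are otherwise invented.

library(car)
set.seed(1)
d <- data.frame(seqe = factor(rep(c("s1", "s2"), each = 20)),
                trt  = factor(rep(c("a", "b"), times = 20)),
                y    = rnorm(40))
m <- lm(y ~ seqe * trt, data = d,
        contrasts = list(seqe = "contr.sum", trt = "contr.sum"))
anova(m)                # Type I (sequential) SS
Anova(m, type = "III")  # Type III SS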
2007 Aug 31
3
Choosing the optimum lag order of ARIMA model
Dear all R users, I am really struggling to determine the most appropriate lag order of an ARIMA model. My understanding is that for an MA(q) model the autocorrelation coefficient vanishes after q lags, which indicates the MA order of an ARIMA model, and for an AR(p) model the partial autocorrelation vanishes after p lags, which helps determine the AR lag. The most appropriate model chosen by this argument gives
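A hedged sketch of that ACF/PACF rule of thumb on a simulated series, plus an AIC comparison over a small (p, q) grid as a complementary check; the simulated model is arbitrary.

set.seed(1)
x <- arima.sim(model = list(ar = 0.6, ma = 0.3), n = 300)
acf(x)    # MA order q: lag after which the autocorrelations cut off
pacf(x)   # AR order p: lag after which the partial autocorrelations cut off
aics <- outer(0:2, 0:2,
              Vectorize(function(p, q) AIC(arima(x, order = c(p, 0, q)))))
dimnames(aics) <- list(paste0("p=", 0:2), paste0("q=", 0:2))
aics      # smaller AIC suggests a better trade-off of fit and complexity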
2005 Dec 12
2
convergence error (lme) which depends on the version of nlme (?)
Dear list members, the following hlm was constructed:
hlm <- groupedData(laut ~ design | grpzugeh, data = imp.not.I)
The grouped data object can be downloaded from: www.anicca-vijja.de/lg/hlm_example.Rdata
The following works:
library(nlme)
summary( fitlme <- lme(hlm) )
with output:
...
      AIC      BIC    logLik
 425.3768 465.6087 -197.6884
Random effects:
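When a model converges under one nlme version but not another, a common hedged workaround is to loosen the control settings or switch the optimizer via lmeControl; the toy grouped data below only mimic the structure of the post's hlm object.

library(nlme)
set.seed(1)
g      <- factor(rep(1:10, each = 8))
design <- rep(1:4, times = 20)
laut   <- 2 + rnorm(10, sd = 1)[g] + (0.5 + rnorm(10, sd = 0.3)[g]) * design + rnorm(80, sd = 0.5)
gd  <- groupedData(laut ~ design | g, data = data.frame(laut, design, g))
fit <- lme(gd, control = lmeControl(maxIter = 200, msMaxIter = 200, opt = "optim"))
summary(fit)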
2007 Sep 15
22
[LLVMdev] 2.1 Pre-Release Available (testers needed)
LLVMers, The 2.1 pre-release (version 1) is available for testing: http://llvm.org/prereleases/2.1/version1/ I'm looking for members of the LLVM community to test the 2.1 release. There are 2 ways you can help:
1) Download llvm-2.1, llvm-test-2.1, and the appropriate llvm-gcc4.0 binary. Run "make check" and the full llvm-test suite (make TEST=nightly report).
2) Download
2012 Nov 23
2
[LLVMdev] [cfe-dev] costing optimisations
On 23.11.2012, at 15:12, john skaller <skaller at users.sourceforge.net> wrote:
> On 23/11/2012, at 5:46 PM, Sean Silva wrote:
>> Adding LLVMdev, since this is intimately related to the optimization passes.
>>> I think this is roughly because some function level optimisations are worse than O(N) in the number of instructions.
>>
2008 Feb 03
0
[LLVMdev] 2.2 Prerelease available for testing
Target: FreeBSD 6.2-STABLE on i386
autoconf says:
configure:2122: checking build system type
configure:2140: result: i386-unknown-freebsd6.2
[...]
configure:2721: gcc -v >&5
Using built-in specs.
Configured with: FreeBSD/i386 system compiler
Thread model: posix
gcc version 3.4.6 [FreeBSD] 20060305
[...]
objdir != srcdir, for both llvm and gcc. Release build. llvm-gcc 4.2 from source.
2015 Feb 26
5
[LLVMdev] [RFC] AArch64: Should we disable GlobalMerge?
Hi all, I've started looking at the GlobalMerge pass, enabled by default on ARM and AArch64. I think we should reconsider that, at least for AArch64. As is, the pass just merges all globals together, in groups of 4KB (AArch64, 128B on ARM). At the time it was enabled, the general thinking was "it's almost free, it doesn't affect performance much, we might as well use it".
2008 Jan 24
6
[LLVMdev] 2.2 Prerelease available for testing
LLVMers, The 2.2 prerelease is now available for testing: http://llvm.org/prereleases/2.2/ If anyone can help test this release, I ask that you do the following:
1) Build llvm and llvm-gcc (or use a binary). You may build release (default) or debug. You may pick llvm-gcc-4.0, llvm-gcc-4.2, or both.
2) Run 'make check'.
3) In llvm-test, run 'make TEST=nightly report'.
4) When