similar to: how to remove time series trend in R?

Displaying 11 results from an estimated 11 matches similar to: "how to remove time series trend in R?"

2011 Jun 28
2
gam confidence interval (package mgcv)
Dear R-helpers, I am trying to construct a confidence interval on a prediction from a gam fit. I have the Wood (2006) book, and section 5.2.7 seems relevant, but I am not able to apply it to this different problem. Any help is appreciated! Basically I have a function Y = f(X) for two different treatments A and B. I am interested in the treatment ratios: Y(treatment = B) / Y(treatment = A) as
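
One route that is often suggested for gam ratio intervals (not necessarily what Wood (2006) section 5.2.7 has in mind) is to simulate coefficient vectors from their approximate posterior and form the ratio for every draw. The sketch below does that with made-up data; the names X, Y and treatment are only stand-ins for the poster's.

library(mgcv)
library(MASS)   # mvrnorm, for simulating from the coefficient posterior

## Hypothetical data: Y depends smoothly on X, with a treatment effect
set.seed(1)
d <- data.frame(X = runif(200, 0, 10),
                treatment = factor(rep(c("A", "B"), each = 100)))
d$Y <- exp(0.1 * d$X + 0.3 * (d$treatment == "B")) + rnorm(200, sd = 0.1)

fit <- gam(Y ~ s(X, by = treatment) + treatment, data = d)

## Same X grid under both treatments
grid <- seq(1, 9, length.out = 50)
ndA <- data.frame(X = grid, treatment = factor("A", levels = levels(d$treatment)))
ndB <- data.frame(X = grid, treatment = factor("B", levels = levels(d$treatment)))

## Linear-predictor matrices and posterior draws of the coefficients
XA <- predict(fit, ndA, type = "lpmatrix")
XB <- predict(fit, ndB, type = "lpmatrix")
beta <- mvrnorm(1000, coef(fit), vcov(fit))

## Ratio Y(B)/Y(A) for each draw, then pointwise 95% limits along the X grid
ratio <- (XB %*% t(beta)) / (XA %*% t(beta))
ci <- apply(ratio, 1, quantile, probs = c(0.025, 0.975))
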
2004 Jul 20
1
Performance problem
Dear all, I have a performance problem in terms of computing time. I estimate mixed models on a fairly large number of subgroups (10000) using lme(.) within the by(.) function and it takes hours to do the calculation on a fast notebook under Windows. I suspect by(.) to be a poor implementation for doing individual analysis on subgroups. Is there an alternative and more efficient way for doing
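
If the subgroup fits do not strictly need random effects, a lighter-weight alternative to by() + lme() is nlme::lmList(), which fits one lm per group with far less overhead. A rough sketch on simulated data (all names are made up, since the original data are not shown):

library(nlme)

## Hypothetical data: many subgroups, a few subjects within each
set.seed(1)
d <- data.frame(group   = factor(rep(1:1000, each = 20)),
                subject = factor(rep(1:4, times = 5000)),
                x       = rnorm(20000))
d$y <- 0.5 * d$x + rnorm(20000)

## The by() + lme() route: one mixed model per subgroup (slow for many groups)
## res1 <- by(d, d$group, function(g) lme(y ~ x, random = ~ 1 | subject, data = g))

## If separate fixed-effects fits per subgroup are enough, lmList() is much lighter
res2 <- lmList(y ~ x | group, data = d)
head(coef(res2))
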
2007 Dec 10
1
Having trouble getting GARCH parameters (basic/newbie)
I'm having no luck getting GARCH parameter estimations. It seems simple enough, but I don't know what I'm doing. I'm a newbie both at R and GARCH models, so whatever is going wrong, it's probably very basic. Here's what I do: 1. I first load the tseries package with: library("tseries") 2. I then load the data with: g <-
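
garch() in the tseries package is the usual next step here; a self-contained sketch on a simulated GARCH(1,1) series, standing in for the poster's object g:

library(tseries)

## Simulated GARCH(1,1) returns as a stand-in for the real series
set.seed(1)
n <- 1000
e <- rnorm(n)
h <- numeric(n); r <- numeric(n)
h[1] <- 0.1; r[1] <- sqrt(h[1]) * e[1]
for (t in 2:n) {
  h[t] <- 0.1 + 0.2 * r[t - 1]^2 + 0.7 * h[t - 1]   # conditional variance recursion
  r[t] <- sqrt(h[t]) * e[t]
}

## order = c(GARCH order, ARCH order); coefficients come back as a0, a1, b1
fit <- garch(r, order = c(1, 1))
summary(fit)
coef(fit)
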
2005 Apr 09
4
make check-all fails (PR#7784)
Full_Name: Ed Borasky Version: R-beta 2.1.0 2005-04-08 OS: Linux 2.6.11 GCC 3.3.5 Submission from: (NULL) (24.21.57.139) I downloaded the latest R-beta tarball and did a build with the default options. OS is Linux 2.6.11 and compiler is GCC 3.3.5. "make check-all" failed with the following message: make[3]: Entering directory `/home/znmeb/R-beta/tests' running code in
2002 Aug 05
2
options(digits) (PR#1879)
[this message needed manual improvement by the mailing list administrator since it was `HTMLified' .. ``please do not''] Apologies for bothering you about a fairly trivial matter. I have been getting some inconsistencies with the display digits in R V1.5. I have been using the hypergeometric distribution function, and have found that when printing out the results from this
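
For reference, printed precision in R is controlled by options(digits = ...) and can also be set per call; a small sketch with hypergeometric probabilities (the values are only illustrative, not the poster's):

## Hypergeometric probabilities printed under different digits settings
p <- dhyper(1:4, m = 10, n = 7, k = 8)

options(digits = 7)    # the default
print(p)

options(digits = 15)
print(p)

## Per-call control, without touching the global option
print(p, digits = 4)
format(p, digits = 4)

options(digits = 7)    # restore the default
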
2011 Mar 16
5
Strange R squared, possible error
k=lm(y~x); summary(k) returns R^2=0.9994. lm(y~x) is supposed to find coefficients a and b in y=a*x+b. l=lm(y~x+0); summary(l) returns R^2=0.9998. lm(y~x+0) is supposed to find coefficient a in y=a*x+b while setting b=0. The question is why I get a better R^2, when it should be otherwise. I'm sorry to use the words "MS Excel" here, but I verified it in Excel and it gives: R^2=0.9994 when y=a*x+b is used
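
The likely explanation is that summary.lm() computes R^2 against the uncentred total sum of squares when the intercept is dropped, so the two numbers are not comparable; a sketch with simulated data (the poster's x and y are not available):

set.seed(1)
x <- 1:50
y <- 3 * x + rnorm(50, sd = 5)

k <- lm(y ~ x)        # slope and intercept
l <- lm(y ~ x + 0)    # slope only, intercept forced to zero

summary(k)$r.squared
summary(l)$r.squared   # larger here, but not comparable to the value above

## With an intercept, R^2 is computed against the centred total sum of squares:
1 - sum(resid(k)^2) / sum((y - mean(y))^2)

## Without an intercept, summary.lm uses the uncentred total sum of squares,
## which is much larger, so R^2 tends to look better even when the fit is not:
1 - sum(resid(l)^2) / sum(y^2)
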
2012 Feb 10
3
problem subsetting data frame with variable instead of constant
Hello, I've encountered a very weird issue with the method subset(), or maybe this is something I don't know about said method: when you're subsetting based on the columns of a data frame, can you only use constants (0.1, 2.3, 2.2) instead of variables? Here's a look at my data frame, called 'ea.cad.pwr': > ea.cad.pwr[1:5,] MAF OR POWER 1 0.02 0.01 0.9999 2 0.02
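
subset() does accept a variable in its condition; the usual pitfall is a name clash with a column of the data frame, since the condition is evaluated inside the data frame first. A sketch with a made-up stand-in for ea.cad.pwr:

## Hypothetical stand-in for ea.cad.pwr
ea.cad.pwr <- data.frame(MAF   = c(0.02, 0.02, 0.05, 0.10, 0.10),
                         OR    = c(0.01, 0.30, 0.50, 0.70, 0.90),
                         POWER = c(0.9999, 0.98, 0.91, 0.80, 0.75))

## A constant works ...
subset(ea.cad.pwr, MAF == 0.02)

## ... and so does a variable, provided its name is not also a column name
maf.cut <- 0.02
subset(ea.cad.pwr, MAF == maf.cut)

## A name clash is the usual pitfall: here 'MAF' on the right-hand side
## refers to the column, not to the outer variable, so every row matches
MAF <- 0.02
subset(ea.cad.pwr, MAF == MAF)
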
2005 Apr 28
3
have to point it out again: a distribution question
Stock returns and other financial data have often been found to be heavy-tailed. Even Cauchy distributions (without even a first absolute moment) have been entertained as models. Your qq function subtracts numbers on the scale of a normal (0,1) distribution from the input data. When the input data are scaled so that they are insignificant compared to 1, say, then you get essentially the
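
To make the scaling point concrete, the toy example below (simulated Cauchy data shrunk to a tiny scale; not the poster's code) shows that differences against N(0,1) quantiles are dominated by the quantiles themselves, whereas a QQ plot against quantiles of a heavy-tailed reference is scale-free:

set.seed(1)
x <- rcauchy(500) * 1e-6    # heavy-tailed data on a tiny scale

## Differences against N(0,1) quantiles are essentially just the quantiles
q <- qnorm(ppoints(length(x)))
summary(sort(x) - q)

## A scale-free comparison: QQ plot of the data against Cauchy quantiles
qqplot(qcauchy(ppoints(length(x))), x,
       xlab = "Cauchy quantiles", ylab = "sample quantiles")
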
2011 Feb 18
6
sort a 3 dimensional array across third dimension ?
I'm attempting to sort a 3 dimensional array that looks like this > x , , 1 [,1] [,2] [1,] 9 9 [2,] 7 9 , , 2 [,1] [,2] [1,] 6 5 [2,] 4 6 , , 3 [,1] [,2] [1,] 2 1 [2,] 3 2 Such that it ends up like this .... > y , , 1 [,1] [,2] [1,] 2 1 [2,] 3 2 , , 2 [,1] [,2] [1,] 6 5 [2,] 4 6 , , 3 [,1] [,2]
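
One reading of "sort across the third dimension" is to reorder the 2x2 slices by a per-slice summary such as their sum (smallest first), which reproduces the desired output above; the choice of summary is an assumption:

## The array from the post
x <- array(c(9, 7, 9, 9,
             6, 4, 5, 6,
             2, 3, 1, 2), dim = c(2, 2, 3))

## Reorder the slices along the third dimension by their sums (smallest first)
y <- x[, , order(apply(x, 3, sum))]
y
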
2002 Aug 06
1
timing predict.tree()
Hi all, I am running R1.5.0 under Unix. I am repeating my earlier question with a few details added. I have the following tree fitted as the tree object 'my.tree': node), split, n, deviance, yval * denotes terminal node 1) root 5807 0.9998 0.0001722 2) V604 < 0.5 5798 0.0000 0.0000000 * 3) V604 > 0.5 9 0.8889 0.1111000 * And I have a data.frame called
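
A minimal way to time just the prediction step is to wrap predict() in system.time(); the sketch below uses made-up data with a single predictor named V604, since the poster's data frame is not available:

library(tree)

## Hypothetical data: a single informative predictor, echoing V604 in the posted tree
set.seed(1)
d <- data.frame(V604 = runif(5807))
d$y <- ifelse(d$V604 > 0.5, rbinom(nrow(d), 1, 0.1), 0)

my.tree <- tree(y ~ V604, data = d)

## Time only the prediction step over a large frame of new observations
newdat <- data.frame(V604 = runif(1e5))
system.time(pred <- predict(my.tree, newdata = newdat))
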
2001 Jul 02
2
Shapiro-Wilk test
Hi, does the Shapiro-Wilk test in R-1.3.0 work correctly? Maybe it does, but can anybody tell me why the following sample doesn't give "W = 1" and "p-value = 1": R> x<-1:9/10;x [1] 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 R> shapiro.test(qnorm(x)) Shapiro-Wilk normality test data: qnorm(x) W = 0.9925, p-value = 0.9986 I can't imagine a sample being
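
shapiro.test() is behaving as expected here: W is computed against expected normal order statistics (plotting positions roughly (i - 3/8)/(n + 1/4)), not against qnorm(i/10), so even this evenly spaced grid does not score exactly 1. A short check:

x <- (1:9) / 10
shapiro.test(qnorm(x))    # W = 0.9925, close to but not exactly 1

## A larger grid built from plotting positions gets much closer to W = 1
shapiro.test(qnorm(ppoints(500)))
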