similar to: mgcv: lowest estimated degrees of freedom

Displaying 20 results from an estimated 2000 matches similar to: "mgcv: lowest estimated degrees of freedom"

2012 Feb 13
3
mgcv: increasing basis dimension
Hi. Using a "ts" or TPRS basis, I expected GCV to decrease when increasing the basis dimension, since I thought this would minimise GCV over a larger subspace. But GCV increased. Here's an example. Thanks for any comments. Greg #simulate some data set.seed(0) x1<-runif(500) x2<-rnorm(500) x3<-rpois(500,3) d<-runif(500) linp<--1+x1+0.5*x2+0.3*exp(-2*d)*sin(10*d)*x3
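
A minimal sketch of how such a comparison might be set up, continuing the simulation above. The response y, the "ts" basis, and the choice of k = 10 versus k = 20 are illustrative assumptions, not taken from the original (truncated) post:

library(mgcv)
set.seed(0)
x1 <- runif(500); x2 <- rnorm(500); x3 <- rpois(500, 3); d <- runif(500)
linp <- -1 + x1 + 0.5*x2 + 0.3*exp(-2*d)*sin(10*d)*x3
y <- linp + rnorm(500)   # illustrative Gaussian response; the excerpt stops before this point

b10 <- gam(y ~ x1 + x2 + s(d, by = x3, k = 10, bs = "ts"), method = "GCV.Cp")
b20 <- gam(y ~ x1 + x2 + s(d, by = x3, k = 20, bs = "ts"), method = "GCV.Cp")
c(k10 = b10$gcv.ubre, k20 = b20$gcv.ubre)   # minimised GCV scores; a larger k enlarges the
                                            # space searched, but the attained optimum need
                                            # not decrease because the bases are not nested
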
2003 Sep 30
2
cluster & mgcv update
Hello, After reinstalling the whole OS and R as well, I tried update.packages() and got the following error message concerning the mgcv update: atlas2-base is installed, and BLAS as well (on Debian). I haven't found lf77blas; I assume it's a library or something similar associated with BLAS. Any suggestion on how to solve that? Thanks, Martin * Installing *source* package
2008 Dec 09
1
update.packages() for R 2.7.1: mgcv fails
Hi I just upgraded my debian/stable to R 2.7.1 via apt-get install r-base r-base-core r-base-dev, and then began to update.packages() > update.packages(lib.loc="/usr/local/lib/R/site-library") > update.packages(lib.loc="/usr/lib/R/library") but I get: .... * Installing *source* package 'mgcv' ... ** libs gcc -std=gnu99 -I/usr/share/R/include -fpic -g
2012 Jun 21
2
MGCV: Use of irls.reg option
Hi, In the mgcv package's help file for the gam.control() function, there is an option irls.reg. The help describes this option as follows: For most models this should be 0. The iteratively re-weighted least squares method by which GAMs are fitted can fail to converge in some circumstances. For example, data with many zeroes can cause problems in a model with a log link, because a mean of
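
For reference, a hedged sketch of where irls.reg is supplied; the formula, the data frame mydata, and the value 0.01 are placeholders chosen to match the log-link example in the help text, not anything from the post:

library(mgcv)
## add a small ridge regularisation to the IRLS working model to stabilise a
## poorly converging fit; 0 (the default) is appropriate for most models
fit <- gam(counts ~ s(x0) + s(x1), family = poisson(link = "log"),
           data = mydata, control = gam.control(irls.reg = 0.01))
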
2007 Jun 22
1
two basic question regarding model selection in GAM
Question #1 ********* Model selection in GAM can be done by using: 1. step.gam {gam}: a directional stepwise search 2. gam {mgcv}: smoothness estimation using the GCV or UBRE/AIC criterion. Suppose my model starts with an additive model (linear part + spline part). Using gam() {mgcv} I got estimated degrees of freedom (edf) for the smoothing splines. Now I want to use the functional form of my model
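
A small sketch of how the estimated degrees of freedom are extracted from an mgcv fit; b, x0, x1 and dat are placeholder names, not from the post:

library(mgcv)
b <- gam(y ~ x0 + s(x1), data = dat)   # linear part + spline part
summary(b)          # per-smooth edf appear in the smooth-terms table
summary(b)$edf      # edf of each smooth term
sum(b$edf)          # total effective degrees of freedom of the fitted model
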
2008 May 06
1
mgcv::gam shrinkage of smooths
In Dr. Wood's book on GAM, he suggests in section 4.1.6 that it might be useful to shrink a single smooth by replacing the penalty matrix S with S + epsilon*I. The context was the need to be able to shrink the term to zero if appropriate. I'd like to do this in order to shrink the coefficients towards zero (irrespective of the penalty for "wiggliness") - but not necessarily all the
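
As a point of comparison, mgcv ships shrinkage bases that add exactly this kind of extra penalty on the smooth's null space, so the whole term can shrink to zero. A hedged sketch (y, x and dat are placeholders):

library(mgcv)
b1 <- gam(y ~ s(x, bs = "ts"), data = dat)      # thin-plate spline with shrinkage
b2 <- gam(y ~ s(x, bs = "cs"), data = dat)      # cubic regression spline with shrinkage
b3 <- gam(y ~ s(x), data = dat, select = TRUE)  # alternative: extra null-space penalty
                                                # on every smooth in the model
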
2004 Nov 01
2
Compilation error on mgcv_1.1-7 on OS X (10.3)
Greetings, I run into a compilation error when updating to mgcv_1.1-7 in R 2.0.0 on OS X 10.3. Note that other packages have compiled nicely. Some details are given below, but in short it looks like it is looking for /usr/local/lib/powerpc-apple-darwin6.8/3.4.2/, which I don't have. But I do have /usr/lib/gcc/darwin/3.3, i.e. a lower version of GCC in a different directory. More
2010 Jun 04
1
package mgcv inconsistency in help files? cyclic P-spline "cs" not cyclic?
Dear all, I'm a bit stunned by the behaviour of a GAM using cyclic P-spline smoothers. I cannot provide the data, as I have about 61,000 observations from a time series. I use the following model: testgam <- gam(NO~s(x)+s(y,bs="cs")+s(DD,bs="cs")+s(TT),data=Final) The problem lies with the cyclic smoother I use for seasonal trends. The variable Final$y is a
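
A hedged note and sketch for comparison: in mgcv, "cs" is the shrinkage cubic regression spline, not a cyclic basis; the cyclic bases are "cc" (cyclic cubic) and "cp" (cyclic P-spline). Variable names below follow the post; the model itself is only illustrative:

library(mgcv)
testgam2 <- gam(NO ~ s(x) + s(y, bs = "cc") + s(DD, bs = "cc") + s(TT), data = Final)
## "cc"/"cp" terms wrap around, so the fitted seasonal curve matches at the two ends
## of the cycle; the knots argument to gam() can be used to fix where that cycle
## starts and ends (e.g. hour 0/24 or day 0/366)
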
2003 Nov 25
1
Something broken with update?
Updating my R 1.8.0 installation (> update.packages()) I obtain the following (SORRY FOR THE LENGTH OF THE LOG BUT IT HELPS!!!): ................ downloaded 135Kb KernSmooth : Version 2.22-11 in /usr/lib/R/library Version 2.22-12 on CRAN Update (y/N)? y mgcv : Version 0.9-3.1 in /usr/lib/R/library Version 0.9-6 on CRAN Update (y/N)? y trying URL
2012 Jun 21
2
check.k function in the mgcv package
Hi everyone, I am studying generalized additive models and am using the package 'mgcv' developed by Professor Wood. However, I cannot understand the example listed under check.k. For example, library(mgcv) set.seed(1) dat <- gamSim(1,n=400,scale=2) ## fit a GAM with quite low `k' b<-gam(y~s(x0,k=6)+s(x1,k=6)+s(x2,k=6)+s(x3,k=6),data=dat) plot(b,pages=1,residuals=TRUE)
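
The usual follow-up to that example is to ask mgcv whether k was set too low. A minimal sketch continuing the same code:

library(mgcv)
set.seed(1)
dat <- gamSim(1, n = 400, scale = 2)
b <- gam(y ~ s(x0, k = 6) + s(x1, k = 6) + s(x2, k = 6) + s(x3, k = 6), data = dat)
gam.check(b)   # residual diagnostics; recent mgcv versions also print a basis-dimension
               # check (k', edf, k-index, p-value) - a k-index well below 1 with a small
               # p-value suggests k may be too low for that smooth
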
2012 Jul 24
1
questions on R CMD INSTALL et al
Greetings, I am learning R. My machine has these: CPU: 3-core amd64; OS: pure 64-bit CBLFS Linux compiled from source (kernel 3.2.1, gcc-4.6.2); R 2.15. When I compiled R the compiler spewed out lines like these:- make[3]: Entering directory `/tmp/RtmpiHdDJy/R.INSTALL472339eeb23a/mgcv/src' gcc -m64 -std=gnu99 -I/home/Rman/R-2.15.0/include -DNDEBUG -I/usr/local/atlas/include
2010 Jun 16
3
mgcv, testing gamm vs lme, which degrees of freedom?
Dear all, I am using the "mgcv" package by Simon Wood to estimate an additive mixed model in which I assume a normal distribution for the residuals. I would like to test this model against a standard parametric mixed model, such as those that can be estimated with "lme". Since the smoothing splines can be written as random effects, is it correct to use an (approximate)
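
A hedged sketch of one way such a comparison is often set up (dat, y, x and the grouping factor g are placeholders; the test is only approximate, since the null hypothesis puts a variance component on the boundary of its parameter space):

library(mgcv)
library(nlme)
m_add <- gamm(y ~ s(x), random = list(g = ~1), data = dat, method = "ML")  # additive mixed model
m_lin <- lme(y ~ x, random = ~ 1 | g, data = dat, method = "ML")           # parametric mixed model
anova(m_lin, m_add$lme)   # approximate likelihood ratio test via the lme representation
                          # of the smooth; treat the resulting p-value as conservative
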
2003 Sep 23
1
Very small estimated random effect variance (lme)
Dear R-helpers, I get some strange results using a linear mixed-effects model (lme) of the type: lme1 <- lme(y ~ x, random=~x|group, ...) For some datasets, I obtain very small standard deviations of the random effects. I compared these to the standard deviations of the slope and intercept from an lmList approach. Of course, the SD from the lme is always smaller (shrinkage estimator), but in
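
A hedged sketch of how the near-zero variance components can be inspected alongside the unpooled per-group fits (y, x, group and dat are placeholders):

library(nlme)
lme1 <- lme(y ~ x, random = ~ x | group, data = dat)
VarCorr(lme1)                        # estimated SDs of the random intercept and slope
intervals(lme1, which = "var-cov")   # confidence intervals for the variance components;
                                     # this may itself fail for a degenerate fit, which is
                                     # informative in its own right
lml <- lmList(y ~ x | group, data = dat)
apply(coef(lml), 2, sd)              # raw between-group SD of the per-group OLS estimates
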
2005 Feb 15
1
shrinkage estimates in lme
Hello. Slope estimates in lme are shrinkage estimates which pull the OLS slope estimates towards the population estimates, the degree of shrinkage depending on the group sample size and the distance between the group-based estimate and the overall population estimate. Although these shrinkage estimates are said to be more precise with respect to the true values, they are also biased. So there is a
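
A small sketch that makes the shrinkage visible, using the Orthodont data shipped with nlme so it runs as-is; the model is purely illustrative:

library(nlme)
fm.lis <- lmList(distance ~ age | Subject, data = Orthodont)  # separate OLS fit per subject
fm.lme <- lme(distance ~ age, random = ~ age | Subject, data = Orthodont)
head(coef(fm.lis))  # unpooled per-subject intercepts and slopes
head(coef(fm.lme))  # pulled towards the population values; the most extreme and the
                    # smallest groups move the most
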
2005 Apr 19
2
cross validation and parameter determination
Hi all, In Tibshirani's PNAS paper about nearest shrunken centroid analysis of microarrays (PNAS vol 99:6567), they used cross-validation to choose the amount of shrinkage used in the model, and then tested the performance of the model, with the cross-validated shrinkage, on a separate independent test set. If I don't have the luxury of an independent test set, can I just use the
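
A hedged sketch of the usual answer: nest the shrinkage selection inside an outer cross-validation loop, so the reported error is not biased by the tuning step. Here glmnet stands in for the nearest-shrunken-centroid fit; x (a numeric predictor matrix) and y are placeholders:

library(glmnet)
set.seed(1)
outer_folds <- sample(rep(1:5, length.out = nrow(x)))
err <- numeric(5)
for (f in 1:5) {
  train <- outer_folds != f
  cvfit <- cv.glmnet(x[train, ], y[train])              # inner CV picks the shrinkage
  pred  <- predict(cvfit, newx = x[!train, ], s = "lambda.min")
  err[f] <- mean((y[!train] - pred)^2)                  # outer fold never used for tuning
}
mean(err)   # honest estimate of generalisation error
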
2012 Jul 30
2
mgcv 1.7-19, vis.gam(): "invalid 'z' limits"
Hi everyone, I ran a binomial GAM consisting of a tensor product of two continuous variables, a continuous parametric term and crossed random intercepts on a data set with 13,042 rows. When trying to plot the tensor product with vis.gam(), I get the following error message: Error in persp.default(m1, m2, z, col = col, zlim = c(min.z, max.z), xlab = view[1], : invalid 'z' limits In
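That error comes from persp() receiving non-finite z limits. A hedged sketch of the kind of calls that are often tried in this situation; b stands for the fitted GAM and cov1/cov2 for the two tensor-product variables, none of which are named in the post:

library(mgcv)
vis.gam(b, view = c("cov1", "cov2"), type = "link",
        plot.type = "contour", too.far = 0.1)   # link scale, contour plot, drop regions
                                                # far from the observed covariate values
vis.gam(b, view = c("cov1", "cov2"), type = "link",
        zlim = c(-5, 5))                        # explicit z limits (illustrative values),
                                                # in case the automatic range is non-finite
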
2013 Aug 23
1
Setting up 3D tensor product interactions in mgcv
Hi, I am trying to fit a smoothing model where there are three dimensions over which I can smooth (x,y,z). I expect interactions between some, or all, of these terms, so I have set up my model as mdl <- gam(PA ~ s(x) + s(y) + s(z) + te(x,y) + te(x,z) + te(y,z) + te(x,y,z),...) I have recently read about ti(), the "tensor product interaction" smoother, which takes care of these
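
A hedged sketch of the ti()-based decomposition the post is describing; PA, x, y and z follow the post, while the data frame and the choice of method are placeholders:

library(mgcv)
mdl <- gam(PA ~ ti(x) + ti(y) + ti(z) +           # main effects
                ti(x, y) + ti(x, z) + ti(y, z) +  # pairwise interactions
                ti(x, y, z),                      # three-way interaction
           data = dat, method = "REML")
## ti() terms exclude the lower-order effects they would otherwise duplicate, so the
## main effects and interactions are not double-counted as they would be with te()
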
2006 May 27
2
boosting - second posting
Hi, I am using boosting for a classification and prediction problem. For some reason it is giving me predictions that don't fall between 0 and 1. I have tried type="response" but it made no difference. Can anyone see what I am doing wrong? Screen output shown below: > boost.model <- gbm(as.factor(train$simNuance) ~ ., # formula +
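
A hedged sketch of a bernoulli gbm call. The usual culprit is passing the outcome as a factor: gbm's bernoulli family expects a numeric 0/1 response, and predict() needs n.trees plus type = "response" to return probabilities. The conversion below assumes simNuance is a two-level factor, and the tuning values are placeholders:

library(gbm)
train$simNuance <- as.integer(factor(train$simNuance)) - 1L   # two-level factor -> 0/1
boost.model <- gbm(simNuance ~ ., data = train,
                   distribution = "bernoulli",   # binary classification
                   n.trees = 2000, interaction.depth = 3, shrinkage = 0.01)
p <- predict(boost.model, newdata = test, n.trees = 2000, type = "response")
range(p)   # probabilities, so within [0, 1]
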
2009 Aug 14
1
Permutation test and R2 problem
Hi, I have optimised the shrinkage parameter (by GCV) for ridge regression and got an R2 value of 70%. To check the sensitivity of the result, I did a permutation test: I permuted the response vector, ran the fit 1000 times, and drew a distribution. But now I get R2 values as high as 98%, and some of them above 70%. Is this expected from this type of test? *I was under the impression that R2 with the real data set
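
A hedged sketch of such a permutation test with MASS::lm.ridge, re-selecting the GCV-optimal lambda inside every permutation; tuning on each permuted response, combined with many predictors relative to n, is exactly what can push the null R2 distribution this high. X (a numeric predictor data frame), y, and the lambda grid are placeholders:

library(MASS)
ridge_r2 <- function(y, X, lambdas = seq(0, 50, by = 0.5)) {
  fit  <- lm.ridge(y ~ ., data = data.frame(y = y, X), lambda = lambdas)
  beta <- coef(fit)[which.min(fit$GCV), ]        # GCV-optimal coefficients (original scale)
  pred <- as.matrix(cbind(1, X)) %*% beta
  1 - sum((y - pred)^2) / sum((y - mean(y))^2)   # in-sample R^2, as in the post
}
r2_obs  <- ridge_r2(y, X)
r2_perm <- replicate(1000, ridge_r2(sample(y), X))  # permutation null distribution
mean(r2_perm >= r2_obs)                             # permutation p-value
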
2011 Jun 07
2
gam() (in mgcv) with multiple interactions
Hi! I'm learning mgcv, and reading Simon Wood's book on GAMs, as recommended to me earlier by some folks on this list. I've run into a question to which I can't find the answer in his book, so I'm hoping somebody here knows. My outcome variable is binary, so I'm doing a binomial fit with gam(). I have five independent variables, all continuous, all uniformly
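A hedged sketch of the kind of model that question usually leads to: smooth main effects plus ti() interaction terms under a binomial family. The variable names (y, x1 ... x5), the data frame, and the choice of which pairs interact are placeholders:

library(mgcv)
fit <- gam(y ~ s(x1) + s(x2) + s(x3) + s(x4) + s(x5) +
               ti(x1, x2) + ti(x1, x3),      # include only the interactions of interest
           family = binomial, data = dat, method = "REML")
summary(fit)   # approximate p-values for each smooth, interactions included
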