similar to: naive "collinear" weighted linear regression

Displaying 19 results from an estimated 1100 matches similar to: "naive 'collinear' weighted linear regression"

2000 Jun 20
1
density estimation in two dimensions
Hello, I am a newbie to R and to the subject of density estimation in two or more dimensions. I would like some advice comparing the R packages for density estimation in bivariate or higher-order problems; I mean specifically the packages: 1) ash 2) KernSmooth 3) locfit 4) sm. My specific problem now is having a set of numerical pairs (x_i, y_i), arising from a
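A minimal sketch of what a first attempt with one of those packages might look like, using KernSmooth::bkde2D on simulated pairs (the data, bandwidths, and grid size below are illustrative assumptions, not from the original post):

    # bivariate kernel density estimate on a grid
    library(KernSmooth)
    set.seed(1)
    x <- rnorm(500)
    y <- 0.5 * x + rnorm(500, sd = 0.7)
    # bkde2D returns grid points (x1, x2) and the density surface (fhat)
    est <- bkde2D(cbind(x, y), bandwidth = c(0.3, 0.3), gridsize = c(101, 101))
    contour(est$x1, est$x2, est$fhat, xlab = "x", ylab = "y")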
2008 Dec 18
1
big data file versus ram memory
Hi there, I am new to R and would like to ask some questions which might not make perfect sense. Anyhow, here they are: 1) I would very much like to use R for processing some big data files (around 1.7 GB or more) for spatial analysis, wavelets, and power spectrum estimation; is this possible with R? Within IDL, such a big data set seems to be tractable... 2) I have heard/read that R
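One common answer to 1) is to process the file in chunks through a connection instead of reading it whole; a minimal sketch, assuming a plain text file and a made-up file name:

    con <- file("bigdata.txt", open = "r")
    chunk_size <- 100000                       # lines per chunk
    repeat {
      chunk <- readLines(con, n = chunk_size)
      if (length(chunk) == 0) break
      # ... process this chunk, e.g. accumulate summary statistics ...
    }
    close(con)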
2006 Feb 28
1
Collinearity in nls problem
Dear R-Help list, I have a nonlinear least squares problem which involves a changepoint: at the beginning, the outcome y is constant, and after a delay, t0, y follows a biexponential decay. I log-transform the data to stabilize the error variance. At time t < t0, my model is log(y_i)=log(exp(a0)+exp(b0)); at time t >= t0, the model is log(y_i)=log(exp(a0-a1*(t_i - t0))+exp(b0-b1*(t_i -
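A minimal sketch of fitting this changepoint model with nls(), writing pmax(t - t0, 0) so one formula covers both regimes; the simulated data and starting values are illustrative assumptions, and convergence can be sensitive to the starting value of t0:

    set.seed(1)
    t <- seq(0, 10, by = 0.25)
    y <- exp(1 - 0.8 * pmax(t - 2, 0)) + exp(0.5 - 0.2 * pmax(t - 2, 0))
    y <- y * exp(rnorm(length(t), sd = 0.05))   # multiplicative noise
    fit <- nls(log(y) ~ log(exp(a0 - a1 * pmax(t - t0, 0)) +
                            exp(b0 - b1 * pmax(t - t0, 0))),
               start = list(a0 = 1, a1 = 1, b0 = 0.5, b1 = 0.1, t0 = 1.5))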
2003 Feb 24
1
Mass: lda and collinear variables
hello list, when I use method lda of the MASS package I get a warning: variables are collinear in: lda.default(data[train, ], classes[train]) Is there an easy way to recover from this issue within the MASS package? And how can I tell how severe the issue actually is? I understand that I shouldn't use lda at all with collinear data and should use "quadratische" (quadratic)
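A minimal sketch that reproduces the warning on simulated data and shows one workaround, dropping linearly dependent columns found via a pivoted QR decomposition; none of this is from the original thread:

    library(MASS)
    set.seed(1)
    X <- matrix(rnorm(100 * 3), ncol = 3)
    X <- cbind(X, X[, 1] + X[, 2])       # fourth column is exactly collinear
    grp <- gl(2, 50)
    lda(X, grp)                          # warns: variables are collinear
    qx <- qr(X)
    keep <- qx$pivot[seq_len(qx$rank)]   # a linearly independent column set
    lda(X[, keep], grp)                  # no warning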
2012 Jul 26
0
lda, collinear variables and CV
Dear R-help list, apparently lda from the MASS package can be used in situations with collinear variables. In that case it only produces a warning, but at least it defines a classification rule and produces results. However, I can't find on the help page how exactly it does this. I have a suspicion (it may look at the hyperplane containing the class means, using some kind of default/trivial
2012 Aug 14
0
Problems with lda-CV, and collinear variables in lda
Dear R-help list, two issues regarding lda. 1) I'm puzzled by the fact that lda's built-in cross-validation gives results different from the manual cross-validation routine that I run (of course mine may be wrong, but I don't think so). See here: library(MASS) set.seed(12345) n <- 50 p <- 10 # or p<- 200 testdata <- matrix(ncol=p,nrow=n) for (i in 1:p) testdata[,i]
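A self-contained sketch of such a comparison, completing the truncated setup above with made-up data (the rnorm fill and the grouping are assumptions, not the poster's code):

    library(MASS)
    set.seed(12345)
    n <- 50; p <- 10
    X <- matrix(rnorm(n * p), nrow = n)
    grp <- gl(2, n / 2)
    cv_builtin <- lda(X, grp, CV = TRUE)$class   # built-in leave-one-out CV
    cv_manual <- character(n)                    # explicit leave-one-out loop
    for (i in 1:n) {
      fit <- lda(X[-i, ], grp[-i])
      cv_manual[i] <- as.character(predict(fit, X[i, , drop = FALSE])$class)
    }
    table(cv_builtin, cv_manual)                 # where do the two disagree?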
2001 Nov 25
1
readline 4.2 versus 4.1
Hi there, I would like to try the newest version of R (1.3.1) under my Red Hat Linux 7.2, which uses the readline library 4.2. On some CRAN page, and in the corresponding rpm package, it is stated when trying to install R that readline 4.1 is required... Shouldn't the newer readline 4.2 be completely compatible with the present version of R (1.3.1)? Will I have any problems if I
2016 May 24
0
RStudio 0.99.902 installation under sid: libgstreamer0.10-0 and libgstreamer-plugins-base0.10-0 missing
Dear r-sig'ers, I use Debian sid (amd64). I downloaded the most recent rstudio .deb from their official site and issued: dpkg -i rstudio-0.99.902-amd64.deb to no avail: dpkg: dependency problems prevent configuration of rstudio: rstudio depends on libgstreamer0.10-0; however: Package libgstreamer0.10-0 is not installed. rstudio depends on libgstreamer-plugins-base0.10-0;
2004 Dec 15
2
how to fit a weighted logistic regression?
I tried lrm in library(Design), but there is always some error message. Is this function really fitting a weighted logistic regression, i.e., maximizing the following log-likelihood: \sum w_i*(y_i*\beta*x_i - log(1+exp(\beta*x_i)))? Does anybody know a better way to fit this kind of model in R? FYI, one example that produces an error message is: > x=runif(10,0,3) > y=c(rep(0,5),rep(1,5)) >
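For that likelihood, glm() with a weights argument is the usual route; a minimal sketch with made-up weights (glm warns about non-integer weights with family = binomial, but it maximizes exactly the weighted log-likelihood written above):

    set.seed(1)
    x <- runif(10, 0, 3)
    y <- c(rep(0, 5), rep(1, 5))
    w <- runif(10)                        # illustrative weights w_i
    fit <- glm(y ~ x, family = binomial, weights = w)
    summary(fit)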
2000 Jun 08
1
packages installation
Hi folks, I am a complete beginner with R. I have been able to install it under Windows 95 on my PC, and I usually run it, as a GUI, from a shortcut icon on my desktop. I am very much in need of some packages for density estimation from a two-dimensional data set of simulated points. I need to build the corresponding continuous or smoothed probability density function, and
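A minimal sketch of installing and loading one of the candidate packages from within R (KernSmooth here is just one option; ash, locfit, and sm install the same way):

    install.packages("KernSmooth")        # fetch and install from CRAN
    library(KernSmooth)                   # load it into the session
    # bkde2D() then gives a smoothed bivariate density estimate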
2018 Jan 17
1
mgcv::gam: is it possible to have a 'simple' product of 1-d smooths?
I am trying to test out several mgcv::gam models in a scalar-on-function regression analysis. The following is the 'hierarchy' of models I would like to test: (1) Y_i = a + integral[ X_i(t)*Beta(t) dt ] (2) Y_i = a + integral[ F{X_i(t)}*Beta(t) dt ] (3) Y_i = a + integral[ F{X_i(t),t} dt ] The equivalents for discrete data might be: (1) Y_i = a + sum_t[ L_t * X_it * Beta_t ] (2) Y_i
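A minimal sketch of model (1) using mgcv's summation convention for matrix arguments (so-called linear functional terms); the simulated curves, grid, and quadrature weights below are illustrative assumptions:

    library(mgcv)
    set.seed(1)
    n <- 200; nt <- 50
    tg <- seq(0, 1, length = nt)
    Xmat <- matrix(rnorm(n * nt), n, nt)          # X_i(t) observed on a grid
    Tmat <- matrix(tg, n, nt, byrow = TRUE)       # the grid t, one row per i
    Lmat <- matrix(1 / nt, n, nt)                 # quadrature weights L_t
    beta <- sin(2 * pi * tg)                      # a true Beta(t) to recover
    y <- 1 + drop((Xmat * Lmat) %*% beta) + rnorm(n, sd = 0.1)
    XL <- Xmat * Lmat                             # precompute the by-matrix
    fit <- gam(y ~ s(Tmat, by = XL))              # model (1)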
2012 Nov 21
2
Weighted least squares
Hi everyone, I admit I am a bit of an R novice, and I was hoping someone could help me with this error message: Warning message: In lm.fit(x, y, offset = offset, singular.ok = singular.ok, ...) : extra arguments weigths are just disregarded. My equation is: lm( Y ~ X1 + X2 + X3, weigths = seq(0.1, 1, by = 0.1))
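The warning here comes from the spelling: lm() has no argument weigths, so the misspelled argument falls through to ... and is ignored. A minimal sketch with made-up data and the argument spelled weights:

    set.seed(1)
    d <- data.frame(Y = rnorm(10), X1 = rnorm(10),
                    X2 = rnorm(10), X3 = rnorm(10))
    fit <- lm(Y ~ X1 + X2 + X3, data = d,
              weights = seq(0.1, 1, by = 0.1))   # note: weights, not weigths
    summary(fit)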
2013 Feb 06
1
how to extract the tests for collinearity and constancy used in lda
Hi everyone, I'm trying to vectorize an application of lda to each 2D slice of a 3D array, but am running into trouble: quite a few 2D slices either trigger the "variables are collinear" warning or, worse, trigger a "variable appears to be constant within groups" error and fail (i.e., computation ceases rather than skipping the bad slice). There are
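A minimal sketch of looping over the third dimension while surviving bad slices with tryCatch(); the array dimensions, the deliberately constant column, and the grouping are illustrative assumptions:

    library(MASS)
    set.seed(1)
    arr <- array(rnorm(20 * 4 * 30), dim = c(20, 4, 30))
    arr[, 2, 5] <- 1                     # make slice 5 fail: constant variable
    grp <- gl(2, 10)
    fits <- lapply(seq_len(dim(arr)[3]), function(k)
      tryCatch(lda(arr[, , k], grp),
               error = function(e) NULL))        # NULL marks a skipped slice
    which(vapply(fits, is.null, logical(1)))     # which slices failed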
2010 Feb 06
1
Canberra distance
Hi the list, According to what I know, the Canberra distance between X and Y is: sum[ (|x_i - y_i|) / (|x_i|+|y_i|) ] (with | | denoting the absolute value function). In the source code of the Canberra distance in the file distance.c, we find: sum = fabs(x[i1] + x[i2]); diff = fabs(x[i1] - x[i2]); dev = diff/sum; which corresponds to the formula: sum[ (|x_i - y_i|) /
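A minimal sketch that compares both candidate denominators against what dist() actually returns; the two test vectors are illustrative, chosen with a sign change so the formulas differ:

    x <- c(1, -2, 3)
    y <- c(2,  1, -1)
    d_dist <- as.numeric(dist(rbind(x, y), method = "canberra"))
    d_abs_of_sum <- sum(abs(x - y) / abs(x + y))         # |x_i + y_i|
    d_sum_of_abs <- sum(abs(x - y) / (abs(x) + abs(y)))  # |x_i| + |y_i|
    c(dist = d_dist, abs_of_sum = d_abs_of_sum, sum_of_abs = d_sum_of_abs)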
2011 Feb 25
0
e1071's Naive Bayes with Weighted Data
Hello fellow R programmers, I'm trying to use package e1071's naiveBayes function to create a model with weighted data. See the example below; variable "d" is a count variable that provides the # of records for the given observation combination. Is anyone aware of a "weight" argument to this method? I've been unsuccessful in my research. Thanks, Mike
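naiveBayes() has no weights argument that I know of; with integer counts like "d", one workaround is to expand the data by replicating each row d times. A minimal sketch with made-up data standing in for the poster's example:

    library(e1071)
    df <- data.frame(y = factor(c("a", "a", "b")),
                     x = factor(c("u", "v", "u")),
                     d = c(3, 1, 2))     # d = count of records per combination
    expanded <- df[rep(seq_len(nrow(df)), df$d), c("y", "x")]
    fit <- naiveBayes(y ~ x, data = expanded)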
2008 May 23
1
maximizing the gamma likelihood
For learning purposes, and also to help someone, I used Roger Peng's document to get the MLEs of the gamma, where the gamma is defined as f(y_i) = (1/gammafunction(shape)) * (scale^shape) * (y_i^(shape-1)) * exp(-scale*y_i) (I'm defining the scale as lambda rather than 1/lambda; various books define it differently). I found the likelihood to be n*shape*log(scale) +
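With that parameterization the full log-likelihood is n*shape*log(scale) + (shape-1)*sum(log(y_i)) - scale*sum(y_i) - n*log(gammafunction(shape)). A minimal sketch of maximizing it numerically with optim(), on simulated data with illustrative starting values:

    set.seed(1)
    y <- rgamma(200, shape = 2, rate = 1.5)   # R's "rate" matches this scale
    negll <- function(par) {
      shape <- par[1]; scale <- par[2]
      -(length(y) * shape * log(scale) + (shape - 1) * sum(log(y)) -
          scale * sum(y) - length(y) * lgamma(shape))
    }
    optim(c(1, 1), negll, method = "L-BFGS-B",
          lower = c(1e-6, 1e-6))$par          # MLEs of (shape, scale)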
2003 Oct 23
1
Variance-covariance matrix for beta hat and b hat from lme
Dear all, Given an LME model (following the notation of Pinheiro and Bates 2000) y_i = X_i*beta + Z_i*b_i + e_i, is it possible to extract the variance-covariance matrices of the estimated beta hat and b_i hat from the lme fitted object? The reason for needing this is that I want interval predictions on the predicted values (at level = 0:1). The "predict.lme" seems to
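For the fixed effects, vcov() on the lme fit returns the estimated variance-covariance matrix of beta hat; a minimal sketch using a standard nlme dataset (the random-effects estimates b_i hat are available via ranef(), though their variance matrices are not exposed this directly):

    library(nlme)
    fit <- lme(distance ~ age, random = ~ 1 | Subject, data = Orthodont)
    vcov(fit)    # Var-Cov matrix of the fixed-effect estimates beta hat
    ranef(fit)   # the estimated random effects b_i hat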
2001 Mar 05
1
Canberra dist and double zeros
Canberra distance is defined in function `dist' (standard library `mva') as sum(|x_i - y_i| / |x_i + y_i|) Obviously this is undefined when both x_i and y_i are zero. Since double zeros are common in many data sets, this is a nuisance. In our field (from which the distance comes), it is customary to remove double zeros: the contribution to the distance is zero when both x_i
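A minimal sketch of a Canberra distance that removes double zeros as described, using the same |x_i + y_i| denominator that `dist' uses above; the vectors are illustrative:

    canberra_nz <- function(x, y) {
      keep <- !(x == 0 & y == 0)         # drop terms where both entries are 0
      sum(abs(x[keep] - y[keep]) / abs(x[keep] + y[keep]))
    }
    canberra_nz(c(0, 1, 2), c(0, 3, 0))  # the leading double zero is skipped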
2010 Apr 25
1
function pointer question
Hello, I have the following function that receives a "function pointer" formal parameter named "fnc":

    loocv <- function(data, fnc) {
      n <- length(data$x)
      score <- 0
      for (i in 1:n) {
        x_i <- data$x[-i]                # leave observation i out
        y_i <- data$y[-i]
        yhat <- fnc(x = x_i, y = y_i)    # fnc is assumed to return a
                                         # prediction for the held-out point
        score <- score + (data$y[i] - yhat)^2
      }
      score / n
    }
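A hypothetical usage sketch, with fnc being a trivial predict-the-mean rule just to show the calling convention (the helper and data are made up):

    dat <- list(x = runif(20), y = rnorm(20))
    loocv(dat, function(x, y) mean(y))   # LOO score of the constant predictor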