similar to: why does lm() not allow for negative weights?

Displaying 20 results from an estimated 20000 matches similar to: "why does lm() not allow for negative weights?"

2005 Oct 13
3
Optim with two constraints
Hi R-list, I am new to optimization in R and would appreciate help on the following question. I would like to minimize the following function using two constraints: ###### fn <- function(par, H, F) { fval <- 0.5 * t(par) %*% H %*% par + F %*% par; fval } # matrix H is (n by k) # matrix F is (n by 1) # par is an (n by 1) set of weights # I need two constraints: # 1.
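One way to handle a quadratic objective like this with linear constraints is the quadprog package rather than optim(). The original constraints were cut off above, so the two used here (weights sum to one, weights non-negative) are only an illustration, and all data below are made up:

    # quadprog::solve.QP minimises -t(dvec) %*% b + 0.5 * t(b) %*% Dmat %*% b
    # subject to t(Amat) %*% b >= bvec (the first meq constraints as equalities),
    # so minimising 0.5 * t(par) %*% H %*% par + F %*% par means dvec = -F.
    library(quadprog)
    set.seed(1)
    n <- 5
    M <- matrix(rnorm(n * n), n, n)
    H <- crossprod(M) + diag(n)        # a positive-definite n x n matrix
    F <- rnorm(n)                      # linear term, length n
    Amat <- cbind(rep(1, n), diag(n))  # column 1: sum(par) == 1 (equality)
    bvec <- c(1, rep(0, n))            # remaining columns: par >= 0
    sol  <- solve.QP(Dmat = H, dvec = -F, Amat = Amat, bvec = bvec, meq = 1)
    sol$solution                       # the constrained minimiser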
1999 Dec 07
1
using weights in lm()
Hello! When I know the vector of variances of the disturbances (i.e. the structure of the heteroskedasticity), say Var(u_{i}) = v_{i}, what weights should I use as the argument to lm(): M <- lm(y~x, weights=1/v) or M <- lm(y~x, weights=1/(v^0.5)) ??? In the help pages I did not find a clear answer to this question, so could someone please help me! Thanks, Wolfgang Koller
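Since lm() minimises sum(w * residuals^2), the inverse variances are the right choice here: weights = 1/v, not 1/sqrt(v). A small sketch with simulated data, checked against the equivalent transformed regression (dividing everything by sqrt(v)):

    set.seed(1)
    x <- runif(100)
    v <- 0.5 + x                       # known variance structure
    y <- 1 + 2 * x + rnorm(100, sd = sqrt(v))
    fit_w  <- lm(y ~ x, weights = 1/v)
    fit_tr <- lm(I(y/sqrt(v)) ~ 0 + I(1/sqrt(v)) + I(x/sqrt(v)))
    coef(fit_w)
    coef(fit_tr)                       # same estimates (only the labels differ)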
2006 Mar 16
2
Difference between weights options in lm, glm and gls.
Dear R-list users, Can anyone explain exactly the difference between the weights options in lm, glm and gls? I tried the following code, but the results are different. > lm1 Call: lm(formula = y ~ x) Coefficients: (Intercept) x 0.1183 7.3075 > lm2 Call: lm(formula = y ~ x, weights = W) Coefficients: (Intercept) x 0.04193 7.30660 > lm3 Call:
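A sketch of the (assumed) correspondence: lm and glm treat weights as inverse-variance weights, while nlme::gls expects a variance function, so lm(weights = W) should line up with gls(weights = varFixed(~ vf)) where vf = 1/W. Data and variable names below are invented for the illustration:

    library(nlme)
    set.seed(1)
    d <- data.frame(x = runif(50))
    d$W  <- runif(50, 0.5, 2)
    d$vf <- 1 / d$W                    # variance proportional to vf = 1/W
    d$y  <- 1 + 7 * d$x + rnorm(50, sd = sqrt(d$vf))
    coef(lm(y ~ x, data = d, weights = W))
    coef(glm(y ~ x, data = d, weights = W, family = gaussian))
    coef(gls(y ~ x, data = d, weights = varFixed(~ vf)))   # same point estimates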
2006 May 20
1
(PR#8877) predict.lm does not have a weights argument for newdata
Dear R developers, I am a little disappointed that my bug report only made it to the wishlist, with the argument: "Well, it does not say it has. Only relevant to prediction intervals." predict.lm does calculate prediction intervals for linear models from weighted regression, so they should be correct, right? As far as I can see they are bound to be wrong in almost all cases, if no weights
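For what it is worth, later versions of R did add a weights argument to predict.lm for exactly this purpose (it can be a numeric vector or a one-sided formula evaluated in newdata). A small sketch with made-up data:

    set.seed(1)
    d <- data.frame(x = runif(30), w = runif(30, 0.5, 2))
    d$y <- 1 + 2 * d$x + rnorm(30, sd = 1/sqrt(d$w))
    fit <- lm(y ~ x, data = d, weights = w)
    nd  <- data.frame(x = c(0.25, 0.75), w = c(1, 2))
    # prediction intervals that use the case weights supplied for newdata
    predict(fit, newdata = nd, interval = "prediction", weights = ~ w)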
2010 Oct 22
1
lm looking for weights outside of the user-defined function
Dear R'ers, I am fighting with a problem that is driving me crazy. I use "lm" in my user-defined function, but it seems to be looking for weights outside of my function's environment: ### Generating example data: x <- data.frame(y=rnorm(100,0,1), a=rnorm(100,1,1), b=rnorm(100,2,1)); myweights <- runif(100); data.for.regression <- x[1:3] ### Creating function
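The usual workaround follows from ?lm: weights are evaluated first in data and then in the environment of the formula, so putting the weights into the data frame inside the function avoids the scoping surprise. A sketch, with the function and column names invented here:

    my.wls <- function(dat, wts) {
      dat$.w <- wts                          # store the weights alongside the data
      lm(y ~ a + b, data = dat, weights = .w)
    }
    set.seed(1)
    x <- data.frame(y = rnorm(100), a = rnorm(100, 1), b = rnorm(100, 2))
    myweights <- runif(100)
    my.wls(x, myweights)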
2006 Dec 27
2
proposal: allowing alternative variance estimators in glm/lm
There has been recent discussion about alternatives to the model-based standard error estimators for lm. While some people like the sandwich estimator and others don't, it is clear that neither estimator dominates the other for any sane loss function. It is also worth noting that the sandwich estimator is the default for t.test(). I think it would be useful for models using other
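Whatever the merits of changing glm/lm themselves, the sandwich estimator is already easy to apply after the fact with contributed packages; a short sketch (not the in-lm mechanism proposed here):

    library(sandwich)
    library(lmtest)
    fit <- lm(mpg ~ wt + hp, data = mtcars)
    coeftest(fit, vcov = vcovHC(fit, type = "HC3"))   # sandwich (robust) standard errors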
2004 Nov 08
2
Nonlinear weighted least squares estimation
Hi there, I'm trying to fit a growth curve to some data and need to use a weighted least squares estimator to account for heteroscedasticity in the data. A weights argument is available in nls that would appear to be appropriate for this purpose, but it is listed as 'not yet implemented'. Is there another package which could implement this procedure? Regards, Robert Brown
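The weights argument of nls() was implemented in later versions of R (gnls() in nlme is another option), so a weighted nonlinear fit can now be written directly. A small sketch with made-up growth data and starting values chosen at the true parameters:

    set.seed(1)
    x <- 1:20
    y <- 10 / (1 + exp(-(x - 10)/2)) + rnorm(20, sd = 0.1 * x)   # variance grows with x
    w <- 1 / x^2                                                 # inverse-variance weights
    fit <- nls(y ~ Asym / (1 + exp(-(x - xmid)/scal)),
               start = list(Asym = 10, xmid = 10, scal = 2),
               weights = w)
    summary(fit)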
2011 Jul 14
1
WLS regression, lm() with weights as a matrix
Dear All, I've been trying to run a Weighted Least Squares (WLS) regression: Dependent variables: a 60*200 matrix (*Rit*) with 200 companies and 60 dates for each company Independent variables: a 60*4 matrix (*Ft*) with 4 factors and 60 dates for each factor Weights: a 60*200 matrix (*Wit*) with weights for 200 companies and 60 dates for each company The WLS regression I would like to run
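lm() does accept a matrix response, but weights must then be a single vector, so a separate weight column per company means one weighted fit per column. A sketch with simulated matrices matching the stated dimensions:

    set.seed(1)
    Ft  <- matrix(rnorm(60 * 4),   60, 4, dimnames = list(NULL, paste0("F", 1:4)))
    Rit <- matrix(rnorm(60 * 200), 60, 200)
    Wit <- matrix(runif(60 * 200), 60, 200)
    coefs <- sapply(seq_len(ncol(Rit)), function(i)
      coef(lm(Rit[, i] ~ Ft, weights = Wit[, i])))
    dim(coefs)   # 5 coefficients (intercept + 4 factors) by 200 companies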
2009 Aug 17
1
lm.fit algo
Hi everyone, This is a little silly, but I can't figure out the algorithm behind the lm.fit function as used in the promax rotation algorithm. The promax function is: promax <- function(x, m = 4) { if(ncol(x) < 2) return(x); dn <- dimnames(x); xx <- varimax(x); x <- xx$loadings; Q <- x * abs(x)^(m-1); U <- lm.fit(x, Q)$coefficients; d <-
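In this context lm.fit() is just solving a multi-response least-squares problem by QR decomposition; its $coefficients are the usual normal-equations solution, which a quick check (with made-up loadings) illustrates:

    set.seed(1)
    x <- matrix(rnorm(30 * 3), 30, 3)
    Q <- x * abs(x)^3                          # m = 4, as in promax()
    U1 <- lm.fit(x, Q)$coefficients
    U2 <- solve(t(x) %*% x) %*% t(x) %*% Q     # same thing via the normal equations
    all.equal(unname(U1), unname(U2))          # TRUE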
2010 Jul 22
1
does package "QuantPsych" function lm.beta can handle results of a regression with weights?
Hello, and sorry for not providing an example. I run a regular linear regression (using lm) and use weights with it (weights = ...). I use the "QuantPsych" package and its function lm.beta to extract standardized regression weights from my lm regression object. When I don't use weights, everything is fine. But when I do use weights, I get an error that refers to the lm.beta code: "In b *
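One workaround (a sketch, not that package's own method) is to standardise the variables and refit with the same weights; the slopes of the rescaled fit are then standardised coefficients. Ordinary unweighted scaling is used here, and whether weighted standard deviations would be more appropriate depends on what the weights represent. Data below are simulated:

    set.seed(1)
    d <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
    d$y <- 1 + 0.5 * d$x1 - 0.3 * d$x2 + rnorm(100)
    w   <- runif(100)
    fit_std <- lm(scale(y) ~ scale(x1) + scale(x2), data = d, weights = w)
    coef(fit_std)[-1]    # standardised regression weights from the weighted fit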
2013 Mar 11
1
glm and lm can't find weights
Hello, and apologies for not providing an example. However, my question is more general. I have a lengthy function. This function uses another internal function that modifies the data frame I am reading in. The internal function uses model.frame (with data and weights inside) and returns a data frame that I use for further analyses. However, when I try to run my function
2017 Oct 07
1
Discourage the weights= option of lm with summarized data
In the Details section of lm (linear models) in the Reference manual, it is suggested to use the weights= option for summarized data. This must be discouraged rather than encouraged. The motivation for this is as follows. With summarized data, the standard errors should get smaller as the number of observations increases. However, the standard errors in lm do not get smaller when, for instance, all weights
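A small sketch of the point being made, with a toy summarized data set: passing counts as weights= reproduces the coefficients from the expanded data, but the residual degrees of freedom (and hence the standard errors) do not reflect the number of observations.

    d_sum <- data.frame(x = 1:4, y = c(1.1, 2.0, 2.9, 4.2), n = c(10, 20, 30, 40))
    d_exp <- d_sum[rep(seq_len(nrow(d_sum)), d_sum$n), ]   # one row per observation
    fit_w <- lm(y ~ x, data = d_sum, weights = n)
    fit_e <- lm(y ~ x, data = d_exp)
    coef(fit_w); coef(fit_e)                                # identical estimates
    c(df.residual(fit_w), df.residual(fit_e))               # 2 versus 98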
2004 Oct 28
2
Weighted regression using lm
Hi: Could anyone help me clarify this: are the weights normalized inside the lm function (package:stats) before being applied to the error term? For example: > lm(cost ~ material, weights=quatity, data=receipt). Will lm normalize quatity such that sum(quatity) = 1? I traced it to lm.wfit, where the weights get passed to a compiled Fortran routine, so I can't work out what happens. Thanks!
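A sketch suggesting that lm() does not rescale the weights, and that it would make no difference to the fit if it did: multiplying all weights by a constant leaves the coefficients and their standard errors unchanged, and only the reported residual standard error changes scale. The data and the "quatity" column name below are invented to mirror the original call:

    set.seed(1)
    receipt <- data.frame(material = runif(50),
                          quatity  = sample(1:10, 50, replace = TRUE))
    receipt$cost <- 2 + 3 * receipt$material + rnorm(50)
    f1 <- lm(cost ~ material, weights = quatity, data = receipt)
    f2 <- lm(cost ~ material, weights = quatity / sum(quatity), data = receipt)
    all.equal(coef(summary(f1)), coef(summary(f2)))   # TRUE: same estimates and SEs
    c(sigma(f1), sigma(f2))                           # only sigma differs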
2016 Apr 08
0
R.squared in summary.lm with weights
On 07/04/2016 5:21 PM, Murray Efford wrote: > Following some old advice on this list, I have been reading the code for summary.lm to understand the computation of R-squared from a weighted regression. Usually weights in lm are applied to squared residuals, but I see that the weighted mean of the observations is calculated as if the weights are on the original scale: > > [...] > f
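The weighted R-squared being discussed can be reproduced from a fitted object, which makes the computation easier to see than reading the summary.lm source; a sketch with simulated data, checked against summary():

    set.seed(1)
    x <- runif(50); w <- runif(50, 0.5, 2)
    y <- 1 + 2 * x + rnorm(50, sd = 1/sqrt(w))
    fit <- lm(y ~ x, weights = w)
    f <- fitted(fit); r <- residuals(fit)
    m   <- sum(w * f / sum(w))            # weighted mean of the fitted values
    mss <- sum(w * (f - m)^2)
    rss <- sum(w * r^2)
    c(mss / (mss + rss), summary(fit)$r.squared)   # the two agree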
2008 Aug 07
1
Fitted values with small weights in lm.wfit (PR#11979)
Full_Name: Alexander Blocker Version: 2.7.1 OS: Ubuntu 8.04 / Windows XP Submission from: (NULL) (76.119.235.225) When running lm(modeleq, weights=wt, data=dataset) with small weights (<1e-10), I have encountered an odd phenomenon with fitted values. Due to numerical precision issues, the fitted values and residuals returned by lm.wfit (from its .Fortran call to dqrls) can differ greatly from
2020 Aug 09
2
lm() takes weights from formula environment
Doesn't this preclude "y ~ ." style notations? > On Aug 9, 2020, at 11:56 AM, Duncan Murdoch <murdoch.duncan at gmail.com> wrote: > > This is fairly clearly documented in ?lm: > > "All of weights, subset and offset are evaluated in the same way as variables in formula, that is first in data and then in the environment of formula." > > There
2004 Feb 23
2
orthonormalization with weights
Hello List, I would like to orthonormalize vectors contained in a matrix X taking into account row weights (the diagonal of a matrix D), i.e., I want to obtain Z = XA with t(Z) %*% D %*% Z = diag(1). I can do the Gram-Schmidt orthogonalization with subsequent weighted regressions. I know that in the case of uniform weights, qr can do the trick. I wonder if there is a way to do it in the case of non-uniform
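qr() can still do the trick with non-uniform weights: a sketch of the usual device, assuming positive row weights d on the diagonal of D. If sqrt(d) * X = QR, then Z = X %*% solve(R) satisfies t(Z) %*% D %*% Z = I.

    set.seed(1)
    X <- matrix(rnorm(20 * 3), 20, 3)
    d <- runif(20, 0.5, 2)                        # row weights (diagonal of D)
    qrd <- qr(sqrt(d) * X)                        # QR of the row-rescaled matrix
    A <- solve(qr.R(qrd))                         # change-of-basis matrix
    Z <- X %*% A
    max(abs(t(Z) %*% diag(d) %*% Z - diag(3)))    # numerically zero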
2016 Apr 07
0
R.squared in summary.lm with weights
Do you mean w <- z$residuals? Type names(z) to see the list of items in your model. I ran your code on an lm and it worked fine. You don't need the brackets around mss <-
2020 Aug 09
3
lm() takes weights from formula environment
I know that programmers can reason this out from R's late parameter evaluation rules PLUS the explicit match.call()/eval() that lm() does to work with the passed-in formula and data frame. But from a statistical user's point of view this seems counter-productive. At best it works as if the user is passing in the name of the weights variable instead of its values (I know this is the obvious
2020 Aug 10
1
lm() takes weights from formula environment
Thank you for your suggestion. I do know how to work around the issue. I usually build a fresh environment as a child of the base environment and then insert the weights there. I was just trying to provide an example of the issue. emptyenv() cannot be used, because base functions are needed for the eval (it errors out with "could not find function list" even if weights are not used). For some applications
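A sketch of the kind of workaround described above, with all names invented here: put the weights into a small environment whose parent is baseenv(), and make that the environment of the formula, so lm() finds them when it evaluates weights there.

    set.seed(1)
    dat <- data.frame(x = runif(50))
    dat$y <- 1 + 2 * dat$x + rnorm(50)
    wts <- runif(50)
    e <- new.env(parent = baseenv())   # child of the base environment
    assign("w", wts, envir = e)        # the weights live only in e
    fml <- y ~ x
    environment(fml) <- e
    fit <- lm(fml, data = dat, weights = w)
    coef(fit)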