On Oct 31, 2010, at 2:44 AM, Jim Silverton wrote:
> Hello everyone,
> I have 3 variables Y, X1 and X2. Each variable lies between 0 and
> 1. I want
> to do a constrained regression such that a>0 and (1-a) >0
>
> for the model:
>
> Y = a*X1 + (1-a)*X2
It would not enforce the constraint that a > 0, but you could handle
the other constraint within an lm fit:
X3 <- X1 - X2
lm(Y ~ X3 + offset(X2))
Here beta1 is the coefficient in the model

  Y ~ 1 + beta1*(X1 - X2) + 1*X2

which rearranges to

  Y ~ intercept + beta1*X1 + (1 - beta1)*X2

... so beta1 is a.
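To see the mechanics, here is a minimal sketch on simulated data (the
true value a = 0.3, the sample size, and the noise level are all made
up purely for illustration):

  set.seed(1)
  X1 <- runif(100); X2 <- runif(100)
  Y  <- 0.3*X1 + 0.7*X2 + rnorm(100, sd = 0.05)
  X3 <- X1 - X2
  fit <- lm(Y ~ X3 + offset(X2))
  coef(fit)["X3"]   ## estimate of a; the X2 coefficient is fixed at 1 by the offset

The coefficient on X3 should come back near 0.3, and the implied
coefficient on X2 is 1 minus that.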
If beta1 turns out negative, then I suppose a would be set to 0. This
might be accomplished within an iterative calculation framework by
heavily penalizing negative values. In a reply (1) to a question by
Carlos Alzola in 2008 on r-help, Berwin Turlach offered a solution to a
similar problem (sum(coef) == 1 AND all coefficients non-negative).
Modifying his code to incorporate the above strategy (and choosing two
variables whose parameter values might fall inside the constraint
boundaries), we get:
library(MASS)      ## to access the Boston data
library(quadprog)  ## for solve.QP

## design matrix for the reparameterized model; the offset term is not
## returned as a column of the model matrix
designmat <- model.matrix(medv ~ I(age - lstat) + offset(lstat), data = Boston)

## least-squares problem in quadratic-programming form
Dmat <- crossprod(designmat, designmat)
dvec <- crossprod(designmat, Boston$medv)

## constraints: the first column of Amat imposes sum(coef) == 1 (an
## equality, since meq = 1); the remaining columns impose coef >= 0
Amat <- cbind(1, diag(NROW(Dmat)))
bvec <- c(1, rep(0, NROW(Dmat)))
meq  <- 1

res <- solve.QP(Dmat, dvec, Amat, bvec, meq)
> zapsmall(res$solution)
[1] 0.686547 0.313453
Turlach specifically advised against any interpretation of this
particular result, which was constructed only to demonstrate the
mathematical mechanics.
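The same mechanics apply directly to the two-variable problem in the
original question. A sketch, reusing the simulated Y, X1, X2 from
above (again just an illustration, not a recommendation):

  library(quadprog)
  X    <- cbind(X1, X2)
  Dmat <- crossprod(X)          ## X'X
  dvec <- crossprod(X, Y)       ## X'Y
  ## constraints: coefficients sum to 1 (equality, meq = 1) and each >= 0
  Amat <- cbind(c(1, 1), diag(2))
  bvec <- c(1, 0, 0)
  res2 <- solve.QP(Dmat, dvec, Amat, bvec, meq = 1)
  zapsmall(res2$solution)       ## first element is the estimate of a

Both of the original constraints (a >= 0 and 1 - a >= 0) are then
satisfied automatically, since the two coefficients are non-negative
and sum to 1.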
>
> I tried the help on the constrained regression in R but I concede
> that it
> was not helpful.
I must not have that package installed, because I got nothing that
appeared to be useful from ??"constrained regression".
David Winsemius, MD
West Hartford, CT
1) http://finzi.psych.upenn.edu/Rhelp10/2008-March/155990.html