Hi there,

I am trying to use linear regression to fit the following model:

y <- c(0.2525, 0.3448, 0.2358, 0.3696, 0.2708, 0.1667, 0.2941, 0.2333,
       0.1500, 0.3077, 0.3462, 0.1667, 0.2500, 0.3214, 0.1364)
x2 <- c(0.368, 0.537, 0.379, 0.472, 0.401, 0.361, 0.644, 0.444, 0.440,
        0.676, 0.679, 0.622, 0.450, 0.379, 0.620)
x1 <- 1 - x2

# fit the model
lmFit <- lm(y ~ x1 + x2)
lmFit

Call:
lm(formula = y ~ x1 + x2)

Coefficients:
(Intercept)           x1           x2
    0.30521     -0.09726           NA

I would like to *constrain the coefficients of x1 and x2 to lie between 0
and 1*. Is there a way of adding constraints to lm?

I looked through the old help files and found a solution by Emmanuel using
least squares. The method (with modification) is as follows:

Data1 <- data.frame(y = y, x1 = x1, x2 = x2)

# The objective function: least squares
e <- expression((y - (c1 + c2*x1 + c3*x2))^2)
foo <- deriv(e, namevec = c("c1", "c2", "c3"))

# Objective
objfun <- function(coefs, data) {
  sum(eval(foo, envir = c(as.list(coefs), as.list(data))))
}

# Objective's gradient
objgrad <- function(coefs, data) {
  apply(attr(eval(foo, envir = c(as.list(coefs), as.list(data))), "gradient"), 2, sum)
}

D1.unbound <- optim(par = c(c1 = 0.5, c2 = 0.5, c3 = 0.5),
                    fn = objfun,
                    gr = objgrad,
                    data = Data1,
                    method = "L-BFGS-B",
                    lower = rep(0, 3),
                    upper = rep(1, 3))
D1.unbound

$par
         c1          c2          c3
0.004387706 0.203562156 0.300825550

$value
[1] 0.07811152

$counts
function gradient
       8        8

$convergence
[1] 0

$message
[1] "CONVERGENCE: REL_REDUCTION_OF_F <= FACTR*EPSMCH"

Any suggestion on how to fix the error "CONVERGENCE: REL_REDUCTION_OF_F <=
FACTR*EPSMCH"?
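For what it's worth, the same box-constrained fit can be written more
compactly without the deriv() machinery; a minimal sketch (when gr is
omitted, L-BFGS-B approximates the gradient numerically):

# plain residual sum of squares over the data frame Data1 defined above
rss <- function(coefs, data) {
  with(data, sum((y - (coefs[1] + coefs[2] * x1 + coefs[3] * x2))^2))
}

fit <- optim(par = c(c1 = 0.5, c2 = 0.5, c3 = 0.5),
             fn = rss,
             data = Data1,
             method = "L-BFGS-B",
             lower = rep(0, 3),
             upper = rep(1, 3))

fit$convergence  # 0 indicates successful completion (see ?optim); the
                 # REL_REDUCTION_OF_F message is informational, not an error
fit$par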
Due to perfect collinearity your regression coefficients are not uniquely
determined: x1 is defined as 1 - x2, so x1 + x2 reproduces the intercept
column exactly and lm() drops one coefficient (the NA in your output). You
won't be able to solve even the unconstrained version of this problem as
written.

Michael

On Tue, Mar 20, 2012 at 12:54 AM, priya fernandes
<priyyafernandes at gmail.com> wrote:
> I would like to *constrain the coefficients of x1 and x2 to lie between 0
> and 1*. Is there a way of adding constraints to lm?
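To see the rank deficiency concretely, a short sketch using the x1 and x2
from the original post: because x1 was built as 1 - x2, the intercept
column of the design matrix equals x1 + x2, so the matrix has rank 2 rather
than 3.

X <- model.matrix(~ x1 + x2)              # columns: (Intercept), x1, x2
qr(X)$rank                                # 2, not 3: columns are linearly dependent
all.equal(unname(X[, "x1"] + X[, "x2"]),
          unname(X[, "(Intercept)"]))     # TRUE: x1 + x2 reproduces the intercept column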
On Tue, Mar 20, 2012 at 12:54 AM, priya fernandes
<priyyafernandes at gmail.com> wrote:
> I would like to *constrain the coefficients of x1 and x2 to lie between 0
> and 1*. Is there a way of adding constraints to lm?

Assuming we set the intercept to zero, the unconstrained solution does
satisfy those constraints:

lm(y ~ x1 + x2 + 0)

An approach that sets the constraints explicitly (also removing the
intercept) would be nls with the "port" algorithm:

nls(y ~ a * x1 + b * x2,
    lower = c(a = 0, b = 0),
    upper = c(a = 1, b = 1),
    start = c(a = 0.5, b = 0.5),
    alg = "port")

--
Statistics & Software Consulting
GKX Group, GKX Associates Inc.
tel: 1-877-GKX-GROUP
email: ggrothendieck at gmail.com
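For reference, the "port" fit suggested above assembled into a
self-contained sketch with the data from the original post:

y  <- c(0.2525, 0.3448, 0.2358, 0.3696, 0.2708, 0.1667, 0.2941, 0.2333,
        0.1500, 0.3077, 0.3462, 0.1667, 0.2500, 0.3214, 0.1364)
x2 <- c(0.368, 0.537, 0.379, 0.472, 0.401, 0.361, 0.644, 0.444, 0.440,
        0.676, 0.679, 0.622, 0.450, 0.379, 0.620)
x1 <- 1 - x2

# no intercept; both coefficients bounded to [0, 1]
# (lower/upper are honoured only with algorithm = "port")
nlsFit <- nls(y ~ a * x1 + b * x2,
              start = c(a = 0.5, b = 0.5),
              lower = c(a = 0, b = 0),
              upper = c(a = 1, b = 1),
              algorithm = "port")
coef(nlsFit)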