search for: reparametr

Displaying 20 results from an estimated 30 matches for "reparametr".

2006 Sep 28
1
Nonlinear fitting - reparametrization help
Hi, I am trying to fit a function of the form: y = A0 + A1*exp(-0.5*((X - Mu1)/Sigma1)^2) - A2*exp(-0.5*((X - Mu2)/Sigma2)^2), i.e. a mean term (A0) plus a difference between two Gaussians. The constraints are A1, A2 > 0, Sigma1, Sigma2 > 0, and usually Sigma2 > Sigma1. The plot looks like a "Mexican Hat". I had trouble (poor fits) fitting this function to toy data
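
A minimal sketch of one reparametrization for this model (the simulated data and parameter names are illustrative, not the poster's): writing A1 = exp(a1), A2 = exp(a2), Sigma1 = exp(s1) and Sigma2 = Sigma1 + exp(d) makes the positivity and ordering constraints hold by construction, so an ordinary unconstrained least-squares fit can be used.

set.seed(4)
X <- seq(-10, 10, length.out = 201)
Y <- 1 + 3*exp(-0.5*(X/1)^2) - 2*exp(-0.5*(X/3)^2) + rnorm(201, sd = 0.05)
dog <- function(p, x) {                               # p = (A0, a1, Mu1, s1, a2, Mu2, d)
  A1 <- exp(p[2]); Sigma1 <- exp(p[4])
  A2 <- exp(p[5]); Sigma2 <- Sigma1 + exp(p[7])       # forces Sigma2 > Sigma1 > 0
  p[1] + A1*exp(-0.5*((x - p[3])/Sigma1)^2) - A2*exp(-0.5*((x - p[6])/Sigma2)^2)
}
rss <- function(p) sum((Y - dog(p, X))^2)
fit <- optim(c(1, log(2), 0, log(1), log(1), 0, log(1.5)), rss,
             method = "BFGS", control = list(maxit = 500))
fit$convergence                                       # 0 means optim() reports convergence
exp(fit$par[c(2, 4, 5)])                              # A1, Sigma1, A2 back on the natural scale

If only positivity is needed, nls(..., algorithm = "port") with lower bounds is another option.
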
2004 Aug 09
4
linear constraint optim with bounds/reparametrization
...es constrOptim exits with an error saying that the initial point is not feasible, which it isn't, because it is not in the interior of the constraint space. Is there an alternative to constrOptim that can handle such strict (equality) linear constraints? 2) Another option of course would be to reparametrize the problem as follows. I will illustrate with an example: I have parameters:

> p
      [,1]
 [1,]  0.8
 [2,]  0.2
 [3,]  0.2
 [4,]  0.8
 [5,]  0.6
 [6,]  0.1
 [7,]  0.3
 [8,]  0.1
 [9,]  0.3
[10,]  0.6
[11,]  0.5
[12,]  0.5

and the following constraints (all these constraints amount to cert...
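
A sketch of option 2) for the block-sum-to-one structure the printed vector suggests (the grouping, the counts and the objective below are invented for illustration): map unconstrained working parameters onto each block with a multinomial-logit ("softmax") transform, so the equality constraints hold identically and plain optim() applies.

blocks <- list(1:2, 3:4, 5:7, 8:10, 11:12)           # assumed grouping of the 12 parameters
softmax <- function(z) { e <- exp(c(z, 0)); e / sum(e) }
to_p <- function(theta) {                            # 7 free parameters: 12 components minus 5 sum constraints
  out <- numeric(12); i <- 1
  for (b in blocks) {
    k <- length(b) - 1
    out[b] <- softmax(theta[i:(i + k - 1)])
    i <- i + k
  }
  out
}
n   <- c(8, 2, 2, 8, 6, 1, 3, 1, 3, 6, 5, 5)         # hypothetical multinomial counts
obj <- function(theta) -sum(n * log(to_p(theta)))    # hypothetical objective to minimise
fit <- optim(rep(0, 7), obj, method = "BFGS")
round(to_p(fit$par), 2)                              # each block sums to one exactly
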
2007 Aug 07
0
Automatic implementation of "trivial" constraints in optimization
Hi all, I am wondering if anyone has implemented (or at least tried to) an automatic reparametrization in order to satisfy "trivial" constraints (in the sense of Dennis & Schnabel, 1983) in optimization problems. To be perhaps clearer, let us consider a simple bi-exponential model for some recorded signal (sorry for the LaTeX notation, I hope it isn't too confusing): $s(t)...
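
A hand-rolled sketch of the "automatic" idea for the simplest kind of trivial constraint, positivity (the wrapper and the simulated bi-exponential data are only illustrative, not the Dennis & Schnabel machinery): declare which parameters must be positive and have the optimizer work on their logs.

wrap_positive <- function(fn, positive) {       # optimise the marked components on the log scale
  function(theta, ...) {
    theta[positive] <- exp(theta[positive])
    fn(theta, ...)
  }
}
set.seed(5)
tt <- seq(0, 5, by = 0.05)
s  <- 4*exp(-3*tt) + 1*exp(-0.4*tt) + rnorm(length(tt), sd = 0.02)
rss <- function(p) sum((s - p[1]*exp(-p[2]*tt) - p[3]*exp(-p[4]*tt))^2)  # p = (a1, k1, a2, k2) > 0
fit <- optim(log(c(3, 2, 2, 1)), wrap_positive(rss, 1:4), method = "BFGS")
exp(fit$par)                                    # back on the natural (positive) scale
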
2000 Jul 13
1
documentation for contrasts and contrasts<- (PR#607)
...s not list all the arguments for those functions. In addition to x, the factor whose contrasts are being extracted or set, contrasts() has the argument 'contrasts=TRUE', and contrasts<-() has the argument 'how.many'. It was this latter that had me flummoxed, because I wanted to reparametrize a model by specifying a full-rank contrast matrix (fitting without an intercept). R. Woodrow Setzer, Jr. Phone: (919) 541-0128 Biostatistics and Fax: (919) 541-4002 Research Support Staff NHEERL MD-55; US EPA; RTP, NC 27711 -.-.-.-.-.-.-.-....
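
A throw-away sketch of the two arguments in question (the factor and coding are invented; the goal of a full-rank coding for an intercept-free fit is the poster's):

f <- factor(rep(c("a", "b", "c"), 2))
contrasts(f)                            # extractor; 'contrasts = TRUE' is its default
contrasts(f, how.many = 3) <- diag(3)   # without how.many only nlevels - 1 = 2 columns would be kept
contrasts(f)                            # now a full-rank 3-column coding
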
2004 Mar 28
1
GLM for logistic regression and WEIGHTS
...endent variable is a proportion so that the weight is the total from which this proportion is derived. So what should I do if I want to use logistic regression but want to use weight to give more importance to certain observations (e.g. weight=0.87) and less to others (e.g. weight=.45) ? Should I reparametrize everything in terms of counts or is there an easier way out? Thanks in advance M-P
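
A sketch with made-up numbers: for a proportion response the usual setup is to pass the binomial totals as the weights, so nothing has to be re-expressed as counts; purely fractional "importance" weights also run, and family = quasibinomial then avoids the non-integer-successes warning.

d <- data.frame(prop  = c(0.10, 0.25, 0.40, 0.70),    # observed proportions
                total = c(50, 40, 30, 20),            # binomial denominators
                x     = 1:4)
glm(prop ~ x, family = binomial, weights = total, data = d)
glm(prop ~ x, family = quasibinomial,                 # arbitrary importance weights
    weights = c(0.87, 0.45, 1, 1), data = d)
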
2000 Aug 01
1
Testing for parallel slopes
...49 - 154 has a very nice example of an ANOCOVA, ending with a discussion of this very operation. My question has to do with the issue of parametrization. When I use the example data set "whiteside", I get the same results that VR get (see p. 154) except I'm getting them *without* reparametrizing the model. (In fact, I get the same answers with the default parametrization and the alternative). Since I'm only interested in the interaction term, is it even necessary to change the structure of the model matrix, or will I get the answer I need using the default contrast matrices (...
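
A sketch with the whiteside data mentioned above: the F test of the interaction (that is, of parallel slopes) compares nested fits by their residual sums of squares, so it does not depend on which full-rank contrast matrices parametrize the factor.

library(MASS)                      # for the whiteside data
parallel <- lm(Gas ~ Insul + Temp, data = whiteside)
separate <- lm(Gas ~ Insul * Temp, data = whiteside)
anova(parallel, separate)          # same F and p-value under any full-rank parametrization
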
2005 Oct 13
3
Optim with two constraints
...d="L-BFGS-B", H=H, F=f ,control=list(fnscale=-1), lower=0, upper=1) ###### If I understand this correctly, using L-BFGS-B with lower=0 and upper=1 should take care of constraint 1 (box constraints). What I am lacking is the skill to include constraint no 2. I guess I could solve this by reparametrization but I am not sure how exactly. I could not find (i.e. wasn't able to infer) the answer to this in the archives despite the many comments on optim and constrained optimization (sorry if I missed it there). I am using version 2.1.1 under windows XP. Thank you very much. Jens
2001 Nov 14
0
Fitting Pareto dist in a mixture
...ing S language functions (and reference to MASS and S Programming), and even when considering lognormal mixtures as we are. However, I have been attempting to use the Pareto density form f(x) = ( a * k^a ) / ( x^{a + 1} ) ; k > 0, a > 0, x >= k as part of a mixture with a lognormal. Reparametrizing the density to f(x) = ( a * k^a ) / ( (x + k)^{a + 1} ) ; k > 0, a > 0, x > 0 seems to help but the optimization routines remain balky and / or give convergences that do not make sense to me. (I understand that starting values, scales, etc. are very important.) I suspect that th...
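
A sketch of the reparametrization theme on the Pareto piece alone (no mixture, and the data are simulated, so this is not the poster's problem): both density forms can be written down directly, and optimising log(a) and log(k) keeps the shape and scale positive without explicit constraints.

dpareto       <- function(x, a, k) ifelse(x >= k, a * k^a / x^(a + 1), 0)  # original form, x >= k
dpareto.shift <- function(x, a, k) a * k^a / (x + k)^(a + 1)               # shifted form, x > 0
negll <- function(theta, x) {                  # theta = (log a, log k)
  a <- exp(theta[1]); k <- exp(theta[2])
  -sum(log(dpareto.shift(x, a, k)))
}
set.seed(2)
x <- runif(500)^(-1/3) - 1                     # hypothetical draws from the shifted form (a = 3, k = 1)
fit <- optim(c(0, 0), negll, x = x)            # Nelder-Mead on the unconstrained scale
exp(fit$par)                                   # back-transformed (a, k)
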
2006 Nov 08
1
nls
> y
[1]   1  11  42  64 108 173 214
> t
[1] 1 2 3 4 5 6 7
> nls(1/y ~ c*exp(-a*b*t)+1/b, start=list(a=0.001,b=250,c=5), trace=TRUE)
29.93322 :   0.001 250.000   5.000
Error in numericDeriv(form[[3]], names(ind), env) :
        Missing value or an infinity produced when evaluating the model
# the start value for b is almost close to final estimates,
# a is usually
2007 Aug 01
1
constrOptim
Hi, I'm having trouble using the constrOptim function to generate the 9-component vector argmin of the function ELfsds:

ELfsds <- function(pvechat){
  LG = 0
  for(i in 1:9){ LG = LG + log(pvechat[i]) }
  return(-LG)
}

with accompanying gradient function:

gradfunc <- function(thetavec){
  g = 1/(9*thetavec)
  return(g)
}

The constraints on the optimization problem are: 1 - components of
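
The constraint list is cut off above, but if (as the 9-component setup suggests) it is the probability simplex, one workable sketch is to eliminate the ninth component and hand constrOptim only inequalities; note, too, that the analytic gradient of -sum(log(p)) is -1/p rather than 1/(9*p). Everything below is illustrative, not the original code:

negEL <- function(p8) {                 # p8 = first 8 components; p9 = 1 - sum(p8)
  p <- c(p8, 1 - sum(p8))
  -sum(log(p))
}
ui <- rbind(diag(8), rep(-1, 8))        # p1..p8 >= 0  and  1 - sum(p1..p8) >= 0
ci <- c(rep(0, 8), -1)
fit <- constrOptim(rep(0.1, 8), negEL, grad = NULL, ui = ui, ci = ci)
c(fit$par, 1 - sum(fit$par))            # close to rep(1/9, 9)
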
2009 Dec 10
1
MLE for a t distribution
Given X1,...,Xn ~ t_k(mu,sigma) student t distribution with k degrees of freedom, mean mu and standard deviation sigma, I want to obtain the MLEs of the three parameters (mu, sigma and k). When I try traditional optimization techniques I don't find the MLEs. Usually I just get k->infty. Does anybody know of any algorithms/functions in R that can help me obtain the MLEs? I am especially
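
One sketch that usually behaves (simulated data; sigma is treated as a scale parameter): optimise mu together with log(sigma) and log(k), so the optimiser can never leave the parameter space; k drifting off to infinity then simply says the sample looks Gaussian.

negll <- function(theta, x) {                       # theta = (mu, log sigma, log k)
  mu <- theta[1]; sigma <- exp(theta[2]); k <- exp(theta[3])
  -sum(dt((x - mu) / sigma, df = k, log = TRUE) - log(sigma))
}
set.seed(42)
x <- 1 + 2 * rt(500, df = 5)                        # hypothetical data: mu = 1, sigma = 2, k = 5
fit <- optim(c(median(x), log(mad(x)), log(10)), negll, x = x, method = "BFGS")
c(mu = fit$par[1], sigma = exp(fit$par[2]), k = exp(fit$par[3]))
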
2011 Jan 17
0
R-help Digest, Vol 95, Issue 17
...le" limits. They don't need to be too tight, just enough to keep the parameters from giving a silly objective function 2) do some evaluations of the objective to make sure it is really being properly calculated. Never hurts to have some "known" outcomes. Beyond this, we get into reparametrizations. Great idea, but far too much work for most of us, even if we work in the field. Best, JN On 01/17/2011 06:00 AM, r-help-request at r-project.org wrote: > From: Uwe Ligges <ligges at statistik.tu-dortmund.de> > To: Jinrui Xu <jinruixu at umich.edu> > Cc: r-help at r...
2011 Jan 17
0
[Fwd: Re: R-help Digest, Vol 95, Issue 17]
...le" limits. They don't need to be too tight, just enough to keep the parameters from giving a silly objective function 2) do some evaluations of the objective to make sure it is really being properly calculated. Never hurts to have some "known" outcomes. Beyond this, we get into reparametrizations. Great idea, but far too much work for most of us, even if we work in the field. Best, JN On 01/17/2011 06:00 AM, r-help-request at r-project.org wrote: > From: Uwe Ligges <ligges at statistik.tu-dortmund.de> > To: Jinrui Xu <jinruixu at umich.edu> > Cc: r-help at r...
2008 Aug 06
1
Numerical optimisation and "non-feasible" regions
...cially the BFGS method, probably due to the estimation of the gradient) the optimization is really sensitive to this "strategy" and fails (quite often). As I'm (really) not an expert in optimization problems, do you know good ways to deal with non-feasible regions? Or do I need to reparametrize my model so that all parameters belong to $\mathbb{R}$ - which should be not so easy... Thanks for your expertise! Best, Mathieu -- Institute of Mathematics Ecole Polytechnique Fédérale de Lausanne STAT-IMA-FSB-EPFL, Station 8 CH-1015 Lausanne Switzerland http://stat.epfl.ch/ Tel: + 41 (0...
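
A minimal sketch of the reparametrization route, with a one-parameter binomial likelihood standing in for the real model: let the optimiser work on all of R and map back with plogis() or exp(), so a non-feasible point can never be proposed.

set.seed(1)
k <- rbinom(100, size = 10, prob = 0.3)      # hypothetical data
negll <- function(theta) {                   # theta lives on the whole real line
  p <- plogis(theta)                         # mapped back into (0, 1)
  -sum(dbinom(k, size = 10, prob = p, log = TRUE))
}
fit <- optim(0, negll, method = "BFGS")
plogis(fit$par)                              # estimate on the natural scale, near 0.3
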
2008 Dec 31
3
WinBUGS posterior samples (via R2WinBUGS)?
Hi all, I did some analysis using package R2WinBUGS to call WinBUGS. I set the iterations to 50000 (a fairly large number, I think), but after the program was done, the effective posterior samples contained only 7 draws. I don't know why. By the way, I checked the posterior sample size by using bugsobj$n.sims. And, for my previous practice with WinBUGS/R2WinBUGS, no such strange thing happened.
2005 Feb 01
3
polynomials REML and ML in nlme
Hello everyone, I hope this is a fair enough question, but I don't have access to a copy of Bates and Pinheiro. It is probably quite obvious, but the answer might be of general interest. If I fit a fixed effect with an added quadratic term and then do it as an orthogonal polynomial using maximum likelihood, I get the expected result - they have the same logLik.
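
A sketch with the Orthodont data shipped in nlme (not the poster's data): the raw and orthogonal quadratic codings span the same fixed-effects column space, so under ML the logLik values agree exactly. Under REML the criterion picks up a constant log-determinant term from the change of basis, which is one reason REML fits with different fixed-effects codings should not be compared by logLik.

library(nlme)
raw  <- lme(distance ~ age + I(age^2), random = ~ 1 | Subject,
            data = Orthodont, method = "ML")
orth <- lme(distance ~ poly(age, 2),   random = ~ 1 | Subject,
            data = Orthodont, method = "ML")
c(logLik(raw), logLik(orth))     # identical up to numerical rounding
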
2006 May 29
2
parameter-restrictions in OPTIM
An embedded text with an undefined character set was scrubbed. Name: not available URL: https://stat.ethz.ch/pipermail/r-help/attachments/20060529/2d606d35/attachment.pl
2010 Sep 02
1
Help on glm and optim
Dear all, I'm trying to use the "optim" function to replicate the results from "glm", using an example from the help page of "glm", but I could not get the "optim" function to work. Would you please point out where I went wrong? Thanks a lot. The following is the code:

# Step 1: fit the glm
clotting <- data.frame( u =
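
One way this can be made to work (a sketch, not necessarily where the original attempt went wrong): for the Gamma/inverse-link clotting example from ?glm, the coefficient estimates do not depend on the dispersion, so minimising the Gamma deviance with optim() should reproduce coef(glm(...)). The data values are re-typed from the glm help page; check them against ?glm.

clotting <- data.frame(u    = c(5, 10, 15, 20, 30, 40, 60, 80, 100),
                       lot1 = c(118, 58, 42, 35, 27, 25, 21, 19, 18))
X <- cbind(1, log(clotting$u))
y <- clotting$lot1
dev <- function(beta) {                       # Gamma deviance, inverse link: mu = 1/(X %*% beta)
  mu <- 1 / drop(X %*% beta)
  if (any(mu <= 0)) return(1e10)              # crude guard against infeasible means
  2 * sum(-log(y / mu) + (y - mu) / mu)
}
fit <- optim(c(0.01, 0.01), dev)              # Nelder-Mead is fine for two parameters
fit$par                                       # compare with:
coef(glm(lot1 ~ log(u), data = clotting, family = Gamma))
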
2005 Nov 28
3
optimization with inequalities
I have to estimate the following model for several groups of observations:

y*(1-y) = p[1]*(x^2-y) + p[2]*y*(x-1) + p[3]*(x-y)

with constraints:

p[1]+p[3] >= 1
p[1]+p[2]+p[3]+1 >= 0
p[3] >= 0

I use the following code:

func <- sum((y*(1-y) - p[1]*(x^2-y) + p[2]*y*(x-1) + p[3]*(x-y))^2)
estim <- optim( c(1,0,0), func, method="L-BFGS-B", lower=c(1-p[3], -p[1]-p[3]-1,
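
The three constraints are linear in p, which is exactly the case constrOptim() handles (feasibility expressed as ui %*% p - ci >= 0), so no reparametrization is needed. A sketch with made-up data standing in for x and y:

set.seed(1)
x <- runif(50); y <- runif(50)                           # hypothetical data
ssr <- function(p)                                       # residual sum of squares for the model above
  sum((y*(1 - y) - (p[1]*(x^2 - y) + p[2]*y*(x - 1) + p[3]*(x - y)))^2)
ui <- rbind(c(1, 0, 1),                                  # p[1] + p[3] >= 1
            c(1, 1, 1),                                  # p[1] + p[2] + p[3] >= -1
            c(0, 0, 1))                                  # p[3] >= 0
ci <- c(1, -1, 0)
constrOptim(c(1, 0, 0.5), ssr, grad = NULL, ui = ui, ci = ci)
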
2006 Dec 08
1
MAXIMIZATION WITH CONSTRAINTS
Dear R users, I'm a graduate student and in my master's thesis I must obtain the values of the parameters x_i which maximize this multinomial log-likelihood function

log(n!) - sum_{i=1}^4 log(n_i!) + sum_{i=1}^4 n_i log(x_i)

under the following constraints: a) sum_i x_i = 1, x_i >= 0, b) x_1 <= x_2 + x_3 + x_4, c) x_2 <= x_3 + x_4. I have been using the 'constrOptim' R function with the instructions
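
A sketch of one way to set this up with constrOptim() (the counts n_i are invented; only the constraints follow the description above): eliminate x_4 = 1 - x_1 - x_2 - x_3, so the equality constraint disappears and what remains is a set of linear inequalities ui %*% theta - ci >= 0.

n <- c(10, 20, 30, 40)                       # hypothetical counts n_1..n_4
negll <- function(theta) {                   # theta = (x1, x2, x3); x4 = 1 - sum(theta)
  x <- c(theta, 1 - sum(theta))
  -sum(n * log(x))                           # log(n!) and log(n_i!) are constants, dropped
}
ui <- rbind(diag(3),                         # x1, x2, x3 >= 0
            c(-1, -1, -1),                   # x4 >= 0
            c(-2,  0,  0),                   # x1 <= x2 + x3 + x4  (i.e. x1 <= 1/2)
            c(-1, -2,  0))                   # x2 <= x3 + x4
ci <- c(0, 0, 0, -1, -1, -1)
fit <- constrOptim(c(0.20, 0.25, 0.25), negll, grad = NULL, ui = ui, ci = ci)
c(fit$par, 1 - sum(fit$par))                 # estimated (x1, x2, x3, x4)
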