Hello,

I'm optimizing a log-likelihood function with the built-in optim() function, using the default method (Nelder-Mead). The model is essentially a Weibull distribution whose rate parameter depends on a single covariate coded 0 or 1: when the indicator is 1, the rate parameter takes a different value, Beta, which is included as a model parameter. I have programmed the model, checked it extensively, and am confident it is coded correctly.

While experimenting, though, I ran into something odd. If I set the entire covariate vector to zero (so Beta is always multiplied by zero) but leave Beta in the vector of parameters being optimized, optim() still changes the value of Beta, and, more troublingly, returns different estimates for the other parameters than it does when I drop Beta from the parameter vector and fix it at zero.

When I first saw this, I assumed I had made a coding error somewhere else. But here's the kicker: I modified the log-likelihood function so that it still takes Beta from the parameter vector (Beta <- parameters[4]) and then immediately overwrites it with zero. So Beta remains a parameter being optimized over, but it is exactly zero in every calculation. When I run this version, the same thing happens: optim() changes the value of Beta, and the estimates of the other parameters change as well. The ONLY difference between this model and the fixed-zero-Beta model is that the function being optimized still reads Beta from the parameter vector (and, again, immediately sets it to zero).

I hope I've explained this clearly. Does anyone have any idea what could be causing this phenomenon? It especially troubles me that the other parameters change.

Thanks in advance,
Ryan
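P.S. In case it helps, here is a minimal sketch of the three versions I'm comparing. This is not my actual model: the function names (nll_free, nll_fixed, nll_zeroed), the simulated data, and the rate = 1/scale parameterization are just for illustration, and my real model has more parameters (Beta sits at parameters[4] there, at par[3] here). But the structure is the same:

# Simulated Weibull data (hypothetical stand-in for my real data)
set.seed(1)
x <- rweibull(200, shape = 1.5, scale = 2)
z <- rep(0, 200)   # covariate vector forced to all zeros

# Version 1: Beta (par[3]) is a free parameter, but it is always
# multiplied by z = 0, so it cannot affect the likelihood value
nll_free <- function(par) {
  shape <- exp(par[1])
  rate  <- exp(par[2] + par[3] * z)
  -sum(dweibull(x, shape = shape, scale = 1 / rate, log = TRUE))
}

# Version 2: Beta dropped entirely and fixed at zero
nll_fixed <- function(par) {
  shape <- exp(par[1])
  rate  <- exp(par[2])
  -sum(dweibull(x, shape = shape, scale = 1 / rate, log = TRUE))
}

# Version 3: Beta is read from the parameter vector, then
# immediately overwritten with zero, so it never enters any calculation
nll_zeroed <- function(par) {
  Beta <- par[3]
  Beta <- 0
  shape <- exp(par[1])
  rate  <- exp(par[2] + Beta * z)
  -sum(dweibull(x, shape = shape, scale = 1 / rate, log = TRUE))
}

# Same data, same starting values, default Nelder-Mead throughout
fit_free   <- optim(c(0, 0, 0), nll_free)
fit_fixed  <- optim(c(0, 0),    nll_fixed)
fit_zeroed <- optim(c(0, 0, 0), nll_zeroed)

fit_free$par
fit_fixed$par
fit_zeroed$par

The starting values and seed are held fixed so that any difference between the three fits is attributable only to whether Beta is present in the parameter vector; comparing the par vectors from these three fits shows the pattern I described above.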