similar to: non linear modelling with nls: starting values

Displaying 20 results from an estimated 4000 matches similar to: "non linear modelling with nls: starting values"

2013 Feb 25
3
Empirical Bayes Estimator for Poisson-Gamma Parameters
Dear Sir/Madam, I apologize for any cross-posting. I have a simple question with which I thought the R list might help me find an answer. Suppose we have Y_1, Y_2, ..., Y_n ~ Poisson(Lambda_i) and Lambda_i ~ Gamma(alpha_i, beta_i). Empirical Bayes estimators for the hyper-parameters of the gamma distribution, i.e. (alpha_t, beta_t), are needed. y=c(12,5,17,14) n=4 What about a Hierarchical Bayes
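A minimal moment-matching sketch, under the simplifying assumption of one common pair (alpha, beta) for all counts (the marginal of Y is then negative binomial, so the sample mean and variance identify the hyper-parameters whenever the variance exceeds the mean):

    # empirical Bayes by method of moments for Y_i ~ Poisson(Lambda_i), Lambda_i ~ Gamma(alpha, beta)
    y <- c(12, 5, 17, 14)
    m <- mean(y)                      # E[Y]   = alpha / beta
    v <- var(y)                       # Var[Y] = alpha / beta + alpha / beta^2
    beta_hat  <- m / (v - m)          # needs v > m (over-dispersion)
    alpha_hat <- m * beta_hat
    c(alpha = alpha_hat, beta = beta_hat)

With observation-specific (alpha_i, beta_i) the hyper-parameters are not identifiable from a single count per unit without further structure.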
2009 Jun 16
1
turning off escape sequences for a string
Hello, I would like to create a matrix with one of the columns named $\delta$. I have also created columns $\beta_1$, $\beta_2$, etc. However, it seems that \d is treated as an escape sequence and gets removed. (I am using these names so that they come out right in xtable -> latex.) colnames(simpleReg.mat) <- c("$\beta_1$","$SE(\beta_1)$", "$\beta_2$",
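Backslashes in R string literals must themselves be escaped ("\\" in the source yields a single backslash in the string). A minimal sketch; the exact column set is illustrative, since the original call is cut off:

    colnames(simpleReg.mat) <- c("$\\beta_1$", "$SE(\\beta_1)$",
                                 "$\\beta_2$", "$SE(\\beta_2)$", "$\\delta$")
    cat(colnames(simpleReg.mat), sep = "\n")   # check: the backslashes survive

When printing, something like print(xtable(simpleReg.mat), sanitize.colnames.function = identity) keeps the dollar signs and LaTeX macros from being escaped on output.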
2013 Oct 19
2
ivreg with fixed effect in R?
I want to estimate the following fixed effect model:
y_i,t = alpha_i + beta_1 x1_t + beta_2 x2_i,t
x2_i,t = gamma_i + gamma_1 x1_t + gamma_2 Z1_i + gamma_3 Z2_i
I can use ivreg from AER to do the IV regression: fm <- ivreg(y_i,t ~ x1_t + x2_i,t | x1_t + Z1_i + Z2_i, data = DataSet) But I'm not sure how I can add the fixed effects. Thanks!
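One workaround is to absorb the fixed effects with explicit dummies on both sides of the ivreg formula; a sketch assuming long-format data with syntactically valid column names (y, x1, x2, Z1, Z2) and a unit identifier id, all of which are stand-ins for the poster's variables:

    library(AER)
    # factor(id) adds one dummy per unit; being exogenous, it belongs in the
    # regressor part and in the instrument part of the formula
    fm <- ivreg(y ~ x1 + x2 + factor(id) | x1 + Z1 + Z2 + factor(id),
                data = DataSet)
    summary(fm)

This is only practical for a moderate number of units; for large panels a dedicated fixed-effects IV estimator (for example the within estimator in the plm package, with an instrument part in its formula) is the usual route.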
2006 Nov 17
2
effects in ANCOVA
Dear R users, I am trying to fit the following ANCOVA model in R 2.4.0: Y_ij = mu + alpha_i + beta*(X_ij - Xbar..) + epsilon_ij, where Xbar.. is the overall mean of X. In particular I am interested in obtaining estimates for mu and for the effects alpha_i. I have this data (from the book Applied Linear Statistical Models by Neter et al. (1996), page 1020): y<-c(38,43,24,39,38,32,36,38,31,45,27,21,33,34,28)
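A minimal sketch using sum-to-zero contrasts, so that the intercept estimates mu and the factor coefficients estimate the alpha_i; the grouping and the covariate below are placeholders, because the snippet is cut off before them:

    y   <- c(38,43,24,39,38,32,36,38,31,45,27,21,33,34,28)
    trt <- factor(rep(1:3, each = 5))       # assumed treatment grouping
    x   <- rnorm(15, mean = 25, sd = 3)     # placeholder covariate
    fit <- lm(y ~ trt + I(x - mean(x)), contrasts = list(trt = "contr.sum"))
    coef(fit)                               # (Intercept) = mu, trt1 = alpha_1, trt2 = alpha_2
    -sum(coef(fit)[c("trt1", "trt2")])      # alpha_3 = -(alpha_1 + alpha_2)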
2012 Oct 04
1
(no subject)
Producing a multi-figure plot, I am trying to add beta_1, beta_2, ..., beta_9 to ylab using expression or substitute, but cannot work it out. Something like: for (i in 1:9){ plot(seq(1/m, 1-1/m, 1/m), beta.q[,i], type="l", col=1, ylim=range(beta.q), xlab="quantile", ylab=expression(beta[i])) } Any suggestions will be greatly appreciated. DL
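expression(beta[i]) keeps the symbol i literally; bquote() (or substitute()) inserts the current value. A minimal sketch, assuming m and beta.q exist as in the snippet:

    for (i in 1:9) {
      plot(seq(1/m, 1 - 1/m, 1/m), beta.q[, i], type = "l", col = 1,
           ylim = range(beta.q), xlab = "quantile",
           ylab = bquote(beta[.(i)]))   # .(i) substitutes the value of i into the plotmath label
    }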
2012 Feb 21
0
BHHH algorithm on duration time models for stock prices
I am currently trying to find the MLE of a function with four parameters. My code runs, but I don't get results; I get the following message:
BHHH maximisation
Number of iterations: 0
Return code: 100
Initial value out of range.
I don't know whether this is because of the way I have written my log-likelihood or something else. The log-likelihood is LogLik<-function(param){ beta_1<-param[1]
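Return code 100 from maxLik means the log-likelihood could not be evaluated at the starting values. A minimal check, with illustrative starting values, assuming LogLik is the function begun above:

    library(maxLik)
    start <- c(0.1, 0.1, 0.1, 0.1)   # illustrative; pick values inside the admissible region
    LogLik(start)                    # must be finite here, not NA/NaN/-Inf
    fit <- maxLik(LogLik, start = start, method = "BHHH")
    summary(fit)

Note that the BHHH method also needs the log-likelihood (or its gradient) returned observation by observation, i.e. as a vector with one element per observation, so the outer product of gradients can be formed.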
2018 Feb 16
2
[FORGED] Re: SE for all levels (including reference) of a factor after a GLM
On 16/02/18 15:28, Bert Gunter wrote: > This is really a statistical issue. What do you think the Intercept term > represents? See ?contrasts. > > Cheers, > Bert > > > > Bert Gunter > > "The trouble with having an open mind is that people keep coming along and > sticking things into it." > -- Opus (aka Berkeley Breathed in his "Bloom
2004 Aug 23
1
Two factor ANOVA with lm()
The following is a data frame > "jjd" <- structure(list(Observations = c(6.8, 6.6, 5.3, 6.1, 7.5, 7.4, 7.2, 6.5, 7.8, 9.1, 8.8, 9.1), LevelA = structure(c(1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3), .Label = c("A1", "A2", "A3"), class = "factor"), LevelB = structure(c(1, 1, 2, 2, 1, 1, 2, 2, 1, 1, 2, 2), .Label =
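The question itself is cut off here, but given the subject line, a minimal two-factor ANOVA sketch with lm() on a data frame shaped like jjd would be:

    fit <- lm(Observations ~ LevelA * LevelB, data = jjd)
    anova(fit)     # tests for the two main effects and the LevelA:LevelB interaction
    summary(fit)   # coefficients under the default treatment contrasts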
2011 Aug 19
3
Calculating p-value for 1-tailed test in a linear model
Hello, I'm having trouble figuring out how to calculate a p-value for a 1-tailed test of beta_1 in a linear model fit using lm. My model has only one continuous predictor variable. I want to test the null hypothesis that beta_1 >= 0. I can calculate the p-value for a 2-tailed test using the code "2*pt(-abs(t-value), df=degrees.freedom)", where t-value and degrees.freedom
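For the one-sided alternative the doubling and the absolute value are dropped and only the relevant tail is kept; a minimal sketch, with an illustrative model in which the predictor is called x:

    fit  <- lm(y ~ x)
    tval <- summary(fit)$coefficients["x", "t value"]
    df   <- fit$df.residual
    pt(tval, df = df)                        # H0: beta_1 >= 0  vs  H1: beta_1 < 0
    pt(tval, df = df, lower.tail = FALSE)    # H0: beta_1 <= 0  vs  H1: beta_1 > 0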
2018 Feb 16
0
SE for all levels (including reference) of a factor after a GLM
This is really a statistical issue. What do you think the Intercept term represents? See ?contrasts. Cheers, Bert Bert Gunter "The trouble with having an open mind is that people keep coming along and sticking things into it." -- Opus (aka Berkeley Breathed in his "Bloom County" comic strip ) On Thu, Feb 15, 2018 at 5:27 PM, Marc Girondot via R-help < r-help at
2011 Apr 12
2
Testing equality of coefficients in coxph model
Dear all, I'm running a coxph model of the form: coxph(Surv(Start, End, Death.ID) ~ x1 + x2 + a1 + a2 + a3) Within this model, I would like to compare the influence of x1 and x2 on the hazard rate. Specifically I am interested in testing whether the estimated coefficient for x1 is equal (or not) to the estimated coefficient for x2. I was thinking of using a Chow-test for this but the Chow
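A Wald test of H0: beta_x1 = beta_x2 can be built directly from the fitted coefficients and their covariance matrix; a minimal sketch, assuming fit is the coxph object shown above:

    b  <- coef(fit)
    V  <- vcov(fit)
    d  <- b["x1"] - b["x2"]
    se <- sqrt(V["x1", "x1"] + V["x2", "x2"] - 2 * V["x1", "x2"])
    2 * pnorm(-abs(d / se))    # two-sided Wald p-value

An equivalent likelihood-ratio test refits the model with I(x1 + x2) in place of x1 + x2 (imposing the equality) and compares the two fits with anova().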
2018 Feb 16
2
SE for all levels (including reference) of a factor after a GLM
Dear R-er, I am trying to get the standard errors of the fitted parameters for a factor with a glm, including the reference level: a <- runif(100) b <- sample(x=c("0", "1", "2"), size=100, replace = TRUE) df <- data.frame(A=a, B=b, stringsAsFactors = FALSE) g <- glm(a ~ b, data=df) summary(g)$coefficients # I don't get an SE for the reference level, here 0:
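In the default treatment-contrast parameterisation the intercept is the reference level's mean and the other coefficients are differences from it. To get a mean and SE for every level directly, one sketch is to drop the intercept so that each coefficient becomes a level mean:

    g0 <- glm(a ~ 0 + b)                 # one coefficient (and SE) per level of b
    summary(g0)$coefficients
    # equivalently, from the original fit g:
    predict(g, newdata = data.frame(b = c("0", "1", "2")), se.fit = TRUE)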
2005 Dec 29
0
calculating recursive sequences
Hi, I was trying to repeat the estimation of threshold GARCH models from the book "Analysis of Financial Time Series" by Ruey S. Tsay, and I was successful, but I had to use a "for" loop, which is quite slow. The loop is necessary, since you need to calculate a recursive sequence. Is there a faster way to do this in R, without using loops? The model is: r_t = \mu + \alpha_2
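For the purely linear part of such a recursion, stats::filter() with method = "recursive" avoids the explicit loop; the threshold (regime-dependent) terms of a TGARCH recursion are nonlinear and still need a loop or compiled code. A minimal sketch of a linear recursion a_t = w + x_t + b * a_{t-1}, with made-up inputs:

    set.seed(1)
    x <- rnorm(100)
    w <- 0.1; b <- 0.8
    a <- filter(w + x, filter = b, method = "recursive")   # a[t] = (w + x[t]) + b * a[t-1]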
2006 Oct 31
2
Put a normal curve on plot
I would like to be able to place a normal distribution surrounding the predicted values at various places on a plot. Below is some toy code that creates a scatterplot and plots a regression line through the data. library(MASS) mu <- c(0,1) Sigma <- matrix(c(1,.8,.8,1), ncol=2) set.seed(123) x <- mvrnorm(50,mu,Sigma) plot(x) abline(lm(x[,2] ~ x[,1])) Say I want to add a normal
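One way is to evaluate dnorm() over a grid of y values centred at a fitted value and draw it sideways, offset from a chosen x position; a sketch continuing the toy code, where the placement x0 and the horizontal scaling of the density are arbitrary choices:

    fit  <- lm(x[, 2] ~ x[, 1])
    s    <- summary(fit)$sigma                  # residual standard deviation
    x0   <- 1                                   # where to place the curve
    yhat <- sum(coef(fit) * c(1, x0))           # fitted value at x0
    yy   <- seq(yhat - 3 * s, yhat + 3 * s, length.out = 200)
    lines(x0 + dnorm(yy, mean = yhat, sd = s), yy)   # density drawn sideways, opening to the right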
2005 Dec 01
2
Minimizing a Function with three Parameters
Hi, I'm trying to get maximum likelihood estimates of \alpha, \beta_0 and \beta_1; these can be obtained by solving the following three equations:
n/\alpha + \sum_{i=1}^{n} \ln \hat{\psi}_i - \sum_{i=1}^{n} \ln(x_i + \hat{\psi}_i) = 0
\alpha \sum_{i=1}^{n} 1/\hat{\psi}_i - (\alpha + 1) \sum_{i=1}^{n} 1/(x_i + \hat{\psi}_i) = 0
\alpha \sum_{i=1}^{n} (
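These score equations have the shape of a Lomax (Pareto type II) log-likelihood with \psi_i depending on beta_0 and beta_1, but that reading is an assumption since the model is not stated in full. Under that assumption, maximising the log-likelihood numerically with optim() avoids solving the system by hand; x, z, and the starting values below are placeholders:

    # assumed density: f(x_i) = alpha * psi_i^alpha / (x_i + psi_i)^(alpha + 1),
    # with psi_i = beta0 + beta1 * z_i and psi_i > 0
    negll <- function(par, x, z) {
      alpha <- par[1]; beta0 <- par[2]; beta1 <- par[3]
      psi <- beta0 + beta1 * z
      if (alpha <= 0 || any(psi <= 0)) return(Inf)
      -sum(log(alpha) + alpha * log(psi) - (alpha + 1) * log(x + psi))
    }
    fit <- optim(c(1, 1, 0), negll, x = x, z = z, hessian = TRUE)
    fit$par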
2013 Apr 30
0
Ridge regression
Hi all, I have run a ridge regression on a data set 'final' as follows: reg <- lm.ridge(final$l ~ final$lag1 + final$lag2 + final$g + final$u, lambda = seq(0, 10, 0.01)) Then I enter select(reg) and it returns:
modified HKB estimator is 19.3409
modified L-W estimator is 36.18617
smallest value of GCV at 10
I think it
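"smallest value of GCV at 10" means the GCV criterion is still decreasing at the upper end of the lambda grid, so the grid should be widened; a minimal sketch, assuming final contains those columns:

    library(MASS)
    reg <- lm.ridge(l ~ lag1 + lag2 + g + u, data = final,
                    lambda = seq(0, 100, 0.1))      # wider grid than 0..10
    plot(reg$lambda, reg$GCV, type = "l")           # look for an interior minimum
    select(reg)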
2007 Aug 15
1
Polynomial fitting
Hi everybody! I'm looking for a way to do a polynomial fit in R, like the polyfit function of Octave/MATLAB. For those who don't know it, c = polyfit(x,y,m) finds the coefficients of a polynomial p(x) of degree m that fits the data p(x[i]) to y[i] in a least-squares sense. The result c is a vector of length m+1 containing the polynomial coefficients in descending powers: p(x) = c[1]*x^m +
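lm() with a raw polynomial basis gives the same least-squares fit; a minimal sketch of a small wrapper that returns the coefficients in descending powers, as polyfit does:

    polyfit <- function(x, y, m) {
      fit <- lm(y ~ poly(x, degree = m, raw = TRUE))
      rev(coef(fit))   # lm returns ascending powers; reverse to match Octave/MATLAB
    }
    x <- 1:10
    y <- 2 * x^2 - 3 * x + 1 + rnorm(10, sd = 0.1)
    polyfit(x, y, 2)   # approximately c(2, -3, 1)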
2005 Nov 24
1
residuals in logistic regression model
In the logistic regression model there is no residual term: log(pi/(1-pi)) = beta_0 + beta_1*X_1 + ... But a glm fit will return residuals. What are they? How should I understand this? Can we put a residual into the logistic regression model by replacing pi with pi' (the estimated pi), i.e. log(pi'/(1-pi')) = beta_0 + beta_1*X_1 + ... + e_i? Thanks!
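glm reports several kinds of residuals, all defined on the data scale or via the likelihood rather than as an additive error in the linear predictor; a minimal sketch with an illustrative fit:

    fit <- glm(y ~ x, family = binomial)
    residuals(fit, type = "deviance")   # the default: signed square roots of the deviance contributions
    residuals(fit, type = "pearson")    # (y - pi_hat) / sqrt(pi_hat * (1 - pi_hat))
    residuals(fit, type = "response")   # y - pi_hat, on the probability scale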
2007 Sep 13
1
Problem using xtable on an array
Hi all I know about producing a minimal example to show my problem. But I'm having trouble producing a minimal example that displays this behaviour, so please bear with me to begin with. Observe: I create an array called model.mat. Some details on this: > str(model.mat) num [1:18, 1:4] -0.170 -0.304 -2.617 2.025 -1.610 ... - attr(*, "dimnames")=List of 2 ..$ : chr
2008 May 16
1
SE of difference in fitted probabilities from logistic model.
I am fitting a logistic binomial model of the form glm(y ~ a*x,family=binomial) where a is a factor (with 5 levels) and x is a continuous predictor. To assess how much ``impact'' x has, I want to compare the fitted success probability when x = its maximum value with the fitted probability when x = its mean value. (The mean and the max are to be taken by level of the factor
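The two fitted probabilities come from the same coefficient vector, so their covariance matters for the SE of the difference; a delta-method sketch, in which the data frame dat, the level "A1", and the default treatment contrasts are all assumptions:

    fit <- glm(y ~ a * x, family = binomial, data = dat)
    nd  <- data.frame(a = factor("A1", levels = levels(dat$a)),
                      x = c(max(dat$x), mean(dat$x)))      # per-level max/mean in practice
    X   <- model.matrix(delete.response(terms(fit)), nd)   # rows: the two prediction points
    eta <- drop(X %*% coef(fit))
    p   <- plogis(eta)
    # gradient of p[1] - p[2] with respect to beta, using dp/deta = p * (1 - p)
    g   <- p[1] * (1 - p[1]) * X[1, ] - p[2] * (1 - p[2]) * X[2, ]
    se  <- sqrt(drop(t(g) %*% vcov(fit) %*% g))
    c(diff = p[1] - p[2], se = se)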