Displaying 20 results from an estimated 100 matches similar to: "BHHH algorithm on duration time models for stock prices"
2013 Apr 30
0
Ridge regression
Hi all,
I have run a ridge regression on a data set 'final' as follows:
reg=lm.ridge(final$l~final$lag1+final$lag2+final$g+final$u,
lambda=seq(0,10,0.01))
Then I enter select(reg) and it returns:
modified HKB estimator is 19.3409
modified L-W estimator is 36.18617
smallest value of GCV at 10
I think it
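A minimal sketch of how one might follow up on that select() output, assuming MASS::lm.ridge and the variables from the post; since the smallest GCV sits at the upper boundary of the grid (10), the usual first step is to widen the lambda range:

library(MASS)

# widen the lambda grid, because the GCV minimum was at the boundary (10)
reg <- lm.ridge(l ~ lag1 + lag2 + g + u, data = final,
                lambda = seq(0, 100, 0.1))

select(reg)                 # HKB, L-W and GCV suggestions
best <- which.min(reg$GCV)  # index of the smallest GCV
reg$lambda[best]            # the corresponding lambda
coef(reg)[best, ]           # ridge coefficients at that lambda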
2003 Aug 04
1
BHHH algorithm
Dear R users,
Could you tell me where I can find some references for the BHHH algorithm? I
need to write it in R.
Thank you.
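For reference, the original paper is Berndt, Hall, Hall and Hausman (1974), "Estimation and Inference in Nonlinear Structural Models", Annals of Economic and Social Measurement 3/4. The update itself is a Newton-type step that replaces the Hessian with the outer product of the observation-level scores; a minimal, hypothetical R sketch (score_i is assumed to return an n x k matrix whose i-th row is the gradient of observation i's log-likelihood):

bhhh_step <- function(theta, score_i, ...) {
  G <- score_i(theta, ...)        # n x k matrix of observation scores
  A <- crossprod(G)               # sum_i g_i g_i' : the OPG Hessian approximation
  theta + solve(A, colSums(G))    # Newton-type ascent step on the log-likelihood
}
# Iterate bhhh_step() until the parameter change (or the log-likelihood change)
# falls below a tolerance; in practice maxLik::maxBHHH() already implements this.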
2007 Apr 09
1
R:Maximum likelihood estimation using BHHH and BFGS
Dear R users,
I am new to R. I would like to find *maximum likelihood estimators for psi
and alpha* based on the following *log likelihood function*, where c is
consumption data comprising 148 entries:
fn <- function(c, psi, alpha)
{
  n     <- length(c)
  c_lag <- c[1:(n - 1)]   # lagged consumption, c[i-1]
  c_cur <- c[2:n]         # current consumption, c[i]
  s1 <- sum((c_cur - psi^(-1/alpha) * c_lag)^2 * c_lag^(-2 * (alpha + 1)))
  s2 <- sum(log(c_lag^(2 * alpha + 2)))
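The snippet is cut off above. One way to take it further (a sketch, not the poster's final code): write the log-likelihood so that it returns one contribution per observation; then optim() with method = "BFGS" can maximize the summed version and maxLik::maxLik() with method = "BHHH" can use the vector version directly. The normal model below is only an illustrative stand-in:

library(maxLik)

set.seed(1)
y <- rnorm(148, mean = 2, sd = 0.5)          # stand-in for the consumption data

loglik_i <- function(theta) {                # one contribution per observation
  dnorm(y, mean = theta[1], sd = exp(theta[2]), log = TRUE)
}

# BFGS maximizes the summed log-likelihood (optim minimizes, hence the minus)
fit_bfgs <- optim(c(0, 0), function(theta) -sum(loglik_i(theta)), method = "BFGS")

# BHHH needs the observation-level contributions
fit_bhhh <- maxLik(loglik_i, start = c(mu = 0, logsigma = 0), method = "BHHH")
summary(fit_bhhh)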
2016 Jul 09
2
Red Neuronal complicada categorías
Hello,
This is one way to do it...
Note that the first thing I modified is the file "x.csv", replacing the spaces
in the names with "_". I have also removed the accents and the ñ
characters...
I used the RSNNS package and the "mlp()" function to fit the network.
#-------------------------------------------
> x <- read.csv("x.csv",
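For context, a minimal sketch of the kind of call being described, assuming the RSNNS package and a data frame whose last column holds the class labels (the column layout of x.csv is not shown, so this is illustrative):

library(RSNNS)

x <- read.csv("x.csv", stringsAsFactors = FALSE)

inputs  <- x[, -ncol(x)]                    # predictor columns
targets <- decodeClassLabels(x[, ncol(x)])  # one-hot encode the class column

split <- splitForTrainingAndTest(inputs, targets, ratio = 0.2)

fit <- mlp(split$inputsTrain, split$targetsTrain,
           size = 5, maxit = 200,
           inputsTest = split$inputsTest, targetsTest = split$targetsTest)

pred <- predict(fit, split$inputsTest)
confusionMatrix(split$targetsTest, pred)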
2011 Mar 12
2
Identifying unique pairs
Dear R helpers
Suppose I have a data frame as given below
mydat = data.frame(x = c(1,1,1, 2, 2, 2, 2, 2, 5, 5, 6), y = c(10, 10, 10, 8, 8, 8, 7, 7, 2, 2, 4))
mydat
   x  y
1  1 10
2  1 10
3  1 10
4  2  8
5  2  8
6  2  8
7  2  7
8  2  7
9  5  2
10 5  2
11 6  4
unique(mydat$x) will give me 1,
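The message is truncated here, but judging from the subject, the unique (x, y) pairs (and their counts) can be read off the data frame directly, e.g.:

unique(mydat)                        # the distinct (x, y) pairs
mydat[!duplicated(mydat), ]          # equivalent, keeping the first occurrence
# counts per pair:
aggregate(list(n = rep(1L, nrow(mydat))), by = mydat[c("x", "y")], FUN = sum)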
2012 Dec 24
1
How to do it through 1 step?
A data set (dat) has 2 variables, x and a, and 100 rows.
I want to add 2 variables and call the new data set dat1:
var1: f = a/median(a)
var2: x_new = x*f
My solution:
dat1<-transform(dat,f = a/median(a),x_new = x*f)
But I get an error which says that "f" does not exist, since dat has no
variable called "f".
So I have to do it in 2 steps:
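Two ways to do it in a single step (a sketch): base R's within() evaluates its assignments sequentially, and dplyr's mutate() also lets later columns refer to earlier ones.

# base R: f can be used immediately after it is created
dat1 <- within(dat, {
  f     <- a / median(a)
  x_new <- x * f
})

# dplyr: the same sequential reference works inside mutate()
library(dplyr)
dat1 <- mutate(dat, f = a / median(a), x_new = x * f)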
2010 Apr 22
1
Convert character string to top levels + NAN
Dear all,
I have several character variables, each with a high number of distinct
levels; unique(x) gives between 100 and 200 values.
This creates problems as I would like to use them as predictors in a coxph
model.
I therefore would like to convert each of these strings to a new string
(x_new).
x_new should be equal to x for the top n categories (i.e. the top n levels
with the highest
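The message breaks off here, but a common base-R approach to what is described (keep the top n most frequent values, lump everything else together) looks roughly like this; the lumped level name "other" and n = 10 are arbitrary choices, and forcats::fct_lump_n() offers the same idea ready-made:

collapse_levels <- function(x, n = 10, other = "other") {
  top <- names(sort(table(x), decreasing = TRUE))[seq_len(n)]
  factor(ifelse(x %in% top, x, other))
}

x_new <- collapse_levels(x, n = 10)
table(x_new)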
2009 Jul 12
0
Plotting problem [lars()/elasticnet()]
Dear all,
I am using the modified LARS algorithm (ref: The Adaptive Lasso and Its Oracle
Properties, Zou 2006) for adaptive-lasso-penalized linear regression.
1. w(j) <- |beta_ols(j)|^(-gamma), gamma > 0 and j = 1,...,p
2. define x_new(j) <- x(j) * w(j)
3. apply LARS to solve the modified lasso problem
out.adalasso <- lars(X_new,y,type="lasso") or enet(X_new,
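For reference, a compact sketch of the rescaling recipe as given in Zou (2006), with object names as in the post (note that in that parameterization the design columns are divided by the weights before running lars(), and the resulting coefficients are divided by them again afterwards):

library(lars)

gamma    <- 1
beta_ols <- coef(lm(y ~ X))[-1]                    # OLS coefficients (no intercept)
w        <- 1 / abs(beta_ols)^gamma                # adaptive weights
X_new    <- scale(X, center = FALSE, scale = w)    # scaled design: x(j) / w(j)

out.adalasso <- lars(X_new, y, type = "lasso")
beta_scaled  <- coef(out.adalasso)                 # path on the rescaled design
beta_ada     <- sweep(beta_scaled, 2, w, "/")      # back-transform: beta*(j) / w(j)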
2010 Aug 07
3
plot the dependent variable against one of the predictors with other predictors as constant
Hi, folks,
Happy work in weekends >_<
My question is how to plot the dependent variable against one of the
predictors while holding the other predictors constant. Not for the original
data, but after prediction; that is, y is the predicted value of the dependent
variable. The constant value of the other predictors may be the average or
some other fixed value.
#######
y=1:10
x=10:1
z=2:11
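The example breaks off here; below is a sketch of the standard recipe. Because y = 1:10, x = 10:1, z = 2:11 are exactly collinear (z = 12 - x), simulated data are used instead: fit the model, build a newdata grid in which only one predictor varies while the others are fixed, and plot the predictions.

set.seed(1)
x <- runif(50); z <- runif(50)
y <- 1 + 2 * x - 3 * z + rnorm(50, sd = 0.3)
fit <- lm(y ~ x + z)

# let x vary over its observed range, hold z at its mean
newdat <- data.frame(x = seq(min(x), max(x), length.out = 100), z = mean(z))
newdat$y_hat <- predict(fit, newdata = newdat)

plot(newdat$x, newdat$y_hat, type = "l",
     xlab = "x", ylab = "predicted y (z held at its mean)")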
2012 Apr 11
1
inference for customized regression in R?
Hi all,
Are there functions in R that could help me do the following?
We have a special type of regression which is called Geometric Mean
Regression.
We have done some search and found the following:
https://stat.ethz.ch/pipermail/r-help/2011-July/285022.html
The question is: how to do the statistical inference on GMR results?
More specifically, we are looking for the prediction interval:
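Not an answer from the thread, but for orientation: the GMR (also called reduced major axis) fit has a closed form, and interval estimates for its coefficients can be obtained by bootstrapping; this sketch stops short of the prediction-interval question itself.

gmr_fit <- function(x, y) {
  b <- sign(cor(x, y)) * sd(y) / sd(x)   # GMR / RMA slope
  a <- mean(y) - b * mean(x)             # line passes through the means
  c(intercept = a, slope = b)
}

gmr_boot <- function(x, y, B = 2000) {   # bootstrap CIs for the two coefficients
  est <- replicate(B, {
    i <- sample(seq_along(x), replace = TRUE)
    gmr_fit(x[i], y[i])
  })
  apply(est, 1, quantile, probs = c(0.025, 0.975))
}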
2009 Aug 12
3
Obtaining the value of x at a given value of y in a smooth.spline object
I have some data fit to a smooth.spline object as follows: (x=vector of data
for the predictor variable, y=vector of data for the response variable)
fit <- smooth.spline(x,y)
Now, given a fitted value y_new, I want to be able to find out which
value x_new yields this fitted value. How do I do so?
(This problem is the inverse of the predict.smooth.spline function, which
takes x_new as input
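One common approach (a sketch, assuming the fit object from above and that the spline is monotone over the search interval): solve predict(fit, x)$y = y_new numerically with uniroot(). The value y_new = 5 is only illustrative.

invert_spline <- function(fit, y_new, interval) {
  # find x0 with predict(fit, x0)$y == y_new; needs a sign change over 'interval'
  uniroot(function(x0) predict(fit, x0)$y - y_new, interval = interval)$root
}

x_new <- invert_spline(fit, y_new = 5, interval = range(x))
# if the spline is not monotone there may be several such x; search sub-intervals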
2012 Dec 18
2
Set a zero at minimum row by group
Dear R Helpers,
I'm struggling with a data preparation problem. I feel that it is quite an
easy task, but I can't get it done. I hope you can help me with that.
I have a data frame looking like this:
ID <- c(1,1,1,2,2,3,3,3,3)
T <- c(1,2,3,1,4,3,5,6,8)
x <- rep(1,9)
df <- data.frame(ID,T,x)
> df
  ID T x
   1 1 1
   1 2 1
   1 3 1
   2 1 1
   2 4 1
   3 3 1
   3 5 1
   3 6 1
   3 8 1
I want to
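The request is cut off, but going by the subject line, one base-R way to set x to 0 in the row with the smallest T within each ID is:

df$x[df$T == ave(df$T, df$ID, FUN = min)] <- 0   # ties all get set to 0
df

# the same with dplyr:
# df %>% group_by(ID) %>% mutate(x = ifelse(T == min(T), 0, x)) %>% ungroup()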
2009 Feb 15
0
feature extraction on time series data
Hi,
This is a practical question, and I am sure there are many statisticians who
can give me a hand.
I have 500 time series (500 rows); each row contains 100 points,
i.e., on each row I have X1, X2, ..., X100. I am trying to reduce the
dimension of this input because the data at the end of each row do not
carry much meaning for the project I am doing.
I used cubic splines on ea.
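The message breaks off here; a sketch of one way to carry the cubic-spline idea through is to regress each row on a low-dimensional cubic B-spline basis and keep the basis coefficients as the reduced features (the basis size, 8, is an arbitrary choice, and random numbers stand in for the real 500 x 100 matrix):

library(splines)

set.seed(1)
X <- matrix(rnorm(500 * 100), nrow = 500)   # stand-in for the 500 x 100 data

grid_t <- seq(0, 1, length.out = 100)       # common "time" grid for the columns
basis  <- bs(grid_t, df = 8)                # cubic B-spline basis, 8 columns

# least-squares coefficients of each row on the basis: a 500 x 8 feature matrix
features <- t(apply(X, 1, function(row) coef(lm(row ~ basis))[-1]))
dim(features)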
2006 Sep 18
1
non linear modelling with nls: starting values
Hi,
I'm trying to fit the following model to data using 'nls':
y = alpha_1 * beta_1 * exp(-beta_1 * x) +
alpha_2 * beta_2 * exp(-beta_2 * x)
and the call I've been using is:
nls(y ~ alpha_1 * beta_1 * exp(-beta_1 * x) +
alpha_2 * beta_2 * exp(-beta_2 * x),
start=list(alpha_1=4, alpha_2=2, beta_1=3.5, beta_2=2.5),
trace=TRUE, control=nls.control(maxiter =
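The call above is truncated. A sketch of one way to get workable starting values for this double-exponential model: for fixed rates the model is linear in the amplitudes, so grid-search the rates, solve for the amplitudes with lm(), and pass the best combination to nls(). Simulated data stand in for the poster's:

set.seed(1)
x <- seq(0, 3, length.out = 200)
y <- 4 * 3.5 * exp(-3.5 * x) + 2 * 2.5 * exp(-2.5 * x) + rnorm(200, sd = 0.1)

rate_grid <- expand.grid(beta_1 = seq(0.5, 6, 0.5), beta_2 = seq(0.5, 6, 0.5))
rate_grid <- subset(rate_grid, beta_1 > beta_2)     # drop swapped duplicates

rss <- apply(rate_grid, 1, function(b) {
  f <- lm(y ~ 0 + exp(-b[1] * x) + exp(-b[2] * x))  # amplitudes enter linearly
  sum(resid(f)^2)
})
best <- rate_grid[which.min(rss), ]

f0 <- lm(y ~ 0 + I(exp(-best$beta_1 * x)) + I(exp(-best$beta_2 * x)))
start <- list(alpha_1 = unname(coef(f0)[1]) / best$beta_1,
              alpha_2 = unname(coef(f0)[2]) / best$beta_2,
              beta_1  = best$beta_1, beta_2 = best$beta_2)

fit <- nls(y ~ alpha_1 * beta_1 * exp(-beta_1 * x) +
               alpha_2 * beta_2 * exp(-beta_2 * x),
           start = start, trace = TRUE)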
2016 Jul 07
2
Red Neuronal complicada categorías
Dear all,
I have a question about neural networks. There are several articles such as the following (the last one currently has an error), but my question goes in a somewhat different direction.
http://www.r-bloggers.com/build-your-own-neural-network-classifier-in-r/
http://www.r-bloggers.com/classification-using-neural-net-in-r/
Basically, one can compute a value, for example turn 2.4 degrees to the right, then 1
2011 Sep 05
3
function censReg in panel data setting
Hello all,
I have a problem estimating a random-effects model using the censReg function.
A small part of the code:
UpC <- censReg(Power ~ Windspeed, left = -Inf, right =
2000,data=PData_In,method="BHHH",nGHQ = 4)
Error in maxNRCompute(fn = logLikAttr, fnOrig = fn, gradOrig = grad,
hessOrig = hess, :
NA in the initial gradient
...then I tried to set starting values myself and here is
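The message is truncated. The "NA in the initial gradient" error often goes away with better starting values or rescaled regressors; a hedged sketch of the kind of thing one might try (Power, Windspeed and PData_In are from the post, and the two appended log-standard-deviation values are rough guesses whose order and names should be checked against ?censReg):

library(censReg)

ols <- lm(Power ~ Windspeed, data = PData_In)       # rough coefficient guesses

# regression coefficients followed by the two (log) variance parameters of the
# random-effects model; verify the expected ordering in the censReg documentation
start_vals <- c(coef(ols), log(sd(resid(ols))), log(sd(resid(ols))))

UpC <- censReg(Power ~ Windspeed, left = -Inf, right = 2000,
               data = PData_In, method = "BHHH", nGHQ = 4,
               start = start_vals)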
2009 Jun 16
1
turning off escape sequences for a string
Hello,
I would like to create a matrix with one of the columns named
$\delta$. I have also created columns $\beta_1$, $\beta_2$, etc.
However, it seems that \d is treated as an escape sequence and gets
automatically removed. (I am using these names so that they come out
right in xtable -> LaTeX.)
colnames(simpleReg.mat) <- c("$\beta_1$","$SE(\beta_1)$", "$\beta_2$",
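The line above is truncated; the usual fix is to escape the backslashes inside the R strings and to tell xtable's print method not to sanitize the names. The full set of column names below is therefore illustrative:

library(xtable)

colnames(simpleReg.mat) <- c("$\\beta_1$", "$SE(\\beta_1)$",
                             "$\\beta_2$", "$SE(\\beta_2)$", "$\\delta$")

print(xtable(simpleReg.mat),
      sanitize.colnames.function = identity)   # keep the LaTeX markup as-is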
2011 May 03
3
help with the maxBHHH routine
Hello R community,
I have been using R's maximum likelihood routines for the different
methods (NR, BFGS, etc.).
I have figured out how to use all of them except the maxBHHH function. This
one is different from the others, as it requires an observation-level
gradient.
I am using the following syntax:
maxBHHH(logLik,grad=nuGradient,finalHessian="BHHH",start=prm,iterlim=2)
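For maxBHHH() both pieces should be observation-level: logLik returns a length-n vector of contributions and grad an n x k matrix whose i-th row is the gradient of observation i. The poster's logLik and nuGradient are not shown, so the self-contained sketch below uses a simple normal model instead:

library(maxLik)

set.seed(1)
y <- rnorm(200, mean = 1.5, sd = 2)

logLik_i <- function(theta) {              # length-n vector of contributions
  dnorm(y, mean = theta[1], sd = exp(theta[2]), log = TRUE)
}

grad_i <- function(theta) {                # n x 2 matrix of observation scores
  mu <- theta[1]; s <- exp(theta[2])
  cbind(d_mu = (y - mu) / s^2,
        d_logsigma = (y - mu)^2 / s^2 - 1)
}

prm <- c(mu = 0, logsigma = 0)
fit <- maxBHHH(logLik_i, grad = grad_i, finalHessian = "BHHH",
               start = prm, iterlim = 100)
summary(fit)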
2003 Jul 10
0
FW: Maximum Likelihood Estimation and Optimisation
Have a look at ?optim. I don't think it has the BHHH algorithm as an
option, though.
===========================================
David Barron
Jesus College
University of Oxford
-----Original Message-----
From: r-help-bounces at stat.math.ethz.ch
[mailto:r-help-bounces at stat.math.ethz.ch]On Behalf Of Harold Doran
Sent: 10 July 2003 15:43
To: Fohr, Marc [AM]; R-help at stat.math.ethz.ch
2003 Mar 12
1
problems with numerical optimisation
Dear list,
this is not specifically an R question, but perhaps someone can help.
I am running a maximum likelihood estimation (a competing-risk duration
model with unobserved heterogeneity) on 30 different datasets. The
problem is that on 2 of the datasets the model does not converge. I am
interested in whether there are any methods, based on the gradients or (an
approximation of) the Hessian, which help to
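The message is truncated, but a common diagnostic along the lines it describes is to inspect the gradient and the eigenvalues of the Hessian at the point where the optimizer stopped, e.g. with the numDeriv package; the exponential toy model below stands in for the actual likelihood and estimates:

library(numDeriv)

set.seed(1)
y <- rexp(100, rate = 2)
negLogLik <- function(theta) -sum(dexp(y, rate = exp(theta), log = TRUE))
theta_hat <- optim(0, negLogLik, method = "BFGS")$par

g <- grad(negLogLik, theta_hat)       # should be close to zero at a true optimum
H <- hessian(negLogLik, theta_hat)

max(abs(g))                           # large values: optimizer stopped short
eigen(H, symmetric = TRUE)$values     # non-positive values: not a local minimum
kappa(H)                              # huge condition number: near-flat, weakly
                                      # identified directions; rescale or reparameterize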