Displaying 20 results from an estimated 10000 matches similar to: "nlminb() - how do I constrain the parameter vector properly?"
2011 Aug 18
3
Error message: object of type 'closure' is not subsettable
Dear R-users
I need to calibrate kappa, rho, eta, theta, and v0 in the code below. However, when I run it, I get the error shown at the end:
y <- function(kappahat, rhohat, etahat, thetahat, v0hat) {
  sum(difference(k, t, S0, X, r, implvol, q,
      kappahat, rhohat, etahat, thetahat, v0hat)^2)
}
> nlminb(start = list(kappa, rho, eta, theta, v0), objective = y,
>        lower = lb, upper = ub)
Error in dots[[1L]][[1L]] :
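The usual fix here is twofold: nlminb() wants start to be a numeric vector (not a list), and it passes that entire vector as the single first argument of objective, so the objective must take one vector and index into it. A minimal sketch of the corrected pattern, with placeholder starting values, bounds, and objective (the original difference() helper and data are not shown above):

# Placeholder objective standing in for the sum of squared differences;
# par is one numeric vector: (kappa, rho, eta, theta, v0)
y <- function(par) {
  sum((par - c(2, -0.5, 0.3, 0.04, 0.04))^2)
}

start <- c(kappa = 2.5, rho = -0.4, eta = 0.4, theta = 0.05, v0 = 0.05)
lb <- c(1e-8, -0.999, 1e-8, 1e-8, 1e-8)   # assumed lower bounds
ub <- c(10,    0.999, 5,    1,    1)      # assumed upper bounds

nlminb(start = start, objective = y, lower = lb, upper = ub)$par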
2008 Mar 22
1
Vectorization Problem
I have code for the bivariate Gaussian copula density. It is written with
for-loops and it works, but I wonder whether there is a way to vectorize
the function.
I don't see how outer() can be used in this case, but maybe mapply() or
Vectorize() could help in some way? Could anyone help me, please?
## Density of Gauss Copula
rho <- 0.5                        # correlation
R <- rbind(c(1, rho), c(rho, 1))  # covariance (correlation) matrix
id <-
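One way to vectorize is to skip the loops entirely: qnorm(), dnorm() and mvtnorm's dmvnorm() all accept vectors or matrices, so the copula density can be evaluated over a whole grid at once with outer(). A sketch using the standard Gaussian-copula density formula (the truncated loop code above is reimplemented from scratch):

library(mvtnorm)  # for dmvnorm

rho <- 0.5
R <- rbind(c(1, rho), c(rho, 1))

## Gaussian copula density:
## c(u, v) = phi_R(qnorm(u), qnorm(v)) / (dnorm(qnorm(u)) * dnorm(qnorm(v)))
dgausscop <- function(u, v, R) {
  q <- cbind(qnorm(u), qnorm(v))   # one row of quantiles per (u, v) pair
  dmvnorm(q, sigma = R) / (dnorm(q[, 1]) * dnorm(q[, 2]))
}

## Evaluate on a whole grid without loops
u <- v <- seq(0.01, 0.99, length.out = 50)
z <- outer(u, v, dgausscop, R = R)
contour(u, v, z, main = "Gaussian copula density")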
2010 Sep 29
1
nlminb and optim
I am using both nlminb and optim to get MLEs from a likelihood function I have developed. As far as I know, the model has not been used in this way before, so I am struggling a bit to unit-test my code since I don't have another data set to compare this kind of estimation against.
The likelihood I have is (in tex below)
\begin{equation}
\label{eqn:marginal}
L(\beta) = \prod_{s=1}^N \int
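One common sanity check when a likelihood is new: simulate data with known parameters, then confirm that both optimizers recover them from the same negative log-likelihood. A sketch with a toy normal likelihood standing in, since the marginal likelihood above is cut off:

set.seed(1)
x <- rnorm(200, mean = 1.5, sd = 2)   # simulated data with known truth

## negative log-likelihood; sd parameterized on the log scale to stay positive
nll <- function(par) -sum(dnorm(x, mean = par[1], sd = exp(par[2]), log = TRUE))

f1 <- nlminb(start = c(0, 0), objective = nll)
f2 <- optim(par = c(0, 0), fn = nll, method = "BFGS")

rbind(nlminb = c(f1$par[1], exp(f1$par[2])),
      optim  = c(f2$par[1], exp(f2$par[2])))   # both should be near (1.5, 2)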
2010 Mar 13
2
dmvnorm masked by emdbook
I am using curve3d in the emdbook package to graph a Gaussian copula density
function generated via the copula package. Unfortunately, it appears that
emdbook masks dmvnorm from the mvtnorm package in a way that prevents
copula from evaluating the Gaussian copula density. (Sounds very confusing!)
For example,
> library(copula)
> f <- function(x, y) dcopula(normalCopula(0), c(x, y))
>
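The usual workaround for this kind of masking is to bypass the search path with the :: operator, or to detach the package that was attached last. A minimal sketch:

library(mvtnorm)

## Explicit namespace access uses mvtnorm's dmvnorm no matter what is attached
mvtnorm::dmvnorm(c(0, 0), sigma = diag(2))

## Alternatively, detach the masking package so search() finds mvtnorm first:
## detach("package:emdbook")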
2010 Jun 23
3
integrate dmvtnorm
Hello, everyone,
I have a question about integrating the product of two densities.
Here is some sample code; note that the mean of the first density is a
function of the other random variable, which is integrated out.
##
f <- function(x) {
  dmvnorm(c(0.6, 0.8), mean = c(0.75, 0.75 / x)) * dnorm(x, mean = 0.6, sd = 0.15)
}
integrate(f, lower = -Inf, upper = Inf)
## error message
Error in dmvnorm(c(0.6, 0.8), mean = c(0.75,
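integrate() hands its integrand a whole vector of x values, but f() above is only valid for scalar x (the mean must have length 2), which is one reason the call fails; wrapping the function in Vectorize() fixes that. Note also that the integrand is undefined at x = 0 because of the 0.75/x term, so integrating from just above zero (or splitting the range at zero) may be needed. A sketch:

library(mvtnorm)

f <- function(x) {
  dmvnorm(c(0.6, 0.8), mean = c(0.75, 0.75 / x)) * dnorm(x, mean = 0.6, sd = 0.15)
}

## Vectorize() applies the scalar-only f element-wise, as integrate() requires;
## the lower limit avoids the singularity at x = 0
integrate(Vectorize(f), lower = 1e-8, upper = Inf)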
2001 Aug 30
1
MCMC coding problem
Dear All,
I am trying to convert some S-PLUS code of mine that runs MCMC into R code.
The program works in S-PLUS, but runs slowly.
I have managed to source the program into R. R recognizes that the program
is there; for example, it will display the code when I type the function
name at the prompt. However, the program will not run. When I try to run
the program, I get the following error
2009 Nov 18
2
Error "system is computationally singular" by using function dmvnorm
Dear R users,
I am trying to use the function dmvnorm(x, mean, sigma, log=FALSE)
from the R package mvtnorm to calculate the density of x under the
multivariate normal distribution with the given mean vector and
covariance matrix sigma.
I get the following error:
Error in solve.default(cov, ...) :
system is computationally singular: reciprocal condition
number = 1.81093e-19
What could be the reason for it?
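A reciprocal condition number around 1e-19 means sigma is numerically singular, which typically happens when one variable is a linear combination of the others, or when the covariance was estimated from fewer observations than dimensions. A sketch of reproducing the problem and then working around it with a small diagonal ridge (the ridge size 1e-6 is an arbitrary choice):

library(mvtnorm)

## A covariance estimated from fewer observations than dimensions is singular:
set.seed(1)
X <- matrix(rnorm(5 * 10), nrow = 5)   # 5 observations, 10 variables
S <- cov(X)
kappa(S)                               # huge condition number

## dmvnorm(rep(0, 10), sigma = S)      # fails: "computationally singular"

## One workaround: add a small ridge to the diagonal
S_reg <- S + diag(1e-6, nrow(S))
dmvnorm(rep(0, 10), sigma = S_reg)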
2008 Oct 01
2
Bivariate normal
Package mvtnorm provides dmvnorm and pmvnorm, which compute the joint
density at (x, y) and Pr(X < x, Y < y) for a bivariate normal.
Are there functions that would compute the mixed quantity Pr(X < x, Y = y)?
I'm currently using integrate() with dmvnorm, but it is too slow.
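If Pr(X < x, Y = y) is read as the density of Y times the conditional probability Pr(X < x | Y = y), no numerical integration is needed for a bivariate normal, because X | Y = y is itself normal. A sketch with the five parameters passed explicitly:

## Pr(X < x, Y = y) as f_Y(y) * Pr(X < x | Y = y); for a bivariate normal,
## X | Y = y ~ N(mu1 + rho*s1/s2*(y - mu2), s1^2*(1 - rho^2))
pxdy <- function(x, y, mu = c(0, 0), sd = c(1, 1), rho = 0) {
  cond_mean <- mu[1] + rho * sd[1] / sd[2] * (y - mu[2])
  cond_sd   <- sd[1] * sqrt(1 - rho^2)
  dnorm(y, mu[2], sd[2]) * pnorm(x, cond_mean, cond_sd)
}

pxdy(0.5, -0.2, rho = 0.6)   # instant; no integrate() needed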
2012 Feb 15
3
(no subject)
Hello,
Could someone tell me how to make a persp()-style plot of the
density of a standardized bivariate normal distribution with
correlation 0.5?
Thanks
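A minimal sketch of such a plot, using dmvnorm() from the mvtnorm package to evaluate the density on a grid (grid range and persp() viewing angles are arbitrary choices):

library(mvtnorm)

rho <- 0.5
Sigma <- rbind(c(1, rho), c(rho, 1))

x <- y <- seq(-3, 3, length.out = 60)
z <- outer(x, y, function(x, y) dmvnorm(cbind(x, y), sigma = Sigma))

persp(x, y, z, theta = 30, phi = 25, expand = 0.6, col = "lightblue",
      xlab = "x", ylab = "y", zlab = "density")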
2010 Dec 07
1
Using nlminb for maximum likelihood estimation
I'm trying to estimate the parameters for GARCH(1,1) process.
Here's my code:
loglikelihood <- function(theta) {
  h <- (r[1] - theta[1])^2
  p <- 0
  for (t in 2:length(r)) {
    h <- c(h, theta[2] + theta[3] * (r[t - 1] - theta[1])^2 + theta[4] * h[t - 1])
    p <- c(p, dnorm(r[t], theta[1], sqrt(h[t]), log = TRUE))
  }
  -sum(p)
}
Then I use nlminb to minimize the function loglikelihood:
nlminb(
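The call is cut off above; a hedged completion might look like the following, where the starting values and box constraints are illustrative (theta = (mu, omega, alpha, beta), with the variance parameters kept positive). Preallocating h and p instead of growing them with c() would also speed the likelihood up considerably.

set.seed(1)
r <- rnorm(500, sd = 0.01)   # placeholder return series; use your own data

fit <- nlminb(start = c(mean(r), 1e-5, 0.1, 0.8),   # (mu, omega, alpha, beta)
              objective = loglikelihood,
              lower = c(-Inf, 1e-10, 1e-10, 1e-10),
              upper = c( Inf,  Inf,  1,     1))
fit$par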
2011 Feb 09
1
Plot bivariate density with densities margins
Dear R users,
I would like to plot a bivariate density surface with its marginal
densities on the sides of the 3D box, just like in the picture I attach.
I tried to find information about how to do this but did not find anything.
Does anyone know how to do it?
Thanks in advance,
Eduardo.
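One way to do this in base graphics (a sketch: persp() invisibly returns the projection matrix, and trans3d() maps 3D coordinates onto the existing plot so the marginal curves can be drawn on the box walls):

library(mvtnorm)

x <- y <- seq(-3, 3, length.out = 50)
z <- outer(x, y, function(x, y) dmvnorm(cbind(x, y)))

## zlim leaves room for the taller univariate marginal curves
pmat <- persp(x, y, z, theta = 30, phi = 25, expand = 0.6, col = "lightblue",
              zlim = c(0, dnorm(0)), xlab = "x", ylab = "y", zlab = "density")

## marginal of X on the back wall, marginal of Y on the side wall
lines(trans3d(x, max(y), dnorm(x), pmat), col = "red",  lwd = 2)
lines(trans3d(min(x), y, dnorm(y), pmat), col = "blue", lwd = 2)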
2008 Aug 01
2
contour lines in windows device but neither in pdf nor in postscript
library(mvtnorm)
x <- seq(-4, 4, length = 201)
xy <- expand.grid(x, x)
sigma <- (diag(c(1, 1)) + 1) / 2   # unit variances, correlation 0.5
d2 <- matrix(dmvnorm(xy, sigma = sigma), 201)
xsamp <- rmvnorm(200, sigma = sigma)
contour(x,x,d2)
points(xsamp,col=3,pch=16)
pdf("pdftry.pdf")
contour(x,x,d2)
points(xsamp,col=3,pch=16)
dev.off()
postscript("pstry.ps")
contour(x,x,d2)
points(xsamp,col=3,pch=16)
dev.off()
# I can see
2011 Jun 25
2
Multivariate normal density in C for R
Does anyone know of a package that uses C code to calculate a multivariate
normal density?
My goal is to find a faster way to calculate MVN densities while avoiding R
loops and apply functions, for example when X and mu are N x K matrices
rather than vectors; in this particular case, speed really matters. I would
like to be able to use .C or .Call to pass X, mu, Sigma, and N to a C program and
have
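Before dropping to C, it may be worth knowing that the whole computation can be done in vectorized R with one Cholesky factorization, which is often fast enough. A sketch for N points in the rows of X with matching mean rows mu and a common Sigma (dmvnorm_rows is a made-up name):

## Log-density for N points (rows of X) with N matching mean rows (mu) and
## one common K x K Sigma, via a single Cholesky factorization
dmvnorm_rows <- function(X, mu, Sigma, log = FALSE) {
  K <- ncol(X)
  U <- chol(Sigma)                                # Sigma = t(U) %*% U
  Z <- backsolve(U, t(X - mu), transpose = TRUE)  # solves t(U) %*% Z = t(X - mu)
  logdet <- 2 * sum(log(diag(U)))
  logdens <- -0.5 * (K * log(2 * pi) + logdet + colSums(Z^2))
  if (log) logdens else exp(logdens)
}

## Quick check against mvtnorm for a couple of rows
X  <- rbind(c(0.1, 0.2), c(-1, 0.5))
mu <- rbind(c(0, 0),     c(-1, 0))
S  <- rbind(c(1, 0.3), c(0.3, 1))
dmvnorm_rows(X, mu, S)
mvtnorm::dmvnorm(X - mu, sigma = S)   # same values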
2008 Sep 19
2
Error: function cannot be evaluated at initial parameters
I get an error in a simple optimization problem. Does anyone know what
causes it?
lambda1 <- -9
lambda2 <- -6
L <- function(a) {
  s2i2f <- exp(-lambda1 * (250^a) - lambda2 * (275^a - 250^a)) -
           exp(-lambda1 * (250^a) - lambda2 * (300^a - 250^a))
  logl <- log(s2i2f)
  return(-logl)
}
optim(1, L)
Error in optim(1, L) : function cannot be evaluated at initial parameters
Thank you in advance
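A quick diagnosis: at a = 1 the exponent -lambda1 * 250 equals 2250, so both exp() terms overflow to Inf and their difference is NaN, which is exactly what makes optim() refuse the starting value. Note also that with both lambdas negative the difference inside log() would be negative even in exact arithmetic. A sketch (the positive-rate re-run below is an assumption about the intended sign convention for hazard rates):

## using lambda1, lambda2 and L() exactly as defined above
L(1)               # NaN, which is why optim() stops immediately
-lambda1 * 250^1   # 2250: exp(2250) is Inf, and Inf - Inf is NaN

## With positive rates the objective is finite for small a, and a
## one-parameter problem suits optimize() well:
lambda1 <- 9
lambda2 <- 6
L(0.1)             # finite
optimize(L, interval = c(0.05, 0.15))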
--
View this
2012 Apr 25
2
comparison of bivariate normal distributions
sorry for cross-posting
Dear all,
I have two (in fact several) bivariate distributions with known means and variance-covariance structures (hence known density functions) that I would like to compare, in order to obtain a measure of overlap that tells me something about "how different" these distributions are (analogous to a t-statistic for univariate distributions).
To visualize what I mean, here is a little
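One closed-form option (an assumption about what kind of measure is wanted): the Bhattacharyya coefficient between two Gaussians, which is 1 for identical distributions and tends to 0 as they separate. The means and covariances below are placeholders:

## Bhattacharyya coefficient for two multivariate normals
bhattacharyya <- function(mu1, S1, mu2, S2) {
  Sbar <- (S1 + S2) / 2
  d <- mu1 - mu2
  DB <- drop(0.125 * t(d) %*% solve(Sbar, d)) +
        0.5 * log(det(Sbar) / sqrt(det(S1) * det(S2)))
  exp(-DB)   # 1 = identical, -> 0 = no overlap
}

mu1 <- c(0, 0); S1 <- diag(2)
mu2 <- c(1, 1); S2 <- rbind(c(1, 0.5), c(0.5, 1))
bhattacharyya(mu1, S1, mu2, S2)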
2011 Aug 16
2
Calibrating the risk free interest rate using nlminb
Dear R-users
I am trying to find the value of the risk-free rate that minimizes the
difference between the Black-Scholes call value (computed with implied
volatilities) and the market price of the call (taken to be the average of
the bid and ask prices).
Here is my data:
http://r.789695.n4.nabble.com/file/n3747509/S%26P_500_calls%2C_jan-jun_2010.csv
S0 <- 1136.03
q <- 0.02145608
S0
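A hedged sketch of the minimization step itself: bs_call() is the standard Black-Scholes price with a continuous dividend yield, and K, tau, iv and mkt are placeholders for the strike, maturity, implied-volatility and market-price columns of the linked CSV:

bs_call <- function(S0, K, tau, r, q, sigma) {
  d1 <- (log(S0 / K) + (r - q + sigma^2 / 2) * tau) / (sigma * sqrt(tau))
  d2 <- d1 - sigma * sqrt(tau)
  S0 * exp(-q * tau) * pnorm(d1) - K * exp(-r * tau) * pnorm(d2)
}

S0 <- 1136.03
q  <- 0.02145608
K   <- c(1100, 1150)   # placeholder data; replace with the CSV columns
tau <- c(0.5, 0.5)
iv  <- c(0.20, 0.18)
mkt <- c(75, 45)

obj <- function(r) sum((bs_call(S0, K, tau, r, q, iv) - mkt)^2)
nlminb(start = 0.01, objective = obj, lower = 0, upper = 0.2)$par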
2009 Nov 29
1
optim or nlminb for minimization, which to believe?
I have constructed the function mml2 (below) based on the likelihood function described in the minimal LaTeX pasted below for anyone who wants to look at it. This function finds parameter estimates for a basic Rasch (IRT) model. Without the gradient, both nlminb and optim return the correct parameter estimates and, in the case of optim, the correct standard
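When the two optimizers agree without the gradient but diverge once a hand-coded gradient is supplied, the gradient is the usual suspect; the numDeriv package can check it numerically. A sketch with a toy objective standing in for mml2's likelihood:

library(numDeriv)

nll <- function(p) sum((p - c(1, 2))^2)   # toy objective
gr  <- function(p) 2 * (p - c(1, 2))      # its analytic gradient

p0 <- c(0.3, 0.7)
max(abs(gr(p0) - grad(nll, p0)))          # ~0 if the gradient is right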
2006 Jul 20
2
Timing benefits of mapply() vs. for loop was: Wrap a loop inside a function
List:
Thank you for the replies to my post yesterday. Gabor and Phil gave
useful replies on how to improve the function by relying on mapply
rather than the explicit for loop. In general, I try to use the apply
family of functions rather than looping constructs such as for and
while as a matter of practice.
However, it seems that in this case the mapply function is slower (in
terms of CPU
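That matches expectations: mapply() still calls the R function once per element, so it is not inherently faster than a well-written for loop; only truly vectorized operations are. A quick timing sketch:

f <- function(a, b) a + b      # trivial per-element work
n <- 1e5
a <- runif(n); b <- runif(n)

system.time(r1 <- mapply(f, a, b))
system.time({
  r2 <- numeric(n)               # preallocated, so the loop is fair
  for (i in seq_len(n)) r2[i] <- f(a[i], b[i])
})
system.time(r3 <- a + b)         # truly vectorized code is the big win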
2005 May 31
1
Solved: linear regression example using MLE using optim()
Thanks to Gabor for setting me right. My code is below. I found it
useful for learning optim(), and you might find it similarly useful. I
would be most grateful for guidance on how to do this better. Should
one use optim() or stats4::mle?
set.seed(101) # For replicability
# Setup problem
X <- cbind(1, runif(100))
theta.true <- c(2,3,1)
y <- X
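The last line above is cut off; a hedged reconstruction of the remainder of the example (assuming y is simulated from the linear model with normal errors, so the true parameters (beta0, beta1, sigma) = (2, 3, 1) are known):

set.seed(101)
X <- cbind(1, runif(100))
theta.true <- c(2, 3, 1)
y <- X %*% theta.true[1:2] + rnorm(100, sd = theta.true[3])

nll <- function(theta) {
  sigma <- theta[3]
  if (sigma <= 0) return(Inf)    # keep the sd positive
  -sum(dnorm(y, mean = X %*% theta[1:2], sd = sigma, log = TRUE))
}

fit <- optim(c(0, 0, 1), nll, method = "BFGS", hessian = TRUE)
fit$par                          # close to (2, 3, 1)
sqrt(diag(solve(fit$hessian)))   # asymptotic standard errors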
2011 Aug 17
2
An example of very slow computation
This message is about a curious difference in timing between two ways of computing the
same function. One uses expm, so it is expected to be a bit slower, but "a bit" turned out
to be a factor of more than 1000. The code is below. We would be grateful if anyone could
point out any egregious bad practice in our code, or enlighten us as to why one approach
is so much slower than the other. The problem