Displaying 20 results from an estimated 9000 matches similar to: "gradient"
2011 Aug 29
3
gradient function in OPTIMX
Dear R users
When I use optim() with BFGS, I get a significant result without an error
message. However, when I use optimx() with BFGS (or spg), I get the
following error message.
----------------------------------------------------------------------------------------------------
> optimx(par=theta0, fn=obj.fy, gr=gr.fy, method="BFGS",
>
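A minimal sketch of the call pattern, with a toy objective standing in for the poster's obj.fy/gr.fy (which are not shown). optimx() tends to be stricter than optim() about a user-supplied gradient, so comparing gr against a numerical gradient at the start values is usually the quickest way to locate the source of such an error:

library(optimx)
library(numDeriv)
fn <- function(theta) sum((theta - c(1, 2))^2)    # toy objective
gr <- function(theta) 2 * (theta - c(1, 2))       # its analytic gradient
theta0 <- c(0, 0)
max(abs(gr(theta0) - grad(fn, theta0)))           # should be ~0 if gr is right
optimx(par = theta0, fn = fn, gr = gr, method = c("BFGS", "spg"))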
2009 Jun 22
1
The gradient of a multivariate normal density with respect to its parameters
Does anybody know of a function that implements the derivative (gradient) of
the multivariate normal density with respect to the *parameters*?
It's easy enough to implement myself, but I'd like to avoid reinventing the
wheel (with some bugs) if possible. Here's a simple example of the result
I'd like, using numerical differentiation:
library(mvtnorm)
library(numDeriv)
f=function(pars, xx, yy)
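The poster's f() is truncated above; a hedged sketch of the same kind of numerical result, packing the mean and the lower triangle of the covariance into one parameter vector (that packing is an illustrative assumption):

library(mvtnorm)
library(numDeriv)
f <- function(pars, xx) {
  mu <- pars[1:2]
  Sigma <- matrix(c(pars[3], pars[4], pars[4], pars[5]), 2, 2)
  dmvnorm(xx, mean = mu, sigma = Sigma, log = TRUE)
}
grad(f, c(0, 0, 1, 0.3, 1), xx = c(0.5, -0.2))   # numerical gradient w.r.t. the parameters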
2009 May 10
4
Partial Derivatives in R
Quick question:
Which function do you use to calculate partial derivatives from a model
equation?
I've looked at deriv(), but I think it gives derivatives, not partial
derivatives. Of course my equation isn't this simple, but as an example,
I'm looking for something that lets you control whether it's a partial or
not, such as:
somefunction(y~a+bx, with respect to x,
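For what it is worth, deriv() and D() do return partial derivatives with respect to whichever variable is named; a short sketch with the poster's toy model (written b*x, since bx would parse as a single symbol):

D(expression(a + b * x), "x")        # partial derivative w.r.t. x:  b
D(expression(a + b * x), "b")        # partial derivative w.r.t. b:  x
deriv(~ a + b * x, c("x", "b"))      # both at once, as generated code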
2006 Sep 30
1
Gradient problem in nlm
Hello everyone!
I am having some trouble supplying the gradient function to nlm in R for
windows version 2.2.1.
What follows is the R code I use:
fredcs39<-function(a1,b1,b2,x){return(a1+exp(b1+b2*x))}
loglikcs39<-function(theta,len){
value<-sum(mcs39[1:len]*fredcs39(theta[1],theta[2],theta[3],c(8:(7+len))) -
pcs39[1:len] * log(fredcs39(theta[1],theta[2],theta[3],c(8:(7+len)))))
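nlm() picks up an analytic gradient through the "gradient" attribute of the value returned by the objective. A hedged sketch of that pattern with a likelihood of the same shape (mcs39/pcs39 are the poster's data and are not reproduced here, so the nlm() call is left commented):

negll <- function(theta, x, y) {
  lam <- theta[1] + exp(theta[2] + theta[3] * x)   # same structure as fredcs39
  val <- sum(lam - y * log(lam))
  ## d val / d theta_j = sum_i (d lam_i / d theta_j) * (1 - y_i / lam_i)
  dlam <- cbind(1, exp(theta[2] + theta[3] * x), x * exp(theta[2] + theta[3] * x))
  attr(val, "gradient") <- colSums(dlam * (1 - y / lam))
  val
}
# nlm(negll, p = c(1, 0, 0.1), x = 8:27, y = counts)   # counts: your data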
2009 Aug 01
4
Likelihood Function for Multinomial Logistic Regression and its partial derivatives
Hi,
I would like to apply the L-BFGS optimization algorithm to compute the MLE
of a multilevel multinomial logistic regression.
The likelihood formula for this model has as one of the summands the formula
for computing the likelihood of an ordinary (single-level) multinomial logit
regression. So I would basically need the R implementation for this formula.
The L-BFGS algorithm also requires
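A hedged sketch of the single-level multinomial-logit negative log-likelihood the poster refers to, written so it can be handed straight to optim(method = "L-BFGS-B"); the design matrix X (n x p), the response y coded 1..K, and treating category 1 as the reference are illustrative assumptions, not the poster's model:

mlogit.nll <- function(beta, X, y, K) {
  B <- cbind(0, matrix(beta, ncol = K - 1))   # p x K coefficients, category 1 as reference
  eta <- X %*% B                              # n x K linear predictors
  logp <- eta - log(rowSums(exp(eta)))        # row-wise log-softmax
  -sum(logp[cbind(seq_along(y), y)])          # minus log-likelihood
}
# optim(rep(0, ncol(X) * (K - 1)), mlogit.nll, X = X, y = y, K = K,
#       method = "L-BFGS-B")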
2012 Aug 31
3
fitting lognormal censored data
Hi,
I am trying to get some estimators based on the lognormal distribution when we have left-, interval-, and right-censored data. Since there is no available package in R that can help me with this, I had to write my own code using the Newton-Raphson method, which requires the first and second derivatives of the log-likelihood, but my problem after running the code is that the estimators were too high. With this email, I provide
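A hedged sketch of the censored-lognormal log-likelihood (left, interval, and right censoring) maximized with optim() instead of a hand-written Newton-Raphson; the (L, R) interval coding below (L = 0 for left-censored, R = Inf for right-censored, L = R for exact observations) is an assumption for illustration:

negll.lnorm <- function(par, L, R) {
  mu <- par[1]; sigma <- exp(par[2])      # optimize log(sigma) to keep sigma > 0
  exact <- L == R
  ll <- numeric(length(L))
  ll[exact]  <- dlnorm(L[exact], mu, sigma, log = TRUE)
  ll[!exact] <- log(plnorm(R[!exact], mu, sigma) - plnorm(L[!exact], mu, sigma))
  -sum(ll)
}
# fit <- optim(c(0, 0), negll.lnorm, L = L, R = R, method = "BFGS", hessian = TRUE)

survival::survreg(Surv(...) ~ 1, dist = "lognormal") fits the same kind of model (with its own NA-based interval coding) without any hand-written derivatives and is a useful cross-check on estimates that come out implausibly high.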
2008 Mar 27
1
A faster way to compute finite-difference gradient of a scalar function of a large number of variables
Hi All,
I would like to compute the simple finite-difference approximation to the
gradient of a scalar function of a large number of variables (on the order
of 1000). Although a one-time computation using the following function
grad() is fast and simple enough, the overhead for repeated evaluation of
the gradient in iterative schemes is quite significant. I was wondering whether
there are
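The poster's grad() was cut off above; for reference, a minimal forward-difference version (one extra function evaluation per coordinate, so roughly 1000 evaluations per gradient for a 1000-variable problem, which is exactly the overhead being asked about):

fdgrad <- function(fn, x, eps = 1e-7, ...) {
  f0 <- fn(x, ...)
  g <- numeric(length(x))
  for (i in seq_along(x)) {
    xi <- x
    xi[i] <- xi[i] + eps
    g[i] <- (fn(xi, ...) - f0) / eps
  }
  g
}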
2018 Feb 09
1
Optim function returning always initial value for parameter to be optimized
Hello,
I'm trying to minimize the following problem:
You have a data frame with 2 columns.
data.input= data.frame(state1 = (1:500), state2 = (201:700) )
with data that partially overlap in terms of values.
I want to minimize the assessment error of each state by using this function:
err.th.scalar <- function(threshold, data){
state1 <- data$state1
state2 <- data$state2
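The function is truncated here, but a hedged sketch of the usual fix for a single free parameter: a bracketed one-dimensional search (optimize(), or optim(method = "Brent") with bounds) instead of the default Nelder-Mead, which often reports the start value unchanged when the objective is flat or step-like in the threshold. The toy objective below only stands in for err.th.scalar():

obj <- function(th, d) mean(abs(d$state1 - th)) + mean(abs(d$state2 - th))
d <- data.frame(state1 = 1:500, state2 = 201:700)
optimize(obj, interval = c(1, 700), d = d)$minimum               # about 350
optim(350, obj, d = d, method = "Brent", lower = 1, upper = 700)$par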
2009 Mar 02
1
initial gradient and vmmin not finite
Dear Rhelpers
I have a problem with the initial values; could you please tell me how to solve it?
Thank you
June
> p = summary(maxLik(fr,start=c(0,0,0,1,0,-25,-0.2)))
Error in maxRoutine(fn = logLik, grad = grad, hess = hess, start = start, :
NA in the initial gradient
> p = summary(maxLik(fr,start=c(0,0,0,1,0,-25,-0.2),method="BFGS"))
Error in optim(start, func, gr =
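A hedged illustration of where this message comes from: if the log-likelihood is not finite at the start values (for example a variance parameter starting where the density is NaN), maxLik() and optim() stop with "NA in the initial gradient" or "initial value ... is not finite", so evaluating fr and its numerical gradient at the start vector is the first thing to check. Toy log-likelihood, illustrative values:

library(numDeriv)
ll <- function(p) sum(dnorm(1:5, mean = p[1], sd = p[2], log = TRUE))
ll(c(0, -1))        # NaN: a start value like this triggers the error
ll(c(0, 1))         # finite: a usable start value
grad(ll, c(0, 1))   # finite gradient, so the optimiser can proceed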
2009 Nov 20
2
Problem with Numerical derivatives (numDeriv) and mvtnorm
I'm trying to obtain the numerical derivative of a probability computed
with mvtnorm with respect to its parameters, using grad() and
jacobian() from numDeriv.
To simplify the matter, here is an example:
PP1 <- function(p){
thetac <- p
thetae <- 0.323340333
thetab <- -0.280970036
thetao <- 0.770768082
ssigma <- diag(4)
ssigma[1,2] <- 0.229502120
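The PP1 example is cut off above; a hedged note on the usual culprit in this situation: pmvnorm()'s default GenzBretz algorithm is randomized, so successive evaluations jitter by more than grad()'s finite-difference step and the "derivative" is mostly noise. Requesting a deterministic algorithm makes the probability smooth enough to differentiate; the bivariate probability below is illustrative, not the poster's PP1:

library(mvtnorm)
library(numDeriv)
PP <- function(rho) {
  S <- diag(2); S[1, 2] <- S[2, 1] <- rho
  pmvnorm(upper = c(0, 0), corr = S, algorithm = Miwa())[1]
}
grad(PP, 0.25)   # d/d rho of the orthant probability, now stable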
2011 Nov 16
1
Cubic Gradient Descent Package
R -
Does anyone know of a cubic gradient descent package? I found grad.desc()
but that only allows for a 2d function. I have 3 free parameters and thus
am looking for a 3d function.
Thank you,
--
Edward H. Patzelt
Research Assistant – TRiCAM Lab
University of Minnesota – Psychology/Psychiatry
VA Medical Center
S355 Elliot Hall: 612-626-0072
www.psych.umn.edu/research/tricam
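A hedged sketch of the workaround: gradient descent itself has no dimension limit (grad.desc() in the animation package is just a 2-d visualisation), so three free parameters only need a length-3 update, or a call to optim(); the toy objective is illustrative:

f  <- function(p) sum((p - c(1, -2, 3))^2)   # objective in 3 parameters
gf <- function(p) 2 * (p - c(1, -2, 3))      # its gradient
p <- c(0, 0, 0); step <- 0.1
for (i in 1:200) p <- p - step * gf(p)       # fixed-step gradient descent
p                                            # approaches c(1, -2, 3)
# equivalently: optim(c(0, 0, 0), f, gf, method = "CG")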
2012 Jun 21
1
R function similar to gradient function in Matlab?
Hi,
I am trying to convert some Matlab code into R for running some
experiments and I was wondering if there is some function in R which
does the work of the gradient function in Matlab calculating the
"gradient" of 1-, 2- and 3-d images. I only need the 3-d calculations
for running these experiments.
Many thanks and best wishes,
Ranjan
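A hedged pointer: pracma::gradient() reproduces MATLAB's gradient() for vectors and matrices; for a 3-d image the same central-difference scheme can be applied along each margin of the array, e.g. along the first dimension:

grad.dim1 <- function(a, h = 1) {
  n <- dim(a)[1]
  g <- a
  g[2:(n - 1), , ] <- (a[3:n, , ] - a[1:(n - 2), , ]) / (2 * h)   # central differences inside
  g[1, , ] <- (a[2, , ] - a[1, , ]) / h                           # one-sided at the edges
  g[n, , ] <- (a[n, , ] - a[n - 1, , ]) / h
  g
}
# other dimensions by permuting: aperm(grad.dim1(aperm(img, c(2, 1, 3))), c(2, 1, 3))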
2001 Sep 11
2
Differential Equations Using R?
To whom it may concern,
I am a student at Macalester College, and next semester Macalester
is going to offer a course for CellBio that is mainly statistically based.
For the most part the students will be using R for analysis. The problem is
that there will be some simple differential equations for the students to solve.
The committee that is in charge of the class curriculum would like only to
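A hedged sketch of how such exercises are typically handled in R today: the deSolve package (successor of the older odesolve package) integrates simple ODE systems numerically; logistic growth as an illustration:

library(deSolve)
logistic <- function(t, y, parms) list(parms["r"] * y * (1 - y / parms["K"]))
out <- ode(y = c(N = 1), times = seq(0, 10, by = 0.1), func = logistic,
           parms = c(r = 1, K = 100))
head(out)   # columns: time and N(t)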
2010 Apr 06
2
Extracting formulae from expression() / deriv()
I am attempting to extract the derivative/gradient from this expression:
df1p <- deriv(f1, "P")
> df1p
expression({
.value <- s - c - a * P
.grad <- array(0, c(length(.value), 1L), list(NULL, c("P")))
.grad[, "P"] <- -a
attr(.value, "gradient") <- .grad
.value
})
So in this case I want to extract the "-a".
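A hedged answer sketch: for the symbolic expression itself, D() (rather than deriv()) returns the derivative directly, so the -a can be obtained without parsing the code that deriv() generates:

dP <- D(quote(s - c - a * P), "P")   # same model as f1, written out explicitly
dP                                   # -a, as an unevaluated language object
deparse(dP)                          # "-a", if a character string is wanted

If numeric gradient values are what is needed instead, evaluating the deriv() result in an environment where s, c, a and P exist and reading attr(value, "gradient") avoids extracting the formula at all.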
2008 Sep 08
1
Vorticity and Divergence
Hi all,
I have some wind data (U and V components) and I would like to compute
the vorticity and divergence of these fields. Is there any R function that
can easily do that?
Thanks in advance for any help
Igor Oliveira
CSAG, Dept. Environmental & Geographical Science,
University of Cape Town,
Private Bag X3,
Rondebosch 7701. Tel.: +27 (0)21 650 5774
South Africa Fax: +27 (0)21
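No dedicated base-R function comes to mind; a hedged finite-difference sketch, assuming U and V are matrices on a regular grid with the first index along x and the second along y (grid spacings dx, dy):

ddx <- function(A, dx) {   # central differences inside, one-sided at the edges
  n <- nrow(A)
  (A[c(2:n, n), ] - A[c(1, 1:(n - 1)), ]) / (dx * c(1, rep(2, n - 2), 1))
}
ddy <- function(A, dy) t(ddx(t(A), dy))                          # same stencil along the second index
vorticity  <- function(U, V, dx, dy) ddx(V, dx) - ddy(U, dy)     # dV/dx - dU/dy
divergence <- function(U, V, dx, dy) ddx(U, dx) + ddy(V, dy)     # dU/dx + dV/dy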
2009 Oct 29
4
deriv() to take vector of expressions as 1st arg?
The deriv() function takes an 'expression' as its first argument. I was
wondering if this function can take an array or a vector of
expressions as its first argument. As an aside, I did see how to give a vector
argument to the second argument.
I'd like to have something like:
deriv(c(~x^2+y^3, ~x^5+y^6), c("x","y"))
the documentation for this function talks about being able to
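deriv() indeed takes a single expression, but the vector-of-expressions behaviour asked about here can be had with a hedged one-liner over a list of one-sided formulas:

exprs <- list(~ x^2 + y^3, ~ x^5 + y^6)
lapply(exprs, function(e) deriv(e, c("x", "y")))   # one gradient expression per formula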
2007 Jan 12
1
incorrect result of deriv (PR#9449)
Full_Name: Joerg Polzehl
Version: 2.3.1
OS: x86_64, linux-gnu
Submission from: (NULL) (62.141.176.22)
I observed an incorrect behavior of function deriv when evaluating arguments of
dnorm
deriv(~dnorm(z,0,s),"z")
expression({
.value <- dnorm(z, 0, s)
.grad <- array(0, c(length(.value), 1), list(NULL, c("z")))
.grad[, "z"] <- -(z * dnorm(z))
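For reference, the result reported above drops the standard deviation s; the correct derivative is d/dz dnorm(z, 0, s) = -(z / s^2) * dnorm(z, 0, s), which a quick numerical check confirms:

library(numDeriv)
z <- 0.7; s <- 2
grad(function(z) dnorm(z, 0, s), z)   # numerical derivative
-(z / s^2) * dnorm(z, 0, s)           # correct analytic value (matches)
-(z * dnorm(z))                       # what the reported deriv() output computes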
2011 May 03
3
help with the maxBHHH routine
Hello R community,
I have been using R's in-built maximum likelihood functions for the
different methods (NR, BFGS, etc.).
I have figured out how to use all of them except the maxBHHH function. This
one is different from the others, as it requires an observation-level
gradient.
I am using the following syntax:
maxBHHH(logLik,grad=nuGradient,finalHessian="BHHH",start=prm,iterlim=2)
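A hedged sketch of what "observation-level gradient" means here: for maxBHHH() the log-likelihood function should return one value per observation and the gradient function an n x k matrix (one row per observation, one column per parameter), from whose outer products the BHHH approximation to the Hessian is built. Toy normal-sample example with illustrative names:

library(maxLik)
x <- rnorm(100, mean = 2, sd = 1.5)
loglik.i <- function(prm) dnorm(x, mean = prm[1], sd = prm[2], log = TRUE)   # length-n vector
grad.i <- function(prm) {
  mu <- prm[1]; s <- prm[2]
  cbind((x - mu) / s^2,              # d log f_i / d mu
        (x - mu)^2 / s^3 - 1 / s)    # d log f_i / d sigma
}
maxBHHH(loglik.i, grad = grad.i, start = c(mu = 1, sigma = 1))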
2009 Nov 29
1
optim or nlminb for minimization, which to believe?
I have constructed the function mml2 (below) based on the likelihood function described in the minimal LaTeX I have pasted below for anyone who wants to look at it. This function finds parameter estimates for a basic Rasch (IRT) model. Without the gradient, either nlminb or optim returns the correct parameter estimates and, in the case of optim, the correct standard
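A hedged sanity check for the "which to believe" question: pass the same objective and analytic gradient to both optimisers and compare the analytic gradient against a numerical one at the start values; when the two optimisers only disagree once the gradient is supplied, the gradient (not the optimiser) is usually the part that is wrong. Toy objective, illustrative only:

library(numDeriv)
fn <- function(p) sum((p - c(1, 2))^2)
gr <- function(p) 2 * (p - c(1, 2))
p0 <- c(0, 0)
max(abs(gr(p0) - grad(fn, p0)))        # should be near 0 if gr is correct
optim(p0, fn, gr, method = "BFGS")$par
nlminb(p0, fn, gradient = gr)$par      # the two should agree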
2012 Nov 15
1
hessian fails for box-constrained problems when close to boundary?
Hi
I am trying to recover the Hessian of a problem optimised with
box constraints. The problem is that in some cases my estimates are very
close to the boundary, which makes optim(..., hessian=TRUE) or
optimHessian() fail, as they do not respect the box constraints and hence
evaluate the function in the infeasible parameter space.
As a simple example (my problem is more complex though,
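A hedged sketch of one way around this when the estimate is close to (but not exactly on) a bound: a finite-difference Hessian that steps inward whenever a forward step would leave the box, so the objective is never evaluated at infeasible points. Accuracy is lower than central differences, and the box is assumed to be wider than 2*h in every coordinate; boxHessian() is an illustrative helper, not an existing function:

boxHessian <- function(fn, x, lower = -Inf, upper = Inf, h = 1e-4, ...) {
  n <- length(x)
  ## per-coordinate step: +h if feasible, otherwise -h; NA if the box is narrower than 2*h
  s <- ifelse(x + 2 * h <= upper, h, ifelse(x - 2 * h >= lower, -h, NA))
  f0 <- fn(x, ...)
  fi <- sapply(seq_len(n), function(i) { xi <- x; xi[i] <- xi[i] + s[i]; fn(xi, ...) })
  H <- matrix(NA_real_, n, n)
  for (i in seq_len(n)) for (j in seq_len(n)) {
    xij <- x
    xij[i] <- xij[i] + s[i]
    xij[j] <- xij[j] + s[j]
    ## forward-difference second derivative, never leaving the box
    H[i, j] <- (fn(xij, ...) - fi[i] - fi[j] + f0) / (s[i] * s[j])
  }
  (H + t(H)) / 2   # symmetrise
}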