I would venture that if you had written this code yourself, you would know
how to do this, which suggests that you are asking us to do your homework;
that is not the purpose of this list. Wrapping the code in a for or while
loop and storing the estimated parameters under the index of the current
iteration should be no problem if you programmed the code you provided.
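
For what it's worth, here is a rough sketch of what I mean, assuming you wrap
your data-generation block in a function (hypothetically called gen.data()
below) and that em.lomax() returns c(theta.t, beta.t):

nsim <- 10000
results <- matrix(NA, nrow = nsim, ncol = 2,
                  dimnames = list(NULL, c("theta", "beta")))

for (s in 1:nsim) {
  Yobs <- gen.data()              # one progressively censored sample
  results[s, ] <- em.lomax(Yobs)  # store both estimates for this replication
}

colMeans(results)                 # averages over the 10,000 replications

Equivalently, rowMeans(replicate(10000, em.lomax(gen.data()))) does the same
thing; you will probably want to drop the cat() call inside em.lomax() before
running that many replications. Loops are not a problem here, since the EM
iterations dominate the run time anyway.
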
Best,
Daniel
Majdi wrote:
> Hi
>
> My code generates data using progressive censoring;
> the program then uses these data to estimate the parameters (theta.t and
> beta.t) via the EM algorithm.
>
> My question is that I have to do a simulation: I have to run my code
> 10,000 times, store the output from each run, and at the end find
> the average of theta.t and beta.t over these 10,000 simulations.
>
> Is this possible at all, or should I go back to SAS and try to do it
> there? I have learned that R doesn't like loops.
>
> ##################################
>
> ## Lomax distribution ##
> ## Use EM algorithm to estimate parameters of Lomax based on progressively
> ## censored data
>
> n=20;m=5;R<-c(0,0,0,0,15)
> theta=0.4986; beta=3
>
> # generate data using progressive censoring algorithm#
> ######################################
>
> W<-matrix((runif(m,0,1)),m,1)
> E<-matrix(NA,m,1);V<-matrix(NA,m,1);U<-matrix(NA,m,1)
>
> i<-1
> while (i<=m) {
>
> E[i]<- 1/( i+ sum(R[(m+1-i):m]))
> i<-i+1
> }
>
> V<-W^E
>
> i<-1
> while (i<=m) {
>
> U[i]<- (1- prod( V[(m+1-i):m]))
> i<-i+1
> }
>
> x<-( (1-U)^(-1/theta)-1 )/beta # simulation ends
>
> Yobs<-x
>
> em.lomax <- function(Yobs){   # EM estimation of theta and beta
> r <- length(Yobs)
>
> ##initial value
>
> theta.t=theta
> beta.t=beta
>
> # Define log-likelihood function
> ll <- function(y, k, c){
>   n*log(c*k) - (k+1)*( sum(log(1+c*y)) + sum( R*log(1+c*y) + 1/k ) )
> }
>
> # Compute the log-likelihood for the initial values, ignoring the
> # missing data mechanism
> lltm1 <- ll(Yobs, theta.t, beta.t)
>
> repeat{
> # E-step
>
> Bbar<- function(theta.t,beta.t){
> sum(R*(1+beta.t*(theta.t+1)*Yobs)/(beta.t*(theta.t+1)*(1+beta.t*Yobs)))
> }
>
> Abar<- function(theta.t,beta.t){
> sum( R* ( log(1+beta.t*Yobs)+ 1/theta.t ))
> }
>
> theta.t1<- n/( sum(log(1+beta.t*Yobs)) + Abar(theta.t,beta.t))
>
> # M-step
> beta.t1 <- ((theta.t1 + 1)/n *
>   (sum(Yobs/(1+beta.t*Yobs)) + Bbar(theta.t, beta.t)))^(-1)
> theta.t1<- n/( sum(log(1+beta.t*Yobs)) + Abar(theta.t,beta.t))
>
> beta.t <-beta.t1
> theta.t<-theta.t1
>
> # compute log-likelihood using current estimates, ignoring the missing
> # data mechanism
> llt <- ll(Yobs, theta.t, beta.t)
>
> # Print current parameter values and likelihood
> cat(theta.t, beta.t, llt, "\n")
>
>
> # Stop if converged
> if ( abs(lltm1 - llt) < 0.0001) break
> lltm1 <- llt
> }
> return(c(theta = theta.t, beta = beta.t))
> }
>
> em.lomax(Yobs)
>
> ################################################################
>
> Please help me with this.
>
> Majdi
>