Have you first done more traditional analyses, like normal probability
plots and simple ARIMA models of each series individually? In
particular, have you tried this after transforming your two series into
their mean and their difference? The mean series should reflect what is
happening in both hemispheres, while the difference series should pick
up asymmetries between them.
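For instance (a sketch only -- "datin" here is assumed to be the
two-column matrix you describe below, and the ARIMA orders are just a
starting guess):

    m <- rowMeans(datin)            # mean series: common activity
    d <- datin[, 1] - datin[, 2]    # difference series: asymmetry
    qqnorm(m); qqline(m)            # normal probability plots
    qqnorm(d); qqline(d)
    fit.m <- arima(m, order = c(1, 0, 1))
    fit.d <- arima(d, order = c(1, 0, 1))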
Regarding "KernSmooth", I didn't see in its documentation any
mention
of time series. Virtually all of the traditional distributional
analyses assume independence. Almost no time series are statistically
independent. I'd be particularly skeptical about any suggestions of
non-normality among time series with 128 observations per second.
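The serial dependence is easy to check before worrying about
distributional shape (again assuming "datin" is your two-column
matrix):

    acf(datin[, 1])                 # left hemisphere autocorrelation
    acf(datin[, 2])                 # right hemisphere autocorrelation
    ccf(datin[, 1], datin[, 2])     # cross-correlation between the two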
I've skimmed through your code, but I've never used KernSmooth, so I
can't comment on the specifics. If you had included a short,
self-contained, reproducible example, I might have been able to say
more.
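One general observation, though: according to its documentation,
bkde2D() returns density values on a grid, not probabilities. Unless
fhat is rescaled so that the grid cells sum to 1, the p*log(p) sums in
your code are not entropies, and a negative "MI" is then quite
possible. A self-contained sketch, on simulated data rather than EEG,
of what a rescaled calculation might look like:

    library(KernSmooth)
    set.seed(1)
    n <- 1024
    x <- as.numeric(arima.sim(list(ar = 0.9), n))        # toy series
    y <- 0.5 * x + as.numeric(arima.sim(list(ar = 0.9), n))
    kd <- bkde2D(cbind(x, y), bandwidth = c(dpik(x), dpik(y)),
                 gridsize = c(100, 100))
    p  <- kd$fhat / sum(kd$fhat)    # rescale so grid cells sum to 1
    px <- rowSums(p)                # marginal of x
    py <- colSums(p)                # marginal of y
    plogp <- function(z) ifelse(z > 0, z * log(z), 0)    # 0*log(0) := 0
    mi <- sum(plogp(p)) - sum(plogp(px)) - sum(plogp(py))
    mi

Because px and py are the exact marginals of the rescaled p, this MI is
nonnegative by construction, up to the 0*log(0) convention.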
hope this helps.
spencer graves
tim parkin wrote:
> Hi, I hope this is going to the right place. I am trying to write a
> program which uses the KernSmooth library to estimate mutual
> information between two time series at various different lags. At the
> moment it's producing negative values, which is supposed to be
> impossible (something is fishy). I am summing across one row of the
> matrix to get p(value is in bin x), summing across the columns to get
> p(value is in bin y), and just taking the value of the density at
> (x, y) to get the joint probability p(x, y), then applying the
> formulas for mutual information. I know it's kludgy... does anyone
> have any idea as to what I've done wrong?
>
> The data are EEG sampled at 128 Hz from the left hemisphere (C3) and
> right hemisphere of the brain during neurofeedback sessions; they
> enter the function as the bivariate time series "datin", and "lmax" is
> the maximum lag at which to measure the mutual information. I hope to
> use it to set up optimal bivariate embeddings, but also to measure the
> strength of the relationship between the two time series.
>
> mutual2 <- function(datin, lmax) {
>     library(KernSmooth)
>     lmax <- lmax + 1
>     lhplogp <- numeric(100)
>     rhplogp <- numeric(100)
>     mut <- numeric(lmax)
>     jointplogp <- matrix(0, nrow = 100, ncol = 100)
>     lh <- as.vector(datin[, 1])
>     rh <- as.vector(datin[, 2])
>
>     rhemb <- embed(rh, lmax)            # lagged copies of rh
>     lhc <- lh[1:length(rhemb[, 1])]     # trim lh to match
>
>     for (i in 1:lmax) {
>         rhc <- rhemb[, i]
>         kd <- bkde2D(cbind(lhc, rhc),
>                      bandwidth = c(dpik(lhc), dpik(rhc)),
>                      gridsize = c(100, 100), truncate = TRUE)
>         # 2d kernel density estimate
>         kdmat <- as.matrix(kd$fhat)
>
>         for (j in 1:100) {
>             p <- sum(kdmat[j, ])
>             lhplogp[j] <- p * log(p)
>         }
>         lhplogp[is.nan(lhplogp)] <- 0
>         hlh <- -1 * sum(lhplogp)        # entropy of left hemisphere
>
>         for (j in 1:100) {
>             p <- sum(kdmat[, j])
>             rhplogp[j] <- p * log(p)
>         }
>         rhplogp[is.nan(rhplogp)] <- 0
>         hrh <- -1 * sum(rhplogp)        # entropy of right hemisphere
>
>         for (j in 1:100) {
>             for (k in 1:100) {
>                 p <- kdmat[j, k]        # joint probability at (j, k)
>                 jointplogp[j, k] <- p * log(p)
>             }
>         }
>         jointplogp[is.nan(jointplogp)] <- 0
>         hjoint <- -1 * sum(jointplogp)  # joint entropy
>
>         mut[i] <- hlh + hrh - hjoint    # classic defn of mutual information
>
>         maint <- paste("MI by Kernel Density is ", as.character(mut[i]))
>         rhlab <- paste("rh lag ", as.character(i))
>         contour(kd$x1, kd$x2, kd$fhat, xlab = "lh", ylab = rhlab,
>                 main = maint)
>         # dammit not producing realistic numbers
>     }
>
>     return(mut)
> }
>
> Thanks
>
> Tim Parkin