similar to: About dpik function

Displaying 20 results from an estimated 200 matches similar to: "About dpik function"

2012 Jul 16
2
about dpik
Thank you for your reply. I know that the x in dpik() is a numeric vector, but I don't know how to get a large dataset (>1000 values) into such a vector rather than typing it into c(). Below is one of my attempts; the resulting h is [1] 0.001180569, which seems plausible.
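A minimal sketch of reading a large sample from a file instead of typing it into c(); the file name values.txt, its one-number-per-line layout, and the alternative values.csv with a column named value are all assumptions:

    library(KernSmooth)

    # read the values from a plain text file into a numeric vector
    x <- scan("values.txt")          # or: x <- read.csv("values.csv")$value

    # dpik() takes the whole vector at once and returns the plug-in bandwidth
    h <- dpik(x)
    h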
2008 Jan 03
1
KernSmooth: bkde and dpik bandwidth questions
Hi, I have two separate questions relating to the KernSmooth package. I am using the dpik function from the KernSmooth package and receive the warning: In kappam * Gcounts : longer object length is not a multiple of shorter object length. I saw an earlier post, but that issue involved the bkde function and the person appeared to be using too small a bandwidth.
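Since the warning comes from inside dpik(), a minimal call pattern that spells out the binning-related arguments (gridsize, range.x) may help to narrow it down; this is only a sketch with simulated data, not a confirmed fix:

    library(KernSmooth)

    set.seed(1)
    x <- rnorm(500)                  # stand-in for the real sample

    # plug-in bandwidth with the binning controls written out explicitly
    h <- dpik(x, scalest = "minim", gridsize = 401L, range.x = range(x))

    # the bandwidth is then handed to bkde() for the density estimate
    est <- bkde(x, bandwidth = h)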
2007 Dec 10
0
problem using "by" with custom function?
Hi, I'm relatively new to R and R development, so please forgive me for any obvious errors. What I am trying to do is use the dpik command from the KernSmooth package to estimate bandwidth parameters for GPS telemetry data. I have been able to get this to work on a case-by-case basis without any problem, but would like to extend this so that I can batch process many different animals for
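A sketch of the batch step with by(), assuming a data frame called telemetry with an animal id column and coordinate columns x and y (all names and the toy data are hypothetical):

    library(KernSmooth)

    # toy stand-in for the telemetry data
    telemetry <- data.frame(animal = rep(c("a1", "a2"), each = 200),
                            x = rnorm(400), y = rnorm(400))

    # one dpik() bandwidth per animal and per coordinate direction
    bw_by_animal <- by(telemetry, telemetry$animal,
                       function(d) c(hx = dpik(d$x), hy = dpik(d$y)))
    bw_by_animal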
2012 Jul 31
3
about lscv
Thanks in advance. I am calculating the cross-validation bandwidth h for kernel smoothing in R, and the usage I looked up is lscv(x, ..., exact=FALSE). My question is: what does the "..." stand for and mean? Would you mind explaining it specifically? Thanks. Regards
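The "..." in a usage line is R's dots argument: any additional named arguments supplied by the caller are collected there and usually passed on to other functions. A small illustration of the mechanism itself, independent of any particular lscv() implementation (show_dots and its arguments are made up):

    # '...' collects whatever extra arguments the caller chooses to pass
    show_dots <- function(x, ..., exact = FALSE) {
      extras <- list(...)
      cat("got", length(extras), "extra argument(s):", names(extras), "\n")
      exact
    }

    show_dots(1:10)                                          # no extras
    show_dots(1:10, kern = "gauss", deg = 1, exact = TRUE)   # extras are caught by '...'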
2009 Jun 03
0
Treated - KernSmooth pckg - dpik function gives numeric(0) for kernel="epanech"
The Epanechnikov kernel works with the option canonical=TRUE; however, it would be good to know why it does not work for canonical=FALSE (the default). Sorry for creating a possibly useless thread. Best regards, Ondra.
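A sketch of the two calls being compared, with simulated data standing in for the real sample; kernel and canonical are documented arguments of KernSmooth's dpik():

    library(KernSmooth)

    set.seed(1)
    x <- rnorm(200)

    dpik(x, kernel = "epanech", canonical = TRUE)   # reported to work
    dpik(x, kernel = "epanech")                     # canonical = FALSE: reported to return numeric(0)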
2013 Feb 27
2
matrix multiplication
Hi, try this (mat1 is the data matrix; for each pair of rows it sums, over the column pairs (1,5), (2,6), (3,7), (4,8), the products of the absolute differences):
# mat1 is the data
res <- do.call(cbind, lapply(seq_len(nrow(mat1)), function(i) {
  new1 <- do.call(rbind, lapply(seq_len(nrow(mat1[-i, ])), function(j) {
    x1 <- rbind(mat1[i, ], mat1[j, ])
    x2 <- (abs(x1[1, 1] - x1[2, 1]) * abs(x1[1, 5] - x1[2, 5])) +
          (abs(x1[1, 2] - x1[2, 2]) * abs(x1[1, 6] - x1[2, 6])) +
          (abs(x1[1, 3] - x1[2, 3]) * abs(x1[1, 7] - x1[2, 7])) +
          (abs(x1[1, 4] - x1[2, 4]) * abs(x1[1, 8] - x1[2, 8]))
    x2
  }))
  new1
}))
2006 Mar 31
1
mutual information for two time series
Hi I hope this is going to the right place. I am trying to write a program which uses KernSmooth library to estimate mutual information between two time series at various different lags. At the moment it’s producing negative values, which is supposed to be impossible (something is fishy). I am summing across one row of the matrix to get p(value is in bin x) and summing across the columns to get
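For reference, a minimal binned mutual-information sketch in base R (simulated x and y, equal-width bins): when the marginals are taken from the same joint table, the plug-in estimate is non-negative up to floating-point error, so negative values usually mean the joint and marginal probabilities were binned or normalised inconsistently:

    set.seed(1)
    x <- rnorm(1000); y <- x + rnorm(1000)

    nbins <- 20
    joint <- table(cut(x, nbins), cut(y, nbins))   # 2-D histogram
    pxy   <- joint / sum(joint)                    # joint probabilities
    px    <- rowSums(pxy)                          # marginals from the SAME table
    py    <- colSums(pxy)

    nz <- pxy > 0                                  # skip empty cells (0 * log 0 = 0)
    mi <- sum(pxy[nz] * log(pxy[nz] / outer(px, py)[nz]))
    mi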
1999 Nov 18
0
bkde() breaks
Hello, I've been using the KernSmooth package recently and think I have found a problem with it: after loading the library I can issue bkde(c(27,26,27), bandwidth=dpik(c(27,26,27)), range.x=c(4.4, 113.6), gridsize=128, truncate=T) and bkde returns an error. If I change the gridsize to 129 the function works perfectly. I have tried this on my Linux box, and on a nearby Solaris machine, both
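The call from the post, laid out as a small script; it was reported to error at gridsize = 128 and to run once gridsize is changed to 129:

    library(KernSmooth)

    x <- c(27, 26, 27)

    # reported to fail with gridsize = 128 but to work with gridsize = 129
    bkde(x, bandwidth = dpik(x), range.x = c(4.4, 113.6),
         gridsize = 128, truncate = TRUE)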
2004 Oct 12
1
bandwidths for bivariate density estimation
Hi, I am using the KernSmooth package to estimate bivariate density functions nonparametrically. However, it seems that the bandwidths (one for each co-ordinate direction) have to be selected manually. This is not the case for univariate estimation, for which dpik (included in KernSmooth) uses up-to-date plug-in rules. Does anyone know about a package, or function, which estimates bandwidths
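A common workaround (a sketch with simulated data, not a genuinely bivariate plug-in rule) is to run the univariate dpik() once per coordinate and pass the pair to bkde2D():

    library(KernSmooth)

    set.seed(1)
    xy <- cbind(rnorm(500), rnorm(500, sd = 2))    # stand-in bivariate sample

    # one univariate plug-in bandwidth per coordinate direction
    bw <- c(dpik(xy[, 1]), dpik(xy[, 2]))

    est <- bkde2D(xy, bandwidth = bw, gridsize = c(101, 101))
    contour(est$x1, est$x2, est$fhat)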
2011 Jan 20
0
Bandwidth - Kernel Density Estimation
Dear R helpers, I have the recovery rates given below and am trying to estimate the Loss Given Default (LGD); for this I am using kernel density estimation. recovery_rates = c(0.61,0.12,0.10,0.68,0.87,0.19,0.84,0.81,0.87,0.54,0.08,0.65,0.91, 0.56,0.52,0.30,0.41,0.24,0.66,0.35,0.36,0.64,0.55,0.43,0.36,0.28,0.89,0.11,0.23,0.07,
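A sketch of the usual KernSmooth workflow for such a vector; because the recovery_rates vector above is truncated in the archive, a simulated stand-in is used:

    library(KernSmooth)

    # stand-in for the full recovery_rates vector quoted (and truncated) above
    recovery_rates <- runif(50)

    h   <- dpik(recovery_rates)                 # plug-in bandwidth
    est <- bkde(recovery_rates, bandwidth = h,
                range.x = c(0, 1))              # recovery rates live on [0, 1]
    plot(est, type = "l", xlab = "recovery rate", ylab = "density")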
2011 Jul 27
1
create a index.date column
Dear R users, I created a matrix that tells me the first day of use of a category by id. #Calculate time difference test$tdiff<-as.numeric(difftime(as.Date("2002-09-01"), test$ftime, units = "days")) # obtain the index date per person and category index.date.test<-tapply(test$tdiff, list(test$id, test$rcat), max) Nonetheless, at the moment I think will be
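A self-contained sketch of the tapply() step with toy data; the column names mirror those in the post, but the values are made up:

    # toy version of the data described in the post
    test <- data.frame(id    = c(1, 1, 2, 2),
                       rcat  = c("A", "B", "A", "B"),
                       ftime = as.Date(c("2002-01-10", "2002-03-05",
                                         "2002-02-20", "2002-04-01")))

    # days between each record and the fixed reference date
    test$tdiff <- as.numeric(difftime(as.Date("2002-09-01"), test$ftime,
                                      units = "days"))

    # largest time difference (i.e. earliest use) per id and category
    index.date.test <- tapply(test$tdiff, list(test$id, test$rcat), max)
    index.date.test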
2012 Sep 05
4
Summarizing data containing data/time information (as factor)
Dear R user, I want to create a table (as below) to summarize the attached data (Test.csv, which can be read into R with read.csv("Test.csv", header=FALSE)), indicating for each day whether any data are available: value=1 if there are any data for that day, otherwise value=0. 28/04 29/04 30/04 01/05 02/05 532703 0 1 1
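Since the attached file is not available here, the sketch below assumes a data frame with an id column and a timestamp column (both names hypothetical) and simply marks, per id and day, whether any record exists:

    # hypothetical stand-in for the attached Test.csv
    dat <- data.frame(id   = c(532703, 532703, 532703, 532704),
                      time = as.POSIXct(c("2012-04-29 10:00", "2012-04-29 14:00",
                                          "2012-04-30 09:00", "2012-05-01 12:00")))

    day <- format(as.Date(dat$time), "%d/%m")

    # 1 if any observation exists for that id on that day, otherwise 0
    avail <- (table(dat$id, day) > 0) * 1
    avail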
2012 Jul 17
1
about different bandwidths in one graph
Thank you in advance. I want to compare different bandwidths h in one graph of the estimated density. This is the table of bandwidths h: thumb rule (normal)--0.00205; thumb rule (Epanech.)--0.00452; plug-in (normal)--0.0009; plug-in (Epanech.)--0.002. The setting: the N=1010 data sample is drawn from the normal distribution N(0, 0.0077^2). The grid points are taken to be [-0.05,0.05] and
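A sketch of overlaying estimates for the four bandwidths from the table, with a simulated N(0, 0.0077^2) sample of size 1010 and the grid [-0.05, 0.05] from the post (the normal kernel is used throughout, so the Epanechnikov rows are only approximated here):

    library(KernSmooth)

    set.seed(1)
    x  <- rnorm(1010, mean = 0, sd = 0.0077)
    hs <- c(thumb_normal = 0.00205, thumb_epan = 0.00452,
            plugin_normal = 0.0009, plugin_epan = 0.002)

    plot(NULL, xlim = c(-0.05, 0.05), ylim = c(0, 60), xlab = "x", ylab = "density")
    for (i in seq_along(hs)) {
      est <- bkde(x, bandwidth = hs[i], range.x = c(-0.05, 0.05))
      lines(est, col = i)
    }
    legend("topright", legend = names(hs), col = seq_along(hs), lty = 1)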
2017 Dec 06
2
Odd dates generated in Forecasts
Dear friends, I have a weekly time series which starts on Jan 4th, 2003 and ends on December 31st, 2016. I set up my ts object as follows: MyTseries <- ts(mydataset, start=2003, end=2016, frequency=52) MyModel <- auto.arima(MyTseries, d=1, D=1) MyModelForecast <- forecast(MyModel, h=12) Since my last observation was on December 31st, 2016, I expected my forecast dates to start on
2018 May 16
1
Systemfit Question
I can't get my simultaneous equations to work using systemfit. Please help. #Reproducible script Empdata<- read.csv("/Users/ngwinuiazenui/Documents/UPLOADemp.csv") View(Empdata) str(Empdata) Empdata$gnipc<-as.numeric(Empdata$gnipc) install.packages("systemfit") library("systemfit") pdata <- plm.data(Empdata,
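A minimal systemfit sketch with made-up variables and equations (the real equations and the truncated plm.data() call are not shown in the post, so everything below is illustrative):

    library(systemfit)

    set.seed(1)
    d <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
    d$y1 <- 1 + 2 * d$x1 + rnorm(100)
    d$y2 <- 3 - 1 * d$x2 + rnorm(100)

    # a list of formulas, one per equation in the system
    eqs <- list(first = y1 ~ x1, second = y2 ~ x2)

    fit <- systemfit(eqs, method = "SUR", data = d)
    summary(fit)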
2012 Jul 10
2
estimation of NA by predict command
Dear Arun and all R users, I will first try to define my issue simply. I have data in two columns: Year (dd/mm/yyyy) and Discharge (numeric values x, ...). There are some NA values in Discharge which I would like to fill in using the predict command, but I can't figure out how to write the code for that. Could you please help me with that? I have also written
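One common pattern, sketched with toy data: fit a model on the rows where Discharge is observed and call predict() on the rows where it is NA. The column names follow the post, but the choice of predictor (day of year) is an assumption:

    # toy data frame in the layout described in the post
    flow <- data.frame(Date      = seq(as.Date("2000-01-01"), by = "day", length.out = 100),
                       Discharge = rnorm(100, mean = 50))
    flow$Discharge[c(10, 40, 41)] <- NA              # some missing values

    flow$doy <- as.numeric(format(flow$Date, "%j"))  # assumed predictor

    fit  <- lm(Discharge ~ doy, data = flow, na.action = na.omit)

    miss <- is.na(flow$Discharge)
    flow$Discharge[miss] <- predict(fit, newdata = flow[miss, ])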
2008 Dec 09
2
Need help optimizing/vectorizing nested loops
Hi, I'm analyzing a large number of large simulation datasets, and I've isolated one of the bottlenecks. Any help in speeding it up would be appreciated. `dat` is a dataframe of samples from a regular grid. The first two columns are the spatial coordinates of the samples, the remaining 20 columns are the abundances of species in each cell. I need to calculate the species richness in
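If richness is simply the number of species with non-zero abundance in each cell, the nested loops collapse to one vectorised call; the sketch below assumes the layout described (columns 1-2 are coordinates, the remaining 20 columns are abundances):

    # toy stand-in for `dat`: 2 coordinate columns + 20 abundance columns
    set.seed(1)
    dat <- cbind(expand.grid(x = 1:10, y = 1:10),
                 matrix(rpois(100 * 20, lambda = 0.5), ncol = 20))

    # species richness per cell: count the abundance columns that are > 0
    dat$richness <- rowSums(dat[, -(1:2)] > 0)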
2017 Dec 06
0
Odd dates generated in Forecasts
> On Dec 6, 2017, at 5:07 AM, Paul Bernal <paulbernal07 at gmail.com> wrote: > > Dear friends, > > I have a weekly time series which starts on Jan 4th, 2003 and ends on > december 31st, 2016. > > I set up my ts object as follows: > > MyTseries <- ts(mydataset, start=2003, end=2016, frequency=52) > > MyModel <- auto.arima(MyTseries, d=1, D=1)
2017 Dec 06
1
Odd dates generated in Forecasts
Thank you very much David. As a matter of fact, I solved it by doing the following: MyTimeSeriesObj <- ts(MyData, freq=365.25/7, start=decimal_date(mdy("01-04-2003"))) After that adjustment, my forecast dates started from 2017 onward. Cheers, Paul
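A self-contained version of that fix, with simulated weekly data; decimal_date() and mdy() come from lubridate, auto.arima() and forecast() from the forecast package:

    library(lubridate)
    library(forecast)

    set.seed(1)
    weekly <- rnorm(731)                       # roughly 14 years of weekly observations

    # average number of weeks per year instead of exactly 52
    MyTseries <- ts(weekly, frequency = 365.25 / 7,
                    start = decimal_date(mdy("01-04-2003")))

    MyModel         <- auto.arima(MyTseries)
    MyModelForecast <- forecast(MyModel, h = 12)
    MyModelForecast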
2013 Jan 29
1
starting values in glm(..., family = binomial(link = log))
Dear R-helpers, I have a problem with a glm model. I am trying to fit models with the log as link function instead of the logit. However, in some cases glm fails to estimate those models and suggests supplying starting values. But when I set start = coef(logistic_model) within the function call, glm still says it cannot find valid starting values. This seems to be more of a problem when I include a
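A sketch of the usual pitfall: start must supply one value per coefficient of the model actually being fitted, and on the scale of the log link, so coefficients copied from a logit fit with a different set of terms will not work. The data and variable names below are made up:

    set.seed(1)
    d <- data.frame(x1 = rnorm(200), x2 = rnorm(200))
    d$y <- rbinom(200, 1, exp(-2 + 0.3 * d$x1))      # success probabilities generated on the log-link scale

    # one starting value per coefficient of THIS model (intercept, x1, x2);
    # a clearly negative intercept keeps the initial fitted probabilities below 1
    fit <- glm(y ~ x1 + x2, family = binomial(link = "log"), data = d,
               start = c(-2, 0, 0))
    summary(fit)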