similar to: Kernel density

Displaying 20 results from an estimated 3000 matches similar to: "Kernel density"

2005 Apr 18
4
longer object length, is not a multiple of shorter object length in: kappa * gcounts
Hi, I was using a density estimation function as follows: > est <- KernSmooth::bkde(x3, bandwidth=10) When I set the bandwidth to less than 5, I got the error "longer object length is not a multiple of shorter object length in: kappa * gcounts". Can anyone explain this error for me? Thanks! Hui
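A minimal sketch of the usual remedy reported for this kind of warning, assuming it comes from the bandwidth being tiny relative to the spacing of bkde's binning grid; the data below are hypothetical stand-ins, not the original x3:

    library(KernSmooth)
    set.seed(1)
    x3 <- rnorm(500, mean = 0, sd = 2000)              # hypothetical stand-in with a wide range
    diff(range(x3)) / 400                               # spacing of the default 401-point grid
    est1 <- bkde(x3, bandwidth = 50)                    # bandwidth well above the grid spacing
    est2 <- bkde(x3, bandwidth = 5, gridsize = 4001L)   # or refine the grid for a small bandwidth
    est3 <- bkde(x3, bandwidth = dpik(x3))              # or let dpik() choose the bandwidth
    plot(est3, type = "l")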
2008 Jan 03
1
KernSmooth: bkde and dpik bandwidth questions
Hi, I have two separate questions relating to the KernSmooth package. I am using the dpik function from the KernSmooth package and receive the warning "In kappam * Gcounts : longer object length is not a multiple of shorter object length". I saw an earlier post, but that issue involved the bkde function and the person appeared to be using too small a bandwidth.
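The dpik() warning has the same flavour; a hedged sketch of two things worth trying (made-up data, not the poster's): a finer binning grid, or checking for a few extreme values that stretch range(x):

    library(KernSmooth)
    set.seed(2)
    x  <- c(rnorm(1000), 5000)          # made-up data with one outlier stretching the range
    h1 <- dpik(x)                       # may warn with the default gridsize = 401
    h2 <- dpik(x, gridsize = 10001L)    # finer binning grid
    h3 <- dpik(x[abs(x) < 100])         # or inspect/trim the extreme values first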
2012 Jul 16
2
about dpik
Thank you for your reply. I know that the x in dpik() should be a numeric vector, but I don't know how to get a large dataset (>1000 values) into a vector with c(). Below is one of my attempts; the resulting h is: [1] 0.001180569, which seems reasonable.
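For getting a large dataset into dpik(), you normally would not type it into c() at all but read it from a file or take a data-frame column; a hedged sketch with a hypothetical file and column name:

    library(KernSmooth)
    dat <- read.table("mydata.txt", header = TRUE)   # hypothetical file
    x   <- dat$value                                 # hypothetical column; already a numeric vector
    h   <- dpik(x)
    est <- bkde(x, bandwidth = h)
    plot(est, type = "l")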
1999 Nov 18
0
bkde() breaks
Hello, I've been using the KernSmooth package recently and think I have found a problem with it: after loading the library I can issue bkde(c(27,26,27), bandwidth=dpik(c(27,26,27)), range.x=c(4.4, 113.6), gridsize=128, truncate=T) and bkde returns an error. If I change the gridsize to 129 the function works perfectly. I have tried this on my Linux box, and on a nearby Solaris machine, both
2010 Jan 18
1
density() vs. KernSmooth::bkde
Any advice on when to use density() and when to use the KernSmooth package's bkde() to smooth a histogram? I have no specific problem with either one, but I'm curious why there are two such similar implementations. Thanks! mario -- Ing. Mario Valle Data Analysis and Visualization Group | http://www.cscs.ch/~mvalle Swiss National Supercomputing Centre (CSCS) | Tel: +41 (91) 610.82.60 v.
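A small side-by-side sketch, assuming a normal kernel in both cases; the two usually produce nearly identical curves, density() being the base-R interface and bkde() the binned implementation from Wand and Jones:

    library(KernSmooth)
    set.seed(3)
    x  <- rnorm(1000)
    d1 <- density(x, bw = "SJ")             # base R, Sheather-Jones bandwidth
    d2 <- bkde(x, bandwidth = dpik(x))      # KernSmooth, direct plug-in bandwidth
    plot(d1, main = "density() vs bkde()")
    lines(d2, col = "red")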
2017 Apr 27
2
R-3.4.0 and recommended packages
On 27 April 2017 at 12:01, Johannes Ranke wrote: | > so it seems to me this must affect all packages in Debian sid that were | > built before the release of R 3.4.0! | or rather before 14 April 2017, which is when R from revision r72510 was | uploaded to sid as a pre-release candidate. Another example with KernSmooth: > library(KernSmooth) KernSmooth 2.23 loaded Copyright
2008 Sep 29
2
density estimate
Hi, I have a vector of random variables and I'm estimating the density using the "bkde" function in the KernSmooth package. The output contains two vectors (x and y), and the R documentation calls y the density estimate, but my y-values do not look like exact density estimates (since some of them are larger than 1)! What is y here? Is it possible to get the true estimated density at each
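A quick check of what the y values mean, under the usual interpretation that bkde() returns a density rather than probabilities: pointwise values above 1 are fine as long as the area under the curve is about 1. Sketch with made-up data:

    library(KernSmooth)
    set.seed(4)
    x   <- rnorm(1000, sd = 0.1)           # narrow spread, so densities well above 1
    est <- bkde(x)
    max(est$y)                             # > 1, which is still a valid density value
    sum(est$y) * diff(est$x[1:2])          # roughly 1: the curve integrates to one
    approx(est$x, est$y, xout = 0.05)$y    # estimated density at an arbitrary point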
2012 Jun 14
2
density plot on a log scale
I'm working with a large dataset - large enough that when I do a scatter plot the points all blur together, so I want to plot their density by color - a heat map or something like that. I've used smoothScatter for tasks like this, but the problem is that my current dataset really only looks good on a log-log scale. When I do the following command smoothScatter( data,
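One hedged workaround (not necessarily what the thread settled on): smooth the log-transformed values and relabel the axes, since smoothScatter() bins on a linear scale; the column names here are hypothetical:

    set.seed(5)
    data <- data.frame(x = rlnorm(1e5), y = rlnorm(1e5))   # hypothetical positive-valued data
    smoothScatter(log10(data$x), log10(data$y),
                  xlab = "log10(x)", ylab = "log10(y)")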
2011 Aug 09
1
How to pass different arguments to a function within lapply()?
Hi all, I have a data frame called "rst", see below: ------------------------------------------------------------------------------------------ # This is a pasteable example # In case you don't have the "KernSmooth" package installed, please uncomment the line below. # install.packages("KernSmooth") library(KernSmooth) rst <- data.frame(hsp = rnorm(23), dal =
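If the point is to pass a different argument value to each call, Map() or mapply() iterate over several vectors in parallel; a hedged sketch with hypothetical per-column bandwidths:

    library(KernSmooth)
    rst <- data.frame(hsp = rnorm(23), dal = rnorm(23), eks = rnorm(23))  # third column is made up
    bw  <- c(0.3, 0.5, 0.8)                                               # one bandwidth per column
    fits <- Map(function(x, h) bkde(x, bandwidth = h), rst, bw)
    ## equivalently: mapply(bkde, x = rst, bandwidth = bw, SIMPLIFY = FALSE)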
2006 Dec 18
2
surface3d grid from xyz dataframe
Hi List, I am trying to plot a grid with an overlaid height. I have a dataframe with four variables: x, y, gridvalue, height. The dataframe has 2.5 million observations (i.e. grid points). I assign colors through the gridvalue using map_color_gradient, thus producing x, y, gridvalue, height, gridcol as variables of the dataframe. The grid dimensions are 1253 x 2001 (= 2507253 data points). My attempts with
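rgl::surface3d() expects the grid margins as vectors and the height as a matrix of matching dimensions, so one hedged approach is to sort the data frame by x and then y and reshape the height and colour columns; the data frame name and the ordering assumption are mine, not the poster's:

    library(rgl)
    ## df: columns x, y, height, gridcol on a regular 1253 x 2001 grid (hypothetical name)
    df <- df[order(df$x, df$y), ]
    xs <- sort(unique(df$x))                  # length 1253
    ys <- sort(unique(df$y))                  # length 2001
    z  <- matrix(df$height,  nrow = length(xs), ncol = length(ys), byrow = TRUE)
    cl <- matrix(df$gridcol, nrow = length(xs), ncol = length(ys), byrow = TRUE)
    surface3d(xs, ys, z, color = cl)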
2009 Feb 24
2
Problem about plot scale
There are 4 arrays, each holding two-dimensional data (x, y), and these data come from bkde. I tried to plot them in one figure, but details of some curves, such as the peaks and where they are located, are lost because one array has fairly high y values. Another array has many y values near zero, like 1e-20 or 1e-10, forming a long tail, while the highest y value among these arrays is 1e-4. Does R have
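Two hedged options for curves whose y values span many orders of magnitude: a logarithmic y axis (all values must be positive), or rescaling each curve to its own maximum. Sketch with made-up curves:

    set.seed(6)
    x  <- seq(0, 1, length.out = 200)
    ys <- cbind(1e-4  * dnorm(x, 0.5, 0.05),   # high, narrow peak (made up)
                1e-10 * dnorm(x, 0.5, 0.20))   # tiny values with a long tail (made up)
    matplot(x, ys, type = "l", log = "y")                           # log y axis
    matplot(x, sweep(ys, 2, apply(ys, 2, max), "/"), type = "l")    # each curve scaled to max = 1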
2003 Feb 06
1
svm
Hello list, I want to apply svm from library e1071, and I want to supply class weights. I do not really understand the help entry (and there is no example) class.weights: a named vector of weights for the different classes, used for asymmetric class sizes. Not all factor levels have to be supplied (default weight: 1). All components have to be named. I have two
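A minimal sketch of supplying the weights, assuming a two-level factor with levels "a" and "b" where the rarer class should count more; the names of the vector must match the factor levels:

    library(e1071)
    set.seed(7)
    df  <- data.frame(x1 = rnorm(120), x2 = rnorm(120),
                      y  = factor(c(rep("a", 100), rep("b", 20))))  # imbalanced made-up classes
    fit <- svm(y ~ ., data = df, class.weights = c(a = 1, b = 5))   # heavier penalty for errors on class "b"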
2002 Feb 04
1
Guidelines for Rd-Files
Dear list, on p. 11 of "Writing R-extensions" I read "See the ``Guidelines for Rd-files'' for guidelines ... which should be useful..." Where can I find these guidelines? Best, Christian -- *********************************************************************** Christian Hennig Seminar fuer Statistik, ETH-Zentrum (LEO), CH-8092 Zuerich (current) and Fachbereich
2002 Aug 06
1
Rd: more than one list
Dear group, I would like to document more than one function on one help page using the Rd language. I tried something like \value{ \code{foo1} returns a list with components \item{arg1}{Argument 1} \item{arg2}{Argument 2} \code{foo2} returns a list with components \item{arg3}{Argument 3} \item{arg4}{Argument 4} } Unfortunately this makes the text "\code{foo2} returns a list
2002 Feb 22
1
Avoiding the mean
Dear list, what is the fastest way to compute a multivariate mean and cov-matrix? I presume that the mean is computed in cov, so it may be a waste of time to compute the mean first and then a second time inside of cov. Is it faster to use cov.wt, which gives cov-matrix and center? And: If mean and cov should be computed on a part of the data, is it faster to use cov.wt with some weights zero, or
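A small sketch of the two routes mentioned: cov.wt() returns the center (the mean) and the covariance matrix from a single call, and for a part of the data it is usually simplest to subset first rather than pass zero weights; the data below are made up:

    set.seed(8)
    X  <- matrix(rnorm(1e5 * 5), ncol = 5)
    cw <- cov.wt(X)             # list with $center (the mean) and $cov from one pass
    m  <- colMeans(X)           # the separate route
    S  <- cov(X)
    keep <- 1:5e4               # part of the data: subset, then compute
    cw2  <- cov.wt(X[keep, ])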
2002 Aug 15
0
Behaviour of cov.rob/MCD
Dear list, here is something I do not understand about cov.rob. > dat <- rmvnorm(200,rep(0,10),diag(10)) > cov.rob(dat,method="mcd") > cov.rob(dat,method="mcd",quantile.used= floor(3*211/4)) # All fine; default for quantile.used is floor(211/2) > dat <- rmvnorm(20,rep(0,10),diag(10)) > cov.rob(dat,method="mcd") # quantile.used is floor(31/2).
2003 Mar 25
0
isoMDS results
Hi, this is a second try to post this to the R-help mailing list. The first one has been rejected because of a too large attachment. Now I ask this without attaching the data. If you want to reproduce the results, please contact me directly to get the data. (First mail, rejected:) > Attached there is a 149*149 dissimilarity matrix; it is a file obtained by >
2002 Jun 17
1
OT: Journal of Statistical Software
Dear list, yesterday I tried to find the www-site of the Journal of Statistical Software via Google. I was linked to www.jstatsoft.org directly as well as via several other sites, but netscape told me in all cases that www.jstatsoft.org does not exist. Does anybody know what is going on? New website? Wrong address? Best, Christian --
2002 Feb 14
1
Subsets in mclust
Dear group, I want to use the mclust package on large data, and therefore I want to use a subset in the initial clustering phase. From help(mclust): k: If `k' is specified, the hierarchical clustering phase will use a sample of size `k' of the data in the initial hierarchical clustering phase. The default is to use the entire data set. m2 is a
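In current versions of mclust the sampling for the hierarchical initialisation is requested through the initialization argument rather than k; a hedged sketch on made-up data (the 2002 interface quoted above differed):

    library(mclust)
    set.seed(9)
    m2  <- matrix(rnorm(20000 * 2), ncol = 2)    # made-up stand-in for the large data
    s   <- sample(nrow(m2), 2000)                # subset used for the initial hierarchical step
    fit <- Mclust(m2, G = 1:5, initialization = list(subset = s))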
2004 Apr 08
2
How to draw a tree?
Hi, I have run rpart to construct a regression tree. Is there any simple way to draw a nice picture of it, as is usually done in books and papers to visualize the tree? Thank you, Christian *********************************************************************** Christian Hennig Fachbereich Mathematik-SPST/ZMS, Universitaet Hamburg hennig at math.uni-hamburg.de,
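A minimal sketch: the base plot/text methods for rpart objects work out of the box, and the rpart.plot package gives the nicer rendering usually seen in books; the example data set is just the one shipped with rpart:

    library(rpart)
    fit <- rpart(Mileage ~ Weight, data = car.test.frame)   # example data from the rpart package
    plot(fit); text(fit, use.n = TRUE)                      # base drawing
    ## install.packages("rpart.plot")
    rpart.plot::rpart.plot(fit)                             # prettier book-style plot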