Displaying 20 results from an estimated 21 matches for "leibler".
2008 Oct 19
0
Kullback Leibler Divergence
...)
Notice that kl1 and kl are not the same. The documentation for the package
doesn't say whether to evaluate the densities at the same points. Which one is
correct? Do I need to evaluate them at the same points or not?
Thanks,
Lavan
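A minimal sketch of the grid issue, assuming kernel density estimates (the
variable names are illustrative, not the poster's): KL divergence compares
p(x)/q(x) pointwise, so both densities must be evaluated on one common grid.
set.seed(1)
x <- rnorm(200); y <- rnorm(200, mean = 1)
lo <- min(x, y); hi <- max(x, y)
px <- density(x, from = lo, to = hi, n = 512)$y   # same 512 grid points
py <- density(y, from = lo, to = hi, n = 512)$y   # for both densities
dx <- (hi - lo) / 511                             # grid spacing
eps <- 1e-10                                      # guard against log(0)
sum(px * log((px + eps) / (py + eps))) * dx       # approximate KL(p || q)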
2010 Aug 04
0
Kullback–Leibler divergence question (flexmix::KLdiv) Urgent!
Hi all,
x <- cbind(rnorm(500),rnorm(500))
KLdiv(x, eps=1e-4)
KLdiv(x, eps=1e-5)
KLdiv(x, eps=1e-6)
KLdiv(x, eps=1e-7)
KLdiv(x, eps=1e-8)
KLdiv(x, eps=1e-9)
KLdiv(x, eps=1e-10)
...
KLdiv(x, eps=1e-100)
...
KLdiv(x, eps=1e-1000)
When calling flexmix::KLdiv with the code above, I get results that keep
increasing as I make the accuracy parameter 'eps' smaller, until
finally reaching
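One plausible reading (an assumption about KLdiv's internals, not verified
against the flexmix source): 'eps' acts as a floor on the estimated
densities, so wherever one density is essentially zero while the other has
mass, the log-ratio term grows as the floor shrinks, and the estimate
diverges instead of converging. A toy demonstration of that mechanism:
kl_floor <- function(p, q, eps) {
  p <- pmax(p, eps); q <- pmax(q, eps)   # floor the densities at eps
  sum(p * log(p / q))
}
p <- c(0.50, 0.25, 0.25)
q <- c(0.50, 0.50, 0.00)                 # q is zero where p has mass
sapply(c(1e-4, 1e-8, 1e-12), function(e) kl_floor(p, q, e))  # keeps growing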
2003 Mar 05
8
How to draw several plots in one figure?
Hey,
I want to draw several plots sequentially, but need them displayed in one
figure.
How can I achieve this?
Thanks.
Fred
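The usual base-graphics answer is par(mfrow), which splits the device into a
grid that successive plots fill in order; layout() offers finer control. A
minimal example:
par(mfrow = c(2, 2))                      # 2 x 2 grid, filled row by row
plot(rnorm(50), main = "plot 1")
hist(rnorm(50), main = "plot 2")
boxplot(rnorm(50), main = "plot 3")
plot(density(rnorm(50)), main = "plot 4")
par(mfrow = c(1, 1))                      # restore the single-plot default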
2010 Jul 09
1
KLdiv produces NA. Why?
I am trying to calculate a Kullback-Leibler divergence from two
vectors of integers but get NA as a result when computing
the measure. Why?
x <- cbind(stuff$X, morestuff$X)
x[1:5,]
[,1] [,2]
[1,] 293 938
[2,] 293 942
[3,] 297 949
[4,] 290 956
[5,] 294 959
KLdiv(x)
[,1] [,2]
[1,] 0 NA
[2,] NA...
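One plausible cause (an assumption about the method, not verified against
the flexmix source): the matrix method of KLdiv treats each column as
density values, so raw integer counts can produce non-finite terms.
Rescaling each column to sum to one before the call is worth trying:
library(flexmix)
# x as in the post: cbind(stuff$X, morestuff$X)
x_norm <- sweep(x, 2, colSums(x), "/")   # make each column sum to 1
KLdiv(x_norm)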
2010 Jul 15
1
Repeated analysis over groups / Splitting by group variable
I am performing some analysis over a large data frame and would like
to conduct repeated analyses over grouped subsets. How can I do
that?
Here is some example code for clarification:
require("flexmix") # for Kullback-Leibler divergence
n <- 23
groups <- c(1,2,3)
mydata <- data.frame(
  sequence = 1:n,
  data1    = rnorm(n),
  data2    = rnorm(n),
  group    = sample(groups, n, replace = TRUE)
)
# Part 1: full stats (works fine)
dataOnly <- cbind(mydata$data1, mydata$data2, mydata$group)
KLdiv(dataOnly)
#
# Part 2:...
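A common base-R pattern for the grouped part (a sketch, assuming the goal is
one KLdiv result per group):
# Split the rows by group, then run the same analysis on each subset.
byGroup <- split(mydata[, c("data1", "data2")], mydata$group)
lapply(byGroup, function(d) KLdiv(as.matrix(d)))
# by() expresses the same thing in one call:
by(mydata[, c("data1", "data2")], mydata$group,
   function(d) KLdiv(as.matrix(d)))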
2005 May 06
1
distance between distributions
...is more of a general stat question. I am looking for an easily
computable measure of the distance between two empirical distributions.
Say I have two samples x and y drawn from X and Y. I want to compute a
statistic rho(x,y) which is zero if X = Y and grows as X and Y become
less similar.
Kullback-Leibler distance is the most "official" choice; however, it
needs an estimate of the density. Estimating the density requires
one to choose a family of distributions to fit or to use some
sort of non-parametric estimation. I have no intuition whether the
resulting KL distance will b...
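A density-free alternative (a suggestion, not from the original message):
the two-sample Kolmogorov-Smirnov statistic compares the empirical CDFs
directly, so no density estimate is needed.
x <- rnorm(100); y <- rnorm(100, mean = 0.5)
ks <- ks.test(x, y)       # two-sample KS test
ks$statistic              # sup-distance between the empirical CDFs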
2008 Jan 10
1
Entropy/KL-Distance question
Dear R-Users,
I have the CDF of a discrete probability distribution. I now observe a
change in this CDF at one point. I would like to find a new CDF that
has the shortest Kullback-Leibler distance to the original CDF and
respects my new observation. Is there an existing package in R which
will let me do this?
Google searches based on entropy revealed nothing.
Kind regards,
Tolga
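In the special case where the observation pins down a single cell of the
underlying PMF, the KL-minimizing answer has a closed form and needs no
package: hold that cell at its observed value and rescale the remaining
cells proportionally (a sketch under that assumption; names are illustrative):
closest_pmf <- function(q, k, p_k_new) {
  # Minimize KL(p || q) subject to p[k] = p_k_new and sum(p) = 1:
  # the unconstrained cells stay proportional to q.
  p <- q * (1 - p_k_new) / (1 - q[k])
  p[k] <- p_k_new
  p
}
q <- c(0.1, 0.2, 0.3, 0.4)               # original PMF
closest_pmf(q, k = 2, p_k_new = 0.35)    # respects the new observation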
2008 Sep 29
2
density estimate
Hi,
I have a vector of random variables and I'm estimating the density using
the "bkde" function in the KernSmooth package. The output contains two vectors
(x and y), and the R documentation calls y the density estimates, but my
y-values are not exact density estimates (since these are numbers larger
than 1)! What is y here? Is it possible to get the true estimated density at
each
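A density can legitimately exceed 1 on a narrow support; only its integral
has to equal 1. A quick check on the bkde output:
library(KernSmooth)
x <- rnorm(500, sd = 0.1)       # tightly concentrated data
est <- bkde(x)
max(est$y)                      # peak value can be well above 1
sum(est$y) * diff(est$x)[1]     # numerical integral; approximately 1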
2007 Aug 03
3
question about logistic models (AIC)
An embedded text with an unspecified character set was
scrubbed from the message...
Name: not available
URL: https://stat.ethz.ch/pipermail/r-help/attachments/20070803/79b6292b/attachment.pl
2008 Oct 13
4
Fw: Logistic regression - Interpreting (SENS) and (SPEC)
Dear Mr Peter Dalgaard and Mr Dieter Menne,
I sincerely thank you for helping me out with my problem. The thing is that I have already calculated SENS = Gg / (Gg + Bg) = 89.97%
and SPEC = Bb / (Bb + Gb) = 74.38%.
Now I have values of SENS and SPEC, which are absolute in nature. My question was how to interpret these absolute values. How do these values help me to find out whether my model is
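For reference, the two quantities computed from a 2x2 confusion matrix,
using the poster's Gg/Bg/Bb/Gb notation (the counts below are made up to
reproduce the quoted percentages): the values are conditional proportions,
e.g. SENS is the fraction of truly "good" cases the model classifies as good.
Gg <- 350; Bg <- 39     # goods classified as good / as bad (illustrative)
Bb <- 90;  Gb <- 31     # bads classified as bad / as good (illustrative)
c(sensitivity = Gg / (Gg + Bg),   # 0.8997: goods caught by the model
  specificity = Bb / (Bb + Gb))   # 0.7438: bads caught by the model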
2003 Mar 02
0
gss_0.8-2
..."bivariate" smooth functions of time and
covariates through penalized full likelihood. It only takes static
covariates but accommodates interactions between time and covariates,
going beyond the proportional hazard models.
Utilities are provided for the calculation of a certain
Kullback-Leibler projection of cross-validated fits to "reduced model"
spaces, for the "testing" of model terms. Projection code is provided
for ssanova1, gssanova1, ssden, and sshzd fits.
Further details are to be found in the documentation and the examples
therein. As always, feature sugges...
2007 Jan 25
0
distribution overlap - how to quantify?
\delta(p_{1},p_{2}) = \min\left\{ \int_{\chi} p_{1}(x)\log\frac{p_{1}(x)}{p_{2}(x)}\,dx, \int_{\chi} p_{2}(x)\log\frac{p_{2}(x)}{p_{1}(x)}\,dx \right\}
The smaller the delta, the more similar the distributions (0 when
identical). I implemented this in R using an adaptation of the
Kullback-Leibler divergence. The function works, and I get the expected
results.
The question is how to interpret the results. Obviously a delta of 0.5
reflects more similarity than a delta of 2.5. But how much more? Is
there some kind of statistical test for such an index (other than a
simulation-based evaluation)...
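A direct implementation of that delta (a sketch; both densities are
estimated on one common grid, consistent with the formula above):
min_kl <- function(x, y, n = 512, eps = 1e-10) {
  lo <- min(x, y); hi <- max(x, y)
  p1 <- density(x, from = lo, to = hi, n = n)$y + eps
  p2 <- density(y, from = lo, to = hi, n = n)$y + eps
  dx <- (hi - lo) / (n - 1)
  min(sum(p1 * log(p1 / p2)),        # KL(p1 || p2)
      sum(p2 * log(p2 / p1))) * dx   # KL(p2 || p1)
}
min_kl(rnorm(500), rnorm(500, mean = 1))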
2008 Oct 04
0
difference between sm.density() and kde2d()
Dear R users,
I used the sm.density function in the sm package and kde2d() in the MASS package
to estimate a bivariate density. Then I calculated the Kullback-Leibler
divergence measure between a distribution and each of the estimated
densities, but the answers are different. Is there any difference between
the kde2d and sm.density estimates? If there is a difference, then which is
the best estimate?
Thank you,
lavan
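The two functions use different bandwidth conventions, so with default
settings their estimates (and any KL measure computed from them) will
differ. A rough sketch of matching them up, assuming kde2d's h is about four
times sm's h (my reading of the two help pages, worth verifying):
library(sm); library(MASS)
set.seed(1)
x <- rnorm(200); y <- rnorm(200)
h  <- c(hnorm(x), hnorm(y))        # sm's normal-optimal bandwidths
f1 <- sm.density(cbind(x, y), h = h, display = "none", ngrid = 50)
f2 <- kde2d(x, y, h = 4 * h, n = 50,        # assumed 4x scale difference
            lims = c(range(f1$eval.points[, 1]),
                     range(f1$eval.points[, 2])))
max(abs(f1$estimate - f2$z))       # small once grids and bandwidths agree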
2009 Jul 03
1
is AIC always 100% in evaluating a model?
Hello,
I'd like to think it's clear when an independent variable can be ruled
out, generally speaking. On the other hand, with AIC in R via bbmle, if one
finds a better AIC value for a model without a given independent variable
than for the same model with it, can we say that the independent variable
is not likely to be significant (in the ordinary sense!)?
That is, having made a lot of
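A concrete version of the comparison being described, using plain glm rather
than bbmle for brevity (AIC itself works the same way):
set.seed(1)
n  <- 100
x1 <- rnorm(n); x2 <- rnorm(n)     # x2 is pure noise
y  <- rbinom(n, 1, plogis(0.8 * x1))
m_with    <- glm(y ~ x1 + x2, family = binomial)
m_without <- glm(y ~ x1,      family = binomial)
AIC(m_with, m_without)             # dropping the noise variable usually wins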
2006 Sep 28
0
AIC in R
Dear R users,
According to Brockwell & Davis (1991, Section 9.3, p. 304), the penalty term
for computing the AIC criterion is "p+q+1" in the context of a zero-mean
ARMA(p,q) time series model. They arrived at this criterion (with this
particular penalty term) by estimating the Kullback-Leibler discrepancy index.
In practice, the user usually chooses the model whose estimated index is
minimum. Consequently, it seems that the theory and the interpretation are
only available in the case of a zero-mean ARMA model, at least in the time
series context.
Concerning R, it seems that the penalty...
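For what it's worth, R's arima() seems to count sigma^2 as one extra
parameter, which reproduces the p+q+1 penalty for a zero-mean ARMA(p,q);
this is easy to check numerically (my interpretation, sketched below):
set.seed(1)
x   <- arima.sim(list(ar = 0.5, ma = 0.3), n = 500)
fit <- arima(x, order = c(1, 0, 1), include.mean = FALSE)
p <- 1; q <- 1
AIC(fit)                                            # as reported by R
-2 * as.numeric(logLik(fit)) + 2 * (p + q + 1)      # Brockwell & Davis form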
2009 Apr 29
12
A statistics question (marginally related to R)
Hello, how are you?
I have a question about
2003 Jun 25
3
logLik.lm()
Hello,
I'm trying to use AIC to choose between 2 models with
positive, continuous response variables and different error
distributions (specifically a Gamma GLM with log link and a
normal linear model for log(y)). I understand that in some
cases it may not be possible (or necessary) to discriminate
between these two distributions. However, for the normal
linear model I noticed a discrepancy
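The discrepancy is likely the Jacobian of the log transform: logLik for
lm(log(y) ~ ...) is on the log(y) scale, so its AIC is not directly
comparable with the Gamma GLM's AIC, which is on the y scale. Adding
2*sum(log(y)) to the log-scale AIC puts both on the same scale (a standard
adjustment, sketched here):
set.seed(1)
y <- rgamma(200, shape = 3, rate = 0.5)
x <- rnorm(200)
m_gamma <- glm(y ~ x, family = Gamma(link = "log"))
m_lnorm <- lm(log(y) ~ x)
AIC(m_gamma)                       # already on the scale of y
AIC(m_lnorm) + 2 * sum(log(y))     # Jacobian-adjusted, now comparable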
2010 Jul 18
6
CRAN (and crantastic) updates this week
CRAN (and crantastic) updates this week
New packages
------------
* allan (1.0)
Alan Lee
http://crantastic.org/packages/allan
Automates Large Linear Analysis Model Fitting
* andrews (1.0)
Jaroslav Myslivec
http://crantastic.org/packages/andrews
Andrews curves for visualization of multidimensional data
* anesrake (0.3)
Josh Pasek
http://crantastic.org/packages/anesrake
This