Displaying 20 results from an estimated 30000 matches similar to: "Jackknife for a 2-sample dispersion test"
2013 Mar 15
0
dispersion indicator for clustered data
Hi,
I have a dataset with clustered data (observations within groups) and would like to make some descriptive plots.
Now, I am a little bit lost on how to present the dispersion of the data (what kind of residuals to plot).
I could compute the standard error of the mean (SEM) ignoring the clustering (very low values and misleading) or I could first aggregate the data by calculating the mean for
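A minimal sketch comparing those two options side by side (dat, y and group are made-up names standing in for the real clustered data):

set.seed(1)
dat <- data.frame(group = rep(letters[1:6], each = 10),
                  y     = rnorm(60))

## naive SEM, ignoring the clustering
sem.naive <- sd(dat$y) / sqrt(nrow(dat))

## aggregate first: cluster-level means, then the SEM across the group means
grp.means   <- tapply(dat$y, dat$group, mean)
sem.cluster <- sd(grp.means) / sqrt(length(grp.means))

c(naive = sem.naive, clustered = sem.cluster)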
2006 Apr 11
4
Bootstrap and Jackknife Bias using Survey Package
Dear R users,
I'm a Master's student in Statistics and Data Analysis at the New University of Lisbon, and I'm now writing my dissertation on variance estimation. So I'm using the survey package to compute the principal estimators and their variances.
My data are from the Income and Expenditure Survey, a stratified multi-stage survey carried out by the National Statistics Institute of Mozambique. My domain of
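A hedged sketch of jackknife variance estimation with the survey package, using its bundled api data as a stand-in for the real Income and Expenditure Survey design:

library(survey)
data(api)

## one-stage cluster design from the api example data
dclus1 <- svydesign(id = ~dnum, weights = ~pw, fpc = ~fpc, data = apiclus1)

## convert to a replicate-weight design; "JK1" is the delete-one-PSU jackknife
rclus1 <- as.svrepdesign(dclus1, type = "JK1")

## estimate and its jackknife standard error
svymean(~api00, rclus1)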
2008 Dec 18
1
using jackknife in linear models
Hi R-experts,
I want to use the jackknife function from the bootstrap package on a
linear model.
I can't figure out how to do that. The manual says the following:
# To jackknife functions of more complex data structures,
# write theta so that its argument x
# is the set of observation numbers
# and simply pass as data to jackknife the vector 1,2,..n.
# For example, to jackknife
#
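A minimal sketch of what that help-page comment means in practice, applied to a linear model (mydata, y and x1 are invented names):

library(bootstrap)

set.seed(1)
mydata <- data.frame(x1 = rnorm(30))
mydata$y <- 1 + 2 * mydata$x1 + rnorm(30)
n <- nrow(mydata)

## theta takes the observation numbers plus the full data set,
## and returns the statistic of interest (here the slope of x1)
theta <- function(idx, xdata) coef(lm(y ~ x1, data = xdata[idx, ]))[["x1"]]

jk <- jackknife(1:n, theta, xdata = mydata)
jk$jack.se    # jackknife standard error of the slope
jk$jack.bias  # jackknife bias estimate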
2005 Jul 14
0
Pearson dispersion statistic
Thank you for your reply.
I am aware of the good reasons not to use the deviance estimate in
binomial, Poisson, and gamma families.
However, for the inverse Gaussian, the choice seems to me less clear
cut. So I just wanted to compare two different options.
I have used the dispersion parameter to compute the standardized
deviance residuals:
summary(model.gamma)$deviance.resid
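For comparison, a sketch of the two dispersion estimates and the resulting standardized deviance residuals; the toy Gamma fit below merely stands in for the poster's model.gamma (the same calculation applies to an inverse-Gaussian fit):

set.seed(1)
x <- runif(100, 1, 5)
y <- rgamma(100, shape = 5, rate = 5 / exp(1 + 0.3 * x))
model.gamma <- glm(y ~ x, family = Gamma(link = "log"))

df.r <- df.residual(model.gamma)
phi.pearson  <- sum(residuals(model.gamma, type = "pearson")^2) / df.r  # what summary.glm reports
phi.deviance <- deviance(model.gamma) / df.r

## standardized deviance residuals under the chosen dispersion estimate
rd.std <- residuals(model.gamma, type = "deviance") / sqrt(phi.pearson)
c(pearson = phi.pearson, deviance = phi.deviance)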
2010 Nov 25
2
delete-d jackknife
Hi dear all,
Can anyone help me with the delete-d jackknife?
My usual (delete-1) jackknife code for my data is:
n <- nrow(data)
y <- data$y
z <- data$z

## full-sample ratio estimate
theta.hat <- mean(y) / mean(z)
print(theta.hat)

## delete-1 jackknife: recompute the ratio leaving one observation out at a time
theta.jack <- numeric(n)
for (i in 1:n)
  theta.jack[i] <- mean(y[-i]) / mean(z[-i])

## jackknife bias estimate
bias <- (n - 1) * (mean(theta.jack) - theta.hat)
print(bias)
but how can I apply the delete-d jackknife?
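A hedged sketch of one way to do it, continuing from the n, y, z and theta.hat defined above: delete d observations at random in each of B replications rather than enumerating all choose(n, d) subsets (d and B are arbitrary choices here):

d <- 5
B <- 1000
theta.jack.d <- numeric(B)
for (b in 1:B) {
  drop <- sample(n, d)                        # delete d observations at random
  theta.jack.d[b] <- mean(y[-drop]) / mean(z[-drop])
}

## delete-d jackknife variance estimate; the (n - d) / d scaling reduces to
## the familiar (n - 1) factor when d = 1 (cf. Shao & Tu, 1995)
var.d <- ((n - d) / d) * mean((theta.jack.d - mean(theta.jack.d))^2)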
2006 Jun 28
0
Fwd: add1() and anova() with glm with dispersion
> Hello,
>
> I have a question about a discrepancy between the F statistics
> reported by anova() and add1() when adding an additional term to
> form nested models.
>
> I found an old posting related to anova() and
> drop1() regarding a glm with a dispersion parameter.
>
> The posting is very old (May 2000, R 1.1.0).
> The old posting is located here.
>
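A hedged sketch of the kind of comparison being described: the same added term tested with anova() on the nested fits and with add1(), here on an invented quasipoisson example (the scale argument of add1() controls which dispersion estimate is used, which is one source of discrepancies):

set.seed(1)
dat <- data.frame(x1 = rnorm(50), x2 = rnorm(50))
dat$y <- rpois(50, exp(0.5 + 0.4 * dat$x1))

fit0 <- glm(y ~ x1,      family = quasipoisson, data = dat)
fit1 <- glm(y ~ x1 + x2, family = quasipoisson, data = dat)

## F test from comparing the nested fits directly
anova(fit0, fit1, test = "F")

## F test for adding x2, as reported by add1()
add1(fit0, scope = ~ . + x2, test = "F")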
2007 Mar 27
1
Jackknife estimates of predict.lda success rate
Dear all
I have used the lda and predict functions to classify a set of objects
of unknown origin. I would like to use a jackknife reclassification to
assess the degree to which the outcomes deviate from that expected by
chance. However, I can't find any function that allows me to do this.
Any suggestions of how to generate the jackknife reclassification to
assess classification accuracy?
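One option is MASS::lda's built-in leave-one-out (jackknife) cross-validation via CV = TRUE; a minimal sketch with iris standing in for the real data:

library(MASS)

fit.cv <- lda(Species ~ ., data = iris, CV = TRUE)

## leave-one-out confusion matrix and success rate
tab <- table(observed = iris$Species, predicted = fit.cv$class)
tab
sum(diag(tab)) / sum(tab)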
2010 Nov 14
2
jackknife-after-bootstrap
Hi dear all,
Can someone help me with detecting outliers using the jackknife-after-bootstrap
algorithm?
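A hedged sketch of one common starting point: the jackknife-after-bootstrap diagnostic plot from the boot package, with a toy statistic and data set (observations far from the rest on the influence axis are candidate outliers):

library(boot)
set.seed(1)

dat   <- data.frame(y = rexp(40, 1), z = rexp(40, 2))
ratio <- function(d, idx) mean(d$y[idx]) / mean(d$z[idx])
b     <- boot(dat, ratio, R = 999)

## jackknife-after-bootstrap plot for the first (here only) statistic
jack.after.boot(b, index = 1)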
2004 Mar 02
0
Jackknife after bootstrap influence values in boot package?
Is there a routine in the boot package to get the jackknife-after-
bootstrap influence values? That is, the influence values of
a jackknife of the bootstrap estimates?
I can see how one would go about it from the jack.after.boot code, but that
routine only makes pretty pictures.
It wouldn't be hard to write, but I find it hard to believe this
isn't part of the package already.
Thanks
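The jackknife-after-bootstrap influence values can be recovered directly from the bootstrap output: for each observation i, compare the mean of the replicates whose resamples omit i with the overall mean of those means. A hedged sketch with a toy statistic and data set:

library(boot)
set.seed(1)

dat   <- data.frame(y = rexp(40, 1), z = rexp(40, 2))
ratio <- function(d, idx) mean(d$y[idx]) / mean(d$z[idx])
b     <- boot(dat, ratio, R = 999)

n    <- nrow(dat)
freq <- boot.array(b)                  # R x n matrix of resampling frequencies

## mean of t* over the bootstrap samples in which observation i never appears
t.omit <- sapply(1:n, function(i) mean(b$t[freq[, i] == 0, 1]))

## jackknife-after-bootstrap influence values, one per observation
J <- (n - 1) * (mean(t.omit) - t.omit)
head(J)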
2007 Dec 31
0
Optimize jackknife code
Hi,
I have the following jackknife code, which is much slower than my colleague's C code. Yet I like R very much and wonder how R experts would optimize this.
I think the for (i in 1:N_B) part is the problem, because Rprof() shows that sum() is called very often, but I have no idea how to optimize it.
#O <- read.table("foo.dat")$V1
O <- runif(100000)
k <- 100   # size of block to delete
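A hedged sketch of how the leave-one-block-out loop can be vectorized: compute the grand sum once and subtract per-block sums, instead of calling sum() on the remaining data inside the loop (this assumes the statistic is a mean and that length(O) is a multiple of k):

O   <- runif(100000)
k   <- 100                        # size of block to delete
N_B <- length(O) / k              # number of blocks

block.sums <- colSums(matrix(O, nrow = k))            # one sum per block
theta.jack <- (sum(O) - block.sums) / (length(O) - k) # leave-one-block-out means

## usual block-jackknife summaries
theta.hat <- mean(O)
bias <- (N_B - 1) * (mean(theta.jack) - theta.hat)
se   <- sqrt((N_B - 1) / N_B * sum((theta.jack - mean(theta.jack))^2))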
2005 Nov 08
1
Poisson/negbin followed by jackknife
Folks,
Thanks for the help with the hier.part analysis. All the problems
stemmed from an import problem which was solved with file.choose().
Now that I have the variables that I'd like to use I need to run some
GLM models. I think I have that part under control but I'd like to use
a jackknife approach to model validation (I was using a hold out sample
but this seems to have fallen out
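A hedged sketch of leave-one-out (jackknife-style) validation for a Poisson GLM; dat, y, x1 and x2 are invented stand-ins for the real variables:

set.seed(1)
dat <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
dat$y <- rpois(100, exp(0.2 + 0.5 * dat$x1 - 0.3 * dat$x2))

n        <- nrow(dat)
pred.loo <- numeric(n)
for (i in 1:n) {
  fit.i       <- glm(y ~ x1 + x2, family = poisson, data = dat[-i, ])
  pred.loo[i] <- predict(fit.i, newdata = dat[i, ], type = "response")
}

## out-of-sample error summaries from the leave-one-out predictions
mean((dat$y - pred.loo)^2)      # mean squared prediction error
cor(dat$y, pred.loo)            # simple calibration check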
2005 May 11
1
Mixed Effect Model - Jackknife error estimate
Greetings,
I've fit the following mixed effects model using the nlme package:
hd.impute.lme <- lme(I(log(HEIGHT_M - 1.37)) ~ SPECIES + SPECIES:I(1/(DBH_CM + 2.54)),
                     random = ~ I(1/(DBH_CM + 2.54)) | PLOTID,
                     data = trees, na.action = na.exclude)
I would now like to extract a jackknife estimate of model error. I tried the following code; however, the estimate produced seems too
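A hedged sketch of a delete-one-group jackknife for the fixed effects of an lme fit, illustrated with nlme's built-in Orthodont data rather than the trees data above:

library(nlme)

groups <- levels(Orthodont$Subject)
g      <- length(groups)

coef.jack <- matrix(NA, g, 2, dimnames = list(groups, c("(Intercept)", "age")))
for (j in seq_along(groups)) {
  fit.j <- lme(distance ~ age, random = ~ 1 | Subject,
               data = subset(Orthodont, Subject != groups[j]))
  coef.jack[j, ] <- fixef(fit.j)
}

## delete-one-group jackknife standard errors of the fixed effects
theta.bar <- colMeans(coef.jack)
sqrt((g - 1) / g * colSums(sweep(coef.jack, 2, theta.bar)^2))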
2003 Apr 16
2
Jackknife and rpart
Hi,
First, thanks to those who helped me see my gross misunderstanding of
randomForest. I worked through a bagging tutorial and now understand the
"many tree" approach. However, it is not what I want to do! My bagged
errors are acceptable but I need to use the actual tree and need a single
tree application.
I am using rpart for a classification tree but am interested in a more
unbiased
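A hedged sketch of leave-one-out (jackknife) reclassification for a single rpart tree, with the package's kyphosis data standing in for the real data:

library(rpart)

n        <- nrow(kyphosis)
pred.loo <- character(n)
for (i in 1:n) {
  fit.i       <- rpart(Kyphosis ~ Age + Number + Start,
                       data = kyphosis[-i, ], method = "class")
  pred.loo[i] <- as.character(predict(fit.i, newdata = kyphosis[i, ],
                                      type = "class"))
}
pred.loo <- factor(pred.loo, levels = levels(kyphosis$Kyphosis))

## leave-one-out confusion matrix and misclassification rate
tab <- table(observed = kyphosis$Kyphosis, predicted = pred.loo)
tab
1 - sum(diag(tab)) / sum(tab)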
2012 Nov 14
2
Jackknife in Logistic Regression
Dear R friends
I'm interested in applying a jackknife analysis in order to quantify the
uncertainty of the coefficients estimated by the logistic regression. I'm
using glm(family = 'binomial') because my response variable is in 0-1
format.
My dataset has 76,000 obs, and I'm using 7 independent variables plus an
offset. The idea is to split the data into, let's say, 5 random subsets
and
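A hedged sketch of that delete-one-group ("grouped") jackknife for glm coefficients: split the rows into G random groups, refit leaving one group out each time, and combine. The data and variable names here are invented:

set.seed(1)
dat <- data.frame(x1 = rnorm(2000), x2 = rnorm(2000),
                  off = log(runif(2000, 1, 3)))
dat$y <- rbinom(2000, 1, plogis(-1 + 0.8 * dat$x1 - 0.5 * dat$x2 + dat$off))

G    <- 5
grp  <- sample(rep(1:G, length.out = nrow(dat)))   # random group labels
full <- glm(y ~ x1 + x2 + offset(off), family = binomial, data = dat)

coef.jack <- t(sapply(1:G, function(g)
  coef(glm(y ~ x1 + x2 + offset(off), family = binomial,
           data = dat[grp != g, ]))))

## grouped-jackknife standard errors of the coefficients
theta.bar <- colMeans(coef.jack)
se.jack   <- sqrt((G - 1) / G * colSums(sweep(coef.jack, 2, theta.bar)^2))
rbind(estimate = coef(full), jack.se = se.jack)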
2016 Jun 02
0
[RfC] Family dispersion
Hi,
I'd like to hear your opinion about the following proposal to make the
computation of dispersion in GLMs more flexible. Dispersion is used in
summary.glm; the relevant code chunk with the dispersion calculation is listed
below (from glm.R):
summary.glm <- function(object, dispersion = NULL,
                        correlation = FALSE, symbolic.cor = FALSE, ...)
{
    est.disp <- FALSE
    df.r <-
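For reference, the estimate this chunk ends up producing for families other than poisson and binomial is the Pearson chi-squared statistic divided by the residual degrees of freedom; a sketch reproducing it by hand on a toy Gamma fit:

set.seed(1)
x   <- runif(200, 1, 4)
y   <- rgamma(200, shape = 4, rate = 4 / exp(0.5 + 0.3 * x))
fit <- glm(y ~ x, family = Gamma(link = "log"))

df.r <- fit$df.residual
## working weights times squared working residuals = squared Pearson residuals
disp <- sum((fit$weights * fit$residuals^2)[fit$weights > 0]) / df.r

c(by.hand = disp, from.summary = summary(fit)$dispersion)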
2006 Oct 24
1
Variance Component/ICC Confidence Intervals via Bootstrap or Jackknife
I'm using the lme function in nlme to estimate the variance components
of a fully nested two-level model:
Y_ijk = mu + a_i + b_j(i) + e_k(j(i))
lme computes estimates of the variances for a, b, and e, call them v_a,
v_b, and v_e, and I can use the intervals function to get confidence
intervals. My understanding is that these intervals are probably not
that robust plus I need intervals on the
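A hedged sketch of a group-level (cluster) bootstrap for an intraclass correlation, illustrated with a one-level random-intercept model on nlme's Orthodont data rather than the fully nested two-level model described above:

library(nlme)
Orth <- as.data.frame(Orthodont)

icc.fun <- function(d) {
  fit <- lme(distance ~ 1, random = ~ 1 | Subject, data = d)
  v.b <- getVarCov(fit)[1, 1]      # between-subject (random intercept) variance
  v.e <- fit$sigma^2               # residual variance
  v.b / (v.b + v.e)
}

subjects <- levels(Orthodont$Subject)
B        <- 200
icc.boot <- numeric(B)
set.seed(1)
for (b in 1:B) {
  samp <- sample(subjects, replace = TRUE)
  ## relabel resampled subjects so duplicates count as distinct groups
  d.b <- do.call(rbind, lapply(seq_along(samp), function(j) {
    block <- Orth[Orth$Subject == samp[j], ]
    block$Subject <- paste0("g", j)
    block
  }))
  d.b$Subject <- factor(d.b$Subject)
  icc.boot[b] <- icc.fun(d.b)
}

quantile(icc.boot, c(0.025, 0.975))   # simple percentile interval for the ICC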
1998 Feb 04
0
[J.Lindsey: Re: glm(.) / summary.glm(.); [over]dispersion and returning AIC..]
Jim, I am relating your message to R-devel.
This should be discussed with a broader audience;
I am not an expert on GLMs, but I know you are,
and others in this group are as well...
R-develers, please CC to Jim Lindsey (on this topic), since he hasn't
been part of the R-devel list for a while..
BTW: I will be gone
2010 Nov 29
2
accuracy of GLM dispersion parameters
I'm confused as to the trustworthiness of the dispersion parameters
reported by glm. Any help or advice would be greatly appreciated.
Context: I'm interested in using a fitted GLM to make some predictions.
Along with the predicted values, I'd also like to have estimates of
variance for each of those predictions. For a Gamma-family model, I believe
this can be done as Var[y] =
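For a Gamma GLM the variance function is V(mu) = mu^2, so one estimate of the response variance at a new point is dispersion * mu.hat^2; a hedged sketch on a toy fit, with predict(..., se.fit = TRUE) reporting the separate uncertainty in mu.hat itself:

set.seed(1)
x   <- runif(200, 1, 4)
y   <- rgamma(200, shape = 3, rate = 3 / exp(0.5 + 0.4 * x))
fit <- glm(y ~ x, family = Gamma(link = "log"))

newd <- data.frame(x = c(1.5, 2.5, 3.5))
pr   <- predict(fit, newdata = newd, type = "response", se.fit = TRUE)

phi   <- summary(fit)$dispersion     # Pearson-based dispersion estimate
var.y <- phi * pr$fit^2              # estimated variance of a new observation
cbind(mu.hat = pr$fit, se.mu = pr$se.fit, var.y = var.y)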
2012 Apr 26
1
variable dispersion in glm models
Hello,
I am currently working with the betareg package, which allows the fitting of a variable dispersion beta regression model (Simas et al. 2010, Computational Statistics & Data Analysis). I was wondering whether there is any package in R that allows me to fit variable dispersion parameters in the standard logistic regression model, that is, to make the dispersion parameter contingent upon
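For reference, a minimal sketch of the variable-dispersion beta regression being described: in betareg the part of the formula after "|" models the precision (dispersion) parameter (GasolineYield is a data set shipped with betareg):

library(betareg)
data("GasolineYield", package = "betareg")

## mean model: yield ~ batch + temp;  precision model: ~ temp
fit.vd <- betareg(yield ~ batch + temp | temp, data = GasolineYield)
summary(fit.vd)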
2007 May 25
1
Estimation of Dispersion parameter in GLM for Gamma Dist.
Hi All,
could someone shed some light on the difference between the
estimated dispersion parameter that is supplied with the glm function
and the one that the gamma.dispersion() function in the MASS
library gives? And is there a consensus on which estimated value to
use?
It seems that the dispersion parameter that comes with the summary
command for a GLM with a Gamma dist. is
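A hedged sketch comparing the estimates on a toy Gamma fit: summary.glm reports a moment (Pearson chi-squared) estimate of the dispersion, while MASS::gamma.dispersion returns the maximum-likelihood estimate (the reciprocal of the ML shape estimate):

library(MASS)
set.seed(1)
x   <- runif(200, 1, 4)
y   <- rgamma(200, shape = 4, rate = 4 / exp(0.5 + 0.3 * x))
fit <- glm(y ~ x, family = Gamma(link = "log"))

c(pearson  = summary(fit)$dispersion,
  mle      = gamma.dispersion(fit),
  deviance = fit$deviance / fit$df.residual)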