I have some questions about the use of weights in a binomial glm, as I am not getting the results I would expect. In my case the weights I have can be seen as 'replicate weights': one respondent i in my dataset corresponds to w[i] persons in the population. From the documentation of the glm method, I understand that the weights can indeed be used for this: "For a binomial GLM prior weights are used to give the number of trials when the response is the proportion of successes." From "Modern Applied Statistics with S-PLUS, 3rd ed." I understand the same.

However, I am getting some strange results. I generated an example.

Generate some data which is similar to my dataset:

> Z <- rbinom(1000, 1, 0.1)
> W <- round(rnorm(1000, 100, 40))
> W[W < 1] <- 1

The probability of success can be estimated either directly:

> sum(Z*W)/sum(W)
[1] 0.09642109

or using glm:

> model <- glm(Z ~ 1, weights=W, family=binomial())
Warning message:
In glm.fit(x = X, y = Y, weights = weights, start = start, etastart = etastart,  :
  fitted probabilities numerically 0 or 1 occurred
> predict(model, type="response")[1]
            1
2.220446e-16

These two results are obviously not the same. The strange thing is that when I scale the weights so that the total equals one, the probability is correctly estimated:

> model <- glm(Z ~ 1, weights=W/sum(W), family=binomial())
Warning message:
In eval(expr, envir, enclos) : non-integer #successes in a binomial glm!
> predict(model, type="response")[1]
         1
0.09642109

However, scaling of the weights should, as far as I am aware, not have an effect on the estimated parameters. I also tried some other scalings; for example, scaling the weights by 20 also gives me the correct result:

> model <- glm(Z ~ 1, weights=W/20, family=binomial())
Warning message:
In eval(expr, envir, enclos) : non-integer #successes in a binomial glm!
> predict(model, type="response")[1]
         1
0.09642109

Am I misinterpreting the weights? Could this be a numerical problem?

Regards,

Jan
Jan,

It looks like you did not understand the line "For a binomial GLM prior weights are used to give the number of trials when the response is the proportion of successes." The weights must be a number of trials (hence an integer), not a proportion of a population. Here is an example that clarifies the use of weights.

library(boot)
library(reshape)
dataset <- data.frame(Person = c(rep("A", 20), rep("B", 10)),
                      Success = c(rbinom(20, 1, 0.25), rbinom(10, 1, 0.75)))
Aggregated <- cast(Person ~ ., data = dataset, value = "Success",
                   fun = list(mean, length))
m0 <- glm(Success ~ 1, data = dataset, family = binomial)
m1 <- glm(mean ~ 1, data = Aggregated, family = binomial, weights = length)
inv.logit(coef(m0))
inv.logit(coef(m1))

Have a look at the survey package if you want to analyse stratified data.

Thierry

ir. Thierry Onkelinx
Instituut voor natuur- en bosonderzoek / Research Institute for Nature and Forest
team Biometrie & Kwaliteitszorg / team Biometrics & Quality Assurance
Gaverstraat 4, 9500 Geraardsbergen, Belgium
Thierry.Onkelinx at inbo.be
www.inbo.be
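A minimal sketch of how this maps onto the replicate-weight setting in the original question (not from the thread itself; the seed and object names are illustrative assumptions). Each respondent's w[i] population count is a number of trials that all share the same 0/1 outcome, so the data collapse to one record per covariate pattern -- here just the intercept -- given either as counts of successes and failures or as a proportion with the number of trials as the prior weight:

set.seed(1)                      # assumed seed, for reproducibility
Z <- rbinom(1000, 1, 0.1)        # simulated 0/1 outcomes, as in the question
W <- round(rnorm(1000, 100, 40)) # replicate (frequency) weights
W[W < 1] <- 1

succ <- sum(W[Z == 1])           # population count of successes
fail <- sum(W[Z == 0])           # population count of failures

## Response as a two-column matrix of (successes, failures) ...
m_counts <- glm(cbind(succ, fail) ~ 1, family = binomial())

## ... or as the proportion of successes with the number of trials as the
## prior weight, which is the usage the glm help page describes.
m_prop <- glm(succ / (succ + fail) ~ 1, weights = succ + fail,
              family = binomial())

## plogis() is base R's inverse logit; both fits should agree with the
## weighted proportion sum(Z * W) / sum(W).
plogis(coef(m_counts))
plogis(coef(m_prop))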
Jan,

Thierry is correct in saying that you are misusing glm(), but there is also a numerical problem.

You are misusing glm() because your model specification claims to have Binomial(n, p) observations with n (your w) in the vicinity of 100, where there is a single common p but the observed binomial proportion is either 1 or 0, never anything in between. These data are a very poor fit to a binomial model.

The correct specification, if you have what you call replicate weights and I call frequency weights, is to produce a single data record for each covariate pattern that has both the 1 and 0 observations. This can be either two columns for successes and failures, or one column of proportions and one column of weights. As your quote from MASS says, "weights are used to give the number of trials when the response is the proportion of successes." In your data the response is *not* the proportion of successes.

However, the MLE should still be equal to the weighted mean even with this misuse. The reason it is not is the starting values. R has to find some starting values for the iterative maximization of the likelihood, and for binomial data with y successes out of n it uses starting values for the fitted means of (y + 0.5)/(n + 1). Starting the iteration at the data in this way usually makes the Fisher scoring algorithm very reliable -- it is correctly scaled to the data, in some sense.

Unfortunately, if you separate out the successes and failures, some points start with values very close to 0. When I ran your code, the starting value for the point with the largest weight was 0.5/199. At iteration 2 the estimated mean ends up very small for all observations, and then the iteration diverges. However, if you provide a starting value, the fitting works, even if you start the iteration at, say, beta = 1, corresponding to a fitted mean of over 70%.

So the result is wrong in the sense that it is not the MLE, because of a failure of convergence, which happens because specifying the weights the way you did, rather than the documented way, leads to bad default starting values for the iteration. You need either to specify the data as recommended or to supply starting values.

    -thomas
Thomas Lumley
Assoc. Professor, Biostatistics
tlumley at u.washington.edu
University of Washington, Seattle
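A short sketch of the starting-value point above (again not part of the thread; the seed is an assumption). Supplying an explicit coefficient starting value to the original, mis-specified call -- for example beta = 1, as suggested in the reply -- lets Fisher scoring converge to the same weighted proportion that the direct calculation gives:

set.seed(1)                              # assumed seed, for reproducibility
Z <- rbinom(1000, 1, 0.1)
W <- round(rnorm(1000, 100, 40))
W[W < 1] <- 1

sum(Z * W) / sum(W)                      # the weighted proportion (the MLE)

## The original specification, but with an explicit starting value for the
## intercept (start = 1, i.e. a fitted mean of roughly 0.73). With a sensible
## start, the iteration no longer diverges and the fitted probability should
## match the weighted proportion above.
m_start <- glm(Z ~ 1, weights = W, family = binomial(), start = 1)
plogis(coef(m_start))                    # plogis() is the inverse logit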