Hi all,

I am looking to build a simple example of a very basic propensity score adjustment, just using the estimated propensity scores as inverse probability weights (respectively 1 - estimated score for the non-treated). As far as I understood, the MLE predictions of a logit model can be used directly as estimates of the propensity score.

I already considered the twang package and the several matching approaches, and I am basically not trying to reinvent the wheel. Often I could not understand what was going on, and why some iterative process like k.stat.max was taking so long. Anyway, I'd really like something really simple, apart from all this focus on iterative algorithms that are beyond my scope.

And here is where the problem starts. Most textbooks I considered propose to estimate a simple logit model by ML estimation. Obviously the standard approach to do that in R is glm. The Zelig package provides an alternative. My logit model is as simple as it gets: Y ~ X, where Y is a treatment vector and X is a matrix of some covariates.

I wonder right now whether glm, respectively summary(glm(...)), puts out something comparable to ML estimates that can be used as the estimated pscores, in such a way that there is one value for every observation.

Thanks for any help in advance
Bunny, lautloscrew.com <bunny <at> lautloscrew.com> writes:

> I wonder right now whether glm, respectively summary(glm(...)), puts out
> something comparable to ML estimates that can be used as the estimated
> pscores, in such a way that there is one value for every observation.

If you saved the result of the glm() function in foo, wouldn't
foo$fitted.values give you what you're looking for?

  Ben
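In code, Ben's suggestion might look like the following minimal sketch (the data frame dat, outcome y, treatment indicator treat, and covariates x1 and x2 are hypothetical names, not from the thread):

## fit the logit propensity model by ML via glm()
ps.fit <- glm(treat ~ x1 + x2, family = binomial(link = "logit"), data = dat)

## one estimated propensity score per observation
## (equivalent to predict(ps.fit, type = "response"))
ps <- ps.fit$fitted.values

## inverse probability weights: 1/ps for the treated, 1/(1 - ps) for the non-treated
w <- ifelse(dat$treat == 1, 1 / ps, 1 / (1 - ps))

## weighted difference in mean outcomes as a crude IPW effect estimate
ate.ipw <- weighted.mean(dat$y[dat$treat == 1], w[dat$treat == 1]) -
           weighted.mean(dat$y[dat$treat == 0], w[dat$treat == 0])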
Bunny, lautloscrew.com wrote:

> i am looking to build a simple example of a very basic propensity score
> adjustment, just using the estimated propensity scores as inverse
> probability weights (respectively 1 - estimated score for the non-treated).

That is a high variance procedure as compared with covariate adjustment
using the propensity score, or stratification.

Frank Harrell

--
Frank E Harrell Jr   Professor and Chair   School of Medicine
                     Department of Biostatistics   Vanderbilt University
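For comparison, here is a rough sketch of the two alternatives Frank mentions, reusing the hypothetical ps and dat from the sketch above (and assuming both treatment groups are present in every stratum):

## (1) stratification on the estimated propensity score (quintiles)
strata <- cut(ps, breaks = quantile(ps, probs = seq(0, 1, 0.2)),
              include.lowest = TRUE)
strat.diff <- tapply(seq_len(nrow(dat)), strata, function(i)
  mean(dat$y[i][dat$treat[i] == 1]) - mean(dat$y[i][dat$treat[i] == 0]))
ate.strat <- weighted.mean(strat.diff, table(strata))

## (2) covariate adjustment: include the propensity score in the outcome model
dat$ps <- ps
adj.fit <- lm(y ~ treat + ps, data = dat)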
Frank E Harrell Jr wrote:

> That is a high variance procedure as compared with covariate adjustment
> using the propensity score, or stratification.

Yes, I guess foo$fitted.values was the syntax I missed. I know this method is not optimal and that it yields high variance. For the moment I am just trying to understand the IPW weighting process; it's not about real application yet. By the way, I read that double robustness backs the method up again...

Meanwhile I considered the USPS package. Do you think this would be the right package for my basic approach?
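The double robustness remark presumably refers to augmenting the IPW estimator with an outcome regression (AIPW). A rough sketch, again using the hypothetical dat, ps, and variable names from the earlier example (this is not USPS package code):

## outcome regressions fitted separately in the treated and control groups
m1 <- lm(y ~ x1 + x2, data = dat, subset = treat == 1)
m0 <- lm(y ~ x1 + x2, data = dat, subset = treat == 0)
mu1 <- predict(m1, newdata = dat)
mu0 <- predict(m0, newdata = dat)

tr <- dat$treat; y <- dat$y

## AIPW estimate: consistent if either the propensity model
## or the outcome model is correctly specified
ate.dr <- mean(tr * y / ps - (tr - ps) / ps * mu1) -
          mean((1 - tr) * y / (1 - ps) + (tr - ps) / (1 - ps) * mu0)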
Frank E Harrell Jr wrote:

> That is a high variance procedure as compared with covariate adjustment
> using the propensity score, or stratification.

Ah, wait, what if I have a very high-dimensional X? Even with 20 binary covariates I would end up with more than a million (2^20) possible covariate patterns...
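For what it's worth, the glm fit from the earlier sketches still returns one fitted score per observation in that case, so it is n scores rather than 2^20 cells that get weighted, stratified, or matched on. A purely illustrative simulation (all names and numbers made up):

## 20 binary covariates: up to 2^20 (= 1,048,576) covariate patterns,
## but the fitted logit returns a single propensity score per observation
set.seed(1)
n <- 2000
X <- matrix(rbinom(n * 20, 1, 0.5), nrow = n)
treat <- rbinom(n, 1, plogis(0.3 * X[, 1] - 0.2 * X[, 2]))

ps.fit <- glm(treat ~ X, family = binomial)
length(ps.fit$fitted.values)   # n, not 2^20
quantile(ps.fit$fitted.values) # can still be cut into a handful of strata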
The Matching package by Jasjeet Sekhon does propensity score matching in a very user-friendly way (as you said you don't want to reinvent the wheel...). Just feed it the fitted values from a glm model (myglmmodel$fitted.values). AFAIK, you may additionally match on some covariates directly.

HTH
marc
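A sketch of how that might look, feeding the fitted values of the hypothetical glm fit ps.fit from the earlier examples into Match() (the data frame and variable names are again made up):

library(Matching)  # Jasjeet Sekhon's package; the matching function is Match()

## match on the estimated propensity score, estimating the ATT
m.out <- Match(Y = dat$y, Tr = dat$treat, X = ps.fit$fitted.values,
               estimand = "ATT")
summary(m.out)

## covariate balance before and after matching
MatchBalance(treat ~ x1 + x2, data = dat, match.out = m.out)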