Displaying 8 results from an estimated 8 matches for "brglm".
2009 Mar 31
1
Multicollinearity with brglm?
I'm running brglm with binomial logistic regression. The features that may be
related to multicollinearity are:
(1) the k IVs are all binary categorical, coded as 0 or 1;
(2) each row of the IVs contains exactly C (< k) 1's;
(3) with k IVs, there are n * k unique rows;
(4) when brglm is run, at least 1 IV is...
2013 Feb 27
1
Separation issue in binary response models - glm, brglm, logistf
...() function; however, I was getting 2 warnings
concerning glm.fit():
# 1: glm.fit: algorithm did not converge
# 2: glm.fit: fitted probabilities numerically 0 or 1 occurred
After some investigation I found out that the problem was most probably
quasi-complete separation and therefore decided to use brglm and/or logistf.
* logistf : analysis does not run
When running logistf() I get an error message saying:
# error in chol.default(x) :
# leading minor 39 is not positive definite
I looked into the logistf package manual, on the Internet, and in the theoretical
and technical paper of Heinze and Ploner, and canno...
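The fallback described in this message can be sketched as follows. This is a minimal illustration with made-up data, assuming the brglm and logistf packages are installed; both offer a glm-like interface that applies a Firth-type penalized likelihood, which keeps estimates finite under quasi-complete separation.

```r
## Hypothetical data in which x separates y almost perfectly.
library(brglm)    # install.packages("brglm")
library(logistf)  # install.packages("logistf")

set.seed(1)
d <- data.frame(x = c(rnorm(20, -2), rnorm(20, 2)),
                y = rep(0:1, each = 20))

## Plain glm() typically warns: "fitted probabilities numerically 0 or 1"
fit.glm <- glm(y ~ x, family = binomial, data = d)

## Bias-reduced (Firth-penalized) alternatives give finite estimates.
fit.br <- brglm(y ~ x, family = binomial, data = d)
fit.fl <- logistf(y ~ x, data = d)

coef(fit.br)  # finite coefficients even under separation
```

Comparing `coef(fit.glm)` with `coef(fit.br)` usually makes the difference obvious: the unpenalized coefficients are huge, while the penalized ones stay moderate.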
2009 Mar 26
1
Extreme AIC in glm(), perfect separation, svm() tuning
Dear List,
With regard to the question I previously raised, here is the result I
obtained right now, brglm() does help, but there are two situations:
1) Classifiers with extremely high AIC (over 200), no perfect separation,
coefficients converge. In this case, using brglm() does help! It stabilizes
the AIC, and the classification power is better.
Code and output: (need to install package: brglm)
ma...
2008 Dec 15
5
OT: (quasi-?) separation in a logistic GLM
Dear List,
Apologies for this off-topic post but it is R-related in the sense that
I am trying to understand what R is telling me with the data to hand.
ROC curves have recently been used to determine a dissimilarity
threshold for identifying whether two samples are from the same "type"
or not. Given the bashing that ROC curves get whenever anyone asks about
them on this list (and
2011 Oct 13
1
binomial GLM quasi separation
...ion) is not a statistical
artifact but it is something real.
As suggested in
http://r.789695.n4.nabble.com/OT-quasi-separation-in-a-logistic-GLM-td875726.html#a3850331
(the last post is mine) I tried to use the brglm procedure, which uses a penalized
maximum likelihood, but it made no difference.
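Before reaching for a penalized fit, it can help to confirm that quasi-separation is actually present. A quick, informal check (sketched here with hypothetical data) is to look at the unpenalized glm() output: under separation the coefficient estimates and their standard errors blow up, because the likelihood has no finite maximum.

```r
## Hypothetical data with near-complete separation on x.
set.seed(2)
d <- data.frame(x = c(rnorm(15, -3), rnorm(15, 3)),
                y = rep(0:1, each = 15))

fit <- suppressWarnings(glm(y ~ x, family = binomial, data = d))

summary(fit)$coefficients  # inflated estimates and SEs flag separation
max(abs(coef(fit)))        # very large values here are suspicious
```

If the estimates stay moderate and the model still fits poorly, the problem is likely something other than separation, and a penalized fit would not be expected to change the results much, consistent with what this poster observed.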
What would you do if you were in my shoes?
Thanks in advance for any help.
Simone
2012 Jul 09
0
firth's penalized likelihood bias reduction approach
...e non-fair game, none of the neutral-mood participants did so (0/20). Thus,
if drawing a 2x2 (mood x response, in the non-fair game) table, there was an
empty cell. I've learned that I can use Firth's penalized likelihood method
for bias reduction, which could be achieved using R packages "brglm" or
"logistf". However, I found the packages only deal with non-clustered data,
which is not the case for my data. I included game type as a within-subject
variable and mood as a between-subject variable, and I am interested in
their interaction. So, when involving the interaction te...
2012 Feb 29
2
puzzling results from logistic regression
Hi all,
As you can see from below, the result is strange...
I would have imagined that the bb result should be much higher and close to 1;
is there any way to improve the fit?
Any other classification methods?
Thank you!
data=data.frame(y=rep(c(0, 1), times=100), x=1:200)
aa=glm(y~x, data=data, family=binomial(link="logit"))
newdata=data.frame(x=6, y=100)
bb=predict(aa, newdata=newdata, type="response")
2010 Jul 18
6
CRAN (and crantastic) updates this week
...(1.6.0), aroma.core (1.6.0), asbio (0.3-10), ascii
(0.7), automap (1.0-7), bayesmix (0.7-1), bbmle (0.9.5.1), bcp
(2.2.0), bear (2.5.3), bibtex (0.2-0), bifactorial (1.4.4), bigmemory
(4.2.3), binGroup (1.0-5), biOps (0.2.1.1), bipartite (1.12), blighty
(3.1-0), bnlearn (2.1.1), bootruin (1.0-156), brglm (0.5-5),
cairoDevice (2.13), caret (4.43), catspec (0.95), cgh (1.0-7.1),
cluster (1.13.1), clusterSim (0.38-1), cmprskContin (1.6),
coarseDataTools (0.2), coin (1.0-12), copula (0.9-7), corrplot (0.30),
cshapes (0.2-4), ctv (0.6-0), cudaBayesreg (0.3-6), dagR (1.1.1),
data.table (1.4.1), dclone (1...