Displaying 20 results from an estimated 2000 matches similar to: "firth's penalized likelihood bias reduction approach"
2006 Jan 12
1
Firth's bias correction for log-linear models
Dear R-Help List,
I'm trying to implement Firth's (1993) bias correction for log-linear models.
Firth (1993) states that such a correction can be implemented by supplementing
the data with a function of h_i, the diagonals from the hat matrix, but doesn't
provide further details. I can see that for a saturated log-linear model, h_i=1
for all i, hence one just adds 1/2 to each count,
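A minimal one-pass sketch of that augmentation for a Poisson log-linear model (not from the original post; the data and model below are made up, and a full Firth fit would iterate, since the hat values depend on the fitted model):

counts <- c(12, 5, 7, 1, 9, 3)
A <- gl(2, 3)                    # hypothetical factors
B <- gl(3, 1, 6)
fit  <- glm(counts ~ A + B, family = poisson)
h    <- hatvalues(fit)           # diagonal of the hat matrix
fit2 <- glm(counts + h/2 ~ A + B, family = poisson)   # augmented counts
coef(fit2)                       # R warns about non-integer counts; that is expected

For the saturated model counts ~ A * B all h_i equal 1, so this reduces to adding 1/2 to each count, as noted above.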
2010 Mar 09
1
penalized maximum likelihood estimation and logistf
Hi, I have two questions and would really appreciate any help here.
First, is penalized maximum likelihood estimation (Firth-type estimation)
only suitable for a binary response (0/1 or TRUE/FALSE)? Can it be applied to
multinomial logistic regression?
If yes, what is the formula for LL and U(beta_i)? Can someone point me to
the right reference?
Second, when I used logistf on a dataset with
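A brief note on the formula question, under the assumption that "LL" means the penalized log-likelihood and U the score function: Firth (1993) maximizes
    l*(beta) = l(beta) + (1/2) log det I(beta),
where I(beta) is the Fisher information, which corresponds to the modified score
    U*(beta_j) = U(beta_j) + (1/2) trace[ I(beta)^{-1} dI(beta)/dbeta_j ].
Firth (1993) covers the general exponential-family case, not only the binary one.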
2013 Feb 27
1
Separation issue in binary response models - glm, brglm, logistf
Dear all,
I am encountering some issues with my data and need some help.
I am trying to run a glm analysis with a presence/absence variable as the
response and several explanatory variables (time, location,
presence/absence data, abundance data).
First I tried the glm() function, but I got 2 warnings from glm.fit():
# 1: glm.fit: algorithm did not converge
# 2:
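A hedged sketch (not the poster's code; variable and data-frame names are hypothetical) of the usual next step when glm() fails to converge because of (quasi-)separation, namely a Firth-penalized fit:

library(logistf)
fit <- logistf(presence ~ time + location + abundance, data = surveydata)
summary(fit)   # finite, penalized estimates even under separation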
2009 Mar 26
1
Extreme AIC in glm(), perfect separation, svm() tuning
Dear List,
With regard to the question I previously raised, here are the results I have
obtained so far. brglm() does help, but there are two situations:
1) Classifiers with extremely high AIC (over 200), no perfect separation, and
coefficients that converge. In this case, using brglm() does help! It
stabilizes the AIC, and the classification power is better.
Code and output: (need to install package:
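A hedged sketch of the brglm() call pattern, assuming that is the package meant here (the response and predictors are hypothetical):

# install.packages("brglm")   # if not already installed
library(brglm)
fit.br <- brglm(y ~ x1 + x2, family = binomial, data = train)
summary(fit.br)   # bias-reduced estimates; compare with the plain glm() fit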
2009 Mar 31
1
Multicollinearity with brglm?
I'm running brglm with binomial logistic regression. The possibly
multicollinearity-related features are:
(1) the k IVs are all binary categorical, coded as 0 or 1;
(2) each row of the IVs contains exactly C (< k) 1's;
(3) with k IVs, there are n * k unique rows;
(4) when brglm is run, at least 1 IV is reported as involving a singularity.
I've tried recoding the n
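A hedged illustration of why feature (2) by itself forces a singularity: if every row has exactly C ones across the k dummies, the dummies sum to the constant C, which is collinear with the intercept, so one coefficient must be aliased. The simulated design below is hypothetical (k = 8, C = 3):

set.seed(1)
X <- t(replicate(40, sample(c(rep(1, 3), rep(0, 5)))))   # each row has exactly three 1's
qr(cbind(Intercept = 1, X))$rank   # 8, not 9: one column is redundant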
2008 Dec 15
5
OT: (quasi-?) separation in a logistic GLM
Dear List,
Apologies for this off-topic post but it is R-related in the sense that
I am trying to understand what R is telling me with the data to hand.
ROC curves have recently been used to determine a dissimilarity
threshold for identifying whether two samples are from the same "type"
or not. Given the bashing that ROC curves get whenever anyone asks about
them on this list (and
2005 Aug 13
1
Penalized likelihood-ratio chi-squared statistic: L.R. model for Goodness of fit?
Dear R list,
From the lrm() binary logistic model we derived the G2 value, i.e. the
likelihood-ratio chi-squared statistic given as "Model L.R." in the output of
lrm().
How can this value be penalized for non-linearity (we used splines in the
lrm() call)?
lrm.iRVI <- lrm(arson ~ rcs(iRVI, 5),
                penalty = list(simple = 10, nonlinear = 100,
                               nonlinear.interaction = 4))
This didn't work.
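A hedged sketch of the same call with an explicit data argument and the components I would inspect; whether the reported Model L.R. is the penalized statistic should be checked against the rms/Design documentation for your version ('firedata' is a hypothetical data frame):

library(rms)
fit <- lrm(arson ~ rcs(iRVI, 5), data = firedata,
           penalty = list(simple = 10, nonlinear = 100))
fit$stats["Model L.R."]   # model likelihood-ratio chi-square
fit$stats["d.f."]         # degrees of freedom reported by lrm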
2003 Apr 20
1
survreg penalized likelihood?
What objective function is maximized by survreg() with the default
Weibull model? I'm getting finite parameters in a case where the
likelihood is maximized at infinity, so it can't be simple maximum
likelihood.
Consider the following:
#############################
> set.seed(3)
> Stress <- rep(1:3, each=3)
> ch.life <- exp(9-3*Stress)
> simLife <- rexp(9,
2007 May 18
1
penalized maximum likelihood estimator
Dear R-helpers,
I tried to find a package in which I can apply a
penalized maximum likelihood estimator to the
generalized extreme value distribution (with a beta
function) but could not. Would you please help me
find the name of the package? Thanks for your help.
S.Murshed
2005 Oct 27
2
how to predict with logistic model in package logistf ?
Dear community,
I am a beginner in R and can't get predictions from a logistic model fitted
with the logistf package.
Could anyone help me? Thanks!
The following are my commands and the result:
> library(logistf)
> data(sex2)
> fit <- logistf(case ~ age + oc + vic + vicl + vis + dia, data = sex2)
> predict(fit, newdata = sex2)
Error in predict(fit, newdata = sex2) : no applicable method for
"predict"
2002 Feb 20
2
How to get the penalized log likelihood from smooth.spline()?
I use smooth.spline(x, y) in package modreg and I would like to get the
value of the penalized log-likelihood and preferably also its two parts. To
make clear what I am asking for (and make sure that I am asking for the
right thing), I state my problem using the same notation as in
help(smooth.spline):
I want to find the natural cubic spline f(x) that minimizes
L(f) = \sum_{k=1}^{n} w[k](y[k] - f(x[k]))^2 + \lambda \int f''(t)^2 dt
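A hedged sketch with simulated data: smooth.spline() stores the smoothing parameter and a penalized criterion in the returned object, though which component corresponds to which term of L(f) should be checked against ?smooth.spline for your R version:

set.seed(1)
x <- seq(0, 1, length.out = 50)
y <- sin(2 * pi * x) + rnorm(50, sd = 0.2)
sfit <- smooth.spline(x, y)
sfit$lambda     # smoothing parameter
sfit$pen.crit   # penalized criterion
sfit$crit       # criterion used to choose lambda (GCV/CV)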
2011 Sep 27
1
model selection using logistf package
Hi everyone,
I'm wondering how to select the "best" model when using logistf. AIC does
not work, and neither does anova. I tried fitting a glm model but got the
separation warning message, so I tried the logistf package, but as I
simplify the model stepwise I don't know whether the simplification is
justified or not... Can anyone explain how I should approach this problem? I
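A hedged sketch (with hypothetical variable names) of one common approach: compare nested logistf fits with a penalized likelihood-ratio test via logistftest(), which tests whether the listed coefficients are zero:

library(logistf)
full <- logistf(y ~ x1 + x2 + x3, data = d)
logistftest(full, test = ~ x3)   # penalized LR test for dropping x3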
2011 Oct 13
1
binomial GLM quasi separation
Hi all,
I have run a glm analysis where the dependent variable is gender
(family = binomial) and the predictors are percentages.
I get a warning saying "fitted probabilities numerically 0 or 1 occurred",
which indicates that quasi-separation or separation is occurring.
This makes sense given that one of these predictors has a very influential
effect that depends on a
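A hedged diagnostic sketch with hypothetical names: fitted probabilities that are numerically 0 or 1 identify the observations (and hence the predictor values) driving the (quasi-)separation:

fit <- glm(gender ~ pct1 + pct2, family = binomial, data = d)
eps <- 1e-8
sep <- fitted(fit) < eps | fitted(fit) > 1 - eps
table(sep)                   # how many observations are perfectly predicted
d[sep, c("pct1", "pct2")]    # inspect the offending rows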
2006 Jan 18
0
Logistftest to select diagnostic genes
Hi all,
Does anyone have experience with the logistf package? I am using logistftest
from the logistf package to select diagnostic genes. The result is not what I
expected.
I have expression data for 10 genes on 27 samples with tumor = 1 and 11 with
tumor = 0. I want to select the best gene using the maximum likelihood ratio
test in a logistic regression model. This is how my code works:
1. Read in 10 genes as independent
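A hedged sketch of the screen described above, with hypothetical objects: 'expr' is a samples x genes matrix and 'class' a 0/1 vector of length 38:

library(logistf)
lr <- apply(expr, 2, function(g) {
  f <- logistf(class ~ g)
  # penalized LR statistic of the single-gene model vs. the null model;
  # check the order of the f$loglik components in ?logistf for your version
  2 * abs(diff(f$loglik))
})
sort(lr, decreasing = TRUE)[1:3]   # top-ranked genes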
2008 Sep 16
1
logistf error message
I am new to using R. Currently, I am using the logistf package to run logistic regression analysis. When I run the following line of code:
attach(snpriskdata)
logisticpaper <- logistf(sascasecon ~ saspackyrs + newsbmi + EDUCATION +
                           sasagedx + sasflung + condobst + sasadultasprev)
I get the following error message:
Error in sum(y) : invalid 'type' (character) of argument
What does this error
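A hedged guess at the cause: sum(y) failing on a character argument suggests the response sascasecon is stored as character (or factor) rather than numeric 0/1. Recoding it first usually resolves this; the "case" label below is hypothetical:

snpriskdata$sascasecon <- as.numeric(snpriskdata$sascasecon == "case")
logisticpaper <- logistf(sascasecon ~ saspackyrs + newsbmi + EDUCATION +
                           sasagedx + sasflung + condobst + sasadultasprev,
                         data = snpriskdata)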
2006 Feb 22
8
filtering "tags" via checkboxes - HABTM
First post/newbie post... bear with me.
What I'm trying to achieve (music site):
A system containing tracks and moods with a HABTM relationship. I've
got all that set up and functioning in the admin environment - i.e.
admins can apply a variety of moods to a particular track via a
series of checkboxes. The join table works just fine.
I'm currently stuck on allowing
2009 Mar 04
0
error in mood.test
Dear list,
when running mood.test() (part of package "stats") on slightly longer
vectors than in the example from the help page, we get the error message
shown below: once both vectors tested are of length 50, this error occurs.
Note that this problem didn't occur with R 2.7.x (or even older versions).
> x <- rnorm(50, 10, 5)
> y <- rnorm(50, 2, 5)
2013 Feb 07
4
help with creating new variables using a loop
Hi there,
I've got a set of 10 numeric variables called Mood1 to Mood10 in a dataset called mood. I'm trying to create a set of 10 new variables called m1 to m10 so that m1 = Mood1*1, m2 = Mood2*2, and so on up to m10 = Mood10*10.
Trawling through the internet, I eventually tried the following code:
for (i in 1:10){
assign(x=paste0("mood$m",i),
2017 Jul 27
0
How long to wait for process?
Rather than go to a penalized GLM, you might be better off investigating
the sources of quasi-perfect separation and simplifying the model to
avoid or reduce it. In your data set you have several factors with a
large number of levels, making the data sparse across their combinations.
Like multicollinearity, near-perfect separation is a data problem, and is
often better solved by careful
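A hedged sketch of one such check, reusing names from this thread: cross-tabulating the outcome against each many-level factor shows the empty cells that create quasi-perfect separation:

with(knowf3, table(county, know_fin))                 # zero cells flag problem levels
sapply(knowf3[sapply(knowf3, is.factor)], nlevels)    # which factors have many levels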
2017 Jul 26
3
How long to wait for process?
UseRs,
I have a dataframe with 2547 rows and several hundred columns in R
3.1.3. I am trying to run a small logistic regression with a subset of
the data.
know_fin ~
comp_grp2+age+gender+education+employment+income+ideol+home_lot+home+county
> str(knowf3)
'data.frame': 2033 obs. of 18 variables:
$ userid : Factor w/ 2542 levels