similar to: OT: (quasi-?) separation in a logistic GLM

Displaying 20 results from an estimated 9000 matches similar to: "OT: (quasi-?) separation in a logistic GLM"

2011 Oct 13
1
binomial GLM quasi separation
Hi all, I have run a glm() analysis where the dependent variable is gender (family=binomial) and the predictors are percentages. I get a warning saying "fitted probabilities numerically 0 or 1 occurred", which indicates that quasi-separation or separation is occurring. This makes sense given that one of these predictors has a very influential effect that depends on a
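For context, a minimal sketch of how this warning arises and one hedged workaround via the brglm package (toy data, not the poster's; assumes brglm is installed):

    set.seed(1)
    x <- c(rnorm(50, mean = 0), rnorm(50, mean = 10))  # predictor that almost perfectly splits the groups
    gender <- rep(c(0, 1), each = 50)
    fit <- glm(gender ~ x, family = binomial)          # typically warns: fitted probabilities numerically 0 or 1 occurred
    library(brglm)                                     # bias-reduced (Firth-type) fit keeps estimates finite
    fit2 <- brglm(gender ~ x, family = binomial)
    summary(fit2)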
2010 Jan 12
1
Non-metric multidimensional scaling (NMDS) help
Hi, I am currently working on some data and feel that NMDS would return an excellent result. With my current data set, however, I have been experiencing some problems and cannot carry out metaMDS. I have tried with a few smaller data sets which I created for practice's sake, and this has worked fine. I think it is the setup of my data set that is causing me trouble. I have 18 columns and 18 rows,
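As a reference point, metaMDS in vegan expects a community matrix with sites as rows and species as columns; a small sketch with made-up counts (18 x 18, matching the dimensions mentioned):

    library(vegan)
    set.seed(42)
    comm <- matrix(rpois(18 * 18, lambda = 3), nrow = 18)   # 18 sites x 18 species, toy counts
    ord  <- metaMDS(comm, distance = "bray", k = 2, trymax = 50)
    plot(ord, type = "t")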
2013 Jun 22
1
metaMDS Error, NaN similar or negative values
Hello R-experts, I want to do ordination plots using vegan's metaMDS. I have a data matrix where many cells have zero values. Data structure: X[1:10,1:14] with columns Height.1 through Height.13; e.g. row D30I1A: 46 0 0 0 0 0 0 0 0 0 39 0 98, row D30I1B
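One common cause of NaN dissimilarities here is rows (samples) that are entirely zero, since Bray-Curtis is undefined when both samples are empty; a hedged sketch, assuming X is the abundance matrix from the post:

    library(vegan)
    X2  <- X[rowSums(X) > 0, ]          # drop samples with no non-zero cells
    ord <- metaMDS(X2, distance = "bray")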
2013 Feb 27
1
Separation issue in binary response models - glm, brglm, logistf
Dear all, I am encountering some issues with my data and need some help. I am trying to run a glm analysis with a presence/absence variable as the response and several explanatory variables (time, location, presence/absence data, abundance data). First I tried the glm() function; however, I got 2 warnings from glm.fit(): # 1: glm.fit: algorithm did not converge # 2:
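For completeness, a minimal sketch of a Firth-penalised fit with logistf, one of the packages named in the subject (the variable and data names below are placeholders, not the poster's):

    library(logistf)
    fitF <- logistf(presence ~ time + location + abundance, data = dat)   # penalised likelihood copes with separation
    summary(fitF)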
2009 Dec 03
1
distance matrices
I'm working on some distance matrices and I was wondering if there is a way to export the matrices from R to Excel. OG thanks
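A simple way to do this (sketch with a built-in data set) is to coerce the "dist" object to a matrix and write a CSV that Excel can open:

    d <- dist(USArrests)                              # example distance object
    write.csv(as.matrix(d), file = "distances.csv")   # open distances.csv in Excel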
2011 Mar 18
3
exploring dist()
Hello everybody, I hope somebody can help me with the dist() function. I have a data frame of size 2*4087 (columns*rows), where columns correspond to treatments and rows are species; the values are Hellinger distances, and I need to reconstruct a distance matrix with the dist() function. I know that the "euclidean" method should be used. When I type dist(dframe, "euclidean") it gives me a
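Worth noting: dist() measures distances between rows, so with 2 columns (treatments) and 4087 rows (species), one reading of the question is that the data frame needs transposing first; a sketch reusing the post's dframe name:

    d_treat <- dist(t(dframe), method = "euclidean")   # distance between the two treatment profiles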
2007 Jul 26
5
ROC curve in R
Hi, I need to build an ROC curve in R; can you please provide data steps / code or guide me through it? Thanks and Regards Rithesh M Mohan
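One possible route is the pROC package (a sketch, assuming a fitted glm called fit and a 0/1 response y):

    library(pROC)
    roc_obj <- roc(response = y, predictor = fitted(fit))   # observed labels vs predicted probabilities
    plot(roc_obj)
    auc(roc_obj)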
2009 Mar 26
1
Extreme AIC in glm(), perfect separation, svm() tuning
Dear List, With regard to the question I previously raised, here is the result I obtained just now. brglm() does help, but there are two situations: 1) Classifiers with extremely high AIC (over 200), no perfect separation, coefficients converge. In this case, using brglm() does help! It stabilizes the AIC, and the classification power is better. Code and output: (need to install package:
2006 Mar 27
2
Clustering question \ dist(datmat)
Hello everybody. I am trying to cluster circular data (data points which are angles), so I cannot use the "dist" function in "mclust" to generate my distance matrix; I am using the function Dij = 0.5*(1 - cos(theta_i - theta_j)). The thing is, "hclust" will not accept this distance matrix. I tried to put it in a data frame, but again I get an error message
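hclust() wants an object of class "dist" rather than a plain matrix or data frame, so wrapping the custom circular distance matrix with as.dist() is usually enough; a toy sketch:

    theta <- runif(20, 0, 2 * pi)                     # toy angles
    D  <- 0.5 * (1 - cos(outer(theta, theta, "-")))   # Dij = 0.5 * (1 - cos(theta_i - theta_j))
    hc <- hclust(as.dist(D))                          # as.dist() gives hclust the class it expects
    plot(hc)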
2001 Jan 10
1
optimizing with monotone step functions?
Before re-inventing the wheel I would like to ask: does anyone know of an optimizer in R which can reliably identify the value of X (Xopt) that leads to the Y (Yopt) closest to Ytarget in Y <- MonotoneStepFun(X), optionally with the restriction that Yopt <= Ytarget (at least if any Y <= Ytarget; otherwise any Yopt > Ytarget would be the preferred answer)? If none is known, I will write
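If nothing ready-made turns up, a plain bisection works for a monotone (here nondecreasing) step function; a sketch of the idea with a hypothetical step_search() helper:

    step_search <- function(f, lo, hi, ytarget, tol = 1e-8) {
      # approximately the largest x in [lo, hi] with f(x) <= ytarget,
      # assuming f is nondecreasing and f(lo) <= ytarget
      while (hi - lo > tol) {
        mid <- (lo + hi) / 2
        if (f(mid) <= ytarget) lo <- mid else hi <- mid
      }
      lo
    }
    step_search(floor, 0, 10, 3.5)   # just under 4: floor(x) <= 3.5 for all x < 4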
2009 Mar 31
1
Multicollinearity with brglm?
I'm running brglm with binomial logistic regression. The perhaps multicollinearity-related features are: (1) the k IVs are all binary categorical, coded as 0 or 1; (2) each row of the IVs contains exactly C (< k) 1's; (3) across the k IVs, there are n * k unique rows; (4) when brglm is run, at least 1 IV is reported as involving a singularity. I've tried recoding the n
2006 Feb 17
2
Something changed and glm(..., family=binomial) doesn't work now
I ran logistic regression models last week using glm(..., family = binomial) and got a set of results. Since then I have loaded the Epi package for ROC analysis. Now when I run those same models I get completely different results, with most being: Warning message: fitted probabilities numerically 0 or 1 occurred in: glm.fit(x = X, y = Y, weights = weights, start = start, etastart = etastart,
2011 Jan 27
3
agnes clustering and NAs
Hello, In the documentation for agnes in the package 'cluster', it says that NAs are allowed, and sure enough it works for a small example like: > m <- matrix(c(1, 1, 1, 2, 1, NA, 1, 1, 1, 2, 2, 2), nrow = 3, byrow = TRUE) > agnes(m) Call: agnes(x = m) Agglomerative coefficient: 0.1614168 Order of objects: [1] 1 2 3 Height (summary): Min. 1st Qu. Median Mean 3rd
2017 Jul 26
3
How long to wait for process?
UseRs, I have a data frame with 2547 rows and several hundred columns in R 3.1.3. I am trying to run a small logistic regression with a subset of the data. know_fin ~ comp_grp2+age+gender+education+employment+income+ideol+home_lot+home+county > str(knowf3) 'data.frame': 2033 obs. of 18 variables: $ userid : Factor w/ 2542 levels
2017 Jul 27
2
How long to wait for process?
Michael, Thank you for the suggestion. I will take your advice and look more critically at the covariates. John On 7/27/2017 8:08 AM, Michael Friendly wrote: > Rather than go to a penalized GLM, you might be better off investigating the sources of quasi-perfect separation and simplifying the model to avoid or reduce it. In your data set you have several factors with large
2012 Feb 29
2
puzzling results from logistic regression
Hi all, As you can see from below, the result is strange... I would have imagined that the bb result should be much higher and close to 1; is there any way to improve the fit? Any other classification methods? Thank you! data=data.frame(y=rep(c(0, 1), times=100), x=1:200) aa=glm(y~x, data=data, family=binomial(link="logit")) newdata=data.frame(x=6, y=100) bb=predict(aa, newdata=newdata,
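One frequent source of surprise in this situation is that predict.glm() returns the linear predictor (log-odds) by default; a hedged sketch of the probability-scale call, reusing the post's object names:

    bb <- predict(aa, newdata = newdata, type = "response")   # predicted probability rather than log-odds
    bb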
2017 Jul 27
0
How long to wait for process?
Rather than go to a penalized GLM, you might be better off investigating the sources of quasi-perfect separation and simplifying the model to avoid or reduce it. In your data set you have several factors with a large number of levels, making the data sparse for all their combinations. Like multicollinearity, near-perfect separation is a data problem, and is often better solved by careful
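A quick way to spot the offending levels is to cross-tabulate the response against each high-cardinality factor and look for cells where one outcome never occurs; a sketch using names from the earlier post (dat is a placeholder for the data frame):

    with(dat, table(county, know_fin))            # zero cells for one outcome flag quasi-separating levels
    sapply(dat[sapply(dat, is.factor)], nlevels)  # which factors have many levels?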
2008 May 15
1
metaMDS using Dissimilarity matrix
Hello R-user community! I am running R 2.7.0 on a PowerBook (Tiger). (I am still an R and statistics beginner.) Presently I am trying to run the function metaMDS (vegan) using an existing dissimilarity matrix. As I would like to start with this matrix, I thought I could just pass it using an x= argument: Test <- metaMDS(x=Dist.Gower) Error in inherits(comm, "dist") :
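metaMDS() has no x= argument; its first argument is comm, and it accepts a "dist" object there directly. A sketch reusing the post's object name:

    library(vegan)
    Test <- metaMDS(Dist.Gower)    # pass the dissimilarity object as the first (comm) argument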
2008 Apr 26
6
quasi-random sequences
Dear list useRs, I have to generate a random set of coordinates (x,y) in [-1, 1]^2 for, say, N points. At each of these points a circle (later on, an ellipse) of random size is drawn, as in: > N <- 100 > positions <- matrix(rnorm(2 * N, mean = 0, sd = 0.5), nrow = N) > sizes <- rnorm(N, mean = 0, sd = 1) > plot(positions, type = "p", cex = sizes) My problem is to
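If genuinely quasi-random (low-discrepancy) points are wanted rather than rnorm() draws, one option is a Sobol sequence; a sketch assuming the randtoolbox package is available:

    library(randtoolbox)
    N   <- 100
    pts <- 2 * sobol(N, dim = 2) - 1                # Sobol points rescaled from [0, 1]^2 to [-1, 1]^2
    plot(pts, type = "p", cex = runif(N, 0.5, 2))   # positive sizes so every circle is drawn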
2005 Dec 15
3
Name conflict between Epi and ROC packages
The name conflicts between the Epi and ROC packages (two 'ROC' functions are the problem) cause the following code to work once, but not twice: library(MASS); data(cats); x = cats[,2]; y = ifelse(cats[,1]=='F',0,1); library(Epi); ROC(x,y,grid=0)$AUC; library(ROC); AUC(rocdemo.sca(y, x, dxrule.sca)). What is the standard way of resolving name conflicts? Ask maintainers to resolve
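One standard way around such a clash, shown here as a sketch of the post's own calls (assuming both packages are installed), is to qualify each call with the :: operator so neither package's ROC masks the other:

    library(MASS); data(cats)
    x <- cats[, 2]
    y <- ifelse(cats[, 1] == "F", 0, 1)
    Epi::ROC(x, y, grid = 0)$AUC                          # Epi's ROC, regardless of load order
    ROC::AUC(ROC::rocdemo.sca(y, x, ROC::dxrule.sca))     # ROC's AUC and helpers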