similar to: logistic regression - course or notes

Displaying 20 results from an estimated 1100 matches similar to: "logistic regression - course or notes"

2003 Jan 21
2
books on categorical data analyses
Dear All, We are about to purchase the second edition of Agresti's "Categorical Data Analysis" (my old copy of the first ed. of that wonderful book is falling apart). I would appreciate suggestions about other comparable books which, if possible, have examples using R/S code (instead of SAS). Thanks, Ramón -- Ramón Díaz-Uriarte Bioinformatics Unit Centro Nacional de
2000 Mar 02
1
R Package Building Question
I'll start with my apologies to Martin for sending those last two messages to the list owner (the last one he forwarded was actually sent a week or two ago and must have been lost in the ether somehow). This is basically due to fast copy/paste in emacs without paying much attention to what I'm doing. Lack of sleep is my excuse, but I don't know if that goes very far with the
2005 Oct 25
1
selecting every nth item in the data
I want to make a glm and then use predict. I have a fairly small sample (4000 cases) and I want to train on 90% and test on 10%, but I want to do it in slices so that I test on every 10th case and train on the others. Is there some simple way to get these elements? Stephen
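A minimal sketch of that split, assuming the data live in a data frame d with outcome column y (both names hypothetical):

    test.idx <- seq(10, nrow(d), by = 10)  # rows 10, 20, 30, ... (every 10th case)
    train <- d[-test.idx, ]                # the other ~90%
    test  <- d[test.idx, ]
    fit  <- glm(y ~ ., family = binomial, data = train)
    pred <- predict(fit, newdata = test, type = "response")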
2005 Jun 16
1
logistic regression - using polys and products of features
Hi, I can get all my features by doing this: > logistic.model = glm(similarity ~ ., family=binomial, data=cData[3001:3800,]) I can get the products of all my features with this: > logistic.model = glm(similarity ~ .^2, family=binomial, data=cData[3001:3800,]) I don't seem to be able to get polys by doing this: > logistic.model = glm(similarity ~ poly(., 2), family=binomial, data
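The dot shorthand only expands at the top level of a formula, so poly(., 2) is not valid. One hedged workaround sketch is to build the formula string by hand (the column name "similarity" is taken from the post; the helper code is illustrative):

    pred.names <- setdiff(names(cData), "similarity")
    rhs  <- paste(sprintf("poly(%s, 2)", pred.names), collapse = " + ")
    form <- as.formula(paste("similarity ~", rhs))
    logistic.model <- glm(form, family = binomial, data = cData[3001:3800, ])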
2005 Jan 05
2
plotting percent of incidents within different 'bins'
Hi, say I have some data, two columns in a table being a binary outcome plus a predictor, and I want to plot a graph that shows the percentage of positives of the binary outcome within bands of the predictor, e.g.

    Outcome  predictor
    0        1
    1        2
    1        2
    0        3
    0        3
    0
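One sketch of such a plot, using cut() to band the predictor and tapply() for the percentage of positives per band (the band width of 1 is an assumption; the data are the rows visible above):

    d <- data.frame(outcome   = c(0, 1, 1, 0, 0),
                    predictor = c(1, 2, 2, 3, 3))
    bands   <- cut(d$predictor, breaks = 0:3)        # (0,1], (1,2], (2,3]
    pct.pos <- 100 * tapply(d$outcome, bands, mean)  # % positive per band
    barplot(pct.pos, xlab = "predictor band", ylab = "% positive")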
2005 Aug 08
1
chisq.test
Hi, I am trying to use this function. Can anyone show me how I would input the following example? Chi-squared = (40-30)^2/30 + (20-30)^2/30 + (30-30)^2/30 = 3.333 + 3.333 + 0 = 6.667 (p value = 0.036) I want to be able to use different denominators, so can you show me how I can do it to accommodate these rather than assuming they are all the
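The example above maps onto chisq.test() like this (the unequal proportions in the second call are made up purely to show the p= argument):

    obs <- c(40, 20, 30)
    chisq.test(obs)                        # equal expected counts: 30, 30, 30
    chisq.test(obs, p = c(0.5, 0.3, 0.2))  # different expected proportions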
2005 Aug 12
1
chisq warning
Hi, I am running chisq.test as below and getting a warning. Can anyone tell me the significance of the warning? > chisq.test(c(10, 4, 2, 6, 5, 3, 4, 4, 6, 3, 2, 2, 2, 4, 7, 10, 0, 6, 19, 3, 2, 7, 2, 2, 2, 1, 32, 2, 3, 10, 1, 3, 9, 4, 10, 2, 2, 4, 5, 7, 6, 3, 7, 4, 3, 3, 7, 1, 4, 2, 2, 3, 3, 5, 5, 4), p = c(0.01704142, 0.017988166, 0.018224852, 0.017751479, 0.017988166, 0.018224852, 0.017278107
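The warning here is most likely "Chi-squared approximation may be incorrect": chisq.test() emits it whenever some expected count p[i] * sum(x) falls below 5, which proportions of about 0.018 of this total produce. A sketch of one way around it, assuming x and p are the vectors from the call above:

    chisq.test(x, p = p, simulate.p.value = TRUE, B = 10000)  # Monte Carlo p-value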
2006 May 01
6
R-2.3.0 make error
Dear list, When compiling R-2.3.0 on FC4 x86_64, I got the following errors: make[3]: Entering directory `/project/scratch3/ligroup/wuming/src/R-2.3.0/src/main' gcc -Wl,--export-dynamic -L/usr/local/lib64 -o R.bin Rmain.o CConverters.o CommandLineArgs.o Rdynload.o Renviron.o RNG.o apply.o arithmetic.o apse.o array.o attrib.o base.o bind.o builtin.o character.o coerce.o colors.o complex.o
2005 Apr 27
1
making table() work
I am trying to do some verification across a large dataset, cuData, that has 23 columns. Column 23 (similarity) is the outcome (0 or 1) and the other columns are the features. I do this: verificationglm.model <- glm(formula = similarity ~ ., family=binomial, data=cuData[1:1000,]) and produce the model: > summary(verificationglm.model) Call: glm(formula = similarity ~ ., family =
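For the verification step itself, a sketch of one way to cross-tabulate predictions against the observed outcome on a held-out slice (the row range 1001:2000 is an assumption):

    p.hat <- predict(verificationglm.model,
                     newdata = cuData[1001:2000, ], type = "response")
    pred.class <- as.integer(p.hat > 0.5)   # threshold fitted probabilities at 0.5
    table(predicted = pred.class, observed = cuData$similarity[1001:2000])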
2006 Mar 29
3
Sub-vector
Dear list, Given a vector of logical values, say > a <- c(TRUE, TRUE, TRUE, FALSE, FALSE, FALSE, FALSE, TRUE, TRUE, TRUE) Are there any R functions that can tell whether there are two or more "TRUE" in a row in this vector? Thanks, Wuming
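Two sketches of an answer: rle() makes run lengths explicit, or adjacent elements can be compared directly:

    r <- rle(a)
    any(r$values & r$lengths >= 2)   # TRUE if some run of TRUEs has length >= 2
    any(a[-1] & a[-length(a)])       # equivalent: some adjacent pair is TRUE, TRUE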
2006 Mar 22
2
R package for computing state path using Viterbi algorithm
Dear list, This question is about Hidden Markov Models. Given a transition matrix, an emission matrix and a sequence of observed symbols (actually nucleotide sequences: A, T, C and G), I hope to predict the sequence of states using the Viterbi algorithm. I searched the R repository for related packages. The msm package has the function viterbi.msm (as well as very good documentation), but it only works for
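For a plain transition/emission setup, a minimal hand-rolled Viterbi sketch in log space (all object names hypothetical): A is the states-by-states transition matrix, E the states-by-symbols emission matrix, pi0 the initial distribution, and obs an integer-coded symbol sequence of length at least 2.

    viterbi.path <- function(A, E, pi0, obs) {
      nS <- nrow(A); nT <- length(obs)
      v   <- matrix(-Inf, nS, nT)   # best log-probability ending in each state
      ptr <- matrix(0L,  nS, nT)    # backpointers
      v[, 1] <- log(pi0) + log(E[, obs[1]])
      for (t in 2:nT) for (s in 1:nS) {
        cand      <- v[, t - 1] + log(A[, s])
        ptr[s, t] <- which.max(cand)
        v[s, t]   <- max(cand) + log(E[s, obs[t]])
      }
      path <- integer(nT)
      path[nT] <- which.max(v[, nT])
      for (t in (nT - 1):1) path[t] <- ptr[path[t + 1], t + 1]
      path                          # most probable state sequence
    }
    # e.g. obs <- match(strsplit("ATCGG", "")[[1]], c("A", "C", "G", "T"))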
2005 Sep 09
2
R-help Digest, Vol 31, Issue 9
Hi: I use lm (linear models) to analyze 47 variables and 8 responses, so I use a loop to do it. I want the program to show only the results where the P-value is less than 0.05. How can I extract the P-values from the lm results? Ping The code: # using LM to model general fati for (j in 48:52) { for (i in 3:46) { gen.fat <- y_x[,j] gen.fat <- as.numeric(gen.fat) snp_marker <- y_x[,i] x <- colnames(y_x)
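A sketch of extracting the p-values inside the loop, reusing the variable names from the post:

    fit   <- lm(gen.fat ~ snp_marker)
    coefs <- summary(fit)$coefficients       # has a "Pr(>|t|)" column
    pvals <- coefs[, "Pr(>|t|)"]
    if (any(pvals[-1] < 0.05)) print(coefs)  # show only fits with a significant slope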
2006 Jul 07
1
Polynomial kernel in SVM in e1071 package
Dear list, In some places (for example, http://en.wikipedia.org/wiki/Support_vector_machine), the polynomial kernel in SVM is written as (u'*v + 1)^d, while in the documentation of svm() in the e1071 package, the polynomial kernel is written as (gamma*u'*v + coef0)^d. I am a little confused here: when doing parameter optimization (grid search or so) for the polynomial kernel, does it need to tune
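The two forms agree when gamma = 1 and coef0 = 1; otherwise e1071 exposes gamma and coef0 as extra tuning knobs. A hedged grid-search sketch with e1071's tune.svm() (the data frame train is hypothetical and the grids are illustrative):

    library(e1071)
    tuned <- tune.svm(similarity ~ ., data = train, kernel = "polynomial",
                      degree = 2:4, gamma = 10^(-2:0), coef0 = c(0, 1),
                      cost = 10^(-1:2))
    summary(tuned)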
2006 Aug 04
1
Error when loading odesolve
Dear list, I installed the odesolve package (0.5-15) in R 2.3.1 on a Solaris server (Generic_118558-11 sun4u sparc SUNW,Sun-Blade-1000). The installation completed without errors, though several warnings like "Warning: Option -fPIC passed to ld, if ld is invoked, ignored otherwise" were output. However, when loading the odesolve package with library(odesolve), the following error
2011 Mar 16
1
Standardized Pearson residuals (and score tests)
Hi Peter and others, If it helps, I wrote a small function glm.scoretest() for the statmod package on CRAN to compute score tests from glm fits. The score test for adding a covariate, or any set of covariates, can be extracted very neatly from the standard glm output, although you probably already know that. Regards Gordon --------------------------------------------- Professor Gordon K
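A usage sketch for glm.scoretest() (object names hypothetical): it returns a z-statistic for adding the candidate covariate(s) to the fitted model.

    library(statmod)
    fit <- glm(y ~ x1, family = binomial)
    z <- glm.scoretest(fit, x2)   # z-statistic for adding covariate x2
    p <- 2 * pnorm(-abs(z))       # two-sided p-value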
2011 Mar 14
3
Standardized Pearson residuals
Is there any reason that rstandard.glm doesn't have a "pearson" option? And if not, can it be added? Background: I'm currently teaching an undergrad/grad-service course from Agresti's "Introduction to Categorical Data Analysis (2nd edn)" and deviance residuals are not used in the text. For now I'll just provide the students with a simple function to use, but I
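Current versions of R do accept rstandard(fit, type = "pearson"); where that is unavailable, a small helper mirroring what rstandard() does for deviance residuals is a reasonable sketch:

    std.pearson <- function(fit) {
      phi <- summary(fit)$dispersion  # 1 for binomial and Poisson
      residuals(fit, type = "pearson") / sqrt(phi * (1 - hatvalues(fit)))
    }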
2002 May 30
1
Documentation Bugs (PR#1618)
Just a few documentation "bugs" that I've noticed recently. 1. In the help for (dpqr)weibull(), the formula given for the variance of a Weibull is wrong. The correct formula is b^2 * (gamma(1 + 2/a) - (gamma(1 + 1/a))^2) Note that I've also changed Gamma to gamma, which I think is preferable since this is actually the name of the gamma() function
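A quick simulation check of the corrected variance formula (shape a, scale b; values illustrative):

    a <- 2; b <- 3
    b^2 * (gamma(1 + 2/a) - gamma(1 + 1/a)^2)  # theoretical variance, ~1.93
    var(rweibull(1e6, shape = a, scale = b))   # should agree closely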
2005 Sep 02
1
Calculating Goodman-Kruskal's gamma using the delta method
Dear list, I have a problem calculating the standard error of Goodman-Kruskal's gamma using the delta method. I exactly follow the method and formula described in Problem 3.27 of Alan Agresti's Categorical Data Analysis (2nd edition). The data I used are also from the job satisfaction vs. income example in that book. job <- matrix(c(1, 3, 10, 6, 2, 3, 10, 7, 1, 6, 14, 12, 0, 1, 9,
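For the gamma estimate itself, a sketch of counting concordant and discordant pairs from the table (the delta-method standard error of Agresti's Problem 3.27 is left out here); job is the contingency table from the post:

    concord <- function(tab) {
      C <- D <- 0
      for (i in 1:nrow(tab)) for (j in 1:ncol(tab)) {
        C <- C + tab[i, j] * sum(tab[row(tab) > i & col(tab) > j])  # concordant
        D <- D + tab[i, j] * sum(tab[row(tab) > i & col(tab) < j])  # discordant
      }
      c(C = C, D = D, gamma = (C - D) / (C + D))
    }
    concord(job)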
2012 Apr 06
2
Changing grid defaults
I'm trying to use the vcd package to produce mosaic plots for my class notes, written in Sweave and using the LaTeX's beamer document class. For projecting the notes in class, I use a dark background with light foreground colors. It's easy enough to change the defaults for R's standard graphics to match my color scheme (using the fg, col.axis, col.lab, col.main, and col.sub
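A hedged sketch of what usually works for dark slides with vcd's strucplot framework, passing colours through gp and labeling_args (argument names per vcd; the colour values are illustrative):

    library(vcd)   # also attaches grid, which provides gpar()
    mosaic(Titanic,
           gp = gpar(fill = c("grey85", "steelblue1")),
           labeling_args = list(gp_labels   = gpar(col = "white"),
                                gp_varnames = gpar(col = "white")))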
2002 Apr 22
3
glm() function not finding the maximum
Hello, I have found a problem with using the glm function with a gamma family. I have a vector of data, assumed to be generated by a gamma distribution. The parameters of this gamma distribution are estimated in two ways (i) using the glm() function, (ii) "by hand", using the optim() function. I find that the -2*likelihood at the maximum found by (i) is substantially larger than that
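One common source of such a gap (a sketch, not a diagnosis of this particular case): glm() estimates the Gamma mean parameters by maximum likelihood, but the likelihood it reports plugs in a deviance-based dispersion rather than the ML shape, so its -2*loglik can exceed a full ML fit's. MASS::gamma.shape() recovers the ML shape from the fit:

    library(MASS)
    set.seed(1)
    y   <- rgamma(200, shape = 2, scale = 3)       # simulated gamma data
    fit <- glm(y ~ 1, family = Gamma(link = "log"))
    gamma.shape(fit)                               # ML estimate of the shape
    nll <- function(lp) -sum(dgamma(y, shape = exp(lp[1]),
                                    scale = exp(lp[2]), log = TRUE))
    opt <- optim(c(0, log(mean(y))), nll)          # direct ML fit "by hand"
    c(glm = -2 * as.numeric(logLik(fit)), optim = 2 * opt$value)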