similar to: Help with aggregate syntax for a multi-column function please.

Displaying 20 results from an estimated 1100 matches similar to: "Help with aggregate syntax for a multi-column function please."

2005 Mar 22
1
Writing R documentation
Greetings, I used LaTeX-type code in my Rd files. The PDF version built with R CMD Rd2dvi --output=PKtools.pdf --pdf --title="PKtools" PKtools man came out quite nice. However, my current HTML version does not process the LaTeX, so raw LaTeX code is left in the files and looks bad. Example problem code: AIC$_c$ or \begin{itemize} \item NLME: \begin{itemize} \item population level:
2007 Feb 15
1
Problem in summaryBy
The R script below gives values of 1 for all minimum values when I use a custom function in summaryBy. I get the correct values when I use FUN=min directly. Any help is much appreciated. The continuous information provided in this forum is fabulous as are the different R packages available. Rene # Simulated simplified data Subj <- rep(1:4, each=6) Analyte <-
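A minimal sketch of the usual summaryBy pattern, assuming the doBy package; since the original script is truncated, the data frame and column names below (Conc, minmax) are placeholders. The custom function handed to FUN should take a numeric vector and return a (named) numeric vector:

library(doBy)

# Hypothetical stand-in for the simulated data
d <- data.frame(Subj    = rep(1:4, each = 6),
                Analyte = rep(c("A", "B"), times = 12),
                Conc    = runif(24, 1, 10))

# Custom summary function: numeric vector in, named numeric vector out
minmax <- function(x) c(min = min(x), max = max(x))

summaryBy(Conc ~ Subj + Analyte, data = d, FUN = minmax)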
2008 Jul 17
1
Comparing differences in AUC from 2 different models
Hi, I would like to compare differences in AUC from 2 different models, glm and gam for predicting presence / absence. I know that in theory the model with a higher AUC is better, but what I am interested in is if statistically the increase in AUC from the glm model to the gam model is significant. I also read quite extensive discussions on the list about ROC and AUC but I still didn't find
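One commonly suggested approach (not taken from this thread) is DeLong's test for two correlated ROC curves in the pROC package; a minimal sketch, assuming both models score the same presence/absence data, with obs, pred.glm and pred.gam as placeholder vectors:

library(pROC)

# obs: observed presence/absence (0/1); pred.*: predicted probabilities
roc.glm <- roc(obs, pred.glm)
roc.gam <- roc(obs, pred.gam)

# DeLong's test for two paired (correlated) ROC curves
roc.test(roc.glm, roc.gam, method = "delong", paired = TRUE)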
2010 Jan 22
2
Computing Confidence Intervals for AUC in ROCR Package
Dear R-philes, I am plotting ROC curves for several cross-validation runs of a classifier (using the function below). In addition to the average AUC, I am interested in obtaining a confidence interval for the average AUC. Is there a straightforward way to do this via the ROCR package? plot_roc_curve <- function(roc.dat, plt.title) { #print(str(vowel.ROC)) pred <-
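ROCR itself does not report a confidence interval for the averaged AUC, but one rough option (a sketch, not from the original post) is to extract the per-run AUCs from the performance object and form a t-based interval across runs; preds and labs are assumed to be lists with one element per cross-validation run:

library(ROCR)

pred <- prediction(preds, labs)                  # lists: one element per CV run
aucs <- unlist(performance(pred, "auc")@y.values)

m  <- mean(aucs)
se <- sd(aucs) / sqrt(length(aucs))
m + qt(c(0.025, 0.975), df = length(aucs) - 1) * se   # approximate 95% CI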
2010 May 19
1
col allocation is not right
plot(svm.auc, col=2, main="ROC curves comparing classification performance\n of six machine learning models")
legend(0.5, 0.6, c(ns, nb, nr, nt, nl, ne), 2:6, 9) # Draw a legend.
plot(bo.auc, col=3, add=T) # add=TRUE draws on the existing chart
plot(rf.auc, col=4, add=T)
plot(tree.auc, col=5, add=T)
plot(nn.auc, col=6, add=T)
plot(en.auc, col=9, lty="dotted", lwd=3, add=T)
Hi,
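In legend() the fourth positional argument is fill, so in the call above 2:6 ends up as fill colours and the single 9 as col, which is one likely reason the colour allocation looks wrong. A sketch of the probable intent (the label objects ns, nb, ... are from the original post):

legend(0.5, 0.6, legend = c(ns, nb, nr, nt, nl, ne),
       col = c(2:6, 9),
       lty = c(rep("solid", 5), "dotted"),
       lwd = c(rep(1, 5), 3))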
2008 Jan 05
1
AUC values from LRM and ROCR
Dear List, I am trying to assess the prediction accuracy of an ordinal model fit with LRM in the Design package. I used predict.lrm to predict on an independent dataset and am now attempting to assess the accuracy of these predictions. From what I have read, the AUC is good for this because it is threshold independent. I obtained the AUC for the fit model output from the c score (c =
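One way (a sketch, not necessarily what the Design documentation recommends) to get the c index, which equals the AUC when the outcome can be treated as binary, for the predictions on the independent dataset is Hmisc::somers2; p.new and y.new are placeholder names:

library(Hmisc)

# p.new: predicted values on the new data; y.new: observed binary outcome
somers2(p.new, y.new)["C"]    # C index == AUC for a binary outcome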
2012 Feb 09
2
AUC, C-index and p-value of Wilcoxon
Dear all, I am using the ROCR library to compute the AUC and also the Hmisc library to compute the C-index of a predictor and a group variable. The results of AUC and C-index are similar and give a value of about 0.57. The Wilcoxon p-value is <0.001! Why is the AUC so small while the p-value is highly significant? Is the AUC based on the Wilcoxon calculation? Many thanks, Lina
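The two results are not contradictory: the AUC is the Mann-Whitney U statistic divided by n1*n0, so with a large sample even a modest separation (AUC around 0.57) gives a tiny Wilcoxon p-value. A small simulated illustration (not from the original post):

set.seed(1)
n  <- 2000
x0 <- rnorm(n)                # predictor in group 0
x1 <- rnorm(n, mean = 0.25)   # slightly shifted in group 1

w   <- wilcox.test(x1, x0)
auc <- as.numeric(w$statistic) / (n * n)   # AUC = U / (n1 * n0)

auc          # around 0.57
w$p.value    # far below 0.001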
2005 Sep 28
1
Fast AUC computation
I am doing a simulation with a relatively large data set (20,000 observations) for which I want to calculate the area under the Receiver Operator Curve (AUC) for many parameter combinations. I am using the ROC library and the following commands to generate each AUC: rocobj=rocdemo.sca(truth = ymis, data = model$fitted.values, rule = dxrule.sca) #generation of observed ROC object
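For a large simulation, one fast alternative (a sketch that bypasses the ROC library entirely) is to compute the AUC directly from ranks, which is equivalent to the Mann-Whitney statistic and avoids building the full ROC object each time:

fast_auc <- function(scores, labels) {
  # labels: 0/1 (or logical) vector, 1 = event; scores: fitted probabilities
  r  <- rank(scores)                     # mid-ranks handle ties
  n1 <- sum(labels == 1)
  n0 <- sum(labels == 0)
  (sum(r[labels == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
}

# e.g. fast_auc(model$fitted.values, ymis)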
2006 Nov 24
1
How to find AUC in SVM (kernlab package)
Dear all, I was wondering if someone can help me. I am learning SVM for classification in my research with the kernlab package. I want to know about classification performance using the Area Under the Curve (AUC). I know the ROCR package can do this job, but I found that all the examples in the ROCR package already include predictions, for example ROCR.hiv {ROCR}. My problem is how to produce predictions from the SVM and to find
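A minimal sketch of one way to get class probabilities out of a kernlab SVM so that ROCR can compute the AUC, using the spam data shipped with kernlab purely for illustration:

library(kernlab)
library(ROCR)

data(spam)
train <- sample(nrow(spam), 3000)

# prob.model = TRUE lets predict() return class probabilities
fit  <- ksvm(type ~ ., data = spam[train, ], prob.model = TRUE)
prob <- predict(fit, spam[-train, ], type = "probabilities")[, "spam"]

pred <- prediction(prob, spam$type[-train])   # "spam" is the higher factor level
performance(pred, "auc")@y.values[[1]]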
2010 Oct 22
2
Random Forest AUC
Guys, I used Random Forest with a couple of data sets I had, to predict a binary response. In all the cases, the AUC on the training set comes out to be 1. Is this always the case with random forests? Can someone please clarify this? I have given a simple example, first using logistic regression and then using random forests, to explain the problem. The AUC of the random forest is coming out to be
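A training-set AUC of 1 is largely expected with random forests, because predictions on the training rows reuse trees that saw those very rows in their bootstrap samples. A fairer number comes from the out-of-bag votes, which predict() returns when no new data is supplied; a minimal sketch, assuming train.dat holds the predictors plus a factor response y:

library(randomForest)

rf <- randomForest(y ~ ., data = train.dat)

# In-sample predictions: typically give an AUC close to 1
p.train <- predict(rf, train.dat, type = "prob")[, 2]

# Out-of-bag predictions: a more honest estimate of performance
p.oob <- predict(rf, type = "prob")[, 2]
# feed p.train / p.oob to ROCR or pROC to see the difference in AUC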
2012 Dec 19
2
pROC and ROCR give different values for AUC
Packages pROC and ROCR both calculate/approximate the Area Under (Receiver Operator) Curve. However the results are different. I am computing a new variable as a predictor for a label. The new variable is a (non-linear) function of a set of input values, and I'm checking how different parameter settings contribute to prediction. All my settings are predictive, but some are better. The AUC i
2006 Mar 20
1
How to compare areas under ROC curves calculated with ROC R package
I might be missing something, but I thought that AUC was a measure for comparing ROC curves, so there is nothing else needed to "compare" them. The larger the AUC, the higher the correlation between the two variables compared. No other measures or calculations are needed. Jarek Tuszynski
2012 Mar 19
2
by output into data frame
I could do this in various hacky ways, but what's the right way? I have a nice application of the by function, which does what I want. The output looks like this:
> auc_stress
lab.samples.stress$subid: 2
  cortisol amylase
1   919.05  6834.8
------------------------------------------------------------
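One standard way (a sketch, assuming each element of the by output is a one-row summary like the one shown above) to turn a by object into a data frame is to rbind its pieces:

res <- do.call(rbind, auc_stress)                 # stack the per-subject pieces
res <- data.frame(subid = names(auc_stress), res) # keep the grouping variable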
2012 May 11
2
Random forests prediction
Hi all, I have a strange problem when applying RF in R. I have a set of variables with which I obtain an AUC of 0.67. I do have a second set of variables that have an AUC of 0.57. When I merge the first and second set of variables, the AUC becomes 0.64. I would expect the prediction to become better as I add variables that do have some predictive power? This is even more strange as the AUC
2012 Oct 25
2
How to extract auc, specificity and sensitivity
I am running my code in a loop and it does not work but when I run it outside the loop I get the values I want.
n <- 1000  # Sample size
fitglm <- function(sigma, tau) {
  x <- rnorm(n, 0, sigma)
  intercept <- 0
  beta <- 0
  ystar <- intercept + beta*x
  z <- rbinom(n, 1, plogis(ystar))
  xerr <- x + rnorm(n, 0, tau)
  model <- glm(z ~ xerr, family = binomial(logit))
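One common reason code "works outside the loop but not inside" is that auto-printing is suppressed inside functions and loops, so the values have to be returned or printed explicitly. A sketch of how the function body could return the three quantities, using pROC for the AUC (z and model are from the simulation above; the 0.5 cut-off is an assumption):

library(pROC)

r <- roc(z, fitted(model))    # response (0/1), predictor
c(auc  = as.numeric(auc(r)),
  sens = sum(fitted(model) >  0.5 & z == 1) / sum(z == 1),
  spec = sum(fitted(model) <= 0.5 & z == 0) / sum(z == 0))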
2008 Jun 12
1
About Mcneil Hanley test for a portion of AUC!
Dear all I am trying to compare the performances of several methods using the AUC0.1 and not the whole AUC (meaning I wanted to compare two AUCs whose x-axis only goes to 0.1, not 1). I came to know about the Mcneil Hanley test from Bernardo Rangel Tura and I referred to the original paper for the calculation of "r" which is an argument of the function cROC. I can only find the
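For the partial-AUC comparison, one option available nowadays (not the cROC function referred to above) is pROC, which computes a partial AUC over a chosen specificity range and can compare two of them by bootstrap; a sketch with placeholder vectors labels, score1 and score2:

library(pROC)

# Partial AUC for false positive rates 0 to 0.1, i.e. specificity 1 down to 0.9
r1 <- roc(labels, score1, partial.auc = c(1, 0.9), partial.auc.focus = "specificity")
r2 <- roc(labels, score2, partial.auc = c(1, 0.9), partial.auc.focus = "specificity")

roc.test(r1, r2, method = "bootstrap")   # compares the two partial AUCs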
2010 Jun 11
2
Misplacement of Greek letter
Hello. I am trying to get my axis label to read as follows (The symbol) Delta AUC blah blah... then below it...(some other text) The problem is the Delta symbol shows up beside the "(some other text)" rather than the "AUC". Does anyone know how I can get the Delta to remain beside AUC? Here is the actual command, should you care to look at it. par(mar=c(8,8,4,4))
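One way to keep the Delta glued to "AUC" and push the second line underneath is plotmath with atop(); a minimal sketch in which the plot call and label text are placeholders:

par(mar = c(8, 8, 4, 4))
plot(1:10, 1:10, xlab = "", ylab = "")

# atop() stacks the two lines; Delta * " AUC ..." keeps the symbol next to AUC
title(ylab = expression(atop(Delta * " AUC blah blah",
                             "(some other text)")),
      line = 4)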
2009 Jul 24
1
Conditional sorting
Greetings! I am trying to figure out how to order a data frame by one variable conditioned on another. Here is an example of what I have:
d <- data.frame(RUN = rep(1:3, each = 3), ID = 1:9, AUC = runif(9, 1, 100))
> d
 RUN ID  AUC
   1  1 70.2
   1  2 86.5
   1  3 20.1
   2  4 74.3
   2  5 53.6
   2  6 67.6
   3  7 99.9
   3  8 47.3
   3  9 41.3
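order() with several keys handles this directly: sort by RUN first and then, say, decreasingly by AUC within each RUN; a sketch using the data frame above:

d[order(d$RUN, -d$AUC), ]   # by RUN, then descending AUC within each RUN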
2011 Jan 20
2
auc function
Hi there. Suppose I already have sensitivities and specificities. What is a quick R function to calculate the AUC for the ROC plot? There seem to be many R functions to calculate AUC. Thanks. Yulei
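If only the sensitivity/specificity pairs are available, the AUC can be approximated with the trapezoidal rule; a minimal sketch, assuming sens and spec are vectors of equal length covering the whole threshold range:

auc_from_sens_spec <- function(sens, spec) {
  fpr <- 1 - spec
  o   <- order(fpr, sens)        # walk along the ROC curve
  x   <- c(0, fpr[o], 1)         # anchor the curve at (0,0) and (1,1)
  y   <- c(0, sens[o], 1)
  sum(diff(x) * (head(y, -1) + tail(y, -1)) / 2)   # trapezoidal rule
}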
2010 Jan 04
1
Are unpaired data suitable for DiagnosisMed's Diagnosis ?
Dear, I want to compare the AUC generated by two distribution models using the same sample. The AUC for model 1 consists of two columns, column A for 0/1 and column B for probability, each with 3000 rows. The AUC for model 2 consists of two columns, column A for 0/1 and column B for probability, each with 10000 rows. I am wondering what value I should put
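Since the two sets of predictions have different lengths (3000 vs 10000 rows), they cannot be paired row by row; pROC's roc.test can do an unpaired comparison instead, a sketch with model1 and model2 as placeholder data frames following the column layout described above:

library(pROC)

r1 <- roc(model1$A, model1$B)   # column A: 0/1 outcome, column B: probability
r2 <- roc(model2$A, model2$B)

roc.test(r1, r2, paired = FALSE)   # unpaired comparison of the two AUCs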