search for: accuracies

Displaying 20 results from an estimated 1974 matches for "accuracies".

2006 Jan 30
4
Integer bit size and the modulus operator
I am a statistician and I came upon an interesting problem in cryptography. I would like to use R since there are some statistical procedures that I need to use. However, I ran into a problem when using the modulus operator %%. I am using R 2.2.1, and when I calculate the modulus for large numbers (which I need for my problem) R gives me warnings. For instance, if one does: a=1:40; 8^a %% 41 one
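The warnings appear because 8^a stops being exactly representable as a double once it passes 2^53 (from a = 18 onwards), so the remainder from %% is no longer reliable. A minimal sketch of a base-R workaround, with the function name powmod purely illustrative, keeps every intermediate product small:

powmod <- function(base, exp, mod) {
  # Square-and-multiply: every intermediate stays below mod^2, so no precision is lost.
  result <- 1
  base <- base %% mod
  while (exp > 0) {
    if (exp %% 2 == 1) result <- (result * base) %% mod
    base <- (base * base) %% mod
    exp <- exp %/% 2
  }
  result
}
sapply(1:40, function(a) powmod(8, a, 41))  # exact, unlike 8^(1:40) %% 41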
2005 Jan 20
2
Cross-validation accuracy in SVM
...elp. Regards - Ton
---
Parameters:
  SVM-Type: C-classification
  SVM-Kernel: radial
  cost: 8
  gamma: 0.007
Number of Support Vectors: 1015 ( 148 867 )
Number of Classes: 2
Levels: false true
5-fold cross-validation on training data:
  Total Accuracy: 92.24242
  Single Accuracies: 90 93.33333 94.84848 92.72727 90.30303
Contingency Table
            predclasses
origclasses false true
      false  1476    0
      true      4  170
2011 Sep 07
3
[LLVMdev] Proposal: floating point accuracy metadata (OpenCL related)
Hi, This is my proposal to add floating point accuracy support to LLVM. The intention is that the frontend may provide metadata to signal to the backend that it may select a less accurate (i.e. more efficient) instruction to perform a given operation. This is primarily a requirement of OpenCL, which specifies that certain floating point operations may be computed inaccurately. Comments
2009 Mar 27
1
ROCR package finding maximum accuracy and optimal cutoff point
If we use the ROCR package to find the accuracy of a classifier:
pred <- prediction(svm.pred, testset[,2])
perf.acc <- performance(pred, "acc")
Do we find the maximum accuracy as follows (is there a simpler way?):
> max(perf.acc@x.values[[1]])
Then, to find the cutoff point that maximizes the accuracy, do we do the following (is there a simpler way):
> cutoff.list <-
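A hedged sketch of one way to read both numbers off the same performance object; the data below is made up, and it relies on the fact that for the "acc" measure ROCR stores cutoffs in x.values and accuracies in y.values, so which.max pairs them up:

library(ROCR)
set.seed(1)
labels <- rbinom(200, 1, 0.5)
scores <- labels + rnorm(200)   # stand-in for svm.pred
pred <- prediction(scores, labels)
perf.acc <- performance(pred, "acc")
acc <- perf.acc@y.values[[1]]   # accuracy at each cutoff
cut <- perf.acc@x.values[[1]]   # the corresponding cutoffs
best <- which.max(acc)
acc[best]                       # maximum accuracy
cut[best]                       # cutoff that attains it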
2005 Jun 27
2
Numerical accuracy
Hi people, I need to show the good numerical accuracy of R. Does anyone know of a paper or anything else comparing R to other statistical software in terms of numerical accuracy? I've done a long search on this but found nothing. Please help me!! Thanx, Talita Leite
2012 Apr 14
0
[LLVMdev] Representing -ffast-math at the IR level
On 14 April 2012 20:34, Duncan Sands <baldrick at free.fr> wrote:
> the verifier checks that the accuracy operand is either a floating point
> number (ConstantFP) or the keyword "fast". If "Accuracy" is zero here
> then that means it wasn't ConstantFP. Thus it must have been the keyword
> "fast".
I think it's assuming too much. If I write
2013 Jan 18
1
scaling of nonbinROC penalties
Dear R Helpers I am having difficulty understanding how to use the penalty matrix for the nomROC function in package 'nonbinROC'. The documentation says that the values of the penalty matrix code the penalty function L[i,j] in which 0 <= L[i,j] <= 1 for j > i. It gives an example that if we have an ordered response with 4 categories, then we might wish to penalise larger
2012 Apr 14
2
[LLVMdev] Representing -ffast-math at the IR level
Hi Renato,
> I'm not sure about this:
>
> +  if (!Accuracy)
> +    // If it's not a floating point number then it must be 'fast'.
> +    return getFastAccuracy();
>
> Since you allow accuracies bigger than 1 in setFPAccuracy(), integers
> should be treated as float. Or at least assert.
the verifier checks that the accuracy operand is either a floating point number (ConstantFP) or the keyword "fast". If "Accuracy" is zero here then that means it wasn't Constant...
2005 Jan 22
5
Checking accuracy of the output
We've made a little change that altered the output of vorbis. Is there a way to check the accuracy of the output against the original output? I also asked this question on IRC ... Thanks, Tal.
2004 Mar 13
4
nnet classification accuracy vs. other models
I was wondering if anybody has ever tried to compare the classification accuracy of nnet to other (rpart, tree, bagging) models. From what I know, there is no reason to expect a significant difference in classification accuracy between these models, yet in my particular case I get about a 10% error rate for the tree, rpart and bagging models and an 80% error rate for nnet, applied to the same data. Thanks.
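A gap that large usually points at the nnet fit itself, which is sensitive to input scaling, the number of hidden units, weight decay and its random starting weights, rather than at the model class. A minimal sketch of the kind of side-by-side comparison described, run on iris rather than the poster's data:

library(nnet)
library(rpart)
set.seed(1)
idx   <- sample(nrow(iris), 100)
train <- iris[idx, ]
test  <- iris[-idx, ]
fit.nn <- nnet(Species ~ ., data = train, size = 5, decay = 0.01,
               maxit = 500, trace = FALSE)
fit.rp <- rpart(Species ~ ., data = train)
mean(predict(fit.nn, test, type = "class") == test$Species)  # nnet test accuracy
mean(predict(fit.rp, test, type = "class") == test$Species)  # rpart test accuracy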
2010 Nov 09
6
Extending the accuracy of exp(1) in R
Hi, I want to use a more accurate value of exp(1). The value given by R is 2.718282. I want a value which has more than 15 decimal places. Can anyone let me know how I can increase the accuracy level? Actually there are some large multipliers of exp(1) in my whole expression, and I want a more accurate result at the last step of my program, and for that I need to use a highly accurate value
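Two separate things are involved here: R already stores exp(1) to the full 15-16 significant digits of a double and merely prints 7 of them by default, while anything beyond double precision needs an arbitrary-precision package. A sketch, assuming the Rmpfr package (not mentioned in the post) is acceptable:

print(exp(1), digits = 17)    # the full double-precision value
library(Rmpfr)
exp(mpfr(1, precBits = 120))  # about 36 decimal digits of e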
2009 Jan 23
2
The Quality & Accuracy of R
Hi All, We have all had to face skeptical colleagues asking if software made by volunteers could match the quality and accuracy of commercially written software. Thanks to the prompting of a recent R-help thread, I read, "R: Regulatory Compliance and Validation Issues, A Guidance Document for the Use of R in Regulated Clinical Trial Environments (http://www.r-project.org/doc/R-FDA.pdf).
2011 Jun 22
1
caret's Kappa for categorical resampling
Hello, When evaluating different learning methods for a categorization problem with the (really useful!) caret package, I'm getting confusing results from the Kappa computation. The data is about 20,000 rows and a few dozen columns, and the categories are quite asymmetrical, 4.1% in one category and 95.9% in the other. When I train a ctree model as: model <- train(dat.dts,
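With classes as skewed as 4.1% against 95.9%, overall accuracy stays high even for a classifier that always predicts the majority class, while Kappa corrects for that and can sit near zero, which is usually why the two numbers look inconsistent. A minimal sketch of how caret reports both for a resampled model, with rpart on iris standing in for the poster's ctree model and data:

library(caret)
set.seed(1)
ctrl  <- trainControl(method = "cv", number = 5)
model <- train(Species ~ ., data = iris, method = "rpart",
               metric = "Kappa", trControl = ctrl)  # tune on Kappa rather than Accuracy
model$results           # Accuracy and Kappa for each candidate tuning value
confusionMatrix(model)  # cross-validated confusion matrix (in percent)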
2013 Mar 13
1
Accuracy of some classifiers
I am using machine learning for a piece of research. I am using some classifiers with 5-fold CV. I would like to know how it is possible to extract the accuracy, for example for KNN, neural networks and J48, for each one of the 5 folds, because when I apply CV to my classifier I obtain the "mean accuracy" over the 5 folds but the accuracy/error of each individual fold is not returned. Any help is welcome and
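If the wrapper in use only reports the mean, one option is to run the folds by hand and keep each fold's accuracy; a minimal sketch with class::knn standing in for the poster's KNN, neural network and J48 classifiers:

library(class)
set.seed(1)
folds <- sample(rep(1:5, length.out = nrow(iris)))
acc <- numeric(5)
for (i in 1:5) {
  test <- folds == i
  pred <- knn(train = iris[!test, 1:4], test = iris[test, 1:4],
              cl = iris$Species[!test], k = 3)
  acc[i] <- mean(pred == iris$Species[test])
}
acc        # accuracy of each of the 5 folds
mean(acc)  # the single "mean accuracy" usually reported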
2012 Apr 14
9
[LLVMdev] Representing -ffast-math at the IR level
The attached patch is a first attempt at representing "-ffast-math" at the IR level, in fact on individual floating point instructions (fadd, fsub etc). It is done using metadata. We already have an "fpmath" metadata type which can be used to signal that reduced precision is OK for a floating point operation, e.g. %z = fmul float %x, %y, !fpmath !0 ... !0 = metadata
2010 Oct 21
1
Accuracy/Goodness of fit of nnet
Hi R-Helpers, I am working with the nnet package. multinom() has an option for finding the goodness of fit by giving the AIC value. Does nnet also give some value to determine the accuracy? If not, can you guide me to some procedure for figuring out the accuracy/goodness of fit of an nnet model? Thanks in advance.
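As far as I know nnet itself does not report an AIC, but multinom stores one, and a plain classification accuracy (ideally on held-out data) can be computed for either fit; a minimal sketch on iris as a stand-in dataset:

library(nnet)
fit.mn <- multinom(Species ~ ., data = iris, trace = FALSE)
fit.mn$AIC          # the goodness-of-fit value multinom() reports
fit.nn <- nnet(Species ~ ., data = iris, size = 3, trace = FALSE)
mean(predict(fit.nn, iris, type = "class") == iris$Species)  # in-sample accuracy (optimistic)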
2007 Oct 03
1
accuracy of pt for x close to 0
Hello, I have been playing around with the statistical distributions in R, and overall I think the accuracy is very good. However, it seems that for the Student's t distribution, the CDF loses accuracy when evaluated at values close to zero. For instance, I did the following in R ---------------------------------- df<-seq(1,100,by=1)
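One common way accuracy is lost near zero is in the subtraction of 0.5 when the quantity of interest is the probability mass between 0 and a small x; a hedged sketch of the cancellation, with x and df made up rather than taken from the post:

x  <- 1e-12
df <- 5
pt(x, df) - 0.5                      # only a few significant digits survive the subtraction
dt(0, df) * x                        # first-order Taylor value, about 3.796e-13
integrate(dt, 0, x, df = df)$value   # quadrature cross-check, agrees to several digits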
2012 Oct 25
2
How to extract auc, specificity and sensitivity
I am running my code in a loop and it does not work, but when I run it outside the loop I get the values I want.

n <- 1000  # Sample size
fitglm <- function(sigma, tau) {
  x <- rnorm(n, 0, sigma)
  intercept <- 0
  beta <- 0
  ystar <- intercept + beta*x
  z <- rbinom(n, 1, plogis(ystar))
  xerr <- x + rnorm(n, 0, tau)
  model <- glm(z ~ xerr, family = binomial(logit))
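For the extraction itself, a hedged sketch of a small helper that returns the AUC plus sensitivity and specificity at the cutoff nearest 0.5 from ROCR objects; the helper name and the cutoff rule are illustrative, not the poster's code. Inside a for loop the values also need to be printed or stored explicitly, since R's auto-printing only happens at the top level, which is the usual reason code "works outside the loop" but seems to do nothing inside it:

library(ROCR)
roc_stats <- function(scores, labels, cutoff = 0.5) {
  pred <- prediction(scores, labels)
  auc  <- performance(pred, "auc")@y.values[[1]]
  sens <- performance(pred, "sens")
  spec <- performance(pred, "spec")
  i <- which.min(abs(sens@x.values[[1]] - cutoff))  # index of the cutoff closest to 0.5
  c(auc = auc, sens = sens@y.values[[1]][i], spec = spec@y.values[[1]][i])
}
# e.g. at the end of fitglm(): roc_stats(fitted(model), z)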
2012 Nov 27
1
Effect of each term in the accuracy of Nonlinear multivariate regression fitting equation
Dear all, I have a set of data with 4 inputs (independent variables) and one output (dependent variable). I want to perform a regression analysis in order to fit these data to a regression model; however, due to the non-linearity of the model, I do not have a clue which equation to use. I am thinking of starting with a very general equation including ^3 terms and interactions between the variables
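Without a mechanistic model, one place to start is an ordinary linear model that is polynomial in the inputs, since a relationship that is non-linear in the variables can still be linear in the coefficients. A sketch with made-up data (x1..x4 and y are placeholders, not the poster's variables):

set.seed(1)
dat <- data.frame(x1 = runif(200), x2 = runif(200),
                  x3 = runif(200), x4 = runif(200))
dat$y <- with(dat, 2*x1^3 + x2*x3 - x4 + rnorm(200, sd = 0.1))
fit.full <- lm(y ~ polym(x1, x2, x3, x4, degree = 3), data = dat)  # every term up to total degree 3
fit.hand <- lm(y ~ (x1 + x2 + x3 + x4)^2 + I(x1^3) + I(x2^3) +
                 I(x3^3) + I(x4^3), data = dat)                    # two-way interactions plus cubes
AIC(fit.full, fit.hand)  # compare the two specifications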
2007 Oct 27
1
problems in cross validation of SVM in package "e1071"
Hi: I am new to using R for data mining, and I find the "e1071" package an excellent tool for data mining work! What frustrated me recently is that when I use the function "svm" with the "cross=10" parameter, I get all the "accuracies" of the model greater than 1. Shouldn't the accuracy be smaller than 1? So I wonder how the accuracy of the cross-validation is calculated, and what the meaning of the accuracy is. Summary of the trained svm model as follows: > summary(model) Call: svm(formula = y ~ x, ca...
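As far as I can tell nothing is wrong with the model: e1071 reports cross-validation accuracies as percentages (compare the "Total Accuracy: 92.24242" output in the SVM thread above), so values greater than 1 are expected. A minimal sketch on iris; the accuracies and tot.accuracy component names are as I recall them, so str(model) is worth checking:

library(e1071)
set.seed(1)
model <- svm(Species ~ ., data = iris, cross = 10)
summary(model)        # prints total and per-fold cross-validation accuracy, in percent
model$accuracies      # per-fold accuracies, e.g. 93.3 rather than 0.933
model$tot.accuracy    # overall 10-fold accuracy, also in percent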