Displaying 20 results from an estimated 500 matches similar to: "regularized discriminant function analysis using klaR: problems with predictions"
2011 Feb 27
2
regularized dfa rda (Klar): problems with predictions
Dear all, I am trying to do an n-fold cross-validation for a regularized discriminant function analysis using rda from the package klaR. However, I have problems predicting the groups for the test/validation sample. The examples from the R documentation and from some online web pages also do not work. Does anybody know what I have done wrong?
Here is my code:
# I want to use the first 6 observations for
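A minimal sketch (iris as a stand-in data set, arbitrary gamma/lambda values) of the fit-then-predict cycle the post describes:
library(klaR)

set.seed(1)
idx   <- sample(nrow(iris), 120)      # arbitrary training/test split
train <- iris[idx, ]
test  <- iris[-idx, ]

fit  <- rda(Species ~ ., data = train, gamma = 0.05, lambda = 0.2)
pred <- predict(fit, newdata = test)  # 'test' must contain the predictor columns
table(pred$class, test$Species)       # hold-out confusion matrix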
2010 Nov 03
2
[klaR package] [NaiveBayes] warning message numerical 0 probability
Hi,
I run R 2.10.1 under ubuntu 10.04 LTS (Lucid Lynx) and klaR version 0.6-4.
I compute a model over a 2 classes dataset (composed of 700 examples).
To that aim, I use the function NaiveBayes provided in the package
klaR.
When I then use the prediction function predict(my_model, new_data),
I get the following warning:
"In FUN(1:747[[747L]], ...) : Numerical 0 probability with
2007 Oct 03
1
help with stepclass (klaR)
I use Windows, R version 2.5.1
When I try to run stepclass (klaR) I get an error message/warning saying:
1: error(s) in modeling/prediction step in: cv.rate(vars = c(model, tryvar),
data = data, grouping = grouping, ...
Actually, I get 16 warnings of this type. Can anyone tell me what this
means?
Also, it returns only 2 of the 79 variables as important; however, these
variables
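For reference, a sketch of a plain stepclass() call with lda as the wrapped classifier (iris as a stand-in). The quoted warning comes from the internal cross-validation step and often just means that one of the CV fits failed, for example because a fold contained constant or collinear variables:
library(klaR)
library(MASS)   # provides lda()

set.seed(1)
sc <- stepclass(Species ~ ., data = iris, method = "lda",
                direction = "forward", improvement = 0.01, fold = 10)
sc              # print the selection result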
2006 Oct 22
1
Question:shardsplot (package:klaR)
Dear all,
I have a question about the shardsplot example in package klaR (see the example below).
Please tell me the meaning of "logstand <- t((t(logcount) / sdlogcount) *
c(1,2,6,5,5,3))", among other things.
Why does this example use "c(1,2,6,5,5,3)"?
Examples:
# Compute clusters and an Eight Directions Arranged Map for the
# country data. Plotting the result.
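Not an answer to why those particular weights were chosen, but a sketch of what that line does mechanically: t(t(X) / s) divides each column of X by the corresponding element of s (column-wise standardisation), and the multiplication afterwards re-weights the columns before clustering.
X <- matrix(rnorm(30), nrow = 10, ncol = 3)   # toy data, 3 variables
s <- apply(X, 2, sd)
w <- c(1, 2, 6)                               # hypothetical per-column weights

X_std      <- t(t(X) / s)                     # every column now has sd 1
X_weighted <- t(t(X_std) * w)                 # columns rescaled by the weights
apply(X_weighted, 2, sd)                      # equal to w (up to rounding)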
2010 Apr 29
1
randomness in stepclass (klaR) or lda (MASS) ?
Hi,
a colleague ran a stepwise discriminant analysis
twice in a row and got different results, suggesting
some "sochasticity" in the algorithms involved.
I looked at her data and found that there was a lot
of collinearity, so I reckoned that maybe "stepclass"
(klaR) cannot find a clear winner when trying to include a
new variable and makes a random choice. Is that true?
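stepclass() evaluates each candidate variable by cross-validation, and the CV folds are assigned at random, so repeated runs can differ even though lda() itself is deterministic. A sketch of making two runs comparable by fixing the seed (iris as a stand-in):
library(klaR)
library(MASS)

set.seed(42)
run1 <- stepclass(Species ~ ., data = iris, method = "lda", fold = 10)

set.seed(42)
run2 <- stepclass(Species ~ ., data = iris, method = "lda", fold = 10)
# With the same seed the two runs should select the same variables.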
2007 Jun 05
1
klaR stepclass
Hi,
I'm trying to use "stepclass" to do a stepwise variable selection with
method=lda. I keep getting this warning message, which shows up once
for each variable added to the model during variable selection:
Warning message:
error(s) in modeling/prediction step in: cv.rate(vars = c(model,
tryvar), data = data, grouping = grouping,
I don't know how to interpret this warning. I
2009 Jun 30
2
NaiveBayes fails with one input variable (caret and klaR packages)
Hello,
We have a system which creates thousands of regression/classification models and in cases where we have only one input variable NaiveBayes throws an error. Maybe I am mistaken and I shouldn't expect to have a model with only one input variable.
We use R version 2.6.0 (2007-10-03). We use caret (v4.1.19), but have tested similar code with klaR (v.0.5.8), because caret relies on
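A sketch of the one-predictor case with klaR::NaiveBayes (iris as a stand-in; whether the original error reproduces will depend on the package versions involved). One detail that commonly bites here is that df[, 1] drops a one-column data frame to a plain vector, so drop = FALSE is used when building the newdata:
library(klaR)

one_var <- iris[, c("Sepal.Length", "Species")]
fit  <- NaiveBayes(Species ~ Sepal.Length, data = one_var)
pred <- predict(fit, newdata = one_var[, "Sepal.Length", drop = FALSE])
table(pred$class, one_var$Species)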
2010 Oct 05
6
SVM functions
Hi !
Right now I am learning to use the SVM functions available in R and trying to
use these functions with the given examples. I got stuck with the svmlight
function, which is available in the klaR package. Any help regarding
this function would be appreciated.
1. I am unable to use svmlight(), which is available in the klaR package,
although I have downloaded the klaR_0.6-3 package from
2011 Mar 12
1
Stepwise Discriminant... in R
Hello R list,
I'm looking to do some stepwise discriminant function analysis (DFA) based
on the minimization of Wilks' lambda in R to end up with a composite
signature (of metals "Al","Sb","Bi","Cr","Ba") capable of discriminating
100% of the source factors (LANDUSE: "A","B","C").
The Wilks' lambda
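klaR also provides greedy.wilks(), which does forward stepwise variable selection driven by Wilks' lambda, close in spirit to the classic stepwise DFA. A sketch assuming a hypothetical data frame sed holding the metal concentrations and the LANDUSE factor (names taken from the post):
library(klaR)
library(MASS)

gw <- greedy.wilks(LANDUSE ~ Al + Sb + Bi + Cr + Ba, data = sed, niveau = 0.1)
gw$results                           # variables in order of entry, with Wilks' lambda
fit <- lda(gw$formula, data = sed)   # fit the LDA on the selected subset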
2007 Aug 31
1
Question on shardsplot
Dear All,
Would you please tell me how to display the sample numbers on the map?
The commands below don't display the sample numbers (from 1 to 150):
library(som)
library(klaR)
iris.som3 <- som(iris[,1:4], xdim = 14,ydim = 6)
library(klaR); opar<- par(xpd = NA)
shardsplot(iris.som3, data.or = iris,label = TRUE)
legend(3.5,14.3, col = rainbow(3), xjust =0.5, yjust = 0,legend =
2005 Jul 19
1
a possible bug in svmlight (PR#8012)
When I used svmlight, I got the error below.
My command is:
foo <- svmlight(y~., data= myData)
the results:
Error in file(con, "r") : unable to open connection
In addition: Warning messages:
1: svm_learn not found
2: cannot open file '_model_1.txt'
> myData[1:2,]
y X1 X2 X3 X4 X5 X6 X7 X8 X9 X10 X11 X12 X13 X14 X15 X16 X17
1 1 63 1 0 0 145 233 1 1 0 150 0 2.3 1
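"svm_learn not found" suggests that the external SVMlight binaries are not installed or not visible to R; klaR::svmlight() is only a wrapper around them and does not contain the SVM code itself. A quick sanity check from within R:
Sys.which("svm_learn")      # should return a non-empty path
Sys.which("svm_classify")   # likewise

# If these come back empty, install SVMlight and either put the binaries on the
# PATH or see ?svmlight for how to tell the wrapper where they live.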
2005 Jul 05
1
Getting runtime error in stepclass
Hi!
I got the following runtime error when I tried to use the svm method with
stepclass.
Error in "colnames<-"(`*tmp*`, value = c("0", "1")) :
attempt to set colnames on object with less than two dimensions
I repeated the same sequence of statements, but this time I used the
classification function from the example, i.e., "lda", and it worked
fine
2010 Mar 09
1
create picture (k -the nearest neighbours)
Hi
I want to create a nice picture of the result of my k-nearest-neighbours
algorithm. Here is my simple code:
#################################
library(klaR)
library(ipred)
library(mlbench)
data(PimaIndiansDiabetes2)
dane=na.omit(PimaIndiansDiabetes2)[,c(2,5,9)]
dane[,2]=log(dane[,2])
dane[,1:2]=scale(dane[,1:2])
zbior.uczacy=sample(1:nrow(dane),nrow(dane)/2,F)
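A sketch (continuing from the code above, but using class::knn rather than a klaR function) of one common way to draw a 2-D k-NN decision picture: classify a fine grid of points, colour the grid, then overlay the training data.
library(class)

train <- dane[zbior.uczacy, ]
grid  <- expand.grid(
  x = seq(min(dane[, 1]), max(dane[, 1]), length.out = 150),
  y = seq(min(dane[, 2]), max(dane[, 2]), length.out = 150)
)
grid$pred <- knn(train[, 1:2], grid, cl = train[, 3], k = 5)

plot(grid$x, grid$y, col = as.integer(grid$pred), pch = ".",
     xlab = names(dane)[1], ylab = names(dane)[2])
points(train[, 1], train[, 2], col = as.integer(train[, 3]), pch = 19)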
2011 Jul 07
1
Naive Bayes Classifier
Hi,
Currently I am testing packages that contain built-in features for
classification. I looked at packages such as e1071, klaR, caret and
CORElearn. However, I noticed that when building a naive Bayes
classifier, these packages use a finite mixture model to estimate
P(x | C), assuming a normal distribution. In my research I use binary data
and I want to model P(x | C), e.g. the
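One way to avoid the Gaussian assumption for binary predictors is to code them as factors: e1071::naiveBayes() then estimates conditional probability tables for those variables instead of fitting normal densities (klaR::NaiveBayes treats factor columns in a comparable way). A toy sketch with made-up binary data:
library(e1071)

set.seed(1)
d <- data.frame(
  y  = factor(sample(c("pos", "neg"), 200, replace = TRUE)),
  x1 = factor(rbinom(200, 1, 0.4)),
  x2 = factor(rbinom(200, 1, 0.6))
)

fit <- naiveBayes(y ~ ., data = d)
fit$tables$x1                 # P(x1 | class) as a table, not a normal density
predict(fit, d[1:5, -1])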
2010 May 19
1
Re: Re: Xen-4 PVUSB kernel bug / Xenlinux 2.6.32
On Fri, May 14, 2010 at 09:10:57PM +0200, Peter Klar wrote:
> As the bug seems to be related to the SLAB allocator, the dump says 'kernel
> BUG at mm/slub.c:2969!', I also recompiled the kernel using the SLAB instead
> of SLUB allocator, but this does not make any difference, the behaviour is
> the same (beside the dump then reports a bug within slab.c instead of
2004 Nov 05
1
Lda versus Rda
Hello,
I used the lda function from the MASS (VR) package and the rda function
from the klaR package.
I wanted to compare the results of these two functions by using the same
training set.
Thus, I used the rda function with lambda=1 and gamma=0, which should emulate
the lda function, so I should obtain the same result.
But this is not the case; the two results are very different.
My training set is
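A sketch of that comparison on the iris data; even if lambda = 1 and gamma = 0 make rda behave like a pooled-covariance (LDA-type) classifier, the two fitted objects are not directly comparable, so comparing the predicted classes is the more robust check:
library(MASS)
library(klaR)

fit_lda <- lda(Species ~ ., data = iris)
fit_rda <- rda(Species ~ ., data = iris, lambda = 1, gamma = 0)

p_lda <- predict(fit_lda, iris)$class
p_rda <- predict(fit_rda, iris)$class
table(p_lda, p_rda)           # agreement between the two sets of predictions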
2009 Oct 27
1
"ipredknn" - How may I find values?
Hi everybody!
I want to find the closest neighbouring observations. This is my code:
##########################
library(klaR)
library(ipred)
library(mlbench)
data(PimaIndiansDiabetes2)
dane=na.omit(PimaIndiansDiabetes2)[,c(2,5,9)]
dane[,2]=log(dane[,2])
dane[,1:2]=scale(dane[,1:2])
zbior.uczacy=sample(1:nrow(dane),nrow(dane)/2,F)
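A base-R sketch (continuing from the code above) of looking up the nearest training observation for each remaining row: compute the pairwise distances and take which.min per query row.
train <- dane[zbior.uczacy, 1:2]
query <- dane[-zbior.uczacy, 1:2]

d  <- as.matrix(dist(rbind(query, train)))
dq <- d[seq_len(nrow(query)), nrow(query) + seq_len(nrow(train))]  # query rows vs train columns

nearest <- apply(dq, 1, which.min)   # index into 'train' of the closest neighbour
head(train[nearest, ])               # the neighbouring observations themselves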
2005 Nov 28
1
GLMM: measure for significance of random variable?
Hi,
I have three questions concerning GLMMs.
First, I'm looking for a measure of the significance of the random effect in a GLMM.
I'm fitting a GLMM (lmer) to telemetry locations of 12 wildcat individuals against random locations (binomial response). The individual is the random effect. Now I want to know whether the individual ("TIER") has a significant effect on the model
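One commonly used (though debated) check is a likelihood-ratio test of the model with the random intercept against the same model without it. A sketch in current lme4 syntax, with hypothetical variable names (used, habitat, wildcat) standing in for the poster's data; note that the p-value is conservative because the null hypothesis puts the variance on the boundary of its parameter space:
library(lme4)

m1 <- glmer(used ~ habitat + (1 | TIER), data = wildcat, family = binomial)
m0 <- glm(used ~ habitat, data = wildcat, family = binomial)

lrt <- as.numeric(2 * (logLik(m1) - logLik(m0)))
pchisq(lrt, df = 1, lower.tail = FALSE)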
2007 Jul 27
3
(PR#9811) sequence(c(2, 0, 3)) produces surprising results,
This is as documented, and I think you could say the same thing of seq().
BTW, sequence() allows negative inputs, and I don't think you want
sum(input) in that case.
I've never seen the point of sequence(), but it has been around in R for a
long time. It is used in packages eRm, extRemes, hydrosanity, klaR, seas.
Who knows what people have in private code, so I don't see any
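For context, a quick illustration of the behaviour under discussion: a zero in the input contributes an empty run, so sequence(c(2, 0, 3)) concatenates seq_len(2), seq_len(0) and seq_len(3).
sequence(c(2, 0, 3))
#> [1] 1 2 1 2 3
unlist(lapply(c(2, 0, 3), seq_len))
#> [1] 1 2 1 2 3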