similar to: prediction.strength in r package fpc

Displaying 20 results from an estimated 300 matches similar to: "prediction.strength in r package fpc"

2008 Dec 17
1
bug (?!) in "pam()" clustering from fpc package ?
Hello all. I wish to run k-means with "manhattan" distance. Since this is not supported by the function "kmeans", I turned to the "pam" function in the "fpc" package. Yet, when I tried to have the algorithm run from different starting points, I found that pam ignores them and keeps starting the algorithm from the same starting points (medoids). For my
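For reference, pam() itself lives in the cluster package (fpc's pamk() is a wrapper around it), and it accepts both a manhattan metric and user-chosen starting medoids. A minimal sketch, not the poster's code:
    library(cluster)
    set.seed(1)
    x <- matrix(rnorm(100 * 2), ncol = 2)
    # the build phase picks the starting medoids automatically
    fit1 <- pam(x, k = 3, metric = "manhattan")
    # 'medoids' takes k row indices of x as user-chosen starting medoids;
    # the swap phase may still move away from them
    fit2 <- pam(x, k = 3, metric = "manhattan", medoids = c(5, 20, 60))
    fit1$id.med
    fit2$id.med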
2009 Feb 18
0
Index-G1 error
I am using some functions from package clusterSim to evaluate the best cluster layout. Here is the features vector I am using to cluster 12 signals: > alpha.vec [1] 0.8540039 0.8558350 0.8006592 0.8066406 0.8322754 0.8991699 0.8212891 [8] 0.8815918 0.9050293 0.9174194 0.8613281 0.8425293 In the following I pasted an excerpt of my program:
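If the error comes from computing the G1 (Calinski-Harabasz) index, a stripped-down call might look like the sketch below; this assumes clusterSim's index.G1(x, cl), which wants a data matrix or data frame plus an integer vector of cluster labels of the same length.
    library(clusterSim)
    alpha.vec <- c(0.8540039, 0.8558350, 0.8006592, 0.8066406, 0.8322754, 0.8991699,
                   0.8212891, 0.8815918, 0.9050293, 0.9174194, 0.8613281, 0.8425293)
    x  <- matrix(alpha.vec, ncol = 1)          # 12 signals, one feature each
    cl <- kmeans(x, centers = 2)$cluster       # any partition of the 12 signals
    index.G1(x, cl)                            # G1 (Calinski-Harabasz) index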
2002 Apr 09
1
Problem handling NA indexes for character matrixes (PR#1447)
In a package I've been developing for manipulating genetic data I discovered a problem when indexing into character arrays using NA's: Create a character matrix and a numeric matrix > cmat <- matrix( letters[1:4], ncol=2, nrow=2) > nmat <- matrix( 1:4, ncol=2, nrow=2) Create an index vector containing an NA value > indvec <- c(1,2,NA) Indexing works fine for both
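The report boils down to a few lines; run as-is they reproduce the setup (in current R releases both the numeric and the character case return a row of NAs for the NA index, so the difference is mainly of historical interest):
    cmat <- matrix(letters[1:4], ncol = 2, nrow = 2)   # character matrix
    nmat <- matrix(1:4, ncol = 2, nrow = 2)            # numeric matrix
    indvec <- c(1, 2, NA)                              # index vector containing an NA
    nmat[indvec, ]                                     # numeric case
    cmat[indvec, ]                                     # character case from the PR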
2006 Apr 05
1
"partitioning cluster function"
Hi All, For the function "bclust" (e1071), the argument "base.method" is explained as "must be the name of a partitioning cluster function returning a list with the same components as the return value of 'kmeans'". In my understanding, there are three partitioning cluster functions in R, which are "clara, pam, fanny". Then I checked each of them to
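Worth noting: clara, pam and fanny return components named clustering and medoids rather than kmeans-style cluster and centers, so they do not satisfy bclust's requirement as-is, whereas the default already does. A minimal sketch with the stock setting (simulated data, not the poster's):
    library(e1071)
    set.seed(1)
    x  <- matrix(rnorm(200 * 3), ncol = 3)
    # base.method must name a function returning kmeans-like components
    # ($cluster, $centers); the default "kmeans" qualifies
    bc <- bclust(x, centers = 4, base.method = "kmeans", base.centers = 20)
    table(bc$cluster)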
2011 Jan 15
1
[Bug 33159] New: Lock up on GeForce 9600 GT
https://bugs.freedesktop.org/show_bug.cgi?id=33159 Summary: Lock up on GeForce 9600 GT Product: xorg Version: unspecified Platform: x86-64 (AMD64) OS/Version: Linux (All) Status: NEW Severity: major Priority: medium Component: Driver/nouveau AssignedTo: nouveau at lists.freedesktop.org
2008 Aug 01
2
Exporting data to a text file
Hi R users, With the clara function I get a data frame (maybe this is not the exact word, I'm new to R) with the following variables: > names(myclara) [1] "sample" "medoids" "i.med" "clustering" "objective" [6] "clusinfo" "diss" "call" "silinfo" "data" I want to
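A minimal sketch of one way to do the export (using the ruspini demo data that ships with the cluster package; the file name is arbitrary): bind the clustering vector to the original data and write it out with write.table().
    library(cluster)
    myclara <- clara(ruspini, k = 4)
    out <- data.frame(ruspini, cluster = myclara$clustering)
    write.table(out, file = "clara_clusters.txt",
                sep = "\t", quote = FALSE, row.names = TRUE)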
2010 Oct 25
1
re-vertical conversion of data entries
Dear R user, Can you please help me. How do I convert part of a cluster analysis output under the heading "Clustering vector", as shown below, showing the clusters to which each respondent belongs: [1] 1 1 2 2 1 2 1 2 1 1 2 2 1 2 2 2 2 1 1 1 1 2 2 1 2 2 1 2 2 2 2 2 2 2 2 1 2 [38] 2 1 1 2 2 2 2 2 1 2 1 2 2 2 2 1 2 1 2 2 1 2 2 2 2 2 2 1 2 1 2 2 2 1 1 2 2 [75] 2 1 2 2 2 2 2 2 2 1 1 2
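One way to get that vector into a vertical, one-row-per-respondent layout is sketched below; 'fit' stands in for whatever clustering object produced the output above.
    cl   <- fit$clustering                    # or fit$cluster for kmeans()
    vert <- data.frame(respondent = seq_along(cl), cluster = cl)
    head(vert)                                # one row per respondent
    write.table(vert, file = "clusters_vertical.txt", sep = "\t", row.names = FALSE)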
2009 Jun 29
0
Naive knn question
Dear list, I have two dissimilarity matrices, one for a training data set which I then clustered using PAM. The second is a diss matrix for a validation data set (an independent field sample). I have been trying to use knn to compute distances between the validation data set and the 6 medoids of the training data defined by PAM. I continue to get error messages regarding either the
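Since class::knn() expects raw feature vectors rather than a dissimilarity matrix, one workaround is to skip knn and assign each validation observation to its nearest medoid straight from the dissimilarities. A rough sketch, where diss.val.med is a hypothetical (n validation x 6) matrix of dissimilarities to the 6 training medoids:
    assigned <- apply(diss.val.med, 1, which.min)   # nearest training medoid per row
    table(assigned)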
2008 Feb 22
2
Looping and Pasting
Hello R-community: Much of the time I want to use loops to look at graphs, etc. For example, I have 25 plots, for which the names are m.1$medoids, m.2$medoids, ..., m.25$medoids. I want to index the object number (1:25) as below (just to show concept). for (i in 1:25){ plot(m.i$medoids) } I've tried the following, with negative results for ...
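The usual fix is to build the object name as a string and fetch the object with get(), since m.i inside the loop is taken literally. A small sketch:
    for (i in 1:25) {
      obj <- get(paste0("m.", i))          # retrieves m.1, m.2, ..., m.25
      plot(obj$medoids, main = paste0("m.", i, "$medoids"))
    }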
2004 Jun 29
1
give PAM my own medoids
Hello, When using PAM (partitioning around medoids), I would like to skip the build step and give the function my own medoids. Do you know if it is possible, and how? Thank you very much. Isabel
2005 Jun 07
1
Specifying medoids in PAM?
I am using the PAM algorithm in the cluster package. When I allow PAM to seed the medoids using the default "build" algorithm things work well: > pam(stats.table, metric="euclidean", stand=TRUE, k=5) But I have some clusters from a hierarchical analysis that I would like to use as seeds for the PAM algorithm. I can't figure out what the medoids argument wants. When I put in the
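The medoids argument of pam() wants a vector of k row indices into the data, so one crude way to seed from a hierarchical solution is to pick one member per hclust group. A sketch under the assumption that stats.table is the (numeric) data used above:
    library(cluster)
    hc    <- hclust(dist(stats.table), method = "ward.D2")
    grp   <- cutree(hc, k = 5)
    seeds <- sapply(split(seq_along(grp), grp), `[`, 1)   # first member of each group
    pam(stats.table, k = 5, metric = "euclidean", stand = TRUE,
        medoids = unname(seeds))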
2009 Mar 29
1
[cluster package question] What is the "sum of the dissimilarities" in the pam command ?
Hello Martin Maechler and All, A simple question (I hope): how can I compute the "sum of the dissimilarities" that appears in the output of the pam command (from the cluster package)? Is it the "manhattan" distance (such as the one implemented by "dist")? I am asking since I am running clustering on a dataset. I found 7 medoids with the pam command, and from it I have the
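Two points that may help: pam() uses Euclidean dissimilarities unless metric = "manhattan" (or a precomputed dist) is supplied, and the per-cluster sums can be reproduced by hand from the medoid indices. A sketch on the ruspini demo data:
    library(cluster)
    fit <- pam(ruspini, k = 4)                        # default metric: "euclidean"
    d   <- as.matrix(dist(ruspini))                   # same dissimilarities by hand
    sums <- sapply(seq_along(fit$id.med), function(k)
      sum(d[fit$clustering == k, fit$id.med[k]]))     # sum of diss. to each medoid
    sums
    fit$clusinfo[, "size"] * fit$clusinfo[, "av_diss"]   # compare per-cluster values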
2024 Sep 17
1
Getting individual co-ordinate points in k medoids cluster
Hello I am using k-medoids in R to generate sets of clusters for datasets through time. I can plot the individual clusters OK, but what I cannot find is a way of pulling out the co-ordinates of the individual points in the cluster diagrams - none of the kmed$... info sets seems to be this. Beneath is an example of a k-medoid program using the built-in USArrests dataset - this is not the data I am
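If the plots come from clusplot() (the cluster package's plot for pam objects fitted to raw data), the point positions are essentially the first two principal components, so roughly the same coordinates can be recovered alongside the cluster labels. A sketch with USArrests, as in the post:
    library(cluster)
    kmed   <- pam(scale(USArrests), k = 4)
    pc     <- prcomp(scale(USArrests))                 # approximately the plot's 2-D projection
    coords <- data.frame(pc$x[, 1:2], cluster = kmed$clustering)
    head(coords)                                       # x/y per state plus its cluster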
2011 May 16
1
pam() clustering for large data sets
Hello everyone, I need to do k-medoids clustering for data which consists of 50,000 observations. I have computed distances between the observations separately and tried to use those with pam(). I got the "cannot allocate vector of length" error and I realize this job is too memory intensive. I am at a bit of a loss on what to do at this point. I can't use clara(), because I
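A quick back-of-the-envelope calculation shows why the full dissimilarity approach breaks down at n = 50,000 (and pam() needs additional working memory on top of this):
    n <- 50000
    n * (n - 1) / 2 * 8 / 1024^3    # ~9.3 GB just for one lower-triangle dist of doubles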
2011 Mar 31
1
Cluster analysis, factor variables, large data set
Dear R helpers, I have a large data set with 36 variables and about 50,000 cases. The variables represent labour market status during 36 months; there are 8 different variable values (e.g. Full-time Employment, Student, ...). Only cases with at least one change in labour market status are included in the data set. To analyse subsets of the data, I have used daisy in the cluster package to create
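One commonly used compromise is sketched below: compute Gower dissimilarities with daisy() on a random sub-sample of the factor data and run pam() on that, rather than on all 50,000 cases at once. Object names are placeholders, not the poster's.
    library(cluster)
    set.seed(1)
    idx   <- sample(nrow(lab.data), 2000)         # 'lab.data': the 50,000 x 36 factor data
    d.sub <- daisy(lab.data[idx, ], metric = "gower")
    fit   <- pam(d.sub, k = 6, diss = TRUE)
    table(fit$clustering)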
2015 Apr 29
2
amount of data
Hello. Instead of using cluster analyses that involve distances, I would try k-means or pam (partitioning around medoids) but working with samples; the clara function in the cluster package can help you. I am pasting the Details section of the 'clara' help page: Details clara is fully described in chapter 3 of Kaufman and Rousseeuw (1990). Compared to other partitioning methods such as pam,
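A minimal illustration of that sampling idea (simulated data; a real call would point at your own matrix): clara() draws several sub-samples, clusters each with a pam-like step, and keeps the best set of medoids.
    library(cluster)
    set.seed(1)
    big <- matrix(rnorm(100000 * 4), ncol = 4)    # stand-in for a large data set
    fit <- clara(big, k = 5, samples = 50, sampsize = 200)
    fit$medoids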
2011 Aug 10
4
Clustering Large Applications..sort of
Hello all, I am using the clustering functions in R in order to work with large masses of binary time series data, however the clustering functions do not seem able to fit this size of practical problem. The 'hclust' function is good (though it may be sub-par for this size of problem, thus doubly poor for this application) in that I do not want to make assumptions about the number of
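For binary series specifically, dist(..., method = "binary") gives a Jaccard-type dissimilarity, and cutting the hclust tree by height avoids fixing the number of clusters in advance. A small sketch on simulated data:
    set.seed(1)
    bin <- matrix(rbinom(500 * 30, 1, 0.3), nrow = 500)   # 500 binary series of length 30
    hc  <- hclust(dist(bin, method = "binary"), method = "average")
    groups <- cutree(hc, h = 0.5)                          # cut by height, not by k
    table(groups)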
2006 Apr 10
2
passing known medoids to clara() in the cluster package
Greetings, I have had good success using the clara() function to perform a simple cluster analysis on a large dataset (1 million+ records with 9 variables). Since the clara function is a wrapper around pam(), which will accept known medoid data, I am wondering if this too is possible with clara() ... The documentation does not suggest that this is possible. Essentially I am trying to
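As the poster notes, clara()'s documentation offers no way to pass medoids in; but if the medoids are already known, the remaining work is just assignment, which scales to millions of rows without any dissimilarity matrix. A rough sketch with placeholder objects (big.data: numeric, 1 million x 9; my.medoids: k x 9):
    nearest <- apply(big.data, 1, function(row)
      which.min(colSums((t(my.medoids) - row)^2)))   # squared Euclidean to each medoid
    table(nearest)                                    # simple but not fast; fine as a one-off pass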
2006 Mar 17
0
(no subject)
Hi there, I notice that some of the clustering methods in R are not appropriate for dealing with large data sets. Here is the list I made to see which are appropriate and which are not for large datasets. Could you please take a look and check whether it is right or not? I need this information to decide which methods I should choose. Thank you! P.S.: List: Appropriate for large
2003 Dec 24
0
Is there an R or S implementation of PAMSIL or PAMMEDSIL
I have some data that is dwarfed by one large cluster. I came across a paper titled "A New Partitioning Around Medoids Algorithm" (van der Laan, Pollard & Bryan, 2002) http://www.bepress.com/ucbbiostat/paper105/ that describes PAMSIL and PAMMEDSIL, which look as though they might be more appropriate for the data I have. There does not appear to be much out there describing