similar to: Getting individual co-ordinate points in k medoids cluster

Displaying 20 results from an estimated 900 matches similar to: "Getting individual co-ordinate points in k medoids cluster"

2006 Apr 10
2
passing known medoids to clara() in the cluster package
Greetings, I have had good success using the clara() function to perform a simple cluster analysis on a large dataset (1 million+ records with 9 variables). Since the clara() function is a wrapper around pam(), which will accept known medoid data, I am wondering whether this is also possible with clara() ... The documentation does not suggest that it is. Essentially I am trying to
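A quick check from the R prompt (nothing here beyond what the cluster package documents): pam() takes starting medoids through its medoids argument, whereas clara()'s argument list has no way to supply them (its medoids.x, where present, is only a logical controlling whether the medoid rows are returned).

    library(cluster)
    "medoids" %in% names(formals(pam))     # TRUE: pam() accepts starting medoids
    "medoids" %in% names(formals(clara))   # FALSE: clara() does not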
2004 Jun 29
1
give PAM my own medoids
Hello, When using PAM (partitioning around medoids), I would like to skip the build step and give the function my own medoids. Do you know if this is possible, and how? Thank you very much. Isabel
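pam() does accept this through its medoids argument, which takes k row indices that are used in place of the build step; a minimal sketch on made-up data:

    library(cluster)
    set.seed(1)
    x <- matrix(rnorm(100 * 2), ncol = 2)    # toy data
    ## supply 3 observation indices as starting medoids, skipping the build phase
    fit <- pam(x, k = 3, medoids = c(10, 50, 90))
    fit$id.med                               # final medoid indices after the swap phase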
2005 Jun 07
1
Specifying medoids in PAM?
I am using the PAM algorithm in the CLUSTER library. When I allow PAM to seed the medoids using the default build algorithm things work well: > pam(stats.table, metric="euclidean", stand=TRUE, k=5) But I have some clusters from a hierarchical analysis that I would like to use as seeds for the PAM algorithm. I can't figure out what the medoids argument wants. When I put in the
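The medoids argument wants k integer row indices of the data, so the hierarchical clusters first have to be reduced to one representative row each. A hedged sketch of that step (stats.table and k = 5 are taken from the post; the toy matrix and the "closest to the cluster mean" rule are only one possible choice):

    library(cluster)
    set.seed(42)
    stats.table <- matrix(rnorm(100 * 4), ncol = 4)          # stand-in data

    hc  <- hclust(dist(stats.table), method = "ward.D2")
    grp <- cutree(hc, k = 5)

    ## pick, per hierarchical cluster, the row closest to that cluster's mean
    seed.idx <- sapply(split(seq_len(nrow(stats.table)), grp), function(i) {
      ctr <- colMeans(stats.table[i, , drop = FALSE])
      i[which.min(rowSums(sweep(stats.table[i, , drop = FALSE], 2, ctr)^2))]
    })

    fit <- pam(stats.table, k = 5, metric = "euclidean", stand = TRUE,
               medoids = as.integer(seed.idx))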
2008 Feb 22
2
Looping and Pasting
Hello R-community: Much of the time I want to use loops to look at graphs, etc. For example, I have 25 plots, for which the names are m.1$medoids, m.2$medoids, ..., m.25$medoids. I want to index the object number (1:25) as below (just to show concept). for (i in 1:25){ plot(m.i$medoids) } I've tried the following, with negative results for ...
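The usual fix is to build the object name as a string and fetch it with get(); a sketch assuming objects m.1 ... m.25 exist in the workspace, each with a $medoids component:

    for (i in 1:25) {
      m <- get(paste0("m.", i))               # look the object up by constructed name
      plot(m$medoids, main = paste0("m.", i))
    }
    ## a tidier long-term fix is to keep the 25 fits in a list and loop over that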
2009 Mar 29
1
[cluster package question] What is the "sum of the dissimilarities" in the pam command ?
Hello Martin Maechler and All, A simple question (I hope): how can I compute the "sum of the dissimilarities" that appears in the pam command (from the cluster package)? Is it the "manhattan" distance (such as the one implemented by "dist")? I am asking since I am running clustering on a dataset. I found 7 medoids with the pam command, and from it I have the
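The dissimilarities pam() sums are whatever its metric argument (or a user-supplied dist object) defines, so they can be reproduced with dist() using the matching method; a small sketch on made-up data:

    library(cluster)
    set.seed(2)
    x <- matrix(rnorm(30 * 2), ncol = 2)

    ## pam() on raw data with metric = "manhattan" uses the same dissimilarities
    ## as dist(x, method = "manhattan"); feeding those in directly should agree
    fit.raw  <- pam(x, k = 3, metric = "manhattan")
    fit.diss <- pam(dist(x, method = "manhattan"), k = 3, diss = TRUE)
    identical(unname(fit.raw$clustering), unname(fit.diss$clustering))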
2008 Dec 17
1
bug (?!) in "pam()" clustering from fpc package ?
Hello all. I wish to run k-means with the "manhattan" distance. Since this is not supported by the function "kmeans", I turned to the "pam" function in the "fpc" package. Yet, when I tried to have the algorithm run with different starting points, I found that pam ignores them and keeps starting the algorithm from the same starting points (medoids). For my
2015 Apr 29
2
amount of data
Hi. Instead of using cluster analyses that involve distances, I would try k-means or pam (partitioning around medoids) but working on samples; the clara function in the cluster library can help you. I am pasting the Details section of the 'clara' help page: Details clara is fully described in chapter 3 of Kaufman and Rousseeuw (1990). Compared to other partitioning methods such as pam,
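A minimal clara() call along those lines (the data frame name big.df and the settings are only placeholders):

    library(cluster)
    ## clara() repeatedly clusters sub-samples with pam() and keeps the best
    ## medoid set, so memory use stays modest even for very large data
    fit <- clara(big.df, k = 4, metric = "euclidean",
                 samples = 50, sampsize = 500, pamLike = TRUE)
    fit$medoids
    table(fit$clustering)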
2011 May 16
1
pam() clustering for large data sets
Hello everyone, I need to do k-medoids clustering for data which consists of 50,000 observations. I have computed distances between the observations separately and tried to use those with pam(). I got the "cannot allocate vector of length" error and I realize this job is too memory intensive. I am at a bit of a loss on what to do at this point. I can't use clara(), because I
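The memory arithmetic explains the error: even the lower triangle of a 50,000 x 50,000 dissimilarity matrix is roughly 9 GiB of doubles before pam() makes its own working copies. A quick back-of-the-envelope check:

    n <- 50000
    n * (n - 1) / 2 * 8 / 2^30   # GiB for the lower-triangle dist object: ~9.3
    ## clara() avoids this by never forming the full n x n dissimilarity matrix,
    ## but it needs the raw observations rather than a precomputed dist object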
2020 May 12
3
Graphics: how to keep labels from overlapping
Hello, I am doing a PCA with the ade4 package. In the plot, the species labels tend to end up on top of one another and cannot be distinguished individually. Is there any way to create a bit of space between them so that all of them can be seen? Thanks, Yésica [[alternative HTML version deleted]]
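One low-tech option, once the scores are extracted, is to plot the points and place smaller, offset labels yourself; a sketch assuming a score matrix called scores with species names as row names (the name scores is a placeholder, e.g. the row coordinates $li of a dudi.pca result):

    plot(scores[, 1], scores[, 2], pch = 16, cex = 0.6,
         xlab = "PC1", ylab = "PC2")
    ## offset the labels to the right of each point and shrink them
    text(scores[, 1], scores[, 2], labels = rownames(scores),
         pos = 4, offset = 0.3, cex = 0.6)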
2009 Feb 18
0
Index-G1 error
I am using some functions from package clusterSim to evaluate the best cluster layout. Here is the features vector I am using to cluster 12 signals: > alpha.vec [1] 0.8540039 0.8558350 0.8006592 0.8066406 0.8322754 0.8991699 0.8212891 [8] 0.8815918 0.9050293 0.9174194 0.8613281 0.8425293 Below I have pasted an excerpt of my program:
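clusterSim's index.G1() expects a data matrix plus a clustering vector, so a one-variable feature like alpha.vec has to be wrapped in a one-column matrix first; a hedged sketch of scanning a few values of k (alpha.vec is the vector from the post):

    library(cluster)
    library(clusterSim)
    x  <- matrix(alpha.vec, ncol = 1)       # index.G1() wants a matrix, not a bare vector
    g1 <- sapply(2:6, function(k) index.G1(x, pam(x, k = k)$clustering))
    names(g1) <- 2:6
    g1                                      # larger G1 values suggest a better partition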
2015 Apr 29
2
amount of data
The drawback with k-means is that the number of segments has to be defined in advance, and that is something I don't have. Javier's solution seems to me to be the only option. Regards, Ricardo Alva Valiente -----Original Message----- From: R-help-es [mailto:r-help-es-bounces en r-project.org] On behalf of javier.ruben.marcuzzi en gmail.com Sent: Wednesday, April 29,
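When the number of segments is unknown, a common workaround is to scan several values of k and score each partition, for example with the average silhouette width; a sketch on a placeholder data matrix x:

    library(cluster)
    ks  <- 2:8
    asw <- sapply(ks, function(k) pam(x, k = k)$silinfo$avg.width)
    best.k <- ks[which.max(asw)]   # k with the widest average silhouette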
2010 Jun 07
1
classification algorithms with distance matrix
Dear all, I have a problem when using some classification functions (Kmeans, PAM, FANNY...) with a distance matrix, and I would like to understand how the positioning of the centroids proceeds after one execution step. In fact, in the classical formulation of the algorithm, after each step, to re-position the center it calculates the distance between every element of the old cluster and its
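For PAM the answer is that with only a dissimilarity matrix there is nothing to average, so each "center" is re-positioned by picking the cluster member with the smallest total dissimilarity to the others; those chosen observations are reported in $id.med, as this sketch on made-up data shows:

    library(cluster)
    set.seed(3)
    x <- matrix(rnorm(60 * 2), ncol = 2)
    d <- dist(x, method = "manhattan")

    fit <- pam(d, k = 3)
    fit$id.med        # row numbers of the observations serving as medoids
    x[fit$id.med, ]   # unlike k-means centroids, these are actual data points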
2015 Apr 29
2
amount of data
Good contribution ... excellent!! Regards, Ricardo Alva Valiente From: Jose Luis Cañadas Reche [mailto:canadasreche en gmail.com] Sent: Wednesday, April 29, 2015 12:51 PM To: Alva Valiente, Ricardo (RIAV); 'javier.ruben.marcuzzi en gmail.com'; R-help-es en r-project.org Subject: Re: [R-es] amount of data You could run several k-means with different numbers of clusters and check how
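A compact version of that suggestion, plotting the total within-cluster sum of squares against k to look for an elbow (x is a placeholder data matrix):

    wss <- sapply(2:10, function(k) kmeans(x, centers = k, nstart = 25)$tot.withinss)
    plot(2:10, wss, type = "b", xlab = "number of clusters k",
         ylab = "total within-cluster sum of squares")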
2002 May 08
3
Inputting Co-ordinates
Hello, I am trying to input some co-ordinate sets of the form x,y into R by using lists. The command I am using is: p1 <- list(x=c(3445,563,646), y=c(234,567,456)) However the actual co-ordinate sets that I am trying to input have 305 points each, and I think that the program will not accept a command that is as long as necessary. Is this so? If this is the case can you tell me how to read
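The usual route is to keep the 305 pairs in a file and read them in rather than typing one long command; a sketch assuming a two-column text file (the file name points.txt and the header names x and y are made up):

    pts <- read.table("points.txt", header = TRUE)   # columns named x and y
    p1  <- list(x = pts$x, y = pts$y)
    length(p1$x)                                     # should report 305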
2008 Aug 01
2
Exporting data to a text file
Hi R users, With the clara function I get a data frame (maybe that is not the exact word, I'm new to R) with the following variables: > names(myclara) [1] "sample" "medoids" "i.med" "clustering" "objective" [6] "clusinfo" "diss" "call" "silinfo" "data" I want to
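The list components that are plain tables or vectors can be written out individually with write.table(); a minimal sketch using the myclara object from the post (the file names are made up):

    write.table(myclara$medoids, "medoids.txt", quote = FALSE)
    write.table(data.frame(row = seq_along(myclara$clustering),
                           cluster = myclara$clustering),
                "clustering.txt", row.names = FALSE, quote = FALSE)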
2017 Sep 28
1
BoF: Co-ordinating RISC-V development in LLVM, AND RISC-V LLVM working session event
There will be a RISC-V focused Birds of a Feather (BoF) session at the LLVM Dev Meeting in a few weeks' time <https://2017llvmdevmtg.sched.com/event/CMiv/co-ordinating-risc-v-development-in-llvm> (Wednesday, October 18, 4:20pm - 5:05pm) The aim of this session is to bring together everyone with an interest in RISC-V support in LLVM, and especially those from companies who have had private
2010 Apr 24
4
DICE Coefficient of similarity measure
Hi, I wanted the Dice coefficient (a similarity measure for binary variables) to be calculated in R and found that the "igraph" package has the "similarity.dice" function to do this. However, for this command the input object should be an igraph object, whereas I have a data frame of columns containing 1's and 0's. Can I convert this data frame into an igraph object, so that
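The Dice coefficient can also be computed directly from the 0/1 data with a little matrix algebra, avoiding the igraph conversion entirely; a sketch assuming a binary data frame df whose rows are the objects to compare:

    m    <- as.matrix(df)
    both <- m %*% t(m)                          # co-presences for every pair of rows
    ones <- rowSums(m)                          # number of 1's per row
    dice <- 2 * both / outer(ones, ones, "+")   # Dice = 2|A & B| / (|A| + |B|)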
2008 Jul 15
3
playwith package crashes on Mac
Dear R-helpers, I tried the playwith package for the first time, and it crashed R: > require(playwith) Loading required package: playwith Loading required package: lattice Loading required package: grid Loading required package: gWidgets Loading required package: gWidgetsRGtk2 Loading required package: RGtk2 Loading required package: cairoDevice > sessionInfo() R version 2.7.1
2011 Aug 10
4
Clustering Large Applications..sort of
Hello all, I am using the clustering functions in R in order to work with large masses of binary time series data; however, the clustering functions do not seem able to handle a practical problem of this size. The 'hclust' function is good (though it may be subpar for a problem of this size, thus doubly poor for this application) in that I do not want to make assumptions about the number of
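One memory-friendly compromise is to cluster a manageable sample with a binary dissimilarity and cut the tree afterwards, so no cluster count has to be fixed up front; a sketch assuming a 0/1 matrix b with one series per row (the sample size and cut height are arbitrary):

    idx <- sample(nrow(b), 2000)                # work on a sample of the series
    d   <- dist(b[idx, ], method = "binary")    # asymmetric binary (Jaccard) distance
    hc  <- hclust(d, method = "average")
    plot(hc)                                    # inspect the tree before choosing a cut
    grp <- cutree(hc, h = 0.5)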
2010 Oct 25
1
re-vertical conversion of data entries
Dear R user, Can you please help me? How do I convert the part of a cluster analysis output under the heading "Clustering vector", shown below, which gives the cluster to which each respondent belongs: [1] 1 1 2 2 1 2 1 2 1 1 2 2 1 2 2 2 2 1 1 1 1 2 2 1 2 2 1 2 2 2 2 2 2 2 2 1 2 [38] 2 1 1 2 2 2 2 2 1 2 1 2 2 2 2 1 2 1 2 2 1 2 2 2 2 2 2 1 2 1 2 2 2 1 1 2 2 [75] 2 1 2 2 2 2 2 2 2 1 1 2
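Rather than re-parsing the printed "Clustering vector", the same information is available as the clustering component of the fitted object, which reshapes easily into one row per respondent; a sketch assuming the pam()/clara() result is stored in an object called fit:

    cl  <- fit$clustering                 # one cluster label per respondent
    out <- data.frame(respondent = seq_along(cl), cluster = cl)
    head(out)                             # vertical layout, ready for write.table()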