search for: mclustbic

Displaying 12 results from an estimated 12 matches for "mclustbic".

2009 Apr 05
1
Which model to keep (negative BIC)
Hi, My questions concern the function 'mclustBIC', which computes BIC over a range of numbers of clusters for several models on the given data, and the function 'mclustModel', which chooses the best model and the best number of clusters according to the results of the previously cited function. 1) When trying the following example (see ?mclustModel...
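For readers landing on this thread, a minimal sketch of the two-step workflow the question describes (data and variable names are illustrative; the exact fields of the returned object can vary between mclust versions, see ?mclustBIC and ?mclustModel):

```r
library(mclust)

# Simulated 1-D data with two well-separated groups (illustrative only)
x <- c(rnorm(100, mean = 0), rnorm(100, mean = 6))

# Compute BIC for a range of component counts and model parameterizations.
# Note: mclust reports BIC on a scale where LARGER values are better; the
# values themselves are often negative, which is what the subject line
# ("negative BIC") refers to.
bic <- mclustBIC(x, G = 1:5)

# Choose the best model / number of components according to the BIC table
best <- mclustModel(x, bic)
best$modelName   # model parameterization, e.g. "E" or "V" for 1-D data
best$G           # chosen number of components
```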
2010 Sep 22
0
Help with mclust package
...el: unequal variance with 3 components > mc$mu NULL > sqrt(mc$sigmasq) Error in sqrt(mc$sigmasq) : Non-numeric argument to mathematical function > warnings() Warning messages: 1: In meV(data = data, z = z, prior = prior, control = control, ... : sigma-squared falls below threshold 2: In mclustBIC(data = c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, ... : there are missing groups 3: In meV(data = data, z = z, prior = prior, control = control, ... : sigma-squared falls below threshold 4: In mclustBIC(data = c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, ... : there are missing groups 5: In meV(data = data, z...
2007 Jul 18
2
EM unsupervised clustering
...lust(as.data.frame(data)), as.data.frame(data)) Hit <Return> to see next plot: Hit <Return> to see next plot: Hit <Return> to see next plot: Error in 1:L : NA/NaN argument In addition: Warning messages: 1: best model occurs at the min or max # of components considered in: summary.mclustBIC(Bic, data, G = G, modelNames = modelNames) 2: optimal number of clusters occurs at min choice in: Mclust(as.data.frame(anc.st.mat)) 3: insufficient input for specified plot in: coordProj(data = data, parameters = x$parameters, z = x$z, what = "classification", That's puzzling becau...
2009 Nov 25
1
fitting mixture of normals distribution to asset return data
An embedded and charset-unspecified text was scrubbed... Name: not available URL: <https://stat.ethz.ch/pipermail/r-help/attachments/20091125/6a78f78b/attachment-0001.pl>
2008 Oct 20
1
Mclust problem with mclust1Dplot: Error in to - from : non-numeric argument to binary operator
...=c("V"), warn=T, G=1:3) Warning messages: 1: In meV(data = data, z = z, prior = prior, control = control, warn = warn) : sigma-squared falls below threshold 2: In meV(data = data, z = z, prior = prior, control = control, warn = warn) : sigma-squared falls below threshold 3: In summary.mclustBIC(Bic, data, G = G, modelNames = modelNames) : best model occurs at the min or max # of components considered 4: In Mclust(my.data, modelNames = c("V"), warn = T, G = 1:3) : optimal number of clusters occurs at min choice Many thanks in advance for your help, Best wishes, Emmanuel...
2010 Jan 06
1
positive log likelihood and BIC values from mCLUST analysis
...TRUE, expand = TRUE, trace = FALSE, plot = FALSE, old.wa = FALSE) ######################### BEGIN EM ANALYSIS ######################### #Use the points determined by MDS to perform EM clustering. #Allow only the unconstrained models. Sometimes, constrained models mess things up! EMclusters <- mclustBIC(mds$points, G=Clusterrange, modelNames= c("VII", "VVI", "VVV"), prior=NULL, control=emControl(), initialization=list(hcPairs=NULL, subset=NULL, noise=NULL), Vinv=NULL, warn=FALSE, x=NULL) The input data are in the form of an N X N matrix of pairw...
2010 Apr 19
1
What is mclust up to? Different clusters found if x and y interchanged
Hello All... I gave a task to my students that involved using mclust to look for clusters in some bivariate data of isotopes vs various mining locations. They discovered something I didn't expect; the data (called tur) is appended below. p <- qplot(x = dD, y = dCu65, data = tur, color = mine) print(p) # simple bivariate plot of the data; looks fine mod1 <- Mclust(tur[,2:3]) mod1$G mod2
2009 Jul 26
1
normal mixture model
Hi, All, I want to fit a normal mixture model. Which package in R is best for this? I was using the package 'mixdist', but I need to group the data into groups before fitting model, and different groupings seem to lead to different results. What other package can I use which is stable? And are there packages that can automatically determine the number of components? Thank you, Cindy
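As replies to threads like this usually note, mclust fits normal mixtures to the raw (ungrouped) data — no pre-binning as in mixdist — and selects the number of components by BIC automatically. A hedged sketch (simulated data; with real data the chosen G depends on the model set considered):

```r
library(mclust)

# Ungrouped data: two normal components (illustrative only)
x <- c(rnorm(150, mean = 0, sd = 1), rnorm(150, mean = 5, sd = 1.5))

# Mclust tries G = 1:9 components by default and keeps the BIC-best fit
fit <- Mclust(x)
fit$G                  # number of components selected by BIC
fit$parameters$mean    # estimated component means
summary(fit)
```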
2010 Mar 22
0
superfluous distribution found with mclust
...51.48167, 52.22395, 54.96204, 59.58895, 55.49020, 50.50893, 49.97572, 53.26222, 57.10047, 51.25523, 52.38768, 56.42965, 51.83258, 55.40537, 51.60564, 54.68883, 53.48098, 58.47231, 70.15088, 51.68805, 52.82636, 52.97804, 51.90228, 53.49184, 52.24366, 52.36895, 53.26520, 52.27327, 50.85403) cl <- mclustBIC(my.data) myModel <- summary(cl, my.data) Warning message: In map(out$z) : no assignment to 1 I do not know why this happens, but this confirms that a first distribution was found but no data was assigned to it: myModel$classification [1] 3 2 2 3 2 3 2 3 2 2 2 2 2 2 3 3 2 3 3 2 2 3 2 2 2 3 2...
2008 Mar 26
0
out of colors in Mclust with 12 clusters
...that the CPU grinds to a halt but the memory is exhausted. Shouldn't a process in R exit and return an error when memory can no longer be allocated? Wishful thinking???? Mark > exprs.clust <- Mclust(exprs(AOP.sig), G=9:12, modelNames="VEI") Warning messages: 1: In summary.mclustBIC(Bic, data, G = G, modelNames = modelNames) : best model occurs at the min or max # of components considered 2: In Mclust(exprs(AOP.sig), G = 9:12, modelNames = "VEI") : optimal number of clusters occurs at max choice > plot(exprs.clust, data = exprs(AOP.sig)) Hit <Return>...
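On the "out of colors" symptom in the subject line: mclust's classification plots draw from a fixed palette that can be shorter than the number of clusters. In recent mclust releases the palette is user-configurable via mclust.options (this option may not exist in versions as old as the one in this 2008 post; a sketch, assuming a current mclust):

```r
library(mclust)

# Extend the classification-plot palette and symbol set to cover 12 clusters
mclust.options(classPlotColors  = rainbow(12))
mclust.options(classPlotSymbols = 1:12)
```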
2010 May 05
5
Dynamic clustering?
Are there R packages that allow for dynamic clustering, i.e. where the number of clusters is not predefined? I have a list of numbers that falls into either 2 clusters or just 1. Here is an example of one that should be clustered into two clusters: two <- c(1,2,3,2,3,1,2,3,400,300,400) and here is one that contains only one cluster and would therefore not need to be clustered at all. one <-
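Model-based clustering with mclust answers exactly this question: Mclust compares G = 1, 2, … by BIC, so a single-cluster solution can win outright. A sketch on the poster's own example (hedged: with this few points the selected G can be sensitive to the range of models considered):

```r
library(mclust)

# The poster's two-cluster example
two <- c(1, 2, 3, 2, 3, 1, 2, 3, 400, 300, 400)

# Let BIC choose among 1 to 3 components
fit <- Mclust(two, G = 1:3)
fit$G                 # number of clusters chosen by BIC
fit$classification    # cluster assignment for each point
```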
2010 Jan 11
1
K-means recluster data with given cluster centers
K-means recluster data with given cluster centers Dear R user, I have several large data sets. Over time, additional new data sets will be created. I want to cluster all the data in a similar/identical way with the k-means algorithm. With the first data set I will find my cluster centers and save the cluster centers to a file [1]. This first data set is huge; it is guaranteed that cluster
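To "re-cluster" new data against previously saved centers without re-estimating them, each point can simply be assigned to its nearest center. A base-R sketch (the function and object names here are illustrative, not from the original post):

```r
# Saved centers from the first k-means run (k x p matrix)
centers <- matrix(c(0, 0,
                    5, 5), ncol = 2, byrow = TRUE)

# Assign each row of newdata to the index of its nearest saved center
assign_to_centers <- function(newdata, centers) {
  k <- nrow(centers)
  d <- as.matrix(dist(rbind(centers, newdata)))
  # distances from each new point (rows) to each center (columns)
  dd <- d[(k + 1):nrow(d), 1:k, drop = FALSE]
  max.col(-dd)   # nearest center = smallest distance = largest negative
}

newdata <- matrix(c(0.1, -0.2,
                    4.8,  5.3), ncol = 2, byrow = TRUE)
assign_to_centers(newdata, centers)   # 1 2
```

Because the centers are held fixed, every future data set is labeled consistently with the first run, unlike calling kmeans() again, which would move the centers.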