Two questions about principal components analysis in R:

Q.1) Hogenraad and McKenzie (1999) used Bruce Thompson's FACSTRAP program to bootstrap the factor loadings and scores in a principal components analysis. The input to the analysis was a word-word correlation matrix derived from a frequency count of x words across n texts. This is how they described their procedure:

"Finally, we used FACSTRAP (Thompson, 1988; see also Scott et al., 1989) to obtain standard deviations of eigenvalue estimates. Repeated factor analyses with variables or observations dropped result not exceptionally in similar factor structures (Iker, 1974b) although then a given factor I, say, in one solution may become factor II in the next. FACSTRAP forces the factors to keep a unique position and then offers three types of replacement analyses, each repeated 100 times and each involving different sample sizes, with means and SDs of the eigenvalues and factor loadings: The first is a sample size half the size of the original, the second is a sample the size of the original, and the third is a sample size twice the size of the original."

[Hogenraad, R., & McKenzie, D. P. (1999). Replicating text: The cumulation of knowledge in social science. Quality & Quantity, 33(2), 97-116.]

Could anyone advise me whether (and how) it is possible to undertake a similar analysis in R?

Q.2) Has anyone implemented the Standard Error Scree, or a similar objective procedure, for determining the optimal number of factors to retain?

Many thanks,
Andrew Wilson
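
P.S. In case it makes Q.1 more concrete, below is a very rough sketch of the kind of resampling I have in mind, done by hand in base R. It assumes a hypothetical texts-by-words matrix of frequency counts called `word_freq`, approximates FACSTRAP's "unique position" constraint by nothing more than aligning the signs of each replication's loadings with the full-sample solution (so it does not handle factors swapping order), and runs only the same-size bootstrap. It is certainly not meant as a faithful reimplementation of Thompson's program.

  set.seed(1)
  n    <- nrow(word_freq)                 # number of texts (observations)
  p    <- ncol(word_freq)                 # number of words (variables)
  B    <- 100                             # number of bootstrap replications

  ## reference solution on the full sample, for sign alignment
  ref  <- eigen(cor(word_freq))
  refL <- ref$vectors %*% diag(sqrt(pmax(ref$values, 0)))   # unrotated loadings

  boot_eval <- matrix(NA, B, p)
  boot_load <- array(NA, c(B, p, p))

  for (b in 1:B) {
    idx <- sample(n, n, replace = TRUE)   # resample texts with replacement
    e   <- eigen(cor(word_freq[idx, ]))
    L   <- e$vectors %*% diag(sqrt(pmax(e$values, 0)))       # loadings of this replication
    s   <- sign(colSums(L * refL))        # crude sign alignment with the reference
    s[s == 0] <- 1
    boot_eval[b, ]   <- e$values
    boot_load[b, , ] <- sweep(L, 2, s, "*")
  }

  ## means and SDs of the eigenvalues and loadings across replications
  rbind(mean = colMeans(boot_eval), sd = apply(boot_eval, 2, sd))
  loading_sd <- apply(boot_load, c(2, 3), sd)

The half- and double-sized "replacement analyses" that FACSTRAP offers could presumably be imitated by changing the size argument of sample() to round(n/2) or 2*n, though I am not sure how sensible that is, which is partly why I am asking.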
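
And for Q.2, this is my own tentative attempt at the Standard Error Scree, based on my (possibly faulty) recollection of the rule given, if I have the source right, by Zoski and Jurs: successively drop the largest eigenvalue, regress the remaining eigenvalues on their ordinal position, and stop as soon as the standard error of estimate of that regression falls below 1/p, where p is the number of variables. Corrections, or a pointer to an existing implementation, would be very welcome.

  se_scree <- function(ev) {
    p    <- length(ev)
    crit <- 1 / p                       # criterion value -- my assumption
    for (k in 0:(p - 3)) {              # k = number of eigenvalues dropped so far
      y  <- ev[(k + 1):p]               # remaining eigenvalues
      x  <- (k + 1):p                   # their ordinal positions
      se <- summary(lm(y ~ x))$sigma    # standard error of estimate of the scree line
      if (se < crit) return(k)          # retain k components
    }
    NA_integer_                         # no decision under this rule
  }

  se_scree(eigen(cor(word_freq))$values)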