Displaying 20 results from an estimated 1000 matches similar to: "New package: R to LaTeX Univariate Analyses"
2007 Dec 31
0
R to LaTeX Univariate Analysis
Hi all
Well, first: happy new year...
Second: I wrote a function in R that might interest some other people.
On the other hand, I am closer to a beginner than an expert, so I don't
know how valuable my code is. I don't know how long it will take me to
turn it into a package, and I don't know if it is worth it. So before
starting this long process, I would like some advice, both on the
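A minimal sketch of the idea described in the post (the function and argument names here are hypothetical, not the actual rtlu code): a small function that writes a LaTeX fragment with summary statistics for one variable.

univariateToLatex <- function(x, varname = deparse(substitute(x)), file = "") {
  s <- summary(x)                       # Min., quartiles, mean, Max.
  cat("\\subsection{", varname, "}\n",
      "\\begin{tabular}{lr}\n", sep = "", file = file, append = TRUE)
  cat(paste0(names(s), " & ", format(s), " \\\\\n"),
      sep = "", file = file, append = TRUE)
  cat("\\end{tabular}\n", file = file, append = TRUE)
}

## usage: univariateToLatex(iris$Sepal.Length, "Sepal length", file = "First.tex")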
2005 Jun 28
2
enhanced MDS
Hi again
Sorry, in looking again at sammon and isoMDS I see that they seem to do
exactly what I want, except that they are non-metric, which means, as I
understand it, that they relate the rank orders of the variables rather than
the actual distances.
Could I use these non-metric MDS packages even if my distances are metric?
Thanks
Karen
--
Karen Kotschy
Centre for Water in the Environment
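A minimal sketch with a stand-in distance matrix: cmdscale() in base R does classical metric MDS, and MASS::sammon() is also metric (it fits the actual distances, not just their ranks); isoMDS() is the non-metric, rank-based variant.

library(MASS)

d   <- dist(USArrests)          # any metric distance matrix would do
fit <- cmdscale(d, k = 2)       # classical (metric) MDS
plot(fit, type = "n")
text(fit, labels = rownames(USArrests), cex = 0.6)

fit2 <- sammon(d, k = 2)        # metric least-squares MDS (Sammon stress)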
2009 Dec 07
3
savePlot for Mac and / or Linux?
Hi all,
In the package rtlu, I use the function savePlot. It is convenient since
it lets the user decide in which graphic format the graph should be
exported.
But when I run R CMD check, I get the following message:
> rtlu(V1, fileOutput="First.tex", textBefore="\\section{Variable 1 to 3}", graphName="V1")
Error in savePlot(filename = nomBarplot, type = type)
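A minimal sketch of a more portable alternative (the helper name is hypothetical, not rtlu's actual code): savePlot() only works with certain screen devices (windows() on Windows, cairo-based X11() devices), whereas copying the current plot to an explicitly chosen file device behaves the same on Windows, macOS and Linux.

saveCurrentPlot <- function(file, type = c("png", "pdf")) {
  type <- match.arg(type)
  if (type == "png") {
    dev.copy(png, file, width = 800, height = 600)  # copy the screen plot to a PNG device
  } else {
    dev.copy(pdf, file, width = 7, height = 7)      # or to a PDF device
  }
  dev.off()                                         # close the file device and write the file
}

## usage after drawing on screen:
## barplot(table(V1)); saveCurrentPlot("First-barplot.png", type = "png")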
2009 Apr 20
2
PCA and automatic determination of the number of components
Hi all,
I have a relatively small dataset on which I would like to perform a PCA. I am
interested in a package that also includes a method for determining
the number of components (I know there are plenty of approaches to this
problem). Any suggestions about a package/function?
thanks,
Nick
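A minimal sketch, not from the original thread: prcomp() for the PCA plus two quick heuristics for choosing the number of components. mydata is a placeholder for a numeric data frame; dedicated packages such as nFactors or paran implement more formal stopping rules (e.g. parallel analysis).

pca     <- prcomp(mydata, scale. = TRUE)
eigvals <- pca$sdev^2

sum(eigvals > 1)                                   # Kaiser criterion: eigenvalues > 1
which(cumsum(eigvals) / sum(eigvals) >= 0.90)[1]   # components for 90% of the variance
screeplot(pca, type = "lines")                     # visual scree check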
2008 Dec 07
2
concordance correlation coefficient using R
Hi.
I have data for which I want to assess the degree of agreement
between two assays, e.g., to evaluate reproducibility or
inter-rater reliability. I have used the Pearson product-moment
correlation coefficient. It looks good, ranging between 0.90 and
0.998. Though this looks good, I am told the concordance correlation
coefficient will give a better picture of how reproducible the assay
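A minimal sketch, with made-up assay values: Lin's concordance correlation coefficient computed directly from its definition (the epiR and DescTools packages provide ready-made implementations with confidence intervals).

assay1 <- c(0.92, 0.95, 0.98, 0.90, 0.97)
assay2 <- c(0.93, 0.96, 0.99, 0.92, 0.96)

n   <- length(assay1)
sxy <- cov(assay1, assay2) * (n - 1) / n   # population (n-denominator) moments,
sx2 <- var(assay1) * (n - 1) / n           # as in Lin's original estimator
sy2 <- var(assay2) * (n - 1) / n

ccc <- 2 * sxy / (sx2 + sy2 + (mean(assay1) - mean(assay2))^2)
ccc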
2005 May 08
2
Need a factor level even though there are no observations
I'm in this situation:
factorlabels <- c("School", "College", "Beyond")
with data for 8 families:
education.man <- c(1,2,1,2,1,2,1,2) # Note : no "3" values
education.wife <- c(1,2,3,1,2,3,1,2) # 1,2,3 are all present.
My goal is to create this table:
School College Beyond
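One way to get the empty "Beyond" column, sketched with the vectors above: declare all three levels when converting the codes to factors, so that table() keeps levels with zero observations.

man  <- factor(education.man,  levels = 1:3, labels = factorlabels)
wife <- factor(education.wife, levels = 1:3, labels = factorlabels)

table(man)         # "Beyond" appears with count 0
table(man, wife)   # the two-way table keeps the empty row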
2006 Apr 20
4
online tutorials
I work for an investment group with a very extensive training program, and
we are having our new hires take a statistics course at the University of
Chicago where they have to complete some assignments with R. I was
wondering whether there are any online tutorials we could use to
get our participants comfortable with R before the class itself? I
appreciate any help at all.
Thanks,
Matt Maxon
2009 Dec 26
5
Is SEM package of R suitable for sem analysis
Dear all,
I'm a college student, and I'm doing my statistics homework.
I use R with the sem package as my tool for SEM analysis,
but my teacher told me AMOS is more suitable for such analysis.
Could someone tell me whether it is true
that some commercial software is better accepted in academic fields?
Sorry if I should not post such topics here.
--
Best Regards,
Reeyarn T. Lee
Accounting
2006 Aug 10
1
How to fit a bivariate longitudinal mixed model?
Hi
Is there any way to fit a bivariate longitudinal mixed model using R? I have
a data set with column names
resp1 (Y_ij1), resp2 (Y_ij2), timepts (t_ij), unit (i),
where j = 1, 2, ..., m and i = 1, 2, ..., n.
I want to fit the following two models
Model 1
(Y_ij1, Y_ij2) | U_i = u_i ~ N(alpha + u_i + beta1 * t_ij, Sigma)
U_i ~ iid N(0, sigma_u^2)
Sigma = bivariate AR structure
alpha and beta1 are vectors of length 2.
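A minimal sketch, not a full solution: stack the two responses into long format and fit a shared random intercept model with nlme, matching the fixed part of Model 1. mydata, with columns resp1, resp2, timepts and unit as described above, is a placeholder; the bivariate AR residual structure Sigma is not reproduced here and would need an appropriate corStruct / varIdent specification (or a package such as MCMCglmm).

library(nlme)

long <- rbind(
  data.frame(unit = mydata$unit, timepts = mydata$timepts,
             response = "resp1", y = mydata$resp1),
  data.frame(unit = mydata$unit, timepts = mydata$timepts,
             response = "resp2", y = mydata$resp2)
)
long$response <- factor(long$response)

fit <- lme(y ~ response * timepts,   # separate intercept and time slope per response
           random = ~ 1 | unit,      # shared random intercept U_i
           data = long)
summary(fit)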
2012 Oct 17
2
loop of quartile groups
Greetings R users,
My goal is to generate quartile groups of each variable in my data set. I
would like each experiment to have its designated group added as a
subsequent column. I can accomplish this individually with the following
code:
brks <- with(data_variables, cut2(var2, g = 4))  # cut2() is from the Hmisc package
#I don't want the actual numbers, I need a numbered group
data$test1=factor(brks,
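A minimal sketch, assuming data_variables is a data frame whose columns are all numeric (the names are placeholders): for each variable, cut2() from Hmisc builds quartile groups and the group number is appended as a new column.

library(Hmisc)

for (v in names(data_variables)) {
  grp <- cut2(data_variables[[v]], g = 4)                 # quartile groups as a factor
  data_variables[[paste0(v, "_q4")]] <- as.integer(grp)   # group numbers 1..4
}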
2008 Jun 13
2
Quartile regression question
I have data that looks like
lake,loglength,logweight
1,2.369215857,1.929418926
1,2.426511261,2.230448921
1,2.434568904,2.298853076
1,2.437750563,2.298853076
1,2.442479769,2.230448921
1,2.445604203,2.356025857
...
102,2.722633923,3.310268367
102,2.781755375,3.502153893
102,2.836324116,3.683407299
102,2.802773725,3.583312152
102,2.790285164,3.546419267
102,2.806179974,3.599118565
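A minimal sketch, assuming the rows above sit in a file lakes.csv with the header shown (lake, loglength, logweight) and that regression at the quartiles is what is wanted: quantreg::rq() fits regression quantiles of log weight on log length.

library(quantreg)

lakes <- read.csv("lakes.csv")
fit   <- rq(logweight ~ loglength, tau = c(0.25, 0.50, 0.75), data = lakes)
summary(fit)

## a single lake can be fitted by subsetting, e.g.
## rq(logweight ~ loglength, tau = 0.5, data = subset(lakes, lake == 102))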
2010 Jan 22
2
Quartiles and Inter-Quartile Range
Why am I getting a wrong result for quartiles?
here is my code:
> cbiomass = c(910, 1058, 929, 1103, 1056, 1022, 1255, 1121, 1111, 1192,
> 1074, 1415)
> summary(cbiomass)
> IQR(cbiomass)
The result R gives me is:
For the summary:
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
    910    1048    1088    1104    1139    1415
For IQR:
91.25
*********
The true Q1 is 1039
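The difference comes from the quantile convention, shown in this minimal sketch on the same numbers: quantile() implements nine algorithms (see ?quantile, argument type); summary() and IQR() use the default type 7, while the "textbook" value of 1039 matches the averaging convention of type 2.

cbiomass <- c(910, 1058, 929, 1103, 1056, 1022, 1255, 1121, 1111, 1192,
              1074, 1415)

quantile(cbiomass, probs = 0.25, type = 7)  # 1047.5 (R default; summary() shows 1048)
quantile(cbiomass, probs = 0.25, type = 2)  # 1039, the value expected above
IQR(cbiomass, type = 2)                     # 117.5 instead of the default 91.25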
2012 Sep 28
0
Statistician Vacancy - CAMHS EBPU, UCL and Anna Freud Centre
The Child and Adolescent Mental Health Services (CAMHS) Evidence Based Practice Unit (EBPU) is a dynamic, expanding and friendly academic and service development unit, currently consisting of 19 people, including researchers, clinicians and service development specialists (http://www.ucl.ac.uk/clinical-psychology/EBPU/). It is part of University College London (UCL) and the Anna Freud Centre
2012 May 08
0
Mental Health Informatician
The CAMHS EBPU is a research and training unit that is part of University
College London (UCL) and the Anna Freud Centre (a registered charity
dedicated to excellence in child and adolescent mental health). Within the
unit sits the central team for the CAMHS Outcome Research Consortium (CORC),
a national collaboration between child and adolescent mental health
services (CAMHS) across the UK who
2005 Jul 22
0
boxplot() defaults {was "boxplot in extreme cases"}
>[Rd] boxplot() defaults {was "boxplot in extreme cases"}
>Martin Maechler maechler at stat.math.ethz.ch
>Mon Nov 8 10:36:42 CET 2004
>
> AndyL> Try:
>
> AndyL> x <- list(x1=rep(c(0,1,2),c(10,20,40)),
> x2=rep(c(0,1,2),c(10,40,20)))
> AndyL> boxplot(x, pars=list(medpch=20, medcex=3))
>
> AndyL> (Cf ?bxp, pointed to from
2012 Dec 19
1
Theoretical confidence regions for any non-symmetric bivariate statistical distributions
Respected R users,
I am looking for help with generating theoretical confidence regions for any
non-symmetric bivariate statistical distribution (a bivariate chi-squared
distribution <Wishart distribution>, a bivariate F-distribution, or any of the
others). I want to use it as a benchmark to compare a few strategies for
constructing confidence regions for non-symmetric bivariate data.
There is
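A minimal sketch, not from the original post and only a simulation-based approximation rather than a closed-form theoretical region: an approximate 95% highest-density region for a non-symmetric bivariate sample (two correlated chi-squared margins taken from the diagonal of Wishart draws), with the contour level chosen so that about 95% of the simulated points fall inside it.

library(MASS)   # for kde2d()

set.seed(1)
S  <- matrix(c(1, 0.5, 0.5, 1), 2, 2)
W  <- rWishart(5000, df = 4, Sigma = S)   # 2 x 2 x 5000 array of Wishart draws
xy <- cbind(W[1, 1, ], W[2, 2, ])         # correlated chi-squared margins

dens  <- kde2d(xy[, 1], xy[, 2], n = 100)
ix    <- findInterval(xy[, 1], dens$x)    # density estimate at each point
iy    <- findInterval(xy[, 2], dens$y)    #   (nearest grid cell)
fhat  <- dens$z[cbind(ix, iy)]
level <- quantile(fhat, 0.05)             # cut-off enclosing ~95% of the points

plot(xy, pch = ".", xlab = "X1", ylab = "X2")
contour(dens, levels = level, add = TRUE, col = "red", lwd = 2)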
2006 Jul 13
1
TR: Latent Class Analysis
_____
From: Pousset [mailto:maud.pousset@noos.fr]
Sent: Tuesday, 4 July 2006 18:38
To: 'r-help@stat.math.ethz.ch'
Subject: Latent Class Analysis
Hello everybody,
I am working on latent class analysis and have already used the R function
lca() (in the e1071 package). I've got interesting results, but I simply
can't find out the methodology used by this routine:
1) What
2017 May 18
2
Bug: floating point bug in nclass.FD can cause hist() to crash
Hello everybody,
This is a bug involving functions in core R packages:
graphics::hist.default, grDevices::nclass.FD, and
base::pretty.default. It is not yet on Bugzilla. I cannot submit it
myself, as I do not have an account. Could somebody else add it for
me, perhaps? That would be much appreciated.
Kind regards,
Sietse
Sietse Brouwer
Summary
-------
Floating point errors can cause a data
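A hypothetical illustration (not the reporter's actual data): when the interquartile range is tiny relative to the overall range, the Freedman-Diaconis rule suggests an enormous number of classes; hist(x, breaks = "FD") passes that count on to pretty(), which is where affected R versions failed.

x <- c(rep(c(1, 1 + 1e-9), 500), 2)

IQR(x)                                           # 1e-09
diff(range(x)) * length(x)^(1/3) / (2 * IQR(x))  # about 5e+09 suggested classes
## hist(x, breaks = "FD")                        # could trigger the reported error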
2017 May 18
0
Bug: floating point bug in nclass.FD can cause hist() to crash
I just got the same error message with
> sessionInfo()
R version 3.4.0 (2017-04-21)
Platform: x86_64-apple-darwin15.6.0 (64-bit)
Running under: macOS Sierra 10.12.4
Matrix products: default
BLAS:
/System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib
LAPACK:
/Library/Frameworks/R.framework/Versions/3.4/Resources/lib/libRlapack.dylib
2017 Oct 13
0
How to define proper breaks in RFM analysis
Hemant's problem is that the indicators are not distributed uniformly.
With a uniform distribution, categorization gives a reasonably optimal
separation of cases. One approach would be to drop categorization and
calculate the overall score as the mean of the standardized indicator
scores. Whether this is an option I do not know. I did offer an
"eyeball" set of breaks in a previous