Thank you for reading. I am trying to get sphericity values, and I understand I
need to use a multivariate linear model (mlm), but how do I implement a nested
within-subject design in mlm? I have already read the R newsletter, the appendix
to Fox's chapter, ezANOVA, and whatever else I could find online.
My original ANOVA:

anova(aov(resp ~ sucrose*citral, random = ~1 | subject, data = p12bl, subset = exps==1))

or

anova(aov(resp ~ sucrose*citral, random = ~1 | subject/sucrose*citral, data = p12bl, subset = exps==1))

?
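For concreteness, this is the mlm route I am trying to follow (a sketch based on
Fox's appendix, assuming the car package; the wide-format data frame p12bl.wide
and its column names s1c1..s2c2 are hypothetical stand-ins for my reshaped data,
one row per subject and one column per sucrose x citral cell):

library(car)
# the column order in cbind() must match the rows of idata below
mod <- lm(cbind(s1c1, s1c2, s2c1, s2c2) ~ 1, data = p12bl.wide)
idata <- expand.grid(citral = factor(1:2), sucrose = factor(1:2))
av <- Anova(mod, idata = idata, idesign = ~ sucrose * citral, type = "III")
summary(av, multivariate = FALSE)  # univariate tests, Mauchly's sphericity test, GG/HF corrections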
Thanks,
Adam
----------------------------------------
From: r-help-request at r-project.org
Subject: R-help Digest, Vol 90, Issue 27
To: r-help at r-project.org
Date: Tue, 24 Aug 2010 12:00:09 +0200
Send R-help mailing list submissions to
r-help at r-project.org
To subscribe or unsubscribe via the World Wide Web, visit
https://stat.ethz.ch/mailman/listinfo/r-help
or, via email, send a message with subject or body 'help' to
r-help-request at r-project.org
You can reach the person managing the list at
r-help-owner at r-project.org
When replying, please edit your Subject line so it is more specific
than "Re: Contents of R-help digest..."
--Forwarded Message Attachment--
From: veepsirtt at gmail.com
To: r-help at r-project.org
Date: Mon, 23 Aug 2010 16:14:52 +0530
Subject: [R] sendmailR-package-valid code needed
## Not run:
from <- sprintf("
to <- ""
subject <- "Hello from R"
msg <- "It works!"
sendmail(from, to, subject, msg,
control=list(smtpServer="ASPMX.L.GOOGLE.COM"))
## End(Not run)
The above commands are provided in this document:
http://cran.r-project.org/web/packages/sendmailR/sendmailR.pdf
but they are not working for me.
Could someone please give me valid code for sending mail via gmail.com?
thanks
veepsirtt
--Forwarded Message Attachment--
From: landronimirc at gmail.com
CC: r-help at r-project.org
To: adicool4u at gmail.com
Date: Mon, 23 Aug 2010 14:35:50 +0300
Subject: Re: [R] Fitting a GARCH model in R
Hello
You can find functions related to GARCH by searching on Rseek.org or
by running in R:
install.packages('sos', dep=T)
require(sos)
findFn('garch')
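For instance, a sketch with fGarch (one of the packages such a search turns up);
the return series below is simulated only so the call is runnable:

library(fGarch)
set.seed(1)
r <- rnorm(500)                                    # stand-in for your return series
fit <- garchFit(~ arma(2, 0) + garch(1, 1), data = r, trace = FALSE)
summary(fit)                                       # AR(2) mean equation and GARCH(1,1) variance coefficients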
Regards
Liviu
On Mon, Aug 23, 2010 at 5:59 AM, Aditya Damani wrote:
> Hi,
>
> I want to fit a mean and variance model jointly.
>
> For example I might want to fit an AR(2)-GARCH(1,1) model i.e.
>
> r_t = constant_term1 + b*r_t-1 + c*r_t-2 + a_t
>
> where a_t = sigma_t*epsilon_t
>
> where sigma^2_t = constant_term2 + p*sigma^2_t-1 + q*a^2_t-1
>
> i.e. R estimates a constant_term1, b, c, constant_term2, p, q
>
> TIA
> Aditya
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Do you know how to read?
http://www.alienetworks.com/srtest.cfm
http://goodies.xfce.org/projects/applications/xfce4-dict#speed-reader
Do you know how to write?
http://garbl.home.comcast.net/~garbl/stylemanual/e.htm#e-mail
--Forwarded Message Attachment--
From: jordan_oplante at hotmail.com
To: r-help at r-project.org
Date: Mon, 23 Aug 2010 11:52:56 +0000
Subject: [R] One legend for a multiple graph page
Hi,
I am using the sciplot package to draw barplots. Since I put 4 barplots per
page and the legend is always the same, I would like to find a way to
have only one legend per page. I would also like it to be centered in the
middle of the page. I tried locator() without success.
Thanks for your help.
Best
Jordan
--Forwarded Message Attachment--
From: bluejay948 at gmail.com
To: r-help at r-project.org
Date: Mon, 23 Aug 2010 12:30:28 +0200
Subject: [R] DNA sequence Fst
Hi,
I want to analyse DNA sequence data (mtDNA) in R, i.e. calculate Fst,
heterozygosity and other such summary statistics. The adegenet package converts
the DNA sequence by retaining only the polymorphic sites and then
calculates Fst, but is there any other way to do this? I mean, analyse the
DNA sequence as it is and calculate the statistics?
Thanks!
--Forwarded Message Attachment--
From: Jordan.Ouellette-Plante at dfo-mpo.gc.ca
To: r-help at r-project.org
Date: Mon, 23 Aug 2010 08:49:51 -0300
Subject: [R] One legend for a multiple graph page
Hi,
I am using the sciplot package to draw barplots. Since I put 4 barplots
per page and the legend is always the same, I would like to find a
way to have only one legend per page. I would also like it to be
centered in the middle of the page. I tried locator() without success.
Thanks for your help.
Best
Jordan
--Forwarded Message Attachment--
From: tg at shelx.uni-ac.gwdg.de
CC: r-help at r-project.org
To: bluejay948 at gmail.com
Date: Mon, 23 Aug 2010 14:05:41 +0200
Subject: Re: [R] DNA sequence Fst
Hello Blue Ray,
I have worked with DNA, but frankly I do not know what Fst, Heterozygosity and
"such summary statistics" might be.
If you could explain a little, maybe even give a reference about their meaning
and how to do the calculations, people who are not biologists but know R might
be able to give you advice, increasing the probability of being answered :-)
Tim
On Mon, Aug 23, 2010 at 12:30:28PM +0200, blue jay wrote:
> Hi,
>
> I want to analyse DNA sequence data (mtDNA) in R as in calculate Fst,
> Heterozygosity and such summary statistics. Package Adagenet converts the
> DNA sequence into retaining only retaining the polymorphic sites and then
> calcuates Fst.. but is there any other way to do this? I mean analyse the
> DNA sequence as it is.. and calculate the statistics?
>
>
> Thanks!
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
--
--
Tim Gruene
Institut fuer anorganische Chemie
Tammannstr. 4
D-37077 Goettingen
GPG Key ID = A46BEE1A
--Forwarded Message Attachment--
From: S.Ellison at lgc.co.uk
CC: r-help at r-project.org
To: zroslina at yahoo.com
Date: Mon, 23 Aug 2010 13:19:42 +0100
Subject: Re: [R] which one give clear picture-pdf, jpg or tiff?
I use savePlot() with type="emf". Remember to include the right file
extension: although savePlot will use the type as the default extension, it
will only do so if there are no '.'s in the filename, so filenames like
'../plots/aplot' won't get the extension automatically. (That check really
should have been a check on the extension itself!)
.emf format is a native format for windows vector graphics on a windows
system, so Word imports .emf files directly.
I would not use the older .wmf format, though; scaling seems to have
gone badly awry for .wmf from R. No idea why as I assume R uses native
windows calls to generate them.
Like others, I would not recommend jpg, png or tiff for import into
word-processing software; you either get oversized files or poor
reproduction and scaling.
Steve E
>>> Duncan Murdoch 20/08/2010 09:27:53>>>
On 20/08/2010 1:58 AM, Jeff Newmiller wrote:
> I use Rgui and copy/paste special windows metafile.
--Forwarded Message Attachment--
From: croosen at mango-solutions.com
To: bluejay948 at gmail.com; r-help at r-project.org
Date: Mon, 23 Aug 2010 13:20:01 +0100
Subject: Re: [R] DNA sequence Fst
Hi,
Take a look at the Bioconductor packages, particularly "BSgenome". If
that isn't specific enough, you might want to try the Bioconductor
mailing list rather than "R-help".
http://www.bioconductor.org/packages/release/bioc/html/BSgenome.html
bioconductor at stat.math.ethz.ch
Best regards,
Charlie Roosen
Mango Solutions
-----Original Message-----
From: r-help-bounces at r-project.org [mailto:r-help-bounces at r-project.org]
On Behalf Of blue jay
Sent: 23 August 2010 12:30
To: r-help at r-project.org
Subject: [R] DNA sequence Fst
Hi,
I want to analyse DNA sequence data (mtDNA) in R as in calculate Fst,
Heterozygosity and such summary statistics. Package Adagenet converts
the
DNA sequence into retaining only retaining the polymorphic sites and
then
calcuates Fst.. but is there any other way to do this? I mean analyse
the
DNA sequence as it is.. and calculate the statistics?
Thanks!
--Forwarded Message Attachment--
From: jim at bitwrit.com.au
CC: r-help at r-project.org
To: jordan_oplante at hotmail.com
Date: Mon, 23 Aug 2010 22:23:19 +1000
Subject: Re: [R] One legend for a multiple graph page
On 08/23/2010 09:52 PM, Jordan Ouellette-Plante wrote:
>
> Hi,
>
> I am using the Sciplot package to graph barplots. Since I put 4 barplots
per page and that the legends are always the same, I would like to find a way to
have only one legend per page. Also, I would like it to be centered in the
middle of the page. I tried locator() without success.
>
Hi Jordan,
Don't know about Sciplot, but you can do something like this with:
par(mfrow=c(2,2))
par(mar=c(6,4,4,2))
barplot(...)
barplot(...)
par(mar=c(5,4,6,2))
barplot(...)
barplot(...)
par(xpd=TRUE)
# place the legend in the space in the middle
legend(...)
par(xpd=FALSE)
Jim
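A runnable variant of that outline (the data, colours and labels are made up):

par(mfrow = c(2, 2))
par(mar = c(6, 4, 4, 2))
barplot(matrix(1:4, 2), beside = TRUE, col = c("grey30", "grey70"), main = "Panel 1")
barplot(matrix(4:1, 2), beside = TRUE, col = c("grey30", "grey70"), main = "Panel 2")
par(mar = c(5, 4, 6, 2))
barplot(matrix(c(2, 5, 3, 1), 2), beside = TRUE, col = c("grey30", "grey70"), main = "Panel 3")
barplot(matrix(c(1, 2, 4, 3), 2), beside = TRUE, col = c("grey30", "grey70"), main = "Panel 4")
par(xpd = NA)   # allow drawing outside the current panel
# one shared legend, centred on the page (device coordinates converted to user coordinates)
legend(grconvertX(0.5, "ndc"), grconvertY(0.5, "ndc"),
       legend = c("group A", "group B"), fill = c("grey30", "grey70"),
       xjust = 0.5, yjust = 0.5, bty = "n")
par(xpd = FALSE)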
--Forwarded Message Attachment--
From: info at aghmed.fsnet.co.uk
CC: r-help at r-project.org
To: cecilia.carmo at ua.pt
Date: Mon, 23 Aug 2010 13:27:26 +0100
Subject: Re: [R] problems with merge() - the output has many repeated lines
At 18:23 22/08/2010, Cecilia Carmo wrote:
>I have done
>intersect(names(df1), names(df2))
>[1] "firm" "year"
>
>This is the key I used to merge
>merge(df1,df2,by=c("firm","year"))
>
>And there is just one firm/year row in df1 that
>matches a firm/year row in df2. Df1
>has more firm/year rows than df2, and they don't match any row in df2.
That is what you believe but it seems that R disagrees.
I imagine the dataframes are too big to post so
what I would try first is to create new
dataframes containing just the variables firm and
year (say newdf1 and newdf2), merge them and see
whether I got the expected number of rows. If I
did then I would add other variables back into
the dataframe until the problem re-appeared.
>Cecília
>
>Em Sun, 22 Aug 2010 12:09:57 -0500
> Erik Iverson escreveu:
>>Cecilia -
>>Find what columns you're matching on,
>>intersect(names(df1), names(df2)),
>>Maybe that will shed some light on the issue.
>>On 08/22/2010 12:02 PM, Cecilia Carmo wrote:
>>>Thanks, but I don't have multiple matches and the lines repeated
in the
>>>final dataframe are exactly equal in all columns.
>>>
>>>Cecília
>>>
>>>Sat, 21 Aug 2010 10:58:53 -0500
>>>Hadley Wickham escreveu:
>>>>You may find a close reading of ?merge helpful, particularly
this
>>>>sentence: "If there is more than one match, all possible
>>>>matches contribute one row each" (so check that you
don't have
>>>>multiple matches).
>>>>
>>>>Hadley
>>>>
>>>>On Sat, Aug 21, 2010 at 10:45 AM, Cecilia Carmo
>>>>wrote:
>>>>>Hi everyone,
>>>>>
>>>>>I have been merging many big dataframes (about 80000 rows
each) and I
>>>>>never
>>>>>had this problem, but now it happened to me and I want to
know if
>>>>>someone
>>>>>knows what could be happening.
>>>>>The final dataframe has many rows, an impossible number! I
have done
>>>>>edit(dataframe) and I saw that there are many repeated rows
(all equal).
>>>>>
>>>>>Thanks for any help,
>>>>>
>>>>>Cecília Carmo
>>>>>Universidade de Aveiro
>>>>>Portugal
>>>>>
>>>>>______________________________________________
>>>>>R-help at r-project.org mailing list
>>>>>https://stat.ethz.ch/mailman/listinfo/r-help
>>>>>PLEASE do read the posting guide
>>>>>http://www.R-project.org/posting-guide.html
>>>>>and provide commented, minimal, self-contained, reproducible
code.
>>>>
>>>>
>>>>
>>>>--
>>>>Assistant Professor / Dobelman Family Junior Chair
>>>>Department of Statistics / Rice University
>>>>http://had.co.nz/
>>>
>>>______________________________________________
>>>R-help at r-project.org mailing list
>>>https://stat.ethz.ch/mailman/listinfo/r-help
>>>PLEASE do read the posting guide
>>>http://www.R-project.org/posting-guide.html
>>>and provide commented, minimal, self-contained, reproducible code.
>
>
Michael Dewey
http://www.aghmed.fsnet.co.uk
--Forwarded Message Attachment--
From: frainj at gmail.com
CC: r-help at r-project.org
To: adicool4u at gmail.com
Date: Mon, 23 Aug 2010 13:52:13 +0100
Subject: Re: [R] Fitting VAR and doing Johansen's cointegration test in R
Look at the Econometrics and Time Series Task Views on the CRAN web site.
John
On 23 August 2010 04:09, Aditya Damani wrote:
> Hi,
>
> Could someone please tell me the R codes for fitting VAR(p) (Vector
> Auto Regressive) models and doing the Johansen's cointegration tests.
>
> TIA
> Aditya
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
John C Frain
Economics Department
Trinity College Dublin
Dublin 2
Ireland
www.tcd.ie/Economics/staff/frainj/home.html
mailto:frainj at tcd.ie
mailto:frainj at gmail.com
--Forwarded Message Attachment--
From: dwinsemius at comcast.net
To: r-help at r-project.org
Date: Mon, 23 Aug 2010 06:28:41 -0700
Subject: Re: [R] manual update Agilent oligo probes
Try BioC?
--
View this message in context:
http://r.789695.n4.nabble.com/manual-update-Agilent-oligo-probes-tp2334869p2335106.html
Sent from the R help mailing list archive at Nabble.com.
--Forwarded Message Attachment--
From: cecilia.carmo at ua.pt
CC: r-help at r-project.org; info at aghmed.fsnet.co.uk
To: wdunlap at tibco.com
Date: Mon, 23 Aug 2010 14:51:46 +0100
Subject: Re: [R] problems with merge() - the output has many repeated lines
Thank you all for your help and patience.
I have done table(duplicated(df1[, c("firm","year")])) as
William Dunlap suggested and I found repeated rows in df1.
R is always right!
I really believed that my data could not contain repeated
lines. I now have another problem, which is to discover why
this happened with my data, but that has nothing to do
with R!
Thank you again and again,
Cecília Carmo
Universidade de Aveiro
Portugal
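For the archives, a tiny illustration of the effect (made-up data): a single
duplicated firm/year key in df1 is enough to produce repeated rows in the merge.

df1 <- data.frame(firm = c(1, 1, 2), year = c(2000, 2000, 2000), x = c(10, 10, 30))
df2 <- data.frame(firm = c(1, 2), year = c(2000, 2000), y = c(100, 200))
merge(df1, df2, by = c("firm", "year"))       # firm 1 / year 2000 appears twice in the result
table(duplicated(df1[, c("firm", "year")]))   # FALSE 2, TRUE 1 -> reveals the duplicate key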
On Sun, 22 Aug 2010 13:15:36 -0700,
"William Dunlap" wrote:
>> -----Original Message-----
>> From: r-help-bounces at r-project.org
>> [mailto:r-help-bounces at r-project.org] On Behalf Of
>>Cecilia Carmo
>> Sent: Sunday, August 22, 2010 10:24 AM
>> To: Erik Iverson
>> Cc: r-help at r-project.org; Hadley Wickham
>> Subject: Re: [R] problems with merge() - the output has
>>many
>> repeated lines
>>
>> I have done
>> intersect(names(df1), names(df2))
>> [1] "firm" "year"
>>
>> This is the key I used to merge
>> merge(df1,df2,by=c("firm","year"))
>>
>> And there is just one row firm/year in df1 that matches
>> with another firm/year row in df2. Df1 has more
>>firm/year
>> rows than df2, and them don't match with none in df2.
>
> To get to the bottom of this you may have to show
> us some of the relevant rows of data (80000 rows
> per dataset would be a lot to mailout). For starters
> it would be nice to see the output of
> str(df1)
> str(df2)
> str(m) # where m is merge(df1,df2)
> Then it would nice to see the output of
> table(duplicated(df1[, c("firm","year")]))
> and the same for df2 and m.
>
> You said you saw many repeated rows in the output of
> merge(df1,df2,...), which I am calling 'm'. Say the
>i'th
> row is one of the repeated rows. What are the outputs
>of
> df1[ df1$firm==m$firm[i] & df1$year==m$year[i],
>,drop=FALSE]
> df2[ df2$firm==m$firm[i] & df2$year==m$year[i],
>,drop=FALSE]
> m[ m$firm==m$firm[i] & m$year==m$year[i], ,drop=FALSE]
> ?
>
> Bill Dunlap
> Spotfire, TIBCO Software
> wdunlap tibco.com
>
>> Cecília
>>
>> Em Sun, 22 Aug 2010 12:09:57 -0500
>> Erik Iverson escreveu:
>>> Cecilia -
>>>
>>>Find what columns you're matching on,
>>>
>>> intersect(names(df1), names(df2)),
>>>
>>> Maybe that will shed some light on the issue.
>>>
>>> On 08/22/2010 12:02 PM, Cecilia Carmo wrote:
>>>> Thanks, but I don't have multiple matches and the
>>lines
>>>>repeated in the
>>>> final dataframe are exactly equal in all columns.
>>>>
>>>> Cecília
>>>>
>>>> Sat, 21 Aug 2010 10:58:53 -0500
>>>> Hadley Wickham escreveu:
>>>>> You may find a close reading of ?merge helpful,
>>>>>particularly this
>>>>> sentence: "If there is more than one match, all
>>possible
>>>>> matches contribute one row each" (so check that you
>>>>>don't have
>>>>> multiple matches).
>>>>>
>>>>> Hadley
>>>>>
>>>>> On Sat, Aug 21, 2010 at 10:45 AM, Cecilia Carmo
>>>>>
>>>>> wrote:
>>>>>> Hi everyone,
>>>>>>
>>>>>> I have been merging many big dataframes (about
>>80000
>>>>>>rows each) and I
>>>>>> never
>>>>>> had this problem, but now it happened to me and I
>>want
>>>>>>to know if
>>>>>> someone
>>>>>> knows what could be happening.
>>>>>> The final dataframe has many rows, an impossible
>>number!
>>>>>>I have done
>>>>>> edit(dataframe) and I saw that there are many
>>repeated
>>>>>>rows (all equal).
>>>>>>
>>>>>> Thanks for any help,
>>>>>>
>>>>>> Cecília Carmo
>>>>>> Universidade de Aveiro
>>>>>> Portugal
>>>>>>
>>>>>> ______________________________________________
>>>>>> R-help at r-project.org mailing list
>>>>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>>>>> PLEASE do read the posting guide
>>>>>> http://www.R-project.org/posting-guide.html
>>>>>> and provide commented, minimal, self-contained,
>>>>>>reproducible code.
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Assistant Professor / Dobelman Family Junior Chair
>>>>> Department of Statistics / Rice University
>>>>> http://had.co.nz/
>>>>
>>>> ______________________________________________
>>>> R-help at r-project.org mailing list
>>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>>> PLEASE do read the posting guide
>>>> http://www.R-project.org/posting-guide.html
>>>> and provide commented, minimal, self-contained,
>>>>reproducible code.
>>>
>>
>> ______________________________________________
>> R-help at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained,
>>reproducible code.
>>
--Forwarded Message Attachment--
From: mariateresa.torres at lineadirecta.es
To: r-help at r-project.org
Date: Mon, 23 Aug 2010 16:03:08 +0200
Subject: [R] Maria Teresa Torres Muñoz/Control/Gestión de
Accidentes/LineaDirecta is out of the office.
I will be out of the office from 23/08/2010 and will not return until
06/09/2010.
I will reply to your message when I return; if it is urgent, please send mail to
Estadisticas GDA.
Visit us at: www.lineadirecta.com
"Este mensaje y los documentos que, en su caso, lleve anexos, pueden
contener informaci?n confidencial. Por ello, se informa a quien lo reciba por
error que la informaci?n contenida en el mismo es reservada y su uso no
autorizado est? prohibido legalmente, por lo que en tal caso le rogamos se
abstenga de realizar copias del mensaje, leerlo, remitirlo o difundirlo y
proceda a borrarlo inmediatamente."
"This message is intended only for the use of the individual to whom it is
addressed and may contain information that is confidential. If you have received
this communication, by error, you are hereby notified that any distribution or
copying of this communication is prohibited."
[[alternative HTML version deleted]]
--Forwarded Message Attachment--
From: veepsirtt at gmail.com
CC: jonathan.lees at unc.edu
To: r-help at r-project.org
Date: Mon, 23 Aug 2010 19:45:42 +0530
Subject: [R] Cran-packages-ProfessoR- how to Automatically email a file to an
address using the perl program.
AutoEmail
autoemail
Description
Automatically email a file to an address using the perl program.
Usage
autoemail(eadd, sfile, hnote = "Exam Results")
Arguments
eadd Email address
sfile file to be sent
hnote subject line
Details
This program will work well in Linux and Mac where Perl is
installed - I am not sure about Windows. Creates a unix
executable file, if perl is present.
Value
Side Effects.
How do I use this command after loading library("ProfessR") in R?
autoemail(eadd, sfile, hnote = "Exam Results")
Could you please give me a working example?
with regards
veepsirtt
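A hedged sketch based on the usage shown above; the address and file name are
made up, and per the help page the function shells out to Perl, so it needs a
Unix-like system with Perl and a working mailer:

library(ProfessR)
autoemail(eadd = "student@example.edu",     # hypothetical recipient
          sfile = "exam_results.txt",       # hypothetical file to send
          hnote = "Exam Results")           # subject line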
--Forwarded Message Attachment--
From: veepsirtt at gmail.com
CC: r-help at r-project.org
To: ccampbell at mango-solutions.com
Date: Mon, 23 Aug 2010 20:25:29 +0530
Subject: Re: [R] sendmailR-package-valid code needed
Hello Chris Campbell,
I tried this for my email id and it gives me errors:
> from <- sprintf("", Sys.info()[4])
> to <- "< veepsirtt at gmail.com>"
> subject <- "Hello from R"
> msg <- "It works!"
> sendmail(from, to, subject,
msg,control=list(smtpServer="ASPMX.L.GOOGLE.COM"))
Error in waitFor(code) :
SMTP Error: 5.1.1 The email account that you tried to reach does not
exist. Please try
Calls: sendmail -> smtpSubmitMail -> sendCmd -> waitFor
my email id is correct
please help me
with regards
veepsirtt
--Forwarded Message Attachment--
From: samuoko at yahoo.com
To: r-help at r-project.org
Date: Mon, 23 Aug 2010 14:57:39 +0000
Subject: [R] AUC
Hello,
Is there is any R function computes the AUC for paired data?
Many thanks,
Samuel
--Forwarded Message Attachment--
From: f.veronesi at cranfield.ac.uk
To: r-help at r-project.org
Date: Mon, 23 Aug 2010 16:01:59 +0100
Subject: [R] Fitting Weibull Model with Levenberg-Marquardt regression method
Hi,
I have a problem fitting the following Weibull model to a set of data.
The model is this one: a-b*exp(-c*x^d)
If I fit the model with CurveExpert I find a very nice set of
coefficients which produce a curve very close to my data, but when I use the
nls.lm function in R I can't obtain the same result.
My data are these:
X Y
15 13
50 13
75 9
90 4
With the commercial software I obtain the following coefficients:
Weibull Model: y=a-b*exp(-c*x^d)
Coefficient Data:
a = 1.31636909714E+001
b = 7.61325570579E+002
c = 2.82150000991E+002
d = -9.23838785044E-001
For fitting the Levenberg-Marquardt in R I'm using the following lines:
pS<-list(a=1,b=1,c=1,d=1)
model<-function(pS,xx){pS$a-pS$b*exp(-pS$c*xx^-pS$d)}
resid<-function(observed,pS,xx){observed-model(pS,xx)}
lin<-nls.lm(pS,resid,observed=Y,xx=X)
Why can't I obtain the same results?
Many thanks in advance,
Fabio
Mr. Fabio Veronesi
Ph.D. student
Cranfield University
e-mail: f.veronesi at cranfield.ac.uk
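One way the call is often set up (a sketch, assuming the minpack.lm package):
nls.lm expects the parameter list as the first argument of the residual
function, and starting values far from the solution (such as a = b = c = d = 1)
can easily lead to a different local minimum, so the CurveExpert estimates
quoted above are reused as starting values here:

library(minpack.lm)
X <- c(15, 50, 75, 90)
Y <- c(13, 13, 9, 4)
# residuals: observed minus the Weibull model a - b*exp(-c*x^d)
resid_fn <- function(par, observed, xx) observed - (par$a - par$b * exp(-par$c * xx^par$d))
start <- list(a = 13, b = 760, c = 282, d = -0.92)   # near the CurveExpert coefficients
fit <- nls.lm(par = start, fn = resid_fn, observed = Y, xx = X)
fit$par                                              # fitted parameter values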
--Forwarded Message Attachment--
From: petr.pikal at precheza.cz
CC: r-help at r-project.org; sheikh_613 at yahoo.com
To: jwiley.psych at gmail.com
Date: Mon, 23 Aug 2010 17:08:43 +0200
Subject: Re: [R] Fatal Error
Hi
Another possibility could be that .RData was saved when some packages were
loaded that are not installed in this R version (I often notice this when I
migrate between different computers). If the complete error message suggests
this, you can install that package and then open your .RData again without
problems.
Regards
Petr
r-help-bounces at r-project.org wrote on 20.08.2010 07:51:30:
> Renaming the .RData file will suffice and is less extreme...I should
> have said that in the first place.
>
> On Thu, Aug 19, 2010 at 10:44 PM, Joshua Wiley wrote:
>> Hi Sheikh,
>>
>> The error suggests that the file .RData file may be corrupted. Unless
>> you have data saved in it that you need, try deleting that file and
>> then see if R will start properly.
>>
>> Maybe it is just me, but it seems like problems with the .RData file
>> are happening frequently enough to possibly warrant an entry in the
>> FAQ.
>>
>> HTH,
>>
>> Josh
>>
>> On Thu, Aug 19, 2010 at 9:57 PM, sheik faisal wrote:
>>> Hi,
>>> I tried to launch R 2.8.1 on Windows after closing down one session, but now I get
>>> this message in an Information Dialogue Box:
>>> "Fatal error: unable to restore saved data in .RData".
>>> When I click on OK in the dialogue box, R shuts down and wouldn't let me do anything.
>>> I later downloaded R-2.11.1 (the latest version), installed it and I get the same
>>> message when I run the new version.
>>>
>>> Could i get some help, please.
>>>
>>> Sheikh
>>>
>>>
>>>
>>>
>>> ______________________________________________
>>> R-help at r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html>>> and provide commented, minimal, self-contained, reproducible code.
>>>
>>
>>
>>
>> --
>> Joshua Wiley
>> Ph.D. Student, Health Psychology
>> University of California, Los Angeles
>> http://www.joshuawiley.com/
>>
>
>
>
> --
> Joshua Wiley
> Ph.D. Student, Health Psychology
> University of California, Los Angeles
> http://www.joshuawiley.com/
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html> and provide commented, minimal, self-contained, reproducible code.
--Forwarded Message Attachment--
From: list at econinfo.de
To: r-help at r-project.org
Date: Mon, 23 Aug 2010 17:12:42 +0200
Subject: Re: [R] Fitting VAR and doing Johansen's cointegration test in R
On 23.08.2010 05:09, Aditya Damani wrote:
> Hi,
>
> Could someone please tell me the R codes for fitting VAR(p) (Vector
> Auto Regressive) models and doing the Johansen's cointegration tests.
>
> TIA
> Aditya
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
Just feeling like doing some homework for free...
require(vars)
require(urca)
reps <- 1000 # length of time series
A <- matrix(NA,nrow=reps,ncol=3)
colnames(A) <- c("a","b","c")
A[1,] <- rnorm(3) # starting values
for(i in 2:reps){# generate time series
A[i,] <- c(0.1+0.2*i+0.7*A[i-1,1]+0.1*A[i-1,2]+0.1*A[i-1,3]+rnorm(1),
0.5+0.1*i+0.6*A[i-1,2]-0.2*A[i-1,1]-0.2*A[i-1,3]+rnorm(1),
0.9+0.2*i+A[i-1,3]+0.1*A[i-1,1]+0.15*A[i-1,2]+rnorm(1)
)
}
(a.ct <- ur.df(A[,"a"],type="trend"))
(b.ct <- ur.df(A[,"b"],type="trend"))
(c.ct <- ur.df(A[,"c"],type="trend"))
VARselect(A,type="both")
var.p1 <- VAR(A,1,type="both")
summary(var.p1)
jo <- ca.jo(A)
summary(jo)
--
Owe Jessen
Nettelbeckstr. 5
24105 Kiel
post at owejessen.de
http://privat.owejessen.de
--Forwarded Message Attachment--
From: ccampbell at mango-solutions.com
CC: r-help at r-project.org
To: veepsirtt at gmail.com
Date: Mon, 23 Aug 2010 14:56:58 +0100
Subject: Re: [R] sendmailR-package-valid code needed
Hi veepsirtt
This code is fragmented because the example's syntax was not rendered as
intended when converted to text. The source of \examples for sendmail.Rd is:
from <- sprintf("", Sys.info()[4])
to <- "< olafm at datensplitter.net>"
subject <- "Hello from R"
msg <- "It works!"
sendmail(from, to, subject, msg,
control=list(smtpServer="ASPMX.L.GOOGLE.COM"))
Cheers
Chris
Hadley Wickham, Creator of ggplot2 - teaching in the UK. 1st - 2nd
November 2010.
To book your seat please go to http://mango-solutions.com/news.html
Chris Campbell
MANGOSOLUTIONS
T: +44 (0)1249 767700 Ext: 233
F: +44 (0)1249 767707
M: +44 (0)7967 028876
www.mango-solutions.com
Unit 2 Greenways Business Park
Bellinger Close
Chippenham
Wilts
SN15 1BN
UK
-----Original Message-----
From: r-help-bounces at r-project.org [mailto:r-help-bounces at r-project.org]
On Behalf Of Velappan Periasamy
Sent: 23 August 2010 11:45
To: r-help at r-project.org
Subject: [R] sendmailR-package-valid code needed
## Not run:
from <- sprintf("
to <- ""
subject <- "Hello from R"
msg <- "It works!"
sendmail(from, to, subject, msg,
control=list(smtpServer="ASPMX.L.GOOGLE.COM"))
## End(Not run)
the above commands are provided in this document ie
http://cran.r-project.org/web/packages/sendmailR/sendmailR.pdf
it is not working.
hence give me a valid code for sending mails using gmail.com
thanks
veepsirtt
--Forwarded Message Attachment--
From: keo.ormsby2 at gmail.com
To: r-help at r-project.org
Date: Mon, 23 Aug 2010 08:38:24 -0500
Subject: Re: [R] DNA sequence Fst
Hi Blue Jay,
If your sequences are small (<10Kb) and you have a few samples (~100)
the seqinr package from CRAN itself is a very straightforward way of
handling DNA sequences, but if you plan to do more sophisticated things
that can drain your RAM and processor time, I would definitely recommend
the more versatile but complex Biostrings package from Bioconductor.
www.bioconductor.org/packages/2.2/bioc/html/Biostrings.html
Happy DNARing!
Keo.
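A minimal sketch of the seqinr route mentioned above (the FASTA file name is
hypothetical):

library(seqinr)
seqs <- read.fasta("mtdna.fas")   # one list element per sequence
length(seqs)                      # number of sequences read
getLength(seqs[[1]])              # length of the first sequence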
On 23/08/10 05:30 a.m., blue jay wrote:
> Hi,
>
> I want to analyse DNA sequence data (mtDNA) in R as in calculate Fst,
> Heterozygosity and such summary statistics. Package Adagenet converts the
> DNA sequence into retaining only retaining the polymorphic sites and then
> calcuates Fst.. but is there any other way to do this? I mean analyse the
> DNA sequence as it is.. and calculate the statistics?
>
>
> Thanks!
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
--Forwarded Message Attachment--
From: breman.mark at gmail.com
To: r-help at r-project.org
Date: Mon, 23 Aug 2010 17:03:34 +0200
Subject: [R] Strange space characters in character strings
Hello everyone,
I am reading a HTML table from a website with readHTMLTable() from the XML
package:
> library(XML)
> moose = readHTMLTable("http://www.decisionmoose.com/Moosistory.html",
header=FALSE, skip.rows=c(1,2), trim=TRUE)[[1]]
> moose
V1 V2 V3
1 07.02.2010 SWITCH to Long Bonds\n (BTTRX) $880,370
2 05.07.2010 Switch to Gold (GLD) $878,736
3 03.05.2010 Switch to US Small-cap Equities (IWM) $895,676
4 01.22.2010 Switch to Cash (3moT) $895,572
..... truncated by me!
I am interested in the values in the third column:
> as.character(moose$V3)
[1] "$880,370 " "$878,736 " "$895,676 "
"$895,572 " "$932,139 "
"$932,131 " "$1,013,505 " "$817,451 "
"$817,082 " "$848,133 "
[11] "$904,527 " " $903,981 " "$902,582 "
"$896,170 " "$809,853 " "
$808,852 " " $807,409 " "$802,658 " "$747,629
" "$672,465 "
[21] " $671,826 " "$645,352 " "$615,174 "
"$609,415 " " $590,664 " "
$586,785 " "$561,056 " "$537,307 " " $535,744
" " $552,712 "
[31] "$551,615 " " $508,790 " "$501,161 "
"$499,023 " " $446,568 "
"$423,727 " "$421,967 " "$396,007 "
"$395,943 " " $270,011 "
[41] "$264,386 " "$278,513 " "$251,855 "
"$251,685 " " $129,198 "
"$127,541 " "$117,381 " "$100,000 " "
" " $275,417"
[51] "$266,459" " $214,552" "$207,312"
"$173,557" "$167,647"
"$150,516" "$135,842" "$126,667"
"$131,642" "$113,804"
[61] "$107,364" "$108,242" " $102,881"
" $100,000"
Notice the spaces leading and trailing some of the values.
I want to get the values as numeric values, so I try to get rid of the
$-character and comma's with gsub() and a regular expression:
> gsub("[$,]", "", as.character(moose$V3))
[1] "880370 " "878736 " "895676 " "895572
" "932139 " "932131 "
"1013505 " "817451 " "817082 " "848133
" "904527 " " 903981 " "902582
"
[14] "896170 " "809853 " " 808852 " "
807409 " "802658 " "747629 "
"672465 " " 671826 " "645352 " "615174
" "609415 " " 590664 " " 586785
"
[27] "561056 " "537307 " " 535744 " "
552712 " "551615 " " 508790 "
"501161 " "499023 " " 446568 " "423727
" "421967 " "396007 " "395943 "
[40] " 270011 " "264386 " "278513 " "251855
" "251685 " " 129198 "
"127541 " "117381 " "100000 " " "
" 275417" "266459" " 214552"
[53] "207312" "173557" "167647"
"150516" "135842" "126667"
"131642" "113804" "107364"
"108242" " 102881" " 100000"
Looks fine to me. Now I can use as.numeric() to convert to numbers (leading
and trailing spaces should not be a problem):
> as.numeric(gsub("[$,]", "", as.character(moose$V3)))
[1] NA NA NA NA NA NA NA NA NA NA
NA NA NA NA NA NA NA NA NA NA
[21] NA NA NA NA NA NA NA NA NA NA
NA NA NA NA NA NA NA NA NA NA
[41] NA NA NA NA NA NA NA NA NA NA
266459 NA 207312 173557 167647 150516 135842 126667 131642 113804
[61] 107364 108242 NA NA
Warning message:
NAs introduced by coercion
Something is wrong here! Let's have a look at one specific value:
> gsub("[$,]", "", as.character(moose$V3))[1]
[1] "880370 "> as.numeric(gsub("[$,]", "", as.character(moose$V3))[1])
[1] NA
Warning message:
NAs introduced by coercion
If the last character in the string would be a regular space it would not be
a problem for as.numeric():
> as.numeric("880370 ")
[1] 880370
But it looks like it's not a regular space character:
> substr(gsub("[$,]", "", as.character(moose$V3))[1], 7,
7) == " "
[1] FALSE
It looks to me like the spaces in some of the cells are not regular spaces. In
the original HTML table they are defined as "non-breaking spaces", i.e. &nbsp; entities.
So my question is: WHAT ARE THEY?
Is there a way to show the binary (hex) values of these characters?
Here is my environment:
> sessionInfo()
R version 2.11.1 (2010-05-31)
i486-pc-linux-gnu
locale:
[1] LC_CTYPE=en_US.utf8 LC_NUMERIC=C LC_TIME=en_US.utf8
LC_COLLATE=en_US.utf8 LC_MONETARY=C
[6] LC_MESSAGES=en_US.utf8 LC_PAPER=en_US.utf8 LC_NAME=C
LC_ADDRESS=C LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.utf8 LC_IDENTIFICATION=C
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] XML_3.1-0
loaded via a namespace (and not attached):
[1] tools_2.11.1
Thanks,
-Mark-
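One way to see exactly what the characters are (a sketch; the string below is a
hand-made example of a digit string ending in a non-breaking space, U+00A0):

x <- "880370\u00a0"
charToRaw(x)                        # in a UTF-8 locale the last two bytes are c2 a0
as.numeric(gsub("[^0-9.]", "", x))  # keeping only digits and dots sidesteps the odd space entirely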
--Forwarded Message Attachment--
From: b.rowlingson at lancaster.ac.uk
CC: r-help at r-project.org; ccampbell at mango-solutions.com
To: veepsirtt at gmail.com
Date: Mon, 23 Aug 2010 16:23:52 +0100
Subject: Re: [R] sendmailR-package-valid code needed
On Mon, Aug 23, 2010 at 3:55 PM, Velappan Periasamy wrote:
> Hello Chris Campbell,
>
> I tried this for my email id it give me errors
>
>> from <- sprintf("", Sys.info()[4])
>> to <- "< veepsirtt at gmail.com>"
>> subject <- "Hello from R"
>> msg <- "It works!"
>> sendmail(from, to, subject,
msg,control=list(smtpServer="ASPMX.L.GOOGLE.COM"))
>
> Error in waitFor(code) :
> SMTP Error: 5.1.1 The email account that you tried to reach does not
> exist. Please try
> Calls: sendmail -> smtpSubmitMail -> sendCmd -> waitFor
>
> my email id is correct
The sendmail function in the sendmailR package can't send email via a
server that uses SMTP-Authentication. Gmail's smtp server, which is
called smtp.gmail.com, relies on SMTP AUTH to make sure you are who
you say you are. The sendmail function doesn't know how to respond.
It could be written to handle it; the outline of SMTP AUTH is here:
http://en.wikipedia.org/wiki/SMTP-AUTH
and Google's pages detailing its SMTP service are here:
http://mail.google.com/support/bin/answer.py?hl=en&answer=13287
http://mail.google.com/support/bin/answer.py?hl=en&answer=78775
and various other places.
Short answer: no.
Barry
--Forwarded Message Attachment--
From: f.harrell at vanderbilt.edu
CC: r-help at r-project.org
To: samuoko at yahoo.com
Date: Mon, 23 Aug 2010 10:26:05 -0500
Subject: Re: [R] AUC
Samuel,
Since the difference in AUCs has insufficient power and doesn't really
take into account the pairing of predictions, I recommend the Hmisc
package's rcorrp.cens function. Its method has good power and asks
the question "is one predictor more concordant than the other in the
same pairs of observations?".
Frank
Frank E Harrell Jr Professor and Chairman School of Medicine
Department of Biostatistics Vanderbilt University
On Mon, 23 Aug 2010, Samuel Okoye wrote:
> Hello,
>
> Is there is any R function computes the AUC for paired data?
>
> Many thanks,
> Samuel
>
>
>
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--Forwarded Message Attachment--
From: hadley at rice.edu
To: R-help at r-project.org
Date: Mon, 23 Aug 2010 10:33:54 -0500
Subject: [R] Recyclable
Hi all,
Is there a function to determine whether a set of vectors is cleanly
recyclable? i.e. is there a common function for detecting the
error/warnings that underlie the following two function calls?
> 1:3 + 1:2
[1] 2 4 4
Warning message:
In 1:3 + 1:2 :
longer object length is not a multiple of shorter object length
> data.frame(1:3, 1:2)
Error in data.frame(1:3, 1:2) :
arguments imply differing number of rows: 3, 2
Hadley
--
Assistant Professor / Dobelman Family Junior Chair
Department of Statistics / Rice University
http://had.co.nz/
--Forwarded Message Attachment--
From: jholtman at gmail.com
To: r-help at stat.math.ethz.ch
Date: Mon, 23 Aug 2010 11:35:14 -0400
Subject: [R] Problems with Tinn-R
A co-worker has been having problems with using Tinn-R and has posted
to their help forum and got no response. So I am asking if anyone
else who might be using Tinn-R has seen the problems that he is
experiencing. I am using the same version of R and Tinn-R that he is,
and I have not seen it. I assume it must be something about the way the
'clipboard' is operating, dependent upon some other system configuration,
probably in Windows itself.
Here is what he is seeing:
=====================================================================
"Cannot open clipboard" Error
I am using Version 1.19.4.7 and R 2.10.1 (and Windows XP). I had no
problems for well over a year, and then I installed R 2.11.1 (I have
since uninstalled it). I started randomly getting this error when I
would try to send highlighted text over to the R window (either by the
"Send Selection" or "Send Selection (source)" buttons). A
window with
a red X pops up in Tinn-R, with this message. Sometimes it works for
a while without happening, and then will happen several times in a
row. I tried uninstalling and re-installing Tinn-R to no avail. I
compared settings with a co-worker who is having no problems, and my
settings are the same.
--
Jim Holtman
Cincinnati, OH
+1 513 646 9390
What is the problem that you are trying to solve?
--Forwarded Message Attachment--
From: bmeyer at sozpsy.uzh.ch
To: r-help at r-project.org
Date: Mon, 23 Aug 2010 18:15:00 +0200
Subject: [R] lmer() causes segfault
Hello lmer() - users,
A call to the lmer() function causes my installation of R (2.11.1 on
Mac OS X 10.5.8) to crash and I am trying to figure out the problem.
I have a data set with longitudinal data of four subsequent
performance measures of 1133 individuals nested in 88 groups. The data
is in long format. I hypothesize a performance increase for each
individual over time and intend to explain differences in these slopes
with individual-level and group-level characteristics. Thus, I have
hierarchic data with three levels (measurement time, individual, group).
With lme() (from the nlme package), this is my first simple model:
mod1 <- lme(fixed = performance ~ time,
random = ~ 1 + time | GroupID/StudentNumber,
data = dataset.long,
na.action = na.omit)
It fits to the data well. I tried to specify the same model in lmer():
detach("package:nlme")
library(lme4)
mod1 <- lmer(performance ~ time + (time | GroupID/StudentNumber), data
= dataset.long, na.action = na.omit)
However, this call results in a segfault:
*** caught segfault ***
address 0x154c3000, cause 'memory not mapped'
and a lengthy traceback. I can reproduce this error. It also occurs
when I don't load nlme before lme4. Can someone tell me what I am
doing wrong? Any help is greatly appreciated.
With best regards,
Bertolt
--
Dr. Bertolt Meyer
Senior research and teaching associate
Social Psychology, Institute of Psychology, University of Zurich
Binzmuehlestrasse 14, Box 15
CH-8050 Zurich
Switzerland
bmeyer at sozpsy.uzh.ch
--Forwarded Message Attachment--
From: Thorn.Thaler at rdls.nestle.com
To: r-help at r-project.org
Date: Mon, 23 Aug 2010 18:18:48 +0200
Subject: [R] Sum a list of tables
Hi all,
In R it is possible to sum tables:
> (a <- table(rep(1:3, sample(10,3))))
1 2 3
2 5 7
> a+a
1 2 3
4 10 14
Now suppose that I have a list of tables, where each table counts the
same things
> k <- list(a,a,a)
How can I sum all tables in k?
> do.call(sum, k)
[1] 42
does not work since it sums over each table.
> do.call(`+`, list(a,a))
1 2 3
4 10 14
does not work with lists that do not contain exactly two elements (since `+`
takes exactly 2 arguments). So I think I should write something like
Summary.table <- function(..., na.rm) {
if (.Generic == "sum") {
...
} else { # use the default method
NextMethod()
}
}
So first question: where is the `+` operation defined for tables? Is it
S4? How can I see the source code of S4 functions (I'm not very
comfortable with S4)? Or in general how do I find all generic functions
for a specific class? I can get all S3 implementations of plot with
methods(plot), I can get all S4 functions with getMethods(plot). But
I've no idea of how to find all methods defined for a class? (e.g. all
functions that operate on tables, say)
Second question: my first dirty hack would be something like
args <- list(...)
table.sum <- args[[1]]
for (i in 2:length(args)) table.sum <- table.sum + args[[i]]
making use of the fact that `+` is defined for tables (and forgetting
about cases where two tables don't feature the same names). It works,
but isn't there a more elegant way to get the same?
Last question: is there a way to determine the call stack, such that I
can see the names of the function which are actually executed when I
commit a command? I know a little about R's dispatching mechanism for S3
classes (plot(a) actually calls plot.table) but I've no clue which
function is called if I type a + a (especially since `+` belongs to the
generic function group Ops and I do not know at all whether its S4 or
S3). I read the documentation about S3 Group Generic Functions and tried
to delve into S4, but obviously I was not able to understand everything
completely. So it would be great if somebody could help me out with this
specific topic and point me to some resources where I can learn more.
Thanks for your help in advance.
BR,
Thorn
--Forwarded Message Attachment--
From: Lars.Beckmann at gmx.net
To: r-help at r-project.org
Date: Mon, 23 Aug 2010 18:24:40 +0200
Subject: [R] Problems installing JRI on Macintosh/Snowleopard
Dears,
I have a problem to install JRI on a Macintosh with Snowleopard OS.
runs without error message but gives the following error message:
$ make
make -C src JRI.jar
gcc -arch i386 -arch ppc -c -o Rengine.o Rengine.c -g -Iinclude -fno-common
-I/System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home/include
-I/System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home/include/.
-I/Library/Frameworks/R.framework/Resources/include
Rengine.c: In function 'Java_org_rosuda_JRI_Rengine_rniParse':
Rengine.c:89: error: too few arguments to function 'R_ParseVector'
Rengine.c: In function 'Java_org_rosuda_JRI_Rengine_rniParse':
Rengine.c:89: error: too few arguments to function 'R_ParseVector'
lipo: can't figure out the architecture type of:
/var/folders/fh/fhFYQmHXHLyJzDtM+6Qq3E+++TI/-Tmp-//cc468LbA.out
make[1]: *** [Rengine.o] Error 1
make: *** [src/JRI.jar] Error 2
Current java version:
java -version
java version "1.6.0_20"
Java(TM) SE Runtime Environment (build 1.6.0_20-b02-279-10M3065)
Java HotSpot(TM) 64-Bit Server VM (build 16.3-b01-279, mixed mode)
Current R-version:
R version 2.10.1 (2009-12-14)
Copyright (C) 2009 The R Foundation for Statistical Computing
ISBN 3-900051-07-0
Any help welcome
Lars
--Forwarded Message Attachment--
From: hadley at rice.edu
To: R-help at r-project.org
Date: Mon, 23 Aug 2010 11:26:47 -0500
Subject: Re: [R] Recyclable
I should note that I realise this function is pretty trivial to write
(see below), I just want to avoid reinventing the wheel.
recyclable <- function(...) {
lengths <- vapply(list(...), length, 1)
all(max(lengths) %% lengths == 0)
}
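For instance (hypothetical calls, just to illustrate the intended behaviour):

recyclable(1:6, 1:2, 1:3)   # TRUE: 6 is a multiple of both 2 and 3
recyclable(1:3, 1:2)        # FALSE: 3 is not a multiple of 2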
Hadley
On Mon, Aug 23, 2010 at 10:33 AM, Hadley Wickham wrote:
> Hi all,
>
> Is there a function to determine whether a set of vectors is cleanly
> recyclable? i.e. is there a common function for detecting the
> error/warnings that underlie the following two function calls?
>
>> 1:3 + 1:2
> [1] 2 4 4
> Warning message:
> In 1:3 + 1:2 :
> longer object length is not a multiple of shorter object length
>
>> data.frame(1:3, 1:2)
> Error in data.frame(1:3, 1:2) :
> arguments imply differing number of rows: 3, 2
>
>
> Hadley
>
> --
> Assistant Professor / Dobelman Family Junior Chair
> Department of Statistics / Rice University
> http://had.co.nz/
>
--
Assistant Professor / Dobelman Family Junior Chair
Department of Statistics / Rice University
http://had.co.nz/
--Forwarded Message Attachment--
From: colorleaf at hotmail.com
To: r-help at r-project.org
Date: Mon, 23 Aug 2010 12:33:31 -0400
Subject: [R] (no subject)
Hi all,
I have a question about survplot in the Design package. There is an option to print
the number of subjects at risk at the start of each time interval.
But I do not know how the time interval is decided, i.e. I do not know the
corresponding time for each number at risk printed.
How can I get the time?
Thanks!
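If it helps, a hedged sketch of how those times are usually controlled (see
?survplot for the exact arguments; the fit object f, e.g. from cph(..., surv = TRUE),
is hypothetical):

survplot(f, n.risk = TRUE, time.inc = 12)   # print numbers at risk every 12 time units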
--Forwarded Message Attachment--
From: rmh at temple.edu
CC: r-help at r-project.org
To: Thorn.Thaler at rdls.nestle.com
Date: Mon, 23 Aug 2010 12:35:36 -0400
Subject: Re: [R] Sum a list of tables
Reduce(`+`, k)
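For example, a quick check (the table below mirrors the one in the question):

a <- as.table(c(`1` = 2, `2` = 5, `3` = 7))
k <- list(a, a, a)
Reduce(`+`, k)
# 1  2  3
# 6 15 21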
On Mon, Aug 23, 2010 at 12:18 PM, Thaler, Thorn, LAUSANNE, Applied
Mathematics wrote:
> Hi all,
>
> In R it is possible to sum tables:
>
>> (a <- table(rep(1:3, sample(10,3))))
>
> 1 2 3
> 2 5 7
>
>> a+a
>
> 1 2 3
> 4 10 14
>
> Now suppose that I have a list of tables, where each table counts the
> same things
>
>> k <- list(a,a,a)
>
> How can I sum all tables in k?
>
>> do.call(sum, k)
> [1] 42
>
> does not work since it sums over each table.
>
>> do.call(`+`, list(a,a))
>
> 1 2 3
> 4 10 14
>
> works not with lists containing not exactly two values (since `+` takes
> exactly 2 values). So I think I should write something like
>
> Summary.table <- function(..., na.rm) {
> if (.Generic == "sum") {
> ...
> } else { # use the default method
> NextMethod()
> }
> }
>
> So first question: where is the `+` operation defined for tables? Is it
> S4? How can I see the source code of S4 functions (I'm not very
> comfortable with S4)? Or in general how do I find all generic functions
> for a specific class? I can get all S3 implementations of plot with
> methods(plot), I can get all S4 functions with getMethods(plot). But
> I've no idea of how to find all methods defined for a class? (e.g. all
> functions that operate on tables, say)
>
> Second question: my first dirty hack would be something like
>
> args <- list(...)
> table.sum <- args[[1]]
> for (i in 2:length(args)) table.sum <- table.sum + args[[i]]
>
>
> making use of the fact that `+` is defined for tables (and forgetting
> about cases where two tables don't feature the same names). It works,
> but isn't there a more elegant way to get the same?
>
> Last question: is there a way to determine the call stack, such that I
> can see the names of the function which are actually executed when I
> commit a command? I know a little about R's dispatching mechanism for
S3
> classes (plot(a) actually calls plot.table) but I've no clue which
> function is called if I type a + a (especially since `+` belongs to the
> generic function group Ops and I do not know at all whether its S4 or
> S3). I read the documentation about S3 Group Generic Functions and tried
> to delve into S4, but obviously I was not able to understand everything
> completely. So it would be great if somebody could help me out with this
> specific topic and point me to some resources where I can learn more.
>
> Thanks for your help in advance.
>
> BR,
>
> Thorn
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--Forwarded Message Attachment--
From: rmh at temple.edu
CC: r-help at r-project.org
To: Thorn.Thaler at rdls.nestle.com
Date: Mon, 23 Aug 2010 12:36:35 -0400
Subject: Re: [R] Sum a list of tables
sorry, cancel that. I will try again.
On Mon, Aug 23, 2010 at 12:35 PM, RICHARD M. HEIBERGER wrote:
> Reduce(`+`, k)
>
--Forwarded Message Attachment--
From: Thorn.Thaler at rdls.nestle.com
CC: r-help at r-project.org
To: rmh at temple.edu
Date: Mon, 23 Aug 2010 18:39:53 +0200
Subject: Re: [R] Sum a list of tables
Perfect, it works as expected. Regarding the other questions, can anybody point
me in the right direction?
BR Thorn
From: RICHARD M. HEIBERGER [mailto:rmh at temple.edu]
Sent: Monday, 23 August 2010 18:36
To: Thaler,Thorn,LAUSANNE,Applied Mathematics
Cc: r-help at r-project.org
Subject: Re: [R] Sum a list of tables
Reduce(`+`, k)
On Mon, Aug 23, 2010 at 12:18 PM, Thaler, Thorn, LAUSANNE, Applied Mathematics
wrote:
Hi all,
In R it is possible to sum tables:
> (a <- table(rep(1:3, sample(10,3))))
1 2 3
2 5 7
> a+a
1 2 3
4 10 14
Now suppose that I have a list of tables, where each table counts the
same things
> k <- list(a,a,a)
How can I sum all tables in k?
> do.call(sum, k)
[1] 42
does not work since it sums over each table.
> do.call(`+`, list(a,a))
1 2 3
4 10 14
works not with lists containing not exactly two values (since `+` takes
exactly 2 values). So I think I should write something like
Summary.table <- function(..., na.rm) {
if (.Generic == "sum") {
...
} else { # use the default method
NextMethod()
}
}
So first question: where is the `+` operation defined for tables? Is it
S4? How can I see the source code of S4 functions (I'm not very
comfortable with S4)? Or in general how do I find all generic functions
for a specific class? I can get all S3 implementations of plot with
methods(plot), I can get all S4 functions with getMethods(plot). But
I've no idea of how to find all methods defined for a class? (e.g. all
functions that operate on tables, say)
Second question: my first dirty hack would be something like
args <- list(...)
table.sum <- args[[1]]
for (i in 2:length(args)) table.sum <- table.sum + args[[i]]
making use of the fact that `+` is defined for tables (and forgetting
about cases where two tables don't feature the same names). It works,
but isn't there a more elegant way to get the same?
Last question: is there a way to determine the call stack, such that I
can see the names of the function which are actually executed when I
commit a command? I know a little about R's dispatching mechanism for S3
classes (plot(a) actually calls plot.table) but I've no clue which
function is called if I type a + a (especially since `+` belongs to the
generic function group Ops and I do not know at all whether its S4 or
S3). I read the documentation about S3 Group Generic Functions and tried
to delve into S4, but obviously I was not able to understand everything
completely. So it would be great if somebody could help me out with this
specific topic and point me to some resources where I can learn more.
Thanks for your help in advance.
BR,
Thorn
--Forwarded Message Attachment--
From: bbolker at gmail.com
To: r-help at stat.math.ethz.ch
Date: Mon, 23 Aug 2010 16:46:52 +0000
Subject: Re: [R] lmer() causes segfault
Bertolt Meyer <bmeyer at sozpsy.uzh.ch> writes:
>
> Hello lmer() - users,
>
> A call to the lmer() function causes my installation of R (2.11.1 on
> Mac OS X 10.5.8) to crash and I am trying to figure out the problem.
[snip snip]
> detach("package:nlme")
> library(lme4)
>
> mod1 <- lmer(performance ~ time + (time | GroupID/StudentNumber), data
> = dataset.long, na.action = na.omit)
>
> However, this call results in a segfault:
>
> *** caught segfault ***
> address 0x154c3000, cause 'memory not mapped'
>
> and a lengthy traceback. I can reproduce this error. It also occurs
> when I don't load nlme before lme4. Can someone tell me what I am
> doing wrong? Any help is greatly appreciated.
This may well be a bug in lmer. There have been a number of
fussy computational issues with the lme4 package on the Mac platform.
If it is at all possible, please (1) post the results of sessionInfo()
[which will in particular specify which version of lme4 you are using];
(2) possibly try this with the latest development version of lme4, from
R-forge, if that's feasible (it might be necessary to build the package
from source), and most importantly:
(3) create a reproducible (for others) example -- most easily by
posting your data on the web somewhere, but if that isn't possible
by simulating data similar to yours (if it doesn't happen with another
data set of similar structure, that's a clue -- it says it's some more
particular characteristic of your data that triggers the problem) and
(4) post to to *either* the R-sig-mac or the R-sig-mixed-models list,
where the post is more likely to come to the attention of those who
can help diagnose/fix ...
good luck
Ben Bolker
--Forwarded Message Attachment--
From: bbolker at gmail.com
To: r-help at stat.math.ethz.ch
Date: Mon, 23 Aug 2010 16:52:55 +0000
Subject: Re: [R] sendmailR-package-valid code needed
Barry Rowlingson <b.rowlingson at lancaster.ac.uk> writes:
>
> On Mon, Aug 23, 2010 at 3:55 PM, Velappan Periasamy <veepsirtt at gmail.com> wrote:
>> Hello Chris Campbell,
>>
I just posted my embryonic 'Rmail' package, which does a form
of SMTP authentication (maybe not the version you want), to
http://www.mathserv.mcmaster.ca/~bolker/R/src/contrib/Rmail_1.0.tar.gz
You should be able to install it via
install.packages("Rmail",contriburl="http://www.math.mcmaster.ca/~bolker/R/src/contrib")
(note that it is a source package -- there's no compiled
code, though, so if necessary you can download the tarball and dig
the R source code out ...)
--Forwarded Message Attachment--
From: bbolker at gmail.com
To: r-help at stat.math.ethz.ch
Date: Mon, 23 Aug 2010 17:08:02 +0000
Subject: Re: [R] R Package about Variable Selection for GLMM (Generalized Linear
Mixed Model)?
Jun Bum Kwon uwm.edu> writes:
[lightly edited]
> I saw several great R packages for Variable Selection.
> I also found several R packages for GLMM. But, I
> did not find yet R package about Variable Selection for
> GLMM even though several papers about it have
> been published.
Such methods may or may not have been coded in R at present.
It would be helpful to (a) post references for the 'several papers'
you refer to and (b) post further queries to the R-sig-mixed-models
mailing list, which is more appropriate for questions about (G)LMMs.
This may also be an appropriate place for the comment, "tell us what
problem you are trying to solve, not just which methods you want to use",
since
sometimes people on the list are aware of different (and/or better, at
least according to some criteria) methods for solving the same problem.
cheers
Ben Bolker
--Forwarded Message Attachment--
From: stephens.js at gmail.com
To: r-help at R-project.org
Date: Mon, 23 Aug 2010 12:33:20 -0500
Subject: [R] Kalman Filtering with Singular State Noise Covariance Matrix
Since notation for state-space models vary, I'll use the following
convention:
x(t) indicates the state vector, y(t) indicates the vector of observed
quantities.
State Transition Equation: x(t+1) = Fx(t) + v(t)
Observation Equation: y(t) = Gx(t) + w(t)
Cov[v(t)] = V
Cov[w(t)] = W
I've found myself in a situation where I will have V = s%*%t(s)*k^2,
with s a vector the same length as the state vector, and k a constant.
Theoretically then, V should be exactly singular, with exactly 1
nonzero eigenvalue. In practice, of course, I often end up with V
having many very small positive eigenvalues, and a few very small
negative eigenvalues - I'm not sure whether this is a consequence of
the numerical error coming from the outer product or from the
eigenvalue decomposition. I've been trying to use the dlm package,
but it computes the eigenvalue decomposition of V and complains when
it's not numerically non-negative definite. My end goal with using
dlm is to compute the likelihood function in order to do an MLE.
Is there a recommended way of handling this problem? Would another
package be easier to use in this case?
-- Scott
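[Not an answer, but one workaround sketch for the symptom described above:
build V with tcrossprod() so it is symmetric by construction, and clip the
tiny negative eigenvalues before handing it to the filter. The s and k below
are made-up placeholders.]
s <- c(1, 0.5, 0.25)        # hypothetical loading vector
k <- 0.1                    # hypothetical scale
V <- tcrossprod(s) * k^2    # s %*% t(s) * k^2, symmetric by construction
e <- eigen(V, symmetric = TRUE)
vals <- pmax(e$values, 0)   # zero out round-off negatives
V.psd <- e$vectors %*% diag(vals) %*% t(e$vectors)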
--Forwarded Message Attachment--
From: Greg.Snow at imail.org
CC: r-help at r-project.org
To: r.ookie at live.com; dwinsemius at comcast.net
Date: Mon, 23 Aug 2010 11:42:44 -0600
Subject: Re: [R] Aspect Ratio
A couple of additional examples of when asp is important to use:
The command abline(0,1) adds a line to the current plot. This line is often
referred to as the 45-degree line, but its angle with the axes is only 45
degrees when asp == 1; setting asp = 1 enforces this.
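A quick way to see this (the data are arbitrary):
plot(1:10, 1:10, asp = 1)   # one x unit drawn the same length as one y unit
abline(0, 1)                # the line really is at 45 degrees now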
There are multiple packages that produce maps relating to real world geography.
These maps look really funny (and not related to real world geography) if they
are allowed to fill the available graph space rather than enforcing an
appropriate aspect ratio (usually not 1).
William Cleveland did research showing that many plots are easier to interpret
when the aspect ratio is set so that the average absolute angle of the line
segments of interest is 45 degrees ("banking to 45 degrees"). Compare the
following 2 plots (look at how fast the sunspots increase vs. how fast they
decrease):
plot(sunspots, type='l')
plot(sunspots, type='l', asp=1/10)
Another function to look at if you don't want all the white space inside of
the plot is the squishplot function in the TeachingDemos package.
Hope this helps,
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.snow at imail.org
801.408.8111
> -----Original Message-----
> From: r-help-bounces at r-project.org [mailto:r-help-bounces at r-
> project.org] On Behalf Of r.ookie
> Sent: Thursday, August 19, 2010 3:29 PM
> To: David Winsemius
> Cc: r-help at r-project.org
> Subject: Re: [R] Aspect Ratio
>
> Well, I had to look further into the documentation to see 'If asp is a
> finite positive value then the window is set up so that one data unit
> in the x direction is equal in length to asp * one data unit in the y
> direction'
>
> Okay, so in what situations is the 'asp' helpful?
>
> On Aug 19, 2010, at 2:24 PM, David Winsemius wrote:
>
>
> On Aug 19, 2010, at 5:13 PM, r.ookie wrote:
>
>> set.seed(1)
>> x <- rnorm(n = 1000, mean = 0, sd = 1)
>> plot(x = x, asp = 2000)
>>
>> Could someone please explain what the 'asp' parameter is doing?
>
> You want us to read the help page to you?
>
> --
>
> David Winsemius, MD
> West Hartford, CT
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-
> guide.html
> and provide commented, minimal, self-contained, reproducible code.
--Forwarded Message Attachment--
From: ali at kmhome.org
To: r-help at r-project.org
Date: Mon, 23 Aug 2010 11:02:13 -0700
Subject: [R] change order of plot panels in faceted ggplot/qplot
Hi,
I have a 5-paneled figure that I made using the facet function in
qplot (ggplot). I've managed to arrange the panels into two rows/
three columns, but for the sake of easy visual comparisons between
panels in my particular dataset, I want to have the two plots on the
bottom align on the right hand side of the figure instead of the left.
Here's an example:
m <- matrix(rnorm(300), nrow = 60)
colnames(m) <- paste('V', 1:5, sep = '')
b <- data.frame(site = factor(rep(c('A', 'B', 'C', 'D', 'E'), each = 12)),
                status = factor(rep(rep(c('D', 'L'), each = 3), 10)),
                as.data.frame(m))
qplot(V2, V1, data = b, shape = status) + scale_shape_manual(value = c(1, 16)) +
  facet_wrap(~ site, nrow = 2)
What I would like to do is keep the 2 row shape, keep the order
(A,B,C) of the top plots, but have the D and E panels in this example
align under the B and C plots.
Is this possible using qplot?
Many thanks,
Ali
--Forwarded Message Attachment--
From: Steven.Ranney at montana.edu
To: r-help at r-project.org
Date: Mon, 23 Aug 2010 12:15:33 -0600
Subject: [R] Quantile Regression and Goodness of Fit
All -
Does anyone know if there is a method to calculate a goodness-of-fit
statistic for quantile regressions with package quantreg?
Specifically, I'm wondering if anyone has implemented the
goodness-of-fit process developed by Koenker and Machado (1999) for R?
Though I have used package quantreg in the past, I may have overlooked
this function, if it is included.
Citation:
Koenker, R. and J. A. F. Machado. 1999. Goodness of fit and related
inference processes for quantile regression. Journal of the American
Statistical Association 94:1296-1310.
Thank you -
Steven H. Ranney
Graduate Research Assistant (Ph.D.)
USGS MT Cooperative Fishery Research Unit
Montana State University
PO Box 173460
Bozeman, MT 59717
office: 406-994-6643
fax: 406-994-7479
http://studentweb.montana.edu/steven.ranney
--Forwarded Message Attachment--
From: bbolker at gmail.com
To: f.harrell at vanderbilt.edu; r-help at r-project.org
Date: Mon, 23 Aug 2010 14:22:46 -0400
Subject: Re: [R] sendmailR-package-valid code needed
No, alas. It would have been nice but I decided I didn't need it that
badly/need to spend time reinventing that many wheels. (It does do
mail-merge, though, which is what I developed it for in the first
place.) It seemed to me that a more sensible solution would have been
to find a Perl or Python library/code fragment that knew the formats for
MIME attachments, but surprisingly (to me) I didn't easily discover
anything appropriate floating around on the web ...
Frank Harrell wrote:
> Ben does yours create mime attachments?
> Thanks
> Frank
>
> Frank E Harrell Jr Professor and Chairman School of Medicine
> Department of Biostatistics Vanderbilt University
>
> On Mon, 23 Aug 2010, Ben Bolker wrote:
>
>> Barry Rowlingson lancaster.ac.uk> writes:
>>
>>>
>>> On Mon, Aug 23, 2010 at 3:55 PM, Velappan Periasamy
>>> gmail.com>
>> wrote:
>>>> Hello Chris Campbell ,
>>>>
>>
>> I just posted my embryonic 'Rmail' package, which does a form
>> of SMTP authentication (maybe not the version you want), to
>> http://www.mathserv.mcmaster.ca/~bolker/R/src/contrib/Rmail_1.0.tar.gz
>>
>> You should be able to install it via
>>
>>
install.packages("Rmail",contriburl="http://www.math.mcmaster.ca/~bolker/R/src/contrib")
>>
>>
>> (although note that it is a source package -- there's no compiled
>> code, though, so if necessary you can download the tarball and dig
>> the R source code out ...)
>>
>> ______________________________________________
>> R-help at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
--Forwarded Message Attachment--
From: nikhil.list at gmail.com
CC: r-help at r-project.org
To: Steven.Ranney at montana.edu
Date: Mon, 23 Aug 2010 14:55:52 -0400
Subject: Re: [R] Quantile Regression and Goodness of Fit
http://www.econ.uiuc.edu/~roger/research/R1/R1.html
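[In case a worked sketch helps: the pseudo R^1 from that paper can be computed
by hand from two rq() fits. The data frame dat and predictor x below are
hypothetical placeholders.]
library(quantreg)
rho <- function(u, tau) u * (tau - (u < 0))   # the quantile check function
tau  <- 0.5
fit  <- rq(y ~ x, tau = tau, data = dat)      # full fit
fit0 <- rq(y ~ 1, tau = tau, data = dat)      # intercept-only fit
R1 <- 1 - sum(rho(resid(fit), tau)) / sum(rho(resid(fit0), tau))
R1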
On Mon, Aug 23, 2010 at 2:15 PM, Steven Ranney wrote:
> All -
>
> Does anyone know if there is a method to calculate a goodness-of-fit
> statistic for quantile regressions with package quantreg?
> Specifically, I'm wondering if anyone has implemented the
> goodness-of-fit process developed by Koenker and Machado (1999) for R?
>
> Though I have used package quantreg in the past, I may have overlooked
> this function, if it is included.
>
> Citation:
>
> Koenker, R. and J. A. F. Machado. 1999. Goodness of fit and related
> inference processes for quantile regression. Journal of the American
> Statistical Association 94:1296-1310.
>
> Thank you -
>
> Steven H. Ranney
> Graduate Research Assistant (Ph.D.)
>
> USGS MT Cooperative Fishery Research Unit
> Montana State University
> PO Box 173460
> Bozeman, MT 59717
>
> office: 406-994-6643
> fax: 406-994-7479
>
> http://studentweb.montana.edu/steven.ranney
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
[[alternative HTML version deleted]]
--Forwarded Message Attachment--
From: johan.steen at gmail.com
To: r-help at r-project.org
Date: Mon, 23 Aug 2010 21:01:40 +0200
Subject: [R] extracting p-values from Anova objects (from the car library)
Dear all,
is there anyone who can help me extracting p-values from an Anova object
from the car library? I can't seem to locate the p-values using
str(result) or str(summary(result)) in the example below
> A <- factor( rep(1:2,each=3) )
> B <- factor( rep(1:3,times=2) )
> idata <- data.frame(A,B)
> fit <- lm( cbind(a1_b1,a1_b2,a1_b3,a2_b1,a2_b2,a2_b3) ~ sex, data=Data.wide)
> result <- Anova(fit, type="III", test="Wilks", idata=idata, idesign=~A*B)
Any help would be much appreciated!
Many thanks,
Johan
--Forwarded Message Attachment--
From: ggrothendieck at gmail.com
CC: r-help at r-project.org
To: f.veronesi at cranfield.ac.uk
Date: Mon, 23 Aug 2010 15:03:49 -0400
Subject: Re: [R] Fitting Weibull Model with Levenberg-Marquardt regression
method
On Mon, Aug 23, 2010 at 11:01 AM, Veronesi, Fabio wrote:
> Hi,
> I have a problem fitting the following Weibull Model to a set of data.
> The model is this one: a-b*exp(-c*x^d)
> If I fit the model with CurveExpert I can find a very nice set of
> coefficients which create a curve very close to my data, but when I use the
> nls.lm function in R I can't obtain the same result.
> My data are these:
> X Y
> 15 13
> 50 13
> 75 9
> 90 4
>
> With the commercial software I obtain the following coefficients:
> Weibull Model: y=a-b*exp(-c*x^d)
> Coefficient Data:
> a = 1.31636909714E+001
> b = 7.61325570579E+002
> c = 2.82150000991E+002
> d = -9.23838785044E-001
>
> For fitting the Levenberg-Marquardt in R I'm using the following lines:
> pS<-list(a=1,b=1,c=1,d=1)
> model<-function(pS,xx){pS$a-pS$b*exp(-pS$c*xx^-pS$d)}
> resid<-function(observed,pS,xx){observed-model(pS,xx)}
> lin<-nls.lm(pS,resid,observed=Y,xx=X)
>
> Why can't I obtain the same results?
> Many thanks in advance,
> Fabio
Note that you have 4 parameters and only 4 data points! You are not
likely to get anything useful with that. At any rate, try this, noting
that with alg = "plinear" you don't have to provide starting values
for the linear parameters:
> DF <- data.frame(X = c(15, 50, 75, 90), Y = c(13, 13, 9, 4))
>
> nls(Y ~ cbind(1, exp(-c*X^d)), DF, start = list(c = 1, d = 1), alg = "plinear")
Nonlinear regression model
model: Y ~ cbind(1, exp(-c * X^d))
data: DF
c d .lin1 .lin2
1.000e+00 1.000e+00 8.667e+00 1.417e+07
residual sum-of-squares: 40.67
Number of iterations to convergence: 0
Achieved convergence tolerance: 0
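[For completeness, one could also feed nls() the CurveExpert coefficients
quoted above as starting values, although with four parameters and four
points there is no guarantee of convergence -- hence the try().]
DF <- data.frame(X = c(15, 50, 75, 90), Y = c(13, 13, 9, 4))
try(nls(Y ~ a - b * exp(-c * X^d), data = DF,
        start = list(a = 13.16, b = 761.3, c = 282.2, d = -0.924)))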
--Forwarded Message Attachment--
From: pmilin at ff.uns.ac.rs
To: r-help at stat.math.ethz.ch
Date: Mon, 23 Aug 2010 21:33:19 +0200
Subject: [R] Coinertia randtest
Hello!
I dunno why, but I cannot get randtes.coinertia() from the ade4 package to
work. I have two nice distance matrices (Euclidean):
> dist1
1 2 3 4 5 6 7
2 2.5776799
3 1.7892825 1.0637487
4 1.0557991 2.4270728 2.0626604
5 1.6745483 4.1505805 3.4581614 1.8900295
6 2.5045058 0.8144662 0.7157900 2.6888306 4.1708413
7 4.1367323 1.6058153 2.6451634 3.7795019 5.6220092 2.1504953
8 1.5224318 1.4444836 1.3682047 1.0048085 2.8425813 1.8464349 2.7853605
9 4.5321276 4.5859822 3.8178836 5.4277885 5.7703245 3.7731505 5.6657106
10 4.0389095 1.6702235 2.7327563 3.5495557 5.4279792 2.3611163 0.5540045
8 9
2
3
4
5
6
7
8
9 5.1082493
10 2.5924107 6.0248981
> dist2
1 2 3 4 5 6 7
2 0.9364828
3 1.4333876 1.9160376
4 0.2523886 1.1884864 1.3697080
5 2.3753105 2.5185909 3.7830808 2.4140423
6 1.7928748 1.6959363 1.0841587 1.8848077 4.0827074
7 2.5117723 1.6858232 2.7455418 2.7406021 4.0115957 1.8140011
8 0.6014150 0.3748333 1.8022486 0.8514693 2.2931638 1.8062392 2.0515604
9 2.8802778 3.5524921 1.6707483 2.7134664 4.9800100 2.5480973 4.3439383
10 3.0686316 2.6196374 2.4765500 3.2133783 5.1381903 1.4228493 1.5749603
8 9
2
3
4
5
6
7
8
9 3.3777063
10 2.8790276 3.6806114
And then, I follow the help:
> pco1 = dudi.pco(dist1, nf=2, scannf=F)
> pco2 = dudi.pco(dist2, nf=2, scannf=F)
> coi = coinertia(pco1, pco2, nf=2, scannf=F)
> testco1 = randtest.coinertia(coi, nrepet=1000)
The result is:
> Error in randtest.coinertia(coi, nrepet = 1000) : Not yet available
Could anyone help with this?
Best,
PM
--Forwarded Message Attachment--
From: eva.nordstrom at yahoo.com
To: r-help at r-project.org
Date: Mon, 23 Aug 2010 12:50:01 -0700
Subject: [R] "easiest" way to write an R dataframe to excel?
I am using R 2.11.1 in a Microsoft Windows 7 environment.
I tried using WriteXLS, but get the message "In system(cmd) : perl not found".
What is the "easiest" way to write an R dataframe to Excel? (I am familiar with
WriteXLS, but I do not have Perl installed, and if not needed, do not wish to
install it.)
I am also familiar with write.table, but if possible, wish to create an Excel
file from within R.
I'm unsure if this is possible, or perhaps I should just go ahead and install
Perl...?
[[alternative HTML version deleted]]
--Forwarded Message Attachment--
From: dwinsemius at comcast.net
CC: r-help at r-project.org
To: johan.steen at gmail.com
Date: Mon, 23 Aug 2010 15:56:27 -0400
Subject: Re: [R] extracting p-values from Anova objects (from the car library)
On Aug 23, 2010, at 3:01 PM, Johan Steen wrote:
> Dear all,
>
> is there anyone who can help me extracting p-values from an Anova
> object from the car library? I can't seem to locate the p-values
> using str(result) or str(summary(result)) in the example below
>
>> A <- factor( rep(1:2,each=3) )
>> B <- factor( rep(1:3,times=2) )
>> idata <- data.frame(A,B)
>> fit <- lm( cbind(a1_b1,a1_b2,a1_b3,a2_b1,a2_b2,a2_b3) ~ sex, data=Data.wide)
>> result <- Anova(fit, type="III", test="Wilks", idata=idata, idesign=~A*B)
# you forgot require(car)
> A <- factor( rep(1:2,each=3) )
> B <- factor( rep(1:3,times=2) )
> idata <- data.frame(A,B)
> fit <- lm( cbind(a1_b1,a1_b2,a1_b3,a2_b1,a2_b2,a2_b3)~sex,
data=Data.wide)
Error in inherits(x, "data.frame") : object 'Data.wide' not
found
I am guessing that you have an object Data.wide and you are not giving
us any look at it.
Using the lm help page example:
It appears that the print method for Anova is what would return the p-
values:
prtAnova <- Anova(lm.D9 <- lm(weight ~ group), type="III")
> str(prtAnova)
Classes 'anova' and 'data.frame': 3 obs. of 4 variables:
$ Sum Sq : num 253.21 0.688 8.729
$ Df : num 1 1 18
$ F value: num 522.13 1.42 NA
$ Pr(>F) : num 9.55e-15 2.49e-01 NA
- attr(*, "heading")= chr [1:2] "Anova Table (Type III tests)\n" "Response: weight"
So this is one way:
> prtAnova$'Pr(>F)'
[1] 9.547128e-15 2.490232e-01 NA
Further examination makes me wonder why you decided that the summary
method did not produce a p-value.
> sumryAnova <- summary(Anova(lm.D9 <- lm(weight ~ group), type="III"))
> str(sumryAnova)
 'table' chr [1:7, 1:4] "Min.   : 0.6882  " "1st Qu.: 4.7087  " "Median : 8.7293  " ...
 - attr(*, "dimnames")=List of 2
  ..$ : chr [1:7] "" "" "" "" ...
  ..$ : chr [1:4] " Sum Sq" " Df" " F value" " Pr(>F)"
Perhaps you didn't realize that Pr(>F) was the p-value? (It
would be a bit more difficult to get the p-value from the summary
object since it needs to be extracted with attribute functions.)
--
David Winsemius, MD
West Hartford, CT
--Forwarded Message Attachment--
From: landronimirc at gmail.com
CC: r-help at r-project.org
To: eva.nordstrom at yahoo.com
Date: Mon, 23 Aug 2010 22:56:23 +0300
Subject: Re: [R] "easiest" way to write an R dataframe to excel?
On Mon, Aug 23, 2010 at 10:50 PM, Eva Nordstrom wrote:
> I am using R 2.11.1 in a Microsoft Windows 7 environment.
>
> I tried using WriteXLS, but get the message " In system(cmd) : perl
not found"
>
> What is the "easiest" way to write an R dataframe to Excel? (I
am familiar with
> WriteXLS, but I do not have PERL installed, and if not needed, do not wish
to
> install it.)
>
For easy ways, check RExcel and, perhaps, exporting a df to CSV via
Rcmdr menus (Data> Active data set> Export).
Liviu
> I am also familiar with write.table, but if possible, wish to create an
excel
> file form within R.
>
> I'm unsure if this is possible, or perhaps i should just go ahead and
install
> PERL...?
>
>
>
> [[alternative HTML version deleted]]
>
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
>
--
Do you know how to read?
http://www.alienetworks.com/srtest.cfm
http://goodies.xfce.org/projects/applications/xfce4-dict#speed-reader
Do you know how to write?
http://garbl.home.comcast.net/~garbl/stylemanual/e.htm#e-mail
--Forwarded Message Attachment--
From: liulei at virginia.edu
To: r-help at r-project.org
Date: Mon, 23 Aug 2010 15:58:27 -0400
Subject: [R] trajectory plot (growth curve)
Hi there,
I want to make trajectory plots for data as follows:
ID time y
1 1 1.4
1 2 2.0
1 3 2.5
2 1.5 2.3
2 4 4.5
2 5.5 1.6
2 6 2.0
...
That is, I will plot a growth curve for each subject ID, with y in
the y axis, and time in the x axis. I would like to have all growth
curves in the same plot. Is there a simple way in R to do it? Thanks a lot!
Lei Liu
Associate Professor
Division of Biostatistics and Epidemiology
Department of Public Health Sciences
University of Virginia School of Medicine
http://people.virginia.edu/~ll9f/
--Forwarded Message Attachment--
From: gunter.berton at gene.com
CC: r-help at r-project.org
To: ali at kmhome.org
Date: Mon, 23 Aug 2010 13:07:08 -0700
Subject: Re: [R] change order of plot panels in faceted ggplot/qplot
This is easy to do in xyplot (latice package) via the index.cond and
skip arguments. Don't know about ggplot though.
-- Bert
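[For reference, a sketch of the lattice route Bert mentions; as.table = TRUE
fills the top row first, and skipping the bottom-left cell puts D and E under
B and C. Untested against the real data; b is the example data frame from the
question.]
library(lattice)
xyplot(V1 ~ V2 | site, data = b, as.table = TRUE,
       layout = c(3, 2),
       skip = c(FALSE, FALSE, FALSE, TRUE, FALSE, FALSE))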
On Mon, Aug 23, 2010 at 11:02 AM, Alison Macalady wrote:
> Hi,
>
> I have a 5-paneled figure that i made using the facet function in qplot
> (ggplot). I've managed to arrange the panels into two rows/three
columns,
> but for the sake of easy visual comparisons between panels in my particular
> dataset, I want to have the two plots on the bottom align on the right hand
> side of the figure instead of the left.
>
> Here's an example:
> m <- matrix(rnorm(300), nrow = 60)
> colnames(m) <- paste('V', 1:5, sep = '')
> b <- data.frame(site = factor(rep(c('A', 'B',
'C', 'D', 'E'), each = 12)),
> status = factor(rep(rep(c('D','L'), each = 3), 10)),
as.data.frame(m))
>
> qplot(V2, V1, data=b, shape=status) +
> scale_shape_manual(value=c(1,16))+facet_wrap(~site,nrow=2)
>
> What I would like to do is keep the 2 row shape, keep the order (A,B,C) of
> the top plots, but have the D and E panels in this example align under the
B
> and C plots.
>
> Is this possible using qplot?
>
> Many thanks,
> Ali
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--Forwarded Message Attachment--
From: mark_difford at yahoo.co.uk
To: r-help at r-project.org
Date: Mon, 23 Aug 2010 13:08:01 -0700
Subject: Re: [R] Coinertia randtest
Hi Petar,
>> I dunno why, but I cannot get randtes[t].coinertia() from the ade4 package
>> to work. I have two nice distance matrices (Euclidean):
>> Could anyone help with this?
Yes (sort of). The test has not yet been implemented for dudi.pco, as the
message at the end of your listing tells you.
>> The result is:
> Error in randtest.coinertia(coi, nrepet = 1000) : Not yet available
So far it has only been implemented for some types of dudi.pca, and for
dudi.coa, dudi.fca, and dudi.acm. You might be lucky and find code to do
what you want on the ade4 list.
http://pbil.univ-lyon1.fr/ADE-4/home.php?lang=eng
http://listes.univ-lyon1.fr/wws/info/adelist
Regards, Mark.
--
View this message in context:
http://r.789695.n4.nabble.com/Coinertia-randtest-tp2335696p2335748.html
Sent from the R help mailing list archive at Nabble.com.
--Forwarded Message Attachment--
From: ggrothendieck at gmail.com
CC: r-help at r-project.org
To: liulei at virginia.edu
Date: Mon, 23 Aug 2010 16:16:49 -0400
Subject: Re: [R] trajectory plot (growth curve)
On Mon, Aug 23, 2010 at 3:58 PM, Lei Liu wrote:
> Hi there,
>
> I want to make trajectory plots for data as follows:
>
> ID time y
> 1 1 1.4
> 1 2 2.0
> 1 3 2.5
> 2 1.5 2.3
> 2 4 4.5
> 2 5.5 1.6
> 2 6 2.0
>
> ...
>
> That is, I will plot a growth curve for each subject ID, with y in the y
> axis, and time in the x axis. I would like to have all growth curves in the
> same plot. Is there a simple way in R to do it? Thanks a lot!
>
Try this.
Lines <- "ID time y
1 1 1.4
1 2 2.0
1 3 2.5
2 1.5 2.3
2 4 4.5
2 5.5 1.6
2 6 2.0"
library(zoo)
# z <- read.zoo("myfile.dat", header = TRUE, split = 1, index = 2)
z <- read.zoo(textConnection(Lines), header = TRUE, split = 1, index = 2)
plot(z) # each in separate panel
plot(z, col = 1:2) # all on same plot in different colors
--Forwarded Message Attachment--
From: ggrothendieck at gmail.com
CC: r-help at r-project.org
To: liulei at virginia.edu
Date: Mon, 23 Aug 2010 16:18:43 -0400
Subject: Re: [R] trajectory plot (growth curve)
On Mon, Aug 23, 2010 at 4:16 PM, Gabor Grothendieck wrote:
> On Mon, Aug 23, 2010 at 3:58 PM, Lei Liu wrote:
>> Hi there,
>>
>> I want to make trajectory plots for data as follows:
>>
>> ID time y
>> 1 1 1.4
>> 1 2 2.0
>> 1 3 2.5
>> 2 1.5 2.3
>> 2 4 4.5
>> 2 5.5 1.6
>> 2 6 2.0
>>
>> ...
>>
>> That is, I will plot a growth curve for each subject ID, with y in the
y
>> axis, and time in the x axis. I would like to have all growth curves in
the
>> same plot. Is there a simple way in R to do it? Thanks a lot!
>>
>
> Try this.
>
> Lines <- "ID time y
> 1 1 1.4
> 1 2 2.0
> 1 3 2.5
> 2 1.5 2.3
> 2 4 4.5
> 2 5.5 1.6
> 2 6 2.0"
>
> library(zoo)
>
> # z <- read.zoo("myfile.dat", header = TRUE, split = 1, index = 2)
> z <- read.zoo(textConnection(Lines), header = TRUE, split = 1, index = 2)
>
> plot(z) # each in separate panel
> plot(z, col = 1:2) # all on same plot in different colors
>
or better:
plot(na.approx(z))
plot(na.approx(z), col = 1:2)
--Forwarded Message Attachment--
From: Greg.Snow at imail.org
To: gunter.berton at gene.com; r-help at r-project.org
Date: Mon, 23 Aug 2010 14:21:36 -0600
Subject: Re: [R] Regex exercise
How about:
x <- "1 2 -5, 3- 6 4 8 5-7 10"; x
library(gsubfn)
strapply( x, '(([0-9]+) *- *([0-9]+))|([0-9]+)',
function(one,two,three,four) {
if( nchar(four)> 0 ) return(as.numeric(four) )
return( seq( from=as.numeric(two), to=as.numeric(three) ) )
}
)[[1]]
If x is a vector of strings and you remove the [[1]] then you will get a list
with each element corresponding to a string in x (unlisting will give a single
vector).
This could be easily extended to handle floating point numbers instead of just
integers, and even negative numbers (as long as you have a clear rule to
distinguish between a negative sign and the end of a range).
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.snow at imail.org
801.408.8111
> -----Original Message-----
> From: r-help-bounces at r-project.org [mailto:r-help-bounces at r-
> project.org] On Behalf Of Bert Gunter
> Sent: Friday, August 20, 2010 2:55 PM
> To: r-help at r-project.org
> Subject: [R] Regex exercise
>
> For regular expression afficianados, I'd like a cleverer solution to
> the following problem (my solution works just fine for my needs; I'm
> just trying to improve my regex skills):
>
> Given the string (entered, say, at a readline prompt):
>
> "1 2 -5, 3- 6 4 8 5-7 10" ## only integers will be entered
>
> parse it to produce the numeric vector:
>
> c(1, 2, 3, 4, 5, 3, 4, 5, 6, 8, 5, 6, 7, 10)
>
> Note that "-" in the expression is used to indicate a range of
values
> instead of ":"
>
> Here's my UNclever solution:
>
> First convert more than one space to a single space and then replace
> "-" by ":" by:
>
>> x1 <- gsub(" *- *", ":", gsub(" +", " ", resp)) #giving
>> x1
> [1] "1 2:5, 3:6 4 8 5:7 10" ## Note that the comma remains
>
> Next convert the single string into a character vector via strsplit by
> splitting on anything but ":" or a digit:
>
>> x2 <- strsplit(x1, split="[^:[:digit:]]+")[[1]] #giving
>> x2
> [1] "1"   "2:5" "3:6" "4"   "8"   "5:7" "10"
>
> Finally, parse() the vector, eval() each element, and unlist() the
> resulting list of numeric vectors:
>
>> unlist(lapply(parse(text=x2),eval)) #giving, as desired,
> [1] 1 2 3 4 5 3 4 5 6 4 8 5 6 7 10
>
>
> This seems far too clumsy and circumlocuitous not to have a more
> elegant solution from a true regex expert.
>
> (Special note to Thomas Lumley: This seems one of the few instances
> where eval(parse..)) may actually be appropriate.)
>
> Cheers to all,
>
> Bert
>
> --
> Bert Gunter
> Genentech Nonclinical Biostatistics
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-
> guide.html
> and provide commented, minimal, self-contained, reproducible code.
--Forwarded Message Attachment--
From: dwinsemius at comcast.net
CC: r-help at r-project.org
To: Thorn.Thaler at rdls.nestle.com
Date: Mon, 23 Aug 2010 16:49:34 -0400
Subject: Re: [R] Sum a list of tables
On Aug 23, 2010, at 12:39 PM, Thaler, Thorn, LAUSANNE, Applied
Mathematics wrote:
> Perfectly, works as expected. Regarding the other questions, can
> anybody point me to the right direction?
>
>
>
> So first question: where is the `+` operation defined for tables?
tables are a modification via attributes of matrices.
> Is it
> S4? How can I see the source code of S4 functions
I don't think so, at least from reading:
?Arithmetic
?groupGeneric
?InternalMethods
> (I'm not very
> comfortable with S4)? Or in general how do I find all generic
> functions
> for a specific class? I can get all S3 implementations of plot with
> methods(plot), I can get all S4 functions with getMethods(plot). But
> I've no idea of how to find all methods defined for a class? (e.g. all
> functions that operate on tables, say)
>
> Second question: my first dirty hack would be something like
>
> args <- list(...)
> table.sum <- args[[1]]
> for (i in 2:length(args)) table.sum <- table.sum + args[[i]]
>
>
> making use of the fact that `+` is defined for tables (and forgetting
> about cases where two tables don't feature the same names). It works,
> but isn't there a more elegant way to get the same?
Would sum() work better than "+"? The sum function is not a dyadic
operator.
(a <- table(rep(1:3, sample(10,3))))
k <- list(a, a, a)
> Reduce(sum, k)
[1] 51
> do.call("sum", k)
[1] 51
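[A small sketch of the difference, with made-up counts: Reduce() with "+"
keeps the table structure, whereas sum() collapses everything to one number.]
(a <- table(rep(1:3, c(5, 3, 2))))
k <- list(a, a, a)
Reduce(`+`, k)       # element-wise sum, still a table: 15 9 6
do.call("sum", k)    # a single number, 30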
>
> Last question: is there a way to determine the call stack, such that I
> can see the names of the function which are actually executed when I
> commit a command?
I know there is some feeling hereabouts that saying anything vaguely
like RTFM (or RTFA) is passé, but I do believe that you can find
either the answer or pointers in the directions of an answer to this
by reviewing the posting over this last weekend to a question about
recovering from errors. (I was not the responder, so do not remember
the subject line. Oh heck. .... search, search, search, ... there it
is ... "on abort error, always show call stack?")
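[For the archives, the usual built-in tools -- not specific to that thread --
are:]
traceback()                # call stack of the last uncaught error
options(error = recover)   # drop into a browser frame when an error occurs
sys.calls()                # the current call stack, from inside a function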
> I know a little about R's dispatching mechanism for S3
> classes (plot(a) actually calls plot.table) but I've no clue which
> function is called if I type a + a (especially since `+` belongs to
> the
> generic function group Ops and I do not know at all whether its S4 or
> S3). I read the documentation about S3 Group Generic Functions and
> tried
> to delve into S4,
Oh well. You've already been there. Maybe my advice above will help
someone else.
> but obviously I was not able to understand everything
> completely. So it would be great if somebody could help me out with
> this
> specific topic and point me to some resources where I can learn more.
>
> Thanks for your help in advance.
>
> BR,
>
> Thorn
David Winsemius, MD
West Hartford, CT
--Forwarded Message Attachment--
From: marc_schwartz at me.com
CC: r-help at r-project.org
To: eva.nordstrom at yahoo.com
Date: Mon, 23 Aug 2010 15:52:37 -0500
Subject: Re: [R] "easiest" way to write an R dataframe to excel?
On Aug 23, 2010, at 2:50 PM, Eva Nordstrom wrote:
> I am using R 2.11.1 in a Microsoft Windows 7 environment.
>
> I tried using WriteXLS, but get the message " In system(cmd) : perl
not found"
>
> What is the "easiest" way to write an R dataframe to Excel? (I
am familiar with
> WriteXLS, but I do not have PERL installed, and if not needed, do not wish
to
> install it.)
>
> I am also familiar with write.table, but if possible, wish to create an
excel
> file form within R.
>
> I'm unsure if this is possible, or perhaps i should just go ahead and
install
> PERL...?
There is a page in the R Wiki that provides an overview of your options:
http://rwiki.sciviews.org/doku.php?id=tips:data-io:ms_windows
If you just need to do this once or just with a single data frame and writing
the data frame to a CSV file and then opening the CSV file in Excel is
satisfactory from a process/time standpoint, then using ?write.table might be
the best approach.
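For that one-off case, a minimal sketch (the object name and file name are
made up):
write.csv(mydata, file = "mydata.csv", row.names = FALSE)
## then open mydata.csv in Excel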
On the other hand, if you are writing a number of data frames to a single Excel
file and perhaps need to do this with some frequency, you will want to consider
one of the other, more automated, approaches where you can create an Excel file
directly from within R.
If you wish to use WriteXLS, there is an INSTALL file that you can review online
(also in the package installation folder):
http://cran.r-project.org/web/packages/WriteXLS/INSTALL
The easiest way to install Perl on Windows, that also has the Perl modules
required for WriteXLS, is to use the ActiveState Perl distribution, available
from:
http://www.activestate.com/activeperl/
If you have problems or other questions with respect to WriteXLS, let me know.
HTH,
Marc Schwartz
--Forwarded Message Attachment--
From: msamtani at gmail.com
To: r-help at r-project.org
Date: Mon, 23 Aug 2010 17:17:09 -0400
Subject: [R] Problem with step.gam
Hello,
I am running a step.gam with 21 explanatory variables. I run into the error
"Error: cannot allocate vector of size 437.9 Mb" if the list of
explanatory variables is longer than 17. I have to comment out 4 of the
variables and can't test more than 17 variables (see below). I am wondering if
there is a workaround for this problem.
Please help,
Mahesh
gam.object <- gam(YVAR~1, data=gam.data)
step.object <-step.gam(gam.object, scope=list(
"FAC1"=~1+FAC1,
"FAC2"=~1+FAC2,
"FAC3"=~1+FAC3,
"NUM1"=~1+NUM1+ns(NUM1,df=2),
"NUM2"=~1+NUM2+ns(NUM2,df=2),
"NUM3"=~1+NUM3+ns(NUM3,df=2),
"NUM4"=~1+NUM4+ns(NUM4,df=2),
"NUM5"=~1+NUM5+ns(NUM5,df=2),
"NUM6"=~1+NUM6+ns(NUM6,df=2),
"NUM7"=~1+NUM7+ns(NUM7,df=2),
"NUM8"=~1+NUM8+ns(NUM8,df=2),
"NUM9"=~1+NUM9+ns(NUM9,df=2),
"NUM10"=~1+NUM10+ns(NUM10,df=2),
"NUM11"=~1+NUM11+ns(NUM11,df=2),
"NUM12"=~1+NUM12+ns(NUM12,df=2),
"NUM13"=~1+NUM13+ns(NUM13,df=2),
"NUM14"=~1+NUM14+ns(NUM14,df=2)
#"NUM15"=~1+NUM15+ns(NUM15,df=2),
#"NUM16"=~1+NUM16+ns(NUM16,df=2),
#"NUM17"=~1+NUM17+ns(NUM17,df=2),
#"NUM18"=~1+NUM18+ns(NUM18,df=2)
))
[[alternative HTML version deleted]]
--Forwarded Message Attachment--
From: dwinsemius at comcast.net
CC: r-help at r-project.org; eva.nordstrom at yahoo.com
To: marc_schwartz at me.com
Date: Mon, 23 Aug 2010 17:22:28 -0400
Subject: Re: [R] "easiest" way to write an R dataframe to excel?
On Aug 23, 2010, at 4:52 PM, Marc Schwartz wrote:
> On Aug 23, 2010, at 2:50 PM, Eva Nordstrom wrote:
>
>> I am using R 2.11.1 in a Microsoft Windows 7 environment.
>>
>> I tried using WriteXLS, but get the message " In system(cmd) :
perl
>> not found"
>>
> Snipped much useful information
> If you have problems or other questions with respect to WriteXLS,
> let me know.
Thanks for creating WriteXLS, Marc. Others should know that WriteXLS
has the great advantage over *.csv writing, in that it lets you make
multi-sheet XLS files. Need a vector of 16 dataframes written to 16
worksheets in one workbook ... no problem, ... worked the first time
(at least it did so once I realized I needed to follow the directions
in the help page exactly as they were written :-).
David Winsemius, MD
West Hartford, CT
--Forwarded Message Attachment--
From: kingsfordjones at gmail.com
CC: r-help at r-project.org
To: liulei at virginia.edu
Date: Mon, 23 Aug 2010 15:25:48 -0600
Subject: Re: [R] trajectory plot (growth curve)
and some more options...
dat <- structure(list(ID = structure(c(1L, 1L, 1L, 2L, 2L, 2L, 2L),
.Label = c("1", "2"), class = "factor"),
time = c(1, 2, 3, 1.5, 4, 5.5, 6),
y = c(1.4, 2, 2.5, 2.3, 4.5, 1.6, 2)),
.Names = c("ID", "time", "y"),
row.names = c(NA, -7L), class = "data.frame")
library(lattice)
xyplot(y ~ time|ID, data = dat, type = 'l')
xyplot(y ~ time, data = dat, group = ID, type = 'l')
library(ggplot2)
qplot(time, y, data = dat, facets = .~ID, geom = 'line')
qplot(time, y, data = dat, group = ID, color = ID, geom = 'line')
hth,
Kingsford Jones
On Mon, Aug 23, 2010 at 1:58 PM, Lei Liu wrote:
> Hi there,
>
> I want to make trajectory plots for data as follows:
>
> ID time y
> 1 1 1.4
> 1 2 2.0
> 1 3 2.5
> 2 1.5 2.3
> 2 4 4.5
> 2 5.5 1.6
> 2 6 2.0
>
> ...
>
> That is, I will plot a growth curve for each subject ID, with y in the y
> axis, and time in the x axis. I would like to have all growth curves in the
> same plot. Is there a simple way in R to do it? Thanks a lot!
>
> Lei Liu
> Associate Professor
> Division of Biostatistics and Epidemiology
> Department of Public Health Sciences
> University of Virginia School of Medicine
>
> http://people.virginia.edu/~ll9f/
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--Forwarded Message Attachment--
From: marc_schwartz at me.com
CC: r-help at r-project.org; eva.nordstrom at yahoo.com
To: dwinsemius at comcast.net
Date: Mon, 23 Aug 2010 16:32:28 -0500
Subject: Re: [R] "easiest" way to write an R dataframe to excel?
On Aug 23, 2010, at 4:22 PM, David Winsemius wrote:
>
> On Aug 23, 2010, at 4:52 PM, Marc Schwartz wrote:
>
>> On Aug 23, 2010, at 2:50 PM, Eva Nordstrom wrote:
>>
>>> I am using R 2.11.1 in a Microsoft Windows 7 environment.
>>>
>>> I tried using WriteXLS, but get the message " In system(cmd) :
perl not found"
>>>
>> Snipped much useful information
>
>> If you have problems or other questions with respect to WriteXLS, let
me know.
>
> Thanks for creating WriteXLS, Marc. Others should know that WriteXLS has
the great advantage over *.csv writing, in that it lets you make multi-sheet XLS
files. Need a vector of 16 dataframes written to 16 worksheets in one workbook
... no problem, ... worked the first time (at least it did so once I realized I
needed to follow the directions in the help page exactly as they were written
:-).
Thanks for your kind words David. Much appreciated.
If you have suggestions on improving the INSTALL file or help page content, let
me know. :-)
Regards,
Marc
--Forwarded Message Attachment--
From: johan.steen at gmail.com
CC: r-help at r-project.org
To: djmuser at gmail.com
Date: Mon, 23 Aug 2010 23:35:56 +0200
Subject: Re: [R] extracting p-values from Anova objects (from the car library)
Thanks for your replies,
but unfortunately none of them seem to help.
I do get p-values in the output, but can't seem to locate them anywhere
in these objects via the str() function. I also get very different
output using str() than you obtained from the lm help page.
Here's my output:
> A <- factor( rep(1:2,each=3) )
> B <- factor( rep(1:3,times=2) )
> idata <- data.frame(A,B)
> idata
A B
1 1 1
2 1 2
3 1 3
4 2 1
5 2 2
6 2 3
> fit <- lm( cbind(a1_b1,a1_b2,a1_b3,a2_b1,a2_b2,a2_b3) ~ sex, data=Data.wide)
> result <- Anova(fit, type="III", test="Wilks", idata=idata, idesign=~A*B)
> result
Type III Repeated Measures MANOVA Tests: Wilks test statistic
Df test stat approx F num Df den Df Pr(>F)
(Intercept) 1 0.02863 610.81 1 18 2.425e-15
sex 1 0.76040 5.67 1 18 0.02849
A 1 0.91390 1.70 1 18 0.20925
sex:A 1 0.99998 0.00 1 18 0.98536
B 1 0.26946 23.05 2 17 1.443e-05
sex:B 1 0.98394 0.14 2 17 0.87140
A:B 1 0.27478 22.43 2 17 1.704e-05
sex:A:B 1 0.98428 0.14 2 17 0.87397
> summary(result)
Type III Repeated Measures MANOVA Tests:
------------------------------------------
Term: (Intercept)
Response transformation matrix:
(Intercept)
a1_b1 1
a1_b2 1
a1_b3 1
a2_b1 1
a2_b2 1
a2_b3 1
Sum of squares and products for the hypothesis:
(Intercept)
(Intercept) 1169345
Sum of squares and products for error:
(Intercept)
(Intercept) 34459.4
Multivariate Tests: (Intercept)
Df test stat approx F num Df den Df Pr(>F)
Pillai 1 0.97137 610.8117 1 18 2.425e-15
Wilks 1 0.02863 610.8117 1 18 2.425e-15
Hotelling-Lawley 1 33.93399 610.8117 1 18 2.425e-15
Roy 1 33.93399 610.8117 1 18 2.425e-15
------------------------------------------
Term: sex
Response transformation matrix:
(Intercept)
a1_b1 1
a1_b2 1
a1_b3 1
a2_b1 1
a2_b2 1
a2_b3 1
Sum of squares and products for the hypothesis:
(Intercept)
(Intercept) 10857.8
Sum of squares and products for error:
(Intercept)
(Intercept) 34459.4
Multivariate Tests: sex
Df test stat approx F num Df den Df Pr(>F)
Pillai 1 0.2395956 5.671614 1 18 0.028486
Wilks 1 0.7604044 5.671614 1 18 0.028486
Hotelling-Lawley 1 0.3150896 5.671614 1 18 0.028486
Roy 1 0.3150896 5.671614 1 18 0.028486
------------------------------------------
Term: A
Response transformation matrix:
A1
a1_b1 1
a1_b2 1
a1_b3 1
a2_b1 -1
a2_b2 -1
a2_b3 -1
Sum of squares and products for the hypothesis:
A1
A1 980
Sum of squares and products for error:
A1
A1 10401.8
Multivariate Tests: A
Df test stat approx F num Df den Df Pr(>F)
Pillai 1 0.0861024 1.695860 1 18 0.20925
Wilks 1 0.9138976 1.695860 1 18 0.20925
Hotelling-Lawley 1 0.0942145 1.695860 1 18 0.20925
Roy 1 0.0942145 1.695860 1 18 0.20925
------------------------------------------
Term: sex:A
Response transformation matrix:
A1
a1_b1 1
a1_b2 1
a1_b3 1
a2_b1 -1
a2_b2 -1
a2_b3 -1
Sum of squares and products for the hypothesis:
A1
A1 0.2
Sum of squares and products for error:
A1
A1 10401.8
Multivariate Tests: sex:A
Df test stat approx F num Df den Df Pr(>F)
Pillai 1 0.0000192 0.0003460939 1 18 0.98536
Wilks 1 0.9999808 0.0003460939 1 18 0.98536
Hotelling-Lawley 1 0.0000192 0.0003460939 1 18 0.98536
Roy 1 0.0000192 0.0003460939 1 18 0.98536
------------------------------------------
Term: B
Response transformation matrix:
B1 B2
a1_b1 1 0
a1_b2 0 1
a1_b3 -1 -1
a2_b1 1 0
a2_b2 0 1
a2_b3 -1 -1
Sum of squares and products for the hypothesis:
B1 B2
B1 3618.05 3443.2
B2 3443.20 3276.8
Sum of squares and products for error:
B1 B2
B1 2304.5 1396.8
B2 1396.8 1225.2
Multivariate Tests: B
Df test stat approx F num Df den Df Pr(>F)
Pillai 1 0.730544 23.04504 2 17 1.4426e-05
Wilks 1 0.269456 23.04504 2 17 1.4426e-05
Hotelling-Lawley 1 2.711181 23.04504 2 17 1.4426e-05
Roy 1 2.711181 23.04504 2 17 1.4426e-05
------------------------------------------
Term: sex:B
Response transformation matrix:
B1 B2
a1_b1 1 0
a1_b2 0 1
a1_b3 -1 -1
a2_b1 1 0
a2_b2 0 1
a2_b3 -1 -1
Sum of squares and products for the hypothesis:
B1 B2
B1 26.45 23
B2 23.00 20
Sum of squares and products for error:
B1 B2
B1 2304.5 1396.8
B2 1396.8 1225.2
Multivariate Tests: sex:B
Df test stat approx F num Df den Df Pr(>F)
Pillai 1 0.0160644 0.1387764 2 17 0.8714
Wilks 1 0.9839356 0.1387764 2 17 0.8714
Hotelling-Lawley 1 0.0163266 0.1387764 2 17 0.8714
Roy 1 0.0163266 0.1387764 2 17 0.8714
------------------------------------------
Term: A:B
Response transformation matrix:
A1:B1 A1:B2
a1_b1 1 0
a1_b2 0 1
a1_b3 -1 -1
a2_b1 -1 0
a2_b2 0 -1
a2_b3 1 1
Sum of squares and products for the hypothesis:
A1:B1 A1:B2
A1:B1 5152.05 738.3
A1:B2 738.30 105.8
Sum of squares and products for error:
A1:B1 A1:B2
A1:B1 3210.5 1334.4
A1:B2 1334.4 924.0
Multivariate Tests: A:B
Df test stat approx F num Df den Df Pr(>F)
Pillai 1 0.7252156 22.43334 2 17 1.7039e-05
Wilks 1 0.2747844 22.43334 2 17 1.7039e-05
Hotelling-Lawley 1 2.6392162 22.43334 2 17 1.7039e-05
Roy 1 2.6392162 22.43334 2 17 1.7039e-05
------------------------------------------
Term: sex:A:B
Response transformation matrix:
A1:B1 A1:B2
a1_b1 1 0
a1_b2 0 1
a1_b3 -1 -1
a2_b1 -1 0
a2_b2 0 -1
a2_b3 1 1
Sum of squares and products for the hypothesis:
A1:B1 A1:B2
A1:B1 26.45 2.3
A1:B2 2.30 0.2
Sum of squares and products for error:
A1:B1 A1:B2
A1:B1 3210.5 1334.4
A1:B2 1334.4 924.0
Multivariate Tests: sex:A:B
Df test stat approx F num Df den Df Pr(>F)
Pillai 1 0.0157232 0.1357821 2 17 0.87397
Wilks 1 0.9842768 0.1357821 2 17 0.87397
Hotelling-Lawley 1 0.0159744 0.1357821 2 17 0.87397
Roy 1 0.0159744 0.1357821 2 17 0.87397
Univariate Type III Repeated-Measures ANOVA Assuming Sphericity
SS num Df Error SS den Df F Pr(>F)
(Intercept) 194891 1 5743.2 18 610.8117 2.425e-15
sex 1810 1 5743.2 18 5.6716 0.02849
A 163 1 1733.6 18 1.6959 0.20925
sex:A 0 1 1733.6 18 0.0003 0.98536
B 1151 2 711.0 36 29.1292 2.990e-08
sex:B 8 2 711.0 36 0.1979 0.82134
A:B 1507 2 933.4 36 29.0532 3.078e-08
sex:A:B 8 2 933.4 36 0.1565 0.85568
Mauchly Tests for Sphericity
Test statistic p-value
B 0.57532 0.0091036
sex:B 0.57532 0.0091036
A:B 0.45375 0.0012104
sex:A:B 0.45375 0.0012104
Greenhouse-Geisser and Huynh-Feldt Corrections
for Departure from Sphericity
GG eps Pr(>F[GG])
B 0.70191 2.143e-06
sex:B 0.70191 0.7427
A:B 0.64672 4.838e-06
sex:A:B 0.64672 0.7599
HF eps Pr(>F[HF])
B 0.74332 1.181e-06
sex:B 0.74332 0.7560
A:B 0.67565 3.191e-06
sex:A:B 0.67565 0.7702
> str(result)
List of 13
$ SSP :List of 8
..$ (Intercept): num [1, 1] 1169345
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr "(Intercept)"
.. .. ..$ : chr "(Intercept)"
..$ sex : num [1, 1] 10858
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr "(Intercept)"
.. .. ..$ : chr "(Intercept)"
..$ A : num [1, 1] 980
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr "A1"
.. .. ..$ : chr "A1"
..$ sex:A : num [1, 1] 0.2
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr "A1"
.. .. ..$ : chr "A1"
..$ B : num [1:2, 1:2] 3618 3443 3443 3277
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:2] "B1" "B2"
.. .. ..$ : chr [1:2] "B1" "B2"
..$ sex:B : num [1:2, 1:2] 26.4 23 23 20
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:2] "B1" "B2"
.. .. ..$ : chr [1:2] "B1" "B2"
..$ A:B : num [1:2, 1:2] 5152 738 738 106
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
.. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
..$ sex:A:B : num [1:2, 1:2] 26.4 2.3 2.3 0.2
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
.. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
$ SSPE :List of 8
..$ (Intercept): num [1, 1] 34459
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr "(Intercept)"
.. .. ..$ : chr "(Intercept)"
..$ sex : num [1, 1] 34459
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr "(Intercept)"
.. .. ..$ : chr "(Intercept)"
..$ A : num [1, 1] 10402
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr "A1"
.. .. ..$ : chr "A1"
..$ sex:A : num [1, 1] 10402
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr "A1"
.. .. ..$ : chr "A1"
..$ B : num [1:2, 1:2] 2304 1397 1397 1225
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:2] "B1" "B2"
.. .. ..$ : chr [1:2] "B1" "B2"
..$ sex:B : num [1:2, 1:2] 2304 1397 1397 1225
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:2] "B1" "B2"
.. .. ..$ : chr [1:2] "B1" "B2"
..$ A:B : num [1:2, 1:2] 3210 1334 1334 924
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
.. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
..$ sex:A:B : num [1:2, 1:2] 3210 1334 1334 924
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
.. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
$ P :List of 8
..$ (Intercept): num [1:6, 1] 1 1 1 1 1 1
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:6] "a1_b1" "a1_b2" "a1_b3"
"a2_b1" ...
.. .. ..$ : chr "(Intercept)"
..$ sex : num [1:6, 1] 1 1 1 1 1 1
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:6] "a1_b1" "a1_b2" "a1_b3"
"a2_b1" ...
.. .. ..$ : chr "(Intercept)"
..$ A : num [1:6, 1] 1 1 1 -1 -1 -1
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:6] "a1_b1" "a1_b2" "a1_b3"
"a2_b1" ...
.. .. ..$ : chr "A1"
..$ sex:A : num [1:6, 1] 1 1 1 -1 -1 -1
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:6] "a1_b1" "a1_b2" "a1_b3"
"a2_b1" ...
.. .. ..$ : chr "A1"
..$ B : num [1:6, 1:2] 1 0 -1 1 0 -1 0 1 -1 0 ...
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:6] "a1_b1" "a1_b2" "a1_b3"
"a2_b1" ...
.. .. ..$ : chr [1:2] "B1" "B2"
..$ sex:B : num [1:6, 1:2] 1 0 -1 1 0 -1 0 1 -1 0 ...
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:6] "a1_b1" "a1_b2" "a1_b3"
"a2_b1" ...
.. .. ..$ : chr [1:2] "B1" "B2"
..$ A:B : num [1:6, 1:2] 1 0 -1 -1 0 1 0 1 -1 0 ...
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:6] "a1_b1" "a1_b2" "a1_b3"
"a2_b1" ...
.. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
..$ sex:A:B : num [1:6, 1:2] 1 0 -1 -1 0 1 0 1 -1 0 ...
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:6] "a1_b1" "a1_b2" "a1_b3"
"a2_b1" ...
.. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
$ df : Named num [1:8] 1 1 1 1 1 1 1 1
..- attr(*, "names")= chr [1:8] "(Intercept)"
"sex" "A" "sex:A" ...
$ error.df : int 18
$ terms : chr [1:8] "(Intercept)" "sex" "A"
"sex:A" ...
$ repeated : logi TRUE
$ type : chr "III"
$ test : chr "Wilks"
$ idata :'data.frame': 6 obs. of 2 variables:
..$ A: Factor w/ 2 levels "1","2": 1 1 1 2 2 2
.. ..- attr(*, "contrasts")= chr "contr.sum"
..$ B: Factor w/ 3 levels "1","2","3": 1 2 3 1
2 3
.. ..- attr(*, "contrasts")= chr "contr.sum"
$ idesign :Class 'formula' length 2 ~A * B
.. ..- attr(*, ".Environment")=<environment>
$ icontrasts: chr [1:2] "contr.sum" "contr.poly"
$ imatrix : NULL
- attr(*, "class")= chr "Anova.mlm"
> str(summary(result))
Type III Repeated Measures MANOVA Tests:
------------------------------------------
Term: (Intercept)
Response transformation matrix:
(Intercept)
a1_b1 1
a1_b2 1
a1_b3 1
a2_b1 1
a2_b2 1
a2_b3 1
Sum of squares and products for the hypothesis:
(Intercept)
(Intercept) 1169345
Sum of squares and products for error:
(Intercept)
(Intercept) 34459.4
Multivariate Tests: (Intercept)
Df test stat approx F num Df den Df Pr(>F)
Pillai 1 0.97137 610.8117 1 18 2.425e-15
Wilks 1 0.02863 610.8117 1 18 2.425e-15
Hotelling-Lawley 1 33.93399 610.8117 1 18 2.425e-15
Roy 1 33.93399 610.8117 1 18 2.425e-15
------------------------------------------
Term: sex
Response transformation matrix:
(Intercept)
a1_b1 1
a1_b2 1
a1_b3 1
a2_b1 1
a2_b2 1
a2_b3 1
Sum of squares and products for the hypothesis:
(Intercept)
(Intercept) 10857.8
Sum of squares and products for error:
(Intercept)
(Intercept) 34459.4
Multivariate Tests: sex
Df test stat approx F num Df den Df Pr(>F)
Pillai 1 0.2395956 5.671614 1 18 0.028486
Wilks 1 0.7604044 5.671614 1 18 0.028486
Hotelling-Lawley 1 0.3150896 5.671614 1 18 0.028486
Roy 1 0.3150896 5.671614 1 18 0.028486
------------------------------------------
Term: A
Response transformation matrix:
A1
a1_b1 1
a1_b2 1
a1_b3 1
a2_b1 -1
a2_b2 -1
a2_b3 -1
Sum of squares and products for the hypothesis:
A1
A1 980
Sum of squares and products for error:
A1
A1 10401.8
Multivariate Tests: A
Df test stat approx F num Df den Df Pr(>F)
Pillai 1 0.0861024 1.695860 1 18 0.20925
Wilks 1 0.9138976 1.695860 1 18 0.20925
Hotelling-Lawley 1 0.0942145 1.695860 1 18 0.20925
Roy 1 0.0942145 1.695860 1 18 0.20925
------------------------------------------
Term: sex:A
Response transformation matrix:
A1
a1_b1 1
a1_b2 1
a1_b3 1
a2_b1 -1
a2_b2 -1
a2_b3 -1
Sum of squares and products for the hypothesis:
A1
A1 0.2
Sum of squares and products for error:
A1
A1 10401.8
Multivariate Tests: sex:A
Df test stat approx F num Df den Df Pr(>F)
Pillai 1 0.0000192 0.0003460939 1 18 0.98536
Wilks 1 0.9999808 0.0003460939 1 18 0.98536
Hotelling-Lawley 1 0.0000192 0.0003460939 1 18 0.98536
Roy 1 0.0000192 0.0003460939 1 18 0.98536
------------------------------------------
Term: B
Response transformation matrix:
B1 B2
a1_b1 1 0
a1_b2 0 1
a1_b3 -1 -1
a2_b1 1 0
a2_b2 0 1
a2_b3 -1 -1
Sum of squares and products for the hypothesis:
B1 B2
B1 3618.05 3443.2
B2 3443.20 3276.8
Sum of squares and products for error:
B1 B2
B1 2304.5 1396.8
B2 1396.8 1225.2
Multivariate Tests: B
Df test stat approx F num Df den Df Pr(>F)
Pillai 1 0.730544 23.04504 2 17 1.4426e-05
Wilks 1 0.269456 23.04504 2 17 1.4426e-05
Hotelling-Lawley 1 2.711181 23.04504 2 17 1.4426e-05
Roy 1 2.711181 23.04504 2 17 1.4426e-05
------------------------------------------
Term: sex:B
Response transformation matrix:
B1 B2
a1_b1 1 0
a1_b2 0 1
a1_b3 -1 -1
a2_b1 1 0
a2_b2 0 1
a2_b3 -1 -1
Sum of squares and products for the hypothesis:
B1 B2
B1 26.45 23
B2 23.00 20
Sum of squares and products for error:
B1 B2
B1 2304.5 1396.8
B2 1396.8 1225.2
Multivariate Tests: sex:B
Df test stat approx F num Df den Df Pr(>F)
Pillai 1 0.0160644 0.1387764 2 17 0.8714
Wilks 1 0.9839356 0.1387764 2 17 0.8714
Hotelling-Lawley 1 0.0163266 0.1387764 2 17 0.8714
Roy 1 0.0163266 0.1387764 2 17 0.8714
------------------------------------------
Term: A:B
Response transformation matrix:
A1:B1 A1:B2
a1_b1 1 0
a1_b2 0 1
a1_b3 -1 -1
a2_b1 -1 0
a2_b2 0 -1
a2_b3 1 1
Sum of squares and products for the hypothesis:
A1:B1 A1:B2
A1:B1 5152.05 738.3
A1:B2 738.30 105.8
Sum of squares and products for error:
A1:B1 A1:B2
A1:B1 3210.5 1334.4
A1:B2 1334.4 924.0
Multivariate Tests: A:B
Df test stat approx F num Df den Df Pr(>F)
Pillai 1 0.7252156 22.43334 2 17 1.7039e-05
Wilks 1 0.2747844 22.43334 2 17 1.7039e-05
Hotelling-Lawley 1 2.6392162 22.43334 2 17 1.7039e-05
Roy 1 2.6392162 22.43334 2 17 1.7039e-05
------------------------------------------
Term: sex:A:B
Response transformation matrix:
A1:B1 A1:B2
a1_b1 1 0
a1_b2 0 1
a1_b3 -1 -1
a2_b1 -1 0
a2_b2 0 -1
a2_b3 1 1
Sum of squares and products for the hypothesis:
A1:B1 A1:B2
A1:B1 26.45 2.3
A1:B2 2.30 0.2
Sum of squares and products for error:
A1:B1 A1:B2
A1:B1 3210.5 1334.4
A1:B2 1334.4 924.0
Multivariate Tests: sex:A:B
Df test stat approx F num Df den Df Pr(>F)
Pillai 1 0.0157232 0.1357821 2 17 0.87397
Wilks 1 0.9842768 0.1357821 2 17 0.87397
Hotelling-Lawley 1 0.0159744 0.1357821 2 17 0.87397
Roy 1 0.0159744 0.1357821 2 17 0.87397
Univariate Type III Repeated-Measures ANOVA Assuming Sphericity
SS num Df Error SS den Df F Pr(>F)
(Intercept) 194891 1 5743.2 18 610.8117 2.425e-15
sex 1810 1 5743.2 18 5.6716 0.02849
A 163 1 1733.6 18 1.6959 0.20925
sex:A 0 1 1733.6 18 0.0003 0.98536
B 1151 2 711.0 36 29.1292 2.990e-08
sex:B 8 2 711.0 36 0.1979 0.82134
A:B 1507 2 933.4 36 29.0532 3.078e-08
sex:A:B 8 2 933.4 36 0.1565 0.85568
Mauchly Tests for Sphericity
Test statistic p-value
B 0.57532 0.0091036
sex:B 0.57532 0.0091036
A:B 0.45375 0.0012104
sex:A:B 0.45375 0.0012104
Greenhouse-Geisser and Huynh-Feldt Corrections
for Departure from Sphericity
GG eps Pr(>F[GG])
B 0.70191 2.143e-06
sex:B 0.70191 0.7427
A:B 0.64672 4.838e-06
sex:A:B 0.64672 0.7599
HF eps Pr(>F[HF])
B 0.74332 1.181e-06
sex:B 0.74332 0.7560
A:B 0.67565 3.191e-06
sex:A:B 0.67565 0.7702
List of 13
$ SSP :List of 8
..$ (Intercept): num [1, 1] 1169345
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr "(Intercept)"
.. .. ..$ : chr "(Intercept)"
..$ sex : num [1, 1] 10858
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr "(Intercept)"
.. .. ..$ : chr "(Intercept)"
..$ A : num [1, 1] 980
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr "A1"
.. .. ..$ : chr "A1"
..$ sex:A : num [1, 1] 0.2
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr "A1"
.. .. ..$ : chr "A1"
..$ B : num [1:2, 1:2] 3618 3443 3443 3277
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:2] "B1" "B2"
.. .. ..$ : chr [1:2] "B1" "B2"
..$ sex:B : num [1:2, 1:2] 26.4 23 23 20
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:2] "B1" "B2"
.. .. ..$ : chr [1:2] "B1" "B2"
..$ A:B : num [1:2, 1:2] 5152 738 738 106
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
.. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
..$ sex:A:B : num [1:2, 1:2] 26.4 2.3 2.3 0.2
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
.. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
$ SSPE :List of 8
..$ (Intercept): num [1, 1] 34459
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr "(Intercept)"
.. .. ..$ : chr "(Intercept)"
..$ sex : num [1, 1] 34459
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr "(Intercept)"
.. .. ..$ : chr "(Intercept)"
..$ A : num [1, 1] 10402
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr "A1"
.. .. ..$ : chr "A1"
..$ sex:A : num [1, 1] 10402
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr "A1"
.. .. ..$ : chr "A1"
..$ B : num [1:2, 1:2] 2304 1397 1397 1225
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:2] "B1" "B2"
.. .. ..$ : chr [1:2] "B1" "B2"
..$ sex:B : num [1:2, 1:2] 2304 1397 1397 1225
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:2] "B1" "B2"
.. .. ..$ : chr [1:2] "B1" "B2"
..$ A:B : num [1:2, 1:2] 3210 1334 1334 924
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
.. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
..$ sex:A:B : num [1:2, 1:2] 3210 1334 1334 924
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
.. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
$ P :List of 8
..$ (Intercept): num [1:6, 1] 1 1 1 1 1 1
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:6] "a1_b1" "a1_b2" "a1_b3"
"a2_b1" ...
.. .. ..$ : chr "(Intercept)"
..$ sex : num [1:6, 1] 1 1 1 1 1 1
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:6] "a1_b1" "a1_b2" "a1_b3"
"a2_b1" ...
.. .. ..$ : chr "(Intercept)"
..$ A : num [1:6, 1] 1 1 1 -1 -1 -1
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:6] "a1_b1" "a1_b2" "a1_b3"
"a2_b1" ...
.. .. ..$ : chr "A1"
..$ sex:A : num [1:6, 1] 1 1 1 -1 -1 -1
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:6] "a1_b1" "a1_b2" "a1_b3"
"a2_b1" ...
.. .. ..$ : chr "A1"
..$ B : num [1:6, 1:2] 1 0 -1 1 0 -1 0 1 -1 0 ...
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:6] "a1_b1" "a1_b2" "a1_b3"
"a2_b1" ...
.. .. ..$ : chr [1:2] "B1" "B2"
..$ sex:B : num [1:6, 1:2] 1 0 -1 1 0 -1 0 1 -1 0 ...
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:6] "a1_b1" "a1_b2" "a1_b3"
"a2_b1" ...
.. .. ..$ : chr [1:2] "B1" "B2"
..$ A:B : num [1:6, 1:2] 1 0 -1 -1 0 1 0 1 -1 0 ...
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:6] "a1_b1" "a1_b2" "a1_b3"
"a2_b1" ...
.. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
..$ sex:A:B : num [1:6, 1:2] 1 0 -1 -1 0 1 0 1 -1 0 ...
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr [1:6] "a1_b1" "a1_b2" "a1_b3"
"a2_b1" ...
.. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
$ df : Named num [1:8] 1 1 1 1 1 1 1 1
..- attr(*, "names")= chr [1:8] "(Intercept)"
"sex" "A" "sex:A" ...
$ error.df : int 18
$ terms : chr [1:8] "(Intercept)" "sex" "A"
"sex:A" ...
$ repeated : logi TRUE
$ type : chr "III"
$ test : chr "Wilks"
$ idata :'data.frame': 6 obs. of 2 variables:
..$ A: Factor w/ 2 levels "1","2": 1 1 1 2 2 2
.. ..- attr(*, "contrasts")= chr "contr.sum"
..$ B: Factor w/ 3 levels "1","2","3": 1 2 3 1
2 3
.. ..- attr(*, "contrasts")= chr "contr.sum"
$ idesign :Class 'formula' length 2 ~A * B
.. ..- attr(*, ".Environment")=<environment>
$ icontrasts: chr [1:2] "contr.sum" "contr.poly"
$ imatrix : NULL
- attr(*, "class")= chr "Anova.mlm"
> result$`Pr(>F)`
NULL
> result[[4]]
(Intercept)         sex           A       sex:A           B       sex:B
          1           1           1           1           1           1
        A:B     sex:A:B
          1           1
On 23/08/2010 22:23, Johan Steen wrote:
> Thanks for your replies,
>
> but unfortunately none of them seem to help.
> I do get p-values in the output, but can't seem to locate them anywhere
> in these objects via the str() function. I also get very different
> output using str() than you obtained from the lm help page
>
> Here's my output:
>
> > A <- factor( rep(1:2,each=3) )
> > B <- factor( rep(1:3,times=2) )
> > idata <- data.frame(A,B)
> > idata
> A B
> 1 1 1
> 2 1 2
> 3 1 3
> 4 2 1
> 5 2 2
> 6 2 3
> >
> > fit <- lm( cbind(a1_b1,a1_b2,a1_b3,a2_b1,a2_b2,a2_b3) ~ sex,
> data=Data.wide)
> > result <- Anova(fit, type="III", test="Wilks",
idata=idata,
> idesign=~A*B)
> > result
>
> Type III Repeated Measures MANOVA Tests: Wilks test statistic
> Df test stat approx F num Df den Df Pr(>F)
> (Intercept) 1 0.02863 610.81 1 18 2.425e-15
> sex 1 0.76040 5.67 1 18 0.02849
> A 1 0.91390 1.70 1 18 0.20925
> sex:A 1 0.99998 0.00 1 18 0.98536
> B 1 0.26946 23.05 2 17 1.443e-05
> sex:B 1 0.98394 0.14 2 17 0.87140
> A:B 1 0.27478 22.43 2 17 1.704e-05
> sex:A:B 1 0.98428 0.14 2 17 0.87397
> > summary(result)
>
> Type III Repeated Measures MANOVA Tests:
>
> ------------------------------------------
>
> Term: (Intercept)
>
> Response transformation matrix:
> (Intercept)
> a1_b1 1
> a1_b2 1
> a1_b3 1
> a2_b1 1
> a2_b2 1
> a2_b3 1
>
> Sum of squares and products for the hypothesis:
> (Intercept)
> (Intercept) 1169345
>
> Sum of squares and products for error:
> (Intercept)
> (Intercept) 34459.4
>
> Multivariate Tests: (Intercept)
> Df test stat approx F num Df den Df Pr(>F)
> Pillai 1 0.97137 610.8117 1 18 2.425e-15
> Wilks 1 0.02863 610.8117 1 18 2.425e-15
> Hotelling-Lawley 1 33.93399 610.8117 1 18 2.425e-15
> Roy 1 33.93399 610.8117 1 18 2.425e-15
>
> ------------------------------------------
>
> Term: sex
>
> Response transformation matrix:
> (Intercept)
> a1_b1 1
> a1_b2 1
> a1_b3 1
> a2_b1 1
> a2_b2 1
> a2_b3 1
>
> Sum of squares and products for the hypothesis:
> (Intercept)
> (Intercept) 10857.8
>
> Sum of squares and products for error:
> (Intercept)
> (Intercept) 34459.4
>
> Multivariate Tests: sex
> Df test stat approx F num Df den Df Pr(>F)
> Pillai 1 0.2395956 5.671614 1 18 0.028486
> Wilks 1 0.7604044 5.671614 1 18 0.028486
> Hotelling-Lawley 1 0.3150896 5.671614 1 18 0.028486
> Roy 1 0.3150896 5.671614 1 18 0.028486
>
> ------------------------------------------
>
> Term: A
>
> Response transformation matrix:
> A1
> a1_b1 1
> a1_b2 1
> a1_b3 1
> a2_b1 -1
> a2_b2 -1
> a2_b3 -1
>
> Sum of squares and products for the hypothesis:
> A1
> A1 980
>
> Sum of squares and products for error:
> A1
> A1 10401.8
>
> Multivariate Tests: A
> Df test stat approx F num Df den Df Pr(>F)
> Pillai 1 0.0861024 1.695860 1 18 0.20925
> Wilks 1 0.9138976 1.695860 1 18 0.20925
> Hotelling-Lawley 1 0.0942145 1.695860 1 18 0.20925
> Roy 1 0.0942145 1.695860 1 18 0.20925
>
> ------------------------------------------
>
> Term: sex:A
>
> Response transformation matrix:
> A1
> a1_b1 1
> a1_b2 1
> a1_b3 1
> a2_b1 -1
> a2_b2 -1
> a2_b3 -1
>
> Sum of squares and products for the hypothesis:
> A1
> A1 0.2
>
> Sum of squares and products for error:
> A1
> A1 10401.8
>
> Multivariate Tests: sex:A
> Df test stat approx F num Df den Df Pr(>F)
> Pillai 1 0.0000192 0.0003460939 1 18 0.98536
> Wilks 1 0.9999808 0.0003460939 1 18 0.98536
> Hotelling-Lawley 1 0.0000192 0.0003460939 1 18 0.98536
> Roy 1 0.0000192 0.0003460939 1 18 0.98536
>
> ------------------------------------------
>
> Term: B
>
> Response transformation matrix:
> B1 B2
> a1_b1 1 0
> a1_b2 0 1
> a1_b3 -1 -1
> a2_b1 1 0
> a2_b2 0 1
> a2_b3 -1 -1
>
> Sum of squares and products for the hypothesis:
> B1 B2
> B1 3618.05 3443.2
> B2 3443.20 3276.8
>
> Sum of squares and products for error:
> B1 B2
> B1 2304.5 1396.8
> B2 1396.8 1225.2
>
> Multivariate Tests: B
> Df test stat approx F num Df den Df Pr(>F)
> Pillai 1 0.730544 23.04504 2 17 1.4426e-05
> Wilks 1 0.269456 23.04504 2 17 1.4426e-05
> Hotelling-Lawley 1 2.711181 23.04504 2 17 1.4426e-05
> Roy 1 2.711181 23.04504 2 17 1.4426e-05
>
> ------------------------------------------
>
> Term: sex:B
>
> Response transformation matrix:
> B1 B2
> a1_b1 1 0
> a1_b2 0 1
> a1_b3 -1 -1
> a2_b1 1 0
> a2_b2 0 1
> a2_b3 -1 -1
>
> Sum of squares and products for the hypothesis:
> B1 B2
> B1 26.45 23
> B2 23.00 20
>
> Sum of squares and products for error:
> B1 B2
> B1 2304.5 1396.8
> B2 1396.8 1225.2
>
> Multivariate Tests: sex:B
> Df test stat approx F num Df den Df Pr(>F)
> Pillai 1 0.0160644 0.1387764 2 17 0.8714
> Wilks 1 0.9839356 0.1387764 2 17 0.8714
> Hotelling-Lawley 1 0.0163266 0.1387764 2 17 0.8714
> Roy 1 0.0163266 0.1387764 2 17 0.8714
>
> ------------------------------------------
>
> Term: A:B
>
> Response transformation matrix:
> A1:B1 A1:B2
> a1_b1 1 0
> a1_b2 0 1
> a1_b3 -1 -1
> a2_b1 -1 0
> a2_b2 0 -1
> a2_b3 1 1
>
> Sum of squares and products for the hypothesis:
> A1:B1 A1:B2
> A1:B1 5152.05 738.3
> A1:B2 738.30 105.8
>
> Sum of squares and products for error:
> A1:B1 A1:B2
> A1:B1 3210.5 1334.4
> A1:B2 1334.4 924.0
>
> Multivariate Tests: A:B
> Df test stat approx F num Df den Df Pr(>F)
> Pillai 1 0.7252156 22.43334 2 17 1.7039e-05
> Wilks 1 0.2747844 22.43334 2 17 1.7039e-05
> Hotelling-Lawley 1 2.6392162 22.43334 2 17 1.7039e-05
> Roy 1 2.6392162 22.43334 2 17 1.7039e-05
>
> ------------------------------------------
>
> Term: sex:A:B
>
> Response transformation matrix:
> A1:B1 A1:B2
> a1_b1 1 0
> a1_b2 0 1
> a1_b3 -1 -1
> a2_b1 -1 0
> a2_b2 0 -1
> a2_b3 1 1
>
> Sum of squares and products for the hypothesis:
> A1:B1 A1:B2
> A1:B1 26.45 2.3
> A1:B2 2.30 0.2
>
> Sum of squares and products for error:
> A1:B1 A1:B2
> A1:B1 3210.5 1334.4
> A1:B2 1334.4 924.0
>
> Multivariate Tests: sex:A:B
> Df test stat approx F num Df den Df Pr(>F)
> Pillai 1 0.0157232 0.1357821 2 17 0.87397
> Wilks 1 0.9842768 0.1357821 2 17 0.87397
> Hotelling-Lawley 1 0.0159744 0.1357821 2 17 0.87397
> Roy 1 0.0159744 0.1357821 2 17 0.87397
>
> Univariate Type III Repeated-Measures ANOVA Assuming Sphericity
>
> SS num Df Error SS den Df F Pr(>F)
> (Intercept) 194891 1 5743.2 18 610.8117 2.425e-15
> sex 1810 1 5743.2 18 5.6716 0.02849
> A 163 1 1733.6 18 1.6959 0.20925
> sex:A 0 1 1733.6 18 0.0003 0.98536
> B 1151 2 711.0 36 29.1292 2.990e-08
> sex:B 8 2 711.0 36 0.1979 0.82134
> A:B 1507 2 933.4 36 29.0532 3.078e-08
> sex:A:B 8 2 933.4 36 0.1565 0.85568
>
>
> Mauchly Tests for Sphericity
>
> Test statistic p-value
> B 0.57532 0.0091036
> sex:B 0.57532 0.0091036
> A:B 0.45375 0.0012104
> sex:A:B 0.45375 0.0012104
>
>
> Greenhouse-Geisser and Huynh-Feldt Corrections
> for Departure from Sphericity
>
> GG eps Pr(>F[GG])
> B 0.70191 2.143e-06
> sex:B 0.70191 0.7427
> A:B 0.64672 4.838e-06
> sex:A:B 0.64672 0.7599
>
> HF eps Pr(>F[HF])
> B 0.74332 1.181e-06
> sex:B 0.74332 0.7560
> A:B 0.67565 3.191e-06
> sex:A:B 0.67565 0.7702
> > str(result)
> List of 13
> $ SSP :List of 8
> ..$ (Intercept): num [1, 1] 1169345
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr "(Intercept)"
> .. .. ..$ : chr "(Intercept)"
> ..$ sex : num [1, 1] 10858
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr "(Intercept)"
> .. .. ..$ : chr "(Intercept)"
> ..$ A : num [1, 1] 980
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr "A1"
> .. .. ..$ : chr "A1"
> ..$ sex:A : num [1, 1] 0.2
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr "A1"
> .. .. ..$ : chr "A1"
> ..$ B : num [1:2, 1:2] 3618 3443 3443 3277
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:2] "B1" "B2"
> .. .. ..$ : chr [1:2] "B1" "B2"
> ..$ sex:B : num [1:2, 1:2] 26.4 23 23 20
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:2] "B1" "B2"
> .. .. ..$ : chr [1:2] "B1" "B2"
> ..$ A:B : num [1:2, 1:2] 5152 738 738 106
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> ..$ sex:A:B : num [1:2, 1:2] 26.4 2.3 2.3 0.2
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> $ SSPE :List of 8
> ..$ (Intercept): num [1, 1] 34459
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr "(Intercept)"
> .. .. ..$ : chr "(Intercept)"
> ..$ sex : num [1, 1] 34459
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr "(Intercept)"
> .. .. ..$ : chr "(Intercept)"
> ..$ A : num [1, 1] 10402
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr "A1"
> .. .. ..$ : chr "A1"
> ..$ sex:A : num [1, 1] 10402
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr "A1"
> .. .. ..$ : chr "A1"
> ..$ B : num [1:2, 1:2] 2304 1397 1397 1225
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:2] "B1" "B2"
> .. .. ..$ : chr [1:2] "B1" "B2"
> ..$ sex:B : num [1:2, 1:2] 2304 1397 1397 1225
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:2] "B1" "B2"
> .. .. ..$ : chr [1:2] "B1" "B2"
> ..$ A:B : num [1:2, 1:2] 3210 1334 1334 924
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> ..$ sex:A:B : num [1:2, 1:2] 3210 1334 1334 924
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> $ P :List of 8
> ..$ (Intercept): num [1:6, 1] 1 1 1 1 1 1
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2" "a1_b3"
"a2_b1" ...
> .. .. ..$ : chr "(Intercept)"
> ..$ sex : num [1:6, 1] 1 1 1 1 1 1
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2" "a1_b3"
"a2_b1" ...
> .. .. ..$ : chr "(Intercept)"
> ..$ A : num [1:6, 1] 1 1 1 -1 -1 -1
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2" "a1_b3"
"a2_b1" ...
> .. .. ..$ : chr "A1"
> ..$ sex:A : num [1:6, 1] 1 1 1 -1 -1 -1
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2" "a1_b3"
"a2_b1" ...
> .. .. ..$ : chr "A1"
> ..$ B : num [1:6, 1:2] 1 0 -1 1 0 -1 0 1 -1 0 ...
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2" "a1_b3"
"a2_b1" ...
> .. .. ..$ : chr [1:2] "B1" "B2"
> ..$ sex:B : num [1:6, 1:2] 1 0 -1 1 0 -1 0 1 -1 0 ...
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2" "a1_b3"
"a2_b1" ...
> .. .. ..$ : chr [1:2] "B1" "B2"
> ..$ A:B : num [1:6, 1:2] 1 0 -1 -1 0 1 0 1 -1 0 ...
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2" "a1_b3"
"a2_b1" ...
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> ..$ sex:A:B : num [1:6, 1:2] 1 0 -1 -1 0 1 0 1 -1 0 ...
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2" "a1_b3"
"a2_b1" ...
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> $ df : Named num [1:8] 1 1 1 1 1 1 1 1
> ..- attr(*, "names")= chr [1:8] "(Intercept)"
"sex" "A" "sex:A" ...
> $ error.df : int 18
> $ terms : chr [1:8] "(Intercept)" "sex" "A"
"sex:A" ...
> $ repeated : logi TRUE
> $ type : chr "III"
> $ test : chr "Wilks"
> $ idata :'data.frame': 6 obs. of 2 variables:
> ..$ A: Factor w/ 2 levels "1","2": 1 1 1 2 2 2
> .. ..- attr(*, "contrasts")= chr "contr.sum"
> ..$ B: Factor w/ 3 levels "1","2","3": 1 2 3
1 2 3
> .. ..- attr(*, "contrasts")= chr "contr.sum"
> $ idesign :Class 'formula' length 2 ~A * B
> .. ..- attr(*, ".Environment")> $ icontrasts: chr [1:2]
"contr.sum" "contr.poly"
> $ imatrix : NULL
> - attr(*, "class")= chr "Anova.mlm"
> > str(summary(result))
>
> Type III Repeated Measures MANOVA Tests:
>
> ------------------------------------------
>
> Term: (Intercept)
>
> Response transformation matrix:
> (Intercept)
> a1_b1 1
> a1_b2 1
> a1_b3 1
> a2_b1 1
> a2_b2 1
> a2_b3 1
>
> Sum of squares and products for the hypothesis:
> (Intercept)
> (Intercept) 1169345
>
> Sum of squares and products for error:
> (Intercept)
> (Intercept) 34459.4
>
> Multivariate Tests: (Intercept)
> Df test stat approx F num Df den Df Pr(>F)
> Pillai 1 0.97137 610.8117 1 18 2.425e-15
> Wilks 1 0.02863 610.8117 1 18 2.425e-15
> Hotelling-Lawley 1 33.93399 610.8117 1 18 2.425e-15
> Roy 1 33.93399 610.8117 1 18 2.425e-15
>
> ------------------------------------------
>
> Term: sex
>
> Response transformation matrix:
> (Intercept)
> a1_b1 1
> a1_b2 1
> a1_b3 1
> a2_b1 1
> a2_b2 1
> a2_b3 1
>
> Sum of squares and products for the hypothesis:
> (Intercept)
> (Intercept) 10857.8
>
> Sum of squares and products for error:
> (Intercept)
> (Intercept) 34459.4
>
> Multivariate Tests: sex
> Df test stat approx F num Df den Df Pr(>F)
> Pillai 1 0.2395956 5.671614 1 18 0.028486
> Wilks 1 0.7604044 5.671614 1 18 0.028486
> Hotelling-Lawley 1 0.3150896 5.671614 1 18 0.028486
> Roy 1 0.3150896 5.671614 1 18 0.028486
>
> ------------------------------------------
>
> Term: A
>
> Response transformation matrix:
> A1
> a1_b1 1
> a1_b2 1
> a1_b3 1
> a2_b1 -1
> a2_b2 -1
> a2_b3 -1
>
> Sum of squares and products for the hypothesis:
> A1
> A1 980
>
> Sum of squares and products for error:
> A1
> A1 10401.8
>
> Multivariate Tests: A
> Df test stat approx F num Df den Df Pr(>F)
> Pillai 1 0.0861024 1.695860 1 18 0.20925
> Wilks 1 0.9138976 1.695860 1 18 0.20925
> Hotelling-Lawley 1 0.0942145 1.695860 1 18 0.20925
> Roy 1 0.0942145 1.695860 1 18 0.20925
>
> ------------------------------------------
>
> Term: sex:A
>
> Response transformation matrix:
> A1
> a1_b1 1
> a1_b2 1
> a1_b3 1
> a2_b1 -1
> a2_b2 -1
> a2_b3 -1
>
> Sum of squares and products for the hypothesis:
> A1
> A1 0.2
>
> Sum of squares and products for error:
> A1
> A1 10401.8
>
> Multivariate Tests: sex:A
> Df test stat approx F num Df den Df Pr(>F)
> Pillai 1 0.0000192 0.0003460939 1 18 0.98536
> Wilks 1 0.9999808 0.0003460939 1 18 0.98536
> Hotelling-Lawley 1 0.0000192 0.0003460939 1 18 0.98536
> Roy 1 0.0000192 0.0003460939 1 18 0.98536
>
> ------------------------------------------
>
> Term: B
>
> Response transformation matrix:
> B1 B2
> a1_b1 1 0
> a1_b2 0 1
> a1_b3 -1 -1
> a2_b1 1 0
> a2_b2 0 1
> a2_b3 -1 -1
>
> Sum of squares and products for the hypothesis:
> B1 B2
> B1 3618.05 3443.2
> B2 3443.20 3276.8
>
> Sum of squares and products for error:
> B1 B2
> B1 2304.5 1396.8
> B2 1396.8 1225.2
>
> Multivariate Tests: B
> Df test stat approx F num Df den Df Pr(>F)
> Pillai 1 0.730544 23.04504 2 17 1.4426e-05
> Wilks 1 0.269456 23.04504 2 17 1.4426e-05
> Hotelling-Lawley 1 2.711181 23.04504 2 17 1.4426e-05
> Roy 1 2.711181 23.04504 2 17 1.4426e-05
>
> ------------------------------------------
>
> Term: sex:B
>
> Response transformation matrix:
> B1 B2
> a1_b1 1 0
> a1_b2 0 1
> a1_b3 -1 -1
> a2_b1 1 0
> a2_b2 0 1
> a2_b3 -1 -1
>
> Sum of squares and products for the hypothesis:
> B1 B2
> B1 26.45 23
> B2 23.00 20
>
> Sum of squares and products for error:
> B1 B2
> B1 2304.5 1396.8
> B2 1396.8 1225.2
>
> Multivariate Tests: sex:B
> Df test stat approx F num Df den Df Pr(>F)
> Pillai 1 0.0160644 0.1387764 2 17 0.8714
> Wilks 1 0.9839356 0.1387764 2 17 0.8714
> Hotelling-Lawley 1 0.0163266 0.1387764 2 17 0.8714
> Roy 1 0.0163266 0.1387764 2 17 0.8714
>
> ------------------------------------------
>
> Term: A:B
>
> Response transformation matrix:
> A1:B1 A1:B2
> a1_b1 1 0
> a1_b2 0 1
> a1_b3 -1 -1
> a2_b1 -1 0
> a2_b2 0 -1
> a2_b3 1 1
>
> Sum of squares and products for the hypothesis:
> A1:B1 A1:B2
> A1:B1 5152.05 738.3
> A1:B2 738.30 105.8
>
> Sum of squares and products for error:
> A1:B1 A1:B2
> A1:B1 3210.5 1334.4
> A1:B2 1334.4 924.0
>
> Multivariate Tests: A:B
> Df test stat approx F num Df den Df Pr(>F)
> Pillai 1 0.7252156 22.43334 2 17 1.7039e-05
> Wilks 1 0.2747844 22.43334 2 17 1.7039e-05
> Hotelling-Lawley 1 2.6392162 22.43334 2 17 1.7039e-05
> Roy 1 2.6392162 22.43334 2 17 1.7039e-05
>
> ------------------------------------------
>
> Term: sex:A:B
>
> Response transformation matrix:
> A1:B1 A1:B2
> a1_b1 1 0
> a1_b2 0 1
> a1_b3 -1 -1
> a2_b1 -1 0
> a2_b2 0 -1
> a2_b3 1 1
>
> Sum of squares and products for the hypothesis:
> A1:B1 A1:B2
> A1:B1 26.45 2.3
> A1:B2 2.30 0.2
>
> Sum of squares and products for error:
> A1:B1 A1:B2
> A1:B1 3210.5 1334.4
> A1:B2 1334.4 924.0
>
> Multivariate Tests: sex:A:B
> Df test stat approx F num Df den Df Pr(>F)
> Pillai 1 0.0157232 0.1357821 2 17 0.87397
> Wilks 1 0.9842768 0.1357821 2 17 0.87397
> Hotelling-Lawley 1 0.0159744 0.1357821 2 17 0.87397
> Roy 1 0.0159744 0.1357821 2 17 0.87397
>
> Univariate Type III Repeated-Measures ANOVA Assuming Sphericity
>
> SS num Df Error SS den Df F Pr(>F)
> (Intercept) 194891 1 5743.2 18 610.8117 2.425e-15
> sex 1810 1 5743.2 18 5.6716 0.02849
> A 163 1 1733.6 18 1.6959 0.20925
> sex:A 0 1 1733.6 18 0.0003 0.98536
> B 1151 2 711.0 36 29.1292 2.990e-08
> sex:B 8 2 711.0 36 0.1979 0.82134
> A:B 1507 2 933.4 36 29.0532 3.078e-08
> sex:A:B 8 2 933.4 36 0.1565 0.85568
>
>
> Mauchly Tests for Sphericity
>
> Test statistic p-value
> B 0.57532 0.0091036
> sex:B 0.57532 0.0091036
> A:B 0.45375 0.0012104
> sex:A:B 0.45375 0.0012104
>
>
> Greenhouse-Geisser and Huynh-Feldt Corrections
> for Departure from Sphericity
>
> GG eps Pr(>F[GG])
> B 0.70191 2.143e-06
> sex:B 0.70191 0.7427
> A:B 0.64672 4.838e-06
> sex:A:B 0.64672 0.7599
>
> HF eps Pr(>F[HF])
> B 0.74332 1.181e-06
> sex:B 0.74332 0.7560
> A:B 0.67565 3.191e-06
> sex:A:B 0.67565 0.7702
> List of 13
> $ SSP :List of 8
> ..$ (Intercept): num [1, 1] 1169345
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr "(Intercept)"
> .. .. ..$ : chr "(Intercept)"
> ..$ sex : num [1, 1] 10858
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr "(Intercept)"
> .. .. ..$ : chr "(Intercept)"
> ..$ A : num [1, 1] 980
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr "A1"
> .. .. ..$ : chr "A1"
> ..$ sex:A : num [1, 1] 0.2
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr "A1"
> .. .. ..$ : chr "A1"
> ..$ B : num [1:2, 1:2] 3618 3443 3443 3277
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:2] "B1" "B2"
> .. .. ..$ : chr [1:2] "B1" "B2"
> ..$ sex:B : num [1:2, 1:2] 26.4 23 23 20
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:2] "B1" "B2"
> .. .. ..$ : chr [1:2] "B1" "B2"
> ..$ A:B : num [1:2, 1:2] 5152 738 738 106
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> ..$ sex:A:B : num [1:2, 1:2] 26.4 2.3 2.3 0.2
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> $ SSPE :List of 8
> ..$ (Intercept): num [1, 1] 34459
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr "(Intercept)"
> .. .. ..$ : chr "(Intercept)"
> ..$ sex : num [1, 1] 34459
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr "(Intercept)"
> .. .. ..$ : chr "(Intercept)"
> ..$ A : num [1, 1] 10402
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr "A1"
> .. .. ..$ : chr "A1"
> ..$ sex:A : num [1, 1] 10402
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr "A1"
> .. .. ..$ : chr "A1"
> ..$ B : num [1:2, 1:2] 2304 1397 1397 1225
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:2] "B1" "B2"
> .. .. ..$ : chr [1:2] "B1" "B2"
> ..$ sex:B : num [1:2, 1:2] 2304 1397 1397 1225
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:2] "B1" "B2"
> .. .. ..$ : chr [1:2] "B1" "B2"
> ..$ A:B : num [1:2, 1:2] 3210 1334 1334 924
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> ..$ sex:A:B : num [1:2, 1:2] 3210 1334 1334 924
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> $ P :List of 8
> ..$ (Intercept): num [1:6, 1] 1 1 1 1 1 1
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2" "a1_b3"
"a2_b1" ...
> .. .. ..$ : chr "(Intercept)"
> ..$ sex : num [1:6, 1] 1 1 1 1 1 1
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2" "a1_b3"
"a2_b1" ...
> .. .. ..$ : chr "(Intercept)"
> ..$ A : num [1:6, 1] 1 1 1 -1 -1 -1
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2" "a1_b3"
"a2_b1" ...
> .. .. ..$ : chr "A1"
> ..$ sex:A : num [1:6, 1] 1 1 1 -1 -1 -1
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2" "a1_b3"
"a2_b1" ...
> .. .. ..$ : chr "A1"
> ..$ B : num [1:6, 1:2] 1 0 -1 1 0 -1 0 1 -1 0 ...
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2" "a1_b3"
"a2_b1" ...
> .. .. ..$ : chr [1:2] "B1" "B2"
> ..$ sex:B : num [1:6, 1:2] 1 0 -1 1 0 -1 0 1 -1 0 ...
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2" "a1_b3"
"a2_b1" ...
> .. .. ..$ : chr [1:2] "B1" "B2"
> ..$ A:B : num [1:6, 1:2] 1 0 -1 -1 0 1 0 1 -1 0 ...
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2" "a1_b3"
"a2_b1" ...
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> ..$ sex:A:B : num [1:6, 1:2] 1 0 -1 -1 0 1 0 1 -1 0 ...
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2" "a1_b3"
"a2_b1" ...
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> $ df : Named num [1:8] 1 1 1 1 1 1 1 1
> ..- attr(*, "names")= chr [1:8] "(Intercept)"
"sex" "A" "sex:A" ...
> $ error.df : int 18
> $ terms : chr [1:8] "(Intercept)" "sex" "A"
"sex:A" ...
> $ repeated : logi TRUE
> $ type : chr "III"
> $ test : chr "Wilks"
> $ idata :'data.frame': 6 obs. of 2 variables:
> ..$ A: Factor w/ 2 levels "1","2": 1 1 1 2 2 2
> .. ..- attr(*, "contrasts")= chr "contr.sum"
> ..$ B: Factor w/ 3 levels "1","2","3": 1 2 3
1 2 3
> .. ..- attr(*, "contrasts")= chr "contr.sum"
> $ idesign :Class 'formula' length 2 ~A * B
> .. ..- attr(*, ".Environment")> $ icontrasts: chr [1:2]
"contr.sum" "contr.poly"
> $ imatrix : NULL
> - attr(*, "class")= chr "Anova.mlm"
> > result$`Pr(>F)`
> NULL
> > result[[4]]
> (Intercept) sex A sex:A B sex:B
> 1 1 1 1 1 1
> A:B sex:A:B
> 1 1
> >
>
>
>
>
>
>
>
> On 23/08/2010 21:56, Dennis Murphy wrote:
>> Hi:
>>
>> Look at
>> result$`Pr(>F)`
>>
>> (with backticks around Pr(>F) ), or more succinctly, result[[4]].
>>
>> HTH,
>> Dennis
>>
>> On Mon, Aug 23, 2010 at 12:01 PM, Johan Steen
>>> wrote:
>>
>> Dear all,
>>
>> is there anyone who can help me extracting p-values from an Anova
>> object from the car library? I can't seem to locate the p-values
>> using str(result) or str(summary(result)) in the example below
>>
>>> A <- factor( rep(1:2,each=3) )
>>> B <- factor( rep(1:3,times=2) )
>>> idata <- data.frame(A,B)
>>> fit <- lm( cbind(a1_b1,a1_b2,a1_b3,a2_b1,a2_b2,a2_b3) ? sex,
>> data=Data.wide)
>>> result <- Anova(fit, type="III",
test="Wilks", idata=idata,
>> idesign=?A*B)
>>
>>
>> Any help would be much appreciated!
>>
>>
>> Many thanks,
>>
>> Johan
>>
>> ______________________________________________
>> R-help at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>>
--Forwarded Message Attachment--
From: eriki at ccbr.umn.edu
CC: r-help at r-project.org
To: eva.nordstrom at yahoo.com
Date: Mon, 23 Aug 2010 16:41:16 -0500
Subject: Re: [R] "easiest" way to write an R dataframe to excel?
In addition to the Wiki already mentioned, the following may be
useful:
http://learnr.wordpress.com/2009/10/06/export-data-frames-to-multi-worksheet-excel-file/
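If installing Perl is not wanted, a minimal Perl-free sketch (an assumption
about what counts as "easiest", not the only route) is to write a CSV file,
which Excel opens directly; mydata below stands for your data frame:

write.csv(mydata, file = "mydata.csv", row.names = FALSE)  # plain CSV, opens in Excel

For a true .xls file without Perl, packages such as xlsReadWrite (Windows-only)
or RODBC with an Excel ODBC connection may also be worth checking.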
Eva Nordstrom wrote:
> I am using R 2.11.1 in a Microsoft Windows 7 environment.
>
> I tried using WriteXLS, but get the message "In system(cmd) : perl not found"
>
> What is the "easiest" way to write an R data frame to Excel? (I am familiar
> with WriteXLS, but I do not have Perl installed, and if not needed, do not
> wish to install it.)
>
> I am also familiar with write.table, but if possible, wish to create an Excel
> file from within R.
>
> I'm unsure if this is possible, or perhaps I should just go ahead and install
> Perl...?
>
>
>
> [[alternative HTML version deleted]]
>
>
>
> ------------------------------------------------------------------------
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
--Forwarded Message Attachment--
From: mauede at alice.it
To: r-help at stat.math.ethz.ch
Date: Mon, 23 Aug 2010 23:43:42 +0200
Subject: [R] 3D stairway plot
Please, is there an R function/package that allows for 3D stairway plots like
the attached one?
In addition, how can I overlay a parametric grid plot?
Thank you
Maura
--Forwarded Message Attachment--
From: cuckovic.paik at gmail.com
To: r-help at r-project.org
Date: Mon, 23 Aug 2010 14:44:56 -0700
Subject: [R] Memory Issue
Dear All,
I have an issue with memory use in R programming.
Here is the brief story: I want to simulate the power of a nonparametric
test and compare it with existing tests. The basic steps are
1. I use Newton's method to obtain the nonparametric MLE, which involves
inverting a large n-by-n matrix (n = sample size); it takes less than 3
seconds on average to get the MLE.
2. Since the test statistic has an unknown sampling distribution, the p-value
is simulated using Monte Carlo (1000 runs); it takes about 3-4 minutes to
get a p-value.
3. I need to simulate 1000 random samples and repeat steps 1 and 2 to get
the p-value for each simulated sample, which gives the power of the test.
Here is the question:
It initially completes 5-6 simulations per hour; after that, the time needed
to complete a single simulation increases exponentially. After 24 hours of
running, I only have about 15-20 simulations completed. My computer is a PC
(Pentium Dual Core CPU 2.5 GHz, RAM 6.00 GB, 64-bit). Apparently, memory
is the problem.
I also tried various memory re-allocation procedures, but they didn't work. Can
anybody help with this? Thanks in advance.
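(A common culprit when simulations slow down like this is growing a results
object inside the loop instead of pre-allocating it. The sketch below is only a
guess at the structure, since the poster's code is not shown, with placeholder
steps where the MLE and the Monte Carlo p-value would go.)

nsim  <- 1000
pvals <- numeric(nsim)                      # pre-allocate; don't grow with c()
for (i in seq_len(nsim)) {
  X        <- matrix(rnorm(200 * 200), 200) # placeholder for the n-by-n matrix
  Xinv     <- solve(crossprod(X))           # one inversion per simulated sample
  pvals[i] <- runif(1)                      # placeholder for the simulated p-value
  rm(X, Xinv)                               # drop large objects before the next pass
  if (i %% 100 == 0) gc()                   # occasional garbage collection
}
power <- mean(pvals < 0.05)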
--
View this message in context:
http://r.789695.n4.nabble.com/Memory-Issue-tp2335860p2335860.html
Sent from the R help mailing list archive at Nabble.com.
--Forwarded Message Attachment--
From: kw.stat at gmail.com
CC: satchwinston at yahoo.com; r-help at r-project.org
To: dhajage.r at gmail.com
Date: Mon, 23 Aug 2010 16:46:57 -0500
Subject: Re: [R] R reports
I'm mindful of the volunteer nature of R-core, but I'm also sympathetic to
Donald.
I use Sweave to create documents, though I tend to view Sweave as a
typesetter, not a report writer. What do I see as the difference? Sweave
typesets _raw_ R output. A report writer makes it easier to quickly grasp
the key points. For example, floating-point output is a poor choice for
reports to management, and I have been experimenting with something I call
"lucid" printing. Here are two examples that show how standard floating
point can make simple questions needlessly hard to answer:
Question 1: How do the coefficients compare in size? How large is the
intercept?
R> print(df1)
effect
hyb_A -1.350000e+01
hyb_B 4.500000e+00
hyb_C 2.450000e+01
hyb_C1 6.927792e-14
hyb_C2 -1.750000e+00
hyb_D 1.650000e+01
(Intercept) 1.135000e+02
R> lucid(df1)
effect
hyb_A -13.5
hyb_B 4.5
hyb_C 24.5
hyb_C1 0
hyb_C2 -1.75
hyb_D 16.5
(Intercept) 113.5
Question 2: Which are the smallest / largest / significant variance
components?
R> print(df2)
effect component std.error z.ratio constraint
1 hyb 1.09e+01 4.40e+00 2.471607 Postive
2 mlra 2.77e+02 1.66e+02 1.669316 Postive
3 mlra:loc 4.93e+02 2.61e+01 18.899825 Postive
4 hyb:mlra 1.30e-04 1.58e-06 82.242618 Boundary
5 yr 1.26e+02 1.19e+02 1.060751 Postive
6 hyb:yr 2.23e+01 4.50e+00 4.951793 Postive
7 mlra:yr 4.81e+02 1.08e+02 4.442904 Postive
8 R!variance 2.68e+02 3.25e+00 82.242618 Postive
R> lucid(df2)
effect component std.error z.ratio constraint
1 hyb 10.9 4.4 2.472 Postive
2 mlra 277 166 1.669 Postive
3 mlra:loc 493 26.1 18.9 Postive
4 hyb:mlra 0.0001 0 82.24 Boundary
5 yr 126 119 1.061 Postive
6 hyb:yr 22.3 4.5 4.952 Postive
7 mlra:yr 481 108 4.443 Postive
8 R!variance 268 3.25 82.24 Postive
The (beta) code:
lucid <- function(x, dig=4, ...) UseMethod("lucid")

lucid.default <- function(x, dig=4, ...) { x }  # do nothing for non-numeric

lucid.numeric <- function(x, dig=4, ...) {
  # Use 4 significant digits, drop trailing zeros, align decimals
  if (class(x) == "numeric" | class(x) == "integer")
    format(format(signif(zapsmall(x), dig), scientific=FALSE,
                  drop0trailing=TRUE))
  else x
}

lucid.data.frame <- function(x, dig=4, quote=FALSE, ...) {
  x[] <- lapply(x, lucid, dig)
  print(x, quote=quote)
  invisible(x)
}
Kevin Wright
On Sun, Aug 22, 2010 at 2:43 AM, David Hajage wrote:
> 2010/8/21 Donald Winston
>
>> I know how to program in a dozen languages. I have a B.A. in
mathematics,
>> and an M.S. in operations research and statistics.
>
>
> I just don't care; I would try to answer you even if you had no formal
> training.
>
>
>> I can figure out how to write reports in R. The point is I don't
want to.
>> There should be a function that generates a report just like there is a
>> function that generates a plot without using loops or if statements.
>
>
> The question is: what do you want? What is the result of this hypothetical
> function? I am sure you don't need a loop to do that.
>
>
>> I've read "R IN A NUTSHELL" from cover to cover. The word "report" does not
>> even appear in the index.
>>
>
> Just ask Joseph Adler.
>
>
>>
>> On Aug 21, 2010, at 11:39 AM, David Hajage wrote:
>>
>>> I must repeat: "just show us what kind of report you want to do, and you
>>> will perhaps get a solution to reproduce it"
>>> We still don't know what the output of your report() function is.
>>> This is ridiculous.
>>>
>>> On Saturday, August 21, 2010, Frank Harrell wrote:
>>>>
>>>>
>>>> On Sat, 21 Aug 2010, Donald Paul Winston wrote:
>>>>
>>>>
>>>>
>>>> Good grief. Adding a report function is not going to make R less
>>>> flexible. Don't you want to use a tool that's relevant to the rest of the
>>>> world? That world is much bigger than your world. This is ridiculous.
>>>>
>>>> Looks like some people are complaining about me criticizing R and the
>>>> people who defend it. Good grief again.
>>>>
>>>>
>>>> I think your philosophy is now more clear, Donald. People who don't like
>>>> learning new things place themselves at a competitive disadvantage. The R
>>>> community, where learning is highly valued, may be fundamentally
>>>> incompatible with your philosophy. You may do well to stay with SAS.
>>>>
>>>> Frank
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> ________________________________
>>>> From: David Hajage-2 [via R]
>>>> <ml-node+2333348-1227050197-138200 at n4.nabble.com>
>>>>
>>>>
>>>> Sent: Sat, August 21, 2010 4:54:12 AM
>>>> Subject: Re: R reports
>>>>
>>>> Just show us what kind of report you want to do, and you will perhaps
>>>> get a solution to reproduce it. Then, if you don't like the way to do that,
>>>> write your own code or don't use R; no one forces you. The majority of R
>>>> users are satisfied with the way reports are generated, because it is
>>>> flexible.
>>>> There is ABSOLUTELY *NO WARRANTY* with R; this also means you have no
>>>> guarantee of finding exactly what you want, or what you can find in SAS.
>>>> Just deal with it.
>>>>
>>>> 2010/8/21 Donald Paul Winston <[hidden email]>
>>>>
>>>>
>>>>
>>>>
>>>> Sweave and LaTeX are way too much overhead to deal with. There should be a
>>>> built-in standard report() function analogous to plot().
>>>>
>>>> Something like the following is necessary if you want real people to take
>>>> R seriously:
>>>>
>>>> report(data=, vars=,
>>>>        label=, by=,
>>>>        sum=<vectorOfColumnNames>, title=,
>>>>        footer=,
>>>>        pageBy=, sumBy=,
>>>>        filename=, fileType=...etc)
>>>>
>>>> Did I say "real" people? I've been Palinized.
>>>> --
>>>> View this message in context:
>>>> http://r.789695.n4.nabble.com/R-reports-tp2330733p2333264.html?by-user=t
>>>> Sent from the R help mailing list archive at Nabble.com.
>>>>
>>>>
>>>> ______________________________________________
>>>> R-help at r-project.org mailing list
>>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>>>> and provide commented, minimal, self-contained, reproducible
code.
>>>>
>>
>>
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Kevin Wright
[[alternative HTML version deleted]]
--Forwarded Message Attachment--
From: sarah.goslee at gmail.com
CC: r-help at r-project.org
To: johan.steen at gmail.com
Date: Mon, 23 Aug 2010 17:49:01 -0400
Subject: Re: [R] extracting p-values from Anova objects (from the car library)
I didn't follow the earlier replies, but since you still can't seem to
get p-values, what about this:
(I also couldn't run your sample code, so this is a toy example;
and do you mean anova() - I'm not familiar with Anova(), and
didn't see anything with ??Anova)
> x <- runif(100)
> y <- runif(100)
> xy.anova <- anova(lm(y ~ x))
> xy.anova
Analysis of Variance Table
Response: y
          Df Sum Sq  Mean Sq F value Pr(>F)
x          1 0.0001 0.000086   0.001 0.9745
Residuals 98 8.2018 0.083692
> names(xy.anova)
[1] "Df"      "Sum Sq"  "Mean Sq" "F value" "Pr(>F)"
> xy.anova$"Pr(>F)"
[1] 0.9745443        NA
Sarah
On Mon, Aug 23, 2010 at 5:35 PM, Johan Steen wrote:
> Thanks for your replies,
>
> but unfortunately none of them seem to help.
> I do get p-values in the output, but can't seem to locate them anywhere in
> these objects via the str() function. I also get very different output using
> str() than you obtained from the lm help page
>
--
Sarah Goslee
http://www.functionaldiversity.org
--Forwarded Message Attachment--
From: ivo.welch at gmail.com
To: r-help at stat.math.ethz.ch
Date: Mon, 23 Aug 2010 17:51:38 -0400
Subject: [R] unexpected subset select results?
quiz---what does this produce?
d=data.frame( a=1:1000, b=2001:3000, z= 5001:6000 )
attach(d); c <- (a+b)>25; detach(d)
d= subset(d, TRUE, select=c( a, b, c ))
Yes, I know I have made a mistake, in that the code does not do what I
presumably would have wanted. It does seem like unexpected behavior,
though, without an error. There probably is some reason why this does
not ring an alarm bell...
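(A sketch of why, based on a reading of subset.data.frame rather than an
authoritative account: the select expression is evaluated in an environment
where each column name is bound to its position, so a becomes 1 and b becomes
2; c is not a column name, so lookup falls through to the workspace, where it
is the logical vector created above, all TRUE since a + b always exceeds 25.
c(1, 2, TRUE, TRUE, ...) is then coerced to c(1, 2, 1, 1, ...), so the result
keeps columns a and b plus a thousand extra copies of column a, with no error
raised.)

d  <- data.frame(a = 1:1000, b = 2001:3000, z = 5001:6000)
cc <- with(d, (a + b) > 25)                 # the same vector, without attach()
dim(subset(d, TRUE, select = c(a, b, cc)))  # 1000 rows, 1002 columns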
/iaw
----
Ivo Welch (ivo.welch at brown.edu, ivo.welch at gmail.com)
--Forwarded Message Attachment--
From: jfox at mcmaster.ca
CC: r-help at r-project.org
To: johan.steen at gmail.com
Date: Mon, 23 Aug 2010 17:53:30 -0400
Subject: Re: [R] extracting p-values from Anova objects (from the car library)
Dear Johan,
> -----Original Message-----
> From: r-help-bounces at r-project.org [mailto:r-help-bounces at r-project.org]
> On Behalf Of Johan Steen
> Sent: August-23-10 3:02 PM
> To: r-help at r-project.org
> Subject: [R] extracting p-values from Anova objects (from the car library)
>
> Dear all,
>
> is there anyone who can help me extracting p-values from an Anova object
> from the car library? I can't seem to locate the p-values using
> str(result) or str(summary(result)) in the example below
>
> > A <- factor( rep(1:2,each=3) )
> > B <- factor( rep(1:3,times=2) )
> > idata <- data.frame(A,B)
> > fit <- lm( cbind(a1_b1,a1_b2,a1_b3,a2_b1,a2_b2,a2_b3) ~ sex,
> data=Data.wide)
> > result <- Anova(fit, type="III", test="Wilks", idata=idata, idesign=~A*B)
>
>
> Any help would be much appreciated!
I'm afraid that the answer is that the p-values aren't easily
accessible.
The print method for Anova.mlm objects just passes through the object
invisibly as its result, as is conventional for print methods. The summary
method does the same -- that isn't conventional, but the summary method can
produce so many different kinds of printed output (various multivariate test
criteria for models with and without repeated measures; for the latter,
multivariate and univariate tests with and without corrections for
non-sphericity) that the printed output is produced directly rather than put
in an object with its own print method.
What you can do is take a look at car:::print.Anova.mlm or
car:::summary.Anova.mlm (probably the print method, which is simpler) to see
how the p-values that you want are computed and write a small function to
return them.
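For instance, a minimal sketch of such a helper (not car's own code; it
assumes, as in Johan's design, that every hypothesis has 1 df, so Rao's F
approximation for Wilks' Lambda is exact here) could look like this:

wilks.pvals <- function(object) {
  stopifnot(inherits(object, "Anova.mlm"), all(object$df == 1))
  v <- object$error.df                        # error (denominator) df
  sapply(object$terms, function(term) {
    H <- object$SSP[[term]]                   # hypothesis SSP matrix
    E <- object$SSPE[[term]]                  # error SSP matrix
    p <- nrow(E)                              # number of response contrasts
    lambda <- det(E) / det(H + E)             # Wilks' Lambda
    Fstat  <- (1 - lambda) / lambda * (v - p + 1) / p
    pf(Fstat, p, v - p + 1, lower.tail = FALSE)
  })
}

wilks.pvals(result) should then reproduce the Pr(>F) column printed by the
Anova.mlm print method.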
I hope this helps,
John
--------------------------------
John Fox
Senator William McMaster
Professor of Social Statistics
Department of Sociology
McMaster University
Hamilton, Ontario, Canada
web: socserv.mcmaster.ca/jfox
>
>
> Many thanks,
>
> Johan
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
--Forwarded Message Attachment--
From: jfox at mcmaster.ca
CC: r-help at r-project.org
To: johan.steen at gmail.com; djmuser at gmail.com
Date: Mon, 23 Aug 2010 18:01:09 -0400
Subject: Re: [R] extracting p-values from Anova objects (from the car library)
Dear Johan and Dennis,
I believe that the source of confusion is the difference between Anova.lm(),
the Anova method for a linear-model object, which indeed has a summary
method that returns an object from which you can extract p-values, and
Anova.mlm(), which passes the multivariate-linear-model object through (as I
explained in a previous response).
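(As a quick illustration of the univariate case: for an ordinary lm the Anova()
result is just an anova table, so the p-values are a column you can index
directly; fit1 below is a hypothetical single-response model, not Johan's fit.)

a1 <- Anova(fit1, type = "III")
a1$"Pr(>F)"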
Best,
John
--------------------------------
John Fox
Senator William McMaster
Professor of Social Statistics
Department of Sociology
McMaster University
Hamilton, Ontario, Canada
web: socserv.mcmaster.ca/jfox
> -----Original Message-----
> From: r-help-bounces at r-project.org [mailto:r-help-bounces at r-project.org]
> On Behalf Of Johan Steen
> Sent: August-23-10 5:36 PM
> To: Dennis Murphy
> Cc: r-help at r-project.org
> Subject: Re: [R] extracting p-values from Anova objects (from the car
> library)
>
> Thanks for your replies,
>
> but unfortunately none of them seem to help.
> I do get p-values in the output, but can't seem to locate them anywhere
> in these objects via the str() function. I also get very different
> output using str() than you obtained from the lm help page
>
> Here's my output:
>
> > A <- factor( rep(1:2,each=3) )
> > B <- factor( rep(1:3,times=2) )
> > idata <- data.frame(A,B)
> > idata
> A B
> 1 1 1
> 2 1 2
> 3 1 3
> 4 2 1
> 5 2 2
> 6 2 3
> >
> > fit <- lm( cbind(a1_b1,a1_b2,a1_b3,a2_b1,a2_b2,a2_b3) ~ sex,
> data=Data.wide)
> > result <- Anova(fit, type="III", test="Wilks", idata=idata, idesign=~A*B)
> > result
>
> Type III Repeated Measures MANOVA Tests: Wilks test statistic
> Df test stat approx F num Df den Df Pr(>F)
> (Intercept) 1 0.02863 610.81 1 18 2.425e-15
> sex 1 0.76040 5.67 1 18 0.02849
> A 1 0.91390 1.70 1 18 0.20925
> sex:A 1 0.99998 0.00 1 18 0.98536
> B 1 0.26946 23.05 2 17 1.443e-05
> sex:B 1 0.98394 0.14 2 17 0.87140
> A:B 1 0.27478 22.43 2 17 1.704e-05
> sex:A:B 1 0.98428 0.14 2 17 0.87397
> > summary(result)
>
> Type III Repeated Measures MANOVA Tests:
>
> ------------------------------------------
>
> Term: (Intercept)
>
> Response transformation matrix:
> (Intercept)
> a1_b1 1
> a1_b2 1
> a1_b3 1
> a2_b1 1
> a2_b2 1
> a2_b3 1
>
> Sum of squares and products for the hypothesis:
> (Intercept)
> (Intercept) 1169345
>
> Sum of squares and products for error:
> (Intercept)
> (Intercept) 34459.4
>
> Multivariate Tests: (Intercept)
> Df test stat approx F num Df den Df Pr(>F)
> Pillai 1 0.97137 610.8117 1 18 2.425e-15
> Wilks 1 0.02863 610.8117 1 18 2.425e-15
> Hotelling-Lawley 1 33.93399 610.8117 1 18 2.425e-15
> Roy 1 33.93399 610.8117 1 18 2.425e-15
>
> ------------------------------------------
>
> Term: sex
>
> Response transformation matrix:
> (Intercept)
> a1_b1 1
> a1_b2 1
> a1_b3 1
> a2_b1 1
> a2_b2 1
> a2_b3 1
>
> Sum of squares and products for the hypothesis:
> (Intercept)
> (Intercept) 10857.8
>
> Sum of squares and products for error:
> (Intercept)
> (Intercept) 34459.4
>
> Multivariate Tests: sex
> Df test stat approx F num Df den Df Pr(>F)
> Pillai 1 0.2395956 5.671614 1 18 0.028486
> Wilks 1 0.7604044 5.671614 1 18 0.028486
> Hotelling-Lawley 1 0.3150896 5.671614 1 18 0.028486
> Roy 1 0.3150896 5.671614 1 18 0.028486
>
> ------------------------------------------
>
> Term: A
>
> Response transformation matrix:
> A1
> a1_b1 1
> a1_b2 1
> a1_b3 1
> a2_b1 -1
> a2_b2 -1
> a2_b3 -1
>
> Sum of squares and products for the hypothesis:
> A1
> A1 980
>
> Sum of squares and products for error:
> A1
> A1 10401.8
>
> Multivariate Tests: A
> Df test stat approx F num Df den Df Pr(>F)
> Pillai 1 0.0861024 1.695860 1 18 0.20925
> Wilks 1 0.9138976 1.695860 1 18 0.20925
> Hotelling-Lawley 1 0.0942145 1.695860 1 18 0.20925
> Roy 1 0.0942145 1.695860 1 18 0.20925
>
> ------------------------------------------
>
> Term: sex:A
>
> Response transformation matrix:
> A1
> a1_b1 1
> a1_b2 1
> a1_b3 1
> a2_b1 -1
> a2_b2 -1
> a2_b3 -1
>
> Sum of squares and products for the hypothesis:
> A1
> A1 0.2
>
> Sum of squares and products for error:
> A1
> A1 10401.8
>
> Multivariate Tests: sex:A
> Df test stat approx F num Df den Df Pr(>F)
> Pillai 1 0.0000192 0.0003460939 1 18 0.98536
> Wilks 1 0.9999808 0.0003460939 1 18 0.98536
> Hotelling-Lawley 1 0.0000192 0.0003460939 1 18 0.98536
> Roy 1 0.0000192 0.0003460939 1 18 0.98536
>
> ------------------------------------------
>
> Term: B
>
> Response transformation matrix:
> B1 B2
> a1_b1 1 0
> a1_b2 0 1
> a1_b3 -1 -1
> a2_b1 1 0
> a2_b2 0 1
> a2_b3 -1 -1
>
> Sum of squares and products for the hypothesis:
> B1 B2
> B1 3618.05 3443.2
> B2 3443.20 3276.8
>
> Sum of squares and products for error:
> B1 B2
> B1 2304.5 1396.8
> B2 1396.8 1225.2
>
> Multivariate Tests: B
> Df test stat approx F num Df den Df Pr(>F)
> Pillai 1 0.730544 23.04504 2 17 1.4426e-05
> Wilks 1 0.269456 23.04504 2 17 1.4426e-05
> Hotelling-Lawley 1 2.711181 23.04504 2 17 1.4426e-05
> Roy 1 2.711181 23.04504 2 17 1.4426e-05
>
> ------------------------------------------
>
> Term: sex:B
>
> Response transformation matrix:
> B1 B2
> a1_b1 1 0
> a1_b2 0 1
> a1_b3 -1 -1
> a2_b1 1 0
> a2_b2 0 1
> a2_b3 -1 -1
>
> Sum of squares and products for the hypothesis:
> B1 B2
> B1 26.45 23
> B2 23.00 20
>
> Sum of squares and products for error:
> B1 B2
> B1 2304.5 1396.8
> B2 1396.8 1225.2
>
> Multivariate Tests: sex:B
> Df test stat approx F num Df den Df Pr(>F)
> Pillai 1 0.0160644 0.1387764 2 17 0.8714
> Wilks 1 0.9839356 0.1387764 2 17 0.8714
> Hotelling-Lawley 1 0.0163266 0.1387764 2 17 0.8714
> Roy 1 0.0163266 0.1387764 2 17 0.8714
>
> ------------------------------------------
>
> Term: A:B
>
> Response transformation matrix:
> A1:B1 A1:B2
> a1_b1 1 0
> a1_b2 0 1
> a1_b3 -1 -1
> a2_b1 -1 0
> a2_b2 0 -1
> a2_b3 1 1
>
> Sum of squares and products for the hypothesis:
> A1:B1 A1:B2
> A1:B1 5152.05 738.3
> A1:B2 738.30 105.8
>
> Sum of squares and products for error:
> A1:B1 A1:B2
> A1:B1 3210.5 1334.4
> A1:B2 1334.4 924.0
>
> Multivariate Tests: A:B
> Df test stat approx F num Df den Df Pr(>F)
> Pillai 1 0.7252156 22.43334 2 17 1.7039e-05
> Wilks 1 0.2747844 22.43334 2 17 1.7039e-05
> Hotelling-Lawley 1 2.6392162 22.43334 2 17 1.7039e-05
> Roy 1 2.6392162 22.43334 2 17 1.7039e-05
>
> ------------------------------------------
>
> Term: sex:A:B
>
> Response transformation matrix:
> A1:B1 A1:B2
> a1_b1 1 0
> a1_b2 0 1
> a1_b3 -1 -1
> a2_b1 -1 0
> a2_b2 0 -1
> a2_b3 1 1
>
> Sum of squares and products for the hypothesis:
> A1:B1 A1:B2
> A1:B1 26.45 2.3
> A1:B2 2.30 0.2
>
> Sum of squares and products for error:
> A1:B1 A1:B2
> A1:B1 3210.5 1334.4
> A1:B2 1334.4 924.0
>
> Multivariate Tests: sex:A:B
> Df test stat approx F num Df den Df Pr(>F)
> Pillai 1 0.0157232 0.1357821 2 17 0.87397
> Wilks 1 0.9842768 0.1357821 2 17 0.87397
> Hotelling-Lawley 1 0.0159744 0.1357821 2 17 0.87397
> Roy 1 0.0159744 0.1357821 2 17 0.87397
>
> Univariate Type III Repeated-Measures ANOVA Assuming Sphericity
>
> SS num Df Error SS den Df F Pr(>F)
> (Intercept) 194891 1 5743.2 18 610.8117 2.425e-15
> sex 1810 1 5743.2 18 5.6716 0.02849
> A 163 1 1733.6 18 1.6959 0.20925
> sex:A 0 1 1733.6 18 0.0003 0.98536
> B 1151 2 711.0 36 29.1292 2.990e-08
> sex:B 8 2 711.0 36 0.1979 0.82134
> A:B 1507 2 933.4 36 29.0532 3.078e-08
> sex:A:B 8 2 933.4 36 0.1565 0.85568
>
>
> Mauchly Tests for Sphericity
>
> Test statistic p-value
> B 0.57532 0.0091036
> sex:B 0.57532 0.0091036
> A:B 0.45375 0.0012104
> sex:A:B 0.45375 0.0012104
>
>
> Greenhouse-Geisser and Huynh-Feldt Corrections
> for Departure from Sphericity
>
> GG eps Pr(>F[GG])
> B 0.70191 2.143e-06
> sex:B 0.70191 0.7427
> A:B 0.64672 4.838e-06
> sex:A:B 0.64672 0.7599
>
> HF eps Pr(>F[HF])
> B 0.74332 1.181e-06
> sex:B 0.74332 0.7560
> A:B 0.67565 3.191e-06
> sex:A:B 0.67565 0.7702
> > str(result)
> List of 13
> $ SSP :List of 8
> ..$ (Intercept): num [1, 1] 1169345
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr "(Intercept)"
> .. .. ..$ : chr "(Intercept)"
> ..$ sex : num [1, 1] 10858
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr "(Intercept)"
> .. .. ..$ : chr "(Intercept)"
> ..$ A : num [1, 1] 980
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr "A1"
> .. .. ..$ : chr "A1"
> ..$ sex:A : num [1, 1] 0.2
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr "A1"
> .. .. ..$ : chr "A1"
> ..$ B : num [1:2, 1:2] 3618 3443 3443 3277
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:2] "B1" "B2"
> .. .. ..$ : chr [1:2] "B1" "B2"
> ..$ sex:B : num [1:2, 1:2] 26.4 23 23 20
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:2] "B1" "B2"
> .. .. ..$ : chr [1:2] "B1" "B2"
> ..$ A:B : num [1:2, 1:2] 5152 738 738 106
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> ..$ sex:A:B : num [1:2, 1:2] 26.4 2.3 2.3 0.2
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> $ SSPE :List of 8
> ..$ (Intercept): num [1, 1] 34459
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr "(Intercept)"
> .. .. ..$ : chr "(Intercept)"
> ..$ sex : num [1, 1] 34459
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr "(Intercept)"
> .. .. ..$ : chr "(Intercept)"
> ..$ A : num [1, 1] 10402
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr "A1"
> .. .. ..$ : chr "A1"
> ..$ sex:A : num [1, 1] 10402
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr "A1"
> .. .. ..$ : chr "A1"
> ..$ B : num [1:2, 1:2] 2304 1397 1397 1225
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:2] "B1" "B2"
> .. .. ..$ : chr [1:2] "B1" "B2"
> ..$ sex:B : num [1:2, 1:2] 2304 1397 1397 1225
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:2] "B1" "B2"
> .. .. ..$ : chr [1:2] "B1" "B2"
> ..$ A:B : num [1:2, 1:2] 3210 1334 1334 924
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> ..$ sex:A:B : num [1:2, 1:2] 3210 1334 1334 924
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> $ P :List of 8
> ..$ (Intercept): num [1:6, 1] 1 1 1 1 1 1
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2"
"a1_b3" "a2_b1" ...
> .. .. ..$ : chr "(Intercept)"
> ..$ sex : num [1:6, 1] 1 1 1 1 1 1
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2"
"a1_b3" "a2_b1" ...
> .. .. ..$ : chr "(Intercept)"
> ..$ A : num [1:6, 1] 1 1 1 -1 -1 -1
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2"
"a1_b3" "a2_b1" ...
> .. .. ..$ : chr "A1"
> ..$ sex:A : num [1:6, 1] 1 1 1 -1 -1 -1
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2"
"a1_b3" "a2_b1" ...
> .. .. ..$ : chr "A1"
> ..$ B : num [1:6, 1:2] 1 0 -1 1 0 -1 0 1 -1 0 ...
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2"
"a1_b3" "a2_b1" ...
> .. .. ..$ : chr [1:2] "B1" "B2"
> ..$ sex:B : num [1:6, 1:2] 1 0 -1 1 0 -1 0 1 -1 0 ...
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2"
"a1_b3" "a2_b1" ...
> .. .. ..$ : chr [1:2] "B1" "B2"
> ..$ A:B : num [1:6, 1:2] 1 0 -1 -1 0 1 0 1 -1 0 ...
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2"
"a1_b3" "a2_b1" ...
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> ..$ sex:A:B : num [1:6, 1:2] 1 0 -1 -1 0 1 0 1 -1 0 ...
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2"
"a1_b3" "a2_b1" ...
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> $ df : Named num [1:8] 1 1 1 1 1 1 1 1
> ..- attr(*, "names")= chr [1:8] "(Intercept)"
"sex" "A" "sex:A" ...
> $ error.df : int 18
> $ terms : chr [1:8] "(Intercept)" "sex"
"A" "sex:A" ...
> $ repeated : logi TRUE
> $ type : chr "III"
> $ test : chr "Wilks"
> $ idata :'data.frame': 6 obs. of 2 variables:
> ..$ A: Factor w/ 2 levels "1","2": 1 1 1 2 2 2
> .. ..- attr(*, "contrasts")= chr "contr.sum"
> ..$ B: Factor w/ 3 levels "1","2","3": 1 2
3 1 2 3
> .. ..- attr(*, "contrasts")= chr "contr.sum"
> $ idesign :Class 'formula' length 2 ~A * B
> .. ..- attr(*, ".Environment")> $ icontrasts: chr [1:2]
"contr.sum" "contr.poly"
> $ imatrix : NULL
> - attr(*, "class")= chr "Anova.mlm"
> > str(summary(result))
>
> Type III Repeated Measures MANOVA Tests:
>
> ------------------------------------------
>
> Term: (Intercept)
>
> Response transformation matrix:
> (Intercept)
> a1_b1 1
> a1_b2 1
> a1_b3 1
> a2_b1 1
> a2_b2 1
> a2_b3 1
>
> Sum of squares and products for the hypothesis:
> (Intercept)
> (Intercept) 1169345
>
> Sum of squares and products for error:
> (Intercept)
> (Intercept) 34459.4
>
> Multivariate Tests: (Intercept)
> Df test stat approx F num Df den Df Pr(>F)
> Pillai 1 0.97137 610.8117 1 18 2.425e-15
> Wilks 1 0.02863 610.8117 1 18 2.425e-15
> Hotelling-Lawley 1 33.93399 610.8117 1 18 2.425e-15
> Roy 1 33.93399 610.8117 1 18 2.425e-15
>
> ------------------------------------------
>
> Term: sex
>
> Response transformation matrix:
> (Intercept)
> a1_b1 1
> a1_b2 1
> a1_b3 1
> a2_b1 1
> a2_b2 1
> a2_b3 1
>
> Sum of squares and products for the hypothesis:
> (Intercept)
> (Intercept) 10857.8
>
> Sum of squares and products for error:
> (Intercept)
> (Intercept) 34459.4
>
> Multivariate Tests: sex
> Df test stat approx F num Df den Df Pr(>F)
> Pillai 1 0.2395956 5.671614 1 18 0.028486
> Wilks 1 0.7604044 5.671614 1 18 0.028486
> Hotelling-Lawley 1 0.3150896 5.671614 1 18 0.028486
> Roy 1 0.3150896 5.671614 1 18 0.028486
>
> ------------------------------------------
>
> Term: A
>
> Response transformation matrix:
> A1
> a1_b1 1
> a1_b2 1
> a1_b3 1
> a2_b1 -1
> a2_b2 -1
> a2_b3 -1
>
> Sum of squares and products for the hypothesis:
> A1
> A1 980
>
> Sum of squares and products for error:
> A1
> A1 10401.8
>
> Multivariate Tests: A
> Df test stat approx F num Df den Df Pr(>F)
> Pillai 1 0.0861024 1.695860 1 18 0.20925
> Wilks 1 0.9138976 1.695860 1 18 0.20925
> Hotelling-Lawley 1 0.0942145 1.695860 1 18 0.20925
> Roy 1 0.0942145 1.695860 1 18 0.20925
>
> ------------------------------------------
>
> Term: sex:A
>
> Response transformation matrix:
> A1
> a1_b1 1
> a1_b2 1
> a1_b3 1
> a2_b1 -1
> a2_b2 -1
> a2_b3 -1
>
> Sum of squares and products for the hypothesis:
> A1
> A1 0.2
>
> Sum of squares and products for error:
> A1
> A1 10401.8
>
> Multivariate Tests: sex:A
> Df test stat approx F num Df den Df Pr(>F)
> Pillai 1 0.0000192 0.0003460939 1 18 0.98536
> Wilks 1 0.9999808 0.0003460939 1 18 0.98536
> Hotelling-Lawley 1 0.0000192 0.0003460939 1 18 0.98536
> Roy 1 0.0000192 0.0003460939 1 18 0.98536
>
> ------------------------------------------
>
> Term: B
>
> Response transformation matrix:
> B1 B2
> a1_b1 1 0
> a1_b2 0 1
> a1_b3 -1 -1
> a2_b1 1 0
> a2_b2 0 1
> a2_b3 -1 -1
>
> Sum of squares and products for the hypothesis:
> B1 B2
> B1 3618.05 3443.2
> B2 3443.20 3276.8
>
> Sum of squares and products for error:
> B1 B2
> B1 2304.5 1396.8
> B2 1396.8 1225.2
>
> Multivariate Tests: B
> Df test stat approx F num Df den Df Pr(>F)
> Pillai 1 0.730544 23.04504 2 17 1.4426e-05
> Wilks 1 0.269456 23.04504 2 17 1.4426e-05
> Hotelling-Lawley 1 2.711181 23.04504 2 17 1.4426e-05
> Roy 1 2.711181 23.04504 2 17 1.4426e-05
>
> ------------------------------------------
>
> Term: sex:B
>
> Response transformation matrix:
> B1 B2
> a1_b1 1 0
> a1_b2 0 1
> a1_b3 -1 -1
> a2_b1 1 0
> a2_b2 0 1
> a2_b3 -1 -1
>
> Sum of squares and products for the hypothesis:
> B1 B2
> B1 26.45 23
> B2 23.00 20
>
> Sum of squares and products for error:
> B1 B2
> B1 2304.5 1396.8
> B2 1396.8 1225.2
>
> Multivariate Tests: sex:B
> Df test stat approx F num Df den Df Pr(>F)
> Pillai 1 0.0160644 0.1387764 2 17 0.8714
> Wilks 1 0.9839356 0.1387764 2 17 0.8714
> Hotelling-Lawley 1 0.0163266 0.1387764 2 17 0.8714
> Roy 1 0.0163266 0.1387764 2 17 0.8714
>
> ------------------------------------------
>
> Term: A:B
>
> Response transformation matrix:
> A1:B1 A1:B2
> a1_b1 1 0
> a1_b2 0 1
> a1_b3 -1 -1
> a2_b1 -1 0
> a2_b2 0 -1
> a2_b3 1 1
>
> Sum of squares and products for the hypothesis:
> A1:B1 A1:B2
> A1:B1 5152.05 738.3
> A1:B2 738.30 105.8
>
> Sum of squares and products for error:
> A1:B1 A1:B2
> A1:B1 3210.5 1334.4
> A1:B2 1334.4 924.0
>
> Multivariate Tests: A:B
> Df test stat approx F num Df den Df Pr(>F)
> Pillai 1 0.7252156 22.43334 2 17 1.7039e-05
> Wilks 1 0.2747844 22.43334 2 17 1.7039e-05
> Hotelling-Lawley 1 2.6392162 22.43334 2 17 1.7039e-05
> Roy 1 2.6392162 22.43334 2 17 1.7039e-05
>
> ------------------------------------------
>
> Term: sex:A:B
>
> Response transformation matrix:
> A1:B1 A1:B2
> a1_b1 1 0
> a1_b2 0 1
> a1_b3 -1 -1
> a2_b1 -1 0
> a2_b2 0 -1
> a2_b3 1 1
>
> Sum of squares and products for the hypothesis:
> A1:B1 A1:B2
> A1:B1 26.45 2.3
> A1:B2 2.30 0.2
>
> Sum of squares and products for error:
> A1:B1 A1:B2
> A1:B1 3210.5 1334.4
> A1:B2 1334.4 924.0
>
> Multivariate Tests: sex:A:B
> Df test stat approx F num Df den Df Pr(>F)
> Pillai 1 0.0157232 0.1357821 2 17 0.87397
> Wilks 1 0.9842768 0.1357821 2 17 0.87397
> Hotelling-Lawley 1 0.0159744 0.1357821 2 17 0.87397
> Roy 1 0.0159744 0.1357821 2 17 0.87397
>
> Univariate Type III Repeated-Measures ANOVA Assuming Sphericity
>
> SS num Df Error SS den Df F Pr(>F)
> (Intercept) 194891 1 5743.2 18 610.8117 2.425e-15
> sex 1810 1 5743.2 18 5.6716 0.02849
> A 163 1 1733.6 18 1.6959 0.20925
> sex:A 0 1 1733.6 18 0.0003 0.98536
> B 1151 2 711.0 36 29.1292 2.990e-08
> sex:B 8 2 711.0 36 0.1979 0.82134
> A:B 1507 2 933.4 36 29.0532 3.078e-08
> sex:A:B 8 2 933.4 36 0.1565 0.85568
>
>
> Mauchly Tests for Sphericity
>
> Test statistic p-value
> B 0.57532 0.0091036
> sex:B 0.57532 0.0091036
> A:B 0.45375 0.0012104
> sex:A:B 0.45375 0.0012104
>
>
> Greenhouse-Geisser and Huynh-Feldt Corrections
> for Departure from Sphericity
>
> GG eps Pr(>F[GG])
> B 0.70191 2.143e-06
> sex:B 0.70191 0.7427
> A:B 0.64672 4.838e-06
> sex:A:B 0.64672 0.7599
>
> HF eps Pr(>F[HF])
> B 0.74332 1.181e-06
> sex:B 0.74332 0.7560
> A:B 0.67565 3.191e-06
> sex:A:B 0.67565 0.7702
> List of 13
> $ SSP :List of 8
> ..$ (Intercept): num [1, 1] 1169345
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr "(Intercept)"
> .. .. ..$ : chr "(Intercept)"
> ..$ sex : num [1, 1] 10858
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr "(Intercept)"
> .. .. ..$ : chr "(Intercept)"
> ..$ A : num [1, 1] 980
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr "A1"
> .. .. ..$ : chr "A1"
> ..$ sex:A : num [1, 1] 0.2
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr "A1"
> .. .. ..$ : chr "A1"
> ..$ B : num [1:2, 1:2] 3618 3443 3443 3277
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:2] "B1" "B2"
> .. .. ..$ : chr [1:2] "B1" "B2"
> ..$ sex:B : num [1:2, 1:2] 26.4 23 23 20
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:2] "B1" "B2"
> .. .. ..$ : chr [1:2] "B1" "B2"
> ..$ A:B : num [1:2, 1:2] 5152 738 738 106
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> ..$ sex:A:B : num [1:2, 1:2] 26.4 2.3 2.3 0.2
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> $ SSPE :List of 8
> ..$ (Intercept): num [1, 1] 34459
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr "(Intercept)"
> .. .. ..$ : chr "(Intercept)"
> ..$ sex : num [1, 1] 34459
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr "(Intercept)"
> .. .. ..$ : chr "(Intercept)"
> ..$ A : num [1, 1] 10402
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr "A1"
> .. .. ..$ : chr "A1"
> ..$ sex:A : num [1, 1] 10402
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr "A1"
> .. .. ..$ : chr "A1"
> ..$ B : num [1:2, 1:2] 2304 1397 1397 1225
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:2] "B1" "B2"
> .. .. ..$ : chr [1:2] "B1" "B2"
> ..$ sex:B : num [1:2, 1:2] 2304 1397 1397 1225
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:2] "B1" "B2"
> .. .. ..$ : chr [1:2] "B1" "B2"
> ..$ A:B : num [1:2, 1:2] 3210 1334 1334 924
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> ..$ sex:A:B : num [1:2, 1:2] 3210 1334 1334 924
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> $ P :List of 8
> ..$ (Intercept): num [1:6, 1] 1 1 1 1 1 1
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2"
"a1_b3" "a2_b1" ...
> .. .. ..$ : chr "(Intercept)"
> ..$ sex : num [1:6, 1] 1 1 1 1 1 1
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2"
"a1_b3" "a2_b1" ...
> .. .. ..$ : chr "(Intercept)"
> ..$ A : num [1:6, 1] 1 1 1 -1 -1 -1
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2"
"a1_b3" "a2_b1" ...
> .. .. ..$ : chr "A1"
> ..$ sex:A : num [1:6, 1] 1 1 1 -1 -1 -1
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2"
"a1_b3" "a2_b1" ...
> .. .. ..$ : chr "A1"
> ..$ B : num [1:6, 1:2] 1 0 -1 1 0 -1 0 1 -1 0 ...
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2"
"a1_b3" "a2_b1" ...
> .. .. ..$ : chr [1:2] "B1" "B2"
> ..$ sex:B : num [1:6, 1:2] 1 0 -1 1 0 -1 0 1 -1 0 ...
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2"
"a1_b3" "a2_b1" ...
> .. .. ..$ : chr [1:2] "B1" "B2"
> ..$ A:B : num [1:6, 1:2] 1 0 -1 -1 0 1 0 1 -1 0 ...
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2"
"a1_b3" "a2_b1" ...
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> ..$ sex:A:B : num [1:6, 1:2] 1 0 -1 -1 0 1 0 1 -1 0 ...
> .. ..- attr(*, "dimnames")=List of 2
> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2"
"a1_b3" "a2_b1" ...
> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
> $ df : Named num [1:8] 1 1 1 1 1 1 1 1
> ..- attr(*, "names")= chr [1:8] "(Intercept)"
"sex" "A" "sex:A" ...
> $ error.df : int 18
> $ terms : chr [1:8] "(Intercept)" "sex"
"A" "sex:A" ...
> $ repeated : logi TRUE
> $ type : chr "III"
> $ test : chr "Wilks"
> $ idata :'data.frame': 6 obs. of 2 variables:
> ..$ A: Factor w/ 2 levels "1","2": 1 1 1 2 2 2
> .. ..- attr(*, "contrasts")= chr "contr.sum"
> ..$ B: Factor w/ 3 levels "1","2","3": 1 2
3 1 2 3
> .. ..- attr(*, "contrasts")= chr "contr.sum"
> $ idesign :Class 'formula' length 2 ~A * B
> .. ..- attr(*, ".Environment")> $ icontrasts: chr [1:2]
"contr.sum" "contr.poly"
> $ imatrix : NULL
> - attr(*, "class")= chr "Anova.mlm"
> > result$`Pr(>F)`
> NULL
> > result[[4]]
> (Intercept) sex A sex:A B sex:B
> 1 1 1 1 1 1
> A:B sex:A:B
> 1 1
> >
>
> On 23/08/2010 22:23, Johan Steen wrote:
>> Thanks for your replies,
>>
>> but unfortunately none of them seem to help.
>> I do get p-values in the output, but can't seem to locate them
anywhere
>> in these objects via the str() function. I also get very different
>> output using str() than you obtained from the lm help page
>>
>> Here's my output:
>>
>> > A <- factor( rep(1:2,each=3) )
>> > B <- factor( rep(1:3,times=2) )
>> > idata <- data.frame(A,B)
>> > idata
>> A B
>> 1 1 1
>> 2 1 2
>> 3 1 3
>> 4 2 1
>> 5 2 2
>> 6 2 3
>> >
>> > fit <- lm( cbind(a1_b1,a1_b2,a1_b3,a2_b1,a2_b2,a2_b3) ~ sex,
>> data=Data.wide)
>> > result <- Anova(fit, type="III",
test="Wilks", idata=idata,
>> idesign=~A*B)
>> > result
>>
>> Type III Repeated Measures MANOVA Tests: Wilks test statistic
>> Df test stat approx F num Df den Df Pr(>F)
>> (Intercept) 1 0.02863 610.81 1 18 2.425e-15
>> sex 1 0.76040 5.67 1 18 0.02849
>> A 1 0.91390 1.70 1 18 0.20925
>> sex:A 1 0.99998 0.00 1 18 0.98536
>> B 1 0.26946 23.05 2 17 1.443e-05
>> sex:B 1 0.98394 0.14 2 17 0.87140
>> A:B 1 0.27478 22.43 2 17 1.704e-05
>> sex:A:B 1 0.98428 0.14 2 17 0.87397
>> > summary(result)
>>
>> Type III Repeated Measures MANOVA Tests:
>>
>> ------------------------------------------
>>
>> Term: (Intercept)
>>
>> Response transformation matrix:
>> (Intercept)
>> a1_b1 1
>> a1_b2 1
>> a1_b3 1
>> a2_b1 1
>> a2_b2 1
>> a2_b3 1
>>
>> Sum of squares and products for the hypothesis:
>> (Intercept)
>> (Intercept) 1169345
>>
>> Sum of squares and products for error:
>> (Intercept)
>> (Intercept) 34459.4
>>
>> Multivariate Tests: (Intercept)
>> Df test stat approx F num Df den Df Pr(>F)
>> Pillai 1 0.97137 610.8117 1 18 2.425e-15
>> Wilks 1 0.02863 610.8117 1 18 2.425e-15
>> Hotelling-Lawley 1 33.93399 610.8117 1 18 2.425e-15
>> Roy 1 33.93399 610.8117 1 18 2.425e-15
>>
>> ------------------------------------------
>>
>> Term: sex
>>
>> Response transformation matrix:
>> (Intercept)
>> a1_b1 1
>> a1_b2 1
>> a1_b3 1
>> a2_b1 1
>> a2_b2 1
>> a2_b3 1
>>
>> Sum of squares and products for the hypothesis:
>> (Intercept)
>> (Intercept) 10857.8
>>
>> Sum of squares and products for error:
>> (Intercept)
>> (Intercept) 34459.4
>>
>> Multivariate Tests: sex
>> Df test stat approx F num Df den Df Pr(>F)
>> Pillai 1 0.2395956 5.671614 1 18 0.028486
>> Wilks 1 0.7604044 5.671614 1 18 0.028486
>> Hotelling-Lawley 1 0.3150896 5.671614 1 18 0.028486
>> Roy 1 0.3150896 5.671614 1 18 0.028486
>>
>> ------------------------------------------
>>
>> Term: A
>>
>> Response transformation matrix:
>> A1
>> a1_b1 1
>> a1_b2 1
>> a1_b3 1
>> a2_b1 -1
>> a2_b2 -1
>> a2_b3 -1
>>
>> Sum of squares and products for the hypothesis:
>> A1
>> A1 980
>>
>> Sum of squares and products for error:
>> A1
>> A1 10401.8
>>
>> Multivariate Tests: A
>> Df test stat approx F num Df den Df Pr(>F)
>> Pillai 1 0.0861024 1.695860 1 18 0.20925
>> Wilks 1 0.9138976 1.695860 1 18 0.20925
>> Hotelling-Lawley 1 0.0942145 1.695860 1 18 0.20925
>> Roy 1 0.0942145 1.695860 1 18 0.20925
>>
>> ------------------------------------------
>>
>> Term: sex:A
>>
>> Response transformation matrix:
>> A1
>> a1_b1 1
>> a1_b2 1
>> a1_b3 1
>> a2_b1 -1
>> a2_b2 -1
>> a2_b3 -1
>>
>> Sum of squares and products for the hypothesis:
>> A1
>> A1 0.2
>>
>> Sum of squares and products for error:
>> A1
>> A1 10401.8
>>
>> Multivariate Tests: sex:A
>> Df test stat approx F num Df den Df Pr(>F)
>> Pillai 1 0.0000192 0.0003460939 1 18 0.98536
>> Wilks 1 0.9999808 0.0003460939 1 18 0.98536
>> Hotelling-Lawley 1 0.0000192 0.0003460939 1 18 0.98536
>> Roy 1 0.0000192 0.0003460939 1 18 0.98536
>>
>> ------------------------------------------
>>
>> Term: B
>>
>> Response transformation matrix:
>> B1 B2
>> a1_b1 1 0
>> a1_b2 0 1
>> a1_b3 -1 -1
>> a2_b1 1 0
>> a2_b2 0 1
>> a2_b3 -1 -1
>>
>> Sum of squares and products for the hypothesis:
>> B1 B2
>> B1 3618.05 3443.2
>> B2 3443.20 3276.8
>>
>> Sum of squares and products for error:
>> B1 B2
>> B1 2304.5 1396.8
>> B2 1396.8 1225.2
>>
>> Multivariate Tests: B
>> Df test stat approx F num Df den Df Pr(>F)
>> Pillai 1 0.730544 23.04504 2 17 1.4426e-05
>> Wilks 1 0.269456 23.04504 2 17 1.4426e-05
>> Hotelling-Lawley 1 2.711181 23.04504 2 17 1.4426e-05
>> Roy 1 2.711181 23.04504 2 17 1.4426e-05
>>
>> ------------------------------------------
>>
>> Term: sex:B
>>
>> Response transformation matrix:
>> B1 B2
>> a1_b1 1 0
>> a1_b2 0 1
>> a1_b3 -1 -1
>> a2_b1 1 0
>> a2_b2 0 1
>> a2_b3 -1 -1
>>
>> Sum of squares and products for the hypothesis:
>> B1 B2
>> B1 26.45 23
>> B2 23.00 20
>>
>> Sum of squares and products for error:
>> B1 B2
>> B1 2304.5 1396.8
>> B2 1396.8 1225.2
>>
>> Multivariate Tests: sex:B
>> Df test stat approx F num Df den Df Pr(>F)
>> Pillai 1 0.0160644 0.1387764 2 17 0.8714
>> Wilks 1 0.9839356 0.1387764 2 17 0.8714
>> Hotelling-Lawley 1 0.0163266 0.1387764 2 17 0.8714
>> Roy 1 0.0163266 0.1387764 2 17 0.8714
>>
>> ------------------------------------------
>>
>> Term: A:B
>>
>> Response transformation matrix:
>> A1:B1 A1:B2
>> a1_b1 1 0
>> a1_b2 0 1
>> a1_b3 -1 -1
>> a2_b1 -1 0
>> a2_b2 0 -1
>> a2_b3 1 1
>>
>> Sum of squares and products for the hypothesis:
>> A1:B1 A1:B2
>> A1:B1 5152.05 738.3
>> A1:B2 738.30 105.8
>>
>> Sum of squares and products for error:
>> A1:B1 A1:B2
>> A1:B1 3210.5 1334.4
>> A1:B2 1334.4 924.0
>>
>> Multivariate Tests: A:B
>> Df test stat approx F num Df den Df Pr(>F)
>> Pillai 1 0.7252156 22.43334 2 17 1.7039e-05
>> Wilks 1 0.2747844 22.43334 2 17 1.7039e-05
>> Hotelling-Lawley 1 2.6392162 22.43334 2 17 1.7039e-05
>> Roy 1 2.6392162 22.43334 2 17 1.7039e-05
>>
>> ------------------------------------------
>>
>> Term: sex:A:B
>>
>> Response transformation matrix:
>> A1:B1 A1:B2
>> a1_b1 1 0
>> a1_b2 0 1
>> a1_b3 -1 -1
>> a2_b1 -1 0
>> a2_b2 0 -1
>> a2_b3 1 1
>>
>> Sum of squares and products for the hypothesis:
>> A1:B1 A1:B2
>> A1:B1 26.45 2.3
>> A1:B2 2.30 0.2
>>
>> Sum of squares and products for error:
>> A1:B1 A1:B2
>> A1:B1 3210.5 1334.4
>> A1:B2 1334.4 924.0
>>
>> Multivariate Tests: sex:A:B
>> Df test stat approx F num Df den Df Pr(>F)
>> Pillai 1 0.0157232 0.1357821 2 17 0.87397
>> Wilks 1 0.9842768 0.1357821 2 17 0.87397
>> Hotelling-Lawley 1 0.0159744 0.1357821 2 17 0.87397
>> Roy 1 0.0159744 0.1357821 2 17 0.87397
>>
>> Univariate Type III Repeated-Measures ANOVA Assuming Sphericity
>>
>> SS num Df Error SS den Df F Pr(>F)
>> (Intercept) 194891 1 5743.2 18 610.8117 2.425e-15
>> sex 1810 1 5743.2 18 5.6716 0.02849
>> A 163 1 1733.6 18 1.6959 0.20925
>> sex:A 0 1 1733.6 18 0.0003 0.98536
>> B 1151 2 711.0 36 29.1292 2.990e-08
>> sex:B 8 2 711.0 36 0.1979 0.82134
>> A:B 1507 2 933.4 36 29.0532 3.078e-08
>> sex:A:B 8 2 933.4 36 0.1565 0.85568
>>
>>
>> Mauchly Tests for Sphericity
>>
>> Test statistic p-value
>> B 0.57532 0.0091036
>> sex:B 0.57532 0.0091036
>> A:B 0.45375 0.0012104
>> sex:A:B 0.45375 0.0012104
>>
>>
>> Greenhouse-Geisser and Huynh-Feldt Corrections
>> for Departure from Sphericity
>>
>> GG eps Pr(>F[GG])
>> B 0.70191 2.143e-06
>> sex:B 0.70191 0.7427
>> A:B 0.64672 4.838e-06
>> sex:A:B 0.64672 0.7599
>>
>> HF eps Pr(>F[HF])
>> B 0.74332 1.181e-06
>> sex:B 0.74332 0.7560
>> A:B 0.67565 3.191e-06
>> sex:A:B 0.67565 0.7702
>> > str(result)
>> List of 13
>> $ SSP :List of 8
>> ..$ (Intercept): num [1, 1] 1169345
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr "(Intercept)"
>> .. .. ..$ : chr "(Intercept)"
>> ..$ sex : num [1, 1] 10858
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr "(Intercept)"
>> .. .. ..$ : chr "(Intercept)"
>> ..$ A : num [1, 1] 980
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr "A1"
>> .. .. ..$ : chr "A1"
>> ..$ sex:A : num [1, 1] 0.2
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr "A1"
>> .. .. ..$ : chr "A1"
>> ..$ B : num [1:2, 1:2] 3618 3443 3443 3277
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr [1:2] "B1" "B2"
>> .. .. ..$ : chr [1:2] "B1" "B2"
>> ..$ sex:B : num [1:2, 1:2] 26.4 23 23 20
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr [1:2] "B1" "B2"
>> .. .. ..$ : chr [1:2] "B1" "B2"
>> ..$ A:B : num [1:2, 1:2] 5152 738 738 106
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
>> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
>> ..$ sex:A:B : num [1:2, 1:2] 26.4 2.3 2.3 0.2
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
>> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
>> $ SSPE :List of 8
>> ..$ (Intercept): num [1, 1] 34459
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr "(Intercept)"
>> .. .. ..$ : chr "(Intercept)"
>> ..$ sex : num [1, 1] 34459
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr "(Intercept)"
>> .. .. ..$ : chr "(Intercept)"
>> ..$ A : num [1, 1] 10402
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr "A1"
>> .. .. ..$ : chr "A1"
>> ..$ sex:A : num [1, 1] 10402
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr "A1"
>> .. .. ..$ : chr "A1"
>> ..$ B : num [1:2, 1:2] 2304 1397 1397 1225
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr [1:2] "B1" "B2"
>> .. .. ..$ : chr [1:2] "B1" "B2"
>> ..$ sex:B : num [1:2, 1:2] 2304 1397 1397 1225
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr [1:2] "B1" "B2"
>> .. .. ..$ : chr [1:2] "B1" "B2"
>> ..$ A:B : num [1:2, 1:2] 3210 1334 1334 924
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
>> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
>> ..$ sex:A:B : num [1:2, 1:2] 3210 1334 1334 924
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
>> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
>> $ P :List of 8
>> ..$ (Intercept): num [1:6, 1] 1 1 1 1 1 1
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2"
"a1_b3" "a2_b1" ...
>> .. .. ..$ : chr "(Intercept)"
>> ..$ sex : num [1:6, 1] 1 1 1 1 1 1
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2"
"a1_b3" "a2_b1" ...
>> .. .. ..$ : chr "(Intercept)"
>> ..$ A : num [1:6, 1] 1 1 1 -1 -1 -1
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2"
"a1_b3" "a2_b1" ...
>> .. .. ..$ : chr "A1"
>> ..$ sex:A : num [1:6, 1] 1 1 1 -1 -1 -1
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2"
"a1_b3" "a2_b1" ...
>> .. .. ..$ : chr "A1"
>> ..$ B : num [1:6, 1:2] 1 0 -1 1 0 -1 0 1 -1 0 ...
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2"
"a1_b3" "a2_b1" ...
>> .. .. ..$ : chr [1:2] "B1" "B2"
>> ..$ sex:B : num [1:6, 1:2] 1 0 -1 1 0 -1 0 1 -1 0 ...
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2"
"a1_b3" "a2_b1" ...
>> .. .. ..$ : chr [1:2] "B1" "B2"
>> ..$ A:B : num [1:6, 1:2] 1 0 -1 -1 0 1 0 1 -1 0 ...
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2"
"a1_b3" "a2_b1" ...
>> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
>> ..$ sex:A:B : num [1:6, 1:2] 1 0 -1 -1 0 1 0 1 -1 0 ...
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2"
"a1_b3" "a2_b1" ...
>> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
>> $ df : Named num [1:8] 1 1 1 1 1 1 1 1
>> ..- attr(*, "names")= chr [1:8] "(Intercept)"
"sex" "A" "sex:A" ...
>> $ error.df : int 18
>> $ terms : chr [1:8] "(Intercept)" "sex"
"A" "sex:A" ...
>> $ repeated : logi TRUE
>> $ type : chr "III"
>> $ test : chr "Wilks"
>> $ idata :'data.frame': 6 obs. of 2 variables:
>> ..$ A: Factor w/ 2 levels "1","2": 1 1 1 2 2 2
>> .. ..- attr(*, "contrasts")= chr "contr.sum"
>> ..$ B: Factor w/ 3 levels "1","2","3": 1
2 3 1 2 3
>> .. ..- attr(*, "contrasts")= chr "contr.sum"
>> $ idesign :Class 'formula' length 2 ~A * B
>> .. ..- attr(*, ".Environment")>> $ icontrasts: chr
[1:2] "contr.sum" "contr.poly"
>> $ imatrix : NULL
>> - attr(*, "class")= chr "Anova.mlm"
>> > str(summary(result))
>>
>> Type III Repeated Measures MANOVA Tests:
>>
>> ------------------------------------------
>>
>> Term: (Intercept)
>>
>> Response transformation matrix:
>> (Intercept)
>> a1_b1 1
>> a1_b2 1
>> a1_b3 1
>> a2_b1 1
>> a2_b2 1
>> a2_b3 1
>>
>> Sum of squares and products for the hypothesis:
>> (Intercept)
>> (Intercept) 1169345
>>
>> Sum of squares and products for error:
>> (Intercept)
>> (Intercept) 34459.4
>>
>> Multivariate Tests: (Intercept)
>> Df test stat approx F num Df den Df Pr(>F)
>> Pillai 1 0.97137 610.8117 1 18 2.425e-15
>> Wilks 1 0.02863 610.8117 1 18 2.425e-15
>> Hotelling-Lawley 1 33.93399 610.8117 1 18 2.425e-15
>> Roy 1 33.93399 610.8117 1 18 2.425e-15
>>
>> ------------------------------------------
>>
>> Term: sex
>>
>> Response transformation matrix:
>> (Intercept)
>> a1_b1 1
>> a1_b2 1
>> a1_b3 1
>> a2_b1 1
>> a2_b2 1
>> a2_b3 1
>>
>> Sum of squares and products for the hypothesis:
>> (Intercept)
>> (Intercept) 10857.8
>>
>> Sum of squares and products for error:
>> (Intercept)
>> (Intercept) 34459.4
>>
>> Multivariate Tests: sex
>> Df test stat approx F num Df den Df Pr(>F)
>> Pillai 1 0.2395956 5.671614 1 18 0.028486
>> Wilks 1 0.7604044 5.671614 1 18 0.028486
>> Hotelling-Lawley 1 0.3150896 5.671614 1 18 0.028486
>> Roy 1 0.3150896 5.671614 1 18 0.028486
>>
>> ------------------------------------------
>>
>> Term: A
>>
>> Response transformation matrix:
>> A1
>> a1_b1 1
>> a1_b2 1
>> a1_b3 1
>> a2_b1 -1
>> a2_b2 -1
>> a2_b3 -1
>>
>> Sum of squares and products for the hypothesis:
>> A1
>> A1 980
>>
>> Sum of squares and products for error:
>> A1
>> A1 10401.8
>>
>> Multivariate Tests: A
>> Df test stat approx F num Df den Df Pr(>F)
>> Pillai 1 0.0861024 1.695860 1 18 0.20925
>> Wilks 1 0.9138976 1.695860 1 18 0.20925
>> Hotelling-Lawley 1 0.0942145 1.695860 1 18 0.20925
>> Roy 1 0.0942145 1.695860 1 18 0.20925
>>
>> ------------------------------------------
>>
>> Term: sex:A
>>
>> Response transformation matrix:
>> A1
>> a1_b1 1
>> a1_b2 1
>> a1_b3 1
>> a2_b1 -1
>> a2_b2 -1
>> a2_b3 -1
>>
>> Sum of squares and products for the hypothesis:
>> A1
>> A1 0.2
>>
>> Sum of squares and products for error:
>> A1
>> A1 10401.8
>>
>> Multivariate Tests: sex:A
>> Df test stat approx F num Df den Df Pr(>F)
>> Pillai 1 0.0000192 0.0003460939 1 18 0.98536
>> Wilks 1 0.9999808 0.0003460939 1 18 0.98536
>> Hotelling-Lawley 1 0.0000192 0.0003460939 1 18 0.98536
>> Roy 1 0.0000192 0.0003460939 1 18 0.98536
>>
>> ------------------------------------------
>>
>> Term: B
>>
>> Response transformation matrix:
>> B1 B2
>> a1_b1 1 0
>> a1_b2 0 1
>> a1_b3 -1 -1
>> a2_b1 1 0
>> a2_b2 0 1
>> a2_b3 -1 -1
>>
>> Sum of squares and products for the hypothesis:
>> B1 B2
>> B1 3618.05 3443.2
>> B2 3443.20 3276.8
>>
>> Sum of squares and products for error:
>> B1 B2
>> B1 2304.5 1396.8
>> B2 1396.8 1225.2
>>
>> Multivariate Tests: B
>> Df test stat approx F num Df den Df Pr(>F)
>> Pillai 1 0.730544 23.04504 2 17 1.4426e-05
>> Wilks 1 0.269456 23.04504 2 17 1.4426e-05
>> Hotelling-Lawley 1 2.711181 23.04504 2 17 1.4426e-05
>> Roy 1 2.711181 23.04504 2 17 1.4426e-05
>>
>> ------------------------------------------
>>
>> Term: sex:B
>>
>> Response transformation matrix:
>> B1 B2
>> a1_b1 1 0
>> a1_b2 0 1
>> a1_b3 -1 -1
>> a2_b1 1 0
>> a2_b2 0 1
>> a2_b3 -1 -1
>>
>> Sum of squares and products for the hypothesis:
>> B1 B2
>> B1 26.45 23
>> B2 23.00 20
>>
>> Sum of squares and products for error:
>> B1 B2
>> B1 2304.5 1396.8
>> B2 1396.8 1225.2
>>
>> Multivariate Tests: sex:B
>> Df test stat approx F num Df den Df Pr(>F)
>> Pillai 1 0.0160644 0.1387764 2 17 0.8714
>> Wilks 1 0.9839356 0.1387764 2 17 0.8714
>> Hotelling-Lawley 1 0.0163266 0.1387764 2 17 0.8714
>> Roy 1 0.0163266 0.1387764 2 17 0.8714
>>
>> ------------------------------------------
>>
>> Term: A:B
>>
>> Response transformation matrix:
>> A1:B1 A1:B2
>> a1_b1 1 0
>> a1_b2 0 1
>> a1_b3 -1 -1
>> a2_b1 -1 0
>> a2_b2 0 -1
>> a2_b3 1 1
>>
>> Sum of squares and products for the hypothesis:
>> A1:B1 A1:B2
>> A1:B1 5152.05 738.3
>> A1:B2 738.30 105.8
>>
>> Sum of squares and products for error:
>> A1:B1 A1:B2
>> A1:B1 3210.5 1334.4
>> A1:B2 1334.4 924.0
>>
>> Multivariate Tests: A:B
>> Df test stat approx F num Df den Df Pr(>F)
>> Pillai 1 0.7252156 22.43334 2 17 1.7039e-05
>> Wilks 1 0.2747844 22.43334 2 17 1.7039e-05
>> Hotelling-Lawley 1 2.6392162 22.43334 2 17 1.7039e-05
>> Roy 1 2.6392162 22.43334 2 17 1.7039e-05
>>
>> ------------------------------------------
>>
>> Term: sex:A:B
>>
>> Response transformation matrix:
>> A1:B1 A1:B2
>> a1_b1 1 0
>> a1_b2 0 1
>> a1_b3 -1 -1
>> a2_b1 -1 0
>> a2_b2 0 -1
>> a2_b3 1 1
>>
>> Sum of squares and products for the hypothesis:
>> A1:B1 A1:B2
>> A1:B1 26.45 2.3
>> A1:B2 2.30 0.2
>>
>> Sum of squares and products for error:
>> A1:B1 A1:B2
>> A1:B1 3210.5 1334.4
>> A1:B2 1334.4 924.0
>>
>> Multivariate Tests: sex:A:B
>> Df test stat approx F num Df den Df Pr(>F)
>> Pillai 1 0.0157232 0.1357821 2 17 0.87397
>> Wilks 1 0.9842768 0.1357821 2 17 0.87397
>> Hotelling-Lawley 1 0.0159744 0.1357821 2 17 0.87397
>> Roy 1 0.0159744 0.1357821 2 17 0.87397
>>
>> Univariate Type III Repeated-Measures ANOVA Assuming Sphericity
>>
>> SS num Df Error SS den Df F Pr(>F)
>> (Intercept) 194891 1 5743.2 18 610.8117 2.425e-15
>> sex 1810 1 5743.2 18 5.6716 0.02849
>> A 163 1 1733.6 18 1.6959 0.20925
>> sex:A 0 1 1733.6 18 0.0003 0.98536
>> B 1151 2 711.0 36 29.1292 2.990e-08
>> sex:B 8 2 711.0 36 0.1979 0.82134
>> A:B 1507 2 933.4 36 29.0532 3.078e-08
>> sex:A:B 8 2 933.4 36 0.1565 0.85568
>>
>>
>> Mauchly Tests for Sphericity
>>
>> Test statistic p-value
>> B 0.57532 0.0091036
>> sex:B 0.57532 0.0091036
>> A:B 0.45375 0.0012104
>> sex:A:B 0.45375 0.0012104
>>
>>
>> Greenhouse-Geisser and Huynh-Feldt Corrections
>> for Departure from Sphericity
>>
>> GG eps Pr(>F[GG])
>> B 0.70191 2.143e-06
>> sex:B 0.70191 0.7427
>> A:B 0.64672 4.838e-06
>> sex:A:B 0.64672 0.7599
>>
>> HF eps Pr(>F[HF])
>> B 0.74332 1.181e-06
>> sex:B 0.74332 0.7560
>> A:B 0.67565 3.191e-06
>> sex:A:B 0.67565 0.7702
>> List of 13
>> $ SSP :List of 8
>> ..$ (Intercept): num [1, 1] 1169345
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr "(Intercept)"
>> .. .. ..$ : chr "(Intercept)"
>> ..$ sex : num [1, 1] 10858
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr "(Intercept)"
>> .. .. ..$ : chr "(Intercept)"
>> ..$ A : num [1, 1] 980
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr "A1"
>> .. .. ..$ : chr "A1"
>> ..$ sex:A : num [1, 1] 0.2
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr "A1"
>> .. .. ..$ : chr "A1"
>> ..$ B : num [1:2, 1:2] 3618 3443 3443 3277
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr [1:2] "B1" "B2"
>> .. .. ..$ : chr [1:2] "B1" "B2"
>> ..$ sex:B : num [1:2, 1:2] 26.4 23 23 20
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr [1:2] "B1" "B2"
>> .. .. ..$ : chr [1:2] "B1" "B2"
>> ..$ A:B : num [1:2, 1:2] 5152 738 738 106
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
>> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
>> ..$ sex:A:B : num [1:2, 1:2] 26.4 2.3 2.3 0.2
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
>> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
>> $ SSPE :List of 8
>> ..$ (Intercept): num [1, 1] 34459
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr "(Intercept)"
>> .. .. ..$ : chr "(Intercept)"
>> ..$ sex : num [1, 1] 34459
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr "(Intercept)"
>> .. .. ..$ : chr "(Intercept)"
>> ..$ A : num [1, 1] 10402
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr "A1"
>> .. .. ..$ : chr "A1"
>> ..$ sex:A : num [1, 1] 10402
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr "A1"
>> .. .. ..$ : chr "A1"
>> ..$ B : num [1:2, 1:2] 2304 1397 1397 1225
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr [1:2] "B1" "B2"
>> .. .. ..$ : chr [1:2] "B1" "B2"
>> ..$ sex:B : num [1:2, 1:2] 2304 1397 1397 1225
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr [1:2] "B1" "B2"
>> .. .. ..$ : chr [1:2] "B1" "B2"
>> ..$ A:B : num [1:2, 1:2] 3210 1334 1334 924
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
>> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
>> ..$ sex:A:B : num [1:2, 1:2] 3210 1334 1334 924
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
>> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
>> $ P :List of 8
>> ..$ (Intercept): num [1:6, 1] 1 1 1 1 1 1
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2"
"a1_b3" "a2_b1" ...
>> .. .. ..$ : chr "(Intercept)"
>> ..$ sex : num [1:6, 1] 1 1 1 1 1 1
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2"
"a1_b3" "a2_b1" ...
>> .. .. ..$ : chr "(Intercept)"
>> ..$ A : num [1:6, 1] 1 1 1 -1 -1 -1
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2"
"a1_b3" "a2_b1" ...
>> .. .. ..$ : chr "A1"
>> ..$ sex:A : num [1:6, 1] 1 1 1 -1 -1 -1
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2"
"a1_b3" "a2_b1" ...
>> .. .. ..$ : chr "A1"
>> ..$ B : num [1:6, 1:2] 1 0 -1 1 0 -1 0 1 -1 0 ...
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2"
"a1_b3" "a2_b1" ...
>> .. .. ..$ : chr [1:2] "B1" "B2"
>> ..$ sex:B : num [1:6, 1:2] 1 0 -1 1 0 -1 0 1 -1 0 ...
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2"
"a1_b3" "a2_b1" ...
>> .. .. ..$ : chr [1:2] "B1" "B2"
>> ..$ A:B : num [1:6, 1:2] 1 0 -1 -1 0 1 0 1 -1 0 ...
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2"
"a1_b3" "a2_b1" ...
>> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
>> ..$ sex:A:B : num [1:6, 1:2] 1 0 -1 -1 0 1 0 1 -1 0 ...
>> .. ..- attr(*, "dimnames")=List of 2
>> .. .. ..$ : chr [1:6] "a1_b1" "a1_b2"
"a1_b3" "a2_b1" ...
>> .. .. ..$ : chr [1:2] "A1:B1" "A1:B2"
>> $ df : Named num [1:8] 1 1 1 1 1 1 1 1
>> ..- attr(*, "names")= chr [1:8] "(Intercept)"
"sex" "A" "sex:A" ...
>> $ error.df : int 18
>> $ terms : chr [1:8] "(Intercept)" "sex"
"A" "sex:A" ...
>> $ repeated : logi TRUE
>> $ type : chr "III"
>> $ test : chr "Wilks"
>> $ idata :'data.frame': 6 obs. of 2 variables:
>> ..$ A: Factor w/ 2 levels "1","2": 1 1 1 2 2 2
>> .. ..- attr(*, "contrasts")= chr "contr.sum"
>> ..$ B: Factor w/ 3 levels "1","2","3": 1
2 3 1 2 3
>> .. ..- attr(*, "contrasts")= chr "contr.sum"
>> $ idesign :Class 'formula' length 2 ~A * B
>> .. ..- attr(*, ".Environment")>> $ icontrasts: chr
[1:2] "contr.sum" "contr.poly"
>> $ imatrix : NULL
>> - attr(*, "class")= chr "Anova.mlm"
>> > result$`Pr(>F)`
>> NULL
>> > result[[4]]
>> (Intercept) sex A sex:A B sex:B
>> 1 1 1 1 1 1
>> A:B sex:A:B
>> 1 1
>> >
>>
>>
>>
>>
>>
>>
>>
>> On 23/08/2010 21:56, Dennis Murphy wrote:
>>> Hi:
>>>
>>> Look at
>>> result$`Pr(>F)`
>>>
>>> (with backticks around Pr(>F) ), or more succinctly,
result[[4]].
>>>
>>> HTH,
>>> Dennis
>>>
>>> On Mon, Aug 23, 2010 at 12:01 PM, Johan Steen
>>>> wrote:
>>>
>>> Dear all,
>>>
>>> is there anyone who can help me extracting p-values from an Anova
>>> object from the car library? I can't seem to locate the
p-values
>>> using str(result) or str(summary(result)) in the example below
>>>
>>>> A <- factor( rep(1:2,each=3) )
>>>> B <- factor( rep(1:3,times=2) )
>>>> idata <- data.frame(A,B)
>>>> fit <- lm( cbind(a1_b1,a1_b2,a1_b3,a2_b1,a2_b2,a2_b3) ~ sex,
>>> data=Data.wide)
>>>> result <- Anova(fit, type="III",
test="Wilks", idata=idata,
>>> idesign=~A*B)
>>>
>>>
>>> Any help would be much appreciated!
>>>
>>>
>>> Many thanks,
>>>
>>> Johan
>>>
>>> ______________________________________________
>>> R-help at r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>> PLEASE do read the posting guide
>>> http://www.R-project.org/posting-guide.html
>>> and provide commented, minimal, self-contained, reproducible code.
>>>
>>>
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
--Forwarded Message Attachment--
From: slu at ccsr.uchicago.edu
To: r-help at r-project.org
Date: Mon, 23 Aug 2010 17:01:49 -0500
Subject: Re: [R] trajectory plot (growth curve)
On Mon, 2010-08-23 at 15:58 -0400, Lei Liu wrote:
> That is, I will plot a growth curve for each subject ID, with y on
> the y axis and time on the x axis. I would like to have all growth
> curves in the same plot. Is there a simple way in R to do it? Thanks a
> lot!
This article, "Fitting Value-added Models in R" by Harold
Doran, is relevant, useful, and interesting:
www-stat.stanford.edu/~rag/ed351longit/doran.pdf
--
Stuart Luppescu -=- slu .at. ccsr.uchicago.edu
University of Chicago -=- CCSR
Kernel 2.6.33-gentoo-r2
I have mentioned
several times on this list that I'm in the process
of developing a new and wonderful implementation
of lme and I would prefer to continue working on
that rather than modifying old-style code. --
Douglas Bates, R-help (March 2004)
--Forwarded Message Attachment--
From: eriki at ccbr.umn.edu
CC: r-help at stat.math.ethz.ch
To: ivo.welch at gmail.com
Date: Mon, 23 Aug 2010 17:04:25 -0500
Subject: Re: [R] unexpected subset select results?
ivo welch wrote:
> quizz---what does this produce?
>
> d=data.frame( a=1:1000, b=2001:3000, z= 5001:6000 )
> attach(d); c <- (a+b)>25; detach(d)
> d= subset(d, TRUE, select=c( a, b, c ))
>
> yes, I know I have made a mistake, in that the code does not do what I
> presumably would have wanted.
What exactly did you want?
--Forwarded Message Attachment--
From: ivo.welch at gmail.com
CC: r-help at stat.math.ethz.ch
To: eriki at ccbr.umn.edu
Date: Mon, 23 Aug 2010 18:15:43 -0400
Subject: Re: [R] unexpected subset select results?
I would not have wanted a data set with 1000 variables, but an error
message. the intent was
d=data.frame( a=1:1000, b=2001:3000, z= 5001:6000 )
attach(d); d$c <- (a+b)>25; detach(d)
d= subset(d, TRUE, select=c( a, b, c ))
-iaw
On Mon, Aug 23, 2010 at 6:04 PM, Erik Iverson wrote:
>
> ivo welch wrote:
>>
>> quizz---what does this produce?
>>
>> d=data.frame( a=1:1000, b=2001:3000, z= 5001:6000 )
>> attach(d); c <- (a+b)>25; detach(d)
>> d= subset(d, TRUE, select=c( a, b, c ))
>>
>> yes, I know I have made a mistake, in that the code does not do what I
>> presumably would have wanted.
>
> What exactly did you want?
>
--Forwarded Message Attachment--
From: dwinsemius at comcast.net
CC: r-help at r-project.org
To: johan.steen at gmail.com
Date: Mon, 23 Aug 2010 18:19:42 -0400
Subject: Re: [R] extracting p-values from Anova objects (from the car library)
On Aug 23, 2010, at 5:35 PM, Johan Steen wrote:
> Thanks for your replies,
>
> but unfortunately none of them seem to help.
> I do get p-values in the output, but can't seem to locate them
> anywhere in these objects via the str() function.
That is because in the case of Anova.mlm objects ... they aren't there.
> I also get very different output using str() than you obtained from
> the lm help page
>
> Here's my output:
>
>> A <- factor( rep(1:2,each=3) )
>> B <- factor( rep(1:3,times=2) )
>> idata <- data.frame(A,B)
>> idata
> A B
> 1 1 1
> 2 1 2
> 3 1 3
> 4 2 1
> 5 2 2
> 6 2 3
>>
>> fit <- lm( cbind(a1_b1,a1_b2,a1_b3,a2_b1,a2_b2,a2_b3) ~ sex,
> data=Data.wide)
>> result <- Anova(fit, type="III", test="Wilks",
idata=idata,
> idesign=~A*B)
>> result
>
> Type III Repeated Measures MANOVA Tests: Wilks test statistic
> Df test stat approx F num Df den Df Pr(>F)
> (Intercept) 1 0.02863 610.81 1 18 2.425e-15
> sex 1 0.76040 5.67 1 18 0.02849
> A 1 0.91390 1.70 1 18 0.20925
> sex:A 1 0.99998 0.00 1 18 0.98536
> B 1 0.26946 23.05 2 17 1.443e-05
> sex:B 1 0.98394 0.14 2 17 0.87140
> A:B 1 0.27478 22.43 2 17 1.704e-05
> sex:A:B 1 0.98428 0.14 2 17 0.87397
Remember I suggested looking for print methods? methods(print)
produces a print.Anova.mlm citation. The above output is produced by
that function which is not visible unless you do:
> getAnywhere(print.Anova.mlm)
A single object matching 'print.Anova.mlm' was found
It was found in the following places
registered S3 method for print from namespace car
namespace:car
with value
function (x, ...)
{
    test <- x$test
    repeated <- x$repeated
    ntests <- length(x$terms)
    tests <- matrix(NA, ntests, 4)
    if (!repeated)
        SSPE.qr <- qr(x$SSPE)
    for (term in 1:ntests) {
        eigs <- Re(eigen(qr.coef(if (repeated) qr(x$SSPE[[term]])
            else SSPE.qr, x$SSP[[term]]), symmetric = FALSE)$values)
        tests[term, 1:4] <- switch(test,
            Pillai = stats:::Pillai(eigs, x$df[term], x$error.df),
            Wilks = stats:::Wilks(eigs, x$df[term], x$error.df),
            `Hotelling-Lawley` = stats:::HL(eigs, x$df[term], x$error.df),
            Roy = stats:::Roy(eigs, x$df[term], x$error.df))
    }
    ok <- tests[, 2] >= 0 & tests[, 3] > 0 & tests[, 4] > 0
    ok <- !is.na(ok) & ok
    tests <- cbind(x$df, tests, pf(tests[ok, 2], tests[ok, 3],
        tests[ok, 4], lower.tail = FALSE))
    rownames(tests) <- x$terms
    colnames(tests) <- c("Df", "test stat", "approx F", "num Df",
        "den Df", "Pr(>F)")
    tests <- structure(as.data.frame(tests), heading = paste("\nType ",
        x$type, if (repeated) " Repeated Measures", " MANOVA Tests: ",
        test, " test statistic", sep = ""), class = c("anova", "data.frame"))
    print(tests)
    invisible(x)
}
Notice that after printing "tests", those results are basically
thrown away, and the function invisibly returns its argument. Hacking
that function would be a simple matter if you wanted to build a list of
results.
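For instance, a rough sketch along those lines (not part of car; the name
pvalues.Anova.mlm is made up here, and it simply mirrors the computation in
the print method above, including its use of the internal
stats:::Pillai/Wilks/HL/Roy helpers) might look like:
pvalues.Anova.mlm <- function(x) {
    ntests <- length(x$terms)
    pvals <- numeric(ntests)
    if (!x$repeated) SSPE.qr <- qr(x$SSPE)
    for (term in seq_len(ntests)) {
        eigs <- Re(eigen(qr.coef(if (x$repeated) qr(x$SSPE[[term]])
            else SSPE.qr, x$SSP[[term]]), symmetric = FALSE)$values)
        stat <- switch(x$test,
            Pillai = stats:::Pillai(eigs, x$df[term], x$error.df),
            Wilks = stats:::Wilks(eigs, x$df[term], x$error.df),
            `Hotelling-Lawley` = stats:::HL(eigs, x$df[term], x$error.df),
            Roy = stats:::Roy(eigs, x$df[term], x$error.df))
        # stat is c(test statistic, approx F, num Df, den Df)
        pvals[term] <- pf(stat[2], stat[3], stat[4], lower.tail = FALSE)
    }
    names(pvals) <- x$terms
    pvals
}
# hypothetical usage, with 'result' as the Anova.mlm object from above:
# pvalues.Anova.mlm(result)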
>> summary(result)
>
> snipped long and uninformative output.
David Winsemius, MD
West Hartford, CT
--Forwarded Message Attachment--
From: dwinsemius at comcast.net
CC: r-help at stat.math.ethz.ch
To: ivo.welch at gmail.com
Date: Mon, 23 Aug 2010 18:28:10 -0400
Subject: Re: [R] unexpected subset select results?
On Aug 23, 2010, at 5:51 PM, ivo welch wrote:
> quizz---what does this produce?
>
> d=data.frame( a=1:1000, b=2001:3000, z= 5001:6000 )
> attach(d); c <- (a+b)>25; detach(d)
> d= subset(d, TRUE, select=c( a, b, c ))
>
> yes, I know I have made a mistake, in that the code does not do what I
> presumably would have wanted. it does seem like unexpected behavior,
> though, without an error. there probably is some reason why this does
> not ring an alarm bell...
You have created a perfect example for why it is a bad idea to attach
data.frames.
?attach # yes, I am yet again saying: "read the help page..."
... especially the 4th paragraph of the Details section.
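A small illustration of where that assignment actually ended up (a
hypothetical fresh session, not output from the original post):
d <- data.frame(a = 1:1000, b = 2001:3000, z = 5001:6000)
attach(d); c <- (a + b) > 25; detach(d)
names(d)   # "a" "b" "z" -- no "c": the assignment did not create a column
ls()       # "c" "d"     -- "c" is a free-standing logical vector in the
           #               workspace, not part of d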
--
David.
David Winsemius, MD
West Hartford, CT
--Forwarded Message Attachment--
From: wwwhsd at gmail.com
CC: r-help at stat.math.ethz.ch
To: ivo.welch at gmail.com
Date: Mon, 23 Aug 2010 19:28:05 -0300
Subject: Re: [R] unexpected subset select results?
Try this:
subset(d, TRUE, select=c( 'a', 'b', 'c' ))
On Mon, Aug 23, 2010 at 7:15 PM, ivo welch wrote:
> I would not have wanted a data set with 1000 variables, but an error
> message. the intent was
>
> d=data.frame( a=1:1000, b=2001:3000, z= 5001:6000 )
> attach(d); d$c <- (a+b)>25; detach(d)
> d= subset(d, TRUE, select=c( a, b, c ))
>
> -iaw
>
>
> On Mon, Aug 23, 2010 at 6:04 PM, Erik Iverson wrote:
>>
>>
>> ivo welch wrote:
>>>
>>> quizz---what does this produce?
>>>
>>> d=data.frame( a=1:1000, b=2001:3000, z= 5001:6000 )
>>> attach(d); c <- (a+b)>25; detach(d)
>>> d= subset(d, TRUE, select=c( a, b, c ))
>>>
>>> yes, I know I have made a mistake, in that the code does not do
what I
>>> presumably would have wanted.
>>
>> What exactly did you want?
>>
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Henrique Dallazuanna
Curitiba-Paran?-Brasil
25? 25' 40" S 49? 16' 22" O
[[alternative HTML version deleted]]
--Forwarded Message Attachment--
From: erich.neuwirth at univie.ac.at
To: r-help at r-project.org
Date: Tue, 24 Aug 2010 00:50:47 +0200
Subject: Re: [R] "easiest" way to write an R dataframe to excel?
Search for an older message with the subject line
[R] export tables to excel files on multiple sheets with titles for each
table
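For what it's worth, a minimal sketch of one way to write several data frames
to one workbook, each on its own sheet (assuming the WriteXLS package, which
needs a working Perl installation; this is only an illustration, not
necessarily the approach in the thread referred to above):
library(WriteXLS)
df1 <- data.frame(a = 1:3, b = letters[1:3])
df2 <- data.frame(x = rnorm(5))
# each data frame named in the first argument becomes its own worksheet
WriteXLS(c("df1", "df2"), ExcelFileName = "tables.xls",
         SheetNames = c("first table", "second table"))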
On 8/23/2010 11:41 PM, Erik Iverson wrote:
> In addition to the Wiki already mentioned, the following may be
> useful:
>
> http://learnr.wordpress.com/2009/10/06/export-data-frames-to-multi-worksheet-excel-file/
>
>
> Eva Nordstrom wrote:
--Forwarded Message Attachment--
From: ehlers at ucalgary.ca
CC: r-help at r-project.org; ted.harding at manchester.ac.uk
To: r.ookie at live.com
Date: Mon, 23 Aug 2010 16:52:20 -0600
Subject: Re: [R] Aspect Ratio
On 2010-08-19 16:36, r.ookie wrote:
> This example definitely clarified a situation where 'asp' is
> useful/helpful. Thanks!
>
Ted's last example is a bit misleading. You don't get the same
result from setting xlim and ylim equal as you do from using 'asp'.
Indeed, this should help you to understand aspect ratio even
better.
Try this:
x11(width = 10, height = 5)
plot(X,Y,pch="+",col="blue",xlim=c(-2.5,2.5),ylim=c(-2.5,2.5))
(using Ted's X,Y) and compare with the 'asp' version.
-Peter Ehlers
> On Aug 19, 2010, at 3:05 PM, (Ted Harding) wrote:
>
> Spencer, you came up with your example just as I finished making mine:
>
> set.seed(54321); X<- rnorm(200) ; Y<- 0.25*X+0.25*rnorm(200)
> ##Compare:
> plot(X,Y,pch="+",col="blue")
> ##with:
> plot(X,Y,pch="+",col="blue",asp=1.0)
>
> With R left to choose the X and Y limits by itself, the first
> plot gives the superficial impression that Y increases equally
> as X increases -- until you look at the scales on the Y and X
> axes. Hence it tends to be misleading about how Y depends on X.
> The second plot shows their proportional relationship correctly.
>
> Of course you could achieve a similar effect by explicitly setting
> the X and Y limits yourself:
>
>
plot(X,Y,pch="+",col="blue",xlim=c(-2.5,2.5),ylim=c(-2.5,2.5))
>
> but "asp=1.0" saves you the bother of working out what they
> should be.
>
> There are, of course, cases where, for the sake of the desired
> visual effect, you would want to use an aspect ratio different
> from 1. The basic point is that it is a tool to help you get
> the vertical and horizontal dimensions of the graph in the
> proportions that help to achieve the visual effect you seek.
>
> Ted.
>
> On 19-Aug-10 21:50:12, Spencer Graves wrote:
>> The documentation is not clear. It would help if it had an
>> example like the following:
>>
>> plot(1:2, 1:2/10)
>> plot(1:2, 1:2/10, asp=1)
>>
>> Does looking at these two plots answer the question?
>> Spencer Graves
>>
>> On 8/19/2010 2:36 PM, David Winsemius wrote:
>>>
>>> On Aug 19, 2010, at 5:28 PM, r.ookie wrote:
>>>
>>>> Well, I had to look further into the documentation to see
'If asp is
>>>> a finite positive value then the window is set up so that one
data
>>>> unit in the x direction is equal in length to asp * one data
unit in
>>>> the y direction'
>>>>
>>>> Okay, so in what situations is the 'asp' helpful?
>>>
>>> It yet again appears that you are asking us to read the help pages
for
>>> you.
>>>
>>>
>>>>
>>>> On Aug 19, 2010, at 2:24 PM, David Winsemius wrote:
>>>>
>>>>
>>>> On Aug 19, 2010, at 5:13 PM, r.ookie wrote:
>>>>
>>>>> set.seed(1)
>>>>> x<- rnorm(n = 1000, mean = 0, sd = 1)
>>>>> plot(x = x, asp = 2000)
>>>>>
>>>>> Could someone please explain what the 'asp'
parameter is doing?
>>>>
>>>> You want us to read the help page to you?
>>>>
>>>> --
>>>
>>> David Winsemius, MD
>>> West Hartford, CT
>>>
>>> ______________________________________________
>>> R-help at r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>> PLEASE do read the posting guide
>>> http://www.R-project.org/posting-guide.html
>>> and provide commented, minimal, self-contained, reproducible code.
>>>
>>
>> --
>> Spencer Graves, PE, PhD
>> President and Chief Operating Officer
>> Structure Inspection and Monitoring, Inc.
>> 751 Emerson Ct.
San José, CA 95126
>> ph: 408-655-4567
>>
>> ______________________________________________
>> R-help at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>
> --------------------------------------------------------------------
> E-Mail: (Ted Harding)
> Fax-to-email: +44 (0)870 094 0861
> Date: 19-Aug-10 Time: 23:05:49
> ------------------------------ XFMail ------------------------------
>
--Forwarded Message Attachment--
From: kingsfordjones at gmail.com
To: liulei at virginia.edu; r-help at r-project.org
Date: Mon, 23 Aug 2010 17:19:47 -0600
Subject: Re: [R] trajectory plot (growth curve)
Hi Lei,
Hope you don't mind I'm moving this back to the list in case others
may benefit. Answers below...
On Mon, Aug 23, 2010 at 3:37 PM, Lei Liu wrote:
> Hi Kingsford,
>
> Thanks a lot! I got some help from my colleague by using the following
code:
>
> xyplot(y~month,group=id, type="l"), the same as you suggested.
It worked
> fine.
>
> However, when I tried to add an additional line for the mean at each time
> point by the following code:
>
> y.mean=aggregate(y, by=list(time), FUN=mean)[, 2]
> uniq.time=sort(unique(time))
>
> lines(uniq.time, y.mean, type="l", lty=1, lw=2)
>
> I find the line of mean does not overlap well with the trajectory plot!!!
It
> seems to me that "lines" statement does work well under
"xyplot"! I tried
> different strategies, e.g., add xlim and ylim in both xyplot and lines
> statements, but still the problem exists. I also tried the ggplot2 package
> and it had the same problem. Any help here? Thanks!
Both lattice and ggplot2 use grid graphics which is a different beast
from the base graphics. I don't believe the lines function has
methods to add to grid plots. There are many approaches you could
take here. The first thing that comes to my mind is to add another
subject (named 'mean' below) whose values are the observed average
within time points:
#the original data (no replicates within time points)
dat <- structure(list(ID = structure(c(1L, 1L, 1L, 2L, 2L, 2L, 2L),
.Label = c("1", "2"), class = "factor"),
time = c(1, 2, 3, 1.5, 4, 5.5, 6),
y = c(1.4, 2, 2.5, 2.3, 4.5, 1.6, 2)),
.Names = c("ID", "time", "y"),
row.names = c(NA, -7L), class = "data.frame")
#adding another subject to introduce replicates
id3 <- data.frame(ID=as.factor(rep(3, 4)),time = c(1, 1.5, 2, 5.5),
y = c(1, 2.2, 3, 2))
dat <- rbind(dat, id3)
mean.y <- aggregate(formula = y ~ time, data = dat, FUN = mean)
mean.y <- cbind(ID = as.factor('mean'), mean.y)
dat <- rbind(dat, mean.y)
dat
library(ggplot2)
qplot(time, y, data=dat, group = ID, color = ID, geom = c('point',
'line'))
best,
Kingsford Jones
--Forwarded Message Attachment--
From: djmuser at gmail.com
CC: r-help at r-project.org
To: cuckovic.paik at gmail.com
Date: Mon, 23 Aug 2010 16:28:28 -0700
Subject: Re: [R] Memory Issue
Hi:
Are you running 32-bit or 64-bit R? For memory-intensive processes like
these, 64-bit R is almost a necessity. You might also look into more
efficient ways to invert the matrix, especially if it has special properties
that can be exploited (e.g., symmetry). More to the point, you want to
compute the nonparametric MLE as efficiently as you can, since it affects
everything downstream. In addition, if you're trying to do all of this in a
single function, it may be better to break the job up into several
functions, one for each task, with a wrapper function to put them together
(i.e., modularize).
Memory problems in R often arise from repeatedly copying objects in memory
while accumulating promises in a loop that do not get evaluated until the
end. Forcing evaluations or performing garbage collection at judicious
points can improve efficiency. Pre-allocating memory to result objects is
more efficient than adding a new element to an output vector or matrix every
iteration. Vectorizing where you can is critical.
Since you didn't provide any code, one is left to speculate where the
bottleneck(s) in your code lie(s), but here's a little example I did for
someone recently that shows how much of a difference vectorization and
pre-allocation of memory can make:
# Problem: Simulate 1000 U(0, 1) random numbers, discretize them
# into a factor and generate a table.
# vectorized version using cut()
f <- function() {
x <- runif(1000)
z <- cut(x, breaks = c(-0.1, 0.1, 0.2, 0.4, 0.7, 0.9, 1), labels = 1:6)
table(z)
}
# use ifelse(), a vectorized function, to divide into groups
g <- function() {
x <- runif(1000)
z <- ifelse(x <= 0.1, '1',
     ifelse(x > 0.1 & x <= 0.2, '2',
     ifelse(x > 0.2 & x <= 0.4, '3',
     ifelse(x > 0.4 & x <= 0.7, '4',
     ifelse(x > 0.7 & x <= 0.9, '5', '6')))))
table(z)
}
# Elementwise loop with preallocation of memory
h <- function() {
x <- runif(1000)
z <- character(1000) # <= preallocation
for(i in 1:1000) {
z[i] <- if(x[i] <= 0.1) '1' else
if(x[i]> 0.1 && x[i] <= 0.2) '2' else
if(x[i]> 0.2 && x[i] <= 0.4) '3' else
if(x[i]> 0.4 && x[i] <= 0.7) '4' else
if(x[i]> 0.7 && x[i] <= 0.9) '5' else
'6'
}
table(z)
}
# Same as h() w/o memory preallocation
h2 <- function() {
x <- runif(1000)
for(i in 1:1000) {
z[i] <- if(x[i] <= 0.1) '1' else
if(x[i]> 0.1 && x[i] <= 0.2) '2' else
if(x[i]> 0.2 && x[i] <= 0.4) '3' else
if(x[i]> 0.4 && x[i] <= 0.7) '4' else
if(x[i]> 0.7 && x[i] <= 0.9) '5' else
'6'
}
table(z)
}
# Same as h(), but initialize with an empty vector
h3 <- function() {
x <- runif(1000)
z <- character(0) # empty vector
for(i in 1:1000) {
z[i] <- if(x[i] <= 0.1) '1' else
if(x[i]> 0.1 && x[i] <= 0.2) '2' else
if(x[i]> 0.2 && x[i] <= 0.4) '3' else
if(x[i]> 0.4 && x[i] <= 0.7) '4' else
if(x[i]> 0.7 && x[i] <= 0.9) '5' else
'6'
}
table(z)
}
########## Timings using the function replicate():
> system.time(replicate(1000, f()))
   user  system elapsed
   1.14    0.04    1.20
> system.time(replicate(1000, g()))
   user  system elapsed
   3.90    0.00    3.92
> system.time(replicate(1000, h()))
   user  system elapsed
   9.24    0.00    9.26
> system.time(replicate(1000, h2()))
   user  system elapsed
  15.49    0.00   15.55
> system.time(replicate(1000, h3()))
   user  system elapsed
  15.60    0.03   15.68
The vectorized version is over three times as fast as the vectorized
ifelse() approach, and the vectorized ifelse() is almost three times as fast
as the preallocated memory, non-vectorized approach. The h* functions are
all non-vectorized, but differ in the way they initialize memory for output
objects. Full preallocation of memory (h) takes about 60% as long as the
non-preallocated memory versions. Initializing an empty vector is about as
fast as no initialization at all. The effects of vectorization and the use
of pre-allocated memory for result objects filled in a loop are clear.
If you're carrying around copies of a large n x n matrix in memory over a
number of iterations of a loop, you are certainly going to gobble up
available memory, no matter how much you have. You can see the result in a
much simpler problem above. I'd recommend that you invest some time
improving the efficiency of the MLE function. A profiling tool like Rprof()
is one place to start - you can find tutorial material on the web in various
places on the topic (try Googling 'Profiling R functions'), as well as
some
past discussion in this forum. Use RSiteSearch() and/or search the mail
archives for information there.
HTH,
Dennis
On Mon, Aug 23, 2010 at 2:44 PM, Cuckovic Paik wrote:
>
> Dear All,
>
> I have an issue on memory use in R programming.
>
> Here is the brief story: I want to simulate the power of a nonparameteric
> test and compare it with the existing tests. The basic steps are
>
> 1. I need to use Newton's method to obtain the nonparametric MLE, which
> involves the inversion of a large n-by-n matrix (it takes less than 3
> seconds on average to get the MLE; n = sample size).
>
>
> 2. Since the test statistic has an unknown sampling distribution, the p-value
> is simulated using Monte Carlo (1000 runs). It takes about 3-4 minutes to
> get a p-value.
>
>
> 3. I need to simulate 1000 random samples and repeat steps 1 and 2 to get
> the p-value for each of the simulated samples to get the power of the test.
>
>
> Here is the question:
>
> It initially completes 5-6 simulations per hour; after that, the time needed
> to complete a single simulation increases exponentially. After running for
> 24 hours, I only get about 15-20 simulations completed. My computer is a PC
> (Pentium Dual Core CPU 2.5 GHz, RAM 6.00 GB, 64-bit). Apparently, memory
> is the problem.
>
> I also tried various memory re-allocation procedures, but they didn't work.
> Can anybody help with this? Thanks in advance.
>
>
> --
> View this message in context:
> http://r.789695.n4.nabble.com/Memory-Issue-tp2335860p2335860.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
[[alternative HTML version deleted]]
--Forwarded Message Attachment--
From: johan.steen at gmail.com
CC: r-help at r-project.org
To: jfox at mcmaster.ca
Date: Tue, 24 Aug 2010 01:31:39 +0200
Subject: Re: [R] extracting p-values from Anova objects (from the car library)
Thanks a lot! It sure helped.
Also, thanks to all the other repliers.
Kind regards,
Johan
On 23/08/2010 23:53, John Fox wrote:
> Dear Johan,
>
>> -----Original Message-----
>> From: r-help-bounces at r-project.org [mailto:r-help-bounces at
r-project.org]
> On
>> Behalf Of Johan Steen
>> Sent: August-23-10 3:02 PM
>> To: r-help at r-project.org
>> Subject: [R] extracting p-values from Anova objects (from the car
library)
>>
>> Dear all,
>>
>> is there anyone who can help me extracting p-values from an Anova
object
>> from the car library? I can't seem to locate the p-values using
>> str(result) or str(summary(result)) in the example below
>>
>> > A<- factor( rep(1:2,each=3) )
>> > B<- factor( rep(1:3,times=2) )
>> > idata<- data.frame(A,B)
>> > fit<- lm( cbind(a1_b1,a1_b2,a1_b3,a2_b1,a2_b2,a2_b3) ~ sex,
>> data=Data.wide)
>> > result<- Anova(fit, type="III",
test="Wilks", idata=idata,
> idesign=~A*B)
>>
>>
>> Any help would be much appreciated!
>
> I'm afraid that the answer is that the p-values aren't easily
accessible.
> The print method for Anova.mlm objects just passes through the object
> invisibly as its result, as is conventional for print methods. The summary
> method does the same -- that isn't conventional, but the summary method
can
> produce so many different kinds of printed output (various multivariate
test
> criteria for models with and without repeated measures; for the latter,
> multivariate and univariate tests with and without corrections for
> non-sphericity) that the printed output is produced directly rather than
put
> in an object with its own print method.
>
> What you can do is take a look at car:::print.Anova.mlm or
> car:::summary.Anova.mlm (probably the print method, which is simpler) to
see
> how the p-values that you want are computed and write a small function to
> return them.
>
> I hope this helps,
> John
>
> --------------------------------
> John Fox
> Senator William McMaster
> Professor of Social Statistics
> Department of Sociology
> McMaster University
> Hamilton, Ontario, Canada
> web: socserv.mcmaster.ca/jfox
>>
>>
>> Many thanks,
>>
>> Johan
>>
>> ______________________________________________
>> R-help at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>
>
>
--Forwarded Message Attachment--
From: djmuser at gmail.com
CC: r-help at r-project.org; liulei at virginia.edu
To: kingsfordjones at gmail.com
Date: Mon, 23 Aug 2010 17:19:10 -0700
Subject: Re: [R] trajectory plot (growth curve)
Hi:
On Mon, Aug 23, 2010 at 4:19 PM, Kingsford Jones
wrote:
> Hi Lei,
>
> Hope you don't mind I'm moving this back to the list in case others
> may benefit. Answers below...
>
> On Mon, Aug 23, 2010 at 3:37 PM, Lei Liu wrote:
>> Hi Kingsford,
>>
>> Thanks a lot! I got some help from my colleague by using the following
> code:
>>
>> xyplot(y~month,group=id, type="l"), the same as you
suggested. It worked
>> fine.
>>
>> However, when I tried to add an additional line for the mean at each
time
>> point by the following code:
>>
>> y.mean=aggregate(y, by=list(time), FUN=mean)[, 2]
>> uniq.time=sort(unique(time))
>>
>> lines(uniq.time, y.mean, type="l", lty=1, lw=2)
>>
>> I find the line of mean does not overlap well with the trajectory
plot!!!
> It
>> seems to me that "lines" statement does work well under
"xyplot"! I tried
>> different strategies, e.g., add xlim and ylim in both xyplot and lines
>> statements, but still the problem exists. I also tried the ggplot2
> package
>> and it had the same problem. Any help here? Thanks!
>
> Both lattice and ggplot2 use grid graphics which is a different beast
> from the base graphics. I don't believe the lines function has
> methods to add to grid plots. There are many approaches you could
> take here. The first thing that comes to my mind is to add another
> subject (named 'mean' below) whose values are the observed average
> within time points:
>
This is an excellent idea - the only snag might occur if someone wants
the mean line to be thicker :) Having said that, it's usually easier to
'fix' the
problem externally in the data rather than to fiddle with graphics commands.
> #the original data (no replicates within time points)
> dat <- structure(list(ID = structure(c(1L, 1L, 1L, 2L, 2L, 2L, 2L),
> .Label = c("1", "2"), class = "factor"),
> time = c(1, 2, 3, 1.5, 4, 5.5, 6),
> y = c(1.4, 2, 2.5, 2.3, 4.5, 1.6, 2)),
> .Names = c("ID", "time", "y"),
> row.names = c(NA, -7L), class = "data.frame")
>
> #adding another subject to introduce replicates
> id3 <- data.frame(ID=as.factor(rep(3, 4)),time = c(1, 1.5, 2, 5.5),
> y = c(1, 2.2, 3, 2))
> dat <- rbind(dat, id3)
> mean.y <- aggregate(formula = y ~ time, data = dat, FUN = mean)
> mean.y <- cbind(ID = as.factor('mean'), mean.y)
> dat <- rbind(dat, mean.y)
> dat
> library(ggplot2)
> qplot(time, y, data=dat, group = ID, color = ID, geom = c('point',
'line'))
>
A lattice version with a legend is:
mykey <- list(space = 'right',
title = 'ID',
cex.title = 1.2,
text = list(levels(dat$ID), cex = 0.8),
lines = list(lty = 1, col = 1:4))
xyplot(y ~ time, data = dat, lty = 1, col.lines = 1:4, col = 1:4,
groups = ID, type = c('g', 'p', 'l'), key = mykey)
Defining the key externally modularizes the problem, lets one define
the features one wants it to contain, and simplifies the high-level
xyplot() call.
There is a type = 'a' (shorthand for panel.average()) that can be
used to good effect in xyplot(), but it creates 'holes' where missing
data reside, so taking care of the problem externally at the data
level is much cleaner.
HTH,
Dennis
> best,
>
> Kingsford Jones
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
[[alternative HTML version deleted]]
--Forwarded Message Attachment--
From: bbolker at gmail.com
To: r-help at stat.math.ethz.ch
Date: Tue, 24 Aug 2010 00:20:46 +0000
Subject: Re: [R] 3D stariway plot
mauede at alice.it writes:
>
> Please, is there an R function/package that allows for 3D stairway plots
> like the attached one? In addition, how can I overlay a parametric grid plot?
Not exactly, that I know of, but maybe you can adapt
library(rgl)
demo(hist3d)
to do what you want.
Most of the R graphics developers are unenthusiastic about 3D
graphics representations, thinking there is often a better (= easier
to actually draw quantitative conclusions from) way to present
the data in 2D (conditioning plots, bubble plots, etc.).
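As a rough, untested starting point (not the demo itself; the stairbar()
helper and the heights below are made up for illustration), stairway-style
bars can be built in rgl by scaling and translating unit cubes:
library(rgl)
# one bar of height z centred on cell (x, y); dx, dy give the bar footprint
stairbar <- function(x, y, z, dx = 0.9, dy = 0.9, col = "steelblue") {
    shade3d(translate3d(scale3d(cube3d(), dx/2, dy/2, z/2), x, y, z/2),
            col = col)
}
open3d()
z <- matrix(c(1, 2, 3, 2, 4, 1), nrow = 2)  # made-up heights
for (i in seq_len(nrow(z)))
    for (j in seq_len(ncol(z)))
        stairbar(i, j, z[i, j])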
--Forwarded Message Attachment--
From: dwinsemius at comcast.net
CC: r-help at stat.math.ethz.ch; ivo.welch at gmail.com
To: dwinsemius at comcast.net
Date: Mon, 23 Aug 2010 20:36:51 -0400
Subject: Re: [R] unexpected subset select results?
On Aug 23, 2010, at 6:28 PM, David Winsemius wrote:
>
> On Aug 23, 2010, at 5:51 PM, ivo welch wrote:
>
>> quizz---what does this produce?
>>
>> d=data.frame( a=1:1000, b=2001:3000, z= 5001:6000 )
>> attach(d); c <- (a+b)>25; detach(d)
>> d= subset(d, TRUE, select=c( a, b, c ))
>>
>> yes, I know I have made a mistake, in that the code does not do
>> what I
>> presumably would have wanted. it does seem like unexpected behavior,
>> though, without an error. there probably is some reason why this
>> does
>> not ring an alarm bell...
>
> You have created a perfect example for why it is a bad idea to
> attach data.frames.
>
> ?attach # yes, I am yet again saying: "read the help page..."
>
> ... especially the 4th paragraph of the Details section.
I think it might be helpful to consider the right way and the wrong way
to do the same assignment using with(), which is my choice as an
alternative to attach().
Right:
d$c <- with(d, (a+b) > 25)  # note: using "c" as an object name is a really confusing strategy
Wrong:
with(d, c <- (a+b) > 25)
The wrong way is similar to what you might have thought would be
happening. The attach() operation created its own environment, but
that did not necessarily mean that all assignments would be creating
new columns inside "d".
>
> --
> David.
>
>
>
> David Winsemius, MD
> West Hartford, CT
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
David Winsemius, MD
West Hartford, CT
--Forwarded Message Attachment--
From: kingsfordjones at gmail.com
CC: r-help at r-project.org; liulei at virginia.edu
To: djmuser at gmail.com
Date: Mon, 23 Aug 2010 19:20:24 -0600
Subject: Re: [R] trajectory plot (growth curve)
On Mon, Aug 23, 2010 at 6:19 PM, Dennis Murphy wrote:
> This is an excellent idea - the only snag might occur if someone wants
> the mean line to be thicker :)
fortunately, with your lattice solution this is easily accomplished by
passing a vector to lwd:
i <- c(1, 1, 1, 3)
mykey <- list(space = 'right',
title = 'ID',
cex.title = 1.2,
text = list(levels(dat$ID), cex = 0.8),
lines = list(lty = i, lwd = i, col = 1:4))
xyplot(y ~ time, data = dat, lty = i, lwd = i, col.lines = 1:4, col = 1:4,
groups = ID, type = c('g', 'p', 'l'), key = mykey)
but I didn't have luck trying the same with qplot:
> qplot(time, y, data = dat, group = ID, color = ID,
+ geom = c('point', 'line'), lty = i, lwd = i)
Error in data.frame(colour = c(1L, 1L, 1L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, :
arguments imply differing number of rows: 18, 4
perhaps using the construct ggplot(...) + geom_line(...) would be more fruitful?
King
> Having said that, it's usually easier to
> 'fix' the
> problem externally in the data rather than to fiddle with graphics
commands.
>
>>
>> #the original data (no replicates within time points)
>> dat <- structure(list(ID = structure(c(1L, 1L, 1L, 2L, 2L, 2L, 2L),
>> .Label = c("1", "2"), class = "factor"),
>> time = c(1, 2, 3, 1.5, 4, 5.5, 6),
>> y = c(1.4, 2, 2.5, 2.3, 4.5, 1.6, 2)),
>> .Names = c("ID", "time", "y"),
>> row.names = c(NA, -7L), class = "data.frame")
>>
>> #adding another subject to introduce replicates
>> id3 <- data.frame(ID=as.factor(rep(3, 4)),time = c(1, 1.5, 2, 5.5),
>> y = c(1, 2.2, 3, 2))
>> dat <- rbind(dat, id3)
>> mean.y <- aggregate(formula = y ~ time, data = dat, FUN = mean)
>> mean.y <- cbind(ID = as.factor('mean'), mean.y)
>> dat <- rbind(dat, mean.y)
>> dat
>> library(ggplot2)
>> qplot(time, y, data=dat, group = ID, color = ID, geom =
c('point',
>> 'line'))
>
> A lattice version with a legend is:
>
> mykey <- list(space = 'right',
> title = 'ID',
> cex.title = 1.2,
> text = list(levels(dat$ID), cex = 0.8),
> lines = list(lty = 1, col = 1:4))
>
> xyplot(y ~ time, data = dat, lty = 1, col.lines = 1:4, col = 1:4,
> groups = ID, type = c('g', 'p', 'l'), key =
mykey)
>
> Defining the key externally modularizes the problem, lets one define
> the features one wants to contain, and simplifies the high-level
> xyplot() call.
>
> There is a type = 'a' (shorthand for panel.average()) that can be
> used to good effect in xyplot(), but it creates 'holes' where
missing
> data reside, so taking care of the problem externally at the data
> level is much cleaner.
>
> HTH,
> Dennis
>
>>
>> best,
>>
>> Kingsford Jones
>>
>> ______________________________________________
>> R-help at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>
>
--Forwarded Message Attachment--
From: djmuser at gmail.com
CC: r-help at stat.math.ethz.ch; ivo.welch at gmail.com
To: dwinsemius at comcast.net
Date: Mon, 23 Aug 2010 18:20:51 -0700
Subject: Re: [R] unexpected subset select results?
And, of course, there's transform, as David knows very well:
d <- transform(d, c = (a + b) > 25)
> head(d)
a b z c
1 1 2001 5001 TRUE
2 2 2002 5002 TRUE
3 3 2003 5003 TRUE
4 4 2004 5004 TRUE
5 5 2005 5005 TRUE
6 6 2006 5006 TRUE
As far as David's 'wrong way' is concerned, I think he may have
meant to use
within() instead of with():
d0 <- d[, -4]
> head(d0, 2)
a b z
1 1 2001 5001
2 2 2002 5002
d2 <- within(d0, c <- (a + b) > 25)
> identical(d, d2)
[1] TRUE
HTH,
Dennis
On Mon, Aug 23, 2010 at 5:36 PM, David Winsemius wrote:
>
> On Aug 23, 2010, at 6:28 PM, David Winsemius wrote:
>
>
>> On Aug 23, 2010, at 5:51 PM, ivo welch wrote:
>>
>> quizz---what does this produce?
>>>
>>> d=data.frame( a=1:1000, b=2001:3000, z= 5001:6000 )
>>> attach(d); c <- (a+b)>25; detach(d)
>>> d= subset(d, TRUE, select=c( a, b, c ))
>>>
>>> yes, I know I have made a mistake, in that the code does not do
what I
>>> presumably would have wanted. it does seem like unexpected
behavior,
>>> though, without an error. there probably is some reason why this
does
>>> not ring an alarm bell...
>>>
>>
>> You have created a perfect example for why it is a bad idea to attach
>> data.frames.
>>
>> ?attach # yes, I am yet again saying: "read the help
page..."
>>
>> ... especially the 4th paragraph of the Details section.
>>
>
> I think it might helpful to consider the right way and the wrong way to do
> the same assignment using with(), which is my choice as an alternative to
> attache
>
> Right;
>
> d$c <- with(d, a+b>25) # note: using "c" as an object name
is a really
> confusing strategy
>
> Wrong:
> with(d, c <- a+b <25)
>
> The wrong way is similar to what you might have thought would be happening.
> The attach() operation created its own environment, but that did not
> necessarily mean that all assignments would be creating new columns inside
> "d".
>
>
>
>> --
>> David.
>>
>>
>>
>> David Winsemius, MD
>> West Hartford, CT
>>
>> ______________________________________________
>> R-help at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
> David Winsemius, MD
> West Hartford, CT
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
[[alternative HTML version deleted]]
--Forwarded Message Attachment--
From: rusers.sh at gmail.com
To: r-help at r-project.org
Date: Mon, 23 Aug 2010 21:40:10 -0400
Subject: [R] generate random numbers from a multivariate distribution with
specified correlation matrix
Hi all,
rmvnorm()can be used to generate the random numbers from a multivariate
normal distribution with specified means and covariance matrix, but i want
to specify the correlation matrix instead of covariance matrix for the
multivariate
normal distribution.
Does anybody know how to generate the random numbers from a multivariate
normal distribution with specified correlation matrix? What about
other non-normal
distribution?
Thanks a lot.
--
-----------------
Jane Chang
Queen's
[[alternative HTML version deleted]]
--Forwarded Message Attachment--
From: tesutton at hku.hk
To: r-help at r-project.org
Date: Tue, 24 Aug 2010 09:44:13 +0800
Subject: [R] Rotate x-axis label on log scale
Hi
I'd appreciate some help with plotting odds ratios. I want to rotate the
labels on the x-axis by 45 degrees.
The usual way of doing this, using text - e.g. text(1,
par('usr')[3]-2.25..)
- gives no result when the y-axis is a log scale.
I guess this is because, as the par help says, for a logarithmic y-axis:
y-limits will be 10 ^ par("usr")[3:4]
Does anyone know a solution for this?
The example below works well if log='y' is omitted.
Thanks very much for your help
Tim
#Create plot with log-scale on the y-axis
par(mar = c(7, 4, 4, 2) + 0.1)
plot(1, type='n', bty='n',
xlab="",
ylab='Odds Ratio',
xlim= c(0.5,4.5),
ylim= c(0.75, 2),
cex=2, xaxt='n', yaxt='n', cex.lab=1.3,
log='y')
# Line of unity
segments(0,1,10,1, lwd=2, lty=2)
#Estimates and confidence intervals
points(c(1:4),c(1.1,1.32,1.14,1.36), pch=17, cex=1.5, col='blue')
segments (c(1:4),c(0.93,1.11,0.94,1.15),c(1:4),c(1.3,1.58,1.37,1.61),
col='blue', lwd=2)
axis(1,c(1:4), labels= F)
axis(2, at=seq(0.75,2, by=0.25), labels=seq(0.75,2, by=0.25), las=1)
labels <- paste("Label", 1:4, sep = " ")
text(1:4-0.25, par('usr')[3]-0.15, xpd=TRUE, labels=labels, adj=0.1,
srt=45)
mtext("Exposure", side=1, line=4.5, cex=1.5)
[[alternative HTML version deleted]]
--Forwarded Message Attachment--
From: djmuser at gmail.com
CC: r-help at r-project.org; liulei at virginia.edu
To: kingsfordjones at gmail.com
Date: Mon, 23 Aug 2010 19:08:13 -0700
Subject: Re: [R] trajectory plot (growth curve)
Hi:
I think it would be tough to do that in qplot(), but it's easier in
ggplot(), even if you don't add the mean information to the data frame.
Here's one way - use the three person data frame (call it dat1) and the
mean.y data frame that you created from aggregate() without adding the
factor info as follows:
# Set up the framework of the plot:
g <- ggplot(dat1, aes(x = time, y = y, groups = ID, colour = ID))
# This associates colors with groups. Next, add the points and
# lines from dat1 and then add the mean data with a separate
# geom_line() call, where the mean line is about twice as thick:
g + geom_point(size = 2) + geom_line() +
geom_line(data = mean.y, aes(x = time, y = y, colour = 'mean'), size = 1.5)
# Notice how the name 'mean' that we associated with colour got into the
# legend. This is because we *mapped* the same aesthetic (color) in the
second
# geom_line() call to the one existing for IDs. ggplot2 is smart enough to
pick
# this up. [We just have to be smart enough to realize it :)]. To exert more
# control over line colors, add the following:
last_plot() + scale_colour_manual(values = c('1' = 'red', '2' = 'green',
                                             '3' = 'blue', 'mean' = 'black'))
The LHS is the value of ID, the RHS the color to associate with it.
As usual, it took me about five iterations of scale_* to get it right :) The
line
thicknesses in the scale are all the same as the thickest, but I see that as
a
feature rather than a bug :)
One more comment below.
On Mon, Aug 23, 2010 at 6:20 PM, Kingsford Jones
wrote:
> On Mon, Aug 23, 2010 at 6:19 PM, Dennis Murphy wrote:
>>
>> This is an excellent idea - the only snag might occur if someone wants
>> the mean line to be thicker :)
>
> fortunately, with your lattice solution this is easily accomplished by
> passing a vector to lwd:
>
> i <- c(1, 1, 1, 3)
>
I was going to do that, too, but I used 1.5 instead of 3, saw no material
difference,
and gave up...should have kept trying, huh?
HTH,
Dennis
> mykey <- list(space = 'right',
> title = 'ID',
> cex.title = 1.2,
> text = list(levels(dat$ID), cex = 0.8),
> lines = list(lty = i, lwd = i, col = 1:4))
>
> xyplot(y ~ time, data = dat, lty = i, lwd = i, col.lines = 1:4, col = 1:4,
> groups = ID, type = c('g', 'p', 'l'), key =
mykey)
>
>
> but I didn't have luck trying the same with qplot:
>
>> qplot(time, y, data = dat, group = ID, color = ID,
> + geom = c('point', 'line'), lty = i, lwd = i)
> Error in data.frame(colour = c(1L, 1L, 1L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, :
> arguments imply differing number of rows: 18, 4
>
> perhaps using the construct ggplot(...) + geom_line(...) would be more
> fruitful?
>
> King
>
>
>
>
>> Having said that, it's usually easier to
>> 'fix' the
>> problem externally in the data rather than to fiddle with graphics
> commands.
>>
>>>
>>> #the original data (no replicates within time points)
>>> dat <- structure(list(ID = structure(c(1L, 1L, 1L, 2L, 2L, 2L,
2L),
>>> .Label = c("1", "2"), class =
"factor"),
>>> time = c(1, 2, 3, 1.5, 4, 5.5, 6),
>>> y = c(1.4, 2, 2.5, 2.3, 4.5, 1.6, 2)),
>>> .Names = c("ID", "time", "y"),
>>> row.names = c(NA, -7L), class = "data.frame")
>>>
>>> #adding another subject to introduce replicates
>>> id3 <- data.frame(ID=as.factor(rep(3, 4)),time = c(1, 1.5, 2,
5.5),
>>> y = c(1, 2.2, 3, 2))
>>> dat <- rbind(dat, id3)
>>> mean.y <- aggregate(formula = y ~ time, data = dat, FUN = mean)
>>> mean.y <- cbind(ID = as.factor('mean'), mean.y)
>>> dat <- rbind(dat, mean.y)
>>> dat
>>> library(ggplot2)
>>> qplot(time, y, data=dat, group = ID, color = ID, geom =
c('point',
>>> 'line'))
>>
>> A lattice version with a legend is:
>>
>> mykey <- list(space = 'right',
>> title = 'ID',
>> cex.title = 1.2,
>> text = list(levels(dat$ID), cex = 0.8),
>> lines = list(lty = 1, col = 1:4))
>>
>> xyplot(y ~ time, data = dat, lty = 1, col.lines = 1:4, col = 1:4,
>> groups = ID, type = c('g', 'p', 'l'), key
= mykey)
>>
>> Defining the key externally modularizes the problem, lets one define
>> the features one wants to contain, and simplifies the high-level
>> xyplot() call.
>>
>> There is a type = 'a' (shorthand for panel.average()) that can
be
>> used to good effect in xyplot(), but it creates 'holes' where
missing
>> data reside, so taking care of the problem externally at the data
>> level is much cleaner.
>>
>> HTH,
>> Dennis
>>
>>>
>>> best,
>>>
>>> Kingsford Jones
>>>
>>> ______________________________________________
>>> R-help at r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>> PLEASE do read the posting guide
>>> http://www.R-project.org/posting-guide.html
>>> and provide commented, minimal, self-contained, reproducible code.
>>
>>
>
[[alternative HTML version deleted]]
--Forwarded Message Attachment--
From: bbolker at gmail.com
To: r-help at stat.math.ethz.ch
Date: Tue, 24 Aug 2010 02:43:53 +0000
Subject: Re: [R] generate random numbers from a multivariate distribution with
specified correlation matrix
rusers.sh at gmail.com writes:
> rmvnorm()can be used to generate the random numbers from a multivariate
> normal distribution with specified means and covariance matrix, but i want
> to specify the correlation matrix instead of covariance matrix for the
> multivariate
> normal distribution.
> Does anybody know how to generate the random numbers from a multivariate
> normal distribution with specified correlation matrix? What about
> other non-normal
> distribution?
What do you want the variances to be? If you don't mind that they're
all equal to 1, then using your correlation matrix as the Sigma argument
to the mvrnorm() [sic] function in MASS should work fine. They have to
be defined as *something* ....
If you want multivariate distributions with non-normal marginal
distributions, consider the 'copula' package, but be prepared to do
some reading -- this is a fairly big/deep topic.
good luck.
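A minimal sketch, assuming non-unit variances are wanted: scale the correlation
matrix by the standard deviations to get the covariance matrix that mvrnorm()
(or rmvnorm()) expects.
library(MASS)
R <- matrix(c(1, 0.5, 0.5, 1), 2, 2)     # desired correlation matrix
sds <- c(2, 3)                           # hypothetical standard deviations
Sigma <- diag(sds) %*% R %*% diag(sds)   # covariance = D %*% R %*% D
x <- mvrnorm(n = 1000, mu = c(0, 0), Sigma = Sigma)
round(cor(x), 2)                         # should be close to R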
--Forwarded Message Attachment--
From: mattcstats at gmail.com
To: jholtman at gmail.com; r-help at r-project.org
Date: Tue, 24 Aug 2010 11:27:51 +0800
Subject: Re: [R] Adding column to dataframe
>
> Provide and 'str' and 'object.size' of the object
> so that we can see what you are working with. My rule of thumb is
> that no single object should take more than 25-30% of memory since
> copies may be made. So the reasons things are taking 20 minutes is
> you might be paging. It is always good to break the problem into
> pieces to see what is happening. Read in only 25% of the data and
> time it; then 50% and so on. In any performance related problems you
> need to determine where the "knee of the curve" is. Never undertake
> processing the large data file at once; start with some pieces and
> work up so that you know what to expect.
>
> On Wed, Aug 18, 2010 at 9:46 PM, Matt Cooper wrote:
> > 2) My specific problem with this dataset.
>>
>> I am essentially trying to convert a date and add it to a data frame. I
>> imagine any 'data manipulation on a column within dataframe into a
new
>> column' will present the same issue, be it as.Date or anything
else.
>>
>> I have a dataset, size
>>
>>> dim(morbidity)
>> [1] 1775683 264
>>
>> This was read in from a STATA .dta file. The dates have come in as the
>> number of ms from 1960 so I have the following to convert these to
usable
>> dates.
>>
>> as.Date(morbidity$adm_date / (100*10*60*60*24),
origin="1960-01-01")
>>
>> when I store this as a vector it is near instant, <5 seconds
>> test <- as.Date(etc)
>> when I place it over itself it takes ~20 minutes
>> morbidity$adm_date <- as.Date(etc)
>> when I place the vector over it (so no computation involved), or place
it
> as
>> a new column it still takes ~20 minutes
>> morbidity$adm_date <- test
>> morbidity$new_col <- test
>> when I tried a cbind to add it that way it took>20 minutes
>> new_morb <- cbind(morbidity,test)
>>
>> Has anyone done something similar or know of a different command that
> should
>> work faster? I can't get my head around what R is doing, if it can
create
>> the vector instantly then the computation is quite simple, I don't
>> understand why then adding it as a column to a dataframe can take that
> long.
>>
>> R64 bit on mac os x, 2.4 GHz dual core, 8gb ram so more than enough
>> resources.
>
Thanks Jim, results below.
~2.66 gig for the object so I guess there is no way to speed up working with
that entire data frame. What I've done in the mean time is removed most of
the columns down to what I want to do data manipulations with (approx 40 of
the 264) and this is much much quicker, then I will join them back on at the
end (am expecting that to take a while!).
Any other feedback appreciated.
> object.size(morbidity)
2865834800 bytes
> str(morbidity)
'data.frame': 1775683 obs. of 264 variables:
$ root : chr "2G5PQVQH5KYZY" "DDSMVGQEW9YXP"
"DDSMVGQEW9YXP"
"DDSMVGQEW9YXP" ...
$ lpnot : chr "58GDA44MJSG3P" "4ZAM2XCK332NX"
"5KX4FB6NTM831"
"8CGXVV2A25C3M" ...
$ hospital : int 226 616 633 633 616 631 631 631 616 629 ...
$ hosp_area : int 2 1 1 1 1 1 1 1 1 1 ...
$ hosp_region : int NA NA NA NA NA NA NA NA NA NA ...
$ hosp_type : int 1 2 2 2 2 2 2 2 2 2 ...
$ hosp_category : int 3 2 2 2 2 2 2 2 2 2 ...
$ hsa : int NA NA NA NA NA NA NA NA NA NA ...
$ adm_date :Class 'Date' num [1:1775683] 11079 11084 11534
11869
11051 ...
$ adm_date_ddmwdob: int 0 0 450 785 0 70 122 125 0 91 ...
$ sep_date :Class 'Date' num [1:1775683] 11089 11089 11534
11869
11057 ...
$ sep_date_ddmwdob: int 10 5 450 785 6 70 122 136 23 98 ...
$ adm_time : int 2345 750 630 630 651 930 930 930 1146 1728 ...
$ sep_time : int 630 1715 1014 1013 1951 1630 1630 1000 941 1020
...
$ mf_los : int 10 5 1 1 6 1 1 8 23 7 ...
$ suburb : chr "WARBURTON COMMUNITY"
"WESTMINSTER" "WESTMINSTER"
"WESTMINSTER" ...
$ postcode : int 6431 6061 6061 6061 6150 6160 6160 6160 6016 6016
...
$ state : int 5 5 5 5 5 5 5 5 5 5 ...
$ loc_code : chr "E06001" "" ""
"" ...
$ lga : int NA NA NA NA NA NA NA NA NA NA ...
$ dob_my : num 1.27e+12 1.27e+12 1.27e+12 1.27e+12 1.27e+12 ...
$ dob_ddmwdob : int 0 0 0 0 0 0 0 0 0 0 ...
$ age : int 0 0 1 2 0 0 0 0 0 0 ...
$ age_group : int 1 1 1 1 1 1 1 1 1 1 ...
$ sex : int 1 2 2 2 2 2 2 2 2 2 ...
$ aborig : int 1 4 4 4 4 4 4 4 4 4 ...
$ cob : int 1105 1105 1100 1101 1105 3 3 3 1105 1105 ...
$ marital : int 1 1 1 1 1 1 1 1 1 1 ...
$ emp_stat : int 1 1 8 1 1 1 1 1 1 1 ...
$ interp : int 2 2 2 2 2 2 2 2 2 2 ...
$ occup : int 96 96 NA NA 96 96 NA NA 96 NA ...
$ src_ref : int 0 0 NA NA 0 0 NA NA 0 NA ...
$ pat_epi : int 2 2 1 1 2 1 1 2 2 2 ...
$ adm_from : int 900 900 900 900 900 900 900 900 900 900 ...
$ spl_adm : int 25 39 50 50 39 84 25 25 39 25 ...
$ spl_sep : int 25 39 50 50 39 84 25 25 21 25 ...
$ adm_type : int 1 1 4 4 1 1 3 3 1 4 ...
$ d_o_leav : int 0 0 0 0 0 NA NA 3 0 NA ...
$ psych_days : int NA NA NA NA NA NA NA NA NA NA ...
$ mh_legal : int NA NA NA NA NA NA NA NA NA NA ...
$ pay_clas : int 9 9 3 3 9 3 3 3 3 3 ...
$ vet_ent : int NA NA NA NA NA NA NA NA NA NA ...
$ ins_stat : int 2 1 1 1 1 1 1 1 1 1 ...
$ days_icu : int 0 0 0 0 0 NA NA NA 20 NA ...
$ hours_cmv : int 0 0 NA NA 0 NA NA NA 0 NA ...
$ readmis : int NA NA NA NA NA NA NA NA NA NA ...
$ ret_thea : int NA NA NA NA NA NA NA NA NA NA ...
$ epi_care : int 6 6 21 21 6 1 21 21 6 21 ...
$ pat_type : int 2 2 6 6 2 6 6 6 1 6 ...
$ cont_hos : int NA NA NA NA NA NA NA NA NA NA ...
$ sep_type : int 9 9 9 9 9 9 9 9 9 9 ...
$ sep_to : int 900 900 900 900 900 900 900 900 900 900 ...
$ language : int NA NA NA NA NA NA NA NA NA NA ...
$ src_refl : int NA NA 1 1 NA NA 1 1 NA 1 ...
$ src_refm : int NA NA 2 2 NA NA 1 1 NA 2 ...
$ src_reft : int NA NA 1 1 NA NA 1 1 NA 1 ...
$ accomod : int NA NA 2 2 NA NA 2 2 NA 2 ...
$ dqualnew : int NA NA 0 0 NA NA NA NA NA 0 ...
$ n_of_leav : int NA NA 0 0 NA NA NA 1 NA NA ...
$ prev_treat : int NA NA NA NA NA NA NA NA NA NA ...
$ sor : int NA NA NA NA NA NA NA NA NA NA ...
$ further_care : int NA NA NA NA NA NA NA NA NA NA ...
$ type_accomm : int NA NA NA NA NA NA NA NA NA NA ...
$ hith : int NA NA NA 0 NA NA NA NA NA NA ...
$ diag_imp_1 : chr "P" "P" "P"
"P" ...
$ diag_imp_2 : chr "" "" "" ""
...
$ diag_imp_3 : chr "A" "" "" ""
...
$ diag_imp_4 : chr "A" "" "" ""
...
$ diag_imp_5 : chr "A" "" "" ""
...
$ diag_imp_6 : chr "A" "" "" ""
...
$ diag_imp_7 : chr "" "" "" ""
...
$ diag_imp_8 : chr "" "" "" ""
...
$ diag_imp_9 : chr "" "" "" ""
...
$ diag_imp_10 : chr "" "" "" ""
...
$ diag_imp_11 : chr "" "" "" ""
...
$ diag_imp_12 : chr "" "" "" ""
...
$ diag_imp_13 : chr "" "" "" ""
...
$ diag_imp_14 : chr "" "" "" ""
...
$ diag_imp_15 : chr "" "" "" ""
...
$ diag_imp_16 : chr "" "" "" ""
...
$ diag_imp_17 : chr "" "" "" ""
...
$ diag_imp_18 : chr "" "" "" ""
...
$ diag_imp_19 : chr "" "" "" ""
...
$ diag_imp_20 : chr "" "" "" ""
...
$ diag_imp_21 : chr "" "" "" ""
...
$ diag_imp_22 : chr "" "" "" ""
...
$ diag_seq_1 : int NA NA NA NA NA NA NA NA NA NA ...
$ diag_seq_2 : int NA NA NA NA NA NA NA NA NA NA ...
$ diag_seq_3 : int 4 NA NA NA NA 4 4 NA 4 4 ...
$ diag_seq_4 : int 5 NA NA NA NA NA NA NA 5 NA ...
$ diag_seq_5 : int 6 NA NA NA NA NA NA NA 6 NA ...
$ diag_seq_6 : int 7 NA NA NA NA NA NA NA NA NA ...
$ diag_seq_7 : int NA NA NA NA NA NA NA NA NA NA ...
$ diag_seq_8 : int NA NA NA NA NA NA NA NA NA NA ...
$ diag_seq_9 : int NA NA NA NA NA NA NA NA NA NA ...
$ diag_seq_10 : int NA NA NA NA NA NA NA NA NA NA ...
$ diag_seq_11 : int NA NA NA NA NA NA NA NA NA NA ...
$ diag_seq_12 : int NA NA NA NA NA NA NA NA NA NA ...
$ diag_seq_13 : int NA NA NA NA NA NA NA NA NA NA ...
[list output truncated]
- attr(*, "datalabel")= chr ""
- attr(*, "time.stamp")= chr ""
- attr(*, "formats")= chr "%13s" "%13s"
"%8.0g" "%8.0g" ...
- attr(*, "types")= int 13 13 252 251 251 251 251 251 255 252 ...
- attr(*, "val.labels")= chr "" "" ""
"" ...
- attr(*, "var.labels")= chr "" "" ""
"hosp_area" ...
- attr(*, "version")= int 10
[[alternative HTML version deleted]]
--Forwarded Message Attachment--
From: rusers.sh at gmail.com
CC: r-help at stat.math.ethz.ch
To: bbolker at gmail.com
Date: Mon, 23 Aug 2010 23:05:22 -0400
Subject: Re: [R] generate random numbers from a multivariate distribution with
specified correlation matrix
Hi,
If you see the link http://www.stata.com/help.cgi?drawnorm, and you can
see an example,
#draw a sample of 1000 observations from a bivariate standard
normal distribution, with correlation 0.5.
#drawnorm x y, n(1000) corr(0.5)
This is what Stata software did. What i hope to do in R should be similar
as that.
It will be better to only need us to specify the correlation matrix, mean
values and possible variances. One of my aim is to simulate random fields.
Thanks.
2010/8/23 Ben Bolker
> rusers.sh gmail.com> writes:
>
>> rmvnorm()can be used to generate the random numbers from a
multivariate
>> normal distribution with specified means and covariance matrix, but i
> want
>> to specify the correlation matrix instead of covariance matrix for the
>> multivariate
>> normal distribution.
>> Does anybody know how to generate the random numbers from a
multivariate
>> normal distribution with specified correlation matrix? What about
>> other non-normal
>> distribution?
>
> What do you want the variances to be? If you don't mind that
they're
> all equal to 1, then using your correlation matrix as the Sigma argument
> to the mvrnorm() [sic] function in MASS should work fine. They have to
> be defined as *something* ....
> If you want multivariate distributions with non-normal marginal
> distributions, consider the 'copula' package, but be prepared to do
> some reading -- this is a fairly big/deep topic.
>
> good luck.
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
-----------------
Jane Chang
Queen's
[[alternative HTML version deleted]]
--Forwarded Message Attachment--
From: dwinsemius at comcast.net
CC: r-help at stat.math.ethz.ch; bbolker at gmail.com
To: rusers.sh at gmail.com
Date: Mon, 23 Aug 2010 23:52:07 -0400
Subject: Re: [R] generate random numbers from a multivariate distribution with
specified correlation matrix
On Aug 23, 2010, at 11:05 PM, rusers.sh wrote:
> Hi,
> If you see the link http://www.stata.com/help.cgi?drawnorm, and you
> can
> see an example,
> #draw a sample of 1000 observations from a bivariate standard
> normal distribution, with correlation 0.5.
> #drawnorm x y, n(1000) corr(0.5)
> This is what Stata software did. What i hope to do in R should be
> similar
> as that.
> It will be better to only need us to specify the correlation
> matrix, mean
> values and possible variances. One of my aim is to simulate random
> fields.
> Thanks.
?cov2cor
--
David.
>
> 2010/8/23 Ben Bolker
>
>> rusers.sh gmail.com> writes:
>>
>>> rmvnorm()can be used to generate the random numbers from a
>>> multivariate
>>> normal distribution with specified means and covariance matrix,
>>> but i
>> want
>>> to specify the correlation matrix instead of covariance matrix for
>>> the
>>> multivariate
>>> normal distribution.
>>> Does anybody know how to generate the random numbers from a
>>> multivariate
>>> normal distribution with specified correlation matrix? What about
>>> other non-normal
>>> distribution?
>>
>> What do you want the variances to be? If you don't mind that
>> they're
>> all equal to 1, then using your correlation matrix as the Sigma
>> argument
>> to the mvrnorm() [sic] function in MASS should work fine. They
>> have to
>> be defined as *something* ....
>> If you want multivariate distributions with non-normal marginal
>> distributions, consider the 'copula' package, but be prepared
to do
>> some reading -- this is a fairly big/deep topic.
>>
>> good luck.
>>
>
David Winsemius, MD
West Hartford, CT
--Forwarded Message Attachment--
From: jroll at lcog.org
To: r-help at r-project.org
Date: Mon, 23 Aug 2010 09:27:07 -0700
Subject: [R] Plotting multiple histograms on same panel
Hey everyone,
So I can't figure this out. When using histogram() from lattice instead
of hist() I get what I want as far as output, but using histogram() I can't
seem to figure out how to get multiple plots on the same panel.
So
par(mfrow=c(3,2))
for (i in 1:20) hist(rnorm(100),main="",cex.axis=.8)
gets me about what I want, but I want to use histogram() because it gives me
the format I want; the identical code does not work for histogram():
par(mfrow=c(3,2))
for (i in 1:20) histogram(rnorm(100),main="",cex.axis=.8)
I thought this was simply set in par(), but it doesn't seem to do any good.
Thoughts?
Thanks
JR
--
View this message in context:
http://r.789695.n4.nabble.com/Plotting-multiple-histograms-on-same-panel-tp2335426p2335426.html
Sent from the R help mailing list archive at Nabble.com.
--Forwarded Message Attachment--
From: dtran7 at student.gsu.edu
To: r-help at r-project.org
Date: Mon, 23 Aug 2010 13:40:12 -0700
Subject: Re: [R] An example of using bootrap to find confidence interval from a
corrected correlation value
Good afternoon Mr. William,
Hope you had a wonderful weekend. I am reading material on S-PLUS to try to
find an example of how to find a confidence interval from a corrected
correlation value.
I saw an example in R, but I have not yet found one in S-PLUS showing how to
do it.
If you happen to know of one, please send it to me.
I very much appreciate all of your help.
Have a wonderful day,
Minh
--
View this message in context:
http://r.789695.n4.nabble.com/Can-t-read-write-to-nonfi-tp2331707p2335787.html
Sent from the R help mailing list archive at Nabble.com.
[[alternative HTML version deleted]]
--Forwarded Message Attachment--
From: HughBBoyle at hotmail.com
To: r-help at r-project.org
Date: Mon, 23 Aug 2010 09:55:53 -0700
Subject: [R] : Automatic Debugging
Hi, I need help executing the debug() function automatically, as opposed to
line-by-line execution. Essentially I want to create something like this:
execute <- function(fname , farguments)
{
debug(fname)
....
undebug(fname)
}
Where "..." is a process which automatically runs debug until an error
is
found.
Any help would be appreciated, thanks.
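One possible sketch, not an answer from the list: rather than stepping through
with debug(), run the call and only drop into an interactive browser when an
error actually occurs, via options(error = recover):
execute <- function(fname, farguments) {
    old <- options(error = recover)   # browse the call stack at the error
    on.exit(options(old))             # restore the previous error handler
    do.call(fname, farguments)
}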
--
View this message in context:
http://r.789695.n4.nabble.com/R-Automatic-Debugging-tp2335475p2335475.html
Sent from the R help mailing list archive at Nabble.com.
--Forwarded Message Attachment--
From: cpeng at usm.maine.edu
To: r-help at r-project.org
Date: Mon, 23 Aug 2010 12:00:04 -0700
Subject: [R] How to remove all objects except a few specified objects?
How to remove all R objects in the RAM except for a few specified ones?
rm(list=ls()) removes all R objects in the R work space.
Another question: does removing all R objects actually release the RAM?
Thanks.
--
View this message in context:
http://r.789695.n4.nabble.com/How-to-remove-all-objects-except-a-few-specified-objects-tp2335651p2335651.html
Sent from the R help mailing list archive at Nabble.com.
--Forwarded Message Attachment--
From: cxzhang at ualr.edu
To: r-help at r-project.org
Date: Mon, 23 Aug 2010 15:04:41 -0700
Subject: [R] Draw a perpendicular line?
Hi,
I am trying to draw a perpendicular line from a point to a line through two
other points. Mathematically I know how to do it, but when programming it I
encounter some problems and hope I can get help. Thanks.
I have points, A, B and C. I calculate the slope and intercept for line
drawn between A and B.
I am trying to check whether I can draw a perpendicular line from C to line
AB and get the x,y value for the point D at the intersection.
Assuming I have the slope of the perpendicular line, I can express the point
D in variables x and y, which should lie on line AB. My idea was to use
|AC|*|AC| = |AD|*|AD| + |CD|*|CD|. I don't know what function I may need to
call to calculate the coordinates of point D (uniroot?).
Thank you.
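A minimal sketch (not from the original post) that avoids uniroot() by using
vector projection; the points are hypothetical:
A <- c(0, 0); B <- c(4, 2); C <- c(1, 3)
AB <- B - A
tt <- sum((C - A) * AB) / sum(AB * AB)   # scalar projection of AC onto AB
D <- A + tt * AB                         # foot of the perpendicular on line AB
plot(rbind(A, B, C, D), pch = 16, asp = 1)
segments(A[1], A[2], B[1], B[2], lty = 2)
segments(C[1], C[2], D[1], D[2], col = "red")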
--
View this message in context:
http://r.789695.n4.nabble.com/Draw-a-perpendicular-line-tp2335882p2335882.html
Sent from the R help mailing list archive at Nabble.com.
--Forwarded Message Attachment--
From: alice.ly at gmail.com
To: r-help at r-project.org
Date: Mon, 23 Aug 2010 18:00:13 -0700
Subject: [R] Read data in R
I have a txt file with column data separated by commas.
Subject,Sessionblock,LotteryImg,SubjectResp,Pictime,Bidtime,Voltage,ForcedAns
10816,Session1,75_C2.jpg,No,7095,9548,Mid,Yes
10816,Session1,25_C1.jpg,No,16629,18130,Low,Yes
10816,Session1,5_C1.jpg,No,23217,24276,Low,Yes
10816,Session1,75_C1.jpg,NULL,36359,-66179,Low,Yes
10816,Session1,25_C2.jpg,NULL,49468,-66179,Mid,Yes
10816,Session1,75_C3.jpg,Yes,60602,62119,High,Yes
I have tried to read the data with this command
data<-read.table("/mrdata/embodied_val/data/Conditioning_TIM4_082310.txt",sep
= ",",header=T)
but I get this error
Error in make.names(col.names, unique = TRUE) :
invalid multibyte string at 'C'
What am I doing wrong?
Thanks,
Alice
--
View this message in context:
http://r.789695.n4.nabble.com/Read-data-in-R-tp2336018p2336018.html
Sent from the R help mailing list archive at Nabble.com.
--Forwarded Message Attachment--
From: weiweizhang56 at hotmail.com
To: r-help at r-project.org
Date: Tue, 24 Aug 2010 03:03:46 +0000
Subject: [R] forest plot
Dear Sir or Madam,
I am trying to produce a forest plot. I extracted odds ratios and their
corresponding 95% confidence intervals from papers, then calculated the log(OR)
and standard errors, and used the following commands:
OR<-metagen(logOR,selogOR,sm="OR")
forest(OR,comb.fixed=TRUE,comb.random=TRUE,digits=2)
However, it does not produce a forest plot. Can someone kindly help? Thank you
in advance.
Best wishes
weiwei
[[alternative HTML version deleted]]
--Forwarded Message Attachment--
From: djmuser at gmail.com
CC: r-help at stat.math.ethz.ch; bbolker at gmail.com
To: rusers.sh at gmail.com
Date: Mon, 23 Aug 2010 21:43:07 -0700
Subject: Re: [R] generate random numbers from a multivariate distribution with
specified correlation matrix
Hi Jane:
On Mon, Aug 23, 2010 at 8:05 PM, rusers.sh wrote:
> Hi,
> If you see the link http://www.stata.com/help.cgi?drawnorm, and you can
> see an example,
> #draw a sample of 1000 observations from a bivariate standard
> normal distribution, with correlation 0.5.
> #drawnorm x y, n(1000) corr(0.5)
> This is what Stata software did. What i hope to do in R should be similar
> as that.
>
Using an example adapted from package mvtnorm:
library(mvtnorm)
sigma <- matrix(c(1, 0.5, 0.5, 0.5, 1, 0.5, 0.5, 0.5, 1), ncol = 3)
sigma
[,1] [,2] [,3]
[1,] 1.0 0.5 0.5
[2,] 0.5 1.0 0.5
[3,] 0.5 0.5 1.0
x <- rmvnorm(n = 1000, mean = c(1, 5, 10), sigma = sigma)
head(x, 2)
[,1] [,2] [,3]
[1,] 1.1830181 6.730525 10.687912
[2,] 2.2911587 5.978146 9.493432
cov(x)
[,1] [,2] [,3]
[1,] 0.9725893 0.4894247 0.4902096
[2,] 0.4894247 0.9782143 0.4572949
[3,] 0.4902096 0.4572949 0.9656340
colMeans(x)
[1] 0.9901327 5.0008999 10.0162695
# Same example using mvrnorm() from MASS:
library(MASS)
x2 <- mvrnorm(n = 1000, mu = c(1, 5, 10), Sigma = sigma)
head(x2, 2)
[,1] [,2] [,3]
[1,] -0.1559149 3.449327 7.967966
[2,] -0.7961951 4.636752 8.580032
cov(x2)
[,1] [,2] [,3]
[1,] 1.0786150 0.4719868 0.5082440
[2,] 0.4719868 0.9608204 0.4819515
[3,] 0.5082440 0.4819515 1.0264072
colMeans(x2)
[1] 1.042077 5.011792 10.025397
Package mvtnorm also has a function to obtain samples from multivariate-t
distributions (rmvt). See the help pages of these functions for examples and
further details.
For simulating random fields, there are two packages of which I'm aware:
RandomFields and FieldSim. It might also be worth checking out the Spatial
Task View @ CRAN to see if anything else is available to help you.
HTH,
Dennis
> It will be better to only need us to specify the correlation matrix, mean
> values and possible variances. One of my aim is to simulate random fields.
> Thanks.
>
>
> 2010/8/23 Ben Bolker
>
> rusers.sh gmail.com> writes:
>>
>>> rmvnorm()can be used to generate the random numbers from a
>> multivariate
>>> normal distribution with specified means and covariance matrix, but
i
>> want
>>> to specify the correlation matrix instead of covariance matrix for
the
>>> multivariate
>>> normal distribution.
>>> Does anybody know how to generate the random numbers from a
multivariate
>>> normal distribution with specified correlation matrix? What about
>>> other non-normal
>>> distribution?
>>
>> What do you want the variances to be? If you don't mind that
they're
>> all equal to 1, then using your correlation matrix as the Sigma
argument
>> to the mvrnorm() [sic] function in MASS should work fine. They have to
>> be defined as *something* ....
>> If you want multivariate distributions with non-normal marginal
>> distributions, consider the 'copula' package, but be prepared
to do
>> some reading -- this is a fairly big/deep topic.
>>
>> good luck.
>>
>> ______________________________________________
>> R-help at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
>
>
> --
> -----------------
> Jane Chang
> Queen's
>
[[alternative HTML version deleted]]
--Forwarded Message Attachment--
From: chainsawtiney at gmail.com
CC: r-help at r-project.org
To: weiweizhang56 at hotmail.com
Date: Tue, 24 Aug 2010 12:50:40 +0800
Subject: Re: [R] forest plot
The correct command for a forest plot should be "plot" (instead of
"forest") if you are using metagen() from the meta package.
For help:
?plot.meta
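A sketch of that suggestion, assuming logOR and selogOR are the vectors from
the original post; argument names may differ between versions of the meta
package:
library(meta)
OR <- metagen(logOR, selogOR, sm = "OR")
plot(OR, comb.fixed = TRUE, comb.random = TRUE)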
On Tue, Aug 24, 2010 at 11:03 AM, zhangweiwei wrote:
> Dear Sir or Madam,
>
>
>
> I am trying to plot forest plot. I extracted odds ratio and their
corresponding 95% confidence interval from papers, then I calculated the log(OR)
and standard error using the following command
>
> OR<-metagen(logOR,selogOR,sm="OR")
>
> forest(OR,comb.fixed=TRUE,comb.random=TRUE,digits=2)
>
>
>
> However, it does not produce a forest plot. Can someone kindly help? Thank
you in advance.
>
>
>
> Best wishes
>
> weiwei
>
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
CH Chan
--Forwarded Message Attachment--
From: djmuser at gmail.com
CC: r-help at r-project.org
To: jroll at lcog.org
Date: Mon, 23 Aug 2010 21:52:59 -0700
Subject: Re: [R] Plotting multiple histograms on same panel
Hi:
On Mon, Aug 23, 2010 at 9:27 AM, LCOG1 wrote:
>
> Hey everyone,
> So i cant figure this out. when using histogram() from lattice instead
> of hist() i get what i want as far as output. But using histogram i can
> seem to be able to figure out how to get multiple plots on the same panel.
>
> So
> par(mfrow=c(3,2))
> for (i in 1:20) hist(rnorm(100),main="",cex.axis=.8)
>
> gets me about what i want but i want to use histogram() cause it gives me
> the format i want but the identical code does not work for histogram().
>
> par(mfrow=c(3,2))
> for (i in 1:20) histogram(rnorm(100),main="",cex.axis=.8)
>
This is not the syntax to use for histogram() in the lattice package.
Risking the obvious homework theory, try this:
library(lattice)
dd <- data.frame(gp = factor(rep(paste('Group', 1:6, sep = ''), each = 100)),
                 x = rnorm(600))
histogram( ~ x | gp, data = dd)
histogram( ~ x | gp, data = dd, as.table = TRUE)
Notice how the data were constructed in the data frame. This matters. Also
observe that unlike hist(), histogram() uses a formula interface; in this
case, it reads 'generate histograms of x conditioned on groups in variable
gp, from data frame dd'.
HTH,
Dennis
>
> I thought this was simply set in par() but it doesn't seem to do any
good.
> Thoughts.
>
> Thanks
> JR
> --
> View this message in context:
>
http://r.789695.n4.nabble.com/Plotting-multiple-histograms-on-same-panel-tp2335426p2335426.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
[[alternative HTML version deleted]]
--Forwarded Message Attachment--
From: djmuser at gmail.com
CC: r-help at r-project.org
To: alice.ly at gmail.com
Date: Mon, 23 Aug 2010 22:05:25 -0700
Subject: Re: [R] Read data in R
Hi:
Here's one way, but there may be better options:
de <- read.table(textConnection("
+
Subject,Sessionblock,LotteryImg,SubjectResp,Pictime,Bidtime,Voltage,ForcedAns
+ 10816,Session1,75_C2.jpg,No,7095,9548,Mid,Yes
+ 10816,Session1,25_C1.jpg,No,16629,18130,Low,Yes
+ 10816,Session1,5_C1.jpg,No,23217,24276,Low,Yes
+ 10816,Session1,75_C1.jpg,NULL,36359,-66179,Low,Yes
+ 10816,Session1,25_C2.jpg,NULL,49468,-66179,Mid,Yes
+ 10816,Session1,75_C3.jpg,Yes,60602,62119,High,Yes"), header = TRUE,
+ as.is = TRUE, sep = ',')
> de
Subject Sessionblock LotteryImg SubjectResp Pictime Bidtime Voltage
ForcedAns
1 10816 Session1 75_C2.jpg No 7095 9548 Mid
Yes
2 10816 Session1 25_C1.jpg No 16629 18130 Low
Yes
3 10816 Session1 5_C1.jpg No 23217 24276 Low
Yes
4 10816 Session1 75_C1.jpg NULL 36359 -66179 Low
Yes
5 10816 Session1 25_C2.jpg NULL 49468 -66179 Mid
Yes
6 10816 Session1 75_C3.jpg Yes 60602 62119 High
Yes
> str(de)
'data.frame': 6 obs. of 8 variables:
$ Subject : int 10816 10816 10816 10816 10816 10816
$ Sessionblock: chr "Session1" "Session1"
"Session1" "Session1" ...
$ LotteryImg : chr "75_C2.jpg" "25_C1.jpg"
"5_C1.jpg" "75_C1.jpg" ...
$ SubjectResp : chr "No" "No" "No"
"NULL" ...
$ Pictime : int 7095 16629 23217 36359 49468 60602
$ Bidtime : int 9548 18130 24276 -66179 -66179 62119
$ Voltage : chr "Mid" "Low" "Low"
"Low" ...
$ ForcedAns : chr "Yes" "Yes" "Yes"
"Yes" ...
In the read.table() call, replace the textConnection("...blah blah...") part
with the file name; everything from header = TRUE onward should be kept in
the call.
The as.is option reads all the character variables in as character rather
than the default conversion to factor. The colClasses option of read.table()
should give you more options.
HTH,
Dennis
On Mon, Aug 23, 2010 at 6:00 PM, Allie818 wrote:
>
> I have a txt file with column data separated by commas.
>
>
>
Subject,Sessionblock,LotteryImg,SubjectResp,Pictime,Bidtime,Voltage,ForcedAns
> 10816,Session1,75_C2.jpg,No,7095,9548,Mid,Yes
> 10816,Session1,25_C1.jpg,No,16629,18130,Low,Yes
> 10816,Session1,5_C1.jpg,No,23217,24276,Low,Yes
> 10816,Session1,75_C1.jpg,NULL,36359,-66179,Low,Yes
> 10816,Session1,25_C2.jpg,NULL,49468,-66179,Mid,Yes
> 10816,Session1,75_C3.jpg,Yes,60602,62119,High,Yes
>
> I have tried to read the data with this command
>
>
data<-read.table("/mrdata/embodied_val/data/Conditioning_TIM4_082310.txt",sep
> = ",",header=T)
>
> but I get this error
> Error in make.names(col.names, unique = TRUE) :
> invalid multibyte string at 'C'
>
> What am I doing wrong?
>
> Thanks,
> Alice
> --
> View this message in context:
> http://r.789695.n4.nabble.com/Read-data-in-R-tp2336018p2336018.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
[[alternative HTML version deleted]]
--Forwarded Message Attachment--
From: jholtman at gmail.com
CC: r-help at r-project.org
To: alice.ly at gmail.com
Date: Tue, 24 Aug 2010 01:26:18 -0400
Subject: Re: [R] Read data in R
It sounds as if your data is encoded as UTF-8. You may need to specify
the fileEncoding parameter of the read.table() function.
On Mon, Aug 23, 2010 at 9:00 PM, Allie818 wrote:
> I have a txt file with column data separated by commas.
>
>
Subject,Sessionblock,LotteryImg,SubjectResp,Pictime,Bidtime,Voltage,ForcedAns
> 10816,Session1,75_C2.jpg,No,7095,9548,Mid,Yes
> 10816,Session1,25_C1.jpg,No,16629,18130,Low,Yes
> 10816,Session1,5_C1.jpg,No,23217,24276,Low,Yes
> 10816,Session1,75_C1.jpg,NULL,36359,-66179,Low,Yes
> 10816,Session1,25_C2.jpg,NULL,49468,-66179,Mid,Yes
> 10816,Session1,75_C3.jpg,Yes,60602,62119,High,Yes
>
> I have tried to read the data with this command
>
data<-read.table("/mrdata/embodied_val/data/Conditioning_TIM4_082310.txt",sep
> = ",",header=T)
>
> but I get this error
> Error in make.names(col.names, unique = TRUE) :
> invalid multibyte string at 'C'
>
> What am I doing wrong?
>
> Thanks,
> Alice
> --
> View this message in context:
http://r.789695.n4.nabble.com/Read-data-in-R-tp2336018p2336018.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Jim Holtman
Cincinnati, OH
+1 513 646 9390
What is the problem that you are trying to solve?
--Forwarded Message Attachment--
From: veepsirtt at gmail.com
To: r-help at r-project.org
Date: Mon, 23 Aug 2010 23:18:11 -0700
Subject: Re: [R] sendmailR-package-valid code needed
I could not install the Rmail package.
I got the following errors.
How should I proceed, please?
library("caTools")>
install.packages("Rmail",contriburl="http://www.math.mcmaster.ca/~bolker/R/src/contrib")
Warning: dependency ?caTools? is not available
trying URL
'http://www.math.mcmaster.ca/~bolker/R/src/contrib/Rmail_1.0.zip'
Error in download.file(url, destfile, method, mode = "wb", ...) :
cannot open URL
'http://www.math.mcmaster.ca/~bolker/R/src/contrib/Rmail_1.0.zip'
In addition: Warning message:
In download.file(url, destfile, method, mode = "wb", ...) :
cannot open: HTTP status was '404 Not Found'
Warning in download.packages(pkgs, destdir = tmpd, available = available, :
download of package 'Rmail' failed
With regards
veepsirtt
--
View this message in context:
http://r.789695.n4.nabble.com/sendmailR-package-valid-code-needed-tp2334921p2336159.html
Sent from the R help mailing list archive at Nabble.com.
--Forwarded Message Attachment--
From: petr.pikal at precheza.cz
CC: r-help at r-project.org
To: eva.nordstrom at yahoo.com
Date: Tue, 24 Aug 2010 08:52:25 +0200
Subject: [R] Odp: "easiest" way to write an R dataframe to excel?
Hi
r-help-bounces at r-project.org wrote on 23.08.2010 21:50:01:
> I am using R 2.11.1 in a Microsoft Windows 7 environment.
>
> I tried using WriteXLS, but get the message "In system(cmd) : perl not found"
>
> What is the "easiest" way to write an R dataframe to Excel? (I am familiar with
> WriteXLS, but I do not have PERL installed, and if not needed, do not wish to
> install it.)
>
> I am also familiar with write.table, but if possible, wish to create an Excel
> file from within R.
How big is your table?
If it fits into the clipboard you can
write.table(some.data.frame, "clipboard", sep="\t",
row.names=FALSE)
and press CTRL-V in Excel to paste it
If not you can
write.table(some.data.frame, "mytable.xls", sep="\t",
row.names=FALSE)
and the table is readable by Excel
Regards
Petr
>
> I'm unsure if this is possible, or perhaps i should just go ahead and
install> PERL...?
>
>
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
--Forwarded Message Attachment--
From: b.rowlingson at lancaster.ac.uk
CC: r-help at r-project.org
To: veepsirtt at gmail.com
Date: Tue, 24 Aug 2010 07:55:34 +0100
Subject: Re: [R] sendmailR-package-valid code needed
On Tue, Aug 24, 2010 at 7:18 AM, veepsirtt wrote:
> I could not install Rmail Package .
> I got the following errors.
> Then how to do.please
>
> library("caTools")
>>
install.packages("Rmail",contriburl="http://www.math.mcmaster.ca/~bolker/R/src/contrib")
>
> Warning: dependency ?caTools? is not available
> trying URL
'http://www.math.mcmaster.ca/~bolker/R/src/contrib/Rmail_1.0.zip'
> Error in download.file(url, destfile, method, mode = "wb", ...) :
> cannot open URL
> 'http://www.math.mcmaster.ca/~bolker/R/src/contrib/Rmail_1.0.zip'
> In addition: Warning message:
> In download.file(url, destfile, method, mode = "wb", ...) :
> cannot open: HTTP status was '404 Not Found'
> Warning in download.packages(pkgs, destdir = tmpd, available = available,
:
> download of package 'Rmail' failed
Unless Ben can provide a quick fix for this, you might be better off
taking his advice and using python - it comes with a full smtp
implementation that can easily send emails via gmail. Here it is in
about 12 lines of code, I've tried this and it works for me:
http://www.nixtutor.com/linux/send-mail-through-gmail-with-python/
- just replace the username and password with yours and it should
work, assuming you have enabled POP and IMAP access in your gmail
settings, which seems to be a prerequisite for google's smtp access.
You can run python from R, exactly how may depend on your operating
system, using something like R's system() function.
Here's an example that sends attachments with the email, but I've not
tried or tested this:
http://kutuma.blogspot.com/2007/08/sending-emails-via-gmail-with-python.html
Barry
--Forwarded Message Attachment--
From: gavin.simpson at ucl.ac.uk
CC: r-help at stat.math.ethz.ch
To: ivo.welch at gmail.com
Date: Tue, 24 Aug 2010 08:55:22 +0100
Subject: Re: [R] unexpected subset select results?
On Mon, 2010-08-23 at 17:51 -0400, ivo welch wrote:
> quizz---what does this produce?
Henrique has provided an answer to the question, but...
> d=data.frame( a=1:1000, b=2001:3000, z= 5001:6000 )
> attach(d); c <- (a+b)>25; detach(d)
...this is ugly and will potentially catch you out one day if you forget
to detach. These three calls can be achieved using a single with():
c <- with(d, (a + b) > 25)
And the version you wanted:
attach(d); d$c <- (a+b)>25; detach(d)
can be done using within():
d <- within(d, c <- (a + b) > 25)
and with the latter, the intention is pretty clear.
HTH
G
> d= subset(d, TRUE, select=c( a, b, c ))
>
> yes, I know I have made a mistake, in that the code does not do what I
> presumably would have wanted. it does seem like unexpected behavior,
> though, without an error. there probably is some reason why this does
> not ring an alarm bell...
>
> /iaw
> ----
> Ivo Welch (ivo.welch at brown.edu, ivo.welch at gmail.com)
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
--
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
Dr. Gavin Simpson [t] +44 (0)20 7679 0522
ECRC, UCL Geography, [f] +44 (0)20 7679 0565
Pearson Building, [e] gavin.simpsonATNOSPAMucl.ac.uk
Gower Street, London [w] http://www.ucl.ac.uk/~ucfagls/
UK. WC1E 6BT. [w] http://www.freshwaters.org.uk
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
--Forwarded Message Attachment--
From: ripley at stats.ox.ac.uk
CC: r-help at r-project.org; alice.ly at gmail.com
To: jholtman at gmail.com
Date: Tue, 24 Aug 2010 08:17:51 +0100
Subject: Re: [R] Read data in R
On Tue, 24 Aug 2010, jim holtman wrote:
> I sounds as if your data is encode as UTF-8. You may need to specify
> the fileEncoding parameter on the read.table function.
Not UTF-8 ... that's a BOM mark in UCS-2. We don't have the
sessionInfo() output that we asked for and this is an area where OSes
do differ. However, try fileEncoding = "UCS-2LE".
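A short sketch of applying that suggestion to the read.table() call from the
original post:
dat <- read.table("/mrdata/embodied_val/data/Conditioning_TIM4_082310.txt",
                  sep = ",", header = TRUE, fileEncoding = "UCS-2LE")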
>
> On Mon, Aug 23, 2010 at 9:00 PM, Allie818 wrote:
>>
>> I have a txt file with column data separated by commas.
>>
>>
Subject,Sessionblock,LotteryImg,SubjectResp,Pictime,Bidtime,Voltage,ForcedAns
>> 10816,Session1,75_C2.jpg,No,7095,9548,Mid,Yes
>> 10816,Session1,25_C1.jpg,No,16629,18130,Low,Yes
>> 10816,Session1,5_C1.jpg,No,23217,24276,Low,Yes
>> 10816,Session1,75_C1.jpg,NULL,36359,-66179,Low,Yes
>> 10816,Session1,25_C2.jpg,NULL,49468,-66179,Mid,Yes
>> 10816,Session1,75_C3.jpg,Yes,60602,62119,High,Yes
>>
>> I have tried to read the data with this command
>>
data<-read.table("/mrdata/embodied_val/data/Conditioning_TIM4_082310.txt",sep
>> = ",",header=T)
>>
>> but I get this error
>> Error in make.names(col.names, unique = TRUE) :
>> invalid multibyte string at 'C'
>>
>> What am I doing wrong?
>>
>> Thanks,
>> Alice
>> --
>> View this message in context:
http://r.789695.n4.nabble.com/Read-data-in-R-tp2336018p2336018.html
>> Sent from the R help mailing list archive at Nabble.com.
>>
>> ______________________________________________
>> R-help at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
>
>
> --
> Jim Holtman
> Cincinnati, OH
> +1 513 646 9390
>
> What is the problem that you are trying to solve?
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Brian D. Ripley, ripley at stats.ox.ac.uk
Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel: +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK Fax: +44 1865 272595
--Forwarded Message Attachment--
From: romunov at gmail.com
To: r-help at r-project.org
Date: Tue, 24 Aug 2010 00:04:29 -0700
Subject: Re: [R] How to remove all objects except a few specified objects?
a <- 1
b <- 2
c <- 3
ls()[-a] # set minus to all the objects you want to retain
rm(list = ls()[-a]) # will remove all the objects - except a
ls() # presto
--
View this message in context:
http://r.789695.n4.nabble.com/How-to-remove-all-objects-except-a-few-specified-objects-tp2335651p2336200.html
Sent from the R help mailing list archive at Nabble.com.
--Forwarded Message Attachment--
From: stefanML at collocations.de
To: r-help at stat.math.ethz.ch
Date: Tue, 24 Aug 2010 10:47:58 +0200
Subject: Re: [R] 3D stariway plot
On 24 Aug 2010, at 02:20, Ben Bolker wrote:
>> Please, is there an R function /package that allows for 3D stairway
plots
> like the attached one ?
>> In addition, how can I overlay a parametric grid plot??
>
> Not exactly, that I know of, but maybe you can adapt
>
> library(rgl)
> demo(hist3d)
>
> to do what you want.
The drawback here, of course, is that the resolution of rgl isn't good
enough if you want to include the plot in a paper. For that purpose, I've
recently used panel.3dbars() from the "latticeExtra" package. I
don't have a self-contained example at hand, but the idea is to call the
lattice function cloud() and pass panel.3dbars() as an argument:
library("latticeExtra")
cloud( ... data and options ..., panel.3d.cloud = panel.3dbars)
See ?panel.3dbars for a complete example. I don't know how flexible these
plots are and whether you can exactly reproduce your example.
Best,
Stefan
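A rough self-contained sketch of that idea, using toy data; see ?panel.3dbars
for the package's own example and the exact argument names:
library(latticeExtra)
d <- expand.grid(x = 1:5, y = 1:5)
d$z <- with(d, x + y)                      # hypothetical bar heights
cloud(z ~ x + y, data = d, panel.3d.cloud = panel.3dbars,
      xbase = 0.5, ybase = 0.5, col.facet = "grey")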
--Forwarded Message Attachment--
From: b.rowlingson at lancaster.ac.uk
CC: r-help at r-project.org
To: romunov at gmail.com
Date: Tue, 24 Aug 2010 09:55:58 +0100
Subject: Re: [R] How to remove all objects except a few specified objects?
2010/8/24 500600:
> a <- 1
> b <- 2
> c <- 3
>
> ls()[-a] # set minus to all the objects you want to retain
>
> rm(list = ls()[-a] # will remove all the objects - except a
>
> ls() # presto
Only because a=1 and a is the first item in the list! Not because you
are doing '-a'! If a is 0 then nothing gets deleted, and if a isn't a
numeric vector then it just fails.
If you want to do it by name, use match....
Barry
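A minimal sketch of doing it by name, as Barry suggests (setdiff() works as
well as match() here):
keep <- c("a", "b")               # objects to retain
rm(list = setdiff(ls(), keep))
ls()
gc()  # garbage collection; whether memory is returned to the OS is platform-dependent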
--Forwarded Message Attachment--
From: bravegag at gmail.com
To: r-help at r-project.org
Date: Tue, 24 Aug 2010 11:08:51 +0200
Subject: [R] update and rebuild all?
Hello,
I upgraded my Mac R version to the newest 2.11.1, then ran the option to
update all packages, but there was an error fetching one of them and the
process stopped. I retried updating all packages but nothing happens.
Although all my course project scripts work perfectly, is there a way, e.g. a
command, to manually fetch the most up-to-date versions and locally rebuild all
installed packages, just to make sure everything is OK? I recall there was
something like that but don't remember what command it was.
Thanks in advance,
Best regards,
Giovanni
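A hedged sketch of commands often used for this; treat them as suggestions
rather than the thread's answer:
update.packages(checkBuilt = TRUE, ask = FALSE)  # refresh packages built under an older R
# or force a reinstall of everything currently installed:
install.packages(rownames(installed.packages()))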
--Forwarded Message Attachment--
From: gdevitis at xtel.it
To: r-help at r-project.org
Date: Tue, 24 Aug 2010 02:11:06 -0700
Subject: [R] Time series clustering
Hi,
I have 1000 monthly time series (just a year) and I want to cluster them.
For instance (x):
jan 2010 feb 2010 mar 2010 apr 2010 ...
ts 1: 12300 12354550 1233 12312 ...
ts 2: 234 23232 2323 232323 ...
...
My approach is to apply the clara algorithm to the standardized data:
clara(x,k=10,stand=TRUE)->clarax
Is that a correct approach?
Thanks
Giuseppe
--
View this message in context:
http://r.789695.n4.nabble.com/Time-series-clustering-tp2336343p2336343.html
Sent from the R help mailing list archive at Nabble.com.
--Forwarded Message Attachment--
From: vmikryukov at gmail.com
To: r-help at R-project.org
Date: Tue, 24 Aug 2010 15:29:43 +0600
Subject: [R] Extract rows from a list object
Dear list members,
I need to create a table from a huge list object,
this list consists of matrices of the same size (but with different
content).
The resulting n tables should contain the same rows from all matrices.
For example:
n <- 23
x <- array(1:20, dim=c(n,6))
huge.list <- list()
for (i in 1:1000) {
huge.list[[i]] <- x }
# One of 1000 matrices
huge.list[[1]][1:4, 1:6]
[,1] [,2] [,3] [,4] [,5] [,6]
[1,] 1 4 7 10 13 16
[2,] 2 5 8 11 14 17
[3,] 3 6 9 12 15 18
[4,] 4 7 10 13 16 19
...
# The result should be like that:
# One of n tables (with the row 4 from all 1000 matrices):
[,1] [,2] [,3] [,4] [,5] [,6]
[1,] 4 7 10 13 16 19
[2,] 4 7 10 13 16 19
[3,] 4 7 10 13 16 19
[4,] 4 7 10 13 16 19
...
[999,] 4 7 10 13 16 19
[1000,] 4 7 10 13 16 19
# I tried to convert a list object to an array
ARR <- array(unlist(huge.list), dim = c(dim(huge.list[[1]]),
length(huge.list)))
# then split it and use abind function, but it didn't work
Thanks in advance!
Vladimir
--
Vladimir Mikryukov
PhD student
Institute of Plant & Animal Ecology UD RAS,
Lab. of Population and Community Ecotoxicology
[8 Marta 202, 620144, Ekaterinburg, Russia]
Tel. +7 343 210 38 58 (ext.290)
Fax: +7 343 260 82 56, +7 343 260 65 00
[[alternative HTML version deleted]]
--Forwarded Message Attachment--
From: d.rizopoulos at erasmusmc.nl
CC: r-help at r-project.org
To: vmikryukov at gmail.com
Date: Tue, 24 Aug 2010 11:40:40 +0200
Subject: Re: [R] Extract rows from a list object
try something like this:
n <- 23
x <- array(1:20, dim = c(n, 6))
huge.list <- rep(list(x), 1000)                 # 1000 copies of the example matrix
out <- lapply(1:4, function (i) {
    t(sapply(huge.list, "[", i = i, j = 1:6))   # gather row i from every matrix
})
out[[1]]   # table built from row 1 of each matrix
out[[4]]   # table built from row 4 of each matrix
I hope it helps.
Best,
Dimitris
On 8/24/2010 11:29 AM, Vladimir Mikryukov wrote:
> Dear list members,
>
> I need to create a table from a huge list object,
> this list consists of matrices of the same size (but with different
> content).
>
> The resulting n tables should contain the same rows from all matrices.
>
> For example:
> n<- 23
> x<- array(1:20, dim=c(n,6))
>
> huge.list<- list()
> for (i in 1:1000) {
> huge.list[[i]]<- x }
>
>
> # One of 1000 matrices
> huge.list[[1]][1:4, 1:6]
> [,1] [,2] [,3] [,4] [,5] [,6]
> [1,] 1 4 7 10 13 16
> [2,] 2 5 8 11 14 17
> [3,] 3 6 9 12 15 18
> [4,] 4 7 10 13 16 19
> ...
>
> # The result should be like that:
> # One of n tables (with the row 4 from all 1000 matrices):
> [,1] [,2] [,3] [,4] [,5] [,6]
> [1,] 4 7 10 13 16 19
> [2,] 4 7 10 13 16 19
> [3,] 4 7 10 13 16 19
> [4,] 4 7 10 13 16 19
> ...
> [999,] 4 7 10 13 16 19
> [1000,] 4 7 10 13 16 19
>
>
> # I tried to convert a list object to an array
> ARR<- array(unlist(huge.list), dim = c(dim(huge.list[[1]]),
> length(huge.list)))
> # then split it and use abind function, but it didn't work
>
> Thanks in advance!
> Vladimir
>
> --
> Vladimir Mikryukov
> PhD student
> Institute of Plant& Animal Ecology UD RAS,
> Lab. of Population and Community Ecotoxicology
> [8 Marta 202, 620144, Ekaterinburg, Russia]
> Tel. +7 343 210 38 58 (ext.290)
> Fax: +7 343 260 82 56, +7 343 260 65 00
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Dimitris Rizopoulos
Assistant Professor
Department of Biostatistics
Erasmus University Medical Center
Address: PO Box 2040, 3000 CA Rotterdam, the Netherlands
Tel: +31/(0)10/7043478
Fax: +31/(0)10/7043014
--Forwarded Message Attachment--
From: pdalgd at gmail.com
CC: r-help at r-project.org
To: vmikryukov at gmail.com
Date: Tue, 24 Aug 2010 11:44:03 +0200
Subject: Re: [R] Extract rows from a list object
On Aug 24, 2010, at 11:29 AM, Vladimir Mikryukov wrote:
> Dear list members,
>
> I need to create a table from a huge list object,
> this list consists of matrices of the same size (but with different
> content).
>
> The resulting n tables should contain the same rows from all matrices.
>
> For example:
> n <- 23
> x <- array(1:20, dim=c(n,6))
>
> huge.list <- list()
> for (i in 1:1000) {
> huge.list[[i]] <- x }
>
>
> # One of 1000 matrices
> huge.list[[1]][1:4, 1:6]
> [,1] [,2] [,3] [,4] [,5] [,6]
> [1,] 1 4 7 10 13 16
> [2,] 2 5 8 11 14 17
> [3,] 3 6 9 12 15 18
> [4,] 4 7 10 13 16 19
> ...
>
> # The result should be like that:
> # One of n tables (with the row 4 from all 1000 matrices):
> [,1] [,2] [,3] [,4] [,5] [,6]
> [1,] 4 7 10 13 16 19
> [2,] 4 7 10 13 16 19
> [3,] 4 7 10 13 16 19
> [4,] 4 7 10 13 16 19
> ...
> [999,] 4 7 10 13 16 19
> [1000,] 4 7 10 13 16 19
>
>
> # I tried to convert a list object to an array
> ARR <- array(unlist(huge.list), dim = c(dim(huge.list[[1]]),
> length(huge.list)))
> # then split it and use abind function, but it didn't work
You need to look in the direction of lapply() & friends
do.call(rbind, lapply(huge.list, "[", 4, ))
t(sapply(huge.list, "[", 4, TRUE))
both seem to cut the mustard. (Notice that sapply() will cbind() the results
automagically, and that for some odd reason it is more unhappy about missing
arguments than lapply is.)
For more intelligible and generalizable code, also consider
do.call(rbind, lapply(huge.list, function(x)x[4,]))
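Incidentally, the array conversion in the original post was nearly there; a
sketch of that route (assuming every matrix in the list has the same
dimensions):
ARR <- array(unlist(huge.list),
             dim = c(dim(huge.list[[1]]), length(huge.list)))  # 23 x 6 x 1000
t(ARR[4, , ])   # row 4 from every matrix, one row per list element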
--
Peter Dalgaard
Center for Statistics, Copenhagen Business School
Solbjerg Plads 3, 2000 Frederiksberg, Denmark
Phone: (+45)38153501
Email: pd.mes at cbs.dk Priv: PDalgd at gmail.com
--Forwarded Message Attachment--
From: ivan.calandra at uni-hamburg.de
To: r-help at r-project.org
Date: Tue, 24 Aug 2010 11:48:03 +0200
Subject: Re: [R] "easiest" way to write an R dataframe to excel?
Hi!
I personally use the function xlsReadWrite::write.xls(), which doesn't
require Perl to be installed. It may offer less functionality than
WriteXLS, but it is easier to use.
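A minimal sketch (the data frame name and file name below are just
placeholders):
library(xlsReadWrite)
write.xls(df, "output.xls")   # writes the data frame df to a single-sheet .xls file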
HTH,
Ivan
On 8/24/2010 00:50, Erich Neuwirth wrote:
> Search for an older message with the subject line
>
> [R] export tables to excel files on multiple sheets with titles for
each
> table
>
>
> On 8/23/2010 11:41 PM, Erik Iverson wrote:
>> In addition to the Wiki already mentioned, the following may be
>> useful:
>>
>>
http://learnr.wordpress.com/2009/10/06/export-data-frames-to-multi-worksheet-excel-file/
>>
>>
>> Eva Nordstrom wrote:
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Ivan CALANDRA
PhD Student
University of Hamburg
Biozentrum Grindel und Zoologisches Museum
Abt. Säugetiere
Martin-Luther-King-Platz 3
D-20146 Hamburg, GERMANY
+49(0)40 42838 6231
ivan.calandra at uni-hamburg.de
**********
http://www.for771.uni-bonn.de
http://webapp5.rrz.uni-hamburg.de/mammals/eng/mitarbeiter.php
[[alternative HTML version deleted]]
--Forwarded Message Attachment--
From: vmikryukov at gmail.com
CC: r-help at r-project.org
To: d.rizopoulos at erasmusmc.nl
Date: Tue, 24 Aug 2010 15:54:44 +0600
Subject: Re: [R] Extract rows from a list object
Thanks a lot!! It sure helped,
and many thanks to all the other repliers!
On Tue, Aug 24, 2010 at 3:40 PM, Dimitris Rizopoulos <
d.rizopoulos at erasmusmc.nl> wrote:
> try something like this:
>
> n <- 23
> x <- array(1:20, dim = c(n, 6))
> huge.list <- rep(list(x), 1000)
>
> out <- lapply(1:4, function (i) {
> t(sapply(huge.list, "[", i = i, j = 1:6))
> })
>
> out[[1]]
> out[[4]]
>
[[alternative HTML version deleted]]