Hi r-users,
I want to use the sn package, but when installing it I got the following message:
> install.packages(repos=NULL,pkgs="c:\\Tinn-R\\sn_0.4-12.zip")
Warning: package 'sn' is in use and will not be installed
updating HTML package descriptions
I have tried this a few times, but I get the same message each time.
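That warning usually means 'sn' is already loaded in the running R session. A
hedged sketch of one way around it (editorial, not part of the original post):
detach the loaded copy, or simply restart R, before installing:

detach("package:sn", unload = TRUE)
install.packages(repos = NULL, pkgs = "c:\\Tinn-R\\sn_0.4-12.zip")  # then retry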
________________________________
From: "r-help-request@r-project.org"
<r-help-request@r-project.org>
To: r-help@r-project.org
Sent: Thursday, May 28, 2009 7:30:06 PM
Subject: R-help Digest, Vol 75, Issue 28
Send R-help mailing list submissions to
r-help@r-project.org
To subscribe or unsubscribe via the World Wide Web, visit
https://stat.ethz.ch/mailman/listinfo/r-help
or, via email, send a message with subject or body 'help' to
r-help-request@r-project.org
You can reach the person managing the list at
r-help-owner@r-project.org
When replying, please edit your Subject line so it is more specific
than "Re: Contents of R-help digest..."
Today's Topics:
1. Intra-observer reliability (Shreyasee)
2. Re: Constrained fits: y~a+b*x-c*x^2, with a,b,c >=0
(Berwin A Turlach)
3. Multiple ANOVA tests (Imri)
4. r-plot (durden10)
5. Multivariate Transformations (Hollix)
6. Re: Neural Network resource (Tony Breyal)
7. How to write a loop? (Maithili Shiva)
8. How to exclude a column by name? (Zeljko Vrba)
9. Re: How to write a loop? (Linlin Yan)
10. Re: How to exclude a column by name? (Linlin Yan)
11. Re: How to exclude a column by name? (Paul Hiemstra)
12. Re: How to exclude a column by name? (Zeljko Vrba)
13. Re: How to exclude a column by name? (Peter Dalgaard)
14. Re: Multiple ANOVA tests (Mike Lawrence)
15. Re: r-plot (Gavin Simpson)
16. Re: Intra-observer reliability (Gavin Simpson)
17. Re: Intra-observer reliability (Jim Lemon)
18. file.move? (Stefan Uhmann)
19. Re: Intra-observer reliability (Shreyasee)
20. Re: Intra-observer reliability (Shreyasee)
21. Re: Constrained fits: y~a+b*x-c*x^2, with a,b,c >=0 (Liaw, Andy)
22. Re: How to exclude a column by name? (Henrique Dallazuanna)
23. Re: r-plot (Richard.Cotton@hsl.gov.uk)
24. Re: Harmonic Analysis (Uwe Ligges)
25. Problem with adding labels in ggplot2 (Zeljko Vrba)
26. Full likelihood from survreg (Ullrika Sahlin)
27. Re: Multiple ANOVA tests (Imri)
28. Re: Problem with adding labels in ggplot2 (Zeljko Vrba)
29. Re: split strings (Monica Pisica)
30. Re: Multivariate Transformations (stephen sefick)
31. Sort matrix by column 1 ascending then by column 2 descending
(Paul Geeleher)
32. Re: moving from Windows to Linux - need help (Millo Giovanni)
33. Warning message as a result of logistic regression performed
(Winter, Katherine)
34. Re: C4.5 implementation in R (Lazy Tiger)
35. Re: Multiple ANOVA tests (Mike Lawrence)
36. Re: r-plot (Jim Lemon)
37. Re: Harmonic Analysis (stephen sefick)
38. Re: Sort matrix by column 1 ascending then by column 2
descending (Henrique Dallazuanna)
39. Hierarchical glm with binomial family (Johan Stenberg)
40. Re: optim() question (John C Nash)
41. Re: Harmonic Analysis (Gerard M. Keogh)
42. Re: How to write a loop? (Andrew Dolman)
43. Re: file.move? (Prof Brian Ripley)
44. Re: Sort matrix by column 1 ascending then by column 2
descending (Paul Geeleher)
45. Defining functions - an interesting problem (utkarshsinghal)
46. Re: Hierarchical glm with binomial family (Ben Bolker)
47. Re: Warning message as a result of logistic regression
performed (Gavin Simpson)
48. Re: Multivariate Transformations (Gavin Simpson)
49. Re: How to exclude a column by name? (Stavros Macrakis)
50. Re: Neural Network resource (Indrajit Sengupta)
51. Re: Defining functions - an interesting problem (Thomas Lumley)
52. Re: Defining functions - an interesting problem (Gavin Simpson)
53. Re: Defining functions - an interesting problem (utkarshsinghal)
54. R in Ubunto (R Heberto Ghezzo, Dr)
55. vegan metaMDS question (stephen sefick)
56. Re: Neural Network resource (Max Kuhn)
57. Re: How to exclude a column by name? (Dieter Menne)
58. Re: R in Ubunto (stephen sefick)
59. Re: Defining functions - an interesting problem (Stavros Macrakis)
60. Re: R in Ubunto (Jeff Newmiller)
61. Object-oriented programming in R (Luc Villandre)
62. Changing point color/character in qqmath (Kevin W)
63. Re: Hierarchical glm with binomial family (Douglas Bates)
64. Re: Sort matrix by column 1 ascending then by column 2
descending (Linlin Yan)
65. Deviance explained in GAMM, library mgcv (Berta Ibáñez)
66. How to set a filter during reading tables (guox@ucalgary.ca)
67. Re: How to exclude a column by name? (Wacek Kusnierczyk)
68. Labeling barplot bars by multiple factors (Thomas Levine)
69. Re: vegan metaMDS question (Gavin Simpson)
70. Re: Sort matrix by column 1 ascending then by column 2
descending (Kevin W)
71. Re: R in Ubunto (Jarek Jasiewicz)
72. Re: Sort matrix by column 1 ascending then by column 2
descending (Duncan Murdoch)
73. Re: Labeling barplot bars by multiple factors (Mike Lawrence)
74. alternative to built-in data editor (Jose Quesada)
75. no internal function "int.unzip" in R 2.9.0 for Windows
(Carson, John)
76. "Error: package/namespace load failed" (Rebecca Sela)
77. contour lines on persp plot (Jack Siegrist)
78. Re: Neural Network resource (Indrajit Sengupta)
79. Re: alternative to built-in data editor (Peter Dalgaard)
80. r-plot 2nd attempt (durden10)
81. R-beta: Re:Stats Seminar 18/02/98 (Roland Chariatte)
82. problem with cbind (kayj)
83. interpretation of the likelihood ratio test under *R* GLM
(Michael Menke)
84. how do I get to be a member? (Michael Menke)
85. RWeka weka.core.SerializationHelper.write (Christian)
86. RWeka weka.core.SerializationHelper.write (Christian)
87. Re: Loop avoidance and logical subscripts (retama)
88. Re: Neural Network resource (Tony Breyal)
89. invert axis persp plot (Jack Siegrist)
90. a simple trick to get autoclose parenthesis on windows
(Jose Quesada)
91. Re: no internal function "int.unzip" in R 2.9.0 for Windows
(Romain Francois)
92. reduce size of plot inside window and place legend beside
plot (Wade Wall)
93. ggplot2 adding vertical line at a certain date (stephen sefick)
94. Re: reduce size of plot inside window and place legend beside
plot (baptiste auguie)
95. Factor level with no cases shows up in a plot (Arthur Burke)
96. R Books listing on R-Project (Stavros Macrakis)
97. Re: Factor level with no cases shows up in a plot
(krzysztof.sakrejda@gmail.com)
98. Re: Still can't find missing data (Farley, Robert)
99. Re: "Error: package/namespace load failed" (Martin Morgan)
100. Re: alternative to built-in data editor (Greg Snow)
101. Re: R Books listing on R-Project (G. Jay Kerns)
102. Axis label spanning multiple plots (Andre Nathan)
103. Re: how do I get to be a member? ( (Ted Harding))
104. Re: r-plot 2nd attempt (Gavin Simpson)
105. Re: Axis label spanning multiple plots (Greg Snow)
106. Re: how do I get to be a member? (Gavin Simpson)
107. Re: Factor level with no cases shows up in a plot (Stefan Grosse)
108. R: Harmonic Analysis (mauede@alice.it)
109. Re: Changing point color/character in qqmath (Kevin W)
110. Re: Linear Regression with Constraints (Emmanuel Charpentier)
111. boxplot (amorigandhi@yahoo.de)
112. Re: R: Harmonic Analysis (stephen sefick)
113. Re: boxplot (stephen sefick)
114. lattice::xyplot axis padding with fontfamily="mono"
(Benjamin Tyner)
115. How do I get removed from this mailing list? (Andrey Lyalko)
116. Re: RGoogleDocs: can now see documents but cannot get
content. (Farrel Buchinsky)
117. Re: How do I get removed from this mailing list? (Duncan Murdoch)
118. Replace is leaking? (Zhou Fang)
119. question about using a remote system (Erin Hodgess)
120. Re: Replace is leaking? (Zhou Fang)
121. Re: Replace is leaking? (Rolf Turner)
122. Re: problem with cbind (jim holtman)
123. Re: problem with cbind (Gabor Grothendieck)
124. Re: R Books listing on R-Project (Ben Bolker)
125. R help (mohsin ali)
126. R: R: Harmonic Analysis (mauede@alice.it)
127. Interaction plots as lines or bars? (Michael Kubovy)
128. PBSmapping problems with importGSHHS (Tim Clark)
129. Re: R Books listing on R-Project (G. Jay Kerns)
130. Unable to load R (anupam sinha)
131. Re: Unable to load R (Zeljko Vrba)
132. Re: Interaction plots as lines or bars? (Dieter Menne)
133. Re: question about using a remote system ( (Ted Harding))
134. Re: alternative to built-in data editor (Dieter Menne)
135. Re: question about using a remote system (Mark Wardle)
136. Re: How do I get removed from this mailing list? (Peter Dalgaard)
137. "1L" and "0L" (bogaso.christofer)
138. Re: Object-oriented programming in R (Mark Wardle)
139. Re: How do I get removed from this mailing list? (Peter Dalgaard)
140. Re: How do I get removed from this mailing list?
(Wacek Kusnierczyk)
141. Re: ggplot2 adding vertical line at a certain date (Ivan Alves)
142. optima in unimode (ms.com)
143. Re: "1L" and "0L" (Gavin Simpson)
144. Re: How do I get removed from this mailing list? (Gavin Simpson)
145. Re: optima in unimode (Gavin Simpson)
146. Re: Unable to load R (anupam sinha)
147. Re: Unable to load R (Zeljko Vrba)
148. Re: R help (Richard.Cotton@hsl.gov.uk)
149. Delta Kronecker (FMH)
----------------------------------------------------------------------
Message: 1
Date: Wed, 27 May 2009 18:08:07 +0800
From: Shreyasee <shreyasee.pradhan@gmail.com>
Subject: [R] Intra-observer reliability
To: r-help@r-project.org
Message-ID:
<dd01e5960905270308j7b58a459ua6187eca6f0ba921@mail.gmail.com>
Content-Type: text/plain
Hi,
I searched a lot on the internet but was unable to find a function for
calculating the kappa statistic for intra-observer reliability.
Can anybody help me in this regard?
Thanks,
Shreyasee
[[alternative HTML version deleted]]
------------------------------
Message: 2
Date: Wed, 27 May 2009 18:09:27 +0800
From: Berwin A Turlach <berwin@maths.uwa.edu.au>
Subject: Re: [R] Constrained fits: y~a+b*x-c*x^2, with a,b,c >=0
To: amvds@xs4all.nl
Cc: r-help@r-project.org
Message-ID: <20090527180927.1814b009@berwin-nus1>
Content-Type: text/plain; charset=US-ASCII
G'day Alex,
On Wed, 27 May 2009 11:51:39 +0200
Alex van der Spek <amvds@xs4all.nl> wrote:
> I wonder whether R has methods for constrained fitting of linear
> models.
>
> I am trying fm<-lm(y~x+I(x^2), data=dat) which most of the time gives
> indeed the coefficients of an inverted parabola. I know in advance
> that it has to be an inverted parabola with the maximum constrained to
> positive (or zero) values of x.
>
> The help pages for lm do not contain any info on constrained fitting.
>
> Does anyone know how to?
Look at the package nnls on CRAN.
According to your subject line, you are trying to solve what is known
as a quadratic program, and there are at least two quadratic
programming solvers (ipop in kernlab and solve.QP in quadprog)
available for R.
HTH.
Cheers,
Berwin
=========================== Full address ============================
Berwin A Turlach                            Tel.: +65 6516 4416 (secr)
Dept of Statistics and Applied Probability        +65 6516 6650 (self)
Faculty of Science                          FAX : +65 6872 3919
National University of Singapore
6 Science Drive 2, Blk S16, Level 7         e-mail: statba@nus.edu.sg
Singapore 117546                            http://www.stat.nus.edu.sg/~statba
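For reference, a minimal sketch (editorial, not from Berwin's reply) of the
quadratic-programming route with solve.QP from quadprog, for the model
y ~ a + b*x - c*x^2 with a, b, c >= 0; the data here are simulated:

library(quadprog)
set.seed(1)
x <- seq(0, 10, length.out = 50)
y <- 1 + 2 * x - 0.3 * x^2 + rnorm(50)
X <- cbind(1, x, -x^2)               # columns correspond to a, b, c
D <- crossprod(X)                    # X'X
d <- drop(crossprod(X, y))           # X'y
A <- diag(3)                         # constraints: a >= 0, b >= 0, c >= 0
fit <- solve.QP(Dmat = D, dvec = d, Amat = A, bvec = rep(0, 3))
fit$solution                         # constrained least-squares estimates of (a, b, c)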
------------------------------
Message: 3
Date: Wed, 27 May 2009 03:11:08 -0700 (PDT)
From: Imri <bisrael@agri.huji.ac.il>
Subject: [R] Multiple ANOVA tests
To: r-help@r-project.org
Message-ID: <23739615.post@talk.nabble.com>
Content-Type: text/plain; charset=UTF-8
Hi all -
I'm trying to do multiple one-way ANOVA tests of different factors on the
same variable. As a result I have a list with all the ANOVA tables, for
example:
$X11_20502
Analysis of Variance Table
Response: MPH
Df Sum Sq Mean Sq F value Pr(>F)
x 3 369.9 123.3 6.475 0.0002547 ***
Residuals 635 12093.2 19.0
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
$X11_21067
Analysis of Variance Table
Response: MPH
Df Sum Sq Mean Sq F value Pr(>F)
x 1 26.7 26.7 1.3662 0.2429
Residuals 637 12436.4 19.5
$X11_10419
Analysis of Variance Table
Response: MPH
Df Sum Sq Mean Sq F value Pr(>F)
x 3 527.8 175.9 9.361 4.621e-06 ***
Residuals 635 11935.3 18.8
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
My question is: how can I extract just the Pr(>F) values for each x from this list?
--
View this message in context:
http://www.nabble.com/Multiple-ANOVA-tests-tp23739615p23739615.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 4
Date: Wed, 27 May 2009 02:52:35 -0700 (PDT)
From: durden10 <durdantyler@gmx.net>
Subject: [R] r-plot
To: r-help@r-project.org
Message-ID: <23739356.post@talk.nabble.com>
Content-Type: text/plain; charset=us-ascii
Dear R-community
I have a grueling problem which appears to be impossible to solve:
I want to make a simple plot, here is my code: http://gist.github.com/118550
Unfortunately, the annotation of both the x- and y-axes is not correct, as
you can see in the following picture:
http://www.nabble.com/file/p23739356/plot.png
I am not an expert in R, so maybe someone can point me to the solution of
this problem, i.e. both axes should start and end at the min / max
values of the two vectors.
Thanks in advance!!
Best,
Durden
--
View this message in context:
http://www.nabble.com/r-plot-tp23739356p23739356.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 5
Date: Wed, 27 May 2009 02:26:18 -0700 (PDT)
From: Hollix <Holger.steinmetz@web.de>
Subject: [R] Multivariate Transformations
To: r-help@r-project.org
Message-ID: <23739013.post@talk.nabble.com>
Content-Type: text/plain; charset=UTF-8
Hello folks,
many multivariate analyses (e.g., structural equation modeling) require
multivariate normal distributions.
Real data, however, most often depart significantly from the multivariate normal
distribution. Some researchers (e.g., Yuan et al., 2000) have proposed a
multivariate transformation of the variables.
Can you tell me if and how such a transformation can be handled in R?
Thanks in advance.
With best regards
Holger
---------------
Yuan, K.-H., Chan, W., & Bentler, P. M. (2000). Robust transformation with
applications to structural equation modeling. British Journal of
Mathematical and Statistical Psychology, 53, 31-50.
--
View this message in context:
http://www.nabble.com/Multivariate-Transformations-tp23739013p23739013.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 6
Date: Wed, 27 May 2009 02:40:50 -0700 (PDT)
From: Tony Breyal <tony.breyal@googlemail.com>
Subject: Re: [R] Neural Network resource
To: r-help@r-project.org
Message-ID:
<96e1d9e5-8321-48cd-9dee-dc1e45345530@h11g2000yqb.googlegroups.com>
Content-Type: text/plain; charset=ISO-8859-1
There's a link on the CRAN page for the AMORE package which appears to
have some cool information:
http://wiki.r-project.org/rwiki/doku.php?id=packages:cran:amore
Seems like an interesting package, I hadn't actually heard of it
before your post.
HTH,
Tony
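If AMORE-specific material stays hard to find, a quick editorial aside (a
different package, not a substitute for AMORE documentation): the nnet package
fits a single-hidden-layer network with very little ceremony, e.g.

library(nnet)
set.seed(1)
fit <- nnet(Species ~ ., data = iris, size = 3, maxit = 200, trace = FALSE)
table(predicted = predict(fit, iris, type = "class"), observed = iris$Species)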
> Hi All,
>
> I am trying to learn Neural Networks. I found that R has packages which can
help build Neural Nets - the popular one being AMORE package. Is there any book
/ resource available which guides us in this subject using the AMORE package?
>
> Any help will be much appreciated.
>
> Thanks,
> Indrajit
>
> ______________________________________________
> R-h...@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 7
Date: Wed, 27 May 2009 16:04:19 +0530 (IST)
From: Maithili Shiva <maithili_shiva@yahoo.com>
Subject: [R] How to write a loop?
To: r-help@r-project.org
Message-ID: <344168.4384.qm@web94903.mail.in2.yahoo.com>
Content-Type: text/plain
Dear R helpers,
Following is an R script I am using to run the Fast Fourier Transform. The csv
file has 10 columns with titles m1, m2, m3, ..., m10.
When I use the following commands, I am getting the required results. The
problem is that if there are 100 columns, it is not wise to define 100 commands
as fk <- ONS$mk and so on. Thus, I need some guidance to write a loop for
STEP A and STEP B.
Thanking in advance
Regards
Maithili
My R Script
-----------------------------------------------------------------------------------------------
ONS <- read.csv("fast fourier transform.csv", header = TRUE)
# STEP A
f1 <- ONS$m1
f2 <- ONS$m2
f3 <- ONS$m3
f4 <- ONS$m4
f5 <- ONS$m5
f6 <- ONS$m6
f7 <- ONS$m7
f8 <- ONS$m8
f9 <- ONS$m9
f10 <- ONS$m10
#____________________________________________________________________________________________
# STEP B
g1 <- fft(f1)
g2 <- fft(f2)
g3 <- fft(f3)
g4 <- fft(f4)
g5 <- fft(f5)
g6 <- fft(f6)
g7 <- fft(f7)
g8 <- fft(f8)
g9 <- fft(f9)
g10 <- fft(f10)
#____________________________________________________________________________________________
h <- g1*g2*g3*g4*g5*g6*g7*g8*g9*g10
j <- fft((h), inverse = TRUE)/length(h)
#____________________________________________________________________________________________
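A loop-free sketch of STEP A and STEP B (an editorial illustration, assuming
every column of the CSV is numeric):

ONS <- read.csv("fast fourier transform.csv", header = TRUE)
g <- lapply(ONS, fft)                    # fft() of every column, however many there are
h <- Reduce(`*`, g)                      # elementwise product of all the transforms
j <- fft(h, inverse = TRUE) / length(h)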
Cricket on your mind? Visit the ultimate cricket website. Enter
[[alternative HTML version deleted]]
------------------------------
Message: 8
Date: Wed, 27 May 2009 12:37:51 +0200
From: Zeljko Vrba <zvrba@ifi.uio.no>
Subject: [R] How to exclude a column by name?
To: r-help@r-project.org
Message-ID: <20090527103751.GB1197@anakin.ifi.uio.no>
Content-Type: text/plain; charset=us-ascii
Given an arbitrary data frame, it is easy to exclude a column given its index:
df[,-2]. How to do the same thing given the column name? A naive attempt
df[,-"name"] did not work :)
------------------------------
Message: 9
Date: Wed, 27 May 2009 18:51:07 +0800
From: Linlin Yan <yanlinlin82@gmail.com>
Subject: Re: [R] How to write a loop?
Cc: r-help@r-project.org
Message-ID:
<8d4c23b10905270351l5421787bv7a5affdc3d038a02@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Why did you use different variable names rather than index of list/data.frame?
On Wed, May 27, 2009 at 6:34 PM, Maithili Shiva
> Dear R helpers,
>
> Following is an R script I am using to run the Fast Fourier Transform. The
> csv file has 10 columns with titles m1, m2, m3, ..., m10.
>
> When I use the following commands, I am getting the required results. The
> problem is that if there are 100 columns, it is not wise to define 100 commands
> as fk <- ONS$mk and so on. Thus, I need some guidance to write a loop for
> STEP A and STEP B.
>
> Thanking in advance
>
> Regards
>
> Maithili
>
>
>
> My R Script
>
>
-----------------------------------------------------------------------------------------------
>
> ONS <- read.csv("fast fourier transform.csv", header = TRUE)
>
> # STEP A
>
> f1 <- ONS$m1
> f2 <- ONS$m2
> f3 <- ONS$m3
> f4 <- ONS$m4
> f5 <- ONS$m5
> f6 <- ONS$m6
> f7 <- ONS$m7
> f8 <- ONS$m8
> f9 <- ONS$m9
> f10 <- ONS$m10
>
> #____________________________________________________________________________________________
>
> # STEP B
>
> g1 <- fft(f1)
> g2 <- fft(f2)
> g3 <- fft(f3)
> g4 <- fft(f4)
> g5 <- fft(f5)
> g6 <- fft(f6)
> g7 <- fft(f7)
> g8 <- fft(f8)
> g9 <- fft(f9)
> g10 <- fft(f10)
>
> #____________________________________________________________________________________________
>
> h <- g1*g2*g3*g4*g5*g6*g7*g8*g9*g10
>
> j <- fft((h), inverse = TRUE)/length(h)
>
> #____________________________________________________________________________________________
>
> Cricket on your mind? Visit the ultimate cricket website. Enter http://beta.cricket.yahoo.com
>
> [[alternative HTML version deleted]]
>
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
>
------------------------------
Message: 10
Date: Wed, 27 May 2009 18:55:35 +0800
From: Linlin Yan <yanlinlin82@gmail.com>
Subject: Re: [R] How to exclude a column by name?
To: Zeljko Vrba <zvrba@ifi.uio.no>
Cc: r-help@r-project.org
Message-ID:
<8d4c23b10905270355r5a7e6f95ub39ed3613bb66919@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Hope this helps:
> df <- data.frame(matrix(1:10,2))
> df
  X1 X2 X3 X4 X5
1  1  3  5  7  9
2  2  4  6  8 10
> df[,-2]
  X1 X3 X4 X5
1  1  5  7  9
2  2  6  8 10
> df[,-which(names(df)=="X2")]
  X1 X3 X4 X5
1  1  5  7  9
2  2  6  8 10
On Wed, May 27, 2009 at 6:37 PM, Zeljko Vrba <zvrba@ifi.uio.no> wrote:
> Given an arbitrary data frame, it is easy to exclude a column given its index:
> df[,-2]. How to do the same thing given the column name? A naive attempt
> df[,-"name"] did not work :)
------------------------------
Message: 11
Date: Wed, 27 May 2009 12:52:41 +0200
From: Paul Hiemstra <p.hiemstra@geo.uu.nl>
Subject: Re: [R] How to exclude a column by name?
To: Zeljko Vrba <zvrba@ifi.uio.no>
Cc: r-help@r-project.org
Message-ID: <4A1D1B79.7060600@geo.uu.nl>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Zeljko Vrba wrote:
> Given an arbitrary data frame, it is easy to exclude a column given its index:
> df[,-2]. How to do the same thing given the column name? A naive attempt
> df[,-"name"] did not work :)
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
Hi,
This piece of code does the trick. Most important is the which() command:
df = data.frame(a = runif(10), b = runif(10))
df[,-which(names(df) == "a")]
cheers,
Paul
--
Drs. Paul Hiemstra
Department of Physical Geography
Faculty of Geosciences
University of Utrecht
Heidelberglaan 2
P.O. Box 80.115
3508 TC Utrecht
Phone: +3130 274 3113 Mon-Tue
Phone: +3130 253 5773 Wed-Fri
http://intamap.geo.uu.nl/~paul
------------------------------
Message: 12
Date: Wed, 27 May 2009 13:05:37 +0200
From: Zeljko Vrba <zvrba@ifi.uio.no>
Subject: Re: [R] How to exclude a column by name?
To: Paul Hiemstra <p.hiemstra@geo.uu.nl>
Cc: r-help@r-project.org
Message-ID: <20090527110537.GC1197@anakin.ifi.uio.no>
Content-Type: text/plain; charset=us-ascii
On Wed, May 27, 2009 at 12:52:41PM +0200, Paul Hiemstra wrote:
>
> This piece of code does the trick. Most important is the which() command:
>
> df = data.frame(a = runif(10), b = runif(10))
> df[,-which(names(df) == "a")]
>
Thanks to you and Linlin. It did not occur to me to use which(); I thought
that there would be a shorter way to accomplish this since names are
first-class indices for data frames and arrays. (Or are they? What happens
under the hood when I write df[,"a"]?)
------------------------------
Message: 13
Date: Wed, 27 May 2009 13:08:26 +0200
From: Peter Dalgaard <P.Dalgaard@biostat.ku.dk>
Subject: Re: [R] How to exclude a column by name?
To: Paul Hiemstra <p.hiemstra@geo.uu.nl>
Cc: r-help@r-project.org
Message-ID: <4A1D1F2A.2090904@biostat.ku.dk>
Content-Type: text/plain; charset=UTF-8
Paul Hiemstra wrote:
> Zeljko Vrba wrote:
>> Given an arbitrary data frame, it is easy to exclude a column given
>> its index:
>> df[,-2]. How to do the same thing given the column name? A naive
>> attempt
>> df[,-"name"] did not work :)
>>
>> ______________________________________________
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
> Hi,
>
> This piece of code does the trick. Most important is the which() command:
>
> df = data.frame(a = runif(10), b = runif(10))
> df[,-which(names(df) == "a")]
>
You don't actually need which() (and the approach runs into problems if
"a" isn't there). Just select the others:
df[, names(df) != "a"]
Or, BTW, you can use within()
aq <- within(airquality, rm(Day))
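A quick illustration (editorial, not part of the reply) of name-based
exclusion on a toy data frame; setdiff() and subset() are further options
beyond the ones shown above:

df <- data.frame(a = 1:3, b = 4:6, c = 7:9)
df[, names(df) != "a"]                   # drop one column by name
df[, setdiff(names(df), c("a", "c"))]    # drop several by name
subset(df, select = -a)                  # subset() accepts unquoted column names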
--
O__ ---- Peter Dalgaard Øster Farimagsgade 5, Entr.B
c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
(*) \(*) -- University of Copenhagen Denmark Ph: (+45) 35327918
~~~~~~~~~~ - (p.dalgaard@biostat.ku.dk) FAX: (+45) 35327907
------------------------------
Message: 14
Date: Wed, 27 May 2009 08:10:36 -0300
From: Mike Lawrence <Mike.Lawrence@dal.ca>
Subject: Re: [R] Multiple ANOVA tests
To: Imri <bisrael@agri.huji.ac.il>
Cc: r-help@r-project.org
Message-ID:
<37fda5350905270410h38bb4026g622354c033863083@mail.gmail.com>
Content-Type: text/plain; charset=windows-1252
#create some data
y=rnorm(20)
x=factor(rep(c('A','B'),each=10))
#run the anova
my_aov = aov(y~x)
#summarize the anova
my_aov_summary = summary(my_aov)
#show the anova summary
print(my_aov_summary)
#lets see what's in the summary object
str(my_aov_summary)
#looks like it's a list with 1 element which
#in turn is a data frame with columns.
#The "Pr(>F)" column looks like what we want
my_aov_summary[[1]]$P
#yup, that's it. Grab the first value
p = my_aov_summary[[1]]$P[1]
On Wed, May 27, 2009 at 7:11 AM, Imri <bisrael@agri.huji.ac.il> wrote:
> Hi all -
> I'm trying to do multiple one-way ANOVA tests of different factors on
the
> same variable. As a result I have a list with all the ANOVA tables, for
> exemple:
>
> $X11_20502
> Analysis of Variance Table
>
> Response: MPH
>            Df  Sum Sq Mean Sq F value    Pr(>F)
> x            3   369.9   123.3   6.475 0.0002547 ***
> Residuals  635 12093.2    19.0
> ---
> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
>
> $X11_21067
> Analysis of Variance Table
>
> Response: MPH
>            Df  Sum Sq Mean Sq F value Pr(>F)
> x            1    26.7    26.7  1.3662 0.2429
> Residuals  637 12436.4    19.5
>
> $X11_10419
> Analysis of Variance Table
>
> Response: MPH
>            Df  Sum Sq Mean Sq F value    Pr(>F)
> x            3   527.8   175.9   9.361 4.621e-06 ***
> Residuals  635 11935.3    18.8
> ---
> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
>
> My question is how can I extract from this list, just the Pr(>F) values
for
> each x ?
> --
> View this message in context:
http://www.nabble.com/Multiple-ANOVA-tests-tp23739615p23739615.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Mike Lawrence
Graduate Student
Department of Psychology
Dalhousie University
Looking to arrange a meeting? Check my public calendar:
http://tr.im/mikes_public_calendar
~ Certainty is folly... I think. ~
------------------------------
Message: 15
Date: Wed, 27 May 2009 12:44:27 +0100
From: Gavin Simpson <gavin.simpson@ucl.ac.uk>
Subject: Re: [R] r-plot
To: durden10 <durdantyler@gmx.net>
Cc: r-help@r-project.org
Message-ID: <1243424667.2975.18.camel@desktop.localhost>
Content-Type: text/plain; charset="us-ascii"
On Wed, 2009-05-27 at 02:52 -0700, durden10 wrote:
> Dear R-community
>
> I have a grueling problem which appears to be impossible to solve:
> I want to make a simple plot, here is my code:
http://gist.github.com/118550
> Unfortunately, the annotation of both the x- and y-axis are not correct, as
> you can see in the following picture:
> http://www.nabble.com/file/p23739356/plot.png
> I am not an expert of R, so maybe someone can point me to the solution of
> this problem, i.e. both of the axes should start and end at the min / max
> values of the two vectors.
But you asked it to do that, explicitly:
par(tcl=0.35,xaxs="r")
xaxs = "r" is the default and if you read ?par it will tell you that
this extends the range of the plot by 4%. Pretty labels are then found
within this range.
Try xaxs = "i" and yaxs = "i" in your call instead.
Then you do this:
axis(2, tcl=0.35,at=0:11)
But 11 is outside the range of the plotted data (+4%) so this tick isn't
drawn. The plot() call sets up the region - a subsequent call to axis()
won't change the axis limits. If you want it to extend up to 11, then
add:
ylim = c(0,11) in your call to plot.
Note also that either the tcl in the first par() call or the ones in the
two axis calls is redundant. Use one or the other.
Here is a simplified example:
set.seed(123)
y <- 0:11 + rnorm(12)
x <- runif(12)
## if you want the 4% padding, then xaxs = "r" etc
op <- par(xaxs = "i", yaxs = "i", tcl = 0.35)
plot(x, y, ylim = c(0,11), axes = FALSE)
axis(2, at = 0:11)
axis(1)
box()
par(op)
Finally, as you have your data in a DF, you could make use of this
instead of relying on getting the ordering correct, and also simplify
your lm call:
plot(Calgary ~ Win, data = data_corr, ....)
and
abline(lm(Calgary ~ Win, data = data_corr, ....))
would be a better way to make use of the formula interface, and be
explicit in the plot about which variable is on the x and which is on
the y axis.
HTH
G
>
[[elided Yahoo spam]]>
> Best,
> Durden
--
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
Dr. Gavin Simpson [t] +44 (0)20 7679 0522
ECRC, UCL Geography, [f] +44 (0)20 7679 0565
Pearson Building, [e] gavin.simpsonATNOSPAMucl.ac.uk
Gower Street, London [w] http://www.ucl.ac.uk/~ucfagls/
UK. WC1E 6BT. [w] http://www.freshwaters.org.uk
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 197 bytes
Desc: This is a digitally signed message part
URL:
<https://stat.ethz.ch/pipermail/r-help/attachments/20090527/bc7b0c0a/attachment-0001.bin>
------------------------------
Message: 16
Date: Wed, 27 May 2009 12:47:50 +0100
From: Gavin Simpson <gavin.simpson@ucl.ac.uk>
Subject: Re: [R] Intra-observer reliability
To: Shreyasee <shreyasee.pradhan@gmail.com>
Cc: r-help@r-project.org
Message-ID: <1243424870.2975.22.camel@desktop.localhost>
Content-Type: text/plain; charset="us-ascii"
On Wed, 2009-05-27 at 18:08 +0800, Shreyasee wrote:
> Hi,
>
> I searched a lot on the internet but was unable to find the function for
> calculating the kappa statistics for intra-observer reliabilty.
> Can anybody help me in the this regards.
See classAgreement() in package e1071, for example. There are others.
You could have found this yourself using the search tools provided. See:
RSiteSearch("Cohen and Kappa", restrict = "functions")
for other related functions.
HTH
G
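A minimal sketch (editorial, not from the reply) of classAgreement() applied
to two ratings of the same cases by one observer; the ratings are made up:

library(e1071)
rating1 <- factor(c("a", "b", "b", "a", "c", "c"))
rating2 <- factor(c("a", "b", "a", "a", "c", "b"))
classAgreement(table(rating1, rating2))$kappa   # Cohen's kappa for the two ratings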
>
> Thanks,
> Shreyasee
--
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
Dr. Gavin Simpson [t] +44 (0)20 7679 0522
ECRC, UCL Geography, [f] +44 (0)20 7679 0565
Pearson Building, [e] gavin.simpsonATNOSPAMucl.ac.uk
Gower Street, London [w] http://www.ucl.ac.uk/~ucfagls/
UK. WC1E 6BT. [w] http://www.freshwaters.org.uk
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 197 bytes
Desc: This is a digitally signed message part
URL:
<https://stat.ethz.ch/pipermail/r-help/attachments/20090527/3f32ceef/attachment-0001.bin>
------------------------------
Message: 17
Date: Wed, 27 May 2009 21:55:58 +1000
From: Jim Lemon <jim@bitwrit.com.au>
Subject: Re: [R] Intra-observer reliability
To: Shreyasee <shreyasee.pradhan@gmail.com>
Cc: r-help@r-project.org
Message-ID: <4A1D2A4E.6000303@bitwrit.com.au>
Content-Type: text/plain; charset="iso-8859-1";
Format="flowed"
Shreyasee wrote:
> Hi,
>
> I searched a lot on the internet but was unable to find the function for
> calculating the kappa statistics for intra-observer reliabilty.
> Can anybody help me in the this regards.
>
>
Hi Shreyasee,
Thanks for reminding me that I had promised to rewrite Tore
Wentzel-Larsen's relInterIntra function for the irr package. I had
completely lost track of it. Attached is the function.
Jim
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: relInterIntra.R
URL:
<https://stat.ethz.ch/pipermail/r-help/attachments/20090527/8cc0490c/attachment-0001.pl>
------------------------------
Message: 18
Date: Wed, 27 May 2009 13:43:34 +0200
From: Stefan Uhmann <stefan.uhmann@googlemail.com>
Subject: [R] file.move?
To: r-help@r-project.org
Message-ID: <4A1D2766.7030603@gmail.com>
Content-Type: text/plain; charset=windows-1252; format=flowed
Dear list,
I want to move some files that should keep their time stamps, which is
not the case if I use file.copy in combination with file.remove.
file.move would be nice, is there a package providing such a function?
Regards,
Stefan
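In base R, file.rename() moves a file without touching its timestamps as long
as source and target are on the same filesystem; a hedged sketch with made-up
paths:

ok <- file.rename("data/report_2009.csv", "archive/report_2009.csv")
if (!ok) warning("rename failed (perhaps a cross-filesystem move); a copy + remove fallback is needed")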
------------------------------
Message: 19
Date: Wed, 27 May 2009 19:54:11 +0800
From: Shreyasee <shreyasee.pradhan@gmail.com>
Subject: Re: [R] Intra-observer reliability
To: Jim Lemon <jim@bitwrit.com.au>
Cc: r-help@r-project.org
Message-ID:
<dd01e5960905270454y36244d75t79b475d1e479a38c@mail.gmail.com>
Content-Type: text/plain
Thanks a lot
On Wed, May 27, 2009 at 7:55 PM, Jim Lemon <jim@bitwrit.com.au> wrote:
> Shreyasee wrote:
>
>> Hi,
>>
>> I searched a lot on the internet but was unable to find the function
for
>> calculating the kappa statistics for intra-observer reliabilty.
>> Can anybody help me in the this regards.
>>
>>
>>
> Hi Shreyasee,
> Thanks for reminding me that I had promised to rewrite Tore
> Wentzel-Larsen's relInterIntra function for the irr package. I had
> completely lost track of it. Attached is the function.
>
> Jim
>
>
> # relInterIntra
> # gives the reliability coefficients in the article by
> # Eliasziw et. al. 1994; Phys. Therapy 74.8; 777-788.
> # all references in this function are to this article.
> # Arguments
> # x: data frame representing a data structure as in Table 1 (p 779),
> # with consecutive measurements for each rater in adjacent columns.
> # i.e. rater1measure1 rater1measure2 ... rater2measure1 rater2measure2
> # rho0inter: null hypothesis value of the interrater reliability
> coefficient
> # rho0intra: null hypothesis value of the intrarater reliability
> coefficient
> # conf.level: confidence level of the one-sided confidence intervals
> reported
> # for the reliability coefficients
> # output reformatted as an "irrlist" structure - Jim Lemon 2009-05-27
>
> relInterIntra<-function(x,nrater=1,raterLabels=NULL,
> rho0inter=0.6,rho0intra=0.8,conf.level=.95) {
>
> xdim<-dim(x)
> nsubj<-xdim[1]
> nmeas<-xdim[2]/nrater
> if(is.null(raterLabels)) raterLabels<-letters[1:nrater]
> Frame1<-data.frame(cbind(rep(1:nsubj,nrater*nmeas),
> rep(1:nrater,rep(nsubj*nmeas,nrater)),
> rep(rep(1:nmeas,rep(nsubj,nmeas)),nrater),
> matrix(as.matrix(x),ncol=1)))
>
names(Frame1)<-c('Subject','Rater','Repetition','Result')
> Frame1$Subject<-factor(Frame1$Subject)
> Frame1$Rater<-factor(Frame1$Rater,labels=raterLabels)
> Frame1$Repetition<-factor(Frame1$Repetition)
> # this and following two commands:
> # aliases for compatibility with Eliasziw et. al. notation
> nn<-nsubj
> tt<-nrater
> mm<-nmeas
> aovFull<-aov(Result~Subject*Rater,data=Frame1)
> meanSquares<-summary(aovFull)[[1]][,3]
> for(raterAct in 1:tt) {
> raterActCat<-raterLabels[raterAct]
> aovAct<-aov(Result~Subject,data=Frame1[Frame1$Rater==raterActCat,])
> meanSquares<-c(meanSquares,summary(aovAct)[[1]][2,3])
> }
> names(meanSquares)<-
>
c('MSS','MSR','MSSR','MSE',paste('MSE',levels(Frame1$Rater),sep=''))
> MSS<-meanSquares[1]
> MSR<-meanSquares[2]
> MSSR<-meanSquares[3]
> MSE<-meanSquares[4]
> # the same for random and fixed, see table 2 (p. 780) and 3 (p. 281)
> MSEpart<-meanSquares[-(1:4)]
> sighat2Srandom<-(MSS-MSSR)/(mm*tt)
> sighat2Rrandom<-(MSR-MSSR)/(mm*nn)
> sighat2SRrandom<-(MSSR-MSE)/mm
> # the same for random and fixed, see table 2 (p. 780) and 3 (p. 281)
> sighat2e<-MSE
> sighat2Sfixed<-(MSS-MSE)/(mm*tt)
> sighat2Rfixed<-(MSR-MSSR)/(mm*nn)
> sighat2SRfixed<-(MSSR-MSE)/mm
> # the same for random and fixed, see table 2 (p. 780) and 3 (p. 281)
> sighat2e.part<-MSEpart
> rhohat.inter.random<-sighat2Srandom/
> (sighat2Srandom+sighat2Rrandom+sighat2SRrandom+sighat2e)
> rhohat.inter.fixed<-(sighat2Sfixed-sighat2SRfixed/tt)/
> (sighat2Sfixed+(tt-1)*sighat2SRfixed/tt+sighat2e)
> rhohat.intra.random<-(sighat2Srandom+sighat2Rrandom+sighat2SRrandom)/
> (sighat2Srandom+sighat2Rrandom+sighat2SRrandom+sighat2e)
> rhohat.intra.fixed<-(sighat2Sfixed+(tt-1)*sighat2SRfixed/tt)/
> (sighat2Sfixed+(tt-1)*sighat2SRfixed/tt+sighat2e)
>
rhohat.intra.random.part<-(sighat2Srandom+sighat2Rrandom+sighat2SRrandom)/
> (sighat2Srandom+sighat2Rrandom+sighat2SRrandom+sighat2e.part)
> rhohat.intra.fixed.part<-(sighat2Sfixed+(tt-1)*sighat2SRfixed/tt)/
> (sighat2Sfixed+(tt-1)*sighat2SRfixed/tt+sighat2e.part)
> Finter<-(1-rho0inter)*MSS/((1+(tt-1)*rho0inter)*MSSR)
> Finter.p<-1-pf(Finter,df1=nn-1,df2=(nn-1)*(tt-1))
> alpha<-1-conf.level
> nu1<-(nn-1)*(tt-1)*
> (tt*rhohat.inter.random*(MSR-MSSR)+
> nn*(1+(tt-1)*rhohat.inter.random)*MSSR+
> nn*tt*(mm-1)*rhohat.inter.random*MSE)^2/
> ((nn-1)*(tt*rhohat.inter.random)^2*MSR^2+
> (nn*(1+(tt-1)*rhohat.inter.random)-tt*rhohat.inter.random)^2*MSSR^2+
> (nn-1)*(tt-1)*(nn*tt*(mm-1))*rhohat.inter.random^2*MSE^2)
> nu2<-(nn-1)*(tt-1)*
> (nn*(1+(tt-1)*rhohat.inter.fixed)*MSSR+
> nn*tt*(mm-1)*rhohat.inter.fixed*MSE)^2/
> ((nn*(1+(tt-1)*rhohat.inter.fixed))^2*MSSR^2+
> (nn-1)*(tt-1)*(nn*tt*(mm-1))*rhohat.inter.fixed^2*MSE^2)
> F1<-qf(1-alpha,df1=nn-1,df2=nu1)
> F2<-qf(1-alpha,df1=nn-1,df2=nu2)
> lowinter.random<-nn*(MSS-F1*MSSR)/
> (nn*MSS+F1*(tt*(MSR-MSSR)+nn*(tt-1)*MSSR+nn*tt*(mm-1)*MSE))
> lowinter.random<-min(c(lowinter.random,1))
> lowinter.fixed<-nn*(MSS-F2*MSSR)/
> (nn*MSS+F2*(nn*(tt-1)*MSSR+nn*tt*(mm-1)*MSE))
> lowinter.fixed<-min(c(lowinter.fixed,1))
> Fintra<-(1-rho0intra)*MSS/((1+(mm-1)*rho0intra)*MSE*tt)
> Fintra.p<-1-pf(Fintra,df1=nn-1,df2=nn*(mm-1))
> Fintra.part<-(1-rho0intra)*MSS/((1+(mm-1)*rho0intra)*MSEpart*tt)
> Fintra.part.p<-1-pf(Fintra.part,df1=nn-1,df2=nn*(mm-1))
> F3<-qf(1-alpha,df1=nn-1,df2=nn*(mm-1))
> lowintra<-(MSS/tt-F3*MSE)/(MSS/tt+F3*(mm-1)*MSE)
> lowintra<-min(c(lowintra,1))
> F4<-qf(1-alpha,df1=nn-1,df2=nn*(mm-1))
> lowintra.part<-(MSS/tt-F4*MSEpart)/(MSS/tt+F4*(mm-1)*MSEpart)
> for(raterAct in 1:tt)
> lowintra.part[raterAct]<-min(lowintra.part[raterAct],1)
> SEMintra<-sqrt(MSE)
> SEMintra.part<-sqrt(MSEpart)
> SEMinter.random<-sqrt(sighat2Rrandom+sighat2SRrandom+sighat2e)
> SEMinter.fixed<-sqrt(sighat2SRfixed+sighat2e)
> rels<-list(method="Inter/Intrarater reliability",
> subjects=nsubj,raters=nrater,irr.name="rhohat",
> value=list(rohat=c(rhohat.inter.random,rhohat.intra.random,
> rhohat.inter.fixed,rhohat.intra.fixed,
> rhohat.intra.random.part,rhohat.intra.fixed.part),
> Fs=c(Finter,Fintra,Fintra.part),
> pvalue=c(Finter.p,Fintra.p,Fintra.part.p),
> lowvalue=c(lowinter.random,lowinter.fixed,lowintra,lowintra.part),
> sem=c(SEMintra,SEMintra.part,SEMinter.random,SEMinter.fixed)),
> stat.name="nil",statistic=NULL)
> class(rels)<-"irrlist"
> names(rels$value$rohat)<-
> c('rhohat.inter.random','rhohat.intra.random',
> 'rhohat.inter.fixed','rhohat.intra.fixed',
> paste('rhohat.intra.random.part',raterLabels,sep='.'),
> paste('rhohat.intra.fixed.part',raterLabels,sep='.'))
> names(rels$value$Fs)<-
>
c('Finter','Fintra',paste('Fintra',raterLabels,sep='.'))
> names(rels$value$pvalue)<-
>
>
c("pvalue.Finter","pvalue.Fintra",paste('pvalue.Fintra',raterLabels,sep='.'))
>
>
names(rels$value$lowvalue)<-c('lowinter.random','lowinter.fixed','lowintra',
> paste('lowintra',raterLabels,sep='.'))
> names(rels$value$sem)<-
>
c('SEMintra',paste('SEMintra.part',raterLabels,sep='.'),
> 'SEMinter.random','SEMinter.fixed')
> return(rels)
> }
>
>
[[alternative HTML version deleted]]
------------------------------
Message: 20
Date: Wed, 27 May 2009 19:54:22 +0800
From: Shreyasee <shreyasee.pradhan@gmail.com>
Subject: Re: [R] Intra-observer reliability
To: gavin.simpson@ucl.ac.uk
Cc: r-help@r-project.org
Message-ID:
<dd01e5960905270454g5fc51c77w41a73afd8f26b73@mail.gmail.com>
Content-Type: text/plain
Thanks a lot
On Wed, May 27, 2009 at 7:47 PM, Gavin Simpson <gavin.simpson@ucl.ac.uk> wrote:
> On Wed, 2009-05-27 at 18:08 +0800, Shreyasee wrote:
> > Hi,
> >
> > I searched a lot on the internet but was unable to find the function
for
> > calculating the kappa statistics for intra-observer reliabilty.
> > Can anybody help me in the this regards.
>
> See classAgreement() in package e1071 for example. There are others
>
> You could have found this yourself using the search tools provided. See:
>
> RSiteSearch("Cohen and Kappa", restrict = "functions")
>
> for other related functions.
>
> HTH
>
> G
>
> >
> > Thanks,
> > Shreyasee
> --
> %~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
> Dr. Gavin Simpson [t] +44 (0)20 7679 0522
> ECRC, UCL Geography, [f] +44 (0)20 7679 0565
> Pearson Building, [e] gavin.simpsonATNOSPAMucl.ac.uk
> Gower Street, London [w] http://www.ucl.ac.uk/~ucfagls/
> UK. WC1E 6BT. [w] http://www.freshwaters.org.uk
> %~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
>
>
[[alternative HTML version deleted]]
------------------------------
Message: 21
Date: Wed, 27 May 2009 07:54:55 -0400
From: "Liaw, Andy" <andy_liaw@merck.com>
Subject: Re: [R] Constrained fits: y~a+b*x-c*x^2, with a,b,c >=0
To: "Berwin A Turlach" <berwin@maths.uwa.edu.au>,
<amvds@xs4all.nl>
Cc: r-help@r-project.org
Message-ID:
<39B6DDB9048D0F4DAD42CB26AAFF0AFA074CEEBB@usctmx1106.merck.com>
Content-Type: text/plain; charset="us-ascii"
There's also the "nnls" (non-negative least squares) package on CRAN
that might be useful, although I'm puzzled by the negative sign in front
of c in Alex's post...
Cheers,
Andy
From: Berwin A Turlach
> G'day Alex,
>
> On Wed, 27 May 2009 11:51:39 +0200
> Alex van der Spek <amvds@xs4all.nl> wrote:
>
> > I wonder whether R has methods for constrained fitting of linear
> > models.
> >
> > I am trying fm<-lm(y~x+I(x^2), data=dat) which most of the
> time gives
> > indeed the coefficients of an inverted parabola. I know in advance
> > that it has to be an inverted parabola with the maximum
> constrained to
> > positive (or zero) values of x.
> >
> > The help pages for lm do not contain any info on
> constrained fitting.
> >
> > Does anyone know how to?
>
> Look at the package nnls on CRAN.
>
> According to your subject line, you are trying to solve what is known
> as a quadratic program, and there are at least two quadratic
> programming solvers (ipop in kernlab and solve.qp in quadprog)
> available for R.
>
> HTH.
>
> Cheers,
>
> Berwin
>
> =========================== Full address ============================
> Berwin A Turlach                            Tel.: +65 6516 4416 (secr)
> Dept of Statistics and Applied Probability        +65 6516 6650 (self)
> Faculty of Science                          FAX : +65 6872 3919
> National University of Singapore
> 6 Science Drive 2, Blk S16, Level 7         e-mail: statba@nus.edu.sg
> Singapore 117546                            http://www.stat.nus.edu.sg/~statba
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
Notice: This e-mail message, together with any attachme...{{dropped:12}}
------------------------------
Message: 22
Date: Wed, 27 May 2009 08:29:47 -0300
From: Henrique Dallazuanna <wwwhsd@gmail.com>
Subject: Re: [R] How to exclude a column by name?
To: Zeljko Vrba <zvrba@ifi.uio.no>
Cc: r-help@r-project.org
Message-ID:
<da79af330905270429t6724d222vce0d42538eab0ca@mail.gmail.com>
Content-Type: text/plain
You can try this:
DF[,"columnName"] <- NULL
On Wed, May 27, 2009 at 7:37 AM, Zeljko Vrba <zvrba@ifi.uio.no> wrote:
> Given an arbitrary data frame, it is easy to exclude a column given its
> index:
> df[,-2]. How to do the same thing given the column name? A naive attempt
> df[,-"name"] did not work :)
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Henrique Dallazuanna
Curitiba-Paraná-Brasil
25° 25' 40" S 49° 16' 22" O
[[alternative HTML version deleted]]
------------------------------
Message: 23
Date: Wed, 27 May 2009 12:55:38 +0100
From: Richard.Cotton@hsl.gov.uk
Subject: Re: [R] r-plot
To: durden10 <durdantyler@gmx.net>
Cc: r-help@r-project.org, r-help-bounces@r-project.org
Message-ID:
<OFBEBB64EC.DF92F239-ON802575C3.0040E407-802575C3.004185C0@hsl.gov.uk>
Content-Type: text/plain; charset="US-ASCII"
> I want to make a simple plot, here is my code: http://gist.github.com/118550
> Unfortunately, the annotation of both the x- and y-axis are not correct, as
> you can see in the following picture:
> http://www.nabble.com/file/p23739356/plot.png
> I am not an expert of R, so maybe someone can point me to the solution of
> this problem, i.e. both of the axes should start and end at the min / max
> values of the two vectors.
From the help page on par:
'xaxs' The style of axis interval calculation to be used for the x-axis.
       Possible values are '"r"', '"i"', '"e"', '"s"', '"d"'. The styles
       are generally controlled by the range of data or 'xlim', if given.
       Style '"r"' (regular) first extends the data range by 4 percent at
       each end and then finds an axis with pretty labels that fits within
       the extended range. Style '"i"' (internal) just finds an axis with
       pretty labels that fits within the original data range.
You've explicitly set xaxs="r", when you really want xaxs="i". You can also
explicitly set the axis limits using xlim/ylim parameters in the call to plot.
Compare these examples:
#Ex 1
plot(1:10)  # implicitly uses par(xaxs="r", yaxs="r") unless you've changed something
#Ex 2
oldpar <- par(xaxs="i", yaxs="i")
plot(1:10)
par(oldpar)
#Ex 3
plot(1:10, xlim=c(-5, 15), ylim=c(-100, 100))
Regards,
Richie.
Mathematical Sciences Unit
HSL
------------------------------------------------------------------------
ATTENTION:
This message contains privileged and confidential inform...{{dropped:20}}
------------------------------
Message: 24
Date: Wed, 27 May 2009 13:47:00 +0200
From: Uwe Ligges <ligges@statistik.tu-dortmund.de>
Subject: Re: [R] Harmonic Analysis
To: Dieter Menne <dieter.menne@menne-biomed.de>
Cc: r-help@stat.math.ethz.ch
Message-ID: <4A1D2834.3040707@statistik.tu-dortmund.de>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Dieter Menne wrote:
> <mauede <at> alice.it> writes:
>
>> I am looking for a package to perform harmonic analysis with the goal of
>> estimating the period of the dominant high frequency component in some
>> mono-channel signals.
>
> You should widen your scope by looking at "time series" instead of harmonic
> analysis. There is a task view on the subject at
>
> http://cran.at.r-project.org/web/views/TimeSeries.html
... or take a look at package tuneR.
Uwe Ligges
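As a hedged complement (editorial, not from the reply above), base R's
spectrum() can already locate a dominant frequency, e.g. for a simulated
signal sampled every 0.01 s:

set.seed(1)
dt <- 0.01
tm <- seq(0, 10, by = dt)
x  <- sin(2 * pi * 5 * tm) + rnorm(length(tm), sd = 0.2)  # 5 Hz component + noise
sp <- spectrum(x, plot = FALSE)
f  <- sp$freq[which.max(sp$spec)] / dt   # convert cycles per sample to Hz
1 / f                                    # dominant period in seconds (about 0.2)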
> Dieter
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 25
Date: Wed, 27 May 2009 13:40:52 +0200
From: Zeljko Vrba <zvrba@ifi.uio.no>
Subject: [R] Problem with adding labels in ggplot2
To: r-help@r-project.org
Message-ID: <20090527114052.GD1197@anakin.ifi.uio.no>
Content-Type: text/plain; charset=us-ascii
I apologize for not pasting a complete example, but the data-set is too large,
so I hope someone can help me just by description of symptoms.
I define a generic plot object name (note the missing y=.. in aes()) to
plot different y-values against the same set of x-values.
p.b4.generic.wg <-
ggplot(subset(b4.all.medians, ncpus==8, TRUE), aes(x=wg)) +
geom_line(aes(linetype=graph, group=interaction(graph,nwrk,ncpus))) +
geom_point(aes(shape=nwrk)) +
scale_shape(name="# of workers") +
scale_linetype(name="Workload") +
xlab("Work division") + ylab("N/A")
Now,
p.b4.stealspins <- p.b4.generic.wg + aes(y=v.stealspins / v.realtime) +
ylab("Steal rate")
draws the correct graph EXCEPT that the y-axis label is wrong. The ylab() is
ignored and the y-label is set to "v.stealspins / v.realtime". What am I
doing wrong here?
------------------------------
Message: 26
Date: Wed, 27 May 2009 13:42:50 +0200
From: "Ullrika Sahlin" <ullrika.sahlin@ekol.lu.se>
Subject: [R] Full likelihood from survreg
To: <r-help@r-project.org>
Message-ID: <002e01c9dec0$437814c0$ca683e40$@sahlin@ekol.lu.se>
Content-Type: text/plain
R users,
I am making model selection with an accelerated failure time model using the
command survreg within the library survival.
As I want to compare models with different probability distributions I need
to have the full likelihood.
How can I find out what survreg generates: the full likelihood or a
likelihood with "unnecessary" constants dropped?
Example I want to compare the likelihoods in Fit1 and Fit2. Is it
straightforward, or should I e.g. add a syntax asking for the full
likelihood?
Fit1<-survreg(Surv(Time, Event) ~ x, data, dist='weibull')
Fit2<-survreg(Surv(Time, Event) ~ x, data, dist='loglogistic')
Ullrika
[[alternative HTML version deleted]]
------------------------------
Message: 27
Date: Wed, 27 May 2009 05:23:30 -0700 (PDT)
From: Imri <bisrael@agri.huji.ac.il>
Subject: Re: [R] Multiple ANOVA tests
To: r-help@r-project.org
Message-ID: <23741437.post@talk.nabble.com>
Content-Type: text/plain; charset=UTF-8
[[elided Yahoo spam]]
I know how to extract the Pr(>F) value from a single ANOVA table, but I have a
list of many ANOVA tables received by:
a <- function(x) aov(MPH ~ x)
q <- apply(assoc[, 18:20], 2, a)  # just an example; I have more than 3 factors (x)
> print(q)
$X11_20502
Df Sum Sq Mean Sq F value Pr(>F)
x 3 369.9 123.3 6.475 0.0002547 ***
Residuals 635 12093.2 19.0
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
246 observations deleted due to missingness
$X11_21067
Df Sum Sq Mean Sq F value Pr(>F)
x 1 26.7 26.7 1.3662 0.2429
Residuals 637 12436.4 19.5
246 observations deleted due to missingness
$X11_10419
Df Sum Sq Mean Sq F value Pr(>F)
x 3 527.8 175.9 9.361 4.621e-06 ***
Residuals 635 11935.3 18.8
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
246 observations deleted due to missingness
> summary(q)
Length Class Mode
X11_20502 1 summary.aov list
X11_21067 1 summary.aov list
X11_10419 1 summary.aov list
How can I extract all the Pr(>F) values from q (not one by one)?
Thanks
Imri
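If q really is a list of summary.aov objects, as the summary(q) output above
suggests, something along these lines should collect all the p-values at once
(an editorial sketch, untested against the actual data):

pvals <- sapply(q, function(s) s[[1]][["Pr(>F)"]][1])
pvals   # named vector of Pr(>F) values, one per factor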
Mike Lawrence wrote:
> #create some data
> y=rnorm(20)
> x=factor(rep(c('A','B'),each=10))
>
> #run the anova
> my_aov = aov(y~x)
>
> #summarize the anova
> my_aov_summary = summary(my_aov)
>
> #show the anova summary
> print(my_aov_summary)
>
> #lets see what's in the summary object
> str(my_aov_summary)
>
> #looks like it's a list with 1 element which
> #in turn is a data frame with columns.
> #The "Pr(>F)" column looks like what we want
> my_aov_summary[[1]]$P
>
> #yup, that's it. Grab the first value
> p = my_aov_summary[[1]]$P[1]
>
>
> On Wed, May 27, 2009 at 7:11 AM, Imri <bisrael@agri.huji.ac.il>
wrote:
>>
>> Hi all -
>> I'm trying to do multiple one-way ANOVA tests of different factors
on the
>> same variable. As a result I have a list with all the ANOVA tables, for
>> exemple:
>>
>> $X11_20502
>> Analysis of Variance Table
>>
>> Response: MPH
>>            Df  Sum Sq Mean Sq F value    Pr(>F)
>> x            3   369.9   123.3   6.475 0.0002547 ***
>> Residuals  635 12093.2    19.0
>> ---
>> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
>>
>> $X11_21067
>> Analysis of Variance Table
>>
>> Response: MPH
>>            Df  Sum Sq Mean Sq F value Pr(>F)
>> x            1    26.7    26.7  1.3662 0.2429
>> Residuals  637 12436.4    19.5
>>
>> $X11_10419
>> Analysis of Variance Table
>>
>> Response: MPH
>>            Df  Sum Sq Mean Sq F value    Pr(>F)
>> x            3   527.8   175.9   9.361 4.621e-06 ***
>> Residuals  635 11935.3    18.8
>> ---
>> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
>>
>> My question is how can I extract from this list, just the Pr(>F)
values
>> for
>> each x ?
>> --
>> View this message in context:
>> http://www.nabble.com/Multiple-ANOVA-tests-tp23739615p23739615.html
>> Sent from the R help mailing list archive at Nabble.com.
>>
>> ______________________________________________
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
>
>
> --
> Mike Lawrence
> Graduate Student
> Department of Psychology
> Dalhousie University
>
> Looking to arrange a meeting? Check my public calendar:
> http://tr.im/mikes_public_calendar
>
> ~ Certainty is folly... I think. ~
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
>
--
View this message in context:
http://www.nabble.com/Multiple-ANOVA-tests-tp23739615p23741437.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 28
Date: Wed, 27 May 2009 14:24:44 +0200
From: Zeljko Vrba <zvrba@ifi.uio.no>
Subject: Re: [R] Problem with adding labels in ggplot2
To: Zeljko Vrba <zvrba@ifi.uio.no>
Cc: r-help@r-project.org
Message-ID: <20090527122444.GE1197@anakin.ifi.uio.no>
Content-Type: text/plain; charset=us-ascii
On Wed, May 27, 2009 at 01:40:52PM +0200, Zeljko Vrba wrote:
>
> I apologize for not pasting a complete example, but the data-set is too large,
> so I hope someone can help me just by description of symptoms.
>
-snip-
I have solved the problem by introducing an artificial variable in the original
plot specification and plotting the data using the %+% operator, e.g.:
p.b4.idletime <- p.b4.generic.wg %+%
within(b4.all.medians.8, { y <- v.idletime }) +
ylab("Idle time (s)")
------------------------------
Message: 29
Date: Wed, 27 May 2009 12:33:46 +0000
From: Monica Pisica <pisicandru@hotmail.com>
Subject: Re: [R] split strings
To: <ggrothendieck@gmail.com>,
<waclaw.marcin.kusnierczyk@idi.ntnu.no>,
<gunter.berton@gene.com>
Cc: R help project <r-help@r-project.org>
Message-ID: <BAY130-W249EEA339509BE7B459846C3530@phx.gbl>
Content-Type: text/plain; charset="Windows-1252"
Hi,
Luckily for me, until now I have not had to do this type of parsing very often,
but who knows? Up to now I was pretty happy with strsplit.
Anyway, thanks again for all the help; I really appreciate it.
Monica
----------------------------------------
> From: ggrothendieck@gmail.com
> Date: Tue, 26 May 2009 16:40:21 -0400
> Subject: Re: [R] split strings
> To: Waclaw.Marcin.Kusnierczyk@idi.ntnu.no
> CC: pisicandru@hotmail.com; r-help@r-project.org
>
> Although speed is really immaterial here this is likely
> to be faster than all shown so far:
>
> sub(".tif", "", basename(metr_list), fixed = TRUE)
>
> It does not allow file names with .tif in the middle
> of them since it will delete the first occurrence rather
> than the last but such a situation is highly unlikely.
>
>
> On Tue, May 26, 2009 at 4:24 PM, Wacek Kusnierczyk
> wrote:
>> Monica Pisica wrote:
>>> Hi everybody,
>>>
>>> Thank you for the suggestions and especially the explanation Waclaw
provided for his code. Maybe one day i will be able to wrap my head around this.
>>>
>>> Thanks again,
>>>
>>
>> you're welcome. note that if efficiency is an issue, you'd
better have
>> perl=TRUE there:
>>
>> output = sub('.*//(.*)[.]tif$', '\\1', input,
perl=TRUE)
>>
>> with perl=TRUE, the one-pass solution is somewhat faster than the
>> two-pass solution of gabor's -- which, however, is probably easier
to
>> understand; with perl=FALSE (the default), the performance drops:
>>
>> strings = sprintf(
>> 'f:/foo/bar//%s.tif',
>> replicate(1000, paste(sample(letters, 10), collapse='')))
>> library(rbenchmark)
>> benchmark(columns=c('test', 'elapsed'),
replications=1000, order=NULL,
>> 'one-pass, perl'=sub('.*//(.*)[.]tif$', '\\1',
strings, perl=TRUE),
>> 'two-pass, perl'=sub('.tif$', '',
basename(strings), perl=TRUE),
>> 'one-pass, no perl'=sub('.*//(.*)[.]tif$',
'\\1', strings,
>> perl=FALSE),
>> 'two-pass, no perl'=sub('.tif$', '',
basename(strings), perl=FALSE))
>> # 1 one-pass, perl 3.391
>> # 2 two-pass, perl 4.944
>> # 3 one-pass, no perl 18.836
>> # 4 two-pass, no perl 5.191
>>
>> vQ
>>
>>
>>>
>>> Monica
>>>
>>> ----------------------------------------
>>>
>>>> Date: Tue, 26 May 2009 15:46:21 +0200
>>>> From: Waclaw.Marcin.Kusnierczyk@idi.ntnu.no
>>>> To: pisicandru@hotmail.com
>>>> CC: r-help@r-project.org
>>>> Subject: Re: [R] split strings
>>>>
>>>> Monica Pisica wrote:
>>>>
>>>>> Hi everybody,
>>>>>
>>>>> I have a vector of characters and i would like to extract certain
>>>>> parts. My vector is named metr_list:
>>>>>
>>>>> [1] "F:/Naval_Live_Oaks/2005/data//BE.tif"
>>>>> [2] "F:/Naval_Live_Oaks/2005/data//CH.tif"
>>>>> [3] "F:/Naval_Live_Oaks/2005/data//CRR.tif"
>>>>> [4] "F:/Naval_Live_Oaks/2005/data//HOME.tif"
>>>>>
>>>>> And i would like to extract BE, CH, CRR, and HOME in a different
>>>>> vector named "names.id"
>>>>>
>>>> one way that seems reasonable is to use sub:
>>>>
>>>> output = sub('.*//(.*)[.]tif$', '\\1', input)
>>>>
>>>> which says 'from each string remember the substring between the
>>>> rightmost two slashes and a .tif extension, exclusive, and replace
>>>> the whole thing with the captured part'. if the pattern does not
>>>> match, you get the original input:
>>>>
>>>> sub('.*//(.*)[.]tif$', '\\1', 'f:/foo/bar//buz.tif')
>>>> # buz
>>>>
>>>> vQ
>>>>
>>
>>
------------------------------
Message: 30
Date: Wed, 27 May 2009 08:39:11 -0400
From: stephen sefick <ssefick@gmail.com>
Subject: Re: [R] Multivariate Transformations
To: Hollix <Holger.steinmetz@web.de>
Cc: r-help@r-project.org
Message-ID:
<c502a9e10905270539w4eead6adhe0ea38aab5c2310c@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
It depends on what you are after. I am by no means a wunderkind when
it comes to transformation, but in the package vegan type
?wisconsin
and that should give you a start. If you know what
transformations you would like to perform, then apply should do what
you need with whatever transformation you are trying to use.
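For concreteness, a minimal sketch of the functions mentioned above, using vegan's
bundled example data (whether these standardizations are what you need for
multivariate normality is a separate question):
library(vegan)
data(varespec)                                   # example community data shipped with vegan
w <- wisconsin(varespec)                         # Wisconsin double standardization
h <- decostand(varespec, method = "hellinger")   # one of decostand's standardizations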
Stephen Sefick
On Wed, May 27, 2009 at 5:26 AM, Hollix <Holger.steinmetz@web.de> wrote:
> Hello folks,
>
> many multivariate analyses (e.g., structural equation modeling) require
> multivariate normal distributions.
> Real data, however, most often significantly depart from the multinormal
> distribution. Some researchers (e.g., Yuan et al., 2000) have proposed a
> multivariate transformation of the variables.
>
> Can you tell me, if and how such a transformation can be handled in R?
>
> Thanks in advance.
> With best regards
> Holger
>
>
> ---------------
> Yuan, K.-H., Chan, W., & Bentler, P. M. (2000). Robust transformation with
> applications to structural equation modeling. British Journal of
> Mathematical and Statistical Psychology, 53, 31-50.
> --
> View this message in context:
http://www.nabble.com/Multivariate-Transformations-tp23739013p23739013.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Stephen Sefick
Let's not spend our time and resources thinking about things that are
so little or so large that all they really do for us is puff us up and
make us feel like gods. We are mammals, and have not exhausted the
annoying little problems of being mammals.
-K. Mullis
------------------------------
Message: 31
Date: Wed, 27 May 2009 13:39:23 +0100
From: Paul Geeleher <paulgeeleher@gmail.com>
Subject: [R] Sort matrix by column 1 ascending then by column 2
descending
To: r-help@r-project.org
Message-ID:
<2402860e0905270539i1dded0b6la619c6d0d31fc2ac@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
I've got a matrix with 2 columns and n rows. I need to sort it first
by the values in column 1 ascending. Then for values which are the
same in column 1, sort by column 2 descending. For example:
2 .5
1 .3
1 .5
3 .2
Goes to:
1 .5
1 .3
2 .5
3 .2
This is easy to do in spreadsheet programs but I can't seem to work
out how to do it in R and haven't been able to find a solution
anywhere.
Thanks!
-Paul.
--
Paul Geeleher
School of Mathematics, Statistics and Applied Mathematics
National University of Ireland
Galway
Ireland
------------------------------
Message: 32
Date: Wed, 27 May 2009 14:43:55 +0200
From: "Millo Giovanni" <Giovanni_Millo@Generali.com>
Subject: Re: [R] moving from Windows to Linux - need help
To: <KINLEY_ROBERT@lilly.com>
Cc: r-help@r-project.org
Message-ID:
<28643F754DDB094D8A875617EC4398B20311E1AF@BEMAILEXTV03.corp.generali.net>
Content-Type: text/plain; charset="US-ASCII"
Dear Robert,
a different option, just to give you one more choice: you should be able
to keep the standard Xandros and install R if you don't feel like
changing the operating system. You just have to add the standard Debian
repositories. I found it easier to have R, Emacs and LaTeX working on
the standard system first, before experimenting with other distros.
Memorandum, just in case: I was there a few months ago, so I know
where a Windows useR is likely to stumble ;^) (if you already know this,
just skip): on Linux you don't download "setup.exe" files and execute
them to install things as you would on Windows; the system works differently.
Programs are downloaded from standard repositories over the Internet and
installed by special software, which varies across Linux distributions.
Xandros is Debian-like and the wonderful packaging system of Debian (and
Ubuntu, and Mepis...) works there as well, resolving all package
dependencies for you. There are three tools available, two command-line
driven (apt-get and aptitude) and a graphical one (Synaptic). All three
do the same job. These tools already have predefined repositories, which
you may alter.
The Xandros repositories carry only old versions of R, if any, so you'd
better add the Debian ones (but be careful to either 1) disable them
afterwards or, better, 2) 'pin' them (i.e. assign them different
priorities); otherwise you could damage your system by downloading other
Debian packages instead of the Xandros ones in cases when this does
*not* work). R from the Debian repositories works fine on Xandros, but some
other programs might screw your system up.
So all you have to do is just open up a terminal window (CTRL+ALT+T) and
do
sudo apt-get install <yoursoftware>
('sudo' is needed to act as administrator)
In particular, quoting from the R-Wiki, "if you just want to be
able to run R, you can get r-base-core and all the recommended packages
by doing:
sudo apt-get install r-base
If you want to be able to build and install R packages (including those
from CRAN), you can get all the common header files, as well as
r-base-core by doing:
sudo apt-get install r-base-dev
If you want to be able to build R from its source code, you can get
build dependencies for R (e.g., compilers, header files) by doing:
sudo apt-get build-dep r-base"
Of course you can install the same packages with Synaptic (but start it
as 'sudo synaptic', for the above reasons; otherwise you don't have
rights to install anything).
You can find much more detailed step by step instructions from some
other people put together in this old post of mine:
http://www.nabble.com/R-on-an-ASUS-eee-PC,-continued---installing-packages-td17862000.html
The same principles apply, e.g., for LaTeX and Emacs if you need them.
Have fun!
Giovanni
## original message ##
------------------------------
Message: 8
Date: Tue, 26 May 2009 12:56:39 +0200
From: Paul Hiemstra <p.hiemstra@geo.uu.nl>
Subject: Re: [R] moving from Windows to Linux - need help
To: Robert Kinley <KINLEY_ROBERT@lilly.com>
Cc: r-help@r-project.org
Message-ID: <4A1BCAE7.8010307@geo.uu.nl>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Hi Robert,
I had the exact same problem on my eeepc 900. I replaced the
xandros-like linux in this way:
- Download an Ubuntu iso file (I use 8.04, Kubuntu)
- Put the .iso file on a usb stick (use unetbootin)
- Install the ubuntu version
- Install the eeepc specific stuff from http://array.org/ubuntu/ (this
is a repository with an eeepc kernel available and other stuff, the site
provides a lot of info on how to install the eeepc specific things)
Now you have a "normal" linux distro (ubuntu) and you can use the
normal
cran repositories (debian) to install R.
This worked very well for me, it was quite easy to get ubuntu running. I
know that this isn't an exact answer to your question, but I found that
reinstalling Linux was the best option.
cheers and hth,
Paul
Robert Kinley wrote:
> hi
>
> I've used R for many years on windows machines, but
> have now acquired an Asus eee 1000 linux machine.
>
> In order to get the best out of the machine, I used the
> 'pimpmyeee.sh' script, to get the full KDE desktop.
>
> The version of Linux is Xandros, which I believe is
> a close relative of Debian, but sadly I have only a
> nodding acquaintance with Linux at present.
>
> Naturally I want to have the current version of R on it,
> and I understand (or possibly misunderstand) that the
> binary for the Debian flavour of Linux should do the trick.
>
> I have tried -
>
> 1. using synaptic to add the appropriate (I think) CRAN
> repository ... but every combination I have tried
> gives a 404 error
>
> 2. downloading from CRAN what I think is a zipped-up version of
> r-base software, and then using the eee's file-manager
> 'install DEB package' option ... but this returns 'cannot
> load ...'.
>
>
> I'm a bit stuck ... can anyone help please ?
>
>
> thanks Bob Kinley
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Drs. Paul Hiemstra
Department of Physical Geography
Faculty of Geosciences
University of Utrecht
Heidelberglaan 2
P.O. Box 80.115
3508 TC Utrecht
Phone: +3130 274 3113 Mon-Tue
Phone: +3130 253 5773 Wed-Fri
http://intamap.geo.uu.nl/~paul
Giovanni Millo
Research Dept.,
Assicurazioni Generali SpA
Via Machiavelli 4,
34132 Trieste (Italy)
tel. +39 040 671184
fax +39 040 671160
------------------------------
Message: 33
Date: Wed, 27 May 2009 11:22:12 +0100
From: "Winter, Katherine" <K.Winter1@liverpool.ac.uk>
Subject: [R] Warning message as a result of logistic regression
performed
To: "r-help@r-project.org" <r-help@r-project.org>
Message-ID:
<DC62E1323AF0C24A99A7A3FE6019CD4D5CF9F909D7@STAFFMBX1.livad.liv.ac.uk>
Content-Type: text/plain; charset="us-ascii"
I am sorry if this question sounds basic but I am having trouble understanding a
warning message I have been receiving in R after attempting logistic regression.
I have been using the logistic regression function in R to analyse a simulated
data set. The dependent variable "failure" has an outcome of either 0
(success) or 1 (failure). Both the independent variables have been previously
generated in a mathematical model and stored in a data.frame for analysis. I am
currently using a sample size of 1000 and I use the following commands in R:
log.reg.1 <- glm(failure ~ age +weight +init.para.log.value
+k.d1,family=binomial(logit), data=test)
log.reg.1.summary <- summary(log.reg.1); print(log.reg.1.summary)
log.reg.1.exp <- exp(log.reg.1$coef); print(log.reg.1.exp)
When I execute these commands I get the following warning message:
"In glm.fit(x = X, y = Y, weights = weights, start = start, etastart =
etastart, :fitted probabilities numerically 0 or 1 occurred"
I am unsure what this warning is referring to. I have tried using google to
answer this question but have had no luck.
I have been on the following website
https://stat.ethz.ch/pipermail/r-sig-ecology/2008-July/000278.html but found it
was not helpful, as when I ran the example given I received no warning message
(I am using R version 2.8.1).
I am working with simulated data so there are no missing values in the data set.
I have also looked at the following website
http://tolstoy.newcastle.edu.au/R/help/05/07/7759.html they suggest that the
warning is as a result of "perfect separation" of the results (a
possibility with simulated data). However, when I added an extra row to my
data.frame of results that I knew to be false and hence to prevent "perfect
separation" subsequent logistic regression still resulted in the same
warning message.
I am still at a loss as to the meaning of this message and any help in
understanding this warning would be much appreciated.
------------------------------
Message: 34
Date: Wed, 27 May 2009 03:55:48 -0700 (PDT)
From: Lazy Tiger <lazytiger7@gmail.com>
Subject: Re: [R] C4.5 implementation in R
To: r-help@r-project.org
Message-ID: <23740177.post@talk.nabble.com>
Content-Type: text/plain; charset=us-ascii
Thanks Tony. I will look into it.
Tony Breyal wrote:
> I think RWeka implements the C4.5 (revision 8) algorithm, but it calls
> it J4.8 (because it's written in Java instead of C, and also the
> revision number, and it uses an open source licence, i think).
>
> You might want to look at this paper by Schauerhuber, Zeileis & Hornik
> called 'Benchmarking Open-Source Tree Learners in R/RWeka':
>
>
http://epub.wu-wien.ac.at/dyn/virlib/wp/eng/mediate/epub-wu-01_bd8.pdf?ID=epub-wu-01_bd8
>
> otherwise, try: http://cran.r-project.org/web/views/MachineLearning.html
>
> Hope that helps a little bit, i've been meaning to have a play around
> with that package myself actually, just need to find the time :D
>
> Tony
>
>
>
> On 27 May, 07:39, Lazy Tiger <lazytig...@gmail.com> wrote:
>> Greetings,
>>
>> Does anyone know if the C4.5 algorithm is already implemented in R? If
>> yes,
>> please let me know the package. Thanks.
>> --
>> View this message in context:
>> http://www.nabble.com/C4.5-implementation-in-R-tp23736785p23736785.html
>> Sent from the R help mailing list archive at Nabble.com.
>>
>> ______________________________________________
>> R-h...@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
>
--
View this message in context:
http://www.nabble.com/C4.5-implementation-in-R-tp23736785p23740177.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 35
Date: Wed, 27 May 2009 09:50:24 -0300
From: Mike Lawrence <Mike.Lawrence@dal.ca>
Subject: Re: [R] Multiple ANOVA tests
To: Imri <bisrael@agri.huji.ac.il>
Cc: r-help@r-project.org
Message-ID:
<37fda5350905270550o81fae22i164a8f17dadb11a0@mail.gmail.com>
Content-Type: text/plain; charset=windows-1252
you could use ldply from the plyr package:
p = ldply(q,function(x){x$P})
Without your data I can't confirm that works, but something like that
should do it.
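If you prefer to stay in base R, a sketch along the same lines (assuming q is the
list of summary.aov objects shown below) would be:
# each element of q is a summary.aov object: a list holding one data frame
# whose "Pr(>F)" column contains the p-values; row 1 corresponds to x
pvals <- sapply(q, function(s) s[[1]][["Pr(>F)"]][1])
pvals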
On Wed, May 27, 2009 at 9:23 AM, Imri <bisrael@agri.huji.ac.il>
wrote:
> [[elided Yahoo spam]]
> I know how to extract the Pr(>F) value from a single ANOVA table, but I have a
> list of many ANOVA tables received by:
> a<-function(x)(aov(MPH~x))
> q<-apply(assoc[,18:20],2,a) # just for example, I have more than 3
> factors(x)
>
>> print(q)
> $X11_20502
>             Df  Sum Sq Mean Sq F value    Pr(>F)
> x            3   369.9   123.3   6.475 0.0002547 ***
> Residuals  635 12093.2    19.0
> ---
> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
> 246 observations deleted due to missingness
>
> $X11_21067
>             Df  Sum Sq Mean Sq F value Pr(>F)
> x            1    26.7    26.7  1.3662 0.2429
> Residuals  637 12436.4    19.5
> 246 observations deleted due to missingness
>
> $X11_10419
>             Df  Sum Sq Mean Sq F value    Pr(>F)
> x            3   527.8   175.9   9.361 4.621e-06 ***
> Residuals  635 11935.3    18.8
> ---
> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
> 246 observations deleted due to missingness
>
>> summary(q)
>           Length Class       Mode
> X11_20502 1      summary.aov list
> X11_21067 1      summary.aov list
> X11_10419 1      summary.aov list
> How can I extract all the Pr(>F) values from q (not one by one)?
>
> Thanks
> Imri
>
>
>
> Mike Lawrence wrote:
>>
>> #create some data
>> y=rnorm(20)
>> x=factor(rep(c('A','B'),each=10))
>>
>> #run the anova
>> my_aov = aov(y~x)
>>
>> #summarize the anova
>> my_aov_summary = summary(my_aov)
>>
>> #show the anova summary
>> print(my_aov_summary)
>>
>> #lets see what's in the summary object
>> str(my_aov_summary)
>>
>> #looks like it's a list with 1 element which
>> #in turn is a data frame with columns.
>> #The "Pr(>F)" column looks like what we want
>> my_aov_summary[[1]]$P
>>
>> #yup, that's it. Grab the first value
>> p = my_aov_summary[[1]]$P[1]
>>
>>
>> On Wed, May 27, 2009 at 7:11 AM, Imri <bisrael@agri.huji.ac.il>
wrote:
>>>
>>> Hi all -
>>> I'm trying to do multiple one-way ANOVA tests of different
factors on the
>>> same variable. As a result I have a list with all the ANOVA tables,
for
>>> exemple:
>>>
>>> $X11_20502
>>> Analysis of Variance Table
>>>
>>> Response: MPH
>>>           Df  Sum Sq Mean Sq F value    Pr(>F)
>>> x           3   369.9   123.3   6.475 0.0002547 ***
>>> Residuals 635 12093.2    19.0
>>> ---
>>> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
>>>
>>> $X11_21067
>>> Analysis of Variance Table
>>>
>>> Response: MPH
>>>           Df  Sum Sq Mean Sq F value Pr(>F)
>>> x           1    26.7    26.7  1.3662 0.2429
>>> Residuals 637 12436.4    19.5
>>>
>>> $X11_10419
>>> Analysis of Variance Table
>>>
>>> Response: MPH
>>>           Df  Sum Sq Mean Sq F value    Pr(>F)
>>> x           3   527.8   175.9   9.361 4.621e-06 ***
>>> Residuals 635 11935.3    18.8
>>> ---
>>> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
>>>
>>> My question is how can I extract from this list, just the Pr(>F)
values
>>> for
>>> each x ?
>>> --
>>> View this message in context:
>>> http://www.nabble.com/Multiple-ANOVA-tests-tp23739615p23739615.html
>>> Sent from the R help mailing list archive at Nabble.com.
>>>
>>> ______________________________________________
>>> R-help@r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>> PLEASE do read the posting guide
>>> http://www.R-project.org/posting-guide.html
>>> and provide commented, minimal, self-contained, reproducible code.
>>>
>>
>>
>>
>> --
>> Mike Lawrence
>> Graduate Student
>> Department of Psychology
>> Dalhousie University
>>
>> Looking to arrange a meeting? Check my public calendar:
>> http://tr.im/mikes_public_calendar
>>
>> ~ Certainty is folly... I think. ~
>>
>> ______________________________________________
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>>
>
> --
> View this message in context:
http://www.nabble.com/Multiple-ANOVA-tests-tp23739615p23741437.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Mike Lawrence
Graduate Student
Department of Psychology
Dalhousie University
Looking to arrange a meeting? Check my public calendar:
http://tr.im/mikes_public_calendar
~ Certainty is folly... I think. ~
------------------------------
Message: 36
Date: Wed, 27 May 2009 21:29:20 +1000
From: Jim Lemon <jim@bitwrit.com.au>
Subject: Re: [R] r-plot
To: durden10 <durdantyler@gmx.net>
Cc: r-help@r-project.org
Message-ID: <4A1D2410.6010600@bitwrit.com.au>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
durden10 wrote:
> Dear R-community
>
> I have a grueling problem which appears to be impossible to solve:
> I want to make a simple plot, here is my code: http://gist.github.com/118550
> Unfortunately, the annotation of both the x- and y-axis are not correct, as
> you can see in the following picture:
> http://www.nabble.com/file/p23739356/plot.png
> I am not an expert of R, so maybe someone can point me to the solution of
> this problem, i.e. both of the axes should start and end at the min / max
> values of the two vectors.
>
>
Hi Durden,
This example seems to work for me. Is it just the X and Y axis labels
that you want?
data_corr<-data.frame(
Win=c(-0.08,-0.07,-0.01,-0.01,0.03,0.08,0.1,0.13,
0.18,0.19,0.195,0.2,0.28,0.3,0.4),
Calgary=c(11,7,5,4,3,8,6,7,3,2,1,8,0,1,3)
)
par(tcl=0.35,xaxs="r") # Switch tick marks to insides of axes
plot(data_corr, type = "p", xlab="VS signal change",
ylab="Depression scale",axes=FALSE, col = "blue", lwd = 2)
#y-axis
axis(2, tcl=0.35,at=0:11)
#x-axis
test2<-seq(0,0.4,by=0.1)
axis(1, tcl=0.35,at=test2)
box()
abline(lm(data_corr[,2]~data_corr[,1]))
Jim
------------------------------
Message: 37
Date: Wed, 27 May 2009 08:58:34 -0400
From: stephen sefick <ssefick@gmail.com>
Subject: Re: [R] Harmonic Analysis
To: r-help@stat.math.ethz.ch
Message-ID:
<c502a9e10905270558r2f963567r7789e97abbae23d8@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
why will a fourier transform not work?
2009/5/27 Uwe Ligges <ligges@statistik.tu-dortmund.de>:
>
> Dieter Menne wrote:
>>
>> <mauede <at> alice.it> writes:
>>
>>> I am looking for a package to perform harmonic analysis with the
goal of
>>> estimating the period of the
>>> dominant high frequency component in some mono-channel signals.
>>
>> You should widen your scope by looking at "time series" instead of
>> harmonic analysis. There is a task view on the subject at
>>
>> http://cran.at.r-project.org/web/views/TimeSeries.html
>
>
> ... or take a look at package tuneR.
>
> Uwe Ligges
>
>
>
>
>> Dieter
>>
>> ______________________________________________
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Stephen Sefick
Let's not spend our time and resources thinking about things that are
so little or so large that all they really do for us is puff us up and
make us feel like gods. We are mammals, and have not exhausted the
annoying little problems of being mammals.
-K. Mullis
------------------------------
Message: 38
Date: Wed, 27 May 2009 10:03:23 -0300
From: Henrique Dallazuanna <wwwhsd@gmail.com>
Subject: Re: [R] Sort matrix by column 1 ascending then by column 2
descending
To: Paul Geeleher <paulgeeleher@gmail.com>
Cc: r-help@r-project.org
Message-ID:
<da79af330905270603w4df6e220ie2e1448e6e31ff7f@mail.gmail.com>
Content-Type: text/plain
Try this:
cbind(sort(x[,1]), unlist(tapply(x[,2], x[,1], sort, decreasing = T)))
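A related sketch using order(), which keeps whole rows together (x is assumed to be
the two-column numeric matrix from the question):
# sort by column 1 ascending, breaking ties by column 2 descending
x[order(x[, 1], -x[, 2]), ]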
On Wed, May 27, 2009 at 9:39 AM, Paul Geeleher
<paulgeeleher@gmail.com>wrote:
> I've got a matrix with 2 columns and n rows. I need to sort it first
> by the values in column 1 ascending. Then for values which are the
> same in column 1, sort by column 2 descending. For example:
>
> 2 .5
> 1 .3
> 1 .5
> 3 .2
>
> Goes to:
>
> 1 .5
> 1 .3
> 2 .5
> 3 .2
>
> This is easy to do in spreadsheet programs but I can't seem to work
> out how to do it in R and haven't been able to find a solution
> anywhere.
>
>
> Thanks!
>
> -Paul.
>
> --
> Paul Geeleher
> School of Mathematics, Statistics and Applied Mathematics
> National University of Ireland
> Galway
> Ireland
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Henrique Dallazuanna
Curitiba-Paraná-Brasil
25° 25' 40" S 49° 16' 22" O
[[alternative HTML version deleted]]
------------------------------
Message: 39
Date: Wed, 27 May 2009 15:13:42 +0200
From: Johan Stenberg <jonstg@gmail.com>
Subject: [R] Hierarchical glm with binomial family
To: r-help@R-project.org
Message-ID:
<ac6b076e0905270613i28f6446fo7770d24a6cb0d995@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Dear members of the R help list,
I want to do a hierarchical glm with binomial family but am unsure
about how to write the syntax which involves nesting.
I want to test whether the risk of being attacked by Herbivores for
Meadowsweet plants is significantly dependent on the Distance to
heterospecific source plants.
Dependent variable = Herbivory (yes/no)
Explanatory continuous variable = Distance to heterospecific source plant
Distance should be nested within Subpopulation which in turn should be
nested within Population.
The number of replicates per subpopulation varies between 8 and 36.
The number of subpopulations per population varies between 4 and 9.
I haven't figured out how to do nesting, but guessing that nesting is
denoted with brackets I guess the syntax should look something like
this (below). Could you please help me to correct this syntax so that
it becomes useful in R?
model<-glm(Herbivory~Distance(Subpopulation(Population)), family=binomial)
[[elided Yahoo spam]]
Johan
------------------------------
Message: 40
Date: Wed, 27 May 2009 09:16:00 -0400
From: John C Nash <nashjc@uottawa.ca>
Subject: Re: [R] optim() question
To: r-help@r-project.org
Message-ID: <4A1D3D10.6040708@uottawa.ca>
Content-Type: text/plain; charset=us-ascii; format=flowed
Some thought about this overnight led to the conclusion that a capability
to follow from one method to another could be quite useful. Moreover,
it should be pretty easy to fit it into our current trial version of
optimx(), as we call the function. More at UseR.
JN
Ravi Varadhan wrote:
> Stephen,
>
> No. Currently, AFAIK, there is no such switching algorithm for optimization
> in R. John Nash and I are working on a package for integrating various
> optimization tools (for smooth, box-constrained optimization) in R. This
> will have a function that can run through multiple algorithms. While this
> is not exactly what you are asking for, it can be quite useful for your
> purposes, which I assume is to find a local optimum in a reliable fashion.
>
> Ravi.
>
> ...
> -----Original Message-----
> From: r-help-bounces@r-project.org [mailto:r-help-bounces@r-project.org] On
> Behalf Of Stephen Collins
> Sent: Tuesday, May 26, 2009 2:48 PM
> To: r-help@stat.math.ethz.ch
> Subject: [R] optim() question
>
> I've seen with other software the capability for the optimizer to switch
> algorithms if it is not making progress between iterations. Is this
> capability available in optim()?
>
> Thanks,
>
> Stephen Collins, MPP | Analyst
> Health & Benefits | Aon Consulting
>
------------------------------
Message: 41
Date: Wed, 27 May 2009 14:22:43 +0100
From: "Gerard M. Keogh" <GMKeogh@justice.ie>
Subject: Re: [R] Harmonic Analysis
To: stephen sefick <ssefick@gmail.com>
Cc: r-help-bounces@r-project.org, r-help@stat.math.ethz.ch
Message-ID:
<OF1B1EC7AC.48E181F7-ON802575C3.00493966-802575C3.00497E7B@justice.ie>
Content-Type: text/plain; charset="ISO-8859-1"
My thoughts exactly.
fft() should do the job.
And take the dominant term to be the frequency with the largest a_n^2 + b_n^2 (cf. the Parseval relation).
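For anyone wanting to try this directly, here is a minimal sketch along those lines;
the simulated signal and the sampling interval dt are illustrative only, not from the
original question:
dt <- 0.01                                   # hypothetical sampling interval (seconds)
t  <- seq(0, 10, by = dt)
x  <- sin(2 * pi * 5 * t) + rnorm(length(t), sd = 0.2)   # 5 Hz tone plus noise
n  <- length(x)
power <- Mod(fft(x))^2                       # a_n^2 + b_n^2, up to scaling
freq  <- (seq_len(n) - 1) / (n * dt)         # frequency axis in Hz
keep  <- 2:floor(n / 2)                      # drop the DC term and the mirrored half
f.dom <- freq[keep][which.max(power[keep])]  # dominant frequency (about 5 Hz here)
1 / f.dom                                    # corresponding period in seconds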
stephen sefick <ssefick@gmail.com> (sent by r-help-bounces@r-project.org)
To: r-help@stat.math.ethz.ch
Subject: Re: [R] Harmonic Analysis
27/05/2009 13:58
why will a fourier transform not work?
2009/5/27 Uwe Ligges <ligges@statistik.tu-dortmund.de>:
>
> Dieter Menne wrote:
>>
>> <mauede <at> alice.it> writes:
>>
>>> I am looking for a package to perform harmonic analysis with the goal
>>> of estimating the period of the
>>> dominant high frequency component in some mono-channel signals.
>>
>> You should widen your scope by looking at "time series" instead of
>> harmonic analysis. There is a task view on the subject at
>>
>> http://cran.at.r-project.org/web/views/TimeSeries.html
>
>
> ... or take a look at package tuneR.
>
> Uwe Ligges
>
>
>
>
>> Dieter
>>
>> ______________________________________________
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Stephen Sefick
Let's not spend our time and resources thinking about things that are
so little or so large that all they really do for us is puff us up and
make us feel like gods. We are mammals, and have not exhausted the
annoying little problems of being mammals.
-K. Mullis
______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
**********************************************************************************
The information transmitted is intended only for the person or entity to which
it is addressed and may contain confidential and/or privileged material. Any
review, retransmission, dissemination or other use of, or taking of any action
in reliance upon, this information by persons or entities other than the
intended recipient is prohibited. If you received this in error, please contact
the sender and delete the material from any computer. It is the policy of the
Department of Justice, Equality and Law Reform and the Agencies and Offices
using its IT services to disallow the sending of offensive material.
Should you consider that the material contained in this message is offensive you
should contact the sender immediately and also mailminder[at]justice.ie.
***********************************************************************************
------------------------------
Message: 42
Date: Wed, 27 May 2009 15:46:31 +0200
From: Andrew Dolman <andydolman@gmail.com>
Subject: Re: [R] How to write a loop?
Cc: r-help@r-project.org
Message-ID:
<951234ac0905270646y297cf12ai748469bf15eed1dc@mail.gmail.com>
Content-Type: text/plain
Try
lapply(ONS, fft)
and take a look here http://cran.r-project.org/doc/manuals/R-intro.html for
the basics of data structures in R and how to apply functions to them.
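Putting that together with the rest of the original script, a compact sketch might
look like this (the file and column layout are taken from the post; Reduce() is base R):
ONS <- read.csv("fast fourier transform.csv", header = TRUE)
g <- lapply(ONS, fft)                    # STEP A and STEP B at once: FFT of every column
h <- Reduce("*", g)                      # element-wise product g1*g2*...*g10
j <- fft(h, inverse = TRUE) / length(h)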
Andy.
andydolman@gmail.com
2009/5/27 Linlin Yan <yanlinlin82@gmail.com>
> Why did you use different variable names rather than index of
> list/data.frame?
>
> On Wed, May 27, 2009 at 6:34 PM, Maithili Shiva
> > Dear R helpers,
> >
> > Following is an R script I am using to run the Fast Fourier Transform. The
> > csv file has 10 columns with titles m1, m2, m3, ..., m10.
> >
> > When I use the following commands, I am getting the required results. The
> > problem is that if there are 100 columns, it is not wise to define 100 commands
> > as fk <- ONS$mk and so on. Thus, I need some guidance to write the loop for
> > STEP A and STEP B.
> >
> > Thanking in advance
> >
> > Regards
> >
> > Maithili
> >
> >
> >
> > My R Script
> >
> >
>
-----------------------------------------------------------------------------------------------
> >
> > ONS <- read.csv("fast fourier transform.csv", header = TRUE)
> >
> > # STEP A
> >
> > f1 <- ONS$m1
> >
> > f2 <- ONS$m2
> >
> > f3 <- ONS$m3
> >
> > f4 <- ONS$m4
> >
> > f5 <- ONS$m5
> >
> > f6 <- ONS$m6
> >
> > f7 <- ONS$m7
> >
> > f8 <- ONS$m8
> >
> > f9 <- ONS$m9
> >
> > f10 <- ONS$m10
> >
> >
>
#____________________________________________________________________________________________
> >
> >
> > # STEP B
> >
> > g1 <- fft(f1)
> >
> > g2 <- fft(f2)
> >
> > g3 <- fft(f3)
> >
> > g4 <- fft(f4)
> >
> > g5 <- fft(f5)
> >
> > g6 <- fft(f6)
> >
> > g7 <- fft(f7)
> >
> > g8 <- fft(f8)
> >
> > g9 <- fft(f9)
> >
> > g10 <- fft(f10)
> >
> >
> >
>
#____________________________________________________________________________________________
> >
> > h <- g1*g2*g3*g4*g5*g6*g7*g8*g9*g10
> >
> > j <- fft((h), inverse = TRUE)/length(h)
> >
> >
> >
>
#____________________________________________________________________________________________
> >
> >
> >
> >
> > [[alternative HTML version deleted]]
> >
> >
> > ______________________________________________
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> >
> >
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
[[alternative HTML version deleted]]
------------------------------
Message: 43
Date: Wed, 27 May 2009 15:02:35 +0100 (BST)
From: Prof Brian Ripley <ripley@stats.ox.ac.uk>
Subject: Re: [R] file.move?
To: Stefan Uhmann <stefan.uhmann@googlemail.com>
Cc: r-help@r-project.org
Message-ID: <alpine.LFD.2.00.0905271457470.9736@gannet.stats.ox.ac.uk>
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed
Are you looking for file.rename?
Moving files is not really a portable concept, and nor is 'time
stamps' (files usually have three or more times associated with them,
and moving does not keep them all in OSes that implement it).
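A minimal sketch (the paths here are hypothetical):
# a rename within the same filesystem keeps the file's modification time
ok <- file.rename("data/old_name.csv", "data/new_name.csv")
ok   # TRUE if the rename succeeded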
On Wed, 27 May 2009, Stefan Uhmann wrote:
> Dear list,
>
> I want to move some files that should keep their time stamps, which is not
> the case if I use file.copy in combination with file.remove. file.move would
> be nice, is there a package providing such a function?
>
> Regards,
> Stefan
--
Brian D. Ripley, ripley@stats.ox.ac.uk
Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel: +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK Fax: +44 1865 272595
------------------------------
Message: 44
Date: Wed, 27 May 2009 15:11:15 +0100
From: Paul Geeleher <paulgeeleher@gmail.com>
Subject: Re: [R] Sort matrix by column 1 ascending then by column 2
descending
To: Henrique Dallazuanna <wwwhsd@gmail.com>
Cc: r-help@r-project.org
Message-ID:
<2402860e0905270711k5e4ce6cenff3c714315646934@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Nice. Works perfectly.
On Wed, May 27, 2009 at 2:03 PM, Henrique Dallazuanna <wwwhsd@gmail.com>
wrote:
> Try this:
>
> cbind(sort(x[,1]), unlist(tapply(x[,2], x[,1], sort, decreasing = T)))
>
> On Wed, May 27, 2009 at 9:39 AM, Paul Geeleher
<paulgeeleher@gmail.com>
> wrote:
>>
>> I've got a matrix with 2 columns and n rows. I need to sort it first
>> by the values in column 1 ascending. Then for values which are the
>> same in column 1, sort by column 2 descending. For example:
>>
>> 2 .5
>> 1 .3
>> 1 .5
>> 3 .2
>>
>> Goes to:
>>
>> 1 .5
>> 1 .3
>> 2 .5
>> 3 .2
>>
>> This is easy to do in spreadsheet programs but I can't seem to work
>> out how to do it in R and haven't been able to find a solution
>> anywhere.
>>
>>
>> Thanks!
>>
>> -Paul.
>>
>> --
>> Paul Geeleher
>> School of Mathematics, Statistics and Applied Mathematics
>> National University of Ireland
>> Galway
>> Ireland
>>
>> ______________________________________________
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>
>
>
> --
> Henrique Dallazuanna
> Curitiba-Paraná-Brasil
> 25° 25' 40" S 49° 16' 22" O
>
--
Paul Geeleher
School of Mathematics, Statistics and Applied Mathematics
National University of Ireland
Galway
Ireland
------------------------------
Message: 45
Date: Wed, 27 May 2009 19:41:47 +0530
From: utkarshsinghal <utkarsh.singhal@global-analytics.com>
Subject: [R] Defining functions - an interesting problem
To: r help <r-help@R-project.org>
Message-ID: <4A1D4A23.3070006@global-analytics.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
I define the following function:
(Please don't wonder about the use of this function, this is just a
simplified version of my actual function. And please don't spend your
time in finding an alternate way of doing the same as the following does
not exactly represent my function. I am only interested in a good
explanation)
> f1 = function(x,ties.method="average")rank(x,ties.method)
> f1(c(1,1,2,4), ties.method="min")
[1] 1.5 1.5 3.0 4.0
I don't know why it followed ties.method="average".
Anyways I randomly tried the following:
> f2 = function(x,ties.method="average")rank(x,ties.method=ties.method)
> f2(c(1,1,2,4), ties.method="min")
[1] 1 1 3 4
Now, it follows the ties.method="min"
I don't see any explanation for this, however, I somehow mugged up that
if I define it as in "f1", the ties.method in rank function takes its
default value which is "average" and if I define as in "f2", it takes
the value which is passed in "f2".
But even all my mugging is wasted when I tested the following:
> h = function(x, a=1)x^a
> g1 = function(x, a=1)h(x,a)
> g1(x=5, a=2)
[1] 25
> g2 = function(x, a=1)h(x,a=a)
> g2(x=5, a=2)
[1] 25
Here in both the cases, "h" is taking the value passed through
"g1", and
"g2".
Any comments/hints can be helpful.
Regards
Utkarsh
------------------------------
Message: 46
Date: Wed, 27 May 2009 07:17:50 -0700 (PDT)
From: Ben Bolker <bolker@ufl.edu>
Subject: Re: [R] Hierarchical glm with binomial family
To: r-help@r-project.org
Message-ID: <23743418.post@talk.nabble.com>
Content-Type: text/plain; charset=UTF-8
Johan Stenberg-2 wrote:
> Dear members of the R help list,
>
> I want to do a hierarchical glm with binomial family but am unsure
> about how to write the syntax which involves nesting.
>
> I want to test whether the risk of being attacked by Herbivores for
> Meadowsweet plants is significantly dependent on the Distance to
> heterospecific source plants.
>
> Dependent variable = Herbivory (yes/no)
> Explanatory continuous variable = Distance to heterospecific source plant
>
> Distance should be nested within Subpopulation which in turn should be
> nested within Population.
> The number of replicates per subpopulation varies between 8 and 36.
> The number of subpopulations per population varies between 4 and 9.
>
> I haven't figured out how to do nesting, but guessing that nesting is
> denoted with brackets I guess the syntax should look something like
> this (below). Could you please help me to correct this syntax so that
> it becomes useful in R?
>
> model<-glm(Herbivory~Distance(Subpopulation(Population)), family=binomial)
>
>
You probably need a GLMM (generalized linear mixed model), which is
a little bit of a can of worms. If so, you will need the "glmer" function
in the "lme4" package.
I'm not entirely clear about your experimental design: I understand
that subpopulations are nested within populations, but it's not clear
whether covariates (distances to heterospecific plants) differ within
subpopulations or populations.
If they don't differ within subpopulations, I would (strongly) recommend
aggregating the values within subpopulations and analyzing proportions as
a regression analysis:
see Murtaugh, Paul A. "Simplicity and complexity in ecological data
analysis." Ecology 88, no. 1 (2007): 56-62.
If they do, then your design is
model<-glmer(Herbivory~Distance+(1|Population/Subpopulation),
family=binomial)
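For reference, a complete call along these lines might look as follows (a sketch only:
the data frame name dat is an assumption, while the response and grouping columns come
from the question):
library(lme4)
model <- glmer(Herbivory ~ Distance + (1 | Population/Subpopulation),
               family = binomial, data = dat)
summary(model)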
See also:
https://stat.ethz.ch/pipermail/r-sig-mixed-models/2009q2/002320.html
https://stat.ethz.ch/pipermail/r-sig-mixed-models/2009q2/002335.html
--
View this message in context:
http://www.nabble.com/Hierarchical-glm-with-binomial-family-tp23742335p23743418.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 47
Date: Wed, 27 May 2009 15:24:35 +0100
From: Gavin Simpson <gavin.simpson@ucl.ac.uk>
Subject: Re: [R] Warning message as a result of logistic regression
performed
To: "Winter, Katherine" <K.Winter1@liverpool.ac.uk>
Cc: "r-help@r-project.org" <r-help@r-project.org>
Message-ID: <1243434275.2975.65.camel@desktop.localhost>
Content-Type: text/plain; charset="us-ascii"
Try reading this thread:
http://thread.gmane.org/gmane.comp.lang.r.general/134368/focus=134475
especially the posts by I Kosmidis which show you how to diagnose
problems in logit model fits like this.
There is a statement about this warning in ?glm as well and a pointer to
a reference which discusses a source of the warning.
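As a quick illustration of one common cause, a perfectly separated predictor reproduces
the warning (toy data, unrelated to Katherine's simulation):
x <- 1:10
y <- c(0, 0, 0, 0, 0, 1, 1, 1, 1, 1)    # y switches from 0 to 1 exactly once along x
fit <- glm(y ~ x, family = binomial)    # warns: fitted probabilities numerically 0 or 1 occurred
summary(fit)                            # note the huge coefficients and standard errors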
G
On Wed, 2009-05-27 at 11:22 +0100, Winter, Katherine wrote:
> I am sorry if this question sounds basic but I am having trouble
understanding a warning message I have been receiving in R after attempting
logistic regression.
>
> I have been using the logistic regression function in R to analyse a
simulated data set. The dependent variable "failure" has an outcome of
either 0 (success) or 1 (failure). Both the independent variables have been
previously generated in a mathematical model and stored in a data.frame for
analysis. I am currently using a sample size of 1000 and I use the following
commands in R:
>
> log.reg.1 <- glm(failure ~ age +weight +init.para.log.value
+k.d1,family=binomial(logit), data=test)
> log.reg.1.summary <- summary(log.reg.1); print(log.reg.1.summary)
> log.reg.1.exp <- exp(log.reg.1$coef); print(log.reg.1.exp)
>
> When I execute these commands I get the following warning message:
>
> "In glm.fit(x = X, y = Y, weights = weights, start = start, etastart =
etastart, :fitted probabilities numerically 0 or 1 occurred"
>
> I am unsure what this warning is referring to. I have tried using google to
answer this question but have had no luck.
>
> I have been on the following website
https://stat.ethz.ch/pipermail/r-sig-ecology/2008-July/000278.html but found it
was not helpful, as when I ran the example given I received no warning message
(I am using R version 2.8.1).
>
> I am working with simulated data so there are no missing values in the data
set.
>
> I have also looked at the following website
http://tolstoy.newcastle.edu.au/R/help/05/07/7759.html they suggest that the
warning is as a result of "perfect separation" of the results (a
possibility with simulated data). However, when I added an extra row to my
data.frame of results that I knew to be false and hence to prevent "perfect
separation" subsequent logistic regression still resulted in the same
warning message.
>
> I am still at a loss as to the meaning of this message and any help in
understanding this warning would be much appreciated.
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
--
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
Dr. Gavin Simpson [t] +44 (0)20 7679 0522
ECRC, UCL Geography, [f] +44 (0)20 7679 0565
Pearson Building, [e] gavin.simpsonATNOSPAMucl.ac.uk
Gower Street, London [w] http://www.ucl.ac.uk/~ucfagls/
UK. WC1E 6BT. [w] http://www.freshwaters.org.uk
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 197 bytes
Desc: This is a digitally signed message part
URL:
<https://stat.ethz.ch/pipermail/r-help/attachments/20090527/25eba3a6/attachment-0001.bin>
------------------------------
Message: 48
Date: Wed, 27 May 2009 15:30:27 +0100
From: Gavin Simpson <gavin.simpson@ucl.ac.uk>
Subject: Re: [R] Multivariate Transformations
To: stephen sefick <ssefick@gmail.com>
Cc: r-help@r-project.org, Hollix <Holger.steinmetz@web.de>
Message-ID: <1243434627.2975.70.camel@desktop.localhost>
Content-Type: text/plain; charset="utf-8"
On Wed, 2009-05-27 at 08:39 -0400, stephen sefick wrote:
> It depends on what you are after. I am by no means a wunderkind when
> it comes to transformation, but in the package vegan type
> ?wisconsin
> and that should give you a start, but if you know what
> transformations you would like to perform then apply should do what
> you need with whatever transformation you are trying to use.
decostand provides (mostly) standardisations, not transformations; it
even says so. What Holger is looking for is something like a Box-Cox
transform, but one that achieves multivariate rather than univariate
normality. That is a different kettle of fish to what decostand tries to
do.
HTH
G
>
> Stephen Sefick
>
> On Wed, May 27, 2009 at 5:26 AM, Hollix <Holger.steinmetz@web.de>
wrote:
> >
> > Hello folks,
> >
> > many multivariate analyses (e.g., structural equation modeling) require
> > multivariate normal distributions.
> > Real data, however, most often significantly depart from the multinormal
> > distribution. Some researchers (e.g., Yuan et al., 2000) have proposed a
> > multivariate transformation of the variables.
> >
> > Can you tell me, if and how such a transformation can be handled in R?
> >
> > Thanks in advance.
> > With best regards
> > Holger
> >
> >
> > ---------------
> > Yuan, K.-H., Chan, W., & Bentler, P. M. (2000). Robust transformation with
> > applications to structural equation modeling. British Journal of
> > Mathematical and Statistical Psychology, 53, 31-50.
> > --
> > View this message in context:
http://www.nabble.com/Multivariate-Transformations-tp23739013p23739013.html
> > Sent from the R help mailing list archive at Nabble.com.
> >
> > ______________________________________________
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> >
>
>
>
--
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
Dr. Gavin Simpson [t] +44 (0)20 7679 0522
ECRC, UCL Geography, [f] +44 (0)20 7679 0565
Pearson Building, [e] gavin.simpsonATNOSPAMucl.ac.uk
Gower Street, London [w] http://www.ucl.ac.uk/~ucfagls/
UK. WC1E 6BT. [w] http://www.freshwaters.org.uk
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 197 bytes
Desc: This is a digitally signed message part
URL:
<https://stat.ethz.ch/pipermail/r-help/attachments/20090527/35bcaec7/attachment-0001.bin>
------------------------------
Message: 49
Date: Wed, 27 May 2009 10:15:04 -0400
From: Stavros Macrakis <macrakis@alum.mit.edu>
Subject: Re: [R] How to exclude a column by name?
To: Zeljko Vrba <zvrba@ifi.uio.no>
Cc: r-help@r-project.org
Message-ID:
<8b356f880905270715y5cda0ea2ofa4b178f9fdbfb7f@mail.gmail.com>
Content-Type: text/plain
On Wed, May 27, 2009 at 6:37 AM, Zeljko Vrba <zvrba@ifi.uio.no> wrote:
> Given an arbitrary data frame, it is easy to exclude a column given its
> index:
> df[,-2]. How to do the same thing given the column name? A naive attempt
> df[,-"name"] did not work :)
>
Various ways:
Boolean index vector:
df[ , names(df) != "name" ]
List of wanted column names:
df[ , setdiff(names(df), "name") ]
Negated list of unwanted column indexes:
df[ , -match("name",names(df)) ]
df[ , -which(names(df) == "name") ]
The special 'subset' hack for column names; beware, I think this is the only
place in R where you can negate a column name:
subset(df, select = -name)
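For concreteness, a toy data frame to try the variants on (hypothetical, not from the
thread):
df <- data.frame(a = 1:3, name = c("x", "y", "z"), b = 4:6)
df[ , names(df) != "name" ]      # drops the 'name' column, keeps a and b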
Hope this helps,
-s
[[alternative HTML version deleted]]
------------------------------
Message: 50
Date: Wed, 27 May 2009 07:36:11 -0700 (PDT)
Subject: Re: [R] Neural Network resource
To: markleeds@verizon.net, R Help <r-help@r-project.org>
Message-ID: <477209.31807.qm@web65415.mail.ac4.yahoo.com>
Content-Type: text/plain
You are right, there is a pdf file which describes the function. But let me tell you
where I am coming from.
Just to test whether a neural network will work better than an ordinary least squares
regression, I created a dataset with one dependent variable and 6 other
independent variables. I had deliberately created the dataset in such a manner
that we have an excellent regression model, e.g. Y = b0 + b1*x1 + b2*x2 + b3*x3 +
... + b6*x6 + e,
where e is a normal random variable. Naturally any statistical analysis system
running a regression would easily estimate the values of b1, b2, b3, ..., b6 with
around 30-40 observations.
I fed this data into a neural network (3 hidden layers with 6 neurons in each
layer) and trained the network. When I passed the input dataset and tried to get
the predictions, all the predicted values were identical! This confused me a bit,
and I was wondering whether my understanding of the neural network was wrong.
Have you ever faced anything like it?
Regards,
Indrajit
________________________________
From: "markleeds@verizon.net" <markleeds@verizon.net>
Sent: Wednesday, May 27, 2009 7:54:59 PM
Subject: Re: [R] Neural Network resource
Hi: I've never used that package but most likely there is an AMORE vignette
that shows examples and describes the functions.
It should be on the same CRAN web page where the package resides, in pdf form.
Hi All,
I am trying to learn Neural Networks. I found that R has packages which can help
build Neural Nets - the popular one being AMORE package. Is there any book /
resource available which guides us in this subject using the AMORE package?
Any help will be much appreciated.
Thanks,
Indrajit
______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
[[alternative HTML version deleted]]
------------------------------
Message: 51
Date: Wed, 27 May 2009 07:36:18 -0700 (PDT)
From: Thomas Lumley <tlumley@u.washington.edu>
Subject: Re: [R] Defining functions - an interesting problem
To: utkarshsinghal <utkarsh.singhal@global-analytics.com>
Cc: r help <r-help@r-project.org>
Message-ID:
<alpine.LRH.2.01.0905270732250.3456@homer22.u.washington.edu>
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed
On Wed, 27 May 2009, utkarshsinghal wrote:
> I define the following function:
> (Please don't wonder about the use of this function, this is just a
> simplified version of my actual function. And please don't spend your
time in
> finding an alternate way of doing the same as the following does not
exactly
> represent my function. I am only interested in a good explanation)
>
>> f1 = function(x,ties.method="average")rank(x,ties.method)
>> f1(c(1,1,2,4), ties.method="min")
> [1] 1.5 1.5 3.0 4.0
>
> I don't know why it followed ties.method="average".
Look at the arguments to rank()
> args(rank)
function (x, na.last = TRUE, ties.method = c("average", "first",
    "random", "max", "min"))
When you do rank(x, ties.method) you are passing "min" as the second
argument to rank(), which is the na.last argument, not the ties.method
argument. This didn't give an error message because there weren't any NAs
in your data.
You want
f1 = function(x,ties.method="average")rank(x,ties.method=ties.method)
which gives
> f1(c(1,1,2,4), ties.method="min")
[1] 1 1 3 4
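The same positional-matching pitfall can be reproduced directly at the prompt (a small
sketch reusing the data from the post):
rank(c(1, 1, 2, 4), "min")                 # "min" is matched to na.last; ties still averaged
rank(c(1, 1, 2, 4), ties.method = "min")   # named argument reaches ties.method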
-thomas
Thomas Lumley Assoc. Professor, Biostatistics
tlumley@u.washington.edu University of Washington, Seattle
------------------------------
Message: 52
Date: Wed, 27 May 2009 15:37:41 +0100
From: Gavin Simpson <gavin.simpson@ucl.ac.uk>
Subject: Re: [R] Defining functions - an interesting problem
To: utkarshsinghal <utkarsh.singhal@global-analytics.com>
Cc: r help <r-help@r-project.org>
Message-ID: <1243435061.2975.76.camel@desktop.localhost>
Content-Type: text/plain; charset="us-ascii"
On Wed, 2009-05-27 at 19:41 +0530, utkarshsinghal wrote:
> I define the following function:
> (Please don't wonder about the use of this function, this is just a
> simplified version of my actual function. And please don't spend your
> time in finding an alternate way of doing the same as the following does
> not exactly represent my function. I am only interested in a good
> explanation)
>
> > f1 = function(x,ties.method="average")rank(x,ties.method)
> > f1(c(1,1,2,4), ties.method="min")
> [1] 1.5 1.5 3.0 4.0
>
> I don't know why it followed ties.method="average".
What is the second argument of rank? It is not ties.method. You passed
"min" to na.last, not ties.method. You need to name the argument if you
are not passing in all arguments and in the correct order.
> Anyways I randomly tried the following:
>
> > f2 = function(x,ties.method="average")rank(x,ties.method=ties.method)
> > f2(c(1,1,2,4), ties.method="min")
> [1] 1 1 3 4
> Now, it follows the ties.method="min"
Why randomly - ?rank tells you the argument is ties.method, so you should
set it to ties.method: ties.method = ties.method in your call to rank.
>
> I don't see any explanation for this, however, I somehow mugged up that
> if I define it as in "f1", the ties.method in rank function takes its
> default value which is "average" and if I define as in "f2", it takes
> the value which is passed in "f2".
Because you aren't passing ties.method as the same argument in f1 and
f2. In f1 you are passing ties.method to na.last, in f2 you do it
correctly.
>
> But even all my mugging is wasted when I tested the following:
>
> > h = function(x, a=1)x^a
> > g1 = function(x, a=1)h(x,a)
> > g1(x=5, a=2)
> [1] 25
>
> > g2 = function(x, a=1)h(x,a=a)
> > g2(x=5, a=2)
> [1] 25
>
> Here in both the cases, "h" is taking the value passed through "g1", and
> "g2".
Here there are only two arguments and you supplied them in the correct
place when you supplied them un-named.
HTH
G
>
> Any comments/hints can be helpful.
>
> Regards
> Utkarsh
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
--
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
Dr. Gavin Simpson [t] +44 (0)20 7679 0522
ECRC, UCL Geography, [f] +44 (0)20 7679 0565
Pearson Building, [e] gavin.simpsonATNOSPAMucl.ac.uk
Gower Street, London [w] http://www.ucl.ac.uk/~ucfagls/
UK. WC1E 6BT. [w] http://www.freshwaters.org.uk
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 197 bytes
Desc: This is a digitally signed message part
URL:
<https://stat.ethz.ch/pipermail/r-help/attachments/20090527/2526a567/attachment-0001.bin>
------------------------------
Message: 53
Date: Wed, 27 May 2009 20:14:08 +0530
From: utkarshsinghal <utkarsh.singhal@global-analytics.com>
Subject: Re: [R] Defining functions - an interesting problem
To: Thomas Lumley <tlumley@u.washington.edu>
Cc: r help <r-help@r-project.org>
Message-ID: <4A1D51B8.7060504@global-analytics.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Yeah, seems so obvious now. What a blunder, poor me.
Perfect explanation. Thanks
Thomas Lumley wrote:
> On Wed, 27 May 2009, utkarshsinghal wrote:
>
>> I define the following function:
>> (Please don't wonder about the use of this function, this is just a
>> simplified version of my actual function. And please don't spend your
>> time in finding an alternate way of doing the same as the following
>> does not exactly represent my function. I am only interested in a
>> good explanation)
>>
>>> f1 = function(x,ties.method="average")rank(x,ties.method)
>>> f1(c(1,1,2,4), ties.method="min")
>> [1] 1.5 1.5 3.0 4.0
>>
>> I don't know why it followed ties.method="average".
>
> Look at the arguments to rank()
>> args(rank)
> function (x, na.last = TRUE, ties.method = c("average", "first",
> "random", "max", "min"))
>
> When you do rank(x, ties.method) you are passing "min" as the second
> argument to rank(), which is the na.last argument, not the ties.method
> argument. This didn't give an error message because there weren't any
> NAs in your data.
>
> You want
> f1 = function(x,ties.method="average")rank(x,ties.method=ties.method)
> which gives
>> f1(c(1,1,2,4), ties.method="min")
> [1] 1 1 3 4
>
> -thomas
>
> Thomas Lumley Assoc. Professor, Biostatistics
> tlumley@u.washington.edu University of Washington, Seattle
>
>
------------------------------
Message: 54
Date: Wed, 27 May 2009 11:35:38 -0400
From: "R Heberto Ghezzo, Dr" <heberto.ghezzo@mcgill.ca>
Subject: [R] R in Ubunto
To: "r-help@r-project.org" <r-help@r-project.org>
Message-ID:
<C0F68778F3A5F545A370AECD9CC6BD9D02FD927642@EXMBXVS1.campus.mcgill.ca>
Content-Type: text/plain; charset="iso-8859-1"
Hello, I do not know anything about Ubuntu, but I found a Portable Ubuntu for
Windows, and since so many people prefer Linux to Windows I decided to give it
a try. It runs very nicely, so I tried to load R. Following the instructions on
CRAN I added the line
deb http://probability.ca/cran/bin/linux/ubuntu hardy/ to /etc/apt/sources.list
and then from a console I did
sudo apt-get update
sudo apt-get install r-base
[[elided Yahoo spam]]
I got R 2.6.2!! In Windows I have R 2.9.0??
Did I do something wrong, or is there another way to get the latest version of R?
Thanks for any help
Heberto Ghezzo Ph.D.
Biostatistique medical
Montreal - Canada
------------------------------
Message: 55
Date: Wed, 27 May 2009 11:31:49 -0400
From: stephen sefick <ssefick@gmail.com>
Subject: [R] vegan metaMDS question
To: "r-help@r-project.org" <r-help@r-project.org>,
gavin.simpson@ucl.ac.uk
Message-ID:
<c502a9e10905270831g5adbd94dk9ef72c488f517658@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
The design decision in metaMDS says that it uses:
Minchin, P.R. (1987) An evaluation of relative robustness of
techniques for ecological ordinations. Vegetatio 71, 145-156.
This is the paper that I found by the same name. Is this the correct reference?
Minchin, Peter R. 1987. An Evaluation of the Relative Robustness of
Techniques for Ecological Ordination. Vegetatio. Vol. 69, No. 1/3:
89-107.
In this paper the double standardization (wisconsin()) is used, then a
centering by species is performed. The centering by species isn't
incorporated in the metaMDS methodology, is it? Is there a reason for
this, or am I missing something?
best regards,
--
Stephen Sefick
Let's not spend our time and resources thinking about things that are
so little or so large that all they really do for us is puff us up and
make us feel like gods. We are mammals, and have not exhausted the
annoying little problems of being mammals.
-K. Mullis
------------------------------
Message: 56
Date: Wed, 27 May 2009 11:49:47 -0400
From: Max Kuhn <mxkuhn@gmail.com>
Subject: Re: [R] Neural Network resource
Cc: R Help <r-help@r-project.org>
Message-ID:
<6731304c0905270849k50258856ue384067e1ef3cb07@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
> I fed this data into a Neural network (3 hidden layers with 6 neurons in
each layer) and trained the network. When I passed the input dataset and tried
to get the predictions, all the predicted values were identical! This confused
me a bit and was wondering whether my understanding of the Neural Network was
wrong.
>
> Have you ever faced anything like it?
You should really provide code for us to help. I would initially
suspect that you didn't use a linear function between your hidden
units and the outcomes.
Also, using 3 hidden layers and 6 units per layer is a bit much for
your data set (30-40 samples). You will probably end up overfitting.
--
Max
------------------------------
Message: 57
Date: Wed, 27 May 2009 15:57:52 +0000 (UTC)
From: Dieter Menne <dieter.menne@menne-biomed.de>
Subject: Re: [R] How to exclude a column by name?
To: r-help@stat.math.ethz.ch
Message-ID: <loom.20090527T155625-882@post.gmane.org>
Content-Type: text/plain; charset=us-ascii
Peter Dalgaard <P.Dalgaard <at> biostat.ku.dk> writes:
> Or, BTW, you can use within()
>
> aq <- within(airquality, rm(Day))
Please add this as an example to the docs of within.
Dieter
------------------------------
Message: 58
Date: Wed, 27 May 2009 11:51:25 -0400
From: stephen sefick <ssefick@gmail.com>
Subject: Re: [R] R in Ubunto
To: "R Heberto Ghezzo, Dr" <heberto.ghezzo@mcgill.ca>
Cc: "r-help@r-project.org" <r-help@r-project.org>
Message-ID:
<c502a9e10905270851r5633e519ie52bb269fd153f37@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
I don't remember what the version of R in deb repositories is, but
2.6.2 is probably about right. One of the things the Debian project
is focused on is the stability of the operating system, so they do not
update packages as readily as some other distributions. I had the same
issue with Debian 5.0 and just decided to compile R from source after
getting the R development package and some X11 development libraries.
sudo apt-get install r-base-dev
I can help you through this process if you like, or there are good
instructions for this process at the R website.
FAQ 2.5.1 How can R be installed (Unix)
On Wed, May 27, 2009 at 11:35 AM, R Heberto Ghezzo, Dr
<heberto.ghezzo@mcgill.ca> wrote:
> Hello, I do not know anything about Ubuntu, but I found a Portable Ubuntu
for Windows and since so many people
> prefer Linux to Windows I decided to give it a try.
> It runs very nicely, so I tried to load R, following Instructions in CRAN I
added the line
> deb http://probability.ca/cran/bin/linux/ubuntu hardy/ to
/etc/apt/sources.list and then from a console
> I did
> sudo apt-get update
> sudo apt-get install r-base
> a lot of printout and when it finishes I typed R in the console and
surprise!
> I got R 2.6.2!! in Windows I have R 2.9.0??
> Did I do something wrong or there is another way to get the latest version
of R?
> Thanks for any help
> Heberto Ghezzo Ph.D.
> Biostatistique medical
> Montreal - Canada
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Stephen Sefick
Let's not spend our time and resources thinking about things that are
so little or so large that all they really do for us is puff us up and
make us feel like gods. We are mammals, and have not exhausted the
annoying little problems of being mammals.
-K. Mullis
------------------------------
Message: 59
Date: Wed, 27 May 2009 11:58:37 -0400
From: Stavros Macrakis <macrakis@alum.mit.edu>
Subject: Re: [R] Defining functions - an interesting problem
To: utkarshsinghal <utkarsh.singhal@global-analytics.com>
Cc: r help <r-help@r-project.org>
Message-ID:
<8b356f880905270858y27348713le82db7ca5716f0c2@mail.gmail.com>
Content-Type: text/plain
The 'ties.method' argument to 'rank' is the *third* positional argument to
'rank', so either you need to put it in the third position or you need to
use a named argument.
The fact that the variable you're using to represent ties.method is called
ties.method is irrelevant. That is, this:
rank(x,ties.method)
is equivalent to
rank(x, na.last = ties.method)
which is not what you want.
You need to write
rank(x, ties.method = ties.method)
or (more concise but not as clear):
rank(x, , ties.method)
Hope this helps,
-s
On Wed, May 27, 2009 at 10:11 AM, utkarshsinghal <
utkarsh.singhal@global-analytics.com> wrote:
> I define the following function:
> (Please don't wonder about the use of this function, this is just a
> simplified version of my actual function. And please don't spend your time
> in finding an alternate way of doing the same as the following does not
> exactly represent my function. I am only interested in a good explanation)
>
> > f1 = function(x,ties.method="average")rank(x,ties.method)
> > f1(c(1,1,2,4), ties.method="min")
> [1] 1.5 1.5 3.0 4.0
>
> I don't know why it followed ties.method="average".
> Anyways I randomly tried the following:
>
> > f2 = function(x,ties.method="average")rank(x,ties.method=ties.method)
> > f2(c(1,1,2,4), ties.method="min")
> [1] 1 1 3 4
> Now, it follows the ties.method="min"
>
> I don't see any explanation for this, however, I somehow mugged up that if
> I define it as in "f1", the ties.method in rank function takes its default
> value which is "average" and if I define as in "f2", it takes the value
> which is passed in "f2".
>
> But even all my mugging is wasted when I tested the following:
>
> > h = function(x, a=1)x^a
> > g1 = function(x, a=1)h(x,a)
> > g1(x=5, a=2)
> [1] 25
>
> > g2 = function(x, a=1)h(x,a=a)
> > g2(x=5, a=2)
> [1] 25
>
> Here in both the cases, "h" is taking the value passed through "g1",
> and "g2".
>
> Any comments/hints can be helpful.
>
> Regards
> Utkarsh
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
[[alternative HTML version deleted]]
------------------------------
Message: 60
Date: Wed, 27 May 2009 09:00:44 -0700
From: Jeff Newmiller <jdnewmil@dcn.davis.ca.us>
Subject: Re: [R] R in Ubunto
To: "R Heberto Ghezzo, Dr" <heberto.ghezzo@mcgill.ca>
Cc: "r-help@r-project.org" <r-help@r-project..org>
Message-ID: <4A1D63AC.9090202@dcn.davis.ca.us>
Content-Type: text/plain; charset=us-ascii; format=flowed
R Heberto Ghezzo, Dr wrote:
> Hello, I do not know anything about Ubuntu, but I found a Portable Ubuntu
for Windows and since so many people
> prefer Linux to Windows I decided to give it a try.
> It runs very nicely, so I tried to load R, following Instructions in CRAN I
added the line
> deb http://probability.ca/cran/bin/linux/ubuntu hardy/ to
/etc/apt/sources.list and then from a console
> I did
> sudo apt-get update
> sudo apt-get install r-base
> a lot of printout and when it finishes I typed R in the console and
surprise!
> I got R 2.6.2!! in Windows I have R 2.9.0??
> Did I do something wrong or there is another way to get the latest version
of R?
On the web page
http://probability.ca/cran/bin/linux/ubuntu/
it presents instructions for activating this repository. Special
instructions are included for hardy regarding activating backports
also.
--
---------------------------------------------------------------------------
Jeff Newmiller The ...... ..... Go Live...
DCN:<jdnewmil@dcn.davis.ca.us> Basics: ##.#. ##.#. Live Go...
Live: OO#.. Dead: OO#.. Playing
Research Engineer (Solar/Batteries O.O#. #.O#. with
/Software/Embedded Controllers) .OO#. .OO#. rocks...1k
------------------------------
Message: 61
Date: Wed, 27 May 2009 12:00:27 -0400
From: Luc Villandre <villandl@dms.umontreal.ca>
Subject: [R] Object-oriented programming in R
To: R Help <r-help@r-project.org>
Message-ID: <4A1D639B.1090206@dms.umontreal.ca>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Dear R-users,
I have very recently started learning about object-oriented programming
in R. I am far from being an expert in programming, although I do have
an elementary C++ background.
Please take a look at these lines of code.
> some.data = data.frame(V1 = 1:5, V2 = 6:10) ;
> p.plot = ggplot(data=some.data,aes(x=V1, y=V2)) ;
> class(p.plot) ;
> [1] "ggplot"
My understanding is that the object p.plot belongs to the "ggplot"
class. However, a new class definition like
> setClass("AClass", representation(mFirst = "numeric", mSecond = "ggplot")) ;
yields the warning
> Warning message:
> In .completeClassSlots(ClassDef, where) :
> undefined slot classes in definition of "AClass": mSecond(class "ggplot")
The ggplot object is also a list:
> is.list(p.plot)
> [1] TRUE
So, I guess I could identify mSecond as being a list.
However, I don't understand why "ggplot" is not considered a valid slot
type. I thought setClass() was analogous to the class declaration in
C++, but I guess I might be wrong. Would anyone care to provide
additional explanations about this?
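A possible workaround (just a sketch, untested here; it assumes that
registering the S3 class with setOldClass() is sufficient) would be:

library(ggplot2)
library(methods)
some.data <- data.frame(V1 = 1:5, V2 = 6:10)
# register the S3 class "ggplot" so S4 accepts it as a slot type
# (newer ggplot2 versions may already do this for you)
setOldClass("ggplot")
setClass("AClass", representation(mFirst = "numeric", mSecond = "ggplot"))
obj <- new("AClass", mFirst = 1,
           mSecond = ggplot(data = some.data, aes(x = V1, y = V2)))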
I decided to explore object-oriented programming in R so that I could
organize the output from my analysis in a more rigorous fashion and then
define custom methods that would yield relevant output. However, I'm
starting to wonder if this aspect is not better suited for package
builders. R lists are already very powerful and convenient templates.
Although it wouldn't be as elegant, I could define functions that would
take lists outputted by the different steps of my analysis and do what I
want with them. I'm wondering what the merits of both approaches in the
context of R would be. If anyone has any thoughts about this, I'd be
most glad to read them.
Cheers,
--
*Luc Villandré*
/Biostatistician
McGill University Health Center -
Montreal Children's Hospital Research Institute/
------------------------------
Message: 62
Date: Wed, 27 May 2009 11:05:09 -0500
From: Kevin W <kw.statr@gmail.com>
Subject: [R] Changing point color/character in qqmath
To: r-help@r-project.org
Message-ID:
<5c62e0070905270905q7629487xea3dbc4cabaf561@mail.gmail.com>
Content-Type: text/plain
Having solved this problem, I am posting this so that the next time I search
for how to do this I will find an answer...
Using qqmath(..., groups=num) creates a separate qq distribution for each
group (within a panel). Using the 'col' or 'pch' argument does not
(usually) work because panel.qqmath sorts the data (but not 'col' or
'pch') before plotting. Sorting the data before calling qqmath ensures
that this internal sorting does not change the order of the data relative
to the grouping variable.
For example, to obtain one distribution per voice part and color the point
by part 1 or part 2:
library(lattice)
singer <- singer
singer <- singer[order(singer$height),]
singer$part <- factor(sapply(strsplit(as.character(singer$voice.part), split = " "), "[", 1),
                      levels = c("Bass", "Tenor", "Alto", "Soprano"))
singer$num <- factor(sapply(strsplit(as.character(singer$voice.part), split = " "), "[", 2))
qqmath(~ height | part, data = singer,
col=singer$num,
layout=c(4,1))
Kevin
[[alternative HTML version deleted]]
------------------------------
Message: 63
Date: Wed, 27 May 2009 11:17:35 -0500
From: Douglas Bates <bates@stat.wisc.edu>
Subject: Re: [R] Hierarchical glm with binomial family
To: Ben Bolker <bolker@ufl.edu>
Cc: r-help@r-project.org
Message-ID:
<40e66e0b0905270917u7e20441bq6cae7bce9dcdfdf8@mail.gmail.com>
Content-Type: text/plain; charset=windows-1252
On Wed, May 27, 2009 at 9:17 AM, Ben Bolker <bolker@ufl.edu> wrote:
>
> Johan Stenberg-2 wrote:
>>
>> Dear members of the R help list,
>>
>> I want to do a hierarchical glm with binomial family but am unsure
>> about how to write the syntax which involves nesting.
>>
>> I want to test whether the risk of being attacked by Herbivores for
>> Meadowsweet plants is significantly dependent on the Distance to
>> heterospecific source plants.
>>
>> Dependent variable = Herbivory (yes/no)
>> Explanatory continuous variable = Distance to heterospecific source
plant
>>
>> Distance should be nested within Subpopulation which in turn should be
>> nested within Population.
>> The number of replicates per subpopulation varies between 8 and 36.
>> The number of subpopulations per population varies between 4 and 9.
>>
>> I haven't figured out how to do nesting, but guessing that nesting
is
>> denoted with brackets I guess the syntax should look something like
>> this (below). Could you please help me to correct this syntax so that
>> it becomes useful in R?
>>
>> model<-glm(Herbivory~Distance(Subpopulation(Population)),
family=binomial)
>>
>>
>
> You probably need a GLMM (generalized linear mixed model), which is
> a little bit of a can of worms. If so, you will need the "glmer" function
> inside the "lmer" package.
I think you mean the lme4 package.
> I'm not entirely clear about your experimental design: I understand
> that subpopulations are nested within populations, but it's not clear
> whether covariates (distances to heterospecific plants) differ within
> subpopulations or populations.
>
> If they don't differ with subpopulations, I would (strongly) recommend
> aggregating the values within subpopulations and analyzing proportions
> as a regression analysis:
> see Murtaugh, Paul A. "SIMPLICITY AND COMPLEXITY IN ECOLOGICAL
> DATA ANALYSIS." Ecology 88, no. 1 (2007): 56-62.
>
> If they do, then your design is
>
> model<-glmer(Herbivory~Distance+(1|Population/Subpopulation),
>              family=binomial)
>
> See also:
>
> https://stat.ethz.ch/pipermail/r-sig-mixed-models/2009q2/002320.html
> https://stat.ethz.ch/pipermail/r-sig-mixed-models/2009q2/002335.html
> --
> View this message in context:
http://www.nabble.com/Hierarchical-glm-with-binomial-family-tp23742335p23743418.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
------------------------------
Message: 64
Date: Thu, 28 May 2009 00:19:28 +0800
From: Linlin Yan <yanlinlin82@gmail.com>
Subject: Re: [R] Sort matrix by column 1 ascending then by column 2
decending
To: Paul Geeleher <paulgeeleher@gmail.com>, r-help@r-project.org
Message-ID:
<8d4c23b10905270919w54a1e6cbred3ab67fe0f0347f@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
It's a very interesting problem. I just wrote a function for it:
order.matrix <- function(m, columnsDecreasing = c('1'=FALSE), rows = 1:nrow(m))
{
  if (length(columnsDecreasing) > 0)
  {
    col <- as.integer(names(columnsDecreasing[1]));
    values <- sort(unique(m[rows, col]), decreasing=columnsDecreasing[1]);
    unlist(sapply(values, function(x) order.matrix(m, columnsDecreasing[-1],
                  which((1:nrow(m) %in% rows) & (m[, col]==x)))));
  }
  else
  {
    rows;
  }
}
For instance:
> m <- matrix( c(2, 1, 1, 3, .5, .3, .5, .2), 4)
> m
     [,1] [,2]
[1,]    2  0.5
[2,]    1  0.3
[3,]    1  0.5
[4,]    3  0.2
> m[order.matrix(m),]
     [,1] [,2]
[1,]    1  0.3
[2,]    1  0.5
[3,]    2  0.5
[4,]    3  0.2
> m[order.matrix(m, c("1"=FALSE, "2"=TRUE)),]
     [,1] [,2]
[1,]    1  0.5
[2,]    1  0.3
[3,]    2  0.5
[4,]    3  0.2
Any comment is welcome! ;)
On Wed, May 27, 2009 at 11:04 PM, Linlin Yan <yanlinlin82@gmail.com> wrote:
>> m <- matrix( c(2, 1, 1, 3, .5, .3, .5, .2), 4)
>> m
>      [,1] [,2]
> [1,]    2  0.5
> [2,]    1  0.3
> [3,]    1  0.5
> [4,]    3  0.2
>> m[unlist(sapply(sort(unique(m[,1])), function(x)
which(m[,1]==x)[order(m[(m[,1]==x),2], decreasing=TRUE)])),]
>      [,1] [,2]
> [1,]    1  0.5
> [2,]    1  0.3
> [3,]    2  0.5
> [4,]    3  0.2
>
> On Wed, May 27, 2009 at 8:39 PM, Paul Geeleher
<paulgeeleher@gmail.com> wrote:
>> I've got a matrix with 2 columns and n rows. I need to sort it
first
>> by the values in column 1 ascending. Then for values which are the
>> same in column 1, sort by column 2 decending. For example:
>>
>> 2 .5
>> 1 .3
>> 1 .5
>> 3 .2
>>
>> Goes to:
>>
>> 1 .5
>> 1 .3
>> 2 .5
>> 3 .2
>>
>> This is easy to do in spreadsheet programs but I can't seem to work
>> out how to do it in R and haven't been able to find a solution
>> anywhere.
>>
>>
>> Thanks!
>>
>> -Paul.
>>
>> --
>> Paul Geeleher
>> School of Mathematics, Statistics and Applied Mathematics
>> National University of Ireland
>> Galway
>> Ireland
>>
>> ______________________________________________
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
------------------------------
Message: 65
Date: Wed, 27 May 2009 18:24:46 +0200
From: Berta Ibáñez <bertuki6@hotmail.com>
Subject: [R] Deviance explined in GAMM, library mgcv
To: <r-help@r-project.org>
Message-ID: <COL107-W162A6C65D1F0B83A1A06C28F530@phx.gbl>
Content-Type: text/plain
Dear R-users,
Obtaining the percentage of deviance explained when fitting a gam model using
the mgcv library is straightforward:
summary(object.gam)$dev.expl
or alternatively, using the deviances (deviance(object.gam)) of the null and the
fitted models and then taking 1 minus their ratio.
However, when a gamm (generalized additive mixed model) is fitted, the deviance
is not displayed, and only the logLik of the underlying lme model can be derived
(logLik(object.gamm$lme)), which is not enough to derive the percentage of
deviance explained because the logLik for the saturated model is not available.
Any suggestions on how to obtain the deviance explained when a gamm with the
typical default gaussian family is fitted? Or alternatively, are the R^2 values
derived from a gam model and a gamm model comparable?
Thanks a lot in advance,
Berta
[[alternative HTML version deleted]]
------------------------------
Message: 66
Date: Wed, 27 May 2009 10:27:10 -0600 (MDT)
From: guox@ucalgary.ca
Subject: [R] How to set a filter during reading tables
To: r-help@r-project.org
Message-ID: <4784.68.145.107.33.1243441630.squirrel@68.145.107.33>
Content-Type: text/plain;charset=iso-8859-1
We are reading big tables, such as,
Chemicals <- read.table('ftp://ftp.bls.gov/pub/time.series/wp/wp.data.7.Chemicals',
                        header = TRUE, sep = '\t', as.is = TRUE)
I was wondering if it is possible to set a filter during loading so that
we just load what we want not the whole table each time. Thanks,
-james
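A couple of partial workarounds with plain read.table (sketches only; the
column classes and the filter pattern below are placeholders, not taken
from the actual file):

# limit how many rows are read
part <- read.table('ftp://ftp.bls.gov/pub/time.series/wp/wp.data.7.Chemicals',
                   header = TRUE, sep = '\t', as.is = TRUE, nrows = 1000)
# drop whole columns by marking them "NULL" in colClasses
# (one entry per column of the file; the classes here are placeholders)
keep <- c("character", "NULL", "character", "numeric")
cols <- read.table('ftp://ftp.bls.gov/pub/time.series/wp/wp.data.7.Chemicals',
                   header = TRUE, sep = '\t', colClasses = keep)
# or pre-filter rows with an external tool on a local copy of the file
filt <- read.table(pipe("grep 'SOME_PATTERN' wp.data.7.Chemicals"),
                   sep = '\t', as.is = TRUE)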
------------------------------
Message: 67
Date: Wed, 27 May 2009 18:28:09 +0200
From: Wacek Kusnierczyk <Waclaw.Marcin.Kusnierczyk@idi.ntnu.no>
Subject: Re: [R] How to exclude a column by name?
To: Dieter Menne <dieter.menne@menne-biomed.de>
Cc: r-help@stat.math.ethz.ch
Message-ID: <4A1D6A19.9000501@idi.ntnu.no>
Content-Type: text/plain; charset=ISO-8859-1
Dieter Menne wrote:
> Peter Dalgaard <P.Dalgaard <at> biostat.ku.dk> writes:
>
>
>> Or, BTW, you can use within()
>>
>> aq <- within(airquality, rm(Day))
>>
>
> Please add this as an example to the docs of within.
>
possibly with the slightly more generic
unwanted <- 'Day'
aq <- within(airquality, rm(list=unwanted))
vQ
------------------------------
Message: 68
Date: Wed, 27 May 2009 12:01:08 -0400
From: Thomas Levine <thomas.levine@gmail.com>
Subject: [R] Labeling barplot bars by multiple factors
To: r-help@r-project.org
Message-ID:
<677ee07e0905270901x4906a7edh4f2a05df8e5d657f@mail.gmail.com>
Content-Type: text/plain
I want to plot quantitative data as a function of three two-level factors.
How do I group the bars on a barplot by level through labeling and spacing?
Here <http://www.thomaslevine.org/sample_multiple-factor_barplot.png> is
what I'm thinking of. Also, I'm pretty sure that I want a barplot, but there
may be something better.
Tom
[[alternative HTML version deleted]]
------------------------------
Message: 69
Date: Wed, 27 May 2009 17:32:46 +0100
From: Gavin Simpson <gavin.simpson@ucl.ac.uk>
Subject: Re: [R] vegan metaMDS question
To: stephen sefick <ssefick@gmail.com>
Cc: "r-help@r-project.org" <r-help@r-project.org>
Message-ID: <1243441967.2975.102.camel@desktop.localhost>
Content-Type: text/plain; charset="us-ascii"
On Wed, 2009-05-27 at 11:31 -0400, stephen sefick wrote:
> The design decision in metaMDS says that it uses:
>
> Minchin, P.R. (1987) An evaluation of relative robustness of
> techniques for ecological ordinations. Vegetatio 71, 145-156.
>
> This is the paper that I found by the same name. Is this the correct
reference?
>
> Minchin, Peter R. 1987. An Evaluation of the Relative Robustness of
> Techniques for Ecological Ordination. Vegetatio. Vol. 69, No. 1/3:
> 89-107.
Yes, I suspect so - the other volume/pages refers to another paper of
Peter Minchin's. Jari has now fixed this in the sources and the change
will be made in the next version of Vegan released to CRAN. Thanks for
pointing this out.
>
> In this paper the double standardization (wisconsin()) is used, then a
> centering by species is performed. The centering by species isn't
> incorporated in the metaMDS methodology, is it?
No it isn't.
> Is there a reason for
> this, or am I missing something?
Yes - the only mention of centring by species is in reference to PCA. If
you centre species data, you'd have negative numbers, which can't be
handled in most dissimilarity coefficients and hence doesn't make sense
for nMDS. If I've overlooked something in that paper let me know and
I'll take a closer look.
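For reference, the transformation metaMDS applies by default can be
reproduced by hand along these lines (a rough sketch; the exact defaults
depend on the data and on the vegan version):

library(vegan)
data(dune)
# metaMDS() may apply a square-root and then Wisconsin double
# standardization before running the NMDS itself; roughly:
comm <- wisconsin(sqrt(dune))
ord  <- metaMDS(dune, distance = "bray", trace = FALSE)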
I forwarded your email to Jari Oksanen, who wrote the metaMDS code in
vegan. If he has anything more to add, I'm sure he'll reply to you
directly.
Why did you send this to R-Help and me? This is a specific
package-related question which should go to the maintainer (Jari). We
also have a forum for asking such questions on the vegan R-Forge pages.
There is no need to bother this list with such questions.
HTH
G
> best regards,
>
--
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
Dr. Gavin Simpson [t] +44 (0)20 7679 0522
ECRC, UCL Geography, [f] +44 (0)20 7679 0565
Pearson Building, [e] gavin.simpsonATNOSPAMucl.ac.uk
Gower Street, London [w] http://www.ucl.ac.uk/~ucfagls/
UK. WC1E 6BT. [w] http://www.freshwaters.org.uk
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 197 bytes
Desc: This is a digitally signed message part
URL:
<https://stat.ethz.ch/pipermail/r-help/attachments/20090527/e1512943/attachment-0001.bin>
------------------------------
Message: 70
Date: Wed, 27 May 2009 11:36:17 -0500
From: Kevin W <kw.statr@gmail.com>
Subject: Re: [R] Sort matrix by column 1 ascending then by column 2
decending
To: Linlin Yan <yanlinlin82@gmail.com>
Cc: r-help@r-project.org, Paul Geeleher <paulgeeleher@gmail.com>
Message-ID:
<5c62e0070905270936s41d29926g9cb5185bc5ecc799@mail.gmail.com>
Content-Type: text/plain
See also this tip on the R wiki:
http://wiki.r-project.org/rwiki/doku.php?id=tips:data-frames:sort
Also available as the orderBy function in the doBy package.
Kevin Wright
On Wed, May 27, 2009 at 11:19 AM, Linlin Yan <yanlinlin82@gmail.com>
wrote:
> It's a very interesting problem. I just wrote a function for it:
>
> order.matrix <- function(m, columnsDecreasing = c('1'=FALSE), rows = 1:nrow(m))
> {
> if (length(columnsDecreasing) > 0)
> {
> col <- as.integer(names(columnsDecreasing[1]));
> values <- sort(unique(m[rows, col]),
decreasing=columnsDecreasing[1]);
> unlist(sapply(values, function(x) order.matrix(m,
> columnsDecreasing[-1], which((1:nrow(m) %in% rows) & (m[,
> col]==x)))));
> }
> else
> {
> rows;
> }
> }
>
> For instance:
> > m <- matrix( c(2, 1, 1, 3, .5, .3, .5, .2), 4)
> > m
> [,1] [,2]
> [1,] 2 0.5
> [2,] 1 0.3
> [3,] 1 0.5
> [4,] 3 0.2
> > m[order.matrix(m),]
> [,1] [,2]
> [1,] 1 0.3
> [2,] 1 0.5
> [3,] 2 0.5
> [4,] 3 0.2
> > m[order.matrix(m, c("1"=FALSE, "2"=TRUE)),]
> [,1] [,2]
> [1,] 1 0.5
> [2,] 1 0.3
> [3,] 2 0.5
> [4,] 3 0.2
>
> Any comment is welcome! ;)
>
> On Wed, May 27, 2009 at 11:04 PM, Linlin Yan <yanlinlin82@gmail.com>
> wrote:
> >> m <- matrix( c(2, 1, 1, 3, .5, .3, .5, .2), 4)
> >> m
> > [,1] [,2]
> > [1,] 2 0.5
> > [2,] 1 0.3
> > [3,] 1 0.5
> > [4,] 3 0.2
> >> m[unlist(sapply(sort(unique(m[,1])), function(x)
> which(m[,1]==x)[order(m[(m[,1]==x),2], decreasing=TRUE)])),]
> > [,1] [,2]
> > [1,] 1 0.5
> > [2,] 1 0.3
> > [3,] 2 0.5
> > [4,] 3 0.2
> >
> > On Wed, May 27, 2009 at 8:39 PM, Paul Geeleher
<paulgeeleher@gmail.com>
> wrote:
> >> I've got a matrix with 2 columns and n rows. I need to sort it
first
> >> by the values in column 1 ascending. Then for values which are the
> >> same in column 1, sort by column 2 decending. For example:
> >>
> >> 2 .5
> >> 1 .3
> >> 1 .5
> >> 3 .2
> >>
> >> Goes to:
> >>
> >> 1 .5
> >> 1 .3
> >> 2 .5
> >> 3 .2
> >>
> >> This is easy to do in spreadsheet programs but I can't seem to
work
> >> out how to do it in R and haven't been able to find a solution
> >> anywhere.
> >>
> >>
> >> Thanks!
> >>
> >> -Paul.
> >>
> >> --
> >> Paul Geeleher
> >> School of Mathematics, Statistics and Applied Mathematics
> >> National University of Ireland
> >> Galway
> >> Ireland
> >>
> >> ______________________________________________
> >> R-help@r-project.org mailing list
> >> https://stat.ethz.ch/mailman/listinfo/r-help
> >> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> >> and provide commented, minimal, self-contained, reproducible code.
> >>
> >
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
[[alternative HTML version deleted]]
------------------------------
Message: 71
Date: Wed, 27 May 2009 18:46:02 +0200
From: Jarek Jasiewicz <jarekj@amu.edu.pl>
Subject: Re: [R] R in Ubunto
To: "R Heberto Ghezzo, Dr" <heberto.ghezzo@mcgill.ca>
Cc: "r-help@r-project.org" <r-help@r-project.org>
Message-ID: <4A1D6E4A.5000709@amu.edu.pl>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
R Heberto Ghezzo, Dr wrote:
> Hello, I do not know anything about Ubuntu, but I found a Portable Ubuntu
for Windows and since so many people
> prefer Linux to Windows I decided to give it a try.
> It runs very nicely, so I tried to load R, following Instructions in CRAN I
added the line
> deb http://probability.ca/cran/bin/linux/ubuntu hardy/ to
/etc/apt/sources.list and then from a console
> I did
> sudo apt-get update
> sudo apt-get install r-base
> a lot of printout and when it finishes I typed R in the console and
surprise!
> I got R 2.6.2!! in Windows I have R 2.9.0??
> Did I do something wrong or there is another way to get the latest version
of R?
> Thanks for any help
> Heberto Ghezzo Ph.D.
> Biostatistique medical
> Montreal - Canada
>
Use a CRAN mirror; for Canada it could be http://cran.stat.sfu.ca/ :
deb http://cran.stat.sfu.ca/bin/linux/ubuntu hardy/
You will have 2.9.0.
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
------------------------------
Message: 72
Date: Wed, 27 May 2009 12:48:34 -0400
From: Duncan Murdoch <murdoch@stats.uwo.ca>
Subject: Re: [R] Sort matrix by column 1 ascending then by column 2
decending
To: Paul Geeleher <paulgeeleher@gmail.com>
Cc: r-help@r-project.org
Message-ID: <4A1D6EE2.30102@stats.uwo.ca>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
On 5/27/2009 8:39 AM, Paul Geeleher wrote:
> I've got a matrix with 2 columns and n rows. I need to sort it first
> by the values in column 1 ascending. Then for values which are the
> same in column 1, sort by column 2 decending. For example:
You've seen a few ways. Here are some more:
1. Use the fact that order() uses a stable sort algorithm, so just sort
by the second column then the first:
x <- matrix(c(2,1,1,3,.5,.3,.5,.2), ncol=2)
x1 <- x[order(x[,2], decreasing=TRUE),]
x2 <- x1[order(x1[,1]),]
x2
2. Use the fact that your values are numeric, so negatives sort in the
reverse order of positives:
x[order(x[,1], -x[,2]),]
3. If the values aren't known to be numeric, convert them to numeric
before using them as sort keys:
x[order(xtfrm(x[,1]), -xtfrm(x[,2])),]
In any of these, watch out for NA handling. My methods all put NA
values last, but that might not be what you want.
Duncan Murdoch
>
> 2 .5
> 1 .3
> 1 .5
> 3 .2
>
> Goes to:
>
> 1 .5
> 1 .3
> 2 .5
> 3 .2
>
> This is easy to do in spreadsheet programs but I can't seem to work
> out how to do it in R and haven't been able to find a solution
> anywhere.
>
>
> Thanks!
>
> -Paul.
>
------------------------------
Message: 73
Date: Wed, 27 May 2009 13:54:37 -0300
From: Mike Lawrence <Mike.Lawrence@dal.ca>
Subject: Re: [R] Labeling barplot bars by multiple factors
To: Thomas Levine <thomas.levine@gmail.com>
Cc: r-help@r-project.org
Message-ID:
<37fda5350905270954w42d99ee1nd11f211eff4c4ddd@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
You can get something close with ggplot2:
library(ggplot2)
my_data = expand.grid(
A = factor(c('A1','A2'))
, B = factor(c('B1','B2'))
, C = factor(c('C1','C2'))
)
my_data$DV = rnorm(8,mean=10,sd=1)
p = ggplot()
p = p + layer(
geom = 'bar'
, stat = 'identity'
, data = my_data
, mapping = aes(
x = C
, y = DV
, fill = B
)
, position = 'dodge'
)
p = p + facet_grid(
A ~ .
)
p = p + coord_flip()
print(p)
On Wed, May 27, 2009 at 1:01 PM, Thomas Levine <thomas.levine@gmail.com> wrote:
> I want to plot quantitative data as a function of three two-level factors.
> How do I group the bars on a barplot by level through labeling and spacing?
> Here <http://www.thomaslevine.org/sample_multiple-factor_barplot.png> is
> what I'm thinking of. Also, I'm pretty sure that I want a barplot, but
> there may be something better.
>
> Tom
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Mike Lawrence
Graduate Student
Department of Psychology
Dalhousie University
Looking to arrange a meeting? Check my public calendar:
http://tr.im/mikes_public_calendar
~ Certainty is folly... I think. ~
------------------------------
Message: 74
Date: Wed, 27 May 2009 19:24:52 +0200
From: Jose Quesada <quesada@gmail.com>
Subject: [R] alternative to built-in data editor
To: r-help@r-project.org
Message-ID: <4A1D7764.1010700@gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Hi all,
I often have to peek at large data.
While head and tail are convenient, at times I'd like something more
comprehensive. I guess I debug better in a more visual way?
I was wondering if there's a way to override the default data editor.
I could of course dump to a txt file and look at it with an
editor/spreadsheet, but after doing it a few times it gets boring.
Maybe it's time for me to write a function to automate the process?
I'd ask first in case there's an easier way.
Thanks!
-Jose
--
Jose Quesada, PhD.
Max Planck Institute,
Center for Adaptive Behavior and Cognition -ABC-,
Lentzeallee 94, office 224, 14195 Berlin
http://www.josequesada.name/
------------------------------
Message: 75
Date: Wed, 27 May 2009 12:27:04 -0500
From: "Carson, John" <John.Carson@shawgrp.com>
Subject: [R] no internal function "int.unzip" in R 2.9.0 for Windows
To: <r-help@r-project.org>
Message-ID:
<79911294E2D6B14087F5AC36345A50BB02106806@entbtrxmb01.shawgrp.com>
Content-Type: text/plain
> library(R2HTML)
Loading required package: R2HTML
Error in .Internal(int.unzip(zipname, NULL, dest)) :
no internal function "int.unzip"
Error : .onLoad failed in 'loadNamespace' for 'R2HTML'
Error: package 'R2HTML' could not be loaded
Version: R 2.9.0 for Windows
[[alternative HTML version deleted]]
------------------------------
Message: 76
Date: Wed, 27 May 2009 13:36:41 -0400 (EDT)
From: Rebecca Sela <rsela@stern.nyu.edu>
Subject: [R] "Error: package/namespace load failed"
To: r-help <r-help@r-project.org>
Message-ID:
<24291478.7471501243445801339.JavaMail.root@calliope.stern.nyu.edu>
Content-Type: text/plain; charset="utf-8"
I am writing my first R package, and I have been getting the following series of
errors when I run R CMD check:
* checking S3 generic/method consistency ... WARNING
Error: package/namespace load failed for 'REEMtree'
Call sequence:
2: stop(gettextf("package/namespace load failed for '%s'",
libraryPkgName(package)),
call. = FALSE, domain = NA)
1: library(package, lib.loc = lib.loc, character.only = TRUE, verbose = FALSE)
Execution halted
See section 'Generic functions and methods' of the 'Writing R
Extensions'
manual.
* checking replacement functions ... WARNING
Error: package/namespace load failed for 'REEMtree'
Call sequence:
2: stop(gettextf("package/namespace load failed for '%s'",
libraryPkgName(package)),
call. = FALSE, domain = NA)
1: library(package, lib.loc = lib.loc, character.only = TRUE, verbose = FALSE)
Execution halted
In R, the argument of a replacement function which corresponds to the right
hand side must be named 'value'.
* checking foreign function calls ... WARNING
Error: package/namespace load failed for 'REEMtree'
Call sequence:
2: stop(gettextf("package/namespace load failed for '%s'",
libraryPkgName(package)),
call. = FALSE, domain = NA)
1: library(package, lib.loc = lib.loc, character.only = TRUE, verbose = FALSE)
Execution halted
See section 'System and foreign language interfaces' of the 'Writing
R
Extensions' manual.
* checking Rd files ... OK
* checking for missing documentation entries ... ERROR
Error: package/namespace load failed for 'REEMtree'
(Everything is OK up to this point.)
Looking around online, I have found references to this error when there is
compiled C or Fortran code, but I have none of that in my code. I imagine this
is a simple problem (perhaps with my NAMESPACE file), but I don't know what
it is. (The text of the NAMESPACE file is at the bottom of this e-mail.)
[[elided Yahoo spam]]
Rebecca
NAMESPACE file:
useDynLib(REEMtree)
export(AutoCorrelationLRtest, FixedEffectsTree, RandomEffectsTree,
LMEpredict, PredictionTest, RandomEffectsTree, RMSE, simpleREEMdata,
REEMtree, FEEMtree)
import(nlme)
import(rpart)
S3method(is,REEMtree)
S3method(logLik,REEMtree)
S3method(plot,REEMtree)
S3method(predict,REEMtree)
S3method(print, REEMtree)
S3method(ranef,REEMtree)
S3method(tree,REEMtree)
S3method(is,FEEMtree)
S3method(logLik,FEEMtree)
S3method(plot,FEEMtree)
S3method(predict,FEEMtree)
S3method(print, FEEMtree)
S3method(tree,FEEMtree)
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: NAMESPACE
URL:
<https://stat.ethz.ch/pipermail/r-help/attachments/20090527/6cb0ce2f/attachment-0001.pl>
------------------------------
Message: 77
Date: Wed, 27 May 2009 13:41:54 -0400 (EDT)
From: "Jack Siegrist" <jacksie@eden.rutgers.edu>
Subject: [R] contour lines on persp plot
To: r-help@r-project.org
Message-ID:
<3cf9f70cc9df289f6cfdb868581fd38d.squirrel@webmail.eden.rutgers.edu>
Content-Type: text/plain;charset=iso-8859-1
Hello folks,
I am a beginner R user. I have been able to make a 3D surface plot using
'persp'. The surface is made by a grid of lines emanating perpendicularly
from each of the x and y axes at regular intervals.
I can get rid of that grid by setting 'border=NA'.
Can anyone suggest some ways to replace the grid with contour lines, to
create a 3-dimensional contour map?
Thanks for any help.
Here is an example of what I have so far:
#to create a perspective plot; plots funct. across all combos of x and y
fn <- function(x,y){sin(x)+2*y}   #this looks like a corrugated tin roof
x <- seq(from=1,to=100,by=1)      #generates a list of x values to sample
y <- seq(from=1,to=100,by=1)      #generates a list of y values to sample
z <- outer(x,y,FUN=fn)            #applies the funct. across the combos of x and y
persp(z,col='lightgray',shade=.5,border=NA) #plots without gridlines
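One approach that might work (a sketch building on the code above, not a
tested answer): draw the contours with contourLines() and project them onto
the surface with trans3d(), using the transformation matrix that persp()
returns invisibly:

pmat <- persp(x, y, z, col='lightgray', shade=.5, border=NA)  # keep the 3D transform
clines <- contourLines(x, y, z)                               # 2D contour segments
for (ct in clines)
  lines(trans3d(ct$x, ct$y, ct$level, pmat))                  # lift each contour onto the surface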
------------------------------
Message: 78
Date: Wed, 27 May 2009 10:46:31 -0700 (PDT)
Subject: Re: [R] Neural Network resource
To: R Help <r-help@r-project.org>
Cc: mxkuhn@gmail.com
Message-ID: <822764.22697.qm@web65414.mail.ac4.yahoo.com>
Content-Type: text/plain; charset=iso-8859-1
Here is the code that I had used:
#########################################
library(AMORE)   # newff(), train() and sim() below come from the AMORE package
## Read in the raw data
fitness <- c(44,89.47,44.609,11.37,62,178,182,
40,75.07,45.313,10.07,62,185,185,
44,85.84,54.297,8.65,45,156,168,
42,68.15,59.571,8.17,40,166,172,
38,89.02,49.874,9.22,55,178,180,
47,77.45,44.811,11.63,58,176,176,
40,75.98,45.681,11.95,70,176,180,
43,81.19,49.091,10.85,64,162,170,
44,81.42,39.442,13.08,63,174,176,
38,81.87,60.055,8.63,48,170,186,
44,73.03,50.541,10.13,45,168,168,
45,87.66,37.388,14.03,56,186,192,
45,66.45,44.754,11.12,51,176,176,
47,79.15,47.273,10.6,47,162,164,
54,83.12,51.855,10.33,50,166,170,
49,81.42,49.156,8.95,44,180,185,
51,69.63,40.836,10.95,57,168,172,
51,77.91,46.672,10,48,162,168,
48,91.63,46.774,10.25,48,162,164,
49,73.37,50.388,10.08,67,168,168,
57,73.37,39.407,12.63,58,174,176,
54,79.38,46.08,11.17,62,156,165,
52,76.32,45.441,9.63,48,164,166,
50,70.87,54.625,8.92,48,146,155,
51,67.25,45.118,11.08,48,172,172,
54,91.63,39.203,12.88,44,168,172,
51,73.71,45.79,10.47,59,186,188,
57,59.08,50.545,9.93,49,148,155,
49,76.32,48.673,9.4,56,186,188,
48,61.24,47.92,11.5,52,170,176,
52,82.78,47.467,10.5,53,170,172
)
fitness2 <- data.frame(matrix(fitness,nrow = 31, byrow = TRUE))
colnames(fitness2) <-
c("Age","Weight","Oxygen","RunTime","RestPulse","RunPulse","MaxPulse")
attach(fitness2)
## Create the input dataset
indep <- fitness2[,-3]
## Create the neural network structure
net.start <- newff(n.neurons=c(6,6,6,1),
                   learning.rate.global=1e-2,
                   momentum.global=0.5,
                   error.criterium="LMS",
                   Stao=NA, hidden.layer="tansig",
                   output.layer="purelin",
                   method="ADAPTgdwm")
## Train the net
result <- train(net.start, indep, Oxygen, error.criterium="LMS",
                report=TRUE, show.step=100, n.shows=5 )
## Predict
pred <- sim(result$net, indep)
pred
###########################################
Here I am trying to predict Oxygen levels using the 6 independent variables. But
whenever I try to run a prediction I am getting constant values throughout
(in the above example, the values of pred).
Thanks & Regards,
Indrajit
----- Original Message ----
From: Max Kuhn <mxkuhn@gmail.com>
Cc: markleeds@verizon.net; R Help <r-help@r-project.org>
Sent: Wednesday, May 27, 2009 9:19:47 PM
Subject: Re: [R] Neural Network resource
> I fed this data into a Neural network (3 hidden layers with 6 neurons in
each layer) and trained the network. When I passed the input dataset and tried
to get the predictions, all the predicted values were identical! This confused
me a bit and was wondering whether my understanding of the Neural Network was
wrong.
>
> Have you ever faced anything like it?
You should really provide code for us to help. I would initially
suspect that you didn't use a linear function between your hidden
units and the outcomes.
Also, using 3 hidden layers and 6 units per layer is a bit much for
your data set (30-40 samples). You will probably end up overfitting.
--
Max
------------------------------
Message: 79
Date: Wed, 27 May 2009 20:03:30 +0200
From: Peter Dalgaard <p.dalgaard@biostat.ku.dk>
Subject: Re: [R] alternative to built-in data editor
To: Jose Quesada <quesada@gmail.com>
Cc: r-help@r-project.org
Message-ID: <4A1D8072.6020001@biostat.ku.dk>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Jose Quesada wrote:
> Hi all,
>
> I often have to peek at large data.
> While head and tail are convenient, at times I'd like something more
> comprehensive.
> I guess I debug better in a more visual way?
> I was wondering if there's a way to override the default data editor.
> I could of course dump to a txt file, and look at it with an
> editor/spreadsheet, but after doing it a few times, it gets boring.
> Maybe it's time for me to write a function to automate the process?
> I'd ask first in case there's an easier way.
>
> Thanks!
> -Jose
>
There's a tcltk-based data viewer in John Fox' Rcmdr package. Not sure
it does what you want, but check it out.
--
O__ ---- Peter Dalgaard Øster Farimagsgade 5, Entr.B
c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
(*) \(*) -- University of Copenhagen Denmark Ph: (+45) 35327918
~~~~~~~~~~ - (p.dalgaard@biostat.ku.dk) FAX: (+45) 35327907
------------------------------
Message: 80
Date: Wed, 27 May 2009 06:06:09 -0700 (PDT)
From: durden10 <durdantyler@gmx.net>
Subject: [R] r-plot 2nd attempt
To: r-help@r-project.org
Message-ID: <23742121.post@talk.nabble.com>
Content-Type: text/plain; charset=us-ascii
[[elided Yahoo spam]]
I have come down to this:
Win<- c(-0.005276404, 0.081894394, -0.073461539, 0.184371967,
0.133189670, -0.006239016, -0.063616699, 0.196754234, 0.402148743,
0.104408425,
0.036910154, 0.195227863, 0.212743723, 0.280889666, 0.300277802)
Calgary<- c(5, 8, 11, 3, 7, 4, 7, 1, 3, 6, 3, 2, 8, 0, 1)
data_corr <- data.frame(Win,Calgary)
plot(data_corr, type = "p", axes=FALSE, col = "blue", lwd = 2)
#y-axis
axis(2, tcl=0.35,seq(1,11,by=2))
#x-axis
axis(1, tcl=0.35,seq(-0.1,0.5,by=0.1))
box()
abline(lm(data_corr[,2]~data_corr[,1]))
It works for the y-axis, but unfortunately the x-axis is still not working:
it starts at 0 and ends at 0.4, but it should start at -0.1, as specified in
the code (cf. picture):
http://www.nabble.com/file/p23742121/Rplots_2.png
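For what it's worth, a likely fix (an untested sketch): axis() only draws
tick marks, it does not change the plotting region, so the x range has to be
set in plot() itself via xlim:

plot(data_corr, type = "p", axes = FALSE, col = "blue", lwd = 2,
     xlim = c(-0.1, 0.5))
axis(2, tcl = 0.35, seq(1, 11, by = 2))
axis(1, tcl = 0.35, seq(-0.1, 0.5, by = 0.1))
box()
abline(lm(data_corr[,2] ~ data_corr[,1]))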
--
View this message in context:
http://www.nabble.com/r-plot-tp23739356p23742121.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 81
Date: Wed, 27 May 2009 19:13:14 +0200
From: "Roland Chariatte" <info@rcline.ch>
Subject: [R] R-beta: Re:Stats Seminar 18/02/98
To: <r-help@stat.math.ethz.ch>
Message-ID: <E2DFF7E1FBC744BDB0D264859834A2E5@portable>
Content-Type: text/plain
Hello,
I am looking for an old friend named Marylin Gabriel, originally from the
Seychelles, whom I lost touch with about 20 years ago.
If this e-mail address is yours, you will remember me very well; I would
very much like to see you again, as I have wonderful memories of you.
Perhaps see you soon.
I am still the same Roland Chariatte from Delémont that you knew in 1986
or 87.
RCline
Roland CHARIATTE
rue de la Croix 27
2822 COURROUX
Switzerland
Mobile +41 (0)78 648 19 68
info@rcline.ch
www.rcline.ch
[[alternative HTML version deleted]]
------------------------------
Message: 82
Date: Wed, 27 May 2009 10:27:09 -0700 (PDT)
Subject: [R] problem with cbind
To: r-help@r-project.org
Message-ID: <23747075.post@talk.nabble.com>
Content-Type: text/plain; charset=us-ascii
Hi All,
I have a file with two columns, the first column has the names of the
patients and the second column has the age. I am looking into creating an
output file that looks like
1-10 10-20 etc
Eric Chris
Bob mat
Andrew
Suzan
Each column would hold the names of the patients in the age category shown
in the header. For example, in the output the first column has the names of
the patients with ages between 1 and 10.
The problem I am having is that I cannot use cbind since the lengths of the
vectors are different. Is there a way to create such a file?
Thanks for your help
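One way to do it (a sketch; the names below are made up to mirror the
example): pad each age-group vector with NA up to a common length, then
write the result:

groups <- list("1-10"  = c("Eric", "Bob", "Andrew", "Suzan"),
               "10-20" = c("Chris", "mat"))
n <- max(sapply(groups, length))                     # length of the longest group
out <- sapply(groups, function(x) c(x, rep(NA, n - length(x))))
write.table(out, "patients_by_age.txt", sep = "\t",
            row.names = FALSE, na = "", quote = FALSE)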
--
View this message in context:
http://www.nabble.com/problem-with-cbind-tp23747075p23747075.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 83
Date: Wed, 27 May 2009 11:04:12 -0700
From: Michael Menke <menke@email.arizona.edu>
Subject: [R] interpretation of the likelihood ratio test under *R* GLM
To: r-help@r-project.org
Message-ID: <4A1D809C.6010604@email.arizona.edu>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Can anyone tell me how the LRT is to be interpreted when using R's glm
for logistic regression with multiple predictors?
family=binomial("logit"))
drop1(Confidence.glm, test="Chisq")
The summary z-table suggests a direction of the effect, and notably the
large LRT statistics are the significant ones. I am used to thinking of
extremely small LRTs as significant (negative natural logarithms of
LRTs). I must assume that the LRT in *R* is alternative hypothesis over
null hypothesis, rather than the convention I learned of
null/alternative, where a small number (negative logLR) represents
strong evidence and zero/zed evidence is represented by an LR of 1
(logLR = 0). Comments? Am I interpreting this correctly?
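A small check along these lines (a sketch using the built-in mtcars data,
not the poster's model) shows what drop1() reports: the LRT column is
2 * (logLik of the full model minus logLik of the model without that term),
so the large values are the significant ones:

full    <- glm(am ~ hp + wt, data = mtcars, family = binomial("logit"))
reduced <- update(full, . ~ . - wt)
2 * (as.numeric(logLik(full)) - as.numeric(logLik(reduced)))  # LRT statistic for dropping wt
drop1(full, test = "Chisq")                                   # same value appears in the LRT column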
J. Michael Menke
University of Arizona
------------------------------
Message: 84
Date: Wed, 27 May 2009 11:09:30 -0700
From: Michael Menke <menke@email.arizona.edu>
Subject: [R] how do I get to be a member?
To: r-help@r-project.org
Message-ID: <4A1D81DA.9070104@email.arizona.edu>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Information, please.
------------------------------
Message: 85
Date: Wed, 27 May 2009 15:16:59 +0200
From: Christian <ozric@web.de>
Subject: [R] RWeka weka.core.SerializationHelper.write
To: r-help@r-project.org
Message-ID: <4A1D3D4B.8040306@web.de>
Content-Type: text/plain; charset=ISO-8859-15; format=flowed
Hi,
is it possible to use the writer method from the
weka.core.SerializationHelper class in R?
What could be wrong in my attempt?
many thanks
Christian
.jmethods("weka/core/SerializationHelper")
[2] "public static void
weka.core.SerializationHelper.write(java.lang.String,java.lang.Object)
throws java.lang.Exception"
> NB <-
make_Weka_classifier("weka/classifiers/bayes/NaiveBayes")
> data("HouseVotes84", package = "mlbench")
> model <- NB(Class ~ ., data = HouseVotes84)
>
>
.jcall("weka/core/SerializationHelper",returnSig="V","write","nb.model",model)
Fehler in .jcall("weka/core/SerializationHelper", returnSig =
"V",
"write", :
method write with signature (Ljava/lang/String;)V not found
>
.jcall("weka/core/SerializationHelper",returnSig="V","write","nb.model",model$classifier)
Fehler in .jcall("weka/core/SerializationHelper", returnSig =
"V",
"write", :
method write with signature
(Ljava/lang/String;Lweka/classifiers/bayes/NaiveBayes;)V not found
------------------------------
Message: 86
Date: Wed, 27 May 2009 19:26:44 +0200
From: Christian <ozric@web.de>
Subject: [R] RWeka weka.core.SerializationHelper.write
To: r-help@r-project.org
Message-ID: <4A1D77D4.6000109@web.de>
Content-Type: text/plain; charset=ISO-8859-15; format=flowed
Hi,
is it possible to use the writer method from the
weka.core.SerializationHelper class in R?
If yes, what could be wrong in my attempt?
many thanks
Christian
.jmethods("weka/core/SerializationHelper")
[2] "public static void
weka.core.SerializationHelper.write(java.lang.String,java.lang.Object)
throws java.lang.Exception" > NB <-
make_Weka_classifier("weka/classifiers/bayes/NaiveBayes")
> data("HouseVotes84", package = "mlbench")
> model <- NB(Class ~ ., data = HouseVotes84)
>
>
.jcall("weka/core/SerializationHelper",returnSig="V","write","nb.model",model)
Fehler in .jcall("weka/core/SerializationHelper", returnSig =
"V",
"write", :
method write with signature (Ljava/lang/String;)V not found
>
.jcall("weka/core/SerializationHelper",returnSig="V","write","nb.model",model$classifier)
Fehler in .jcall("weka/core/SerializationHelper", returnSig =
"V",
"write", :
method write with signature
(Ljava/lang/String;Lweka/classifiers/bayes/NaiveBayes;)V not found
------------------------------
Message: 87
Date: Wed, 27 May 2009 09:16:41 -0700 (PDT)
From: retama <retama1745@gmail.com>
Subject: Re: [R] Loop avoidance and logical subscripts
To: r-help@r-project.org
Message-ID: <23745814.post@talk.nabble.com>
Content-Type: text/plain; charset=us-ascii
Thank you! The script is now adapted to Biostrings and it is really fast! For
example, it does:
alph_sequence <- alphabetFrequency(data$sequence, baseOnly=TRUE)
data$GCsequence <- rowSums(alph_sequence[,c("G", "C")])
/
rowSums(alph_sequence)
in the G+C computation. It also works amazingly fast in substring extraction
(substring), reverse complement (reverseComplement sequences), palindromes
search (findComplementedPalindromes) and so on.
Now, my bottleneck is conventional string handling, because I have not found
yet how to convert DNAStringSets to vector of chars. Now, I'm doing it by:
dna <- vector()
for (i in 1:length(dnaset)) {
c(dna, toString(data$dnaset[[i]])) -> dna
}
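A vectorised alternative that may avoid the loop (a sketch; it assumes
Biostrings provides an as.character() method for DNAStringSet, which I
believe it does):

library(Biostrings)
dna <- as.character(data$dnaset)  # whole set to a plain character vector in one call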
Regards,
Retama
--
View this message in context:
http://www.nabble.com/Loop-avoidance-and-logical-subscripts-tp23652935p23745814.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 88
Date: Wed, 27 May 2009 08:47:00 -0700 (PDT)
From: Tony Breyal <tony.breyal@googlemail.com>
Subject: Re: [R] Neural Network resource
To: r-help@r-project.org
Message-ID:
<0ecde604-4408-4d2a-84a3-f9a5ebb58074@o30g2000vbc.googlegroups.com>
Content-Type: text/plain; charset=ISO-8859-1
I haven't used the AMORE package before, but it sounds like you
haven't set linear output units or something. Here's an example using
the nnet package of what you're doing i think:
### R START ###
> # set random seed to a cool number
> set.seed(42)
>
> # set up data
> x1<-rnorm(100); x2<-rnorm(100); x3<-rnorm(100)
> x4<-rnorm(100); x5<-rnorm(100); x6<-rnorm(100)
> b1<-1; b2<-2; b3<-3
> b4<-4; b5<-5; b6<-6
> y<-b1*x1 + b2*x2 + b3*x3 + b4*x4 + b5*x5 + b6*x6
> my.df <- data.frame(cbind(y, x1, x2, x3, x4, x5, x6))
>
> # 1. linear regression
> my.lm <- lm(y~., data=my.df)
>
> # look at correlation
> my.lm.predictions<-predict(my.lm)
> cor(my.df["y"], my.lm.predictions)
  [,1]
y    1
>
> # 2. nnet
> library(nnet)
> my.nnet<-nnet(y~., data=my.df, size=3, linout=TRUE, skip=TRUE,
+               trace=FALSE, maxit=1000)
> my.nnet.predictions<-predict(my.nnet, my.df)
> # look at correlation
> cor(my.df["y"], my.nnet.predictions)
  [,1]
y    1
>
> # to look at the values side by side
> cbind(my.df["y"], my.nnet.predictions)
             y my.nnet.predictions
1  10.60102566         10.59958907
2   6.70939465          6.70956529
3   2.28934732          2.28928930
4  14.51012458         14.51043732
5 -12.85845371        -12.85849345
[..etc]
### R END ###
Hope that helps a wee bit mate,
Tony Breyal
> You are right there is a pdf file which describes the function. But let me
> tell you where I am coming from.
>
> Just to test if a neural network will work better than a ordinary least
square regression, I created a dataset with one dependent variable and 6 other
independent variables. Now I had deliberately created the dataset in such manner
that we have an excellent regression model. Eg: Y = b0 + b1*x1 + b2*x2 + b3*x3..
+ b6*x6 + e
> where e is normal random variable. Naturally any statistical analysis
system running regression would easily predict the values of b1, b2, b3, ..., b6
with around 30-40 observations.
>
> I fed this data into a Neural network (3 hidden layers with 6 neurons in
each layer) and trained the network. When I passed the input dataset and tried
to get the predictions, all the predicted values were identical! This confused
me a bit and was wondering whether my understanding of the Neural Network was
wrong.
>
> Have you ever faced anything like it?
>
> Regards,
> Indrajit
>
> ________________________________
> From: "markle...@verizon.net" <markle...@verizon.net>
>
> Sent: Wednesday, May 27, 2009 7:54:59 PM
> Subject: Re: [R] Neural Network resource
>
> Hi: I've never used that package but most likely there is an AMORE
vignette that shows examples and describes the functions.
> it should be on the same cran? web page where the package resides, in pdf
form.
>
> Hi All,
>
> I am trying to learn Neural Networks. I found that R has packages which can
help build Neural Nets - the popular one being the AMORE package. Is there any book
/ resource available which covers this subject using the AMORE package?
>
> Any help will be much appreciated.
>
> Thanks,
> Indrajit
>
> ______________________________________________
> R-h...@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-h...@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 89
Date: Wed, 27 May 2009 14:14:58 -0400 (EDT)
From: "Jack Siegrist" <jacksie@eden.rutgers.edu>
Subject: [R] invert axis persp plot
To: R-help@r-project.org
Message-ID:
<021a2d210960933eefcfd9c6f09180d8.squirrel@webmail.eden.rutgers.edu>
Content-Type: text/plain;charset=iso-8859-1
Hello folks,
Is there a way to invert the z axis in a 'persp' plot?
I tried using 'zlim=rev(range(z))', which would work with 'plot' but does not
work in this case.
Thank you for your help.
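(Editor's sketch, not an answer from the thread: since a reversed zlim is
reportedly ignored here, one common workaround is to plot the negated surface;
the data below are illustrative.)
x <- y <- seq(-3, 3, length.out = 30)
z <- outer(x, y, function(a, b) exp(-(a^2 + b^2)))
persp(x, y, -z, zlab = "z (inverted)")   # flips the surface top-to-bottom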
------------------------------
Message: 90
Date: Wed, 27 May 2009 20:19:25 +0200
From: Jose Quesada <quesada@gmail.com>
Subject: [R] a simple trick to get autoclose parenthesis on windows
To: r-help@r-project.org
Message-ID: <4A1D842D.1020606@gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Hi,
This is a simple trick to get autoclose parenthesis on windows.
If you like how StatET autocloses parens, but like to use the lighter
Vanilla R, you can use autohotkey (http://autohotkey.net) to provide
this functionality.
Simply put the below code in a text file, rename extension as .ahk and
doubleclick on it to execute.
------------------ code starts here 8< ------------------
; Vanilla R
; ============================================================
#IfWinActive, ahk_class Rgui
; autoclose parens
(::Send (){left}
"::Send ""{left}
return
------------------ code ends here 8< ------------------
Silly, but I find it very convenient.
Best,
-Jose
--
Jose Quesada, PhD.
Max Planck Institute,
Center for Adaptive Behavior and Cognition -ABC-,
Lentzeallee 94, office 224, 14195 Berlin
http://www.josequesada.name/
------------------------------
Message: 91
Date: Wed, 27 May 2009 20:25:42 +0200
From: Romain Francois <romain.francois@dbmail.com>
Subject: Re: [R] no internal function "int.unzip" in R 2.9.0 for
Windows
To: "Carson, John" <John.Carson@shawgrp.com>
Cc: r-help@r-project.org, EricLecoutre@gmail.com, Fernando Henrique
Ferraz Pereira da Rosa <mentus@gmail.com>
Message-ID: <4A1D85A6.5090808@dbmail.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Hi,
I'll try to fix this soon. Could you log a bug request here:
http://r-forge.r-project.org/tracker/?atid=1643&group_id=405&func=browse
Regards,
Romain
Carson, John wrote:
>> library(R2HTML)
>>
>
> Loading required package: R2HTML
>
> Error in .Internal(int.unzip(zipname, NULL, dest)) :
>
> no internal function "int.unzip"
>
> Error : .onLoad failed in 'loadNamespace' for 'R2HTML'
>
> Error: package 'R2HTML' could not be loaded
>
> Version: R 2.9.0 for Windows
>
--
Romain Francois
Independent R Consultant
+33(0) 6 28 91 30 30
http://romainfrancois.blog.free.fr
------------------------------
Message: 92
Date: Wed, 27 May 2009 14:31:42 -0400
From: Wade Wall <wade.wall@gmail.com>
Subject: [R] reduce size of plot inside window and place legend beside
plot
To: r-help@stat.math.ethz.ch
Message-ID:
<e23082be0905271131k628ae086p78ed4e49df764213@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Hi all,
I have been trying to figure out how to place a legend beside a plot,
rather than within the plot area, but to no avail. I am assuming that I need
to resize the plot relative to the window.
If anyone has any guidance on how to do this, I would greatly appreciate it.
Wade
------------------------------
Message: 93
Date: Wed, 27 May 2009 14:31:23 -0400
From: stephen sefick <ssefick@gmail.com>
Subject: [R] ggplot2 adding vertical line at a certain date
To: "r-help@r-project.org" <r-help@r-project.org>
Message-ID:
<c502a9e10905271131y7741b1bfq4fdb0911f013e0b8@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
library(ggplot2)
melt.updn <- (structure(list(date = structure(c(11808, 11869, 11961, 11992,
12084, 12173, 12265, 12418, 12600, 12631, 12753, 12996, 13057,
13149, 11808, 11869, 11961, 11992, 12084, 12173, 12265, 12418,
12600, 12631, 12753, 12996, 13057, 13149), class = "Date"), variable =
structure(c(1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L,
2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L), .Label = c("unrestored",
"restored"), class = "factor"), value = c(1.1080259671261,
0.732188576856918,
0.410334408061265, 0.458980396410056, 0.429867902470711, 0.83126337241925,
0.602008712602784, 0.818751283264408, 1.12606382402475, 0.246174719479079,
0.941043753226865, 0.986511619794787, 0.291074883642735, 0.346361775752625,
1.36209038621623, 0.878561166753624, 0.525156715576168, 0.80305564765846,
1.08084449441812, 1.24906568558731, 0.970954515841768, 0.936838439269239,
1.26970090246036, 0.337831520417547, 0.909204325710795, 0.951009811036613,
0.290735620653709, 0.426683515714219)), .Names = c("date",
"variable",
"value"), row.names = c(NA, -28L), class = "data.frame"))
qplot(date, value, data=melt.updn, shape=variable)+geom_smooth()
#I would like to add a line at November 1, 2002
#thanks for the help
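(Editor's sketch, not a reply from the thread: ggplot2's geom_vline() can add
the vertical line; depending on the ggplot2 version the Date may need to be
converted to its numeric value, as below.)
qplot(date, value, data = melt.updn, shape = variable) +
  geom_smooth() +
  geom_vline(xintercept = as.numeric(as.Date("2002-11-01")), linetype = 2)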
--
Stephen Sefick
Let's not spend our time and resources thinking about things that are
so little or so large that all they really do for us is puff us up and
make us feel like gods. We are mammals, and have not exhausted the
annoying little problems of being mammals.
-K. Mullis
------------------------------
Message: 94
Date: Wed, 27 May 2009 20:39:51 +0200
From: baptiste auguie <ba208@exeter.ac.uk>
Subject: Re: [R] reduce size of plot inside window and place legend
beside plot
To: Wade Wall <wade.wall@gmail.com>
Cc: "r-help@stat.math.ethz.ch" <r-help@stat.math.ethz.ch>
Message-ID: <4DEB650D-C277-499C-B564-E7340225EFCE@exeter.ac.uk>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
Try this and see if it helps (if not, please help improving it),
http://wiki.r-project.org/rwiki/doku.php?id=tips:graphics-misc:legendoutside
HTH,
baptiste
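(Editor's sketch of one common base-graphics approach, independent of the wiki
recipe above: widen the right margin and let the legend draw outside the plot
region; the data are illustrative.)
op <- par(mar = c(5, 4, 4, 8), xpd = TRUE)   # extra space on the right
plot(1:10, (1:10)^2, pch = 1, col = "blue")
points(1:10, 10 * (1:10), pch = 2, col = "red")
legend("topright", inset = c(-0.35, 0), legend = c("squared", "linear"),
       pch = 1:2, col = c("blue", "red"), bty = "n")
par(op)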
On 27 May 2009, at 20:31, Wade Wall wrote:
> Hi all,
>
> I have been trying to figure out how to place a legend beside a plot,
> rather than within the space to no avail. I am assuming that I need
> to resize the plot relative to the window.
>
> If anyone has any guidance on how to do this, I would greatly
> appreciate it.
>
> Wade
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 95
Date: Wed, 27 May 2009 11:46:12 -0700
From: "Arthur Burke" <burkea@nwrel.org>
Subject: [R] Factor level with no cases shows up in a plot
To: <r-help@r-project.org>
Message-ID:
<CF736B65E2E03F42A197473E43E074A20529DDCA@w23-7928.nwrel.org>
Content-Type: text/plain
Consider this data structure (df1) ...
Group Year PctProf FullYr
1 Never RF 2004 87 88
2 Cohort 1 2004 83 84
3 Cohort 2 2004 84 86
4 Cohort 3 2004 87 87
5 Cohort 4 2004 73 74
6 Never RF 2005 85 86
7 Cohort 1 2005 81 82
8 Cohort 2 2005 81 81
9 Cohort 3 2005 78 79
10 Cohort 4 2005 72 74
11 Never RF 2006 83 84
12 Cohort 1 2006 78 78
13 Cohort 2 2006 78 79
14 Cohort 3 2006 70 71
15 Cohort 4 2006 80 82
16 Never RF 2007 82 83
17 Cohort 1 2007 75 76
18 Cohort 2 2007 73 74
19 Cohort 3 2007 79 80
20 Cohort 4 2007 75 77
21 Never RF 2008 83 84
22 Cohort 1 2008 81 81
23 Cohort 2 2008 81 81
24 Cohort 3 2008 76 77
25 Cohort 4 2008 62 63
... which I subsetted to omit all cases for Cohort 4 and some cases for
Cohorts 2 & 3 ...
df2 <- subset(df1,
              ((Group == "Cohort 1" | Group == "Never RF") |
               (Group == "Cohort 2" & Year != 2004) |
               (Group == "Cohort 3" & Year > 2006)))
> df2
Group Year PctProf FullYr
1 Never RF 2004 87 88
2 Cohort 1 2004 83 84
6 Never RF 2005 85 86
7 Cohort 1 2005 81 82
8 Cohort 2 2005 81 81
11 Never RF 2006 83 84
12 Cohort 1 2006 78 78
13 Cohort 2 2006 78 79
16 Never RF 2007 82 83
17 Cohort 1 2007 75 76
18 Cohort 2 2007 73 74
19 Cohort 3 2007 79 80
21 Never RF 2008 83 84
22 Cohort 1 2008 81 81
23 Cohort 2 2008 81 81
24 Cohort 3 2008 76 77
Now,
> table(df2$Group)
... properly shows 0 cases for the Group level "cohort 4" ...
Cohort 1 Cohort 2 Cohort 3 Cohort 4 Never RF
5 4 2 0 5
But when I plot ...
coll = c("violet","blue","green","red")
with(df2, interaction.plot(Year, Group, FullYr,
lwd=3,col=coll, bty="l", lty=1, las=1,
ylab="Percent Proficient", xlab="",
main = "Proficiency Trends for RF and Non-RF Schools"))
... I get the four lines that I expected but the legend includes the
Group level "cohort 4" .
How can I get rid of "cohort 4" in Group?
Thanks!
Art
------------------------------------------------------------------
Art Burke
Northwest Regional Educational Laboratory
101 SW Main St, Suite 500
Portland, OR 97204-3213
Phone: 503-275-9592 / 800-547-6339
Fax: 503-275-0450
burkea@nwrel.org
[[alternative HTML version deleted]]
------------------------------
Message: 96
Date: Wed, 27 May 2009 14:53:22 -0400
From: Stavros Macrakis <macrakis@alum.mit.edu>
Subject: [R] R Books listing on R-Project
To: r-help <r-help@r-project.org>
Message-ID:
<8b356f880905271153o69c078f7q821db334d0470a60@mail.gmail.com>
Content-Type: text/plain
I was wondering what the criteria were for including books on the Books
Related to R page <http://www.r-project.org/doc/bib/R-books.html>. (There
is
no maintainer listed on this page.)
In particular, I was wondering why the following two books are not listed:
* Andrew Gelman, Jennifer Hill, *Data Analysis Using Regression and
Multilevel/Hierarchical Models*. (CRAN package 'arm')
* Michael J. Crawley, *The R Book*. (reviewed, rather negatively, in *R News
* *7*:2)
Is the list more or less arbitrary? Does it reflect some editorial judgment
about the value of these books? If so, it might be more useful to include
the books, but with critical reviews. It doesn't seem to be a matter of
up-to-dateness, because 38/87 of the listed books were published in a more
recent year than Gelman or Crawley.
The list is currently in reverse chronological order. I wonder if it would
be useful to group the entries thematically -- I'd be happy to help on that
project.
-s
[[alternative HTML version deleted]]
------------------------------
Message: 97
Date: Wed, 27 May 2009 20:54:01 +0000
From: krzysztof.sakrejda@gmail.com
Subject: Re: [R] Factor level with no cases shows up in a plot
To: "Arthur Burke" <burkea@nwrel.org>, r-help@r-project.org
Message-ID:
<1933834465-1243450505-cardhu_decombobulator_blackberry.rim.net-1668483860-@bxe1124.bisx.prod.on.blackberry>
Content-Type: text/plain
If you have a factor with empty (unused) levels, you can get rid of them by
rerunning the vector through the factor function:
vec <- factor(vec)
Don't know if this can be done at plotting time...
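(Editor's sketch applied to the df2 constructed in the quoted message below;
later R versions also provide droplevels() for the same purpose.)
df2$Group <- factor(df2$Group)   # re-create the factor; the empty "Cohort 4" level is dropped
with(df2, interaction.plot(Year, Group, FullYr, lwd = 3, lty = 1, las = 1,
                           col = c("violet", "blue", "green", "red"),
                           ylab = "Percent Proficient", xlab = ""))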
Sent via BlackBerry by AT&T
-----Original Message-----
From: "Arthur Burke" <burkea@nwrel.org>
Date: Wed, 27 May 2009 11:46:12
To: <r-help@r-project.org>
Subject: [R] Factor level with no cases shows up in a plot
Consider this data structure (df1) ...
Group Year PctProf FullYr
1 Never RF 2004 87 88
2 Cohort 1 2004 83 84
3 Cohort 2 2004 84 86
4 Cohort 3 2004 87 87
5 Cohort 4 2004 73 74
6 Never RF 2005 85 86
7 Cohort 1 2005 81 82
8 Cohort 2 2005 81 81
9 Cohort 3 2005 78 79
10 Cohort 4 2005 72 74
11 Never RF 2006 83 84
12 Cohort 1 2006 78 78
13 Cohort 2 2006 78 79
14 Cohort 3 2006 70 71
15 Cohort 4 2006 80 82
16 Never RF 2007 82 83
17 Cohort 1 2007 75 76
18 Cohort 2 2007 73 74
19 Cohort 3 2007 79 80
20 Cohort 4 2007 75 77
21 Never RF 2008 83 84
22 Cohort 1 2008 81 81
23 Cohort 2 2008 81 81
24 Cohort 3 2008 76 77
25 Cohort 4 2008 62 63
... which I subsetted to omit all cases for Cohort 4 and some cases for
Cohorts 2 & 3 ...
df2 <- subset(df1,
((Group == "Cohort 1" | Group == "Never RF") | (Group ==
"Cohort 2"
& Year != 2004) |
(Group == "Cohort 3" & Year > 2006)))
> df2
Group Year PctProf FullYr
1 Never RF 2004 87 88
2 Cohort 1 2004 83 84
6 Never RF 2005 85 86
7 Cohort 1 2005 81 82
8 Cohort 2 2005 81 81
11 Never RF 2006 83 84
12 Cohort 1 2006 78 78
13 Cohort 2 2006 78 79
16 Never RF 2007 82 83
17 Cohort 1 2007 75 76
18 Cohort 2 2007 73 74
19 Cohort 3 2007 79 80
21 Never RF 2008 83 84
22 Cohort 1 2008 81 81
23 Cohort 2 2008 81 81
24 Cohort 3 2008 76 77
Now,
> table(df2$Group)
... properly shows 0 cases for the Group level "cohort 4" ...
Cohort 1 Cohort 2 Cohort 3 Cohort 4 Never RF
5 4 2 0 5
But when I plot ...
coll = c("violet","blue","green","red")
with(df2, interaction.plot(Year, Group, FullYr,
lwd=3,col=coll, bty="l", lty=1, las=1,
ylab="Percent Proficient", xlab="",
main = "Proficiency Trends for RF and Non-RF Schools"))
... I get the four lines that I expected but the legend includes the
Group level "cohort 4" .
How can I get rid of "cohort 4" in Group?
Thanks!
Art
------------------------------------------------------------------
Art Burke
Northwest Regional Educational Laboratory
101 SW Main St, Suite 500
Portland, OR 97204-3213
Phone: 503-275-9592 / 800-547-6339
Fax: 503-275-0450
burkea@nwrel.org
[[alternative HTML version deleted]]
______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 98
Date: Wed, 27 May 2009 11:54:28 -0700
From: "Farley, Robert" <FarleyR@metro.net>
Subject: Re: [R] Still can't find missing data
To: R-help <r-help@r-project.org>
Message-ID:
<8452CFD6AC58614FA9F87C8ADC2E418904CED8C739@exchange01.lacmta.net>
Content-Type: text/plain; charset="us-ascii"
I can't get the syntax that will allow me to show NA values (rows) in the
xtabs.
> # unfortunately I don't see how to get that to run
> XTTable <- xtabs(wt_annual ~ Mode_orig_only + connector, exclude=NULL,
na.action(na.pass), LAWAData)
Error in eval(expr, envir, enclos) : object "wt_annual" not found
> XTTable <- xtabs(wt_annual ~ Mode_orig_only + connector, exclude=NULL,
na.action(na.pass), drop.unused.levels = FALSE, LAWAData)
Error in eval(expr, envir, enclos) : object "wt_annual" not found
> XTTable <- xtabs(wt_annual ~ Mode_orig_only + connector,
na.action(na.pass), LAWAData)
Error in eval(expr, envir, enclos) : object "wt_annual" not found
> XTTable <- xtabs(wt_annual ~ Mode_orig_only + connector,
na.action(na.pass),drop.unused.levels = FALSE, LAWAData)
Error in eval(expr, envir, enclos) : object "wt_annual" not found
> #
> ####### Those combinations that run w/o error give misleading results
> XTTable <- xtabs(wt_annual ~ Mode_orig_only + connector, exclude=NULL,
LAWAData)
> XTTable
connector
Mode_orig_only OD Passenger
Connector
Walked/Biked 17.814338
0.000000
I flew in from another a place/connected 0.000000
0.000000
Amtrak 49.128982
0.000000
Bus - Chartered bus or van 525.978899
0.000000
Bus - Hotel Courtesy van 913.295370
0.000000
Bus - MTA (Metro) or other public transit bus 114.302764
0.000000
Bus - Scheduled airport bus or van (e.g. Airport bus or Disn 298.151438
0.000000
Bus - Union Station Flyaway 93.088049
0.000000
Bus - Van Nuys Flyaway 233.794168
0.000000
Green line/light rail 20.764539
0.000000
Limousine/town car 424.120506
0.000000
Metrolink 8.054528
0.000000
Motorcycle 6.010790
0.000000
On-call shuttle/van (e.g. Super Shuttle, Prime Time) 1832.748525
0.000000
Car/truck/van - Private 10191.284139
0.000000
Car/truck/van - Rental 2099.771923
0.000000
Taxi 1630.148576
0.000000
..Refused 0.000000
0.000000
> XTTable <- xtabs(wt_annual ~ Mode_orig_only + connector,
drop.unused.levels = FALSE, LAWAData)
> XTTable
connector
Mode_orig_only OD Passenger
Connector
Walked/Biked 17.814338
0.000000
I flew in from another a place/connected 0.000000
0.000000
Amtrak 49.128982
0.000000
Bus - Chartered bus or van 525.978899
0.000000
Bus - Hotel Courtesy van 913.295370
0.000000
Bus - MTA (Metro) or other public transit bus 114.302764
0.000000
Bus - Scheduled airport bus or van (e.g. Airport bus or Disn 298.151438
0.000000
Bus - Union Station Flyaway 93.088049
0.000000
Bus - Van Nuys Flyaway 233.794168
0.000000
Green line/light rail 20.764539
0.000000
Limousine/town car 424.120506
0.000000
Metrolink 8.054528
0.000000
Motorcycle 6.010790
0.000000
On-call shuttle/van (e.g. Super Shuttle, Prime Time) 1832.748525
0.000000
Car/truck/van - Private 10191.284139
0.000000
Car/truck/van - Rental 2099.771923
0.000000
Taxi 1630.148576
0.000000
..Refused 0.000000
0.000000
> XTTable <- xtabs(wt_annual ~ Mode_orig_only + connector, exclude=NULL,
drop.unused.levels = FALSE, LAWAData)
> XTTable
connector
Mode_orig_only OD Passenger
Connector
Walked/Biked 17.814338
0.000000
I flew in from another a place/connected 0.000000
0.000000
Amtrak 49.128982
0.000000
Bus - Chartered bus or van 525.978899
0.000000
Bus - Hotel Courtesy van 913.295370
0.000000
Bus - MTA (Metro) or other public transit bus 114.302764
0.000000
Bus - Scheduled airport bus or van (e.g. Airport bus or Disn 298.151438
0.000000
Bus - Union Station Flyaway 93.088049
0.000000
Bus - Van Nuys Flyaway 233.794168
0.000000
Green line/light rail 20.764539
0.000000
Limousine/town car 424.120506
0.000000
Metrolink 8.054528
0.000000
Motorcycle 6.010790
0.000000
On-call shuttle/van (e.g. Super Shuttle, Prime Time) 1832.748525
0.000000
Car/truck/van - Private 10191.284139
0.000000
Car/truck/van - Rental 2099.771923
0.000000
Taxi 1630.148576
0.000000
..Refused 0.000000
0.000000
>
> version
_
platform i386-pc-mingw32
arch i386
os mingw32
system i386, mingw32
status
major 2
minor 8.1
year 2008
month 12
day 22
svn rev 47281
language R
version.string R version 2.8.1 (2008-12-22)
> sessionInfo() # List loaded packages
R version 2.8.1 (2008-12-22)
i386-pc-mingw32
locale:
LC_COLLATE=English_United States.1252;LC_CTYPE=English_United States.1252;
LC_MONETARY=English_United States.1252;LC_NUMERIC=C;
LC_TIME=English_United States.1252
attached base packages:
[1] graphics grDevices utils datasets stats methods base
other attached packages:
[1] fortunes_1.3-6 prettyR_1.4 survey_3.10-1 foreign_0.8-29
>
Robert Farley
Metro
www.Metro.net
-----Original Message-----
From: r-help-bounces@r-project.org [mailto:r-help-bounces@r-project.org] On
Behalf Of Dieter Menne
Sent: Wednesday, May 27, 2009 02:52
To: r-help@stat.math.ethz.ch
Subject: Re: [R] Still can't find missing data
Farley, Robert <FarleyR <at> metro.net> writes:
>
> What is wrong? I've looked into the na commands and the ?xtabs entry, but I
> haven't found anything that works.
>
I never understood the logic that exclude=NULL needs na.action in addition.
test <- c(1,2,3,1,2,3,NA,NA,1,2,3)
xtabs(~test,exclude=NULL,na.action=na.pass)
Dieter
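(Editor's note, a sketch rather than a reply from the thread: in the failing
calls above, na.action(na.pass) appears to be matched positionally to xtabs()'s
'data' argument, which would explain "object 'wt_annual' not found"; naming the
arguments avoids that.)
XTTable <- xtabs(wt_annual ~ Mode_orig_only + connector,
                 data = LAWAData, exclude = NULL, na.action = na.pass,
                 drop.unused.levels = FALSE)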
______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 99
Date: Wed, 27 May 2009 12:14:34 -0700
From: Martin Morgan <mtmorgan@fhcrc.org>
Subject: Re: [R] "Error: package/namespace load failed"
To: Rebecca Sela <rsela@stern.nyu.edu>
Cc: r-help <r-help@r-project.org>
Message-ID: <4A1D911A.2060905@fhcrc.org>
Content-Type: text/plain; charset=UTF-8
Rebecca Sela wrote:
> I am writing my first R package, and I have been getting the following
series of errors when I run R CMD check:
>
> * checking S3 generic/method consistency ... WARNING
> Error: package/namespace load failed for 'REEMtree'
> Call sequence:
> 2: stop(gettextf("package/namespace load failed for
'%s'", libraryPkgName(package)),
> call. = FALSE, domain = NA)
> 1: library(package, lib.loc = lib.loc, character.only = TRUE, verbose =
FALSE)
> Execution halted
> See section 'Generic functions and methods' of the 'Writing R
Extensions'
> manual.
> * checking replacement functions ... WARNING
> Error: package/namespace load failed for 'REEMtree'
> Call sequence:
> 2: stop(gettextf("package/namespace load failed for
'%s'", libraryPkgName(package)),
> call. = FALSE, domain = NA)
> 1: library(package, lib.loc = lib.loc, character.only = TRUE, verbose =
FALSE)
> Execution halted
> In R, the argument of a replacement function which corresponds to the right
> hand side must be named 'value'.
> * checking foreign function calls ... WARNING
> Error: package/namespace load failed for 'REEMtree'
> Call sequence:
> 2: stop(gettextf("package/namespace load failed for
'%s'", libraryPkgName(package)),
> call. = FALSE, domain = NA)
> 1: library(package, lib.loc = lib.loc, character.only = TRUE, verbose =
FALSE)
> Execution halted
> See section 'System and foreign language interfaces' of the
'Writing R
> Extensions' manual.
> * checking Rd files ... OK
> * checking for missing documentation entries ... ERROR
> Error: package/namespace load failed for 'REEMtree'
>
> (Everything is OK up to this point.)
>
> Looking around online, I have found references to this error when there is
compiled C or Fortran code, but I have none of that in my code. I imagine this
is a simple problem (perhaps with my NAMESPACE file), but I don't know what
it is. (The text of the NAMESPACE file is at the bottom of this e-mail.)
Hi Rebecca -- useDynLib() is to load the dynamic library associated with
C or Fortran code. You say you have none of this, so you don't need
useDynLib(REEMtree).
Martin
[[elided Yahoo spam]]>
> Rebecca
>
> NAMESPACE file:
>
> useDynLib(REEMtree)
>
> export(AutoCorrelationLRtest, FixedEffectsTree, RandomEffectsTree,
> LMEpredict, PredictionTest, RandomEffectsTree, RMSE, simpleREEMdata,
> REEMtree, FEEMtree)
>
> import(nlme)
> import(rpart)
>
> S3method(is,REEMtree)
> S3method(logLik,REEMtree)
> S3method(plot,REEMtree)
> S3method(predict,REEMtree)
> S3method(print, REEMtree)
> S3method(ranef,REEMtree)
> S3method(tree,REEMtree)
> S3method(is,FEEMtree)
> S3method(logLik,FEEMtree)
> S3method(plot,FEEMtree)
> S3method(predict,FEEMtree)
> S3method(print, FEEMtree)
> S3method(tree,FEEMtree)
>
>
>
> ------------------------------------------------------------------------
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 100
Date: Wed, 27 May 2009 13:16:03 -0600
From: Greg Snow <Greg.Snow@imail.org>
Subject: Re: [R] alternative to built-in data editor
To: Jose Quesada <quesada@gmail.com>, "r-help@r-project.org"
<r-help@r-project.org>
Message-ID:
<B37C0A15B8FB3C468B5BC7EBC7DA14CC61D1FC10B2@LP-EXMBVS10.CO.IHC.COM>
Content-Type: text/plain; charset="us-ascii"
Have you tried the View function (note the uppercase V).
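(Editor's sketch: View() opens a read-only, spreadsheet-like data viewer, which
is often enough for peeking at large data; 'mydata' is a placeholder name.)
View(head(mydata, 200))   # inspect the first 200 rows in the data viewer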
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.snow@imail.org
801.408.8111
> -----Original Message-----
> From: r-help-bounces@r-project.org [mailto:r-help-bounces@r-
> project.org] On Behalf Of Jose Quesada
> Sent: Wednesday, May 27, 2009 11:25 AM
> To: r-help@r-project.org
> Subject: [R] alternative to built-in data editor
>
> Hi all,
>
> I often have to peek at large data.
> While head and tail are convenient, at times I'd like something more
> comprehensive.
> I guess I debug better in a more visual way?
> I was wondering if there's a way to override the default data editor.
> I could of course dump to a txt file and look at it with an
> editor/spreadsheet, but after doing it a few times, it gets boring.
> Maybe it's time for me to write a function to automate the process?
> I'd ask first in case there's an easier way.
>
> Thanks!
> -Jose
>
> --
> Jose Quesada, PhD.
> Max Planck Institute,
> Center for Adaptive Behavior and Cognition -ABC-,
> Lentzeallee 94, office 224, 14195 Berlin
> http://www.josequesada.name/
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-
> guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 101
Date: Wed, 27 May 2009 15:18:52 -0400
From: "G. Jay Kerns" <gkerns@ysu.edu>
Subject: Re: [R] R Books listing on R-Project
To: Stavros Macrakis <macrakis@alum.mit.edu>
Cc: r-help <r-help@r-project.org>
Message-ID:
<a695148b0905271218ifd895a2wbc6936363fcdd3a4@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Hello Stavros,
On Wed, May 27, 2009 at 2:53 PM, Stavros Macrakis <macrakis@alum.mit.edu>
wrote:
> I was wondering what the criteria were for including books on the Books
> Related to R page <http://www.r-project.org/doc/bib/R-books.html>.
(There is
> no maintainer listed on this page.)
>
> In particular, I was wondering why the following two books are not listed:
>
> * Andrew Gelman, Jennifer Hill, *Data Analysis Using Regression and
> Multilevel/Hierarchical Models*. (CRAN package 'arm')
>
> * Michael J. Crawley, *The R Book*. (reviewed, rather negatively, in *R
News
> * *7*:2)
>
> Is the list more or less arbitrary? Does it reflect some editorial judgment
> about the value of these books? If so, it might be more useful to include
> the books, but with critical reviews. It doesn't seem to be a matter of
> up-to-dateness, because 38/87 of the listed books were published in a more
> recent year than Gelman or Crawley.
>
> The list is currently in reverse chronological order. I wonder if it would
> be useful to group the entries thematically -- I'd be happy to help on
that
> project.
>
I had a similar idea in 2008 for the R-wiki:
http://tolstoy.newcastle.edu.au/R/e5/devel/08/10/0481.html
There were no responses and I ran out of time to continue working on
it myself. If you are interested in proceeding along these lines then
I have some more ideas and would be willing to help... or, perhaps
you (or somebody else) knows of an even better approach.
Cheers
Jay
***************************************************
G. Jay Kerns, Ph.D.
Associate Professor
Department of Mathematics & Statistics
Youngstown State University
Youngstown, OH 44555-0002 USA
Office: 1035 Cushwa Hall
Phone: (330) 941-3310 Office (voice mail)
-3302 Department
-3170 FAX
E-mail: gkerns@ysu.edu
http://www.cc.ysu.edu/~gjkerns/
------------------------------
Message: 102
Date: Wed, 27 May 2009 16:39:47 -0300
From: Andre Nathan <andre@digirati.com.br>
Subject: [R] Axis label spanning multiple plots
To: r-help@r-project.org
Message-ID: <1243453187.6027.41.camel@homesick>
Content-Type: text/plain
Hello
I need to plot 3 graphs in a single column; the top two plots have the
same title, and I would like it to be written only once, centered
horizontally and spanning the two plots. Something like
t +------------+
| |
i | |
| |
t +------------+
l +------------+
| |
e | |
| |
1 +------------+
t +------------+
i | |
t | |
l | |
e | |
2 +------------+
x-title
Is there a parameter which allows me to do that automatically, or should
I use text() and position the title manually?
Thanks in advance,
Andre
------------------------------
Message: 103
Date: Wed, 27 May 2009 20:35:06 +0100 (BST)
From: (Ted Harding) <Ted.Harding@manchester.ac.uk>
Subject: Re: [R] how do I get to be a member?
To: Michael Menke <menke@email.arizona.edu>
Cc: r-help@r-project.org
Message-ID: <XFMail.090527203506.Ted.Harding@manchester.ac.uk>
Content-Type: text/plain; charset=iso-8859-1
On 27-May-09 18:09:30, Michael Menke wrote:
> Information, please.
By subscribing your email address to the list.
Visit the R-help info page at:
https://stat.ethz.ch/mailman/listinfo/r-help
and read what it has to say in general about R-help. Near the bottom
is a section "Subscribing to R-help" which tells you what to do.
Please note that it is *email addresses*, not persons, which are
subscribed, so (if you have more than one email address) you should
use the one you have subscribed when you want to post to the list.
Hoping this helps,
Ted.
--------------------------------------------------------------------
E-Mail: (Ted Harding) <Ted.Harding@manchester.ac.uk>
Fax-to-email: +44 (0)870 094 0861
Date: 27-May-09 Time: 20:35:02
------------------------------ XFMail ------------------------------
------------------------------
Message: 104
Date: Wed, 27 May 2009 21:35:53 +0100
From: Gavin Simpson <gavin.simpson@ucl.ac.uk>
Subject: Re: [R] r-plot 2nd attempt
To: durden10 <durdantyler@gmx.net>
Cc: r-help@r-project.org
Message-ID: <1243456553.2935.8.camel@localhost.localdomain>
Content-Type: text/plain
On Wed, 2009-05-27 at 06:06 -0700, durden10 wrote:
[[elided Yahoo spam]]>
> I have come down to this:
>
> Win<- c(-0.005276404, 0.081894394, -0.073461539, 0.184371967,
> 0.133189670, -0.006239016, -0.063616699, 0.196754234, 0.402148743,
> 0.104408425,
> 0.036910154, 0.195227863, 0.212743723, 0.280889666, 0.300277802)
> Calgary<- c(5, 8, 11, 3, 7, 4, 7, 1, 3, 6, 3, 2, 8, 0, 1)
>
> data_corr <- data.frame(Win,Calgary)
> plot(data_corr, type = "p", axes=FALSE, col = "blue",
lwd = 2)
>
> #y-axis
> axis(2, tcl=0.35,seq(1,11,by=2))
>
> #x-axis
> axis(1, tcl=0.35,seq(-0.1,0.5,by=0.1))
> box()
>
> abline(lm(data_corr[,2]~data_corr[,1]))
>
> It works for the y-axis, but unfortunately, the x-axis is still not
working:
> It starts at 0 and end at 0.4, but it should start at -0.1
Why? Your data extend to ~-0.07.
> , as mentioned in
> the code (cf picture) :confused:
> http://www.nabble.com/file/p23742121/Rplots_2.png
You didn't adjust the x-axis limits to tell R that you wanted -0.1 to be
[[elided Yahoo spam]]
I've tidied your script and altered it to extend the x-axis a bit so
that -0.1 is included:
Win <- c(-0.005276404, 0.081894394, -0.073461539, 0.184371967,
0.133189670, -0.006239016, -0.063616699, 0.196754234,
0.402148743, 0.104408425, 0.036910154, 0.195227863,
0.212743723, 0.280889666, 0.300277802)
Calgary<- c(5, 8, 11, 3, 7, 4, 7, 1, 3, 6, 3, 2, 8, 0, 1)
data_corr <- data.frame(Win, Calgary)
plot(Calgary ~ Win, data = data_corr, type = "p", axes = FALSE,
col = "blue", lwd = 2, xlim = c(-0.1,0.5))
## ^^^^^^^^^^^^^^^^^^
axis(2, tcl = 0.35, seq(1, 11, by=2))
axis(1, tcl = 0.35, seq(-0.1, 0.5, by=0.1))
box()
abline(lm(Calgary ~ Win, data = data_corr))
HTH
G
--
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
Dr. Gavin Simpson [t] +44 (0)20 7679 0522
ECRC, UCL Geography, [f] +44 (0)20 7679 0565
Pearson Building, [e] gavin.simpsonATNOSPAMucl.ac.uk
Gower Street, London [w] http://www.ucl.ac.uk/~ucfagls/
UK. WC1E 6BT. [w] http://www.freshwaters.org.uk
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
------------------------------
Message: 105
Date: Wed, 27 May 2009 13:38:35 -0600
From: Greg Snow <Greg.Snow@imail.org>
Subject: Re: [R] Axis label spanning multiple plots
To: Andre Nathan <andre@digirati.com.br>, "r-help@r-project.org"
<r-help@r-project.org>
Message-ID:
<B37C0A15B8FB3C468B5BC7EBC7DA14CC61D1FC10DC@LP-EXMBVS10.CO.IHC.COM>
Content-Type: text/plain; charset="us-ascii"
Create an outer margin (see ?par), then use mtext to put the title in the outer
margin.
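(Editor's sketch of the suggestion above; the margin sizes, positions and data
are illustrative, not taken from the thread.)
par(mfrow = c(3, 1), oma = c(3, 3, 0, 0), mar = c(3, 3, 1, 1))
plot(rnorm(10)); plot(rnorm(10)); plot(rnorm(10))
mtext("title 1", side = 2, outer = TRUE, at = 2/3, line = 1)  # spans the top two panels
mtext("title 2", side = 2, outer = TRUE, at = 1/6, line = 1)  # bottom panel
mtext("x-title", side = 1, outer = TRUE, line = 1)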
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.snow@imail.org
801.408.8111
> -----Original Message-----
> From: r-help-bounces@r-project.org [mailto:r-help-bounces@r-
> project.org] On Behalf Of Andre Nathan
> Sent: Wednesday, May 27, 2009 1:40 PM
> To: r-help@r-project.org
> Subject: [R] Axis label spanning multiple plots
>
> Hello
>
> I need to plot 3 graphs in a single column; the top two plots have the
> same title, and I would like it to be written only once, centered
> horizontally and spanning the two plots. Something like
>
>
> t +------------+
> | |
> i | |
> | |
> t +------------+
>
> l +------------+
> | |
> e | |
> | |
> 1 +------------+
>
> t +------------+
> i | |
> t | |
> l | |
> e | |
> 2 +------------+
>
> x-title
>
> Is there a parameter which allows me to do that automatically, or
> should
> I use text() and position the title manually?
>
> Thanks in advance,
> Andre
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-
> guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 106
Date: Wed, 27 May 2009 21:40:37 +0100
From: Gavin Simpson <gavin.simpson@ucl.ac.uk>
Subject: Re: [R] how do I get to be a member?
To: Michael Menke <menke@email.arizona.edu>
Cc: r-help@r-project.org
Message-ID: <1243456837.2935.11.camel@localhost.localdomain>
Content-Type: text/plain
On Wed, 2009-05-27 at 11:09 -0700, Michael Menke wrote:
> Information, please.
On what? Member of what?
If you mean a member of the R Foundation, see:
http://www.r-project.org/foundation/membership.html
If you mean this list, you already are, otherwise you wouldn't be able
to post (IIRC).
If this is not what you want, perhaps you could be slightly more
explicit about what it is you *do* want to become a member of?
HTH
G
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
--
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
Dr. Gavin Simpson [t] +44 (0)20 7679 0522
ECRC, UCL Geography, [f] +44 (0)20 7679 0565
Pearson Building, [e] gavin.simpsonATNOSPAMucl.ac.uk
Gower Street, London [w] http://www.ucl.ac.uk/~ucfagls/
UK. WC1E 6BT. [w] http://www.freshwaters.org.uk
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
------------------------------
Message: 107
Date: Wed, 27 May 2009 22:19:46 +0200
From: Stefan Grosse <singularitaet@gmx.net>
Subject: Re: [R] Factor level with no cases shows up in a plot
To: Arthur Burke <burkea@nwrel.org>
Cc: r-help@r-project.org
Message-ID: <4A1DA062.3000506@gmx.net>
Content-Type: text/plain; charset=ISO-8859-1
Arthur Burke wrote:
>
> ... I get the four lines that I expected but the legend includes the
> Group level "cohort 4" .
>
> How can I get rid of "cohort 4" in Group?
>
This link might be of interest for you:
http://wiki.r-project.org/rwiki/doku.php?id=tips:data-manip:drop_unused_levels
hth
Stefan
------------------------------
Message: 108
Date: Wed, 27 May 2009 22:59:39 +0200
From: <mauede@alice.it>
Subject: [R] R: Harmonic Analysis
To: "stephen sefick" <ssefick@gmail.com>,
<r-help@stat.math.ethz.ch>
Message-ID:
<6B32C438581E5D4C8A34C377C3B334A403B1ED89@FBCMST11V04.fbc.local>
Content-Type: text/plain
Well, the time series I am dealing with are non-linear and non-stationary.
Maura
-----Messaggio originale-----
Da: r-help-bounces@r-project.org per conto di stephen sefick
Inviato: mer 27/05/2009 14.58
A: r-help@stat.math.ethz.ch
Oggetto: Re: [R] Harmonic Analysis
why will a fourier transform not work?
2009/5/27 Uwe Ligges
<ligges@statistik.tu-dortmund.de>:>
>
> Dieter Menne wrote:
>>
>> <mauede <at> alice.it> writes:
>>
>>> I am looking for a package to perform harmonic analysis with the
goal of
>>> estimating the period of the
>>> dominant high frequency component in some mono-channel signals.
>>
>> You should widen your scope by looking at "time series" instead of harmonic
>> analysis. There is a task view on the subject at
>>
>> http://cran.at.r-project.org/web/views/TimeSeries.html
>
>
> ... or take a look at package tuneR.
>
> Uwe Ligges
>
>
>
>
>> Dieter
>>
>> ______________________________________________
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Stephen Sefick
Let's not spend our time and resources thinking about things that are
so little or so large that all they really do for us is puff us up and
make us feel like gods. We are mammals, and have not exhausted the
annoying little problems of being mammals.
-K. Mullis
______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
[[elided Yahoo spam]]
[[alternative HTML version deleted]]
------------------------------
Message: 109
Date: Wed, 27 May 2009 16:39:18 -0500
From: Kevin W <kw.statr@gmail.com>
Subject: Re: [R] Changing point color/character in qqmath
To: r-help@r-project.org
Message-ID:
<5c62e0070905271439p47167eb8je7f840435f41de70@mail.gmail.com>
Content-Type: text/plain
Thanks to Deepayan, I have a corrected version of how to color points in a
qqmath plot according to another variable. Using the barley data for a more
concise example:
qqmath(~ yield | variety, data = barley, groups=year,
auto.key=TRUE,
prepanel = function(x, ...) {
list(xlim = range(qnorm(ppoints(length(x)))))
},
panel = function(x, ...) {
xx <- qnorm(ppoints(length(x)))[order(x)]
panel.xyplot(x = xx, y = x, ...)
})
The example I posted previously (and shown below) is not correct. My
apologies.
Kevin
On Wed, May 27, 2009 at 11:05 AM, Kevin W <kw.statr@gmail.com> wrote:
> Having solved this problem, I am posting this so that the next time I
> search for how to do this I will find an answer...
>
> Using qqmath(..., groups=num) creates a separate qq distribution for each
> group (within a panel). Using the 'col' or 'pch' argument
does not
> (usually) work because panel.qqmath sorts the data (but not 'col'
or 'pch')
> before plotting. Sorting the data before calling qqmath will ensure that
> the sorting does not change the order of the data.
>
> For example, to obtain one distribution per voice part and color the point
> by part 1 or part 2:
>
> library(lattice)
> singer <- singer
> singer <- singer[order(singer$height),]
> singer$part <- factor(sapply(strsplit(as.character(singer$voice.part),
> split = " "), "[", 1),
> levels = c("Bass", "Tenor", "Alto",
"Soprano"))
> singer$num <- factor(sapply(strsplit(as.character(singer$voice.part),
split
> = " "), "[", 2))
> qqmath(~ height | part, data = singer,
> col=singer$num,
> layout=c(4,1))
>
>
>
> Kevin
>
>
[[alternative HTML version deleted]]
------------------------------
Message: 110
Date: Thu, 28 May 2009 00:08:11 +0200
From: Emmanuel Charpentier <charpent@bacbuc.dyndns.org>
Subject: Re: [R] Linear Regression with Constraints
To: r-help@stat.math.ethz.ch
Message-ID: <1243462090.6898.32.camel@yod>
Content-Type: text/plain; charset="UTF-8"
On Wednesday 27 May 2009 at 17:28 +1000, Bill.Venables@csiro.au wrote:
> You can accommodate the constraints by, e.g., putting
>
> c2 = pnorm(theta2)
> c3 = pnorm(theta3)
Nice. I'd have tried invlogit(), but I'm seriously biased...
> x1 has a known coefficient (unity) so it becomes an offset.
> Essentially your problem can be written
>
> y1 = y-x1 = c1 + pnorm(theta2)*x2 - pnorm(theta3)*x3 + error
>
> This is then a (pretty simple) non-linear regression which could be
> fitted using, e.g. nls
>
> If you could not rule out the possibility that the solution is on the
> boundary, you could put c2 = (cos(theta2))^2, and the fitting
> procedure could take you there. The solution is not unique, but the
> original coefficients, c2,c3, would be unique, of course.
Better than invlogit...
> With just 6 observations and 4 parameters to estimate, you will need
> the model to be an exceptionally close fitting one for the fit to have
> any credibility at all.
Seconded! Except that "an exceptionally good fit" might also be an
accident, and therefore deserve not much more credibility than an average
one...
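(Editor's addition, a minimal nls() sketch of the pnorm() reparameterisation
quoted above; the data frame 'dat' with columns y, x1, x2, x3 and the starting
values are illustrative, not from the thread.)
fit <- nls(y - x1 ~ c1 + pnorm(theta2) * x2 - pnorm(theta3) * x3,
           data  = dat,
           start = list(c1 = 0, theta2 = 0, theta3 = 0))
## back-transform to the constrained coefficients:
## c2 <- pnorm(coef(fit)["theta2"]); c3 <- pnorm(coef(fit)["theta3"])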
Now to get back to the original formulation of the question, it turns
out that, for those particular bounds, you don't even need
constrOptim() : optim() will do, as exemplified in the following
script :
>
> #######################################################################
> # Constrained optimization demo
>
> # It turns out that, for "vertical" bounds (i. e. bounds for a coefficient
> # that do not depend on other coefficients), optim does the job.
> # constrOptim does the job for any polyhedral domain, but uses a more
> # intricate form of bounds expression (hyperplanes in p dimensions).
> # compare ?optim and ?constrOptim...
>
> # Pseudodata for the question asked by "Stu @ AGS" <stu@agstechnet.com>
> # in r-help (Linear Regression with Constraints)
> # Begin quote
> # y = c1 + x1 + (c2 * x2) - (c3 * x3)
> #
> # Where c1, c2, and c3 are the desired regression coefficients that are
> # subject to the following constraints:
> #
> # 0.0 < c2 < 1.0, and
> # 0.0 < c3 < 1.0
> # End quote
>
> # y1 : "Okay" choice of parameters : c2=0.3, c3=0.7
> # y2 : (Vicious) choice of "true" parameters : c2=0.5, c3=-0.5
> # optimum out of bounds
> # c1=1 in both cases
>
> set.seed(1453)
>
> PseudoData<-data.frame(x1=runif(6, -5, 5),
+ x2=runif(6, -5, 5),
+ x3=runif(6, -5, 5))
>
> PseudoData<-within(PseudoData, {
+ y1=1+x1+0.3*x2+0.7*x3+rnorm(6, 0, 3)
+ y2=1+x1+0.5*x2-0.5*x3+rnorm(6, 0, 3)
+ })
>
> Data1<-with(PseudoData,data.frame(y=y1, x1=x1, x2=x2, x3=x3))
> Data2<-with(PseudoData,data.frame(y=y2, x1=x1, x2=x2, x3=x3))
>
> # The objective function: least squares.
> # I'm lazy...
>
> # Squared residuals : vector function, to be summed in objective,
> # and needs data in its eval environment.
> # R really needs some form of lispish (let) or (let*)...
>
> e<-expression((y-(c1+x1+c2*x2+c3*x3))^2) # Least squares (needs data in env.)
>
> # Use expression form of deriv(), to allow easy evaluation in a constructed
> # environment
>
> foo<-deriv(e, nam=c("c1","c2","c3"))
>
> # Objective
>
> objfun<-function(coefs, data) {
+ return(sum(eval(foo,env=c(as.list(coefs), as.list(data)))))
+ }
>
> # Objective's gradient
>
> objgrad<-function(coefs, data) {
+ return(apply(attr(eval(foo,env=c(as.list(coefs), as.list(data))),
+ "gradient"),2,sum))
+ }
>
> # raincheck : unbounded optimization, but using a form palatable to
> # bounded optimization
>
> system.time(D1.unbound<-optim(par=c(c1=0.5, c2=0.5, c3=0.5),
+ fn=objfun,
+ gr=objgrad,
+ data=Data1,
+ method="L-BFGS-B",
+ lower=rep(-Inf, 3),
+ upper=rep(Inf, 3)))
utilisateur système écoulé
0.004 0.000 0.004
>
> D1.unbound
$par
c1 c2 c3
-0.5052088 0.2478704 0.7626611
# c2 and c3 are sort of okay (larger range of variation for x2 and x3)
# but c1 is a bit out of whack (residual=3, we were unlucky...)
$value
[1] 12.87678
# keep in mind for future comparison
$counts
function gradient
9 9
$convergence
[1] 0
$message
[1] "CONVERGENCE: REL_REDUCTION_OF_F <= FACTR*EPSMCH"
>
> # Another raincheck: does bounded optimization reach the same objective
> # when the "true" value lies in the bound region ?
>
> system.time(D2.unbound<-optim(par=c(c1=0.5, c2=0.5, c3=0.5),
+ fn=objfun,
+ gr=objgrad,
+ data=Data2,
+ method="L-BFGS-B",
+ lower=rep(-Inf, 3),
+ upper=rep(Inf, 3)))
utilisateur système écoulé
0.008 0.000 0.006
>
> D2.unbound
$par
c1 c2 c3
2.3988370 0.1983261 -0.3141863
# Not very close to the "real" values
$value
[1] 8.213914
# keep in mind
$counts
function gradient
12 12
$convergence
[1] 0
$message
[1] "CONVERGENCE: REL_REDUCTION_OF_F <= FACTR*EPSMCH"
>
> system.time(D1.bound<-optim(par=c(c1=0.5, c2=0.5, c3=0.5),
+ fn=objfun,
+ gr=objgrad,
+ data=Data1,
+ method="L-BFGS-B",
+ lower=c(-Inf,0,0),
+ upper=c(Inf,1,1)))
utilisateur système écoulé
0.004 0.000 0.004
>
> D1.bound
$par
c1 c2 c3
-0.5052117 0.2478706 0.7626612
$value
[1] 12.87678
# About the same values as for the unbounded case. Okay
$counts
function gradient
9 9
$convergence
[1] 0
$message
[1] "CONVERGENCE: REL_REDUCTION_OF_F <= FACTR*EPSMCH"
>
> # The real test: what will be found when the "real" objective is out of
> # bounds?
>
> system.time(D2.bound<-optim(par=c(c1=0.5, c2=0.5, c3=0.5),
+ fn=objfun,
+ gr=objgrad,
+ data=Data2,
+ method="L-BFGS-B",
+ lower=c(-Inf,0,0),
+ upper=c(Inf,1,1)))
utilisateur système écoulé
0.004 0.000 0.004
>
> D2.bound
$par
c1 c2 c3
2.0995442 0.2192566 0.0000000
# The optimizer bangs its pretty little head on the c3 wall. c1 is out of
# the picture, but c2 happens to be not so bad...
$value
[1] 14.4411
# The residual is (as expected) quite larger than in the unbound case...
$counts
function gradient
8 8
$convergence
[1] 0
$message
[1] "CONVERGENCE: REL_REDUCTION_OF_F <= FACTR*EPSMCH"
# ... and that's not a computing anomaly.
This also illustrates the hubris necessary to fit 3 coefficients on 6
data points...
HTH,
Emmanuel Charpentier
------------------------------
Message: 111
Date: Wed, 27 May 2009 22:52:22 +0000 (GMT)
From: amorigandhi@yahoo.de
Subject: [R] boxplot
To: r-help@stat.math.ethz.ch
Message-ID: <837578.35350.qm@web26003.mail.ukl.yahoo.com>
Content-Type: text/plain
Hi guys,
Is there any function in R for boxplot with different time points?
t1 <- c(rep(1,20),rep(2,20))
t2 <- c(rep(1,10),rep(2,10),rep(1,10),rep(2,10))
x <- rnorm(40,5,1)
dat <- data.frame(t1,t2,x)
boxplot(x~t1,t2)
Many thanks,
Amor
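(Editor's sketch, not a reply from the thread: base R's boxplot() accepts a
two-factor formula, giving one box per combination of the two time variables.)
boxplot(x ~ t1 + t2, data = dat, xlab = "t1.t2", ylab = "x")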
[[alternative HTML version deleted]]
------------------------------
Message: 112
Date: Wed, 27 May 2009 18:56:59 -0400
From: stephen sefick <ssefick@gmail.com>
Subject: Re: [R] R: Harmonic Analysis
To: mauede@alice.it
Cc: r-help@stat.math.ethz.ch
Message-ID:
<c502a9e10905271556j1f96b9e1j86898964ae082e36@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Do you need time localization, or are you only interested in the
period of the high frequency? If you do need time localization why
not use a CWT to look at the signal?
Stephen
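(Editor's sketch for the period-estimation part of this thread, using base R's
spectrum(); the simulated series is illustrative and not from the discussion.)
x <- sin(2 * pi * (1:500) / 7) + rnorm(500, sd = 0.3)  # dominant period of 7
sp <- spectrum(x, plot = FALSE)
1 / sp$freq[which.max(sp$spec)]                        # estimated period, ~7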
On Wed, May 27, 2009 at 4:59 PM, <mauede@alice.it>
wrote:
> Well, the time series I am dealing with are non-linear and not-stationary.
> Maura
>
> -----Messaggio originale-----
> Da: r-help-bounces@r-project.org per conto di stephen sefick
> Inviato: mer 27/05/2009 14.58
> A: r-help@stat.math.ethz.ch
> Oggetto: Re: [R] Harmonic Analysis
>
> why will a fourier transform not work?
> 2009/5/27 Uwe Ligges <ligges@statistik.tu-dortmund.de>:
>>
>>
>> Dieter Menne wrote:
>>>
>>> <mauede <at> alice.it> writes:
>>>
>>>> I am looking for a package to perform harmonic analysis with
the goal of
>>>> estimating the period of the
>>>> dominant high frequency component in some mono-channel signals.
>>>
>>> You should widen your scope by looking a "time series"
instead of
>>> harmonic
>>> analysis. There is a task view on the subject at
>>>
>>> http://cran.at.r-project.org/web/views/TimeSeries.html
>>
>>
>> ... or take a look at package tuneR.
>>
>> Uwe Ligges
>>
>>
>>
>>
>>> Dieter
>>>
>>> ______________________________________________
>>> R-help@r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>> PLEASE do read the posting guide
>>> http://www.R-project.org/posting-guide.html
>>> and provide commented, minimal, self-contained, reproducible code.
>>
>> ______________________________________________
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
>
>
> --
> Stephen Sefick
>
> Let's not spend our time and resources thinking about things that are
> so little or so large that all they really do for us is puff us up and
> make us feel like gods.? We are mammals, and have not exhausted the
> annoying little problems of being mammals.
>
> -K. Mullis
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
>
>
--
Stephen Sefick
Let's not spend our time and resources thinking about things that are
so little or so large that all they really do for us is puff us up and
make us feel like gods. We are mammals, and have not exhausted the
annoying little problems of being mammals.
-K. Mullis
------------------------------
Message: 113
Date: Wed, 27 May 2009 19:24:22 -0400
From: stephen sefick <ssefick@gmail.com>
Subject: Re: [R] boxplot
To: amorigandhi@yahoo.de
Cc: r-help@stat.math.ethz.ch
Message-ID:
<c502a9e10905271624m5530567dyf4503c30fd150435@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
I don't understand what you want.
On Wed, May 27, 2009 at 6:52 PM, <amorigandhi@yahoo.de>
wrote:> Hi gues,
>
> Is there any function in R for boxplot with different time points?
> t1 <- c(rep(1,20),rep(2,20))
> t2 <- c(rep(1,10),rep(2,10),rep(1,10),rep(2,10))
> x <- rnorm(40,5,1)
> dat <- data.frame(t1,t2,x)
>
> boxplot(x~t1,t2)
>
>
> Many thanks,
> Amor
>
>
>
> [[alternative HTML version deleted]]
>
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
>
--
Stephen Sefick
Let's not spend our time and resources thinking about things that are
so little or so large that all they really do for us is puff us up and
make us feel like gods. We are mammals, and have not exhausted the
annoying little problems of being mammals.
-K. Mullis
------------------------------
Message: 114
Date: Wed, 27 May 2009 19:38:12 -0400
From: Benjamin Tyner <btyner@gmail.com>
Subject: [R] lattice::xyplot axis padding with fontfamily="mono"
To: r-help@r-project.org
Message-ID: <4A1DCEE4.4040408@gmail.com>
Content-Type: text/plain; charset="iso-8859-1";
Format="flowed"
Hello,
Say I have a predictor whose value is a very wide (long) string:
Data <-
data.frame(pred="aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",resp=1)
print(xyplot(pred~resp, data=Data)) # enough y-axis padding to
accommodate the wide label
print(xyplot(pred~resp, data=Data,scales=list(fontfamily="mono"))) #
not enough padding
What's the recommended way to have enough padding allocated?
Thank you
Ben
------------------------------
Message: 115
Date: Wed, 27 May 2009 20:25:29 -0400
From: Andrey Lyalko <alyalko@gmail.com>
Subject: [R] How do I get removed from this mailing list?
To: R-help@r-project.org
Message-ID:
<e68c632e0905271725h20e9d304m5dac60b4c76003e3@mail.gmail.com>
Content-Type: text/plain
How do I get removed from this mailing list?
[[alternative HTML version deleted]]
------------------------------
Message: 116
Date: Wed, 27 May 2009 20:26:41 -0400
From: Farrel Buchinsky <fjbuch@gmail.com>
Subject: Re: [R] RGoogleDocs: can now see documents but cannot get
content.
To: Duncan Temple Lang <duncan@wald.ucdavis.edu>
Cc: R <r-help@stat.math.ethz.ch>
Message-ID:
<bd93cdad0905271726w4a426459t1c3e42ec35fba818@mail.gmail.com>
Content-Type: text/plain
I already downloaded 0.2-2 - if my memory serves me correctly. If we just go
by date, have you updated the files on the server since May 18?
Farrel Buchinsky
Google Voice Tel: (412) 567-7870
Sent from Pittsburgh, Pennsylvania, United States
On Wed, May 20, 2009 at 12:28, Duncan Temple Lang
<duncan@wald.ucdavis.edu>wrote:
>
> Hi Farrel
>
> This particular problem is a trivial issue of an argument out
> of place due to a change in the function definition during the
> development. There is a new version of the package (0.2-2)
> and it also uses a slightly different approach (and function)
> to pull the values into the form of an R data frame.
>
> Please try that and hopefully it will work.
>
> The code in the run.pdf (or run.html) file on the Web page
> and in the package works and is the best and shortest
> example of sheetAsMatrix().
>
> Let me know if there are still problems.
>
>
> D.
>
> Farrel Buchinsky wrote:
>
>> The author of the package, Duncan Temple Lang posted an update. I have
>> installed it and now can list my spreadsheets but alas I cannot read
the
>> data within any of them.
>> Has anybody been able to get it to work.
>> I would love to see a real live example of sheetAsMatrix
>> I am not sure how to specify sheet and con = sheet@connection. I have
>> tried
>> many ways but just get:
>> Error in !includeEmpty : invalid argument type
>>
>> Windows Vista (with UAC disabled)
>> R 2.9.0
>>
>> Farrel Buchinsky
>>
>> [[alternative HTML version deleted]]
>>
>> ______________________________________________
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
>
[[alternative HTML version deleted]]
------------------------------
Message: 117
Date: Wed, 27 May 2009 20:29:22 -0400
From: Duncan Murdoch <murdoch@stats.uwo.ca>
Subject: Re: [R] How do I get removed from this mailing list?
To: Andrey Lyalko <alyalko@gmail.com>
Cc: R-help@r-project.org
Message-ID: <4A1DDAE2.3010105@stats.uwo.ca>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
On 27/05/2009 8:25 PM, Andrey Lyalko wrote:
> How do I get removed from this mailing list?
>
Like most lists nowadays, it gives the instructions in each message header:
List-Unsubscribe: <https://stat.ethz.ch/mailman/options/r-help>,
<mailto:r-help-request@r-project.org?subject=unsubscribe>
Duncan Murdoch
------------------------------
Message: 118
Date: Thu, 28 May 2009 01:36:13 +0100
From: Zhou Fang <zhou.zfang@gmail.com>
Subject: [R] Replace is leaking?
To: r-help@r-project.org
Message-ID:
<b19557450905271736g559a6142w37259d0d23a44e35@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Okay, someone explain this behaviour to me:
Browse[1]> replace(rep(0, 4000), temp1[12] , temp2[12])[3925]
[1] 0.4462404
Browse[1]> temp1[12]
[1] 3926
Browse[1]> temp2[12]
[1] 0.4462404
Browse[1]> replace(rep(0, 4000), 3926 , temp2[12])[3925]
[1] 0
For some reason, R seems to shift indices along when doing this replacement.
Has anyone encountered this bug before? It seems to crop up from time
to time, seemingly at random. Any idea for a fix? Reassigning the
variables seems to preserve the magicness of the numbers. It all seems
very bizarre and worrying.
If anyone is interested in a R workspace to reproduce this, email me.
This is running in R 2.9.
Zhou Fang
------------------------------
Message: 119
Date: Wed, 27 May 2009 19:58:17 -0500
From: Erin Hodgess <erinm.hodgess@gmail.com>
Subject: [R] question about using a remote system
To: R help <r-help@stat.math.ethz.ch>
Message-ID:
<7acc7a990905271758x2fbf2d2fw7c8b3f5275ef8f32@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Dear R People:
I would like to set up a plug-in for Rcmdr to do the following:
I would start on a Linux laptop. Then I would log into another
outside system and run a some commands.
Now, when I tried to do
system("ssh erin@xxx.edu")
password xxxxxx
It goes to the remote system. how do I continue to issue commands
from the Linux laptop please?
(hope this makes sense)
thanks,
Erin
--
Erin Hodgess
Associate Professor
Department of Computer and Mathematical Sciences
University of Houston - Downtown
mailto: erinm.hodgess@gmail.com
------------------------------
Message: 120
Date: Thu, 28 May 2009 02:26:21 +0100
From: Zhou Fang <zhou.zfang@gmail.com>
Subject: Re: [R] Replace is leaking?
To: r-help@r-project.org
Message-ID: <4A1DE83D.30908@gmail.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Oh hang on, I've figured it out.
Rounding error, doh. Somewhere along the line I got lazy and took the
weighted average of two values that are equal. as.integer truncates, so,
yeah. Never mind.
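A minimal illustration of that failure mode (toy numbers, not the original
workspace): a computed index can land fractionally below the integer you
expect, and as.integer() truncates towards zero.
i <- (1 - 0.9) * 10 * 3926   # "should" be 3926, but is fractionally smaller
as.integer(i)                # 3925 -- the apparent one-position shift
round(i)                     # 3926 -- rounding instead of truncating avoids it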
Zhou Fang
------------------------------
Message: 121
Date: Thu, 28 May 2009 13:28:56 +1200
From: Rolf Turner <r.turner@auckland.ac.nz>
Subject: Re: [R] Replace is leaking?
To: Zhou Fang <zhou.zfang@gmail.com>
Cc: "r-help@r-project.org" <r-help@r-project.org>
Message-ID: <14396079-087D-4ABE-814A-F0C1E37B12B2@auckland.ac.nz>
Content-Type: text/plain; charset="US-ASCII"; delsp=yes; format=flowed
On 28/05/2009, at 12:36 PM, Zhou Fang wrote:
> Okay, someone explain this behaviour to me:
>
> Browse[1]> replace(rep(0, 4000), temp1[12] , temp2[12])[3925]
> [1] 0.4462404
> Browse[1]> temp1[12]
> [1] 3926
> Browse[1]> temp2[12]
> [1] 0.4462404
> Browse[1]> replace(rep(0, 4000), 3926 , temp2[12])[3925]
> [1] 0
>
> For some reason, R seems to shift indices along when doing this
> replacement.
>
> Has anyone encountered this bug before? It seems to crop up from time
> to time, seemingly at random. Any idea for a fix? Reassigning the
> variables seems to preserve the magicness of the numbers. It all seems
> very bizarre and worrying.
>
> If anyone is interested in a R workspace to reproduce this, email me.
> This is running in R 2.9.
Doesn't happen to me:
> temp1 <- sample(1:10000,20)
> temp2 <- runif(20)
> temp1[12] <- 3926
> temp2[12] <- 0.4462404
> temp1[12]
[1] 3926
> temp2[12]
[1] 0.4462404
> xxx <- replace(rep(0, 4000), temp1[12] , temp2[12])
> xxx[3925]
[1] 0
> xxx[3926]
[1] 0.4462404
OMMMMMMMMMMMMMMMMMM!
cheers,
Rolf Turner
######################################################################
Attention:\ This e-mail message is privileged and confid...{{dropped:9}}
------------------------------
Message: 122
Date: Wed, 27 May 2009 21:38:41 -0400
From: jim holtman <jholtman@gmail.com>
Subject: Re: [R] problem with cbind
Cc: r-help@r-project.org
Message-ID:
<644e1f320905271838h1777c16i155af117ada2eec1@mail.gmail.com>
Content-Type: text/plain
Here is one way of doing it by splitting the data and then padding
everything to the same length:
> x <- data.frame(pat=LETTERS, age=sample(60, 26))
> x.cut <- split(x, cut(x$age, breaks=c(1,seq(10,60,10))))
> # determine maximum number in a group and then pad the rest out to that size
> x.max <- max(sapply(x.cut, nrow))
> x.pad <- lapply(x.cut, function(.grp){
+   c(as.character(.grp$pat), rep("", x.max - nrow(.grp)))
+ })
> # now you can do the cbind
> print(do.call(cbind, x.pad), quote=FALSE)
(1,10] (10,20] (20,30] (30,40] (40,50] (50,60]
[1,] H E C B A K
[2,] L J D F G P
[3,] T V I O N R
[4,] U W M X S
[5,] Y Q
[6,] Z
>
>
>
>
> Hi All,
>
> I have a file with two columns, the first column has the names of the
> patients and the second column has the age. I am looking into creating an
> output file that looks like
>
> 1-10     10-20    etc
> Eric     Chris
> Bob      mat
>          Andrew
>          Suzan
>
>
> Where each column has the name of the patients in a given age category that
> is displayed in the header. For example in the output, the first column has
> the name of the patients with age between 1 to 10.
>
> The problem that I am having is that I can not use cbind since the length
> of
> the vectors is different. Is there a way to create such a file?
>
> Thanks for your help
>
>
>
> --
> View this message in context:
> http://www.nabble.com/problem-with-cbind-tp23747075p23747075.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
>
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Jim Holtman
Cincinnati, OH
+1 513 646 9390
What is the problem that you are trying to solve?
[[alternative HTML version deleted]]
------------------------------
Message: 123
Date: Wed, 27 May 2009 21:58:37 -0400
From: Gabor Grothendieck <ggrothendieck@gmail.com>
Subject: Re: [R] problem with cbind
Cc: r-help@r-project.org
Message-ID:
<971536df0905271858u6fbc4456wfa931635867bd4ea@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Try this where Age.Group is a factor whose
levels represent the columns of out and Seq is
a sequence number labeling the first Name
in each Age.Group 1, the second 2 and
so on.
> DF <- data.frame(Name = LETTERS, Age = 1:26)
> DF$Age.Group <- cut(DF$Age, seq(0, 30, 10))
> DF$Seq <- with(DF, ave(seq_along(Name), Age.Group, FUN = seq_along))
> out <- tapply(DF$Name, DF[c("Seq", "Age.Group")],
paste)
> out[is.na(out)] <- ""
> out
Age.Group
Seq (0,10] (10,20] (20,30]
1 "A" "K" "U"
2 "B" "L" "V"
3 "C" "M" "W"
4 "D" "N" "X"
5 "E" "O" "Y"
6 "F" "P" "Z"
7 "G" "Q" ""
8 "H" "R" ""
9 "I" "S" ""
10 "J" "T" ""
>
> Hi All,
>
> I have a file with two columns, the first column has the names of the
> patients and the second column has the age. I am looking into creating an
> output file that looks like
>
> 1-10     10-20    etc
> Eric     Chris
> Bob      mat
>          Andrew
>          Suzan
>
>
> Where each column has the name of the patients in a given age category that
> is displayed in the header. For example in the output, the first column has
> the name of the patients with age between 1 to 10.
>
> The problem that I am having is that I can not use cbind since the length of
> the vectors is different. Is there a way to create such a file?
>
> Thanks for your help
>
>
>
> --
> View this message in context:
http://www.nabble.com/problem-with-cbind-tp23747075p23747075.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
------------------------------
Message: 124
Date: Wed, 27 May 2009 19:43:36 -0700 (PDT)
From: Ben Bolker <bolker@ufl.edu>
Subject: Re: [R] R Books listing on R-Project
To: r-help@r-project.org
Message-ID: <23754282.post@talk.nabble.com>
Content-Type: text/plain; charset=UTF-8
G. Jay Kerns wrote:
>
> Hello Stavros,
>
> On Wed, May 27, 2009 at 2:53 PM, Stavros Macrakis
<macrakis@alum.mit.edu>
> wrote:
>> I was wondering what the criteria were for including books on the Books
>> Related to R page <http://www.r-project.org/doc/bib/R-books.html>.
>> (There is no maintainer listed on this page.)
>>
>> In particular, I was wondering why the following two books are not
>> listed:
>>
>> * Andrew Gelman, Jennifer Hill, *Data Analysis Using Regression and
>> Multilevel/Hierarchical Models*. (CRAN package 'arm')
>>
>> * Michael J. Crawley, *The R Book*. (reviewed, rather negatively, in
>> *R News* *7*:2)
>>
>> Is the list more or less arbitrary? Does it reflect some editorial
>> judgment about the value of these books? If so, it might be more useful
>> to include the books, but with critical reviews. It doesn't seem to be a
>> matter of up-to-dateness, because 38/87 of the listed books were published
>> in a more recent year than Gelman or Crawley.
>>
>> The list is currently in reverse chronological order. I wonder if it would
>> be useful to group the entries thematically -- I'd be happy to help on
>> that project.
>>
>
> I had a similar idea in 2008 for the R-wiki:
>
> http://tolstoy.newcastle.edu.au/R/e5/devel/08/10/0481.html
>
> There were no responses and I ran out of time to continue working on
> it myself. If you are interested in proceeding along these lines then
> I have some more ideas and would be willing to help... or, perhaps
> you (or somebody else) knows of an even better approach.
>
> Cheers
> Jay
>
>
I don't know what the editorial policy is, but Kurt Hornik put my book
up when I asked him to. (The BibTeX entries for both books are at
the bottom of this message, in case that's useful.)
Jay: why not post your R-books how to on the wiki itself???
I wrote some R code to wikify the R-books list from the
R web site -- it won't deal with LaTeX code in the abstract,
but otherwise should convert automatically.
w <- readLines(url("http://www.r-project.org/doc/bib/R-books.bib"))
g <- c(grep("^@",w),length(w)+1)
gd <- diff(g)
gd2 <- rep(1:length(gd),gd)
w2 <- split(w,gd2)
w2 <- w2[-na.omit(match(g,grep("^@comment",w)))]
w3 <- lapply(w2,
function(x) {
c("<bibtex>",x,"</bibtex>") })
w4 <- lapply(w3,
function(x) {
abs.start.token <- "^ *abstract *= *{"
abs.end.token <- "} *, *$"
abs.start <- grep(abs.start.token,x)
abs.end <- grep(abs.end.token,x[-(1:abs.start)])+abs.start
abstr <- x[abs.start:abs.end]
n <- length(abstr)
abstr[1] <- gsub(abs.start.token,"",abstr[1])
abstr[n] <- gsub(abs.end.token,"",abstr[n])
c(x[-(abs.start:abs.end)],"",abstr)
})
@book{crawley_r_2007,
edition = {1},
title = {The R Book},
isbn = {0470510242},
publisher = {Wiley},
author = {Michael J. Crawley},
month = jun,
year = {2007}
}
@book{gelman_data_2006,
address = {Cambridge, England},
title = {Data Analysis Using Regression and {Multilevel/Hierarchical}
Models},
url = {http://www.stat.columbia.edu/~gelman/arm/},
publisher = {Cambridge University Press},
author = {Andrew Gelman and Jennifer Hill},
year = {2006},
keywords = {uploaded}
}
--
View this message in context:
http://www.nabble.com/R-Books-listing-on-R-Project-tp23748687p23754282.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 125
Date: Thu, 28 May 2009 02:54:11 +0000
From: mohsin ali <ali.mohsin@hotmail.com>
Subject: [R] R help
To: <r-help@r-project.org>
Message-ID: <COL104-W47989B904D41B74D322C939B500@phx.gbl>
Content-Type: text/plain
Dear Sir
I am a new user of R.
I am interested in modeling hydrological extreme events. I found the
MSClaio2008 function very interesting. It offers four criteria for
choosing distributions. Can we call these criteria model selection
techniques or goodness-of-fit techniques or both? Goodness-of-fit
techniques are usually performed after model selection.
Can I find chi-square, Kolmogorov-Smirnov and Cramér-von Mises tests for
testing the goodness of fit of proposed distributions?
Please help
[[alternative HTML version deleted]]
------------------------------
Message: 126
Date: Thu, 28 May 2009 06:00:25 +0200
From: <mauede@alice.it>
Subject: [R] R: R: Harmonic Analysis
To: "stephen sefick" <ssefick@gmail.com>
Cc: r-help@stat.math.ethz.ch
Message-ID:
<6B32C438581E5D4C8A34C377C3B334A403B1ED8A@FBCMST11V04.fbc.local>
Content-Type: text/plain
Actually I do use DWT for feature extraction, which is aimed at clustering
signals bearing statistically comparable patterns.
Trend can easily fool any clustering function. This is why detrending is the
number 1 step in the whole procedure.
Sparing the quasi-harmonic components that bear most of the information is a
must.
I am interested in sparing the quasi-periodic oscillations that, otherwise,
would be wiped out by the detrending procedure.
Thanks,
Maura
-----Messaggio originale-----
Da: stephen sefick [mailto:ssefick@gmail.com]
Inviato: gio 28/05/2009 0.56
A: mauede@alice.it
Cc: r-help@stat.math.ethz.ch
Oggetto: Re: R: [R] Harmonic Analysis
Do you need time localization, or are you only interested in the
period of the high frequency? If you do need time localization why
not use a CWT to look at the signal?
Stephen
On Wed, May 27, 2009 at 4:59 PM, <mauede@alice.it>
wrote:
> Well, the time series I am dealing with are non-linear and non-stationary.
> Maura
>
> -----Messaggio originale-----
> Da: r-help-bounces@r-project.org per conto di stephen sefick
> Inviato: mer 27/05/2009 14.58
> A: r-help@stat.math.ethz.ch
> Oggetto: Re: [R] Harmonic Analysis
>
> why will a fourier transform not work?
> 2009/5/27 Uwe Ligges <ligges@statistik.tu-dortmund.de>:
>>
>>
>> Dieter Menne wrote:
>>>
>>> <mauede <at> alice.it> writes:
>>>
>>>> I am looking for a package to perform harmonic analysis with the goal
>>>> of estimating the period of the dominant high frequency component in
>>>> some mono-channel signals.
>>>
>>> You should widen your scope by looking at "time series" instead of
>>> harmonic analysis. There is a task view on the subject at
>>>
>>> http://cran.at.r-project.org/web/views/TimeSeries.html
>>
>>
>> ... or take a look at package tuneR.
>>
>> Uwe Ligges
>>
>>
>>
>>
>>> Dieter
>>>
>>> ______________________________________________
>>> R-help@r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>> PLEASE do read the posting guide
>>> http://www.R-project.org/posting-guide.html
>>> and provide commented, minimal, self-contained, reproducible code.
>>
>> ______________________________________________
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
>
>
> --
> Stephen Sefick
>
> Let's not spend our time and resources thinking about things that are
> so little or so large that all they really do for us is puff us up and
> make us feel like gods. We are mammals, and have not exhausted the
> annoying little problems of being mammals.
>
> -K. Mullis
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
>
>
--
Stephen Sefick
Let's not spend our time and resources thinking about things that are
so little or so large that all they really do for us is puff us up and
make us feel like gods. We are mammals, and have not exhausted the
annoying little problems of being mammals.
-K. Mullis
[[elided Yahoo spam]]
[[alternative HTML version deleted]]
------------------------------
Message: 127
Date: Thu, 28 May 2009 00:16:59 -0400
From: Michael Kubovy <kubovy@virginia.edu>
Subject: [R] Interaction plots as lines or bars?
To: r-help <r-help@r-project.org>
Message-ID: <805308B1-6055-4A97-8FF1-58577B38A98C@virginia.edu>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
Dear r-helpers,
An editor has suggested that I use bar plots to capture an interaction
of two 2-level factors and an interaction of a 2 by 3 factorial
experiment. (It would seem that there's a fear that someone might try
to interpolate between, e.g., 'male' and 'female'.) In general it
seems to me that an interaction plot with lines is easier to read, and
not likely to mislead. Does anyone know if and where this has been
discussed?
_____________________________
Professor Michael Kubovy
University of Virginia
Department of Psychology
USPS: P.O.Box 400400 Charlottesville, VA 22904-4400
Parcels: Room 102 Gilmer Hall
McCormick Road Charlottesville, VA 22903
Office: B011 +1-434-982-4729
Lab: B019 +1-434-982-4751
Fax: +1-434-982-4766
WWW: http://www.people.virginia.edu/~mk9y/
------------------------------
Message: 128
Date: Wed, 27 May 2009 21:38:00 -0700 (PDT)
Subject: [R] PBSmapping problems with importGSHHS
To: r-help@r-project.org
Message-ID: <682672.27807.qm@web36108.mail.mud.yahoo.com>
Content-Type: text/plain; charset=us-ascii
Dear List,
I am trying to use the PBSmapping package to import and plot the shoreline of
Hawaii. I am having problems importing and plotting the specific regions that I
would like to plot. Specifically, I can't get it to import the range of x
variables that I would like. I think my problem is with a variable called xoff,
but I have no idea what it is, what it does, or how I am supposed to use it.
Its default value is -360, but I have found examples in the PBSmapping web
publication of it being set to zero as well. The help file provides the
following definitions:
xlim range of X-coordinates to clip. Range should match the transform xoff.
xoff transform the X-coordinates by specified amount.
I can import certain regions of GSHH, but I don't seem to be able to limit
them like I want using xlim. If I change xoff to something besides -360 I get
different sizes of plots that are all the color I am using for land. I have the
following:
Hawaii.GSHH<- importGSHHS(GSHHfile,
xlim=c(-156.3,-155.92), ylim=c(19.45,19.5), xoff=-360)
The GSHHS shoreline data is found at:
http://www.soest.hawaii.edu/wessel/gshhs/gshhs.html
Any suggestions on what is going on?
Thanks,
Tim
Tim Clark
Department of Zoology
University of Hawaii
------------------------------
Message: 129
Date: Thu, 28 May 2009 01:35:56 -0400
From: "G. Jay Kerns" <gkerns@ysu.edu>
Subject: Re: [R] R Books listing on R-Project
To: Ben Bolker <bolker@ufl.edu>
Cc: r-help@r-project.org
Message-ID:
<a695148b0905272235h201c39afx4476c0206b75d773@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
>
> Jay: why not post your R-books how to on the wiki itself???
Because I thought that it would be better to write the instructions in
R-wiki language that anybody could modify rather than post a PDF by
me.
Here is what I had in mind:
http://wiki.r-project.org/rwiki/doku.php?id=links:books:howto
Also, I put the books from the main page in the above format:
http://wiki.r-project.org/rwiki/doku.php?id=links:books
> I wrote some R code to wikify the R-books list from the
> R web site -- it won't deal with LaTeX code in the abstract,
> but otherwise should convert automatically.
>
Excellent.... a person could use what you have written to simply
copy/paste into the wiki in the appropriate place(s).
One of the advantages of the R wiki is the ease with which books may
be categorized - which goes back to Stavros' OP. At the time, I went
to Springer or CRC somewhere and found the below categories. Judging
from the flyers that fill my office mailbox, I believe that all of
them are currently covered by *some* book related to R.
If it were the author's responsibility to put their book in the proper
category or categories, and given that authors are typically of a mind
to sell books... then perhaps the R wiki would populate and maintain
itself....?
Jay
Bayesian Statistics
Biostatistics
Computational Statistics
Environmental Statistics
Introductory Statistics
Probability Theory & Applications
Programming in R and S
Reference Statistics & Collected Works
SPC/Reliability/Quality Control
Statistical Genetics & Bioinformatics
Statistical Learning & Data Mining
Statistical Theory & Methods
Statistics for Biological Sciences
Statistics for Business, Finance & Economics
Statistics for Engineering and Physical Science
Statistics for Psychology, Social Science & Law
Unclassified
------------------------------
Message: 130
Date: Thu, 28 May 2009 11:42:41 +0530
From: anupam sinha <anupam.contact@gmail.com>
Subject: [R] Unable to load R
To: r-help@r-project.org
Message-ID:
<82ec54570905272312n679f4fb0m3e102b93affef297@mail.gmail.com>
Content-Type: text/plain
Dear all,
I have recently installed R on a Red Hat Enterprise Linux
(RHEL5) system. But I am unable to load R and it is giving the following
error :
*/usr/local/lib/R/bin/exec/R: error while loading shared libraries:
libreadline.so.5: cannot open shared object file: No such file or directory*
I have checked for the presence of the above-mentioned library and found that
the library is present. I have run out of ideas. Can anyone help me out???
I will be greatly indebted.
Cheers,
Anupam Sinha,
Graduate Student,
Lab of Computational Biology,
Centre for DNA Fingerprinting and Diagnostics,
Hyderabad, India
[[alternative HTML version deleted]]
------------------------------
Message: 131
Date: Thu, 28 May 2009 08:24:35 +0200
From: Zeljko Vrba <zvrba@ifi.uio.no>
Subject: Re: [R] Unable to load R
To: anupam sinha <anupam.contact@gmail.com>
Cc: r-help@r-project.org
Message-ID: <20090528062435.GF1197@anakin.ifi.uio.no>
Content-Type: text/plain; charset=us-ascii
On Thu, May 28, 2009 at 11:42:41AM +0530, anupam sinha
wrote:
>
> I have checked for the presence of the above mention library and found that
> the library is present. I have run out of ideas. Can anyone help me out???
> I will be greatly indebted.
>
First: how did you check that the library is present?
It might be that you have installed 32-bit R on 64-bit RHEL or vice-versa, so
you need to install the requisite libraries in appropriate "bitness".
------------------------------
Message: 132
Date: Wed, 27 May 2009 23:25:58 -0700 (PDT)
From: Dieter Menne <dieter.menne@menne-biomed.de>
Subject: Re: [R] Interaction plots as lines or bars?
To: r-help@r-project.org
Message-ID: <23755909.post@talk.nabble.com>
Content-Type: text/plain; charset=us-ascii
Michael Kubovy wrote:
>
> An editor has suggested that I use bar plots to capture an interaction
> of two 2-level factors and an interaction of a 2 by 3 factorial
> experiment. (It would seem that there's a fear that someone might try
> to interpolate between, e.g., 'male' and 'female'.) In general it
> seems to me that an interaction plot with lines is easier to read, and
> not likely to mislead. Does anyone know if and where this has been
> discussed?
>
Editors are paid by the ink.
I have never seen anything else than lines for such a plot. However, I once
was successful in using somewhat wider lines that looked barish. Which made
me believe that the "paid by the ink" is not too bad an analogy:
editors
fear graphics that look too empty, so give them their black. I am sure
Hadley will argue that this is the reason why ggplot2 by default has a gray
background.
Dieter
--
View this message in context:
http://www.nabble.com/Interaction-plots-as-lines-or-bars--tp23754973p23755909.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 133
Date: Thu, 28 May 2009 07:33:13 +0100 (BST)
From: (Ted Harding) <Ted.Harding@manchester.ac.uk>
Subject: Re: [R] question about using a remote system
To: R help <r-help@stat.math.ethz.ch>
Message-ID: <XFMail.090528073313.Ted.Harding@manchester.ac.uk>
Content-Type: text/plain; charset=iso-8859-1
On 28-May-09 00:58:17, Erin Hodgess wrote:
> Dear R People:
> I would like to set up a plug-in for Rcmdr to do the following:
>
> I would start on a Linux laptop. Then I would log into another
> outside system and run a some commands.
>
> Now, when I tried to do
> system("ssh erin@xxx.edu")
> password xxxxxx
>
> It goes to the remote system. how do I continue to issue commands
> from the Linux laptop please?
>
> (hope this makes sense)
> thanks,
> Erin
Hi Erin,
If I understand what you want, I think you may be asking a bit
too much from R itself.
When you, Erin, are interacting with your Linux laptop you can
switch between two (or more) different X windows or consoles,
on one of which you are logged in to the remote machine and on
which you issue commands to the remote machine, and can read
its output and, maybe, copy this to a terminal running commands
on your Linux laptop.
When you ask R to log in as you describe, you are tying that
instance of R to the login. I don't know of any resources within
R itself which can emulate your personal behaviour. Though maybe
others do know ...
However, there is in principle a work-round. You will need to
get your toolbox out and get to grips with "Unix plumbing";
and, to make it accessible to R, you will need to create a
"Unix plumber".
The basic component is the FIFO, or "named pipe". To create
one of these, use a Linux command like
mkfifo myFIFO1
Then 'ls -l' in the directory you are working in will show this
pipe as present with the name "myFIFO1". For example, if in one
terminal window [A] you start the command
cat myFIFO1
then in another [B] you
echo "Some text to test my FIFO" > myFIFO1
you will find that this text will be output in the [A] window
by the 'cat' command.
Similarly, you could 'mkfifo myFIFO2' and write to this while
in the [A] window, and read from it while in the [B] window.
So, if you can get R to write to myFIFO1 (which is being read
from by another program which will send the output to a remote
machine which that program is logged into), and read from myFIFO2
which is being written to by that other program (and that is
easy to do in R, using connections), then you have the plumbing
set up.
But, to set it up, R needs to call in the plumber. In other words,
R needs to execute a command like
system("MakeMyFIFOs")
where MakeMyFIFOs is a script that creates myFIFO1 and myFIFO2,
logs in to the remote machine, watches myFIFO1 and sends anything
it reads to the remote machine, watches for any feedback from
the remote machine and then writes this into myFIFO2.
Meanwhile, back in R, any time you want R to send something to
the remote machine you write() it (or similar) to myFIFO1, and
whenever you want to receive something from the remote machine
you readLines() it (or similar) from myFIFO2.
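A minimal sketch of the R side of that plumbing (Unix only; it assumes
myFIFO1 and myFIFO2 already exist and that the "plumber" script is bridging
them to the remote machine):
cmd <- fifo("myFIFO1", open = "w")    # commands go out through this pipe
writeLines("ls -l", cmd)
close(cmd)
reply <- fifo("myFIFO2", open = "r")  # replies come back through this one
res <- readLines(reply)
close(reply)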
That's an outline of a possible work-round. Maybe others have solved
your same problem in a better way (and maybe there's an R package
[[elided Yahoo spam]]
Ted.
--------------------------------------------------------------------
E-Mail: (Ted Harding) <Ted.Harding@manchester.ac.uk>
Fax-to-email: +44 (0)870 094 0861
Date: 28-May-09 Time: 07:33:09
------------------------------ XFMail ------------------------------
------------------------------
Message: 134
Date: Wed, 27 May 2009 23:52:12 -0700 (PDT)
From: Dieter Menne <dieter.menne@menne-biomed.de>
Subject: Re: [R] alternative to built-in data editor
To: r-help@r-project.org
Message-ID: <23756182.post@talk.nabble.com>
Content-Type: text/plain; charset=us-ascii
urlwolf wrote:
>
> I often have to peek at large data.
> While head and tail are convenient, at times I'd like some more
> comprehensive.
> I guess I debug better in a more visual way?
> I was wondering if there's a way to override the default data editor.
>
I have never seen the data editor. The radical alternative I use is to do
all data-peeking in an external database, e.g. Access or SQLServer Express.
Writing using RODBC is so easy that I write all intermediate results back to
the database for inspection and re-use in a later step.
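A minimal sketch of that round trip (the DSN "mydb", the data frame and the
table name are all hypothetical):
library(RODBC)
step1 <- data.frame(id = 1:3, value = rnorm(3))  # stand-in intermediate result
ch <- odbcConnect("mydb")
sqlSave(ch, step1, tablename = "step1", rownames = FALSE)
back <- sqlFetch(ch, "step1")                    # inspect / re-use later
odbcClose(ch)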
Dieter
--
View this message in context:
http://www.nabble.com/alternative-to-built-in-data-editor-tp23747080p23756182.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 135
Date: Thu, 28 May 2009 07:54:36 +0100
From: Mark Wardle <mark@wardle.org>
Subject: Re: [R] question about using a remote system
To: Erin Hodgess <erinm.hodgess@gmail.com>
Cc: R help <r-help@stat.math.ethz.ch>
Message-ID:
<b59a37130905272354l6f5b77basb9fcfef7a539c6dc@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Hi.
Do you need an interactive session at the remote machine, or are you
simply wanting to run a pre-written script?
If the latter, then you can ask ssh to execute a remote command, which
conceivably could be "R CMD xxxxx"
If you explain exactly why and what you are trying to do, then perhaps
there's a better solution....
bw
Mark
2009/5/28 Erin Hodgess <erinm.hodgess@gmail.com>:
> Dear R People:
>
> I would like to set up a plug-in for Rcmdr to do the following:
>
> I would start on a Linux laptop. Then I would log into another
> outside system and run a some commands.
>
> Now, when I tried to do
> system("ssh erin@xxx.edu")
> password xxxxxx
>
> It goes to the remote system. how do I continue to issue commands
> from the Linux laptop please?
>
> (hope this makes sense)
>
> thanks,
> Erin
>
>
> --
> Erin Hodgess
> Associate Professor
> Department of Computer and Mathematical Sciences
> University of Houston - Downtown
> mailto: erinm.hodgess@gmail.com
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
>
--
Dr. Mark Wardle
Specialist registrar, Neurology
Cardiff, UK
------------------------------
Message: 136
Date: Thu, 28 May 2009 09:07:29 +0200
From: Peter Dalgaard <p.dalgaard@biostat.ku.dk>
Subject: Re: [R] How do I get removed from this mailing list?
To: Andrey Lyalko <alyalko@gmail.com>
Cc: R-help@r-project.org
Message-ID: <4A1E3831.8040001@biostat.ku.dk>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Andrey Lyalko wrote:
> How do I get removed from this mailing list?
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
[[elided Yahoo spam]]
Specifically, bottom of page at
https://stat.ethz.ch/mailman/listinfo/r-help
--
O__ ---- Peter Dalgaard Øster Farimagsgade 5, Entr.B
c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
(*) \(*) -- University of Copenhagen Denmark Ph: (+45) 35327918
~~~~~~~~~~ - (p.dalgaard@biostat.ku.dk) FAX: (+45) 35327907
------------------------------
Message: 137
Date: Thu, 28 May 2009 12:50:46 +0530
From: "bogaso.christofer" <bogaso.christofer@gmail.com>
Subject: [R] "1L" and "0L"
To: <R-help@stat.math.ethz.ch>
Message-ID: <4a1e384c.16538c0a.14b3.4b79@mx.google.com>
Content-Type: text/plain
Hi,
Recently I came across these R expressions and understood that "1L" means
"1" and "0L" means "0". Why is that? I mean, what extra meaning do they
carry, instead of simply writing "1" or "0"? Are there any more expressions
of this kind in R?
Regards
[[alternative HTML version deleted]]
------------------------------
Message: 138
Date: Thu, 28 May 2009 08:13:14 +0100
From: Mark Wardle <mark@wardle.org>
Subject: Re: [R] Object-oriented programming in R
To: Luc Villandre <villandl@dms.umontreal.ca>
Cc: R Help <r-help@r-project.org>
Message-ID:
<b59a37130905280013q662410en578130f8d8fc00ca@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Hi. I remember considering these options myself but concluded that for
most analyses a strictly procedural approach was satisfactory.
Although I may re-run multiple analyses, the data manipulation (and
subsequent analysis - the former always more complex than the latter
IMHO) is fairly project- and data-specific. As such, quick and
specific code always seemed more appropriate than slow (to write) and
generic.
Obviously that doesn't apply to package creators and maintainers,
creating something that is going to re-used in many different projects
and can be made generic. That's the heuristic I now use - is this good
enough for other people's consumption - if so, create a package and
adopt some of the OO approaches seen in the base R packages.
Otherwise, stick to bespoke and specific (procedural) functions.
I've a small library of helper functions, but these don't use OO
usually. They sometimes make assumptions about data passed in, don't
have particularly robust error checking, but in general, work well. I
suppose it depends on what you're trying to achieve and how much time
you've got!
Interested in others' thoughts too, as clearly there's no "right answer"
here.
bw
Mark
2009/5/27 Luc Villandre
<villandl@dms.umontreal.ca>:
> Dear R-users,
>
> I have very recently started learning about object-oriented programming in
> R. I am far from being an expert in programming, although I do have an
> elementary C++ background.
>
> Please take a look at these lines of code.
>>
>> some.data = data.frame(V1 = 1:5, V2 = 6:10) ;
>> p.plot = ggplot(data=some.data,aes(x=V1, y=V2)) ;
>> class(p.plot) ;
>> [1] "ggplot"
>
> My understanding is that the object p.plot belongs to the "ggplot" class.
> However, a new class definition like
>>
>> setClass("AClass", representation(mFirst =
"numeric", mSecond = "ggplot"))
>> ;
>
> yields the warning
>>
>> Warning message:
>> In .completeClassSlots(ClassDef, where) :
>> undefined slot classes in definition of "AClass": mSecond(class "ggplot")
>
> The ggplot object is also a list :
>>
>> is.list(p.plot)
>> [1] TRUE
>
> So, I guess I could identify mSecond as being a list.
>
> However, I don't understand why "ggplot" is not considered a valid slot
> type. I thought setClass() was analogous to the class declaration in C++,
> but I guess I might be wrong. Would anyone care to provide additional
> explanations about this?
>
> I decided to explore object-oriented programming in R so that I could
> organize the output from my analysis in a more rigorous fashion and then
> define custom methods that would yield relevant output. However, I'm
> starting to wonder if this aspect is not better suited for package builders.
> R lists are already very powerful and convenient templates. Although it
> wouldn't be as elegant, I could define functions that would take lists
> outputted by the different steps of my analysis and do what I want with
> them. I'm wondering what the merits of both approaches in the context of R
> would be. If anyone has any thoughts about this, I'd be most glad to read
> them.
>
> Cheers,
> --
> *Luc Villandré*
> /Biostatistician
> McGill University Health Center -
> Montreal Children's Hospital Research Institute/
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
>
--
Dr. Mark Wardle
Specialist registrar, Neurology
Cardiff, UK
------------------------------
Message: 139
Date: Thu, 28 May 2009 09:20:46 +0200
From: Peter Dalgaard <p.dalgaard@biostat.ku.dk>
Subject: Re: [R] How do I get removed from this mailing list?
To: Duncan Murdoch <murdoch@stats.uwo.ca>
Cc: R-help@r-project.org
Message-ID: <4A1E3B4E.4080707@biostat.ku.dk>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Duncan Murdoch wrote:
> On 27/05/2009 8:25 PM, Andrey Lyalko wrote:
>> How do I get removed from this mailing list?
>>
>
> Like most lists nowadays, it gives the instructions in each message header:
>
> List-Unsubscribe: <https://stat.ethz.ch/mailman/options/r-help>,
> <mailto:r-help-request@r-project.org?subject=unsubscribe>
>
> Duncan Murdoch
Unfortunately, "user-friendly" mailers nowadays tends to hide such
information from users. E.g. if I set headers to All in Thunderbird I
get a window full of headers larger than the screen and no scrolling
capacity. So it ends with X-Mailman something and the List-Unsubscribe:
stuff is nowhere to be seen.
-p
--
O__ ---- Peter Dalgaard Øster Farimagsgade 5, Entr.B
c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
(*) \(*) -- University of Copenhagen Denmark Ph: (+45) 35327918
~~~~~~~~~~ - (p.dalgaard@biostat.ku.dk) FAX: (+45) 35327907
------------------------------
Message: 140
Date: Thu, 28 May 2009 09:33:27 +0200
From: Wacek Kusnierczyk <Waclaw.Marcin.Kusnierczyk@idi.ntnu.no>
Subject: Re: [R] How do I get removed from this mailing list?
To: Peter Dalgaard <p.dalgaard@biostat.ku.dk>
Cc: R-help@r-project.org, Duncan Murdoch <murdoch@stats.uwo.ca>
Message-ID: <4A1E3E47.5070308@idi.ntnu.no>
Content-Type: text/plain; charset=ISO-8859-1
Peter Dalgaard wrote:
> Duncan Murdoch wrote:
>> On 27/05/2009 8:25 PM, Andrey Lyalko wrote:
>>> How do I get removed from this mailing list?
>>>
>>
>> Like most lists nowadays, it gives the instructions in each message
>> header:
>>
>> List-Unsubscribe: <https://stat.ethz.ch/mailman/options/r-help>,
>> <mailto:r-help-request@r-project.org?subject=unsubscribe>
>>
>> Duncan Murdoch
>
> Unfortunately, "user-friendly" mailers nowadays tends to hide
such
> information from users. E.g. if I set headers to All in Thunderbird I
> get a window full of headers larger than the screen and no scrolling
> capacity. So it ends with X-Mailman something and the
> List-Unsubscribe: stuff is nowhere to be seen.
right, but there are (at least) three things you can do in thunderbird
when all headers are on:
- open a print preview -- there you have a scrollable content with all
headers;
- save the message and open it in a regular text editor;
- install a suitable plugin (header scroll [1] works fine for me)
the last one turns thunderbird a bit more user friendly.
vQ
[1] https://addons.mozilla.org/en-US/thunderbird/addon/1003
------------------------------
Message: 141
Date: Thu, 28 May 2009 09:42:39 +0200
From: Ivan Alves <papucho@me.com>
Subject: Re: [R] ggplot2 adding vertical line at a certain date
To: stephen sefick <ssefick@gmail.com>
Cc: "r-help@r-project.org" <r-help@r-project.org>
Message-ID: <80F5C2A2-1A14-4997-8B02-0BA84B157927@me.com>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
check out geom_vline
+ geom_vline(xintercept=as.numeric(as.Date("2002-11-01")))
[you may not need to convert the date to numeric in the most recent
ggplot2 version]
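In the OP's example below, that would be something like:
qplot(date, value, data = melt.updn, shape = variable) + geom_smooth() +
  geom_vline(xintercept = as.numeric(as.Date("2002-11-01")))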
On 27 May 2009, at 20:31, stephen sefick wrote:
> library(ggplot2)
>
> melt.updn <- (structure(list(date = structure(c(11808, 11869, 11961,
> 11992,
> 12084, 12173, 12265, 12418, 12600, 12631, 12753, 12996, 13057,
> 13149, 11808, 11869, 11961, 11992, 12084, 12173, 12265, 12418,
> 12600, 12631, 12753, 12996, 13057, 13149), class = "Date"),
> variable = structure(c(1L,
> 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L,
> 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L), .Label =
c("unrestored",
> "restored"), class = "factor"), value =
c(1..1080259671261,
> 0.732188576856918,
> 0.410334408061265, 0.458980396410056, 0.429867902470711,
> 0.83126337241925,
> 0.602008712602784, 0.818751283264408, 1.12606382402475,
> 0.246174719479079,
> 0.941043753226865, 0.986511619794787, 0.291074883642735,
> 0.346361775752625,
> 1.36209038621623, 0.878561166753624, 0.525156715576168,
> 0.80305564765846,
> 1.08084449441812, 1.24906568558731, 0.970954515841768,
> 0.936838439269239,
> 1.26970090246036, 0.337831520417547, 0.909204325710795,
> 0.951009811036613,
> 0.290735620653709, 0.426683515714219)), .Names = c("date",
"variable",
> "value"), row.names = c(NA, -28L), class =
"data.frame"))
>
> qplot(date, value, data=melt.updn, shape=variable)+geom_smooth()
>
> #I would like to add a line at November 1, 2002
> #thanks for the help
>
> --
> Stephen Sefick
>
> Let's not spend our time and resources thinking about things that are
> so little or so large that all they really do for us is puff us up and
> make us feel like gods. We are mammals, and have not exhausted the
> annoying little problems of being mammals.
>
> -K. Mullis
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 142
Date: Thu, 28 May 2009 07:50:54 +0000
From: "ms.com" <loginms@hotmail.com>
Subject: [R] optima in unimode
To: R Help <r-help@r-project.org>
Message-ID: <SNT106-W27B57E8E024818409B6BF9B2500@phx.gbl>
Content-Type: text/plain
Dear all
I could not estimate the optimum value or range value for a unimodal plot
from a glm. Please help me out.
thanking you
regard
madan
[[alternative HTML version deleted]]
------------------------------
Message: 143
Date: Thu, 28 May 2009 09:59:46 +0100
From: Gavin Simpson <gavin.simpson@ucl.ac.uk>
Subject: Re: [R] "1L" and "0L"
To: "bogaso.christofer" <bogaso.christofer@gmail.com>
Cc: R-help@stat.math.ethz.ch
Message-ID: <1243501186.3429.3.camel@localhost.localdomain>
Content-Type: text/plain
On Thu, 2009-05-28 at 12:50 +0530, bogaso.christofer
wrote:
> Hi,
>
> Recently I came across these R expressions and understood that "1L" means
> "1" and "0L" means "0". Why is that? I mean, what extra meaning do they
> carry, instead of simply writing "1" or "0"? Are there any more expressions
> of this kind in R?
>
> typeof(1)
[1] "double"> typeof(1L)
[1] "integer"
So the L notation ensures that the number is stored as an integer not a
double.
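For instance:
identical(1L, 1)   # FALSE -- integer vs double storage
1L == 1            # TRUE  -- but the values compare equal
is.integer(0:5)    # TRUE  -- the colon operator also returns integers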
HTH
G
--
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
Dr. Gavin Simpson [t] +44 (0)20 7679 0522
ECRC, UCL Geography, [f] +44 (0)20 7679 0565
Pearson Building, [e] gavin.simpsonATNOSPAMucl.ac.uk
Gower Street, London [w] http://www.ucl.ac.uk/~ucfagls/
UK. WC1E 6BT. [w] http://www.freshwaters.org.uk
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
------------------------------
Message: 144
Date: Thu, 28 May 2009 10:03:47 +0100
From: Gavin Simpson <gavin.simpson@ucl.ac.uk>
Subject: Re: [R] How do I get removed from this mailing list?
To: Wacek Kusnierczyk <Waclaw.Marcin.Kusnierczyk@idi.ntnu.no>
Cc: R-help@r-project.org, Duncan Murdoch <murdoch@stats.uwo.ca>, Peter
Dalgaard <p.dalgaard@biostat.ku.dk>
Message-ID: <1243501427.3429.5.camel@localhost.localdomain>
Content-Type: text/plain
On Thu, 2009-05-28 at 09:33 +0200, Wacek Kusnierczyk
wrote:
> Peter Dalgaard wrote:
> > Duncan Murdoch wrote:
> >> On 27/05/2009 8:25 PM, Andrey Lyalko wrote:
> >>> How do I get removed from this mailing list?
> >>>
> >>
> >> Like most lists nowadays, it gives the instructions in each
message
> >> header:
> >>
> >> List-Unsubscribe:
<https://stat.ethz.ch/mailman/options/r-help>,
> >> <mailto:r-help-request@r-project.org?subject=unsubscribe>
> >>
> >> Duncan Murdoch
> >
> > Unfortunately, "user-friendly" mailers nowadays tends to hide such
> > information from users. E.g. if I set headers to All in Thunderbird I
> > get a window full of headers larger than the screen and no scrolling
> > capacity. So it ends with X-Mailman something and the
> > List-Unsubscribe: stuff is nowhere to be seen.
>
> right, but there are (at least) three things you can do in thunderbird
> when all headers are on:
The "View message source" would be a more direct way of viewing the
full
email, not just the bits TBird shows you in the preview pane. I forget
how it is named exactly and under which menu it is found as it has been
quite a while since I used TBird.
G
>
> - open a print preview -- there you have a scrollable content with all
> headers;
> - save the message and open it in a regular text editor;
> - install a suitable plugin (header scroll [1] works fine for me)
>
> the last one turns thunderbird a bit more user friendly.
>
> vQ
>
> [1] https://addons.mozilla.org/en-US/thunderbird/addon/1003
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
--
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
Dr. Gavin Simpson [t] +44 (0)20 7679 0522
ECRC, UCL Geography, [f] +44 (0)20 7679 0565
Pearson Building, [e] gavin.simpsonATNOSPAMucl.ac.uk
Gower Street, London [w] http://www.ucl.ac.uk/~ucfagls/
UK. WC1E 6BT. [w] http://www.freshwaters.org.uk
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
------------------------------
Message: 145
Date: Thu, 28 May 2009 10:27:35 +0100
From: Gavin Simpson <gavin.simpson@ucl.ac.uk>
Subject: Re: [R] optima in unimode
To: "ms.com" <loginms@hotmail..com>
Cc: R Help <r-help@r-project.org>
Message-ID: <1243502855.3429.25.camel@localhost.localdomain>
Content-Type: text/plain
On Thu, 2009-05-28 at 07:50 +0000, ms.com wrote:
> Dear all
> i could not estimate the optima value or range value in unimodal plot in glm
> please help me out
>
> thanking you
>
> regard
> madan
You're going to have to give us more to go on than that, but...
Are you an ecologist and you want to estimate the optima and tolerance
range from a fitted logit model? If so, here is an example:
## install.packages("analogue") if not installed
require(analogue)
data(ImbrieKipp, SumSST)
## fit a model
mod <- glm(G.pacR ~ SumSST + I(SumSST^2), data = ImbrieKipp/100,
family = binomial)
## plot the data and add the fitted curve
## (an initial plot() call appears to be needed before lines(); assumed step)
plot(G.pacR ~ SumSST, data = ImbrieKipp/100)
pdat <- data.frame(SumSST = seq(min(SumSST), max(SumSST), length = 100))
p2 <- predict(mod, pdat, type = "response")
lines(p2 ~ SumSST, data = pdat, col = "blue", lwd = 3)
## model coefficients
c.mod <- coef(mod)
## optima
-c.mod[2]/(2 * c.mod[3])
## tolerance
sqrt(-(1/(2 * c.mod[3])))
## height == abundance at optimum
exp(c.mod[1] - (c.mod[2]^2/(4 * c.mod[3])))
HTH
G
--
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
Dr. Gavin Simpson [t] +44 (0)20 7679 0522
ECRC, UCL Geography, [f] +44 (0)20 7679 0565
Pearson Building, [e] gavin.simpsonATNOSPAMucl.ac.uk
Gower Street, London [w] http://www.ucl.ac.uk/~ucfagls/
UK. WC1E 6BT. [w] http://www.freshwaters.org.uk
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
------------------------------
Message: 146
Date: Thu, 28 May 2009 14:02:52 +0530
From: anupam sinha <anupam.contact@gmail.com>
Subject: Re: [R] Unable to load R
To: Zeljko Vrba <zvrba@ifi.uio.no>
Cc: r-help@r-project.org
Message-ID:
<82ec54570905280132g33c5e5doba37ea698f76e939@mail.gmail.com>
Content-Type: text/plain
Actually I did a search using "locate" but could not find the file.
But when
I do "yum install readline (package containing the dependency
libreadline.so.5) " I get the following error:
yum install readline
Loading "security" plugin
Loading "rhnplugin" plugin
rhel-x86_64-client-5 100% |=========================| 1.3 kB
00:00
Setting up Install Process
Parsing package install arguments
Package readline - 5.1-1.1.x86_64 is already installed.
Package readline - 5.1-1.1.i386 is already installed.
Nothing to do
Plzz help me out.
Cheers,
Anupam Sinha
On Thu, May 28, 2009 at 11:54 AM, Zeljko Vrba <zvrba@ifi.uio.no> wrote:
> On Thu, May 28, 2009 at 11:42:41AM +0530, anupam sinha wrote:
> >
> > I have checked for the presence of the above mention library and found
> that
> > the library is present. I have run out of ideas. Can anyone help me
> out???
> > I will be greatly indebted.
> >
> First: how did you check that the library is present?
>
> It might be that you have installed 32-bit R on 64-bit RHEL or vice-versa,
> so
> you need to install the requisite libraries in appropriate
"bitness".
>
>
[[alternative HTML version deleted]]
------------------------------
Message: 147
Date: Thu, 28 May 2009 10:59:10 +0200
From: Zeljko Vrba <zvrba@ifi.uio.no>
Subject: Re: [R] Unable to load R
To: anupam sinha <anupam.contact@gmail.com>
Cc: r-help@r-project.org
Message-ID: <20090528085910.GG1197@anakin.ifi.uio.no>
Content-Type: text/plain; charset=us-ascii
On Thu, May 28, 2009 at 02:02:52PM +0530, anupam sinha
wrote:
>
> Actually I did a search using "locate" but could not find the
file. But when
>
Locate reports useful results only if its database is up-to-date. Try running
one of
ldd /usr/lib/R/bin/exec/R
ldd /usr/lib64/R/bin/exec/R
and see what it reports. Maybe R is linked to another readline version than
is installed on the system.
> I do "yum install readline (package containing the dependency
> libreadline.so.5) " I get the following error:
>
>
> yum install readline
> Loading "security" plugin
> Loading "rhnplugin" plugin
> rhel-x86_64-client-5 100% |=========================| 1.3 kB
> 00:00
> Setting up Install Process
> Parsing package install arguments
> Package readline - 5.1-1.1.x86_64 is already installed.
> Package readline - 5.1-1.1.i386 is already installed.
> Nothing to do
>
Otherwise, you could also ask on a mailing list related to your distribution.
>
> Plzz help me out.
>
And learn how to spell "please".
------------------------------
Message: 148
Date: Thu, 28 May 2009 10:31:11 +0100
From: Richard.Cotton@hsl.gov.uk
Subject: Re: [R] R help
To: mohsin ali <ali.mohsin@hotmail.com>
Cc: r-help@r-project.org, r-help-bounces@r-project.org
Message-ID:
<OF611428B6.BA4F06A3-ON802575C4.0032D09E-802575C4.00344CAC@hsl.gov.uk>
Content-Type: text/plain; charset="US-ASCII"
> I am interested in modeling hydrological extreme events. I found the
> MSClaio2008 function very interesting. It offers four criteria for
> choosing distributions. Can we call these criteria model selection
> techniques or goodness-of-fit techniques or both? Goodness-of-fit
> techniques are usually performed after model selection.
What is MSClaio2008? I don't quite understand this bit. If you provide
code examples, then questions are often easier to answer.
> Can I find chi-square, Kolmogorov-Smirnov and Cramér-von Mises tests
> for testing the goodness of fit of proposed distributions?
Yes, you can do these things in R. The functions you want are:
chisq.test for the chi-square test
ks.test for the Kolmogorov-Smirnov test.
You can view the help pages for these functions by typing a question mark
before the function name, e.g. ?chisq.test.
For the Cramer-von Mises tests, I didn't know, so I typed
RSiteSearch("cramer von mises test"), from which there are several
suggestions. In particular, look at the CvM2SL1Test and CvM2SL2Test
packages.
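As a quick illustration with made-up data (note that the standard K-S p-value
assumes the distribution's parameters were not estimated from the same data):
x <- rnorm(100, mean = 2, sd = 1.5)              # hypothetical sample
ks.test(x, "pnorm", mean = mean(x), sd = sd(x))  # compare against fitted normal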
Regards,
Richie.
Mathematical Sciences Unit
HSL
------------------------------------------------------------------------
ATTENTION:
This message contains privileged and confidential inform...{{dropped:20}}
------------------------------
Message: 149
Date: Thu, 28 May 2009 02:42:38 -0700 (PDT)
Subject: [R] Delta Kronecker
To: r-help@r-project.org
Message-ID: <310075.84848.qm@web38305.mail.mud.yahoo.com>
Content-Type: text/plain
Hi,
Could someone give some ideas on how to compute the spatiotemporal covariance
matrix by a sum of Kronecker products in R. Is there any special function that
can be used?
Cheers.
Firdaus
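As a pointer: base R's kronecker() (also available as the %x% operator) builds
Kronecker products, and a sum of them is ordinary matrix addition. A toy
sketch with made-up spatial and temporal pieces:
S_space <- matrix(c(1, 0.5, 0.5, 1), 2, 2)   # hypothetical spatial covariance
S_time  <- diag(3)                           # hypothetical temporal covariance
Sigma <- kronecker(S_space, S_time) + kronecker(diag(2), 0.1 * diag(3))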
[[alternative HTML version deleted]]
------------------------------
_______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
End of R-help Digest, Vol 75, Issue 28
**************************************
[[alternative HTML version deleted]]