Dear Sir, how do I fit bilinear time series models in R? Are there any
functions or packages for this? Thank you!
-----Sincerely yours,
Kuangnan Fang (方匡南)
Department of Statistics, School of Economics, Xiamen University
Fujian Province (361005), China
Mobile phone: 15860721915  SKYPE: ruiqwy
MSN Messenger: ruiqwy@hotmail.com
QQ: 39863401
--- On Tue, 31 Mar 2009, r-help-request@r-project.org
<r-help-request@r-project.org> wrote:
From: r-help-request@r-project.org <r-help-request@r-project.org>
Subject: R-help Digest, Vol 73, Issue 32
To: r-help@r-project.org
Date: Tue, 31 Mar 2009, 6:00 PM
Send R-help mailing list submissions to
r-help@r-project.org
To subscribe or unsubscribe via the World Wide Web, visit
https://stat.ethz.ch/mailman/listinfo/r-help
or, via email, send a message with subject or body 'help' to
r-help-request@r-project.org
You can reach the person managing the list at
r-help-owner@r-project.org
When replying, please edit your Subject line so it is more specific
than "Re: Contents of R-help digest..."
Today's Topics:
1. what is R equivalent of Fortran DOUBLE PRECISION ?
(mauede@alice.it)
2. Re: Column name assignment problem (Steve Murray)
3. Re: interpreting "not defined because of singularities" in lm
(Duncan Murdoch)
4. Re: Constrined dependent optimization. (Paul Smith)
5. Re: which rows are duplicates? (Michael Dewey)
6. PLS package loading error! (mienad)
7. (no subject) (ankhee dutta)
8. Re: how to input multiple .txt files (Mike Lawrence)
9. Wrong path to user defined library for the R Help Files
(Breitbach, Nils)
10. (no subject) (ankhee dutta)
11. Re: how to input multiple .txt files (Mike Lawrence)
12. Burt table from word frequency list (Alan Zaslavsky)
13. Re: Constrined dependent optimization. (rkevinburton@charter.net)
14. Re: Column name assignment problem (Steve Murray)
15. Re: which rows are duplicates? (Wacek Kusnierczyk)
16. Re: Constrined dependent optimization. (Paul Smith)
17. Re: how to input multiple .txt files (baptiste auguie)
18. Re: Column name assignment problem (Peter Dalgaard)
19. Re: Constrined dependent optimization. (Ben Bolker)
20. Sliding window over irregular intervals (Irene Gallego Romero)
21. Re: Constrined dependent optimization. (Hans W. Borchers)
22. how does stop() interfere with on.exit()? (Wacek Kusnierczyk)
23. Re: nls, convergence and starting values (Christian Ritz)
24. Re: Burt table from word frequency list (Joan-Josep Vallbé)
25. Re: PLS package loading error! (James W. MacDonald)
26. Re: Constrined dependent optimization. (Paul Smith)
27. [OT] Contacting "Introductory Statistics for Engineering
Experimentation" authors (Douglas Bates)
28. Add missing values/timestamps (j.k)
29. Re: Sliding window over irregular intervals (David Winsemius)
30. Re: Sliding window over irregular intervals (Michael Lawrence)
31. List assignment in a while loop and timing (Saptarshi Guha)
32. Re: Matrix max by row (Bert Gunter)
33. Re: (no subject) (milton ruser)
34. Re: [OT] Contacting "Introductory Statistics for
EngineeringExperimentation" authors (Gaj Vidmar)
35. Excellent Talk on Statistics (Good examples of stat.
visualization) (Ken-JP)
36. Re: how to input multiple .txt files (Mike Lawrence)
37. Re: Column name assignment problem (Steve Murray)
38. Re: Mature SOAP Interface for R (Michael Lawrence)
39. Re: unicode only works with a second one (Greg Snow)
40. Re: Constrined dependent optimization. (Paul Smith)
41. Nonparametric analysis of repeated measurements data with sm
library (Alphonse Monkamg)
42. HELP WITH SEM LIBRARY AND WITH THE MODEL'S SPECIFICATION
(Analisi Dati)
43. NY City Conf for Enthusiastic Users of R, June 18-19, 2009
(HRISHIKESH D. VINOD)
44. Importing csv file with character values into sqlite3 and
subsequent problem in R / RSQLite (Stephan Lindner)
45. pgmm (Blundell-Bond) sample needed) (Millo Giovanni)
46. 64 bit compiled version of R on windows (Vadlamani, Satish {FLNA})
47. Re: how to input multiple .txt files (hadley wickham)
48. Re: HELP WITH SEM LIBRARY AND WITH THE MODEL'S SPECIFICATION
(John Fox)
49. Re: 64 bit compiled version of R on windows (Duncan Murdoch)
50. ggplot2-geom_text() (Felipe Carrillo)
51. Re: Constrined dependent optimization. (Paul Smith)
52. Re: Importing csv file with character values into sqlite3 and
subsequent problem in R / RSQLite (Gabor Grothendieck)
53. Re: Mature SOAP Interface for R (Tobias Verbeke)
54. circular analysis (Blanka Vlasakova)
55. Calculating First Occurance by a factor (jwg20)
56. Re: Calculating First Occurance by a factor (Dimitris Rizopoulos)
57. Re: Calculating First Occurance by a factor (Mike Lawrence)
58. Re: Calculating First Occurance by a factor (Jason Gullifer)
59. Re: Matrix max by row (Wacek Kusnierczyk)
60. Re: cmprsk- another survival-depedent package causes R crash
(Terry Therneau)
61. Re: Calculating First Occurance by a factor (hadley wickham)
62. Re: cmprsk- another survival-depedent package causes R crash
(Nguyen Dinh Nguyen)
63. Re: ggplot2-geom_text() (Felipe Carrillo)
64. Kruskal-Wallis-test: Posthoc-test? (Rabea Sutter)
65. use R Group SFBA April meeting reminder; video of Feb
kickoff (Jim Porzak)
66. Re: Darker markers for symbols in lattice (Deepayan Sarkar)
67. Help with tm assocation analysis and Rgraphviz installation.
(xinrong lei)
68. Re: Darker markers for symbols in lattice (Naomi B. Robbins)
69. Re: Comparing Points on Two Regression Lines (John Fox)
70. Re: use R Group SFBA April meeting reminder; video of Feb k
( (Ted Harding))
71. two monitors (Veerappa Chetty)
72. Re: ggplot2-geom_text() (Paul Murrell)
73. Mapping in R (Kelsey Scheitlin)
74. Comparing Points on Two Regression Lines
(AbouEl-Makarim Aboueissa)
75. Re: use R Group SFBA April meeting reminder; video of Feb k
(Sundar Dorai-Raj)
76. advice for alternative to barchart (kerfuffle)
77. Re: use R Group SFBA April meeting reminder; video of Feb k
(Jim Porzak)
78. Re: two monitors (Daniel Viar)
79. Can I read a file into my workspace from Rprofile.site?
(Elaine Jones)
80. Re: Can I read a file into my workspace from Rprofile.site?
(Duncan Murdoch)
81. Re: use R Group SFBA April meeting reminder; video of Feb k
( (Ted Harding))
82. Re: use R Group SFBA April meeting reminder; video of Feb k
(Rolf Turner)
83. RMySQL compile (stenka1@go.com)
84. Re: advice for alternative to barchart ( (Ted Harding))
85. To save Trellis Plots on A3 size paper (Portrait and
Landscape) (Debabrata Midya)
86. Re: two monitors (Felipe Carrillo)
87. Re: what is R equivalent of Fortran DOUBLE PRECISION ?
(Steven McKinney)
88. How to generate natural cubic spline in R? (minben)
89. Re: How to generate natural cubic spline in R? (David Winsemius)
90. Re: Binning (Gad Abraham)
91. Re: How to get commands history as a character vector instead
of displaying them? (Yihui Xie)
92. Convert Character to Date (Bob Roberts)
93. Re: Convert Character to Date (Bill.Venables@csiro.au)
94. Re: Convert Character to Date (Gabor Grothendieck)
95. Package candisc (MarcioRibeiro)
96. [R-pkgs] data.table is on CRAN (enhanced data.frame for time
series joins and more) (Matthew Dowle)
97. Convert date to integer (thoeb)
98. summarize logical string (dbajic@cnb.csic.es)
99. Re: summarize logical string (Dimitris Rizopoulos)
100. Efficient calculation of partial correlations in R
(Schragi Schwartz)
101. how to increase the limit for max.print in R (pooja arora)
102. Row/columns names within 'assign' command (Steve Murray)
103. Re: summarize logical string (dbajic@cnb.csic.es)
104. Does R support double-exponential smoothing? (minben)
105. Re: how to increase the limit for max.print in R
(Bernardo Rangel Tura)
106. Re: Convert date to integer (Dieter Menne)
107. Re: To save Trellis Plots on A3 size paper (Portrait and
Landscape) (Dieter Menne)
108. Re: To save Trellis Plots on A3 size paper (Portrait and
Landscape) (Dieter Menne)
----------------------------------------------------------------------
Message: 1
Date: Mon, 30 Mar 2009 12:07:15 +0200
From: <mauede@alice.it>
Subject: [R] what is R equivalent of Fortran DOUBLE PRECISION ?
To: <r-help@r-project.org>
Cc: "John C. Nash" <nashjc@uottawa.ca>
Message-ID:
<6B32C438581E5D4C8A34C377C3B334A401752A3E@FBCMST11V04.fbc.local>
Content-Type: text/plain
I noticed that R cannot parse certain Fortran real-constant formats. For
instance:
c14 <- as.double( 7.785205408500864D-02)
Error: unexpected symbol in " c14 <- as.double(
7.785205408500864D"
The above "D" is used in the Fortran language to indicate the memory
storage mode, that is, to instruct the Fortran compiler
to store such a REAL constant in DOUBLE PRECISION... am I right?
Since R cannot understand numerical constants post-fixed with the letter
"D", I wonder how I can instruct the R interpreter to
store such a numerical constant, reserving as much memory as necessary to
accommodate a double-precision number.
I noticed R accepts the following syntax, but I do not know whether I have
achieved my goal this way:
> c14 <- as.double( 7.785205408500864E-02)
> typeof(c14)
[1] "double"
My questions are: what is the best precision I can get with R when dealing with
real numbers?
Is R's "double" type equivalent to Fortran DOUBLE PRECISION for internal
number representation?
Thank you very much.
Maura
[[alternative HTML version deleted]]
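A minimal sketch of the point in question (not from the original message): R
writes double-precision literals with "E"/"e" rather than Fortran's "D", and
every R numeric is a 64-bit IEEE 754 double, which matches Fortran DOUBLE
PRECISION:

```r
# R parses "E"/"e" exponents; Fortran's "D" marker is a syntax error.
# All R numerics are stored as 64-bit IEEE 754 doubles, the same
# representation as Fortran DOUBLE PRECISION.
c14 <- 7.785205408500864e-02   # as.double() is redundant here
typeof(c14)                    # "double"
.Machine$double.digits         # 53 significand bits
.Machine$double.eps            # machine epsilon, about 2.22e-16
```

So roughly 15-16 significant decimal digits is the best precision available
for ordinary R numerics.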
------------------------------
Message: 2
Date: Mon, 30 Mar 2009 10:19:49 +0000
From: Steve Murray <smurray444@hotmail.com>
Subject: Re: [R] Column name assignment problem
To: <jholtman@gmail.com>
Cc: r-help@r-project.org
Message-ID: <BAY135-W49BE0AE37BF4C6B3399432888D0@phx.gbl>
Content-Type: text/plain; charset="iso-8859-1"
Jim and all,
Thanks - I managed to get it working based on your helpful advice.
I'm now trying to do something very similar, which simply involves changing
the names of the variables in column 1 to make them more succinct. I'm
trying to do this via the 'levels' command, as I figured I might be
able to apply the character strings in a similar way to what you recommended
for 'colnames'.
# Refine names of rivers to make more succinct
riv_names <- get(paste("arunoff_",table_year,
sep=''))[,1]
levels(riv_names) <- c("AMAZON", "AMUR",
"CONGO", "LENA", "MISSISSIPPI", "NIGER",
"NILE", "OB", "PARANA", "YANGTZE",
"YENISEI", "ZAMBEZI")
assign(get(paste("arunoff_",table_year, sep='')[,1],
levels(riv_names)))
Error in paste("arunoff_", table_year, sep = "")[, 1] :
incorrect number of dimensions
My thinking was to assign the levels of riv_names to column 1 of the table...
Many thanks again for any advice offered,
Steve
------------------------------
Message: 3
Date: Mon, 30 Mar 2009 06:40:07 -0400
From: Duncan Murdoch <murdoch@stats.uwo.ca>
Subject: Re: [R] interpreting "not defined because of singularities"
in lm
To: jiblerize22@yahoo.com
Cc: r-help@r-project.org
Message-ID: <49D0A187.7030701@stats.uwo.ca>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
jiblerize22@yahoo.com wrote:
> I run lm to fit an OLS model where one of the covariates is a factor with
30 levels. I use contr.treatment() to set the base level of the factor, so when
I run lm() no coefficients are estimated for that level. But in addition (and
regardless of which level I choose to be the base), lm also gives a vector of NA
coefficients for another level of my factor.
>
> The output says that these coefficients were "not defined because of
singularities", suggesting maybe that the 28 estimated coefficients are
sufficient to pin down the 29th... but why is this the case? Why am I going from
30 levels to 28 coefficients? Am I misunderstanding the way factors/levels are
supposed to work?
The usual cause of this is that one of the levels is not present in the
data set. Another possibility is collinearity with some other covariate
in your model.
Duncan Murdoch
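Both causes can be reproduced in a small sketch (hypothetical data, not from
the original post):

```r
# An unused factor level and a perfectly collinear covariate both
# produce NA coefficients reported by summary(fit) as
# "not defined because of singularities".
set.seed(1)
d <- data.frame(y  = rnorm(6),
                g  = factor(rep(c("a", "b", "c"), each = 2),
                            levels = c("a", "b", "c", "d")),  # "d" never occurs
                x1 = 1:6)
d$x2 <- 2 * d$x1                 # collinear with x1
fit <- lm(y ~ g + x1 + x2, data = d)
coef(fit)                        # coefficients for gd and x2 are NA
```

Dropping the empty level (droplevels) or the redundant covariate removes the
NAs.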
------------------------------
Message: 4
Date: Mon, 30 Mar 2009 11:41:18 +0100
From: Paul Smith <phhs80@gmail.com>
Subject: Re: [R] Constrined dependent optimization.
To: r-help@r-project.org
Message-ID:
<6ade6f6c0903300341p662c91a2tc51085cbf5d61543@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
On Sun, Mar 29, 2009 at 9:45 PM, <rkevinburton@charter.net>
wrote:
> I have an optimization question that I was hoping to get some suggestions
on how best to go about solving it. I would think there is probably a package
that addresses this problem.
>
> This is an ordering optimization problem. Best to describe it with a simple
example. Say I have 100 "bins", each with a ball in it numbered from 1
to 100. Each bin can only hold one ball. The optimization is that I have a
function 'f' that takes this array of bins and returns a number. The number
returned from f(1,2,3,4,...) would differ from the number returned by
f(2,1,3,4,...). The optimization is finding the order of these balls that
produces the minimum value of 'f'. I cannot use the regular
'optim' algorithms because a) the values are discrete, b) the values
are dependent, i.e. when the "variable" representing a bin location is
changed (in this example a new ball is put there) the existing ball needs to
be moved to another bin (probably swapping positions), and c) each
"variable" is constrained: in the example above the only allowable
values are integers from 1-100. So the problem becomes finding the optimum
order of the "balls".
>
> Any suggestions?
If your function f is linear, then you can use lpSolve.
Paul
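For a non-linear f, one hedged alternative (not mentioned in the thread) is
simulated annealing over permutations: optim(method = "SANN") accepts a
custom candidate generator through its gr argument, much like the
travelling-salesman example in ?optim. The objective below is a made-up
placeholder, not the poster's real f:

```r
# Hypothetical objective: smaller when large balls sit in low-numbered bins.
f <- function(ord) sum(ord * seq_along(ord))

# Candidate generator: swap the balls in two randomly chosen bins,
# which keeps every candidate a valid permutation.
swap <- function(ord, ...) {
  i <- sample(length(ord), 2)
  ord[i] <- ord[rev(i)]
  ord
}

set.seed(42)
res <- optim(par = sample(10), fn = f, gr = swap, method = "SANN",
             control = list(maxit = 20000, temp = 10))
res$par   # a permutation of 1:10 with (near-)minimal f
```

No global optimum is guaranteed, but the permutation constraint is respected
by construction.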
------------------------------
Message: 5
Date: Mon, 30 Mar 2009 11:51:29 +0100
From: Michael Dewey <info@aghmed.fsnet.co.uk>
Subject: Re: [R] which rows are duplicates?
To: "Aaron M. Swoboda" <aaron.swoboda@gmail.com>,
r-help@r-project.org
Message-ID: <Zen-1LoF5b-0003BD-Rf@smarthost01.mail.zen.net.uk>
Content-Type: text/plain; charset="us-ascii"; format=flowed
At 05:07 30/03/2009, Aaron M. Swoboda wrote:
>I would like to know which rows are duplicates of each other, not
>simply that a row is duplicate of another row. In the following
>example rows 1 and 3 are duplicates.
>
> > x <- c(1,3,1)
> > y <- c(2,4,2)
> > z <- c(3,4,3)
> > data <- data.frame(x,y,z)
> x y z
>1 1 2 3
>2 3 4 4
>3 1 2 3
Does this do what you want?
> x <- c(1,3,1)
> y <- c(2,4,2)
> z <- c(3,4,3)
> data <- data.frame(x,y,z)
> data.u <- unique(data)
> data.u
x y z
1 1 2 3
2 3 4 4
> data.u <- cbind(data.u, set = 1:nrow(data.u))
> merge(data, data.u)
x y z set
1 1 2 3 1
2 1 2 3 1
3 3 4 4 2
You need to do a bit more work to get them back into the original row
order if that is essential.
>I can't figure out how to get R to tell me that observation 1 and 3
>are the same. It seems like the "duplicated" and
"unique" functions
>should be able to help me out, but I am stumped.
>
>For instance, if I use "duplicated" ...
>
> > duplicated(data)
>[1] FALSE FALSE TRUE
>
>it tells me that row 3 is a duplicate, but not which row it matches.
>How do I figure out WHICH row it matches?
>
>And If I use "unique"...
>
> > unique(data)
> x y z
>1 1 2 3
>2 3 4 4
>
>I see that rows 1 and 2 are unique, leaving me to infer that row 3 was
>a duplicate, but again it doesn't tell me which row it was a duplicate
>of (as far as I can tell). Am I missing something?
>
>How can I determine that row 3 is a duplicate OF ROW 1?
>
>Thanks,
>
>Aaron
>
>
Michael Dewey
http://www.aghmed.fsnet.co.uk
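One further sketch (same toy data as above): collapsing each row to a key
string and using match() points every row at its first occurrence, which
answers "row 3 duplicates row 1" directly:

```r
x <- c(1, 3, 1)
y <- c(2, 4, 2)
z <- c(3, 4, 3)
data <- data.frame(x, y, z)

key <- do.call(paste, data)  # one string per row, e.g. "1 2 3"
match(key, key)              # 1 2 1: row 3 first occurs at row 1
```

Rows sharing the same index form a duplicate group, and the original row
order is preserved without a merge.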
------------------------------
Message: 6
Date: Mon, 30 Mar 2009 03:12:47 -0700 (PDT)
From: mienad <mienad@gmail.com>
Subject: [R] PLS package loading error!
To: r-help@r-project.org
Message-ID: <22780027.post@talk.nabble.com>
Content-Type: text/plain; charset=UTF-8
Hi,
I am using R version 2.8.1 on Windows with RGui. I have installed the latest
version of the pls package (2.1-0). When I try to load this package with the
library(pls) command, the following error message appears:
Erreur dans library(pls) :
'pls' n'est pas un package valide -- a-t-il été installé < 2.0.0 ?
(That is: 'pls' is not a valid package -- was it installed before 2.0.0?)
Could you please help me to solve this problem?
Regards
Damien
--
View this message in context:
http://www.nabble.com/PLS-package-loading-error%21-tp22780027p22780027.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 7
Date: Mon, 30 Mar 2009 11:13:01 +0000
From: ankhee dutta <ankheedutta@gmail.com>
Subject: [R] (no subject)
To: r-help@r-project.org
Message-ID:
<f859ebf70903300413g657c6af4m977e1d634c0c164f@mail.gmail.com>
Content-Type: text/plain
Hi, All
I have a Linux system (Mandriva 2007) with R version 2.3.0 and MySQL
5.0.0. I also have the DBI R database interface (version 0.1-11) installed on
my Linux system. While installing the RMySQL package (version 0.5-11) I am
facing the problem mentioned below.
* Installing *source* package 'RMySQL' ...
creating cache ./config.cache
checking how to run the C preprocessor... cc -E
checking for compress in -lz... yes
checking for getopt_long in -lc... yes
checking for mysql_init in -lmysqlclient... no
checking for mysql.h... no
checking for mysql_init in -lmysqlclient... no
checking for mysql_init in -lmysqlclient... no
checking for mysql_init in -lmysqlclient... no
checking for mysql_init in -lmysqlclient... no
checking for mysql_init in -lmysqlclient... no
checking for /usr/local/include/mysql/mysql.h... no
checking for /usr/include/mysql/mysql.h... no
checking for /usr/local/mysql/include/
mysql/mysql.h... no
checking for /opt/include/mysql/mysql.h... no
checking for /include/mysql/mysql.h... no
Configuration error:
could not find the MySQL installation include and/or library
directories. Manually specify the location of the MySQL
libraries and the header files and re-run R CMD INSTALL.
INSTRUCTIONS:
1. Define and export the 2 shell variables PKG_CPPFLAGS and
PKG_LIBS to include the directory for header files (*.h)
and libraries, for example (using Bourne shell syntax):
export PKG_CPPFLAGS="-I<MySQL-include-dir>"
export PKG_LIBS="-L<MySQL-lib-dir> -lmysqlclient"
Re-run the R INSTALL command:
R CMD INSTALL RMySQL_<version>.tar.gz
2. Alternatively, you may pass the configure arguments
--with-mysql-dir=<base-dir> (distribution directory)
or
--with-mysql-inc=<base-inc> (where MySQL header files reside)
--with-mysql-lib=<base-lib> (where MySQL libraries reside)
in the call to R INSTALL --configure-args='...'
R CMD INSTALL --configure-args='--with-mysql-dir=DIR'
RMySQL_<version>.tar.gz
ERROR: configuration failed for package 'RMySQL'
** Removing '/usr/lib/R/library/RMySQL'
Any help will be great.
Thank you in advance.
--
Ankhee Dutta
project trainee,
JNU,New Delhi-67
[[alternative HTML version deleted]]
------------------------------
Message: 8
Date: Mon, 30 Mar 2009 08:55:03 -0300
From: Mike Lawrence <Mike.Lawrence@dal.ca>
Subject: Re: [R] how to input multiple .txt files
To: Qianfeng Li <qflichem@yahoo.com>
Cc: r-help@r-project.org
Message-ID:
<37fda5350903300455y4fca4f73h792cd88e375e4468@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
my_files =
list.files(path=path_to_my_files,pattern='\\.txt$',full.names=TRUE)
a=NULL
for(this_file in my_files){
a=rbind(a,read.table(this_file))
}
write.table(a,my_new_file_name)
On Sun, Mar 29, 2009 at 10:37 PM, Qianfeng Li <qflichem@yahoo.com>
wrote:
>
>
> how to input multiple .txt files?
>
> A data folder has lots of .txt files from different customers.
>
> Want to read all these .txt files to different master files:
>
> such as:
>
> cust1.xx.txt, cust1.xxx.txt, cust1.xxxx.txt, .............. to master
file: X.txt
>
> cust2.xx.txt, cust2.xxx.txt, cust2.xxxx.txt, .............. to master
file: Y.txt
>
>
> Thanks!
>
>
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Mike Lawrence
Graduate Student
Department of Psychology
Dalhousie University
Looking to arrange a meeting? Check my public calendar:
http://tinyurl.com/mikes-public-calendar
~ Certainty is folly... I think. ~
------------------------------
Message: 9
Date: Mon, 30 Mar 2009 13:55:11 +0200
From: "Breitbach, Nils" <breitbach@uni-mainz.de>
Subject: [R] Wrong path to user defined library for the R Help Files
To: "r-help@r-project.org" <r-help@r-project.org>
Message-ID:
<6634A5A114BA554C927CF724BA5040410100B1D9D535@EXCHANGE-02.zdv.uni-mainz.de>
Content-Type: text/plain; charset="us-ascii"
Dear R-Community,
since I work on a PC at the University, I do not have the necessary rights for
all devices, and therefore my library is located on a network device. The
installation process worked and everything is fine apart from one little
thing: the help files. When I try to search with the function "?helptopic" I
always get a URL error. The problem is obvious from the error message, because
it gives the path where R tries to find the help files. R mixes two paths, in
the sense that it uses the default path of the home directory followed by my
user-defined path given via .libPaths. How can I give R the right path, so
that it does not mix the default path in when searching the help files?
Can I simply add a line to the Rprofile.site file?
I do not know if this is a problem, but my personal working directory is
different from my personal library path.
Thanks in advance ...
Cheers,
Nils
------------------------------
Message: 11
Date: Mon, 30 Mar 2009 08:58:54 -0300
From: Mike Lawrence <Mike.Lawrence@dal.ca>
Subject: Re: [R] how to input multiple .txt files
To: Qianfeng Li <qflichem@yahoo.com>
Cc: r-help@r-project.org
Message-ID: <37fda5350903300458xab9620o455ce2fdd181363@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
oops, didn't read the question fully. If you want to create 2 master files:
cust1_files =
list.files(path=path_to_my_files,pattern='cust1',full.names=TRUE)
a=NULL
for(this_file in cust1_files){
a=rbind(a,read.table(this_file))
}
write.table(a,'cust1.master.txt')
cust2_files =
list.files(path=path_to_my_files,pattern='cust2',full.names=TRUE)
a=NULL
for(this_file in cust2_files){
a=rbind(a,read.table(this_file))
}
write.table(a,'cust2.master.txt')
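The two near-identical loops above can be folded into a single pass over the
customer patterns. A self-contained sketch (the demo folder and file names
are placeholders built on the fly, standing in for the real data folder):

```r
# Build a throwaway demo folder so the sketch is runnable as-is.
demo <- tempfile("custdemo"); dir.create(demo)
write.table(data.frame(v = 1:2), file.path(demo, "cust1.aa.txt"))
write.table(data.frame(v = 3:4), file.path(demo, "cust1.bb.txt"))
write.table(data.frame(v = 5:6), file.path(demo, "cust2.aa.txt"))

# One loop per customer pattern instead of two copied loops:
for (cust in c("cust1", "cust2")) {
  files <- list.files(demo, pattern = cust, full.names = TRUE)
  combined <- do.call(rbind, lapply(files, read.table))  # stack all files
  write.table(combined, file.path(demo, paste0(cust, ".master.txt")))
}
```

Growing the result with rbind inside lapply/do.call also avoids the
quadratic copying of a=rbind(a, ...) in a loop.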
On Mon, Mar 30, 2009 at 8:55 AM, Mike Lawrence <Mike.Lawrence@dal.ca>
wrote:
> my_files =
list.files(path=path_to_my_files,pattern='.txt',full.names=TRUE)
>
> a=NULL
> for(this_file in my_files){
> a=rbind(a,read.table(this_file))
> }
> write.table(a,my_new_file_name)
>
>
>
>
> On Sun, Mar 29, 2009 at 10:37 PM, Qianfeng Li <qflichem@yahoo.com>
wrote:
>>
>>
>> how to input multiple .txt files?
>>
>> A data folder has lots of .txt files from different customers.
>>
>> Want to read all these .txt files to different master files:
>>
>> such as:
>>
>> cust1.xx.txt, cust1.xxx.txt, cust1.xxxx.txt, .............. to master
file: X.txt
>>
>> cust2.xx.txt, cust2.xxx.txt, cust2.xxxx.txt, .............. to master
file: Y.txt
file: Y.txt
>>
>>
>> Thanks!
>>
>>
>>
>> [[alternative HTML version deleted]]
>>
>> ______________________________________________
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
>
>
> --
> Mike Lawrence
> Graduate Student
> Department of Psychology
> Dalhousie University
>
> Looking to arrange a meeting? Check my public calendar:
> http://tinyurl.com/mikes-public-calendar
>
> ~ Certainty is folly... I think. ~
>
--
Mike Lawrence
Graduate Student
Department of Psychology
Dalhousie University
Looking to arrange a meeting? Check my public calendar:
http://tinyurl.com/mikes-public-calendar
~ Certainty is folly... I think. ~
------------------------------
Message: 12
Date: Mon, 30 Mar 2009 08:05:11 -0400 (EDT)
From: Alan Zaslavsky <zaslavsk@hcp.med.harvard.edu>
Subject: [R] Burt table from word frequency list
To: r-help@r-project.org
Cc: ted.harding@manchester.ac.uk
Message-ID:
<Pine.GSO.4.60.0903300801420.20344@mikado.hcp.med.harvard.edu>
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed
Maybe not terribly hard, depending on exactly what you need. Suppose you
turn your text into a character vector 'mytext' of words. Then for a
table of words appearing delta words apart (ordered), you can table mytext
against itself with a lag:
nwords=length(mytext)
burttab=table(mytext[-(1:delta)],mytext[1:(nwords-delta)])
Add to its transpose and sum over delta up to your maximum distance apart.
If you want only words appearing near each other within the same sentence
(or some other unit), pad out the sentence break with at least delta
instances of a dummy spacer:
the cat chased the greedy rat SPACER SPACER SPACER the dog chased the
clever cat
This will count all pairings at distance delta; if you want to count only
those for which this was the NEAREST co-occurrence (so
the cat and the rat chased the dog
would count as two at delta=3 but not one at delta=6) it will be trickier
and I'm not sure this approach can be modified to handle it.
> Date: Sun, 29 Mar 2009 22:20:15 -0400
> From: "Murray Cooper" <myrmail@earthlink.net>
> Subject: Re: [R] Burt table from word frequency list
>
> The usual approach is to count the co-occurrence within so many words of
> each other. Typical is between 5 words before and 5 words after a
> given word. So for each word in the document, you look for the
> occurrence of all other words within -5 -4 -3 -2 -1 0 1 2 3 4 5 words.
> Depending on the language and the question being asked certain words
> may be excluded.
>
> This is not a simple function! I don't know if anyone has done a
> package, for this type of analysis but with over 2000 packages floating
> around you might get lucky.
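A runnable sketch of the lagged-table idea above, on a toy word vector.
Coercing both lagged copies to factors with a shared level set keeps the
table square, so it can be added to its transpose:

```r
mytext <- c("the", "cat", "chased", "the", "rat")
nwords <- length(mytext)
delta  <- 1                                    # distance between words

lv      <- sort(unique(mytext))                # shared level set
later   <- factor(mytext[(1 + delta):nwords], levels = lv)
earlier <- factor(mytext[1:(nwords - delta)],  levels = lv)

tab <- table(later, earlier)   # ordered pairs at distance delta
sym <- tab + t(tab)            # symmetric co-occurrence counts
sym
```

Summing sym over delta = 1, ..., maxdelta gives the windowed counts the
quoted reply describes.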
------------------------------
Message: 13
Date: Mon, 30 Mar 2009 5:16:08 -0700
From: <rkevinburton@charter.net>
Subject: Re: [R] Constrined dependent optimization.
To: r-help@r-project.org, Paul Smith <phhs80@gmail.com>
Message-ID: <20090330081608.VS3U4.58421.root@mp05>
Content-Type: text/plain; charset=utf-8
It would in the strictest sense be non-linear, since it is only defined for
discrete integer values of each variable. And in general it would be
non-linear anyway: if I only have three variables which can take on values
1, 2, 3, then f(1,2,3) could equal 0 and f(2,1,3) could equal 10.
Thank you for the suggestions.
Kevin
---- Paul Smith <phhs80@gmail.com> wrote: > On Sun, Mar 29, 2009 at 9:45 PM, <rkevinburton@charter.net> wrote:
> > I have an optimization question that I was hoping to get some
suggestions on how best to go about solving it. I would think there is probably
a package that addresses this problem.
> >
> > This is an ordering optimization problem. Best to describe it with a
simple example. Say I have 100 "bins", each with a ball in it numbered
from 1 to 100. Each bin can only hold one ball. The optimization is that I
have a function 'f' that takes this array of bins and returns a number. The
number returned from f(1,2,3,4,...) would differ from the number returned by
f(2,1,3,4,...). The optimization is finding the order of these balls that
produces the minimum value of 'f'. I cannot use the regular
'optim' algorithms because a) the values are discrete, b) the values
are dependent, i.e. when the "variable" representing a bin location is
changed (in this example a new ball is put there) the existing ball needs to
be moved to another bin (probably swapping positions), and c) each
"variable" is constrained: in the example above the only allowable
values are integers from 1-100. So the problem becomes finding the optimum
order of the "balls".
> >
> > Any suggestions?
>
> If your function f is linear, then you can use lpSolve.
>
> Paul
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 14
Date: Mon, 30 Mar 2009 12:22:05 +0000
From: Steve Murray <smurray444@hotmail.com>
Subject: Re: [R] Column name assignment problem
To: <jholtman@gmail.com>
Cc: r-help@r-project.org
Message-ID: <BAY135-W21BE46AA7336F23E94FEA5888D0@phx.gbl>
Content-Type: text/plain; charset="iso-8859-1"
Dear all,
Apologies for yet another question (!). Hopefully it won't be too tricky to
solve. I am attempting to add row and column names (these are in fact numbers)
to each of the tables created by the code (120 in total).
# Create index of file names
files <- print(ls()[1:120], quote=FALSE) # This is the best way I could
manage to successfully attribute all the table names to a single list - I
realise it's horrible coding (especially as it relies on the first 120
objects stored in the memory actually being the objects I want to use)...
files
[1] "Fekete_198601" "Fekete_198602"
"Fekete_198603" "Fekete_198604"
[5] "Fekete_198605" "Fekete_198606"
"Fekete_198607" "Fekete_198608"
[9] "Fekete_198609" "Fekete_198610"
"Fekete_198611" "Fekete_198612"
[13] "Fekete_198701" "Fekete_198702"
"Fekete_198703" "Fekete_198704"
[17] "Fekete_198705" "Fekete_198706"
"Fekete_198707" "Fekete_198708" ...[truncated - there are
120 in total]
# Provide column and row names according to lat/longs.
rnames <- sprintf("%.2f", seq(from = -89.75, to = 89.75, length =
360))
columnnames <- sprintf("%.2f", seq(from = -179.75, to = 179.75,
length = 720))
for (i in files) {
assign(colnames((paste(Fekete_",index$year[i],
index$month[i])", sep='')), columnnames)
assign(rownames(paste("rownames(Fekete_",index$year[i],
index$month[i],")", sep=''), rnames))
}
Error: unexpected string constant in:
"for (i in files) {
assign(colnames((paste(Fekete_",index$year[i],
index$month[i])"">
assign(rownames(paste("rownames(Fekete_",index$year[i],
index$month[i],")", sep=''), rnames))
Error in if (do.NULL) NULL else if (nr> 0) paste(prefix, seq_len(nr), :
argument is not interpretable as logical
In addition: Warning message:
In if (do.NULL) NULL else if (nr> 0) paste(prefix, seq_len(nr), :
the condition has length> 1 and only the first element will be
used> }
Error: unexpected '}' in " }"
Is there a more elegant way of creating a list of file names in this case
(remember that there are 2 variable parts to each name) which would facilitate
the assigning of column and row names to each table? (And make life easier when
doing other things with the data, e.g. plotting...!).
Many thanks once again - the help offered really is appreciated.
Steve
------------------------------
Message: 15
Date: Mon, 30 Mar 2009 14:26:24 +0200
From: Wacek Kusnierczyk <Waclaw.Marcin.Kusnierczyk@idi.ntnu.no>
Subject: Re: [R] which rows are duplicates?
To: "Aaron M. Swoboda" <aaron.swoboda@gmail.com>
Cc: r-help@r-project.org, Michael Dewey <info@aghmed.fsnet.co.uk>
Message-ID: <49D0BA70.3090005@idi.ntnu.no>
Content-Type: text/plain; charset=ISO-8859-1
Michael Dewey wrote:
> At 05:07 30/03/2009, Aaron M. Swoboda wrote:
>> I would like to know which rows are duplicates of each other, not
>> simply that a row is duplicate of another row. In the following
>> example rows 1 and 3 are duplicates.
>>
>> > x <- c(1,3,1)
>> > y <- c(2,4,2)
>> > z <- c(3,4,3)
>> > data <- data.frame(x,y,z)
>> x y z
>> 1 1 2 3
>> 2 3 4 4
>> 3 1 2 3
>
i don't have any solution significantly better than what you have
already been given. but i have a warning instead.
in the below, you use both 'duplicated' and 'unique' on data
frames, and
the proposed solution relies on the latter. you may want to try to
avoid both when working with data frames; this is because of how they
do (or don't) work.
duplicated (and unique, which calls duplicated) simply pastes the
content of each row into a *string*, and then works on the strings.
this means that NAs in the data frame are converted to "NA"s, and
"NA" == "NA", obviously, so that rows that include NAs and are
otherwise identical will be considered *identical*.
that's not bad (yet), but you should be aware. however, duplicated has
a parameter named 'incomparables', explained in ?duplicated as follows:
"
incomparables: a vector of values that cannot be compared. 'FALSE' is a
special value, meaning that all values can be compared, and
may be the only value accepted for methods other than the
default. It will be coerced internally to the same type as
'x'.
"
and also
"
Values in 'incomparables' will never be marked as duplicated. This
is intended to be used for a fairly small set of values and will
not be efficient for a very large set.
"
that is, for example:
vector = c(NA, NA)
duplicated(vector)
# [1] FALSE TRUE
duplicated(vector, incomparables=NA)
# [1] FALSE FALSE
list = list(NA, NA)
duplicated(list)
# [1] FALSE TRUE
duplicated(list, incomparables=NA)
# [1] FALSE FALSE
what the documentation *fails* to tell you is that the parameter
'incomparables' is defunct in duplicated.data.frame, which you can see
in its source code (below), or in the following example:
# data as above, or any data frame
duplicated(data, incomparables=NA)
# Error in if (!is.logical(incomparables) || incomparables)
.NotYetUsed("incomparables != FALSE") :
# missing value where TRUE/FALSE needed
the error message here is *confusing*. the error is raised because the
author of the code made a mistake and apparently hasn't carefully
examined and tested his product; the code goes:
duplicated.data.frame
# function (x, incomparables = FALSE, fromLast = FALSE, ...)
# {
# if (!is.logical(incomparables) || incomparables)
# .NotYetUsed("incomparables != FALSE")
# duplicated(do.call("paste", c(x, sep = "\r")),
fromLast = fromLast)
# }
# <environment: namespace:base>
clearly, the intention here is to raise an error with a (still hardly
clear) message as in:
.NotYetUsed("incomparables != FALSE")
# Error: argument 'incomparables != FALSE' is not used (yet)
but instead, if(NA) is evaluated (because '!is.logical(NA) || NA'
evaluates, *obviously*, to NA) and hence the uninformative error message.
take home point: rtfm, *but* don't believe it.
vQ
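PS: a direct answer to the original question -- which row is a duplicate OF
WHICH row -- can reuse the same paste trick that duplicated.data.frame uses
internally. a sketch, on the example data from this thread:

```r
x <- c(1, 3, 1)
y <- c(2, 4, 2)
z <- c(3, 4, 3)
data <- data.frame(x, y, z)

# one string per row, exactly as duplicated.data.frame builds them
key <- do.call(paste, c(data, sep = "\r"))

# for each row, the index of the first row with identical content
first <- match(key, key)
first
# [1] 1 2 1

# rows whose first match is an earlier row are duplicates of that row
which(first != seq_along(first))
# [1] 3
```

note that this inherits the NA caveat discussed above, since NAs are pasted
into the strings as "NA".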
> Does this do what you want?
> > x <- c(1,3,1)
> > y <- c(2,4,2)
> > z <- c(3,4,3)
> > data <- data.frame(x,y,z)
> > data.u <- unique(data)
> > data.u
> x y z
> 1 1 2 3
> 2 3 4 4
> > data.u <- cbind(data.u, set = 1:nrow(data.u))
> > merge(data, data.u)
> x y z set
> 1 1 2 3 1
> 2 1 2 3 1
> 3 3 4 4 2
>
> You need to do a bit more work to get them back into the original row
> order if that is essential.
>
>
>
>> I can't figure out how to get R to tell me that observation 1 and 3
>> are the same. It seems like the "duplicated" and
"unique" functions
>> should be able to help me out, but I am stumped.
>>
>> For instance, if I use "duplicated" ...
>>
>> > duplicated(data)
>> [1] FALSE FALSE TRUE
>>
>> it tells me that row 3 is a duplicate, but not which row it matches.
>> How do I figure out WHICH row it matches?
>>
>> And If I use "unique"...
>>
>> > unique(data)
>> x y z
>> 1 1 2 3
>> 2 3 4 4
>>
>> I see that rows 1 and 2 are unique, leaving me to infer that row 3 was
>> a duplicate, but again it doesn't tell me which row it was a
duplicate
>> of (as far as I can tell). Am I missing something?
>>
>> How can I determine that row 3 is a duplicate OF ROW 1?
>>
>> Thanks,
>>
>> Aaron
>>
>>
>
> Michael Dewey
> http://www.aghmed.fsnet.co.uk
>
--
-------------------------------------------------------------------------------
Wacek Kusnierczyk, MD PhD
Email: waku@idi.ntnu.no
Phone: +47 73591875, +47 72574609
Department of Computer and Information Science (IDI)
Faculty of Information Technology, Mathematics and Electrical Engineering (IME)
Norwegian University of Science and Technology (NTNU)
Sem Saelands vei 7, 7491 Trondheim, Norway
Room itv303
Bioinformatics & Gene Regulation Group
Department of Cancer Research and Molecular Medicine (IKM)
Faculty of Medicine (DMF)
Norwegian University of Science and Technology (NTNU)
Laboratory Center, Erling Skjalgsons gt. 1, 7030 Trondheim, Norway
Room 231.05.060
------------------------------
Message: 16
Date: Mon, 30 Mar 2009 13:27:15 +0100
From: Paul Smith <phhs80@gmail.com>
Subject: Re: [R] Constrined dependent optimization.
To: r-help@r-project.org
Message-ID:
<6ade6f6c0903300527u1b126ce8p12d7d132aee808b6@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
I do not really understand your argument regarding the non-linearity
of f. Perhaps, it would help us a lot if you defined concretely your
objective function or gave us a minimal example fully detailed and
defined.
Paul
On Mon, Mar 30, 2009 at 1:16 PM, <rkevinburton@charter.net>
wrote:
> It would in the strictest sense be non-linear, since it is only defined for
> discrete integer values for each variable. And in general it would be
> non-linear anyway. If I only have three variables which can take on values
> 1, 2, 3, then f(1,2,3) could equal 0 and f(2,1,3) could equal 10.
>
> Thank you for the suggestions.
>
> Kevin
>
> ---- Paul Smith <phhs80@gmail.com> wrote:
>> On Sun, Mar 29, 2009 at 9:45 PM, <rkevinburton@charter.net>
wrote:
>> > I have an optimization question that I was hoping to get some
>> > suggestions on how best to go about solving it. I would think there is
>> > probably a package that addresses this problem.
>> >
>> > This is an ordering optimization problem. Best to describe it with a
>> > simple example. Say I have 100 "bins", each with a ball in it numbered
>> > from 1 to 100. Each bin can only hold one ball. The optimization is that
>> > I have a function 'f' that takes this array of bins and returns a number.
>> > f(1,2,3,4,...) would return a different number from f(2,1,3,4,...). The
>> > optimization is finding the order of the balls that produces the minimum
>> > value of 'f'. I cannot use the regular 'optim' algorithms because a) the
>> > values are discrete, b) the values are dependent, i.e. when the
>> > "variable" representing a bin location is changed (in this example, a
>> > new ball is put there) the existing ball will need to be moved to
>> > another bin (probably swapping positions), and c) each "variable" is
>> > constrained: in the example above the only allowable values are integers
>> > from 1-100. So the problem becomes finding the optimum order of the
>> > "balls".
>> >
>> > Any suggestions?
>>
>> If your function f is linear, then you can use lpSolve.
>>
>> Paul
>>
>
>
------------------------------
Message: 17
Date: Mon, 30 Mar 2009 13:32:21 +0100
From: baptiste auguie <ba208@exeter.ac.uk>
Subject: Re: [R] how to input multiple .txt files
To: Mike Lawrence <Mike.Lawrence@dal.ca>
Cc: Qianfeng Li <qflichem@yahoo.com>, "r-help@r-project.org"
<r-help@r-project.org>
Message-ID: <7C6003A3-EB29-4696-A934-113F847528BD@exeter.ac.uk>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed; delsp=yes
may i suggest the following,
a <- do.call(rbind, lapply(cust1_files, read.table))
(i believe expanding objects in a for loop belongs to the R Inferno)
baptiste
On 30 Mar 2009, at 12:58, Mike Lawrence wrote:
>
> cust1_files =
> list.files(path=path_to_my_files,pattern='cust1',full.names=TRUE)
> a=NULL
> for(this_file in cust1_files){
> a=rbind(a,read.table(this_file))
> }
> write.table(a,'cust1.master.txt')
_____________________________
Baptiste Auguié
School of Physics
University of Exeter
Stocker Road,
Exeter, Devon,
EX4 4QL, UK
Phone: +44 1392 264187
http://newton.ex.ac.uk/research/emag
------------------------------
Message: 18
Date: Mon, 30 Mar 2009 15:10:52 +0200
From: Peter Dalgaard <P.Dalgaard@biostat.ku.dk>
Subject: Re: [R] Column name assignment problem
To: Steve Murray <smurray444@hotmail.com>
Cc: r-help@r-project.org
Message-ID: <49D0C4DC.2080109@biostat.ku.dk>
Content-Type: text/plain; charset=UTF-8
Steve Murray wrote:
> Dear all,
>
> Apologies for yet another question (!). Hopefully it won't be too
tricky to solve. I am attempting to add row and column names (these are in fact
numbers) to each of the tables created by the code (120 in total).
>
>
> # Create index of file names
> files <- print(ls()[1:120], quote=FALSE) # This is the best way I could
manage to successfully attribute all the table names to a single list - I
realise it's horrible coding (especially as it relies on the first 120
objects stored in the memory actually being the objects I want to use)...
>
> files
> [1] "Fekete_198601" "Fekete_198602"
"Fekete_198603" "Fekete_198604"
> [5] "Fekete_198605" "Fekete_198606"
"Fekete_198607" "Fekete_198608"
> [9] "Fekete_198609" "Fekete_198610"
"Fekete_198611" "Fekete_198612"
> [13] "Fekete_198701" "Fekete_198702"
"Fekete_198703" "Fekete_198704"
> [17] "Fekete_198705" "Fekete_198706"
"Fekete_198707" "Fekete_198708" ...[truncated - there are
120 in total]
>
>
> # Provide column and row names according to lat/longs.
>
> rnames <- sprintf("%.2f", seq(from = -89.75, to = 89.75,
length = 360))
> columnnames <- sprintf("%.2f", seq(from = -179.75, to =
179.75, length = 720))
>
> for (i in files) {
> assign(colnames((paste(Fekete_",index$year[i],
index$month[i])", sep='')), columnnames)
>
assign(rownames(paste("rownames(Fekete_",index$year[i],
index$month[i],")", sep=''), rnames))
> }
>
>
> Error: unexpected string constant in:
> "for (i in files) {
> assign(colnames((paste(Fekete_",index$year[i],
index$month[i])""
>>
assign(rownames(paste("rownames(Fekete_",index$year[i],
index$month[i],")", sep=''), rnames))
> Error in if (do.NULL) NULL else if (nr> 0) paste(prefix, seq_len(nr), :
> argument is not interpretable as logical
> In addition: Warning message:
> In if (do.NULL) NULL else if (nr> 0) paste(prefix, seq_len(nr), :
> the condition has length> 1 and only the first element will be used
>> }
> Error: unexpected '}' in " }"
The generic issue here (read: I can't really be bothered to do your
problem in all details...) is that you cannot use assignment forms like
foo(x) <- bar
while accessing x via a character name. That is
a <- "plugh!"
assign(foo(a), bar)
and
foo(get(a)) <- bar
are both wrong.
You need to do it in steps, like
x <- get(a)
foo(x) <- bar
assign(a, x)
or, not really any prettier
eval(substitute(
    foo(x) <- bar, list(x = as.name(a))
))
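In Steve's case, the step-by-step pattern might look like this (a sketch:
two toy matrices stand in for the 120 Fekete tables, and the lat/long
vectors are shortened to match the toy dimensions):

```r
# toy stand-ins for the real tables
Fekete_198601 <- matrix(0, nrow = 2, ncol = 3)
Fekete_198602 <- matrix(1, nrow = 2, ncol = 3)
files <- c("Fekete_198601", "Fekete_198602")
rnames <- sprintf("%.2f", c(-89.75, 89.75))
cnames <- sprintf("%.2f", c(-179.75, 0.25, 179.75))

for (nm in files) {
  x <- get(nm)             # fetch the object named by the string
  rownames(x) <- rnames    # ordinary replacement functions work on x
  colnames(x) <- cnames
  assign(nm, x)            # write the modified copy back
}

rownames(Fekete_198601)
# [1] "-89.75" "89.75"
```

(A list of matrices, indexed by name, would avoid get/assign entirely and
is usually the cleaner design.)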
>
>
> Is there a more elegant way of creating a list of file names in this case
(remember that there are 2 variable parts to each name) which would facilitate
the assigning of column and row names to each table? (And make life easier when
doing other things with the data, e.g. plotting...!).
>
> Many thanks once again - the help offered really is appreciated.
>
> Steve
>
>
>
--
O__ ---- Peter Dalgaard Øster Farimagsgade 5, Entr.B
c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
(*) \(*) -- University of Copenhagen Denmark Ph: (+45) 35327918
~~~~~~~~~~ - (p.dalgaard@biostat.ku.dk) FAX: (+45) 35327907
------------------------------
Message: 19
Date: Mon, 30 Mar 2009 09:14:44 -0400
From: Ben Bolker <bolker@ufl.edu>
Subject: Re: [R] Constrined dependent optimization.
To: "rkevinburton@charter.net" <rkevinburton@charter.net>,
"r-help@r-project.org" <r-help@r-project.org>
Message-ID: <49D0C5C4.9080700@ufl.edu>
Content-Type: text/plain; charset="utf-8"
rkevinburton@charter.net wrote:
> I am sorry but I don't see the connection. With SANN and, say, 3
> variables, one of the steps may increment x[1] by 0.1. Not only is
> this a non-discrete integer value, but even if I could coerce SANN to
> only return discrete integer values for each step in the optimization,
> once x[1] was set to say 2, I would have to search the other
> "variables" for a value of 2 and exchange x[1] and whichever
> variable was two, so as to maintain the property that each variable
> has a unique discrete value constrained from 1 : number of variables.
>
> Thank you.
>
> Kevin
If you look more closely at the docs for method="SANN" (and
the examples), you'll see that SANN allows you to pass the
"gradient" argument (gr) as a custom function to provide the
candidate distribution. Here's an example:
N <- 10
xvec <- seq(0,1,length=N)
target <- rank((xvec-0.2)^2)
objfun <- function(x) {
sum((x-target)^2)/1e6
}
objfun(1:100)
swapfun <- function(x,N=10) {
loc <- sample(N,size=2,replace=FALSE)
tmp <- x[loc[1]]
x[loc[1]] <- x[loc[2]]
x[loc[2]] <- tmp
x
}
set.seed(1001)
opt1 <- optim(fn=objfun,
par=1:N,
gr=swapfun,method="SANN",
control=list(trace=10))
plot(opt1$par,target)
> ---- Ben Bolker <bolker@ufl.edu> wrote:
>>
>>
>> rkevinburton wrote:
>>> I have an optimization question that I was hoping to get some
>>> suggestions on how best to go about sovling it. I would think
>>> there is probably a package that addresses this problem.
>>>
>>> This is an ordering optimzation problem. Best to describe it with
>>> a simple example. Say I have 100 "bins" each with a ball
in it
>>> numbered from 1 to 100. Each bin can only hold one ball. This
>>> optimization is that I have a function 'f' that this array
of
>>> bins and returns a number. The number returned from
>>> f(1,2,3,4....) would return a different number from that of
>>> f(2,1,3,4....). The optimization is finding the optimum order of
>>> these balls so as to produce a minimum value from 'f'.I
cannot
>>> use the regular 'optim' algorithms because a) the values
are
>>> discrete, and b) the values are dependent ie. when the
"variable"
>>> representing the bin location is changed (in this example a new
>>> ball is put there) the existing ball will need to be moved to
>>> another bin (probably swapping positions), and c) each
"variable"
>>> is constrained, in the example above the only allowable values
>>> are integers from 1-100. So the problem becomes finding the
>>> optimum order of the "balls".
>>>
>>> Any suggestions?
>>>
>>>
>> See method "SANN" under ?optim.
>>
>> Ben Bolker
>>
>> -- View this message in context:
>>
http://www.nabble.com/Constrined-dependent-optimization.-tp22772520p22772795.html
>> Sent from the R help mailing list archive at Nabble.com.
>>
>
--
Ben Bolker
Associate professor, Biology Dep't, Univ. of Florida
bolker@ufl.edu / www.zoology.ufl.edu/bolker
GPG key: www.zoology.ufl.edu/bolker/benbolker-publickey.asc
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 260 bytes
Desc: OpenPGP digital signature
URL:
<https://stat.ethz.ch/pipermail/r-help/attachments/20090330/ab2f6fd3/attachment-0001.bin>
------------------------------
Message: 20
Date: Mon, 30 Mar 2009 14:01:00 +0100
From: Irene Gallego Romero <ig247@cam.ac.uk>
Subject: [R] Sliding window over irregular intervals
To: r-help@R-project.org
Message-ID: <49D0C28C.8030609@cam.ac.uk>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Dear all,
I have some very big data files that look something like this:
id chr pos ihh1 ihh2 xpehh
rs5748748 22 15795572 0.0230222 0.0268394 -0.153413
rs5748755 22 15806401 0.0186084 0.0268672 -0.367296
rs2385785 22 15807037 0.0198204 0.0186616 0.0602451
rs1981707 22 15809384 0.0299685 0.0176768 0.527892
rs1981708 22 15809434 0.0305465 0.0187227 0.489512
rs11914222 22 15810040 0.0307183 0.0172399 0.577633
rs4819923 22 15813210 0.02707 0.0159736 0.527491
rs5994105 22 15813888 0.025202 0.0141296 0.578651
rs5748760 22 15814084 0.0242894 0.0146486 0.505691
rs2385786 22 15816846 0.0173057 0.0107816 0.473199
rs1990483 22 15817310 0.0176641 0.0130525 0.302555
rs5994110 22 15821524 0.0178411 0.0129001 0.324267
rs17733785 22 15822154 0.0201797 0.0182093 0.102746
rs7287116 22 15823131 0.0201993 0.0179028 0.12069
rs5748765 22 15825502 0.0193195 0.0176513 0.090302
I'm trying to extract the maximum and minimum xpehh (last column) values
within a sliding window (non overlapping), of width 10000 (calculated
relative to pos (third column)). However, as you can tell from the brief
excerpt here, although all possible intervals will probably be covered
by at least one data point, the number of data points will be variable
(incidentally, if anyone knows of a way to obtain this number, that
would be lovely), as will the spacing between them. Furthermore, values
of chr (second column) will range from 1 to 22, and values of pos will
be overlapping across them; I want to evaluate the window separately for
each value of chr.
I've looked at the help and FAQ on sliding windows, but I'm a relative
newcomer to R and cannot find a way to do what I need to do. Everything
I've managed to unearth so far seems geared towards smoother time
series. Any help on this problem would be vastly appreciated.
Thanks,
Irene
--
Irene Gallego Romero
Leverhulme Centre for Human Evolutionary Studies
University of Cambridge
Fitzwilliam St
Cambridge
CB2 1QH
UK
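One possible base-R sketch (assuming a data frame d with chr, pos and xpehh
columns as in the excerpt; the integer window id pos %/% 10000 defines
non-overlapping windows of width 10000, and aggregate() then gives the min,
the max, and the per-window point count Irene asks about):

```r
# toy subset of the data shown above
d <- data.frame(chr   = 22,
                pos   = c(15795572, 15806401, 15807037, 15813210),
                xpehh = c(-0.153413, -0.367296, 0.0602451, 0.527491))

d$win <- d$pos %/% 10000   # same id = same 10000-wide window
res <- aggregate(xpehh ~ chr + win, data = d,
                 FUN = function(v) c(min = min(v), max = max(v), n = length(v)))
res
```

Grouping by chr as well as win keeps the chromosomes separate, as required.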
------------------------------
Message: 21
Date: Mon, 30 Mar 2009 06:22:28 -0700 (PDT)
From: "Hans W. Borchers" <hwborchers@googlemail.com>
Subject: Re: [R] Constrined dependent optimization.
To: r-help@r-project.org
Message-ID: <22782922.post@talk.nabble.com>
Content-Type: text/plain; charset=us-ascii
Imagine you want to minimize the following linear function
f <- function(x) sum( c(1:50, 50:1) * x / (50*51) )
on the set of all permutations of the numbers 1,..., 100.
I wonder how you will do that with lpSolve? I would simply order
the coefficients and then sort the numbers 1,...,100 accordingly.
I am also wondering how optim with "SANN" could be applied here.
As this is a problem in the area of discrete optimization resp.
constraint programming, I propose to use an appropriate program
here such as the free software Bprolog. I would be interested to
learn what others propose.
Of course, if we don't know anything about the function f then
it amounts to an exhaustive search on the 100! permutations --
probably not a feasible job.
Regards, Hans Werner
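For concreteness, the sort-and-assign solution written out in R (a sketch;
by the rearrangement inequality, pairing the largest coefficients with the
smallest numbers minimizes the sum):

```r
f <- function(x) sum( c(1:50, 50:1) * x / (50*51) )

coef <- c(1:50, 50:1)
x <- integer(100)
x[order(coef, decreasing = TRUE)] <- 1:100  # largest coefficient gets smallest number
f(x)

# no random permutation should do better:
f(sample(100)) >= f(x)
# [1] TRUE
```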
Paul Smith wrote:
>
> On Sun, Mar 29, 2009 at 9:45 PM, <rkevinburton@charter.net> wrote:
>> I have an optimization question that I was hoping to get some
suggestions
>> on how best to go about sovling it. I would think there is probably a
>> package that addresses this problem.
>>
>> This is an ordering optimzation problem. Best to describe it with a
>> simple example. Say I have 100 "bins" each with a ball in it
numbered
>> from 1 to 100. Each bin can only hold one ball. This optimization is
that
>> I have a function 'f' that this array of bins and returns a
number. The
>> number returned from f(1,2,3,4....) would return a different number
from
>> that of f(2,1,3,4....). The optimization is finding the optimum order
of
>> these balls so as to produce a minimum value from 'f'.I cannot
use the
>> regular 'optim' algorithms because a) the values are discrete,
and b) the
>> values are dependent ie. when the "variable" representing the
bin
>> location is changed (in this example a new ball is put there) the
>> existing ball will need to be moved to another bin (probably swapping
>> positions), and c) each "variable" is constrained, in the
example above
>> the only allowable values are integers from 1-100. So the problem
becomes
>> finding the optimum order of the "balls".
>>
>> Any suggestions?
>
> If your function f is linear, then you can use lpSolve.
>
> Paul
>
>
>
--
View this message in context:
http://www.nabble.com/Constrined-dependent-optimization.-tp22772520p22782922.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 22
Date: Mon, 30 Mar 2009 15:32:13 +0200
From: Wacek Kusnierczyk <Waclaw.Marcin.Kusnierczyk@idi.ntnu.no>
Subject: [R] how does stop() interfere with on.exit()?
To: R help <R-help@stat.math.ethz.ch>
Message-ID: <49D0C9DD.2000902@idi.ntnu.no>
Content-Type: text/plain; charset=ISO-8859-1
consider the following example:
(f = function() on.exit(f()))()
# error: evaluation nested too deeply
(f = function() { on.exit(f()); stop() })()
# error in f():
# error in f():
# ... some 100 lines skipped ...
# error: C stack usage is too close to the limit
why does not the second behave as the first, i.e., report, in one line,
too deep recursion? the second seems to break the interface by
reporting a condition internal to the implementation, which should not
be visible to the user.
vQ
------------------------------
Message: 23
Date: Mon, 30 Mar 2009 15:48:35 +0200
From: Christian Ritz <ritz@life.ku.dk>
Subject: Re: [R] nls, convergence and starting values
To: patrick.giraudoux@univ-fcomte.fr
Cc: r-help@stat.math.ethz.ch
Message-ID: <49D0CDB3.8050104@life.ku.dk>
Content-Type: text/plain; charset=ISO-8859-1
Hi Patrick,
there exists specialized functionality in R that offers both automated
calculation of starting values and relatively robust optimization, and it
can be used with success in many common cases of nonlinear regression,
including for your data:
library(drc) # on CRAN
## Fitting 3-parameter logistic model
## (slightly different parameterization from SSlogis())
bdd.m1 <- drm(pourcma~transat, weights=sqrt(nbfeces), data=bdd, fct=L.3())
plot(bdd.m1, broken=TRUE, conLevel=0.0001)
summary(bdd.m1)
Of course, standard errors are huge as the data do not really support this model
(as
already pointed out by other replies to this post).
Christian
------------------------------
Message: 24
Date: Mon, 30 Mar 2009 16:06:35 +0200
From: Joan-Josep Vallbé <Pep.Vallbe@uab.cat>
Subject: Re: [R] Burt table from word frequency list
To: Alan Zaslavsky <zaslavsk@hcp.med.harvard.edu>
Cc: r-help@r-project.org, ted.harding@manchester.ac.uk
Message-ID: <80766FB4-B35C-42C2-96AD-5C5454944FE2@uab.cat>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed; delsp=yes
Thank you very much for all your comments, and sorry for the confusion
of my messages. My corpus is a collection of responses to an open
question from a questionnaire. My intention is not to create groups of
respondents but to treat all responses as a "whole discourse" on a
particular issue, so that I can find different "semantic contexts"
within the text. I have all the responses in a single document, and I
want to split it into strings of (a specified) n words. The resulting
semantic contexts would be sets of (correlated) word-strings containing
particularly relevant (correlated) words.
I guess I must dive deeper into the "ca" and "tm" packages. Any other
ideas will be really welcomed.
best,
Pep Vallbé
On Mar 30, 2009, at 2:05 PM, Alan Zaslavsky wrote:
> Maybe not terribly hard, depending on exactly what you need.
> Suppose you turn your text into a character vector 'mytext' of
> words. Then for a table of words appearing delta words apart
> (ordered), you can table mytext against itself with a lag:
>
> nwords=length(mytext)
> burttab=table(mytext[-(1:delta)],mytext[-(nwords+1-(1:delta))])
>
> Add to its transpose and sum over delta up to your maximum distance
> apart. If you want only words appearing near each other within the
> same sentence (or some other unit), pad out the sentence break with
> at least delta instances of a dummy spacer:
>
> the cat chased the greedy rat SPACER SPACER SPACER the dog chased
> the
> clever cat
>
> This will count all pairings at distance delta; if you want to count
> only those for which this was the NEAREST co-occurrence (so
>
> the cat and the rat chased the dog
>
> would count as two at delta=3 but not one at delta=6) it will be
> trickier and I'm not sure this approach can be modified to handle it.
>
>> Date: Sun, 29 Mar 2009 22:20:15 -0400
>> From: "Murray Cooper" <myrmail@earthlink.net>
>> Subject: Re: [R] Burt table from word frequency list
>> The usual approach is to count the co-occurrence within so many
>> words of
>> each other. Typical is between 5 words before and 5 words after a
>> given word. So for each word in the document, you look for the
>> occurence of all other words within -5 -4 -3 -2 -1 0 1 2 3 4 5 words.
>> Depending on the language and the question being asked certain words
>> may be excluded.
>> This is not a simple function! I don't know if anyone has done a
>> package, for this type of analysis but with over 2000 packages
>> floating
>> around you might get lucky.
------------------------------
Message: 25
Date: Mon, 30 Mar 2009 10:15:43 -0400
From: "James W. MacDonald" <jmacdon@med.umich.edu>
Subject: Re: [R] PLS package loading error!
To: mienad <mienad@gmail.com>
Cc: r-help@r-project.org
Message-ID: <49D0D40F.4060402@med.umich.edu>
Content-Type: text/plain; charset=UTF-8; format=flowed
Hi Damien,
How did you install the package? Usually this error pops up when people
simply download the zip file and then unzip into their library directory.
If you use the package installation functions in R, you shouldn't have
this problem:
install.packages("pls")
Best,
Jim
mienad wrote:
> Hi,
>
> I am using R 2.8.1 version on Windows with RGui. I have loaded pls package
> lattest version (2.1-0). When I try to load this package in R using
> library(pls) command, the following error message appear:
>
> Erreur dans library(pls) :
> 'pls' n'est pas un package valide -- a-t-il été installé < 2.0.0 ?
> [in English: 'pls' is not a valid package -- was it installed < 2.0.0?]
>
> Could you please help me to solve this problem?
>
> Regards
>
> Damien
--
James W. MacDonald, M.S.
Biostatistician
Douglas Lab
5912 Buhl
1241 E. Catherine St.
Ann Arbor MI 48109-5618
734-615-7826
------------------------------
Message: 26
Date: Mon, 30 Mar 2009 15:20:06 +0100
From: Paul Smith <phhs80@gmail.com>
Subject: Re: [R] Constrined dependent optimization.
To: r-help@r-project.org
Message-ID:
<6ade6f6c0903300720n50f21646w3f7d0cba6cb1912d@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Actually, one can use lpSolve to find a solution to your example. To
be more precise, it would be necessary to solve a sequence of linear
*integer* programs. The first one would be:
max f(x)
subject to
x >= 0
x <= 100
sum(x) = 100.
From this, one would learn the optimal position of the number 100
(coefficient 50). Afterwards, one would remove the coefficient 50 from
the objective function, and the constraints would be:
x >= 0
x <= 99
sum(x) = 99.
The optimal position for the number 99 would be returned by lpSolve. And so on.
Paul
On Mon, Mar 30, 2009 at 2:22 PM, Hans W. Borchers
<hwborchers@googlemail.com> wrote:
> Imagine you want to minimize the following linear function
>
>    f <- function(x) sum( c(1:50, 50:1) * x / (50*51) )
>
> on the set of all permutations of the numbers 1,..., 100.
>
> I wonder how will you do that with lpSolve? I would simply order
> the coefficients and then sort the numbers 1,...,100 accordingly.
>
> I am also wondering how optim with "SANN" could be applied here.
>
> As this is a problem in the area of discrete optimization resp.
> constraint programming, I propose to use an appropriate program
> here such as the free software Bprolog. I would be interested to
> learn what others propose.
>
> Of course, if we don't know anything about the function f then
> it amounts to an exhaustive search on the 100! permutations --
> probably not a feasible job.
>
> Regards, Hans Werner
>
>
>
> Paul Smith wrote:
>>
>> On Sun, Mar 29, 2009 at 9:45 PM, <rkevinburton@charter.net>
wrote:
>>> I have an optimization question that I was hoping to get some
suggestions
>>> on how best to go about sovling it. I would think there is probably
a
>>> package that addresses this problem.
>>>
>>> This is an ordering optimzation problem. Best to describe it with a
>>> simple example. Say I have 100 "bins" each with a ball in
it numbered
>>> from 1 to 100. Each bin can only hold one ball. This optimization
is that
>>> I have a function 'f' that this array of bins and returns a
number. The
>>> number returned from f(1,2,3,4....) would return a different number
from
>>> that of f(2,1,3,4....). The optimization is finding the optimum
order of
>>> these balls so as to produce a minimum value from 'f'.I
cannot use the
>>> regular 'optim' algorithms because a) the values are
discrete, and b) the
>>> values are dependent ie. when the "variable" representing
the bin
>>> location is changed (in this example a new ball is put there) the
>>> existing ball will need to be moved to another bin (probably
swapping
>>> positions), and c) each "variable" is constrained, in the
example above
>>> the only allowable values are integers from 1-100. So the problem
becomes
>>> finding the optimum order of the "balls".
>>>
>>> Any suggestions?
>>
>> If your function f is linear, then you can use lpSolve.
>>
>> Paul
>>
>> ______________________________________________
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>>
>
> --
> View this message in context:
http://www.nabble.com/Constrined-dependent-optimization.-tp22772520p22782922.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
------------------------------
Message: 27
Date: Mon, 30 Mar 2009 09:25:52 -0500
From: Douglas Bates <bates@stat.wisc.edu>
Subject: [R] [OT] Contacting "Introductory Statistics for Engineering
Experimentation" authors
To: "r-help@r-project.org" <r-help@r-project.org>
Message-ID:
<40e66e0b0903300725k55ac5294m50f4f953047b0287@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
I have been examining the text "Introductory Statistics for
Engineering Experimentation" by Peter R. Nelson, Marie Coffin and
Karen A.F. Copeland (Elsevier, 2003). There are several interesting
data sets used in the book and I plan to create an R package for them.
I would like to contact the surviving authors (apparently Peter R.
Nelson died in 2004) but have not been able to obtain contact
information for them. According to the preface the book was developed
for an intro engineering stats course at Clemson however no one at
Clemson could provide any leads. Does anyone on this list have
contact information for Marie Coffin or Karen A.F. Copeland? I have
been unsuccessful in various google searches.
------------------------------
Message: 28
Date: Mon, 30 Mar 2009 07:38:57 -0700 (PDT)
From: "j.k" <kathan@gmx.at>
Subject: [R] Add missing values/timestamps
To: r-help@r-project.org
Message-ID: <22784737.post@talk.nabble.com>
Content-Type: text/plain; charset=us-ascii
Hello everyone,
I have the following problem and maybe someone can help me with it.
I have a list of values with times. They look like that:
V1 V2
1 2008-10-14 08:45:00 94411.08
2 2008-10-14 08:50:00 90745.45
3 2008-10-14 08:55:00 82963.35
4 2008-10-14 09:00:00 75684.38
5 2008-10-14 09:05:00 78931.82
6 2008-10-14 09:20:00 74580.11
7 2008-10-14 09:25:00 69666.48
8 2008-10-14 09:30:00 77794.89
I have these data combined from different series of measurements.
As you can see the problem is that between these series are gaps which I
want to fill.
The format of the time is POSIXct
Are there any suggestions on how I can fill in these missing times and
afterwards interpolate/predict their values?
Thanks in advance
Johannes
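One way to sketch the gap-filling (a sketch under assumptions: the data frame is called dat, V1 is the POSIXct time, V2 the value, and the intended spacing is the 5 minutes visible in the excerpt):

```r
# Full 5-minute grid over the observed range (assumed spacing).
full <- data.frame(V1 = seq(min(dat$V1), max(dat$V1), by = "5 min"))

# Merge onto the grid; the missing timestamps get NA in V2.
dat2 <- merge(full, dat, all.x = TRUE)

# Linear interpolation of the gaps with approx(); more sophisticated
# prediction would need a proper time-series model.
dat2$V2 <- approx(x = dat$V1, y = dat$V2, xout = dat2$V1)$y
```

The zoo package (an object indexed by V1 plus na.approx()) is a more polished route to the same result.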
--
View this message in context:
http://www.nabble.com/Add-missing-values-timestamps-tp22784737p22784737.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 29
Date: Mon, 30 Mar 2009 10:49:28 -0400
From: David Winsemius <dwinsemius@comcast.net>
Subject: Re: [R] Sliding window over irregular intervals
To: Irene Gallego Romero <ig247@cam.ac.uk>
Cc: r-help@r-project.org
Message-ID: <B9569045-B1A1-4BE1-B2FC-8E57EB01F225@comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
The window you describe is not one I would call sliding and the
intervals are regular with an irregular number of events within the
windows. One way would be to use the results of trunc(pos/10000) as a
factor with tapply:
(Related functions are floor() and round(), but your pos values appear
to be positive, so there should not be problems with how they work
across 0)
After creating a dataframe, dta, try something like:
> tapply(dta$xpehh, as.factor(trunc(dta$pos/10000)), min)
1579 1580 1581 1582
-0.153413 -0.367296 0.302555 0.090302
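The same tapply idea extends to the minimum, maximum, and per-window count, split by chromosome; a sketch, assuming dta has columns chr, pos, and xpehh as in the excerpt:

```r
win <- trunc(dta$pos / 10000)                  # non-overlapping 10kb window id
grp <- interaction(dta$chr, win, drop = TRUE)  # window ids kept separate per chr

data.frame(
  min   = tapply(dta$xpehh, grp, min),
  max   = tapply(dta$xpehh, grp, max),
  count = tapply(dta$xpehh, grp, length)       # number of points per window
)
```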
--
David Winsemius
On Mar 30, 2009, at 9:01 AM, Irene Gallego Romero wrote:
> Dear all,
>
> I have some very big data files that look something like this:
>
> id chr pos ihh1 ihh2 xpehh
> rs5748748 22 15795572 0.0230222 0.0268394 -0.153413
> rs5748755 22 15806401 0.0186084 0.0268672 -0.367296
> rs2385785 22 15807037 0.0198204 0.0186616 0.0602451
> rs1981707 22 15809384 0.0299685 0.0176768 0.527892
> rs1981708 22 15809434 0.0305465 0.0187227 0.489512
> rs11914222 22 15810040 0.0307183 0.0172399 0.577633
> rs4819923 22 15813210 0.02707 0.0159736 0.527491
> rs5994105 22 15813888 0.025202 0.0141296 0.578651
> rs5748760 22 15814084 0.0242894 0.0146486 0.505691
> rs2385786 22 15816846 0.0173057 0.0107816 0.473199
> rs1990483 22 15817310 0.0176641 0.0130525 0.302555
> rs5994110 22 15821524 0.0178411 0.0129001 0.324267
> rs17733785 22 15822154 0.0201797 0.0182093 0.102746
> rs7287116 22 15823131 0.0201993 0.0179028 0.12069
> rs5748765 22 15825502 0.0193195 0.0176513 0.090302
>
> I'm trying to extract the maximum and minimum xpehh (last column)
> values within a sliding window (non overlapping), of width 10000
> (calculated relative to pos (third column)). However, as you can
> tell from the brief excerpt here, although all possible intervals
> will probably be covered by at least one data point, the number of
> data points will be variable (incidentally, if anyone knows of a way
> to obtain this number, that would be lovely), as will the spacing
> between them. Furthermore, values of chr (second column) will range
> from 1 to 22, and values of pos will be overlapping across them; I
> want to evaluate the window separately for each value of chr.
>
> I've looked at the help and FAQ on sliding windows, but I'm a
> relative newcomer to R and cannot find a way to do what I need to
> do. Everything I've managed to unearth so far seems geared towards
> smoother time series. Any help on this problem would be vastly
> appreciated.
>
> Thanks,
> Irene
>
> --
> Irene Gallego Romero
> Leverhulme Centre for Human Evolutionary Studies
> University of Cambridge
> Fitzwilliam St
> Cambridge
> CB2 1QH
> UK
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
David Winsemius, MD
Heritage Laboratories
West Hartford, CT
------------------------------
Message: 30
Date: Mon, 30 Mar 2009 08:14:55 -0700
From: Michael Lawrence <mflawren@fhcrc.org>
Subject: Re: [R] Sliding window over irregular intervals
To: Irene Gallego Romero <ig247@cam.ac.uk>
Cc: r-help@r-project.org
Message-ID:
<509e0620903300814p54752b2clcca1b19c4ea4cf36@mail.gmail.com>
Content-Type: text/plain
On Mon, Mar 30, 2009 at 6:01 AM, Irene Gallego Romero
<ig247@cam.ac.uk>wrote:
> Dear all,
>
> I have some very big data files that look something like this:
>
> id chr pos ihh1 ihh2 xpehh
> rs5748748 22 15795572 0.0230222 0.0268394 -0.153413
> rs5748755 22 15806401 0.0186084 0.0268672 -0.367296
> rs2385785 22 15807037 0.0198204 0.0186616 0.0602451
> rs1981707 22 15809384 0.0299685 0.0176768 0.527892
> rs1981708 22 15809434 0.0305465 0.0187227 0.489512
> rs11914222 22 15810040 0.0307183 0.0172399 0.577633
> rs4819923 22 15813210 0.02707 0.0159736 0.527491
> rs5994105 22 15813888 0.025202 0.0141296 0.578651
> rs5748760 22 15814084 0.0242894 0.0146486 0.505691
> rs2385786 22 15816846 0.0173057 0.0107816 0.473199
> rs1990483 22 15817310 0.0176641 0.0130525 0.302555
> rs5994110 22 15821524 0.0178411 0.0129001 0.324267
> rs17733785 22 15822154 0.0201797 0.0182093 0.102746
> rs7287116 22 15823131 0.0201993 0.0179028 0.12069
> rs5748765 22 15825502 0.0193195 0.0176513 0.090302
>
> I'm trying to extract the maximum and minimum xpehh (last column)
values
> within a sliding window (non overlapping), of width 10000 (calculated
> relative to pos (third column)). However, as you can tell from the brief
> excerpt here, although all possible intervals will probably be covered by
at
> least one data point, the number of data points will be variable
> (incidentally, if anyone knows of a way to obtain this number, that would
be
> lovely), as will the spacing between them. Furthermore, values of chr
> (second column) will range from 1 to 22, and values of pos will be
> overlapping across them; I want to evaluate the window separately for each
> value of chr.
>
The IRanges package from the Bioconductor project attempts to solve problems
like these. For example, to count the number of overlapping intervals at a
given position in the chromosome, you would use the coverage() function. The
RangedData class is designed to store data like yours and rdapply() makes it
easy to perform operations one chromosome at a time.
That said, I don't think it has any easy way to solve your problem of
calculating quantiles. That's a feature that needs to be added to the
package. I could imagine something like (with the development version),
calling disjointBins() to separate the ranges in bins where there is no
overlap, then converting each bin into an Rle, and then using pmin/max on
the Rle objects in series to get your answer.
Anyway, you probably want to check out IRanges.
Michael
>
> I've looked at the help and FAQ on sliding windows, but I'm a
relative
> newcomer to R and cannot find a way to do what I need to do. Everything
I've
> managed to unearth so far seems geared towards smoother time series. Any
> help on this problem would be vastly appreciated.
>
> Thanks,
> Irene
>
> --
> Irene Gallego Romero
> Leverhulme Centre for Human Evolutionary Studies
> University of Cambridge
> Fitzwilliam St
> Cambridge
> CB2 1QH
> UK
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
[[alternative HTML version deleted]]
------------------------------
Message: 31
Date: Mon, 30 Mar 2009 11:24:55 -0400
From: Saptarshi Guha <saptarshi.guha@gmail.com>
Subject: [R] List assignment in a while loop and timing
To: R-help@r-project.org
Message-ID:
<1e7471d50903300824o44fb1b82uc2a81d2512cdf60d@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Hello R users
I have a question about the time involved in list assignment.
Consider the following code snippet (see below). The first line creates
a reader object, which is the interface to 1MM key-value pairs
(serialized R objects) spanning 50 files (a total of 50MB). rhsqstart
initiates the reading, and I loop, reading each key-value pair using
rhsqnextKVR. If this returns NULL, we switch to the next file, and if
that is also NULL we break.
If I comment out line A1, it takes 39 seconds on a quad core intel with
16GB ram running R-2.8
If I include the assignment A1 it takes ~85 seconds.
I have preassigned the list in line A0, so I'm guessing there is no resizing
going on, so why does the time increase so much?
Thank you for your time.
Regards
Saptarshi
==code==
rdr <- rhsqreader("~/tmp/pp",local=T,pattern="^p")
rdr <- rhsqstart(rdr)
i <- 1;
h=as.list(rep(1,1e6)) ##A0
while(TRUE){
value <-rhsqnextKVR(rdr) ##Returns a list of two elements K,V
if(is.null(value)) {
message(rdr$df[rdr$current])
rdr <- rhsqnextpath(rdr)
if(is.null(rdr)) break;
}
h[[i]] <- value; ##A1
i <- i+1
}
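The rhsq* functions are not available here, but a stripped-down comparison (an assumption-laden sketch, not the original workload) can separate the cost of the assignment itself from the cost of growing a list:

```r
n <- 1e5
# Growing the list inside the loop: repeated reallocation.
t.grow <- system.time({
  h <- list()
  for (i in 1:n) h[[i]] <- i
})["elapsed"]

# Preallocated list, as in line A0: assignment into an existing slot.
t.prealloc <- system.time({
  h <- vector("list", n)
  for (i in 1:n) h[[i]] <- i
})["elapsed"]

c(growing = t.grow, preallocated = t.prealloc)
```

If the preallocated version is fast in isolation, the extra time in the real workload likely comes from keeping every value reachable between iterations rather than from resizing.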
------------------------------
Message: 32
Date: Mon, 30 Mar 2009 08:24:36 -0700
From: Bert Gunter <gunter.berton@gene.com>
Subject: Re: [R] Matrix max by row
To: "'Wacek Kusnierczyk'"
<Waclaw.Marcin.Kusnierczyk@idi.ntnu.no>,
"'Rolf Turner'" <r.turner@auckland.ac.nz>
Cc: r-help@r-project.org
Message-ID: <001301c9b14b$a2e99390$3a0b2c0a@gne.windows.gene.com>
Content-Type: text/plain; charset="US-ASCII"
Serves me right, I suppose. Timing seems also very dependent on the
dimensions of the matrix. Here's what I got with my inadequate test:
> x <- matrix(rnorm(3e5),ncol=3)
## via apply
> system.time(apply(x,1,max))
   user  system elapsed
   2.09    0.02    2.10
## via pmax
> system.time(do.call(pmax,data.frame(x)))
   user  system elapsed
   0.10    0.02    0.11
Draw your own conclusions!
Cheers,
Bert
Bert Gunter
Genentech Nonclinical Biostatistics
650-467-7374
-----Original Message-----
From: Wacek Kusnierczyk [mailto:Waclaw.Marcin.Kusnierczyk@idi.ntnu.no]
Sent: Monday, March 30, 2009 2:33 AM
To: Rolf Turner
Cc: Bert Gunter; 'Ana M Aparicio Carrasco'; r-help@r-project.org
Subject: Re: [R] Matrix max by row
Rolf Turner wrote:
> I tried the following:
>
> m <- matrix(runif(100000),1000,100)
> junk <- gc()
> print(system.time(for(i in 1:100) X1 <- do.call(pmax,data.frame(m))))
> junk <- gc()
> print(system.time(for(i in 1:100) X2 <- apply(m,1,max)))
>
> and got
>
> user system elapsed
> 2.704 0.110 2.819
> user system elapsed
> 1.938 0.098 2.040
>
> so unless there's something that I am misunderstanding (always a
> serious consideration) Wacek's apply method looks to be about 1.4
> times *faster* than the do.call/pmax method.
hmm, since i was called by name (i'm grateful, rolf), i feel obliged to
check the matters myself:
# dummy data, presumably a 'large matrix'?
n = 5e3
m = matrix(rnorm(n^2), n, n)
# what is to be benchmarked...
waku = expression(matrix(apply(m, 1, max), nrow(m)))
bert = expression(do.call(pmax,data.frame(m)))
# to be benchmarked
library(rbenchmark)
benchmark(replications=10, order='elapsed',
columns=c('test',
'elapsed'),
waku=matrix(apply(m, 1, max), nrow(m)),
bert=do.call(pmax,data.frame(m)))
takes quite a while, but here you go:
# test elapsed
# 1 waku 11.838
# 2 bert 20.833
where bert's solution seems to require a wonder to 'be considerably
faster for large matrices'. to have it fair, i also did
# to be benchmarked
library(rbenchmark)
benchmark(replications=10, order='elapsed',
columns=c('test',
'elapsed'),
bert=do.call(pmax,data.frame(m)),
waku=matrix(apply(m, 1, max), nrow(m)))
# test elapsed
# 2 waku 11.695
# 1 bert 20.912
take home point: a good product sells itself, a bad product may not sell
despite aggressive marketing.
rolf, thanks for pointing this out.
cheers,
vQ
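Bert's do.call/pmax trick can be unpacked: data.frame(m) turns the matrix into a list of its columns, and do.call passes each column as a separate argument to pmax, which computes the elementwise (row-wise) maximum. A quick check with the 3x3 matrix from the original question:

```r
# The example matrix from the original post, built column by column.
m <- matrix(c(2, 8, 1,    # column 1
              5, 7, 8,    # column 2
              3, 2, 4),   # column 3
            nrow = 3)
# data.frame(m) is a list of the 3 columns, so this is
# pmax(m[,1], m[,2], m[,3]) -- the row-wise maximum.
do.call(pmax, data.frame(m))
# 5 8 8, the same as apply(m, 1, max)
```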
> cheers,
>
> Rolf Turner
>
>
> On 30/03/2009, at 3:55 PM, Bert Gunter wrote:
>
>> If speed is a consideration,availing yourself of the built-in pmax()
>> function via
>>
>> do.call(pmax,data.frame(yourMatrix))
>>
>> will be considerably faster for large matrices.
>>
>> If you are puzzled by why this works, it is a useful exercise in R to
>> figure
>> it out.
>>
>> Hint:The man page for ?data.frame says:
>> "A data frame is a list of variables of the same length with
unique row
>> names, given class 'data.frame'."
>>
>> Cheers,
>> Bert
>>
>> Bert Gunter
>> Genentech Nonclinical Statistics
>>
>> -----Original Message-----
>> From: r-help-bounces@r-project.org
>> [mailto:r-help-bounces@r-project.org] On
>> Behalf Of Wacek Kusnierczyk
>> Sent: Saturday, March 28, 2009 5:22 PM
>> To: Ana M Aparicio Carrasco
>> Cc: r-help@r-project.org
>> Subject: Re: [R] Matrix max by row
>>
>> Ana M Aparicio Carrasco wrote:
>>> I need help about how to obtain the max by row in a matrix.
>>> For example if I have the following matrix:
>>> 2 5 3
>>> 8 7 2
>>> 1 8 4
>>>
>>> The max by row will be:
>>> 5
>>> 8
>>> 8
>>>
>>
>> matrix(apply(m, 1, max), nrow(m))
>>
>> vQ
>>
>> ______________________________________________
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>> ______________________________________________
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>
>
> ######################################################################
> Attention: This e-mail message is privileged and confidential. If you
> are not the intended recipient please delete the message and notify
> the sender. Any views or opinions presented are solely those of the
> author.
>
> This e-mail has been scanned and cleared by MailMarshal
> www.marshalsoftware.com
> ######################################################
------------------------------
Message: 33
Date: Mon, 30 Mar 2009 11:26:45 -0400
From: milton ruser <milton.ruser@gmail.com>
Subject: Re: [R] (no subject)
To: ankhee dutta <ankheedutta@gmail.com>
Cc: r-help@r-project.org
Message-ID:
<3aaf1a030903300826w4ba67c90tbeec57690af2abc6@mail.gmail.com>
Content-Type: text/plain
How about you include a thread like
"Problem with R 2.3.0 and MySQL on Mandriva-2007".
Bests,
milton
On Mon, Mar 30, 2009 at 7:07 AM, ankhee dutta <ankheedutta@gmail.com>
wrote:
> Hi, All
> I have a Linux system (Mandriva-2007) with R version 2.3.0 and MySQL
> version 5.0.0. I also have the DBI R database interface, version 0.1-11,
> installed on the system. While installing the RMySQL package, version
> 0.5-11, I am facing the problem mentioned below.
>
>
>
> * Installing *source* package 'RMySQL' ...
> creating cache ./config.cache
> checking how to run the C preprocessor... cc -E
> checking for compress in -lz... yes
> checking for getopt_long in -lc... yes
> checking for mysql_init in -lmysqlclient... no
> checking for mysql.h... no
> checking for mysql_init in -lmysqlclient... no
> checking for mysql_init in -lmysqlclient... no
> checking for mysql_init in -lmysqlclient... no
> checking for mysql_init in -lmysqlclient... no
> checking for mysql_init in -lmysqlclient... no
> checking for /usr/local/include/mysql/mysql.h... no
> checking for /usr/include/mysql/mysql.h... no
> checking for /usr/local/mysql/include/
> mysql/mysql.h... no
> checking for /opt/include/mysql/mysql.h... no
> checking for /include/mysql/mysql.h... no
>
> Configuration error:
> could not find the MySQL installation include and/or library
> directories. Manually specify the location of the MySQL
> libraries and the header files and re-run R CMD INSTALL.
>
> INSTRUCTIONS:
>
> 1. Define and export the 2 shell variables PKG_CPPFLAGS and
> PKG_LIBS to include the directory for header files (*.h)
> and libraries, for example (using Bourne shell syntax):
>
> export PKG_CPPFLAGS="-I<MySQL-include-dir>"
> export PKG_LIBS="-L<MySQL-lib-dir> -lmysqlclient"
>
> Re-run the R INSTALL command:
>
> R CMD INSTALL RMySQL_<version>.tar.gz
>
> 2. Alternatively, you may pass the configure arguments
> --with-mysql-dir=<base-dir> (distribution directory)
> or
> --with-mysql-inc=<base-inc> (where MySQL header files reside)
> --with-mysql-lib=<base-lib> (where MySQL libraries reside)
> in the call to R INSTALL --configure-args='...'
>
> R CMD INSTALL --configure-args='--with-mysql-dir=DIR'
> RMySQL_<version>.tar.gz
>
> ERROR: configuration failed for package 'RMySQL'
> ** Removing '/usr/lib/R/library/RMySQL'
>
>
>
>
>
> Any help will be great.
> Thankyou in advance.
>
> --
> Ankhee Dutta
> project trainee,
> JNU,New Delhi-67
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
>
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
[[alternative HTML version deleted]]
------------------------------
Message: 34
Date: Mon, 30 Mar 2009 17:27:26 +0200
From: "Gaj Vidmar" <gaj.vidmar@mf.uni-lj.si>
Subject: Re: [R] [OT] Contacting "Introductory Statistics for
EngineeringExperimentation" authors
To: r-help@stat.math.ethz.ch
Message-ID: <gqqod0$j1j$1@ger.gmane.org>
Two authors appear to be the same as those of the book "Analysis of Means"
(ANOM), which I read and which has a website at
http://www.analysisofmeans.com/
If I remember correctly, Mr. Nelson is deceased, but you might nevertheless
reach Mrs. Copeland following the Contact Us link at the ANOM website, which
leads to info@analysisofmeans.com.
Or, you might be able to contact her through Boulder Statistics (another
link at the ANOM website) at http://www.boulderstats.com/
(getinfo@boulderstats.com).
Regards,
Assist.Prof. Gaj Vidmar, PhD
Institute for Rehabilitation, Republic of Slovenia
"Douglas Bates" <bates@stat.wisc.edu> wrote in message
news:40e66e0b0903300725k55ac5294m50f4f953047b0287@mail.gmail.com...
> I have been examining the text "Introductory Statistics for
> Engineering Experimentation" by Peter R. Nelson, Marie Coffin and
> Karen A.F. Copeland (Elsevier, 2003). There are several interesting
> data sets used in the book and I plan to create an R package for them.
> I would like to contact the surviving authors (apparently Peter R.
> Nelson died in 2004) but have not been able to obtain contact
> information for them. According to the preface the book was developed
> for an intro engineering stats course at Clemson however no one at
> Clemson could provide any leads. Does anyone on this list have
> contact information for Marie Coffin or Karen A.F. Copeland? I have
> been unsuccessful in various google searches.
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 35
Date: Mon, 30 Mar 2009 08:28:15 -0700 (PDT)
From: Ken-JP <kfmfe04@gmail.com>
Subject: [R] Excellent Talk on Statistics (Good examples of stat.
visualization)
To: r-help@r-project.org
Message-ID: <22785778.post@talk.nabble.com>
Content-Type: text/plain; charset=us-ascii
Here is a talk with very good examples of statistical visualization:
"Talks Hans Rosling: Debunking third-world myths with the best stats
you've
ever seen"
http://www.ted.com/index.php/talks/hans_rosling_shows_the_best_stats_you_ve_ever_seen.html
--
View this message in context:
http://www.nabble.com/Excellent-Talk-on-Statistics-%28Good-examples-of-stat.-visualization%29-tp22785778p22785778.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 36
Date: Mon, 30 Mar 2009 12:33:52 -0300
From: Mike Lawrence <Mike.Lawrence@dal.ca>
Subject: Re: [R] how to input multiple .txt files
To: baptiste auguie <ba208@exeter.ac.uk>
Cc: Qianfeng Li <qflichem@yahoo.com>, "r-help@r-project.org"
<r-help@r-project.org>
Message-ID:
<37fda5350903300833u6abfd0cah7d52dfcecfb7d202@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
To repent for my sins, I'll also suggest that Hadley Wickham's
"plyr"
package (http://had.co.nz/plyr/) is also useful/parsimonious in this
context:
a <- ldply(cust1_files,read.table)
On Mon, Mar 30, 2009 at 9:32 AM, baptiste auguie <ba208@exeter.ac.uk>
wrote:
> may i suggest the following,
>
>
> a <- do.call(rbind, lapply(cust1_files, read.table))
>
> (i believe expanding objects in a for loop belong to the R Inferno)
>
> baptiste
>
> On 30 Mar 2009, at 12:58, Mike Lawrence wrote:
>
>>
>> cust1_files <-
>> list.files(path=path_to_my_files,pattern='cust1',full.names=TRUE)
>> a=NULL
>> for(this_file in cust1_files){
>>     a=rbind(a,read.table(this_file))
>> }
>> write.table(a,'cust1.master.txt')
>
> _____________________________
>
> Baptiste Auguié
>
> School of Physics
> University of Exeter
> Stocker Road,
> Exeter, Devon,
> EX4 4QL, UK
>
> Phone: +44 1392 264187
>
> http://newton.ex.ac.uk/research/emag
> ______________________________
>
>
--
Mike Lawrence
Graduate Student
Department of Psychology
Dalhousie University
Looking to arrange a meeting? Check my public calendar:
http://tinyurl.com/mikes-public-calendar
~ Certainty is folly... I think. ~
------------------------------
Message: 37
Date: Mon, 30 Mar 2009 15:45:40 +0000
From: Steve Murray <smurray444@hotmail.com>
Subject: Re: [R] Column name assignment problem
To: <p.dalgaard@biostat.ku.dk>
Cc: r-help@r-project.org
Message-ID: <BAY135-W25D48D9DC29CF8F9740CFA888D0@phx.gbl>
Content-Type: text/plain; charset="Windows-1252"
Dear Peter, Jim and all,
Thanks for the information regarding how to structure 'assign' commands.
I've had a go at doing this, based on your advice, and although I feel
I'm a lot closer now, I can't quite get it to work:
rnames <- sprintf("%.2f", seq(from = -89.75, to = 89.75, length = 360))
columnnames <- sprintf("%.2f", seq(from = -179.75, to = 179.75, length = 720))
for (i in 1:120) {
    Fekete_table <- get(paste("Fekete_", index$year[i], index$month[i], sep=''))
    colnames(Fekete_table) <- columnnames
    rownames(Fekete_table) <- rnames
    assign(paste("Fekete_", index$year[i], index$month[i], sep=''),
           colnames(Fekete_table))
}
This assigns the column headings to each table, so that each table doesn't
contain data any longer, but simply the column values. I tried inserting
assign(colnames(paste("Fekete_"...) but this resulted in the type of
error that was mentioned in the previous message. I've run dry of ideas as
to how I should restructure the commands, so would be grateful for any pointers.
Many thanks,
Steve
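The symptom described above points at the assign() call: it stores colnames(Fekete_table), a character vector, rather than the relabelled table. A sketch of the fix, keeping the same Fekete_ naming scheme:

```r
for (i in 1:120) {
  nm <- paste("Fekete_", index$year[i], index$month[i], sep = "")
  Fekete_table <- get(nm)
  colnames(Fekete_table) <- columnnames
  rownames(Fekete_table) <- rnames
  assign(nm, Fekete_table)  # store the table itself, not its column names
}
```

(Keeping the 120 tables in a named list rather than 120 separate variables would avoid the get/assign dance entirely.)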
_________________________________________________________________
[[elided Hotmail spam]]
------------------------------
Message: 38
Date: Mon, 30 Mar 2009 08:46:58 -0700
From: Michael Lawrence <mflawren@fhcrc.org>
Subject: Re: [R] Mature SOAP Interface for R
To: zubin <binabina@bellsouth.net>
Cc: r-help@r-project.org
Message-ID:
<509e0620903300846x19791c59y56bbaaa5c5920543@mail.gmail.com>
Content-Type: text/plain
On Sat, Mar 28, 2009 at 6:08 PM, zubin <binabina@bellsouth.net> wrote:
> Hello, we are writing rich internet user interfaces and like to call R for
> some of the computational needs on the data, as well as some creation of
> image files. Our objects communicate via the SOAP interface. We have been
> researching the various packages to expose R as a SOAP service.
>
> No current CRAN SOAP packages however.
>
> Found 3 to date:
>
> RSOAP (http://sourceforge.net/projects/rsoap/)
> SSOAP http://www.omegahat.org/SSOAP/
>
> looks like a commercial version?
> http://random-technologies-llc.com/products/rsoap
>
> Does anyone have experience with these 3 and can recommend the most
> 'mature' R - SOAP interface package?
>
Well, SSOAP is (the last time I checked) just a SOAP client. rsoap (if we're
talking about the same package) is actually a python SOAP server that
communicates to R via rpy.
You might want to check out the RWebServices package in Bioconductor. I
think it uses Java for its SOAP handling.
Michael
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
[[alternative HTML version deleted]]
------------------------------
Message: 39
Date: Mon, 30 Mar 2009 10:06:01 -0600
From: Greg Snow <Greg.Snow@imail.org>
Subject: Re: [R] unicode only works with a second one
To: Thomas Steiner <finbref.2006@gmail.com>,
"r-help@stat.math.ethz.ch" <r-help@stat.math.ethz.ch>
Message-ID:
<B37C0A15B8FB3C468B5BC7EBC7DA14CC61CDD76B4D@LP-EXMBVS10.CO.IHC.COM>
Content-Type: text/plain; charset="us-ascii"
I don't know how to help with the Unicode issue, but one alternative is the
my.symbols function in the TeachingDemos package (see ?ms.male as well as
?my.symbols).
Hope this helps,
--
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.snow@imail.org
801.408.8111
> -----Original Message-----
> From: r-help-bounces@r-project.org [mailto:r-help-bounces@r-
> project.org] On Behalf Of Thomas Steiner
> Sent: Saturday, March 28, 2009 9:19 AM
> To: r-help@stat.math.ethz.ch
> Subject: [R] unicode only works with a second one
>
> I'd like to paste a zodiac sign on a graph, but it only prints it when
> I add another unicode ( \u3030) to the desired \u2648 - why?
> See the examplecode (compare the orange with the skyblue):
>
> plot(c(-1,1),c(-4,-2),type="n")
> text(x=0,y=-3.0,labels="\u2648 \u3030",cex=2.3,col="skyblue")
> text(x=0,y=-3.2,labels="\u2648",cex=2.3,col="orange")
> zodiac=c("\u2642 \u2643 \u2644 \u2645 \u2646 \u2647 \u2648 \u2649
> \u2650 \u2651 \u2652 \u2653")
> text(x=0,y=-3.5,labels=paste(zodiac,"\u3030"),cex=2.3,col="navy")
>
> I use R version 2.8.1 (2008-12-22) under MS Windows Vista.
> Thanks for help
> Thomas
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-
> guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 40
Date: Mon, 30 Mar 2009 17:11:25 +0100
From: Paul Smith <phhs80@gmail.com>
Subject: Re: [R] Constrined dependent optimization.
To: r-help@r-project.org
Message-ID:
<6ade6f6c0903300911t241a363br503fa872d54693e5@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Optim with SANN also solves your example:
-------------------------------------------
f <- function(x) sum(c(1:50,50:1)*x)
swapfun <- function(x,N=100) {
loc <- sample(N,size=2,replace=FALSE)
tmp <- x[loc[1]]
x[loc[1]] <- x[loc[2]]
x[loc[2]] <- tmp
x
}
N <- 100
opt1 <-
optim(fn=f,par=sample(1:N,N),gr=swapfun,method="SANN",control=list(maxit=50000,fnscale=-1,trace=10))
opt1$par
opt1$value
-------------------------------------------
We need to specify a large number of iterations to get the optimal
solution. The objective function at the optimum is 170425, and one
gets a close value with optim and SANN.
Paul
On Mon, Mar 30, 2009 at 2:22 PM, Hans W. Borchers
<hwborchers@googlemail.com> wrote:
>
> Image you want to minimize the following linear function
>
>    f <- function(x) sum( c(1:50, 50:1) * x / (50*51) )
>
> on the set of all permutations of the numbers 1,..., 100.
>
> I wonder how will you do that with lpSolve? I would simply order
> the coefficients and then sort the numbers 1,...,100 accordingly.
>
> I am also wondering how optim with "SANN" could be applied here.
>
> As this is a problem in the area of discrete optimization resp.
> constraint programming, I propose to use an appropriate program
> here such as the free software Bprolog. I would be interested to
> learn what others propose.
>
> Of course, if we don't know anything about the function f then
> it amounts to an exhaustive search on the 100! permutations --
> probably not a feasible job.
>
> Regards, Hans Werner
>
>
>
> Paul Smith wrote:
>>
>> On Sun, Mar 29, 2009 at 9:45 PM, <rkevinburton@charter.net>
wrote:
>>> I have an optimization question that I was hoping to get some
>>> suggestions on how best to go about solving it. I would think there
>>> is probably a package that addresses this problem.
>>>
>>> This is an ordering optimization problem. It is best described with
>>> a simple example. Say I have 100 "bins", each with a ball in it,
>>> numbered from 1 to 100. Each bin can only hold one ball. The
>>> optimization is that I have a function 'f' that takes this array of
>>> bins and returns a number. The number returned from f(1,2,3,4,...)
>>> would differ from that returned by f(2,1,3,4,...). The optimization
>>> is finding the order of these balls that produces a minimum value
>>> from 'f'. I cannot use the regular 'optim' algorithms because a) the
>>> values are discrete, b) the values are dependent, i.e. when the
>>> "variable" representing the bin location is changed (in this example
>>> a new ball is put there) the existing ball will need to be moved to
>>> another bin (probably swapping positions), and c) each "variable" is
>>> constrained; in the example above the only allowable values are
>>> integers from 1-100. So the problem becomes finding the optimum
>>> order of the "balls".
>>>
>>> Any suggestions?
>>
>> If your function f is linear, then you can use lpSolve.
>>
>> Paul
>>
>> ______________________________________________
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>>
>
> --
> View this message in context:
http://www.nabble.com/Constrined-dependent-optimization.-tp22772520p22782922.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
------------------------------
Message: 41
Date: Mon, 30 Mar 2009 16:12:36 +0000 (GMT)
From: Alphonse Monkamg <amonkamg@yahoo.fr>
Subject: [R] Nonparametric analysis of repeated measurements data with
sm library
To: r-help@r-project.org
Message-ID: <354705.19298.qm@web25907.mail.ukl.yahoo.com>
Content-Type: text/plain
Dear all,
Does anybody know how to get more evaluation points when performing
nonparametric analysis of repeated measurements data with the "sm"
library? The following command gives the estimate at 50 points, but I
would like to increase this to 100 points, and I do not know how to do
that.
library(sm)
provide.data(citrate, options=list(describe=FALSE))
provide.data(dogs, options=list(describe=FALSE))
a <- sm.rm(y=citrate, display.rice=TRUE)
a$eval.points
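One thing to try (an assumption on my part, not confirmed in this thread: that sm.rm() passes display settings through to the sm machinery, whose `ngrid` option controls the size of the evaluation grid; see ?sm.options):

```
library(sm)
provide.data(citrate, options = list(describe = FALSE))
# Hypothetical: request a 100-point evaluation grid via the ngrid option
a <- sm.rm(y = citrate, display.rice = TRUE, ngrid = 100)
length(a$eval.points)  # check whether the grid size changed
```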
Many thanks in advance.
Alphonse
[[alternative HTML version deleted]]
------------------------------
Message: 42
Date: Mon, 30 Mar 2009 17:13:28 +0200
From: Analisi Dati <analisi.dati@sdn-napoli.it>
Subject: [R] HELP WITH SEM LIBRARY AND WITH THE MODEL'S SPECIFICATION
To: r-help@r-project.org
Message-ID:
<11053966.128261238426008377.JavaMail.root@mail.sdn-napoli.it>
Content-Type: text/plain
Dear users,
I'm using the sem package in R because I need to perform a confirmatory
factor analysis.
I have many questions in my survey, and I suppose, for example, that
Question 1 (Q1), Q2 and Q3 explain the same thing (factor F1), Q4, Q5 and Q6
explain F2, and Q7 and Q8 explain F3...
To check whether what I supposed is true, I ran the code below to see
whether the loadings are large or not.
(In this code I used more than 3 factors)
library("sem")
#put in "mydata", the value of the questions
mydata <-
data.frame(X$X12a,X$X12b,X$X12c,X$X12d,X$X12e,X$X12f,X$X12g,X$X12h,X$X12i,X$X12l,X$X12m,X$X12n,X$X12o,X$X12p,X$X12q,X$X12r,X$X12s,X$X1a,X$X1b,X$X1c,X$X1d,X$X1e,X$X1f,X$X3h,X$X3i,X$X3l,X$X3m,X$X3n,X$X3o,X$X3p,X$X3q,X$X3r,X$X3s,X$X3t,X$X3u,X$X3v,X$X4a,X$X5q,X$X5r,X$X5s,X$X8a,X$X8b,X$X8c,X$X8d)
#i calculate the covariance of the data
mydata.cov <- cov(mydata,use="complete.obs")
#I specify my model
model.mydata <- specify.model()
F1 -> X.X12a, lam1, NA
F1 -> X.X12b, lam2, NA
F1 -> X.X12c, lam3, NA
F1 -> X.X12d, lam4, NA
F1 -> X.X12e, lam5, NA
F1 -> X.X12f, lam6, NA
F1 -> X.X12g, lam7, NA
F2 -> X.X12h, lam8, NA
F2 -> X.X12i, lam9, NA
F2 -> X.X12l, lam10, NA
F2 -> X.X12m, lam11, NA
F2 -> X.X12n, lam12, NA
F2 -> X.X12o, lam13, NA
F3 -> X.X12p, lam14, NA
F3 -> X.X12q, lam15, NA
F3 -> X.X12r, lam16, NA
F3 -> X.X12s, lam17, NA
F4 -> X.X1a, lam18, NA
F4 -> X.X1b, lam19, NA
F4 -> X.X1c, lam20, NA
F4 -> X.X1d, lam21, NA
F4 -> X.X1e, lam22, NA
F4 -> X.X1f, lam23, NA
F5 -> X.X3h, lam24, NA
F5 -> X.X3i, lam25, NA
F5 -> X.X3l, lam26, NA
F5 -> X.X3m, lam27, NA
F5 -> X.X3n, lam28, NA
F5 -> X.X3o, lam29, NA
F5 -> X.X3p, lam30, NA
F5 -> X.X3q, lam31, NA
F6 -> X.X3r, lam32, NA
F6 -> X.X3s, lam33, NA
F6 -> X.X3t, lam34, NA
F6 -> X.X3u, lam35, NA
F6 -> X.X3v, lam36, NA
F6 -> X.X4a, lam37, NA
F7 -> X.X5q, lam38, NA
F7 -> X.X5r, lam39, NA
F7 -> X.X5s, lam40, NA
F8 -> X.X8a, lam41, NA
F8 -> X.X8b, lam42, NA
F8 -> X.X8c, lam43, NA
F8 -> X.X8d, lam44, NA
X.X12a <-> X.X12a, e1, NA
X.X12b <-> X.X12b, e2, NA
X.X12c <-> X.X12c, e3, NA
X.X12d <-> X.X12d, e4, NA
X.X12e <-> X.X12e, e5, NA
X.X12f <-> X.X12f, e6, NA
X.X12g <-> X.X12g, e7, NA
X.X12h <-> X.X12h, e8, NA
X.X12i <-> X.X12i, e9, NA
X.X12l <-> X.X12l, e10, NA
X.X12m <-> X.X12m, e11, NA
X.X12n <-> X.X12n, e12, NA
X.X12o <-> X.X12o, e13, NA
X.X12p <-> X.X12p, e14, NA
X.X12q <-> X.X12q, e15, NA
X.X12r <-> X.X12r, e16, NA
X.X12s <-> X.X12s, e17, NA
X.X1a <-> X.X1a, e18, NA
X.X1b <-> X.X1b, e19, NA
X.X1c <-> X.X1c, e20, NA
X.X1d <-> X.X1d, e21, NA
X.X1e <-> X.X1e, e22, NA
X.X1f <-> X.X1f, e23, NA
X.X3h <-> X.X3h, e24, NA
X.X3i <-> X.X3i, e25, NA
X.X3l <-> X.X3l, e26, NA
X.X3m <-> X.X3m, e27, NA
X.X3n <-> X.X3n, e28, NA
X.X3o <-> X.X3o, e29, NA
X.X3p <-> X.X3p, e30, NA
X.X3q <-> X.X3q, e31, NA
X.X3r <-> X.X3r, e32, NA
X.X3s <-> X.X3s, e33, NA
X.X3t <-> X.X3t, e34, NA
X.X3u <-> X.X3u, e35, NA
X.X3v <-> X.X3v, e36, NA
X.X4a <-> X.X4a, e37, NA
X.X5q <-> X.X5q, e38, NA
X.X5r <-> X.X5r, e39, NA
X.X5s <-> X.X5s, e40, NA
X.X8a <-> X.X8a, e41, NA
X.X8b <-> X.X8b, e42, NA
X.X8c <-> X.X8c, e43, NA
X.X8d <-> X.X8d, e44, NA
F1 <-> F1, NA, 1
F2 <-> F2, NA, 1
F3 <-> F3, NA, 1
F4 <-> F4, NA, 1
F5 <-> F5, NA, 1
F6 <-> F6, NA, 1
F7 <-> F7, NA, 1
F8 <-> F8, NA, 1
mydata.sem <- sem(model.mydata, mydata.cov, nrow(mydata))
# print results (fit indices, paramters, hypothesis tests)
summary(mydata.sem)
# print standardized coefficients (loadings)
std.coef(mydata.sem)
Now my problems and questions are various:
1) In "mydata" do I need to have only the questions, or also my latent
variables? In other words, I suppose that the mean of Q1, Q2, Q3 gives me a
variable called "OCB". Do I also need this mean in mydata?
2) In the specification of my model I didn't use anything like
"F1 <-> F2......"; is this a problem? What does such a line indicate?
That I have a mediation/moderation effect between variables?
3) Now, if you look at my code, you can see that I don't put the mean
value called "OCB" (see point 1) in "mydata", and I don't write anything
about the relation between F1 and F2, and when I run the
sem function I receive these warnings:
1: In sem.default(ram = ram, S = S, N = N, param.names = pars, var.names =
vars, :
S is numerically singular: expect problems
2: In sem.default(ram = ram, S = S, N = N, param.names = pars, var.names =
vars, :
S is not positive-definite: expect problems
3: In sem.default(ram = ram, S = S, N = N, param.names = pars, var.names =
vars, :
Could not compute QR decomposition of Hessian.
Optimization probably did not converge.
and after the summary I receive this error:
coefficient covariances cannot be computed
What can I do about all this?
Hoping for your interest in this problem, I wish you the best.
[[elided Yahoo spam]]
[[alternative HTML version deleted]]
------------------------------
Message: 43
Date: Mon, 30 Mar 2009 10:46:31 -0400
From: "HRISHIKESH D. VINOD" <vinod@fordham.edu>
Subject: [R] NY City Conf for Enthusiastic Users of R, June 18-19,
2009
To: r-help@R-project.org
Message-ID:
<OF0518A42C.BF69E4D7-ON85257589.005129CD-85257589.005129D4@fordham.edu>
Content-Type: text/plain; charset=US-ASCII
Conference on Quantitative Social Science
Research Using R
June 18-19 (Thursday-Friday), 2009, Fordham University, 113 West 60th
Street, New York. (next door to Lincoln Center for Performing Arts).
conf. website: http://www.cis.fordham.edu/QR2009
Hrishikesh (Rick) D. Vinod
Professor of Economics, Fordham University
author of new econometrics book using R:
http://www.worldscibooks.com/economics/6895.html
------------------------------
Message: 44
Date: Mon, 30 Mar 2009 12:19:32 -0400
From: Stephan Lindner <lindners@umich.edu>
Subject: [R] Importing csv file with character values into sqlite3 and
subsequent problem in R / RSQLite
To: r-help@stat.math.ethz.ch
Message-ID: <20090330161932.GH22278@umich.edu>
Content-Type: text/plain; charset=us-ascii
Dear all,
I'm trying to import a csv file into sqlite3 and from there into
R. Everything looks fine except that R outputs the character values in
an odd fashion: they are shown as "\"CHARACTER\"" instead of
"CHARACTER", but only if I show the character variable as a
vector. Does someone know why this happens? Below is a sample
code. The first part is written in bash. Of course I could just
read.csv for the spreadsheet, but the real datasets are more than 3
GB, that's why I'm using RSQLite (which is really awesome!). Also, I
could get rid of the "" in the csv file (the csv file has only
numbers, but it is easier for me to use identifiers such as v1 as
character strings), but I thought I'd first see whether there is a
different way to solve this issue.
Thanks!
Stephan
<--
bash$ more example.csv
bash$ echo -e
"\"001074034\",90,1,7,89,12\n\"001074034\",90,1,1,90,12\n\"001074034\",90,1,2,90,12\n\"001074034\",90,1,3,90,12"
> example.csv
bash$ echo "create table t(v1,v2,v3,v4,v5,v6);" > example.sql
bash$ sqlite3 example.db < example.sql
bash$ echo -e ".separator , \n.import example.csv t" | sqlite3
example.db
bash$ R
> library(RSQLite)
Loading required package: DBI
> example.db <- dbConnect(SQLite(),"example.db")
> x <- dbGetQuery(example.db,"select * from t")
> x
v1 v2 v3 v4 v5 v6
1 "001074034" 90 1 7 89 12
2 "001074034" 90 1 1 90 12
3 "001074034" 90 1 2 90 12
4 "001074034" 90 1 3 90 12
> x$v1
[1] "\"001074034\"" "\"001074034\"" "\"001074034\"" "\"001074034\""
-->
Only the codes:
<--
more example.csv
echo -e
"\"001074034\",90,1,7,89,12\n\"001074034\",90,1,1,90,12\n\"001074034\",90,1,2,90,12\n\"001074034\",90,1,3,90,12"
> example.csv
echo "create table t(v1,v2,v3,v4,v5,v6);" > example.sql
sqlite3 example.db < example.sql
echo -e ".separator , \n.import example.csv t" | sqlite3 example.db
R
library(RSQLite)
example.db <- dbConnect(SQLite(),"example.db")
x <- dbGetQuery(example.db,"select * from t")
x
x$v1
-->
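One possible workaround (a sketch, not from the original message: the literal quotes survive sqlite3's .import, so strip them in R after querying):

```
library(RSQLite)
con <- dbConnect(SQLite(), "example.db")
x <- dbGetQuery(con, "select * from t")
# remove the embedded double quotes left over from the CSV import
x$v1 <- gsub('"', '', x$v1)
x$v1
```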
--
-----------------------
Stephan Lindner
University of Michigan
------------------------------
Message: 45
Date: Mon, 30 Mar 2009 18:35:58 +0200
From: "Millo Giovanni" <Giovanni_Millo@Generali.com>
Subject: [R] pgmm (Blundell-Bond) sample needed)
To: "Ivo Welch" <ivo.welch@gmail.com>,
<r-help@r-project.org>
Cc: yves croissant <yves.croissant@let.ish-lyon.cnrs.fr>
Message-ID:
<28643F754DDB094D8A875617EC4398B202AE7947@BEMAILEXTV03.corp.generali.net>
Content-Type: text/plain; charset="iso-8859-1"
Dear Ivo, dear list,
(see: Message: 70
Date: Thu, 26 Mar 2009 21:39:19 +0000
From: ivowel@gmail.com
Subject: [R] pgmm (Blundell-Bond) sample needed)
I think I finally figured out how to replicate your supersimple GMM
example with pgmm() so as to get the very same results as Stata.
Having no other regressors in the formula initially drove me crazy. This was a
case where simpler models are trickier than more
complicated ones!
For the benefit of other GMM people on this list, here's a brief résumé
of our rather long private mail exchange of these days, answering to
some other pgmm()-related posts which have appeared on this list
lately. Sorry for the overlong posting but it might be worth the space.
I will refer to the very good Stata tutorial by David Roodman that Ivo
himself pointed me to, which gives a nice
(and free) theoretical intro as well. Please (the others) find it
here: http://repec.org/nasug2006/howtodoxtabond2.cgdev.pdf. As far as
textbooks are concerned, Arellano's
panel data book (Oxford) is the theoretical reference I would
suggest.
There have been two separate issues:
- syntax (how to get the right model)
- small sample behaviour (minimal time dimension to get estimates)
I'll start with this last one, then provide a quick "Rosetta
stone" of
pgmm() and Stata commands producing the same results. The established
benchmarks for dynamic panels' GMM are the DPD routines written by
Arellano et al. for Gauss and later Ox, but Stata is proven to give the same
results, and it is the established general reference for panel
data. Lastly I will add the usual examples found in the literature,
although they are very close relatives of 'example(pgmm)', so as to
show the correspondence between the models.
1) Small samples and N-asymptotics:
GMM needs big N, small T. Else you end up having more instruments than
observations and you get a "singular matrix" error (which, as Ivo
correctly found out, happens in the computation of the optimal
weights' matrix). While this is
probably going to be replaced with a more descriptive error
message, it still gets to the heart of the matter.
Yet Stata
gives you estimates in this case as well: as I suspected, it is
because it uses a generalized inverse (see Roodman's tutorial,
2.6). This looks theoretically ok. Whether this is meaningful in
applied practice is an issue I will discuss with the package
maintainer. IMHO it is not, apart maybe for illustrative purposes, and
it might well encourage bad habits (see the discussion about (not)
fitting the Grunfeld model by GMM on this list, some weeks ago).
2) fitting the simple models
Simplest possible model: AR(1) with individual effects
x(i,t)= a*(x(i,t-1)) + bi + c
This is what Ivo asked for in the first place. As the usual example is on data
from the Arellano and Bond paper,
available in package 'plm' as
> data(EmplUK)
I'll use log(emp) from this dataset as 'x', for ease of
reproducibility. Same data are
available in Stata by 'use
"http://www.stata-press.com/data/r7/abdata.dta"'. The Stata
dataset is
identical but for the variable names and the fact that in Stata you
have to generate logs beforehand (ugh!). I'm also adding the
'nomata' option to avoid complications, but this will be unnecessary on
most systems (not on mine...).
The system-GMM estimator (with robust SEs) in Stata is 'xtabond2 n
nL1, gmm(L.(n)) nomata robust' whose R equivalent is:
> sysmod <- pgmm( dynformula( log(emp) ~ 1, list(1)), data=EmplUK,
+ gmm.inst=~log(emp), lag.gmm=c(2,99),
+ effect="individual", model="onestep", transformation="ld" )
> summary(sysmod, robust=TRUE)
(note that although 'summary(sysmod)' does not report a constant, it's
actually there; this is an issue to be checked).
while the difference-GMM is 'xtabond2 n nL1, gmm(L.(n)) noleveleq
nomata robust', in R:
> diffmod <- pgmm( dynformula( log(emp) ~ 1, list(1)), data=EmplUK,
+ gmm.inst=~log(emp), lag.gmm=c(2,99),
+ effect="individual", model="onestep", transformation="d" )
> summary(diffmod, robust=TRUE)
The particular model Ivo asked for, using only lags 2-4 as
instruments, is 'xtabond2 x lx, gmm(L.(x),lag(1 3)) robust' in Stata
and only requires to set 'lag.gmm=c(2,4)' in the 'sysmod' above
(notice the difference in the lags specification!).
Note also that, unlike Ivo, I am using robust covariances.
3) fitting the standard examples from the literature.
'example(pgmm)' is a somewhat simplified version of the standard
Arellano-Bond example. For better comparability, here I am replicating
the results from the abest.do Stata script from
http://ideas.repec.org/c/boc/bocode/s435901.html (i.e., the results of
the Arellano and Bond paper done via xtabond2). The same output is also to
be found in Roodman's tutorial, 3.3.
Here's how to replicate the output of abest.do:
(must execute the preceding lines in the file as well for data transf.)
* Replicate difference GMM runs in Arellano and Bond 1991, Table 4
* Column (a1)
xtabond2 n L(0/1).(l.n w) l(0/2).(k ys) yr198?c cons, gmm(L.n) iv(L(0/1).w
l(0/2).(k ys) yr198?c cons) noleveleq noconstant robust nomata
replicated by:
> abmod1 <- pgmm(dynformula(log(emp)~log(wage)+log(capital)+log(output),list(2,1,2,2)),
+ data=EmplUK, effect="twoways", model="onestep",
+ gmm.inst=~log(emp), lag.gmm=list(c(2,99)), transformation="d")
> summary(abmod1, robust=TRUE)
* Column (a2)
xtabond2 n L(0/1).(l.n w) l(0/2).(k ys) yr198?c cons, gmm(L.n)
iv(L(0/1).w l(0/2).(k ys) yr198?c cons) noleveleq noconstant two
nomata
replicated by:
> mymod2 <- pgmm(dynformula(log(emp)~log(wage)+log(capital)+log(output),list(2,1,2,2)),
+ data=EmplUK, effect="twoways", model="twosteps",
+ gmm.inst=~log(emp), lag.gmm=list(c(2,99)), transformation="d")
> summary(mymod2)
* Column (b)
xtabond2 n L(0/1).(l.n ys w) k yr198?c cons, gmm(L.n) iv(L(0/1).(ys w) k yr198?c
cons) noleveleq noconstant two nomata
replicated by:
> abmod3 <- pgmm(dynformula(log(emp)~log(wage)+log(capital)+log(output),list(2,1,0,1)),
+ data=EmplUK, effect="twoways", model="twosteps",
+ gmm.inst=~log(emp), lag.gmm=list(c(2,99)), transformation="d")
> summary(abmod3)
The system versions from bbest.do (ibid.) can be estimated setting
'transformation="ld"' along the lines in 2)
Yves has done a great job, but any GMM estimator is bound to have a
lot of switches. I hope this helps, together with Roodman's paper, in
putting instruments and lags
into place. Sure it helped me to better figure out what pgmm() does.
Thanks to Ivo for motivating me and for ultimately finding most of the answers
by himself :^)
Giovanni
Giovanni Millo
Research Dept.,
Assicurazioni Generali SpA
Via Machiavelli 4,
34132 Trieste (Italy)
tel. +39 040 671184
fax +39 040 671160
Ai sensi del D.Lgs. 196/2003 si precisa che le informazi...{{dropped:13}}
------------------------------
Message: 46
Date: Mon, 30 Mar 2009 11:46:52 -0500
From: "Vadlamani, Satish {FLNA}" <SATISH.VADLAMANI@fritolay.com>
Subject: [R] 64 bit compiled version of R on windows
To: "'R-help@r-project.org'" <R-help@r-project.org>
Message-ID:
<3B95C29A9AFAF54BA2212277976B2891164917B445@PEPWMV00073.corp.pep.pvt>
Content-Type: text/plain; charset="us-ascii"
Hi:
1) Does anyone have experience with 64 bit compiled version of R on windows? Is
this available or one has to compile it oneself?
2) If we do compile the source in 64 bit, would we then need to compile any
additional modules also in 64 bit?
I am just trying to prepare for the time when I will get larger datasets to
analyze. Each of the datasets is about 1 GB in size and I will try to bring in
about 16 of them in memory at the same time. At least that is the plan.
I asked a related question in the past and someone recommended the product
RevolutionR - I am looking into this also. If you can think of any other
options, please mention. I have not been doing low level programming for a while
now and therefore, the self compilation on windows would be the least preferable
(and then I have to worry about how to compile any modules that I need). Thanks.
Satish
------------------------------
Message: 47
Date: Mon, 30 Mar 2009 11:56:52 -0500
From: hadley wickham <h.wickham@gmail.com>
Subject: Re: [R] how to input multiple .txt files
To: Mike Lawrence <Mike.Lawrence@dal.ca>
Cc: Qianfeng Li <qflichem@yahoo.com>, "r-help@r-project.org"
<r-help@r-project.org>
Message-ID:
<f8e6ff050903300956g5afdf183l993d7d51b5270c02@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
On Mon, Mar 30, 2009 at 10:33 AM, Mike Lawrence <Mike.Lawrence@dal.ca> wrote:
> To repent for my sins, I'll also suggest that Hadley Wickham's "plyr"
> package (http://had.co.nz/plyr/) is also useful/parsimonious in this
> context:
>
> a <- ldply(cust1_files,read.table)
You might also want to do
names(cust1_files) <- basename(cust1_files)
so that you can easily see where each part of the data came from
(although I think this will only work in the next version of plyr)
Hadley
--
http://had.co.nz/
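Putting the two suggestions from this thread together, a self-contained sketch (the "cust1" file pattern is assumed from the thread's context, and the naming trick may require a recent plyr version, as Hadley notes):

```
library(plyr)
# collect the .txt files and label them by file name, so the
# combined data frame records which file each row came from
cust1_files <- list.files(pattern = "^cust1.*\\.txt$", full.names = TRUE)
names(cust1_files) <- basename(cust1_files)
a <- ldply(cust1_files, read.table)
```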
------------------------------
Message: 48
Date: Mon, 30 Mar 2009 13:12:45 -0400
From: "John Fox" <jfox@mcmaster.ca>
Subject: Re: [R] HELP WITH SEM LIBRARY AND WITH THE MODEL'S
SPECIFICATION
To: "'Analisi Dati'" <analisi.dati@sdn-napoli.it>
Cc: r-help@r-project.org
Message-ID: <003801c9b15a$be824070$3b86c150$@ca>
Content-Type: text/plain; charset="us-ascii"
Dear Costantino,
> -----Original Message-----
> From: r-help-bounces@r-project.org [mailto:r-help-bounces@r-project.org]
> On Behalf Of Analisi Dati
> Sent: March-30-09 11:13 AM
> To: r-help@r-project.org
> Subject: [R] HELP WITH SEM LIBRARY AND WITH THE MODEL'S SPECIFICATION
>
> Dear users,
> i'm using the sem package in R, because i need to improve a
confermative
> factor analisys.
> I have so many questions in my survey, and i suppose, for example, that
> Question 1 (Q1) Q2 and Q3 explain the same thing (factor F1), Q4,Q5 and Q6
> explain F2 and Q7 and Q8 explain F3...
> For check that what i supposed is true, i run this code to see if the
> values of loadings are big or not.
> (In this code i used more than 3 factors)
>
. . . (many lines elided)
>
>
> Now the problems, and my questions, are various:
> 1)In "mydata" i need to have only the questions or also my latent
> variables? In other words, i suppose that the mean of Q1,Q2,Q3 give me a
> variable called "OCB". In mydata i need also this mean???
No. sem() recognizes as latent variables (F1, F2, etc.) those variables that
do not appear in the observed-variable covariance matrix. There are several
examples in ?sem that illustrate this point. Moreover, the latent variables
are not in general simply means of observed variables.
> 2)In the specification of my model, i didn't use nothing like
> "F1<->F2......", is this a problem? this sentence what indicates???
> that i have a mediation/moderation effect between variables???
By not specifying F1 <-> F2, you imply that the factors F1 and F2 are
uncorrelated. This isn't illogical, but it produces a very restrictive
model. Conversely, specifying F1 <-> F2 causes the covariance of F1 and F2
to be estimated; because you set the variances of the factors to 1, this
covariance would be the factor correlation.
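For illustration, freeing the factor covariances would mean adding lines like these to the specify.model() block quoted above (the parameter names rho12, rho13, ... are invented for this example):

```
F1 <-> F2, rho12, NA
F1 <-> F3, rho13, NA
F2 <-> F3, rho23, NA
```

With the factor variances fixed at 1, each rho parameter is then estimated as a factor correlation.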
> 3)Now, if you look my code,you could see that i don't put in
> "mydata" the mean value called "OCB" (see point 1), and i don't write
> nothing about the relation between F1 and F2, and when i run the
> sem function i receive these warnings:
>
> 1: In sem.default(ram = ram, S = S, N = N, param.names = pars, var.names =
> vars, :
> S is numerically singular: expect problems
> 2: In sem.default(ram = ram, S = S, N = N, param.names = pars, var.names =
> vars, :
That seems to me a reasonably informative error message: The
observed-variable covariance matrix is singular. This could happen, e.g., if
two observed variables are perfectly correlated, if an observed variable had
0 variance, or if there were more observed variables than observations.
> S is not positive-definite: expect problems
> 3: In sem.default(ram = ram, S = S, N = N, param.names = pars, var.names =
> vars, :
That S is singular implies that it is not positive-definite, but because a
non-singular matrix need not be positive-definite, sem() checks for both.
> Could not compute QR decomposition of Hessian.
> Optimization probably did not converge.
>
> and after the summary i receive this error:
>
> coefficient covariances cannot be computed
These are the problems that sem() told you to expect.
>
> What i can do for all this????
Without more information, it's not possible to know. You should figure out
why the observed-variable covariance matrix is singular.
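A quick diagnostic sketch along these lines may help narrow down the cause (S here stands for the observed-variable covariance matrix, mydata.cov in the original code; the 0.9999 cutoff is an arbitrary choice):

```
S <- mydata.cov
qr(S)$rank < ncol(S)    # TRUE if S is singular
which(diag(S) == 0)     # observed variables with zero variance
R <- cov2cor(S)         # look for (nearly) perfectly correlated pairs
which(abs(R) > 0.9999 & upper.tri(R), arr.ind = TRUE)
```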
I hope this helps,
John
>
> Hoping in your interest about this problem, i wish you the best.
>
[[elided Yahoo spam]]
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 49
Date: Mon, 30 Mar 2009 14:43:28 -0400
From: Duncan Murdoch <murdoch@stats.uwo.ca>
Subject: Re: [R] 64 bit compiled version of R on windows
To: "Vadlamani, Satish {FLNA}" <SATISH.VADLAMANI@fritolay.com>
Cc: "'R-help@r-project.org'" <R-help@r-project.org>
Message-ID: <49D112D0.900@stats.uwo.ca>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
On 3/30/2009 12:46 PM, Vadlamani, Satish {FLNA} wrote:
> Hi:
> 1) Does anyone have experience with 64 bit compiled version of R on
windows? Is this available or one has to compile it oneself?
> 2) If we do compile the source in 64 bit, would we then need to compile any
additional modules also in 64 bit?
R for Windows is compiled using the MinGW port of gcc, and the 64 bit
version of that compiler is not really ready for general use yet, so
compiling for 64 bits is not completely straightforward. Revolution
Computing has announced on the R-devel list that they are beta testing a
build, with some information at
http://www.revolution-computing.com/products/windows-64bit.php
The page says it is scheduled for release at the end of March, so there
should be something available soon.
Duncan Murdoch
>
> I am just trying to prepare for the time when I will get larger datasets to
analyze. Each of the datasets is about 1 GB in size and I will try to bring in
about 16 of them in memory at the same time. At least that is the plan.
>
> I asked a related question in the past and someone recommended the product
RevolutionR - I am looking into this also. If you can think of any other
options, please mention. I have not been doing low level programming for a while
now and therefore, the self compilation on windows would be the least preferable
(and then I have to worry about how to compile any modules that I need). Thanks.
>
> Thanks.
> Satish
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 50
Date: Mon, 30 Mar 2009 11:43:54 -0700 (PDT)
From: Felipe Carrillo <mazatlanmexico@yahoo.com>
Subject: [R] ggplot2-geom_text()
To: r-help@stat.math.ethz.ch
Message-ID: <558876.15811.qm@web56608.mail.re3.yahoo.com>
Content-Type: text/plain; charset=us-ascii
Hi: I need help with geom_text().
I would like to count the number of Locations
and put the sum of it right above each bar.
x <- "Location Lake_dens Fish Pred
Lake1 1.132 1 0.115
Lake1 0.627 1 0.148
Lake1 1.324 1 0.104
Lake1 1.265 1 0.107
Lake2 1.074 0 0.096
Lake2 0.851 0 0.108
Lake2 1.098 0 0.095
Lake2 0.418 0 0.135
Lake2 1.256 1 0.088
Lake2 0.554 1 0.126
Lake2 1.247 1 0.088
Lake2 0.794 1 0.112
Lake2 0.181 0 0.152
Lake3 1.694 0 0.001
Lake3 1.018 0 0.001
Lake3 2.88 0 0"
DF <- read.table(textConnection(x), header = TRUE)
p <- ggplot(DF,aes(x=Location)) + geom_bar()
p + geom_text(aes(y=Location), label=sum(count)) # Error: 'count' doesn't exist in the dataset
What should I use instead of 'count' to be able to sum the number
of Locations?
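One way to get such per-bar counts (a sketch, not from the thread: precompute the counts with table() and pass them to geom_text() as their own data frame):

```
library(ggplot2)
# count the rows per Location, then place each count above its bar
counts <- as.data.frame(table(DF$Location))
names(counts) <- c("Location", "n")
ggplot(DF, aes(x = Location)) +
  geom_bar() +
  geom_text(data = counts, aes(x = Location, y = n, label = n), vjust = -0.5)
```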
Felipe D. Carrillo
Supervisory Fishery Biologist
Department of the Interior
US Fish & Wildlife Service
California, USA
------------------------------
Message: 51
Date: Mon, 30 Mar 2009 19:51:19 +0100
From: Paul Smith <phhs80@gmail.com>
Subject: Re: [R] Constrined dependent optimization.
To: r-help@r-project.org
Message-ID:
<6ade6f6c0903301151s1f0a6706vc2d92450f5414174@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Apparently, the convergence is faster if one uses this new swap function:
swapfun <- function(x,N=100) {
loc <- c(sample(1:(N/2),size=1,replace=FALSE),sample((N/2):100,1))
tmp <- x[loc[1]]
x[loc[1]] <- x[loc[2]]
x[loc[2]] <- tmp
x
}
It seems that within 20 million iterations one gets the exact
optimal solution, which does not take too long.
Paul
On Mon, Mar 30, 2009 at 5:11 PM, Paul Smith <phhs80@gmail.com> wrote:
> Optim with SANN also solves your example:
>
> -------------------------------------------
>
> f <- function(x) sum(c(1:50,50:1)*x)
>
> swapfun <- function(x,N=100) {
>   loc <- sample(N,size=2,replace=FALSE)
>   tmp <- x[loc[1]]
>   x[loc[1]] <- x[loc[2]]
>   x[loc[2]] <- tmp
>   x
> }
>
> N <- 100
>
> opt1 <-
optim(fn=f,par=sample(1:N,N),gr=swapfun,method="SANN",control=list(maxit=50000,fnscale=-1,trace=10))
> opt1$par
> opt1$value
>
> -------------------------------------------
>
> We need to specify a large number of iterations to get the optimal
> solution. The objective function at the optimum is 170425, and one
> gets a close value with optim and SANN.
>
> Paul
>
>
> On Mon, Mar 30, 2009 at 2:22 PM, Hans W. Borchers
> <hwborchers@googlemail.com> wrote:
>>
>> Imagine you want to minimize the following linear function
>>
>>     f <- function(x) sum( c(1:50, 50:1) * x / (50*51) )
>>
>> on the set of all permutations of the numbers 1,..., 100.
>>
>> I wonder how you will do that with lpSolve? I would simply order
>> the coefficients and then sort the numbers 1,...,100 accordingly.
>>
>> I am also wondering how optim with "SANN" could be applied here.
>>
>> As this is a problem in the area of discrete optimization or
>> constraint programming, I propose to use an appropriate program
>> here, such as the free software B-Prolog. I would be interested to
>> learn what others propose.
>>
>> Of course, if we don't know anything about the function f, then
>> it amounts to an exhaustive search over the 100! permutations --
>> probably not a feasible job.
>>
>> Regards, Hans Werner
>>
>>
>>
>> Paul Smith wrote:
>>>
>>> On Sun, Mar 29, 2009 at 9:45 PM, <rkevinburton@charter.net> wrote:
>>>> I have an optimization question that I was hoping to get some
>>>> suggestions on how best to go about solving it. I would think there is
>>>> probably a package that addresses this problem.
>>>>
>>>> This is an ordering optimization problem. It is best described with a
>>>> simple example. Say I have 100 "bins", each with a ball in it numbered
>>>> from 1 to 100. Each bin can only hold one ball. The optimization is that
>>>> I have a function 'f' that takes this array of bins and returns a
>>>> number. The number returned from f(1,2,3,4,...) would be different from
>>>> that of f(2,1,3,4,...). The optimization is finding the order of these
>>>> balls that produces the minimum value of 'f'. I cannot use the
>>>> regular 'optim' algorithms because a) the values are discrete, b) the
>>>> values are dependent, i.e. when the "variable" representing the bin
>>>> location is changed (in this example, a new ball is put there) the
>>>> existing ball will need to be moved to another bin (probably swapping
>>>> positions), and c) each "variable" is constrained; in the example above
>>>> the only allowable values are integers from 1-100. So the problem becomes
>>>> finding the optimum order of the "balls".
>>>>
>>>> Any suggestions?
>>>
>>> If your function f is linear, then you can use lpSolve.
>>>
>>> Paul
>>>
>>> ______________________________________________
>>> R-help@r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>> PLEASE do read the posting guide
>>> http://www.R-project.org/posting-guide.html
>>> and provide commented, minimal, self-contained, reproducible code.
>>>
>>>
>>
>> --
>> View this message in context:
http://www.nabble.com/Constrined-dependent-optimization.-tp22772520p22782922.html
>> Sent from the R help mailing list archive at Nabble.com.
>>
>> ______________________________________________
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
------------------------------
Message: 52
Date: Mon, 30 Mar 2009 14:51:29 -0400
From: Gabor Grothendieck <ggrothendieck@gmail.com>
Subject: Re: [R] Importing csv file with character values into sqlite3
and subsequent problem in R / RSQLite
To: Stephan Lindner <lindners@umich.edu>
Cc: r-help@stat.math.ethz.ch
Message-ID:
<971536df0903301151u1b38f019p1e248a1de58f7509@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
There are some examples of reading files into sqlite on the
sqldf home page:
http://sqldf.googlecode.com
On Mon, Mar 30, 2009 at 12:19 PM, Stephan Lindner <lindners@umich.edu> wrote:
> Dear all,
>
>
> I'm trying to import a csv file into sqlite3 and from there into
> R. Everything looks fine except that R outputs the character values in
> an odd fashion: they are shown as "\"CHARACTER\"" instead of
> "CHARACTER", but only if I show the character variable as a
> vector. Does someone know why this happens? Below is a sample
> code. The first part is written in bash. Of course I could just
> read.csv for the spreadsheet, but the real datasets are more than 3
> GB, that's why I'm using RSQLite (which is really awesome!). Also,
I
> could get rid of the "" in the csv file (the csv file has only
> numbers, but it is easier for my to use identifiers such as v1 as
> character strings), but I thought I'd first see whether there is a
> different way to solve this issue.
>
>
> Thanks!
>
>
>        Stephan
>
>
> <--
>
> bash$ more example.csv
> bash$ echo -e "\"001074034\",90,1,7,89,12\n\"001074034\",90,1,1,90,12\n\"001074034\",90,1,2,90,12\n\"001074034\",90,1,3,90,12" > example.csv
> bash$ echo "create table t(v1,v2,v3,v4,v5,v6);" > example.sql
> bash$ sqlite3 example.db < example.sql
> bash$ echo -e ".separator , \n.import example.csv t" | sqlite3 example.db
> bash$ R
>> library(RSQLite)
> Loading required package: DBI
>> example.db <- dbConnect(SQLite(),"example.db")
>> x <- dbGetQuery(example.db,"select * from t")
>> x
>            v1 v2 v3 v4 v5 v6
> 1 "001074034" 90  1  7 89 12
> 2 "001074034" 90  1  1 90 12
> 3 "001074034" 90  1  2 90 12
> 4 "001074034" 90  1  3 90 12
>
>> x$v1
> [1] "\"001074034\"" "\"001074034\"" "\"001074034\"" "\"001074034\""
>
> -->
>
>
> Only the codes:
>
>
> <--
>
> more example.csv
> echo -e "\"001074034\",90,1,7,89,12\n\"001074034\",90,1,1,90,12\n\"001074034\",90,1,2,90,12\n\"001074034\",90,1,3,90,12" > example.csv
> echo "create table t(v1,v2,v3,v4,v5,v6);" > example.sql
> sqlite3 example.db < example.sql
> echo -e ".separator , \n.import example.csv t" | sqlite3 example.db
> R
>
> library(RSQLite)
> example.db <- dbConnect(SQLite(),"example.db")
> x <- dbGetQuery(example.db,"select * from t")
> x
> x$v1
>
> -->
>
>
>
>
> --
> -----------------------
> Stephan Lindner
> University of Michigan
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
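One direct workaround, since sqlite3's .import stores the literal embedded quote characters as part of each text value (a sketch, not from the thread), is to strip them after fetching the result:

```r
# The embedded \" characters were imported as part of the text values,
# so remove the quote characters from the affected column post-query:
x$v1 <- gsub('"', '', x$v1, fixed = TRUE)
x$v1
```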
------------------------------
Message: 53
Date: Mon, 30 Mar 2009 21:01:54 +0200
From: Tobias Verbeke <tobias.verbeke@openanalytics.be>
Subject: Re: [R] Mature SOAP Interface for R
To: Michael Lawrence <mflawren@fhcrc.org>
Cc: r-help@r-project.org, zubin <binabina@bellsouth.net>
Message-ID: <49D11722.4090208@openanalytics.be>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Michael Lawrence wrote:
> On Sat, Mar 28, 2009 at 6:08 PM, zubin <binabina@bellsouth.net> wrote:
>
>> Hello, we are writing rich internet user interfaces and like to call R for
>> some of the computational needs on the data, as well as some creation of
>> image files. Our objects communicate via the SOAP interface. We have been
>> researching the various packages to expose R as a SOAP service.
>>
>> No current CRAN SOAP packages however.
>>
>> Found 3 to date:
>>
>> RSOAP (http://sourceforge.net/projects/rsoap/)
>> SSOAP (http://www.omegahat.org/SSOAP/)
>>
>> looks like a commercial version?
>> http://random-technologies-llc.com/products/rsoap
>>
>> Does anyone have experience with these 3 and can recommend the most
>> 'mature' R-SOAP interface package?
>>
>
> Well, SSOAP is (the last time I checked) just a SOAP client. rsoap (if we're
> talking about the same package) is actually a python SOAP server that
> communicates to R via rpy.
>
> You might want to check out the RWebServices package in Bioconductor. I
> think it uses Java for its SOAP handling.
Connecting to R as a server via SOAP is one of the
many ways the biocep project
http://www.biocep.net
allows one to make use of R in statistical application
development (there are also RESTful web services,
connections over RMI, etc.).
HTH,
Tobias
------------------------------
Message: 54
Date: Mon, 30 Mar 2009 15:23:21 -0300
From: Blanka Vlasakova <vlasakb@gmail.com>
Subject: [R] circular analysis
To: r-help@r-project.org
Message-ID:
<2a4375010903301123p1c04dca3rdac538f24ed4b095@mail.gmail.com>
Content-Type: text/plain
Hi,
I am looking for a way to analyze a dataset with a circular dependent
variable and three independent factors. To be specific, the circular
variable comprises arrival times of pollinators at flowers. The
independent variables are pollinator species, flower sex and locality. I
have failed to find a way to include all three factors. The "circular"
package seems to enable testing of a single factor - or am I wrong?
Does the "circular" or any other package enable such an analysis?
Many thanks
Blanka Vlasakova
--
Department of Botany
Charles University in Prague
Benatska 2
128 01 Prague 2
CZECH REPUBLIC
------------------------------
Message: 55
Date: Mon, 30 Mar 2009 11:40:08 -0700 (PDT)
From: jwg20 <jason.gullifer@gmail.com>
Subject: [R] Calculating First Occurance by a factor
To: r-help@r-project.org
Message-ID: <22789964.post@talk.nabble.com>
Content-Type: text/plain; charset=us-ascii
I'm having difficulty finding a solution to my problem without using a
for loop. For the amount of data I (will) have, the for loop will probably
be too slow. I tried searching around before posting and couldn't find
anything; hopefully it's not embarrassingly easy.
Consider the data.frame, Data, below
Data
Sub Tr IA FixInx FixTime
p1 t1 1 1 200
p1 t1 2 2 350
p1 t1 2 3 500
p1 t1 3 4 600
p1 t1 3 5 700
p1 t1 4 6 850
p1 t1 3 7 1200
p1 t1 5 8 1350
p1 t1 5 9 1500
What I'm trying to do is for each unique IA get the first occurring FixTime.
This will eventually need to be done by each Trial (Tr) and each Subject
Number (Sub). FixInx is essentially the number of rows in a trial. The
resulting data.frame is below.
Sub Tr IA FirstFixTime
p1 t1 1 200
p1 t1 2 350
p1 t1 3 600
p1 t1 4 850
p1 t1 5 1350
Here is the solution I have now.
agg = aggregate(data$FixInx, list(data$Sub, data$Tr, data$IA), min) #get the
minimum fix index by Sub, Tr, and IA... I can use this min fix index to pull
out the desired fixtime
agg$firstfixtime = 0 # new column for results
for (rown in 1:length(rownames(agg))){ #cycle through rows and get each
data$firstfixtime from FixTime in matching rows
agg$firstfixtime[rown] = as.character(data[data$Tr == agg$Group.2[rown] &
data$Sub == agg$Group.1[rown] & data$IA == agg$Group.3[rown] &
data$FixInx
== agg$x[rown], ]$FixTime)
}
--
View this message in context:
http://www.nabble.com/Calculating-First-Occurance-by-a-factor-tp22789964p22789964.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 56
Date: Mon, 30 Mar 2009 21:37:41 +0200
From: Dimitris Rizopoulos <d.rizopoulos@erasmusmc.nl>
Subject: Re: [R] Calculating First Occurance by a factor
To: jwg20 <jason.gullifer@gmail.com>
Cc: r-help@r-project.org
Message-ID: <49D11F85.2010400@erasmusmc.nl>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
one way is:
ind <- ave(Data$IA, Data$Sub, Data$Tr, FUN = function (x) !duplicated(x))
Data[as.logical(ind), ]
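Run against the sample data from the question, this gives the expected five rows (a self-contained sketch; the data.frame is rebuilt here verbatim from the post):

```r
# Rebuild the example data.frame from the question
Data <- data.frame(
  Sub     = "p1", Tr = "t1",
  IA      = c(1, 2, 2, 3, 3, 4, 3, 5, 5),
  FixInx  = 1:9,
  FixTime = c(200, 350, 500, 600, 700, 850, 1200, 1350, 1500)
)

# Within each Sub/Tr group, flag the first occurrence of each IA value
ind <- ave(Data$IA, Data$Sub, Data$Tr, FUN = function(x) !duplicated(x))

# Keep only those first occurrences
Data[as.logical(ind), c("Sub", "Tr", "IA", "FixTime")]
# first FixTime per IA: 200, 350, 600, 850, 1350
```

Note that `duplicated` keeps the first row in file order, which coincides with `which.min(FixInx)` here because FixInx is increasing within the trial.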
I hope it helps.
Best,
Dimitris
jwg20 wrote:
> I'm having difficulty finding a solution to my problem without using a
> for loop. For the amount of data I (will) have, the for loop will probably
> be too slow. I tried searching around before posting and couldn't find
> anything, hopefully it's not embarrassingly easy.
>
> Consider the data.frame, Data, below
>
> Data
> Sub Tr IA FixInx FixTime
> p1 t1 1 1 200
> p1 t1 2 2 350
> p1 t1 2 3 500
> p1 t1 3 4 600
> p1 t1 3 5 700
> p1 t1 4 6 850
> p1 t1 3 7 1200
> p1 t1 5 8 1350
> p1 t1 5 9 1500
>
> What I'm trying to do is for each unique IA get the first occurring FixTime.
> This will eventually need to be done by each Trial (Tr) and each Subject
> Number (Sub). FixInx is essentially the number of rows in a trial. The
> resulting data.frame is below.
>
> Sub Tr IA FirstFixTime
> p1 t1 1 200
> p1 t1 2 350
> p1 t1 3 600
> p1 t1 4 850
> p1 t1 5 1350
>
> Here is the solution I have now.
>
> agg = aggregate(data$FixInx, list(data$Sub, data$Tr, data$IA), min) #get the
> minimum fix index by Sub, Tr, and IA... I can use this min fix index to pull
> out the desired fixtime
>
> agg$firstfixtime = 0 # new column for results
>
> for (rown in 1:length(rownames(agg))){ #cycle through rows and get each
> data$firstfixtime from FixTime in matching rows
>   agg$firstfixtime[rown] = as.character(data[data$Tr == agg$Group.2[rown] &
>     data$Sub == agg$Group.1[rown] & data$IA == agg$Group.3[rown] &
>     data$FixInx == agg$x[rown], ]$FixTime)
> }
--
Dimitris Rizopoulos
Assistant Professor
Department of Biostatistics
Erasmus University Medical Center
Address: PO Box 2040, 3000 CA Rotterdam, the Netherlands
Tel: +31/(0)10/7043478
Fax: +31/(0)10/7043014
------------------------------
Message: 57
Date: Mon, 30 Mar 2009 16:58:54 -0300
From: Mike Lawrence <Mike.Lawrence@dal.ca>
Subject: Re: [R] Calculating First Occurance by a factor
To: jwg20 <jason.gullifer@gmail.com>, "r-help@r-project.org"
<r-help@r-project.org>
Message-ID:
<37fda5350903301258u76a7bc35xebe38bb63828a130@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
I discovered Hadley Wickham's "plyr" package last week and have found
it very useful in circumstances like this:
library(plyr)
firstfixtime = ddply(
      .data = data
      , .variables = c('Sub','Tr','IA')
      , .fun = function(df){
              df$FixTime[which.min(df$FixInx)]
      }
)
> On Mon, Mar 30, 2009 at 3:40 PM, jwg20 <jason.gullifer@gmail.com>
wrote:
>>
>> I'm having difficulty finding a solution to my problem without using a
>> for loop. For the amount of data I (will) have, the for loop will probably
>> be too slow. I tried searching around before posting and couldn't find
>> anything, hopefully it's not embarrassingly easy.
>>
>> Consider the data.frame, Data, below
>>
>> Data
>> Sub Tr IA FixInx FixTime
>> p1  t1  1    1     200
>> p1  t1  2    2     350
>> p1  t1  2    3     500
>> p1  t1  3    4     600
>> p1  t1  3    5     700
>> p1  t1  4    6     850
>> p1  t1  3    7    1200
>> p1  t1  5    8    1350
>> p1  t1  5    9    1500
>>
>> What I'm trying to do is for each unique IA get the first occurring FixTime.
>> This will eventually need to be done by each Trial (Tr) and each Subject
>> Number (Sub). FixInx is essentially the number of rows in a trial. The
>> resulting data.frame is below.
>>
>> Sub Tr IA FirstFixTime
>> p1  t1  1   200
>> p1  t1  2   350
>> p1  t1  3   600
>> p1  t1  4   850
>> p1  t1  5  1350
>>
>> Here is the solution I have now.
>>
>> agg = aggregate(data$FixInx, list(data$Sub, data$Tr, data$IA), min) #get the
>> minimum fix index by Sub, Tr, and IA... I can use this min fix index to pull
>> out the desired fixtime
>>
>> agg$firstfixtime = 0 # new column for results
>>
>> for (rown in 1:length(rownames(agg))){ #cycle through rows and get each
>> data$firstfixtime from FixTime in matching rows
>>   agg$firstfixtime[rown] = as.character(data[data$Tr == agg$Group.2[rown] &
>>     data$Sub == agg$Group.1[rown] & data$IA == agg$Group.3[rown] &
>>     data$FixInx == agg$x[rown], ]$FixTime)
>> }
>> --
>> View this message in context:
http://www.nabble.com/Calculating-First-Occurance-by-a-factor-tp22789964p22789964.html
>> Sent from the R help mailing list archive at Nabble.com.
>>
>> ______________________________________________
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
--
Mike Lawrence
Graduate Student
Department of Psychology
Dalhousie University
Looking to arrange a meeting? Check my public calendar:
http://tinyurl.com/mikes-public-calendar
~ Certainty is folly... I think. ~
------------------------------
Message: 58
Date: Mon, 30 Mar 2009 16:05:04 -0400
From: Jason Gullifer <jason.gullifer@gmail.com>
Subject: Re: [R] Calculating First Occurance by a factor
To: Mike Lawrence <Mike.Lawrence@dal.ca>
Cc: "r-help@r-project.org" <r-help@r-project.org>
Message-ID:
<98bcde3c0903301305s3664912cndc7d99b2237bacb4@mail.gmail.com>
Content-Type: text/plain
Thank you Mike and Dimitris for your replies.
I was able to get Mike's command to work and it does what I want (and fast
too!) I hadn't looked into the plyr package at all, but I have seen it load
when loading the reshape package. (Another useful package for manipulating
data frames!)
Thanks again.
-Jason
On Mon, Mar 30, 2009 at 3:58 PM, Mike Lawrence <Mike.Lawrence@dal.ca>
wrote:
> I discovered Hadley Wickham's "plyr" package last week and have found
> it very useful in circumstances like this:
>
> library(plyr)
>
> firstfixtime = ddply(
>       .data = data
>       , .variables = c('Sub','Tr','IA')
>       , .fun = function(df){
>               df$FixTime[which.min(df$FixInx)]
>       }
> )
>
> > On Mon, Mar 30, 2009 at 3:40 PM, jwg20 <jason.gullifer@gmail.com> wrote:
> >> [original question quoted in full above; trimmed here as a verbatim duplicate]
>
>
>
> --
> Mike Lawrence
> Graduate Student
> Department of Psychology
> Dalhousie University
>
> Looking to arrange a meeting? Check my public calendar:
> http://tinyurl.com/mikes-public-calendar
>
> ~ Certainty is folly... I think. ~
>
------------------------------
Message: 59
Date: Mon, 30 Mar 2009 22:06:05 +0200
From: Wacek Kusnierczyk <Waclaw.Marcin.Kusnierczyk@idi.ntnu.no>
Subject: Re: [R] Matrix max by row
To: Bert Gunter <gunter.berton@gene.com>
Cc: r-help@r-project.org
Message-ID: <49D1262D.2070001@idi.ntnu.no>
Content-Type: text/plain; charset=ISO-8859-1
Bert Gunter wrote:
>
> Serves me right, I suppose. Timing seems also very dependent on the
> dimensions of the matrix. Here's what I got with my inadequate test:
>
>
>> x <- matrix(rnorm(3e5),ncol=3)
>>
> ## via apply
>
>> system.time(apply(x,1,max))
>>
> user system elapsed
> 2.09 0.02 2.10
>
> ## via pmax
>
>> system.time(do.call(pmax,data.frame(x)))
>>
> user system elapsed
> 0.10 0.02 0.11
>
>
>
yes, similar to what i got. but with the transpose, the ratio is way
more than inverted:
waku = expression(matrix(apply(m, 1, max), nrow(m)))
bert = expression(do.call(pmax, data.frame(m)))
library(rbenchmark)
m = matrix(rnorm(1e6), ncol=10)
benchmark(replications=10, columns=c('test', 'elapsed'),
order='elapsed',
waku=waku,
bert=bert)
# test elapsed
# 2 bert 1.633
# 1 waku 9.974
m = t(m)
benchmark(replications=10, columns=c('test', 'elapsed'),
order='elapsed',
waku=waku,
bert=bert)
# test elapsed
# 1 waku 0.507
# 2 bert 27.261
>
my favourite: you should have specified what 'large matrices' means.
vQ
------------------------------
Message: 60
Date: Mon, 30 Mar 2009 15:31:37 -0500 (CDT)
From: Terry Therneau <therneau@mayo.edu>
Subject: Re: [R] cmprsk- another survival-depedent package causes R
crash
To: Nguyen Dinh Nguyen <n.nguyen@garvan.org.au>
Cc: r-help@r-project.org, tlumley@u.washington.edu
Message-ID: <200903302031.n2UKVbg29942@hsrnfs-101.mayo.edu>
Content-Type: TEXT/plain; charset=us-ascii
> As our package developers discussed regarding the incompatibility between
> Design and survival packages, I faced another problem with cmprsk - a
> survival-dependent package.
> The problem is exactly similar to what happened with the Design package:
> when I just started running the cuminc function, R suddenly closed.
> These incidents suggest that maybe many other survival-dependent packages
> are involved in the problem.
I don't see how this is related to survival. I just checked the source code
of the cmprsk package, and it has no dependencies on my library. As I would
expect, the cmprsk function works as expected on our machines.
Could you send a reproducible example?
Terry Therneau
------------------------------
Message: 61
Date: Mon, 30 Mar 2009 15:43:55 -0500
From: hadley wickham <h.wickham@gmail.com>
Subject: Re: [R] Calculating First Occurance by a factor
To: Mike Lawrence <Mike.Lawrence@dal.ca>
Cc: "r-help@r-project.org" <r-help@r-project.org>, jwg20
<jason.gullifer@gmail.com>
Message-ID:
<f8e6ff050903301343j131dd194ye61a06623560271b@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
On Mon, Mar 30, 2009 at 2:58 PM, Mike Lawrence <Mike.Lawrence@dal.ca> wrote:
> I discovered Hadley Wickham's "plyr" package last week and have found
> it very useful in circumstances like this:
>
> library(plyr)
>
> firstfixtime = ddply(
>       .data = data
>       , .variables = c('Sub','Tr','IA')
>       , .fun = function(df){
>               df$FixTime[which.min(df$FixInx)]
>       }
> )
Or to save a little typing:
ddply(data, .(Sub, Tr, IA), colwise(min, .(FixTime)))
Hadley
--
http://had.co.nz/
------------------------------
Message: 62
Date: Tue, 31 Mar 2009 08:16:20 +1100
From: "Nguyen Dinh Nguyen" <n.nguyen@garvan.org.au>
Subject: Re: [R] cmprsk- another survival-depedent package causes R
crash
To: "'Terry Therneau'" <therneau@mayo.edu>
Cc: r-help@r-project.org, tlumley@u.washington.edu
Message-ID: <3E5487894AAD4C76AA983F9CE68E4EEE@DDNCDZ1S>
Content-Type: text/plain; charset="us-ascii"
Dear Terry,
When I ran the cuminc function (my saved command, which used to work
previously), R suddenly shut down. Therefore, there is no error message.
This happened not only on my PC (Windows, Service Pack 3) but also on
machines belonging to my colleagues.
Regards
Nguyen Nguyen
-----Original Message-----
From: Terry Therneau [mailto:therneau@mayo.edu]
Sent: Tuesday, 31 March 2009 7:32 AM
To: Nguyen Dinh Nguyen
Cc: tlumley@u.washington.edu; r-help@r-project.org
Subject: Re: [R] cmprsk- another survival-depedent package causes R crash
> As our package developers discussed regarding the incompatibility between
> Design and survival packages, I faced another problem with cmprsk - a
> survival-dependent package.
> The problem is exactly similar to what happened with the Design package:
> when I just started running the cuminc function, R suddenly closed.
> These incidents suggest that maybe many other survival-dependent packages
> are involved in the problem.
I don't see how this is related to survival. I just checked the source code
of the cmprsk package, and it has no dependencies on my library. As I would
expect, the cmprsk function works as expected on our machines.
Could you send a reproducible example?
Terry Therneau
------------------------------
Message: 63
Date: Mon, 30 Mar 2009 14:55:24 -0700 (PDT)
From: Felipe Carrillo <mazatlanmexico@yahoo.com>
Subject: Re: [R] ggplot2-geom_text()
To: Paul Murrell <p.murrell@auckland.ac.nz>
Cc: r-help@stat.math.ethz.ch
Message-ID: <392176.31068.qm@web56601.mail.re3.yahoo.com>
Content-Type: text/plain; charset=us-ascii
Thanks Paul, I tried to use ..count.. once but it didn't work. What I
realized was that I was missing 'stat="bin"'. Thanks for your help.
--- On Mon, 3/30/09, Paul Murrell <p.murrell@auckland.ac.nz> wrote:
> From: Paul Murrell <p.murrell@auckland.ac.nz>
> Subject: Re: [R] ggplot2-geom_text()
> To: mazatlanmexico@yahoo.com
> Cc: r-help@stat.math.ethz.ch
> Date: Monday, March 30, 2009, 2:46 PM
> Hi
>
>
> Felipe Carrillo wrote:
> > Hi: I need help with geom_text().
> > I would like to count the number of Locations
> > and put the sum of it right above each bar.
> >
> > x <- "Location Lake_dens Fish Pred
> > Lake1 1.132 1 0.115
> > Lake1 0.627 1 0.148
> > Lake1 1.324 1 0.104
> > Lake1 1.265 1 0.107
> > Lake2 1.074 0 0.096
> > Lake2 0.851 0 0.108
> > Lake2 1.098 0 0.095
> > Lake2 0.418 0 0.135
> > Lake2 1.256 1 0.088
> > Lake2 0.554 1 0.126
> > Lake2 1.247 1 0.088
> > Lake2 0.794 1 0.112
> > Lake2 0.181 0 0.152
> > Lake3 1.694 0 0.001
> > Lake3 1.018 0 0.001
> > Lake3 2.88 0 0"
> > DF <- read.table(textConnection(x), header = TRUE)
> > p <- ggplot(DF,aes(x=Location)) + geom_bar()
> > p + geom_text(aes(y=Location),label=sum(count)) # Error because
> > count doesn't exist in the dataset
> >
> > What should I use instead of 'count' to be able to sum the number
> > of Locations?
>
>
> How about ... ?
>
> p + geom_text(aes(label=..count..), stat="bin",
> vjust=1, colour="white")
>
> Paul
>
>
> > Felipe D. Carrillo
> > Supervisory Fishery Biologist
> > Department of the Interior
> > US Fish & Wildlife Service
> > California, USA
> >
> > ______________________________________________
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained,
> reproducible code.
>
> --
> Dr Paul Murrell
> Department of Statistics
> The University of Auckland
> Private Bag 92019
> Auckland
> New Zealand
> 64 9 3737599 x85392
> paul@stat.auckland.ac.nz
> http://www.stat.auckland.ac.nz/~paul/
------------------------------
Message: 64
Date: Mon, 30 Mar 2009 15:04:28 -0700 (PDT)
From: Rabea Sutter <sutter_rabea@yahoo.de>
Subject: [R] Kruskal-Wallis-test: Posthoc-test?
To: r-help@r-project.org
Message-ID: <22794025.post@talk.nabble.com>
Content-Type: text/plain; charset=us-ascii
Hello.
We have some questions concerning the statistical analysis of a dataset.
We aim to compare the sample means of more than 2 independent samples; the
sample sizes are unbalanced. The requirements of normality and variance
homogeneity were not met even after transforming the data. Thus we
applied a nonparametric test: the Kruskal-Wallis test (H-test). The null
hypothesis was rejected.
Now we are trying to find a suitable post-hoc test in order to find out
which sample means actually are statistically different.
1. We think that the Behrens-Fisher test and the multiple Steel test are not
applicable, because they assume normality as far as we know. Is that right?
2. Statistical literature suggested a Nemenyi test as the post-hoc test.
But this test in general requires balanced sample sizes, so we need a
special variant of the test. Is it possible to do such a test in R?
3. We could also test all the samples against each other with a
nonparametric Mann-Whitney U test and correct for the multiple comparisons
(m = 11) according to Bonferroni. Is this testing method allowed?
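Option 3 can be run directly in base R; below is a sketch on simulated data (the variable names `value` and `group`, and the group sizes, are made up for illustration):

```r
set.seed(1)
# Simulated stand-in data: three unbalanced groups
value <- c(rnorm(8), rnorm(12, mean = 1), rnorm(5, mean = 2))
group <- factor(rep(c("A", "B", "C"), times = c(8, 12, 5)))

# Overall Kruskal-Wallis test
kruskal.test(value ~ group)

# Pairwise Wilcoxon (Mann-Whitney) tests with Bonferroni correction
pairwise.wilcox.test(value, group, p.adjust.method = "bonferroni")
```

`pairwise.wilcox.test` applies the correction across all pairwise p-values for you, which is equivalent to running the individual Mann-Whitney tests and multiplying by the number of comparisons.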
Christine Hellmann and Rabea Sutter
--
View this message in context:
http://www.nabble.com/Kruskal-Wallis-test%3A-Posthoc-test--tp22794025p22794025.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 65
Date: Mon, 30 Mar 2009 15:13:04 -0700
From: Jim Porzak <jporzak@gmail.com>
Subject: [R] use R Group SFBA April meeting reminder; video of Feb
kickoff
To: r-help <r-help@r-project.org>
Message-ID:
<2a9c000c0903301513t403b8488xad1f2d8bf9a8b531@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Next week Wednesday evening, April 8th, Mike Driscoll will be talking
about "Building Web Dashboards using R"
see: http://www.meetup.com/R-Users/calendar/9718968/ for details & to RSVP.
Also of interest, our member Ron Fredericks has just posted a well
edited video of the February kickoff panel discussion at Predictive
Analytics World "The R and Science of Predictive Analytics: Four Case
Studies in R" with
* Bo Cowgill, Google
* Itamar Rosenn, Facebook
* David Smith, Revolution Computing
* Jim Porzak, The Generations Network
and chaired by Michael Driscoll, Dataspora LLC
see: http://www.lecturemaker.com/2009/02/r-kickoff-video/
Best,
Jim Porzak
TGN.com
San Francisco, CA
www.linkedin.com/in/jimporzak
use R! Group SF: www.meetup.com/R-Users/
------------------------------
Message: 66
Date: Mon, 30 Mar 2009 15:13:50 -0700
From: Deepayan Sarkar <deepayan.sarkar@gmail.com>
Subject: Re: [R] Darker markers for symbols in lattice
To: "Naomi B. Robbins" <nbrgraphs@optonline.net>
Cc: r-help@r-project.org
Message-ID:
<eb555e660903301513s2a0bb3advfce6e3b5fbb9a421@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
On Sun, Mar 29, 2009 at 12:35 PM, Naomi B. Robbins
<nbrgraphs@optonline.net> wrote:
> In lattice, using the command trellis.par.get for superpose.symbol, plot,
> symbol and/or dot.symbol shows that we can specify alpha, cex, col, fill
> (for superpose.symbol and plot.symbol), font, and pch. Trial and error
> shows that the font affects letters but not pch=1 or pch=3 (open circles
> and plus signs.) I want to use open circles and plus signs, keep the colors
> and cex I've specified but make the symbols bolder, much the way a
> higher lwd makes lines bolder. Does anyone know of a library that
> does that or can anyone think of a workaround to make the markers
> stand out better without making them larger?
?grid::gpar lists 'lex' as a "Multiplier applied to line width", and
that seems to work when supplied as a top-level argument (though not
in the parameter settings):
xyplot(1:10 ~ 1:10, pch = c(1, 3), cex = 2, lex = 3)
I'm not sure if 'lwd' should have the same effect (it does in base
graphics).
-Deepayan
------------------------------
Message: 67
Date: Mon, 30 Mar 2009 17:26:22 -0500
From: xinrong lei <xleiuiuc@gmail.com>
Subject: [R] Help with tm assocation analysis and Rgraphviz
installation.
To: r-help@r-project.org
Message-ID:
<e589f82b0903301526p3c44d999wd5ca971fe77a980d@mail.gmail.com>
Content-Type: text/plain
Help with tm assocation analysis and Rgraphviz installation.
THANK YOU IN ADVANCE
Question 1:
I saved two txt file in C:\textfile
And each txt file contents only one text column, and both have 100 records.
I know the term "research" occurs 49 times, so I want to find out which other
words are correlated with this word, and I got tons of associations of '1'.
I tried other terms, and no association value is less than 1, which
obviously is wrong.
Could any expert tell me where I went wrong?
My R-code is:
R>my.path<-'C:\\textfile'
R>library(tm)
R>my.corpus <- Corpus(DirSource(my.path), readerControl = list(reader=readPlain))
R>tdmO <- TermDocMatrix(my.corpus)
R>tdmO
An object of class “TermDocMatrix”
Slot "Data":
2 x 1426 sparse Matrix of class "dgCMatrix"
[[ suppressing 1426 column names ‘000’, ‘0092’, ‘0093’ ... ]]
[the two matrix rows, garbled by mail-client wrapping, are omitted here]
R>findAssocs(tdmO, "research", 0.95)
   academ    access accompani    accord       ace
        1         1         1         1         1
   achiev    acquir  acquisit       act     activ
        1         1         1         1         1
   activi     adapt       add     addit     adequ
        1         1         1         1         1
......
Question2:
I can’t load Rgraphviz in R.
I am using windows XP professional, R 2.8.1
I followed the instruction in this link
http://groups.google.com/group/r-help-archive/browse_thread/thread/413605edc81b3422/b7917083646d9cd2?lnk=gst&q=Rgraphviz#b7917083646d9cd2
and
https://stat.ethz.ch/pipermail/bioconductor/2008-June/022838.html
What I did is
1. Close down any R sessions you have open.
2. Download and install the Microsoft Visual C++ 2005 SP1 Redistributable
Package:
http://www.microsoft.com/downloads/details.aspx?familyid=200B2FD9-AE1A-4A14-984D-389C36F85647&displaylang=en
3. Download and install Graphviz 2.16.1 from the archives (I also tried
2.18.1 and 2.22.2).
4. Check your PATH to see how Graphviz was added: Graphviz 2.18 and later
versions will automatically add
C:\Program Files\Graphviz2.16\Bin
to Path.
5. Open R and download and install Rgraphviz using:
R> source("http://bioconductor.org/biocLite.R")
R> biocLite("Rgraphviz")
I got no error before the next step:
R>library(Rgraphviz)
I got this error message:
Error in inDL(x, as.logical(local), as.logical(now), ...) :
unable to load shared library
'C:/PROGRA~1/R/R-28~1.1/library/Rgraphviz/libs/Rgraphviz.dll':
LoadLibrary failure: The specified module could not be found.
Error : .onLoad failed in 'loadNamespace' for 'Rgraphviz'
Error: package/namespace load failed for 'Rgraphviz'
What else shall I do?
------------------------------
Message: 68
Date: Mon, 30 Mar 2009 18:30:02 -0400
From: "Naomi B. Robbins" <nbrgraphs@optonline.net>
Subject: Re: [R] Darker markers for symbols in lattice
To: Deepayan Sarkar <deepayan.sarkar@gmail.com>
Cc: r-help@r-project.org
Message-ID: <49D147EA.5070109@optonline.net>
Content-Type: text/plain
Many thanks to Deepayan for providing just what I wanted.
I've tried lwd many times and it does not work but lex does
the trick. Thanks also to Paul Murrell for his very simple
suggestion of using lower case o for an open circle since
bold works on letters and to Bert Gunter for suggesting an
overplotting technique to try if nothing easier was suggested.
Naomi
--
Naomi B. Robbins
NBR
11 Christine Court
Wayne, NJ 07470
Phone: (973) 694-6009
naomi@nbr-graphs.com
http://www.nbr-graphs.com
Author of /Creating More Effective Graphs
<http://www.nbr-graphs.com/bookframe.html>/
Deepayan Sarkar wrote:
> On Sun, Mar 29, 2009 at 12:35 PM, Naomi B. Robbins
> <nbrgraphs@optonline.net> wrote:
>
>> In lattice, using the command trellis.par.get for superpose.symbol, plot,
>> symbol and/or dot.symbol shows that we can specify alpha, cex, col, fill
>> (for superpose.symbol and plot.symbol), font, and pch. Trial and error
>> shows that the font affects letters but not pch=1 or pch=3 (open circles
>> and plus signs.) I want to use open circles and plus signs, keep the colors
>> and cex I've specified but make the symbols bolder, much the way a
>> higher lwd makes lines bolder. Does anyone know of a library that
>> does that or can anyone think of a workaround to make the markers
>> stand out better without making them larger?
>>
>
> ?grid::gpar lists 'lex' as a "Multiplier applied to line width", and
> that seems to work when supplied as a top-level argument (though not
> in the parameter settings):
>
> xyplot(1:10 ~ 1:10, pch = c(1, 3), cex = 2, lex = 3)
>
> I'm not sure if 'lwd' should have the same effect (it does in base graphics).
>
> -Deepayan
>
>
>
------------------------------
Message: 69
Date: Mon, 30 Mar 2009 18:38:43 -0400
From: "John Fox" <jfox@mcmaster.ca>
Subject: Re: [R] Comparing Points on Two Regression Lines
To: "'AbouEl-Makarim Aboueissa'"
<aaboueissa@usm.maine.edu>
Cc: r-help@stat.math.ethz.ch, pburns@pburns.seanet.com
Message-ID: <006c01c9b188$48089380$d819ba80$@ca>
Content-Type: text/plain; charset="utf-8"
Dear Abu,
I'm not sure why you're addressing this question to me.
It's unclear from your description whether there is one sample with four
variables or two independent samples with the same two variables x and y.
I'll assume the latter. The formula that you sent appears to assume equal
error variances in two independent samples. A simple alternative that
doesn't assume equal error variances would be to use something like
mod1 <- lm(y1 ~ x1)
mod2 <- lm(y2 ~ x2)
f1 <- predict(mod1, newdata=data.frame(x1=13), se.fit=TRUE)
f2 <- predict(mod2, newdata=data.frame(x2=13), se.fit=TRUE)
diff <- f1$fit - f2$fit
sediff <- sqrt(f1$se.fit^2 + f2$se.fit^2)
diff/sediff
The df for the test statistic aren't clear to me and in small samples this
could make a difference. I suppose that one could use a Satterthwaite
approximation, but a simple alternative would be to take the smaller of the
residual df, here 5 - 2 = 3. In any event, the resulting test is likely
sensitive to departures from normality, so it would probably be better to use a
randomization test.
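A self-contained version of this sketch, filling in the data quoted below
(assumption: the garbled y1 value "27404313" was meant to be the two values
2740, 4313 with a comma lost in transit):

```r
x1 <- c(0.5, 1.0, 2.5, 5.0, 10.0)
y1 <- c(204, 407, 1195, 2740, 4313)
x2 <- c(2.5, 5.0, 10.0, 25.0)
y2 <- c(440, 713, 1520, 2634)

# Fit the two regressions separately and predict both at x = 13.
mod1 <- lm(y1 ~ x1)
mod2 <- lm(y2 ~ x2)
f1 <- predict(mod1, newdata = data.frame(x1 = 13), se.fit = TRUE)
f2 <- predict(mod2, newdata = data.frame(x2 = 13), se.fit = TRUE)

# Difference of predictions, with SEs combined without assuming
# equal error variances.
diff   <- f1$fit - f2$fit
sediff <- sqrt(f1$se.fit^2 + f2$se.fit^2)
t.stat <- diff / sediff

# Conservative df: the smaller residual df of the two fits.
df.min <- min(df.residual(mod1), df.residual(mod2))
2 * pt(-abs(t.stat), df = df.min)   # two-sided p-value
```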
John
> -----Original Message-----
> From: AbouEl-Makarim Aboueissa [mailto:aaboueissa@usm.maine.edu]
> Sent: March-30-09 4:57 PM
> To: jfox@mcmaster.ca; jrkrideau@yahoo.ca; pburns@pburns.seanet.com; r-
> help@stat.math.ethz.ch; r-help-request@stat.math.ethz.ch;
> roland.rproject@gmail.com; tuechler@gmx.at; wwwhsd@gmail.com
> Subject: Comparing Points on Two Regression Lines
>
> Dear R users:
>
>
>
> Suppose I have two different response variables y1, y2 that I regress
> separately on the different explanatory variables, x1 and x2 respectively. I
> need to compare points on two regression lines.
>
>
>
> These are the x and y values for each lines.
>
>
>
> x1<-c(0.5,1.0,2.5,5.0,10.0)
> y1<-c(204,407,1195,2740,4313)
> x2<-c(2.5,5.0,10.0,25.0)
> y2<-c(440,713,1520,2634)
>
>
>
> Suppose we need to compare the two lines at the common value of x=13.
>
>
>
> Please see attached the method as described in section 18.3 in Jerrold H.
> Zar.
>
>
>
> With many thanks
>
>
>
> Abou
>
> ========================
> AbouEl-Makarim Aboueissa, Ph.D.
> Assistant Professor of Statistics
> Department of Mathematics & Statistics
> University of Southern Maine
> 96 Falmouth Street
> P.O. Box 9300
> Portland, ME 04104-9300
>
>
> Tel: (207) 228-8389
> Fax: (207) 780-5607
> Email: aaboueissa@usm.maine.edu
> aboueiss@yahoo.com
>
> http://www.usm.maine.edu/~aaboueissa/
>
>
> Office: 301C Payson Smith
>
------------------------------
Message: 70
Date: Mon, 30 Mar 2009 23:55:09 +0100 (BST)
From: (Ted Harding) <Ted.Harding@manchester.ac.uk>
Subject: Re: [R] use R Group SFBA April meeting reminder; video of Feb
k
To: Jim Porzak <jporzak@gmail.com>
Cc: r-help <r-help@r-project.org>
Message-ID: <XFMail.090330235509.Ted.Harding@manchester.ac.uk>
Content-Type: text/plain; charset=iso-8859-1
On 30-Mar-09 22:13:04, Jim Porzak wrote:
> Next week Wednesday evening, April 8th, Mike Driscoll will be talking
> about "Building Web Dashboards using R"
> see: http://www.meetup.com/R-Users/calendar/9718968/ for details & to
> RSVP.
>
> Also of interest, our member Ron Fredericks has just posted a well
> edited video of the February kickoff panel discussion at Predictive
> Analytics World "The R and Science of Predictive Analytics: Four Case
> Studies in R" with
> * Bo Cowgill, Google
> * Itamar Rosenn, Facebook
> * David Smith, Revolution Computing
> * Jim Porzak, The Generations Network
> and chaired by Michael Driscoll, Dataspora LLC
>
> see: http://www.lecturemaker.com/2009/02/r-kickoff-video/
>
> Best,
> Jim Porzak
It could be very interesting to watch that video! However, I have
had a close look at the web page you cite:
http://www.lecturemaker.com/2009/02/r-kickoff-video/
and cannot find a link to a video. Lots of links to non-video
things, but none that I could see to a video.
There is a link on that page at:
How Google and Facebook are using R
by Michael E. Driscoll | February 19, 2009
<http://dataspora.com/blog/predictive-analytics-using-r/>
Following that link leads to a page, on which the first link, in:
<(March 26th Update: Video now available)>
Last night, I moderated our Bay Area R Users Group kick-off
event with a panel discussion entitled "The R and Science of
Predictive Analytics", co-located with the Predictive Analytics
World conference here in SF.
leads you back to where you came from, and likewise the link at
the bottom of the page:
<A video of the event> is now available courtesy of Ron Fredericks
and LectureMaker.
Could you help by describing where on that web page it can be found?
With thanks,
Ted.
--------------------------------------------------------------------
E-Mail: (Ted Harding) <Ted.Harding@manchester.ac.uk>
Fax-to-email: +44 (0)870 094 0861
Date: 30-Mar-09 Time: 23:55:07
------------------------------ XFMail ------------------------------
------------------------------
Message: 71
Date: Mon, 30 Mar 2009 18:55:39 -0400
From: Veerappa Chetty <chettyvk@gmail.com>
Subject: [R] two monitors
To: r-help@r-project.org
Message-ID:
<14825e3f0903301555w101dd00cs94aac36af6992ce@mail.gmail.com>
Content-Type: text/plain
Hi, I have set up two monitors. I am using windows XP. I would like to keep
one window- command line in one monitor and the script and graphs in the
second monitor. How do I set it up?
It works for word documents simply by dragging the document. It does not
work if I drag and drop the scripts window. Is R not compatible for this?
Thanks.
Chetty
--
Professor of Family Medicine
Boston University
Tel: 617-414-6221, Fax:617-414-3345
emails: chettyvk@gmail.com,vchetty@bu.edu
------------------------------
Message: 72
Date: Tue, 31 Mar 2009 10:46:01 +1300
From: Paul Murrell <p.murrell@auckland.ac.nz>
Subject: Re: [R] ggplot2-geom_text()
To: mazatlanmexico@yahoo.com
Cc: r-help@stat.math.ethz.ch
Message-ID: <49D13D99.4030504@stat.auckland.ac.nz>
Content-Type: text/plain; charset=ISO-8859-1
Hi
Felipe Carrillo wrote:
> Hi: I need help with geom_text().
> I would like to count the number of Locations
> and put the sum of it right above each bar.
>
> x <- "Location Lake_dens Fish Pred
> Lake1 1.132 1 0.115
> Lake1 0.627 1 0.148
> Lake1 1.324 1 0.104
> Lake1 1.265 1 0.107
> Lake2 1.074 0 0.096
> Lake2 0.851 0 0.108
> Lake2 1.098 0 0.095
> Lake2 0.418 0 0.135
> Lake2 1.256 1 0.088
> Lake2 0.554 1 0.126
> Lake2 1.247 1 0.088
> Lake2 0.794 1 0.112
> Lake2 0.181 0 0.152
> Lake3 1.694 0 0.001
> Lake3 1.018 0 0.001
> Lake3 2.88 0 0"
> DF <- read.table(textConnection(x), header = TRUE)
> p <- ggplot(DF,aes(x=Location)) + geom_bar()
> p + geom_text(aes(y=Location),label=sum(count)) # Error because count
> doesn't exist in dataset
>
> What should I use instead of 'count' to be able to sum the number
> of Locations?
How about ... ?
p + geom_text(aes(label=..count..), stat="bin",
vjust=1, colour="white")
Paul
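For reference, Paul's suggestion runs as follows with a minimal stand-in for
the quoted data (this uses the 2009-era ggplot2 spelling; current versions
write stat = "count" and after_stat(count) instead of stat = "bin" and
..count..):

```r
library(ggplot2)

# Minimal stand-in for the quoted data: only Location matters for the counts.
DF <- data.frame(Location = rep(c("Lake1", "Lake2", "Lake3"), c(4, 9, 3)))

# geom_bar() bins by Location; geom_text() with the same stat recomputes the
# per-bar count and exposes it as ..count.. for the label.
ggplot(DF, aes(x = Location)) +
  geom_bar() +
  geom_text(aes(label = ..count..), stat = "bin", vjust = 1, colour = "white")
```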
> Felipe D. Carrillo
> Supervisory Fishery Biologist
> Department of the Interior
> US Fish & Wildlife Service
> California, USA
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
--
Dr Paul Murrell
Department of Statistics
The University of Auckland
Private Bag 92019
Auckland
New Zealand
64 9 3737599 x85392
paul@stat.auckland.ac.nz
http://www.stat.auckland.ac.nz/~paul/
------------------------------
Message: 73
Date: Mon, 30 Mar 2009 18:17:25 -0400
From: Kelsey Scheitlin <kns07g@fsu.edu>
Subject: [R] Mapping in R
To: r-help@r-project.org
Message-ID: <fb7ee426702b.49d10cb5@fsu.edu>
Content-Type: text/plain; charset=us-ascii
Hi, I am looking for a specific mapping capability in R that I can't seem to
find, but think exists. I would like to make a border of a map have alternating
black and white squares instead of the common latitude and longitude grid
(example: http://www.cccturtle.org/sat_maps/map0bw8.gif). If anyone knows
whether there is or is not a function capable of doing this, could yo[[elided
Yahoo spam]]
Kelsey
------------------------------
Message: 74
Date: Mon, 30 Mar 2009 16:56:46 -0400
From: "AbouEl-Makarim Aboueissa" <aaboueissa@usm.maine.edu>
Subject: [R] Comparing Points on Two Regression Lines
To: jfox@mcmaster.ca, jrkrideau@yahoo.ca, pburns@pburns.seanet.com,
r-help@stat.math.ethz.ch, r-help-request@stat.math.ethz.ch,
roland.rproject@gmail.com, tuechler@gmx.at, wwwhsd@gmail.com
Message-ID: <49D0F9CD.01BB.00A6.1@usm.maine.edu>
Content-Type: text/plain; charset="utf-8"
Dear R users:
Suppose I have two different response variables y1, y2 that I regress separately
on the different explanatory variables, x1 and x2 respectively. I need to
compare points on two regression lines.
These are the x and y values for each lines.
x1<-c(0.5,1.0,2.5,5.0,10.0)
y1<-c(204,407,1195,2740,4313)
x2<-c(2.5,5.0,10.0,25.0)
y2<-c(440,713,1520,2634)
Suppose we need to compare the two lines at the common value of x=13.
Please see attached the method as described in section 18.3 in Jerrold H. Zar.
With many thanks
Abou
========================
AbouEl-Makarim Aboueissa, Ph.D.
Assistant Professor of Statistics
Department of Mathematics & Statistics
University of Southern Maine
96 Falmouth Street
P.O. Box 9300
Portland, ME 04104-9300
Tel: (207) 228-8389
Fax: (207) 780-5607
Email: aaboueissa@usm.maine.edu
aboueiss@yahoo.com
http://www.usm.maine.edu/~aaboueissa/
Office: 301C Payson Smith
------------------------------
Message: 75
Date: Mon, 30 Mar 2009 16:00:11 -0700
From: Sundar Dorai-Raj <sdorairaj@gmail.com>
Subject: Re: [R] use R Group SFBA April meeting reminder; video of Feb
k
To: ted.harding@manchester.ac.uk
Cc: r-help <r-help@r-project.org>, Jim Porzak <jporzak@gmail.com>
Message-ID:
<c9ce82b00903301600o29f42e04t4ffe0b3dce3d74ed@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Could be that you have some sort of ad filter in your browser that's
blocking the video? It appears just fine for me in Firefox 3.
On Mon, Mar 30, 2009 at 3:55 PM, Ted Harding
<Ted.Harding@manchester.ac.uk> wrote:
> On 30-Mar-09 22:13:04, Jim Porzak wrote:
>> Next week Wednesday evening, April 8th, Mike Driscoll will be talking
>> about "Building Web Dashboards using R"
>> see: http://www.meetup.com/R-Users/calendar/9718968/ for details & to
>> RSVP.
>>
>> Also of interest, our member Ron Fredericks has just posted a well
>> edited video of the February kickoff panel discussion at Predictive
>> Analytics World "The R and Science of Predictive Analytics: Four Case
>> Studies in R" with
>>   * Bo Cowgill, Google
>>   * Itamar Rosenn, Facebook
>>   * David Smith, Revolution Computing
>>   * Jim Porzak, The Generations Network
>> and chaired by Michael Driscoll, Dataspora LLC
>>
>> see: http://www.lecturemaker.com/2009/02/r-kickoff-video/
>>
>> Best,
>> Jim Porzak
>
> It could be very interesting to watch that video! However, I have
> had a close look at the web page you cite:
>
>   http://www.lecturemaker.com/2009/02/r-kickoff-video/
>
> and cannot find a link to a video. Lots of links to non-video
> things, but none that I could see to a video.
>
> There is a link on that page at:
>   How Google and Facebook are using R
>   by Michael E. Driscoll | February 19, 2009
>   <http://dataspora.com/blog/predictive-analytics-using-r/>
>
> Following that link leads to a page, on which the first link, in:
>
>   <(March 26th Update: Video now available)>
>   Last night, I moderated our Bay Area R Users Group kick-off
>   event with a panel discussion entitled "The R and Science of
>   Predictive Analytics", co-located with the Predictive Analytics
>   World conference here in SF.
>
> leads you back to where you came from, and likewise the link at
> the bottom of the page:
>
>   <A video of the event> is now available courtesy of Ron Fredericks
>   and LectureMaker.
>
> Could you help by describing where on that web page it can be found?
> With thanks,
> Ted.
>
> --------------------------------------------------------------------
> E-Mail: (Ted Harding) <Ted.Harding@manchester.ac.uk>
> Fax-to-email: +44 (0)870 094 0861
> Date: 30-Mar-09                                       Time: 23:55:07
> ------------------------------ XFMail ------------------------------
>
------------------------------
Message: 76
Date: Mon, 30 Mar 2009 16:04:15 -0700 (PDT)
From: kerfuffle <pswi@ceh.ac.uk>
Subject: [R] advice for alternative to barchart
To: r-help@r-project.org
Message-ID: <22795050.post@talk.nabble.com>
Content-Type: text/plain; charset=us-ascii
hi folks,
I was wondering if anybody could give me some advice. I've created a
stacked barchart, with 'car model' along the x axis, 'number of
cars' along
the y axis. There are 45 individuals involved, each of which can own any
number of cars, of any model (eg an individual could own two cars of one
model, and another car of a different model). I've got a legend by the side
of the barchart which gives the name of the individual, which gives the
colour to identify which bars belong to which individuals.
The problem (as you've probably guessed) is that it's almost impossible
to
have a distinctive legend for 45 individuals. I can manage 30 distinctive
colors, but as soon as I use shaded lines the number of distinct colours
drops considerably because the legend boxes are so small. This is true even
if I vary line density and angle. Therefore, after a long period of
experimentation, I'm thinking of giving up on barchart.
What I have in mind now is a plot where each 'bar' is a single line, and
the
top of each 'bar' is a symbol (+, *, etc). I figure it should be
possible
to find 45 different symbols. Does anyone have any advice? I'm sorry this
is so open-ended, but I've played with stripchart and dotplot without a lot
of joy. I figure this can't be that uncommon a need (barchart with a
ridiculous number of groups), but I could well be wrong. Is there some way
of altering the size of the legend boxes in the barchart? Using symbols in
the barchart? Some way of using, say, 30 blocks of colour, and 15 cases of
a dashed line?
Any thoughts would be greatly appreciated.
Paul
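(One base-graphics sketch of the plot described above, with made-up data,
purely to illustrate the line-plus-symbol idea; pch 0:25 gives 26 distinct
plotting symbols, and single characters extend the set to 45:)

```r
# Made-up data: 45 individuals, each owning some number of cars.
set.seed(1)
counts <- rpois(45, lambda = 5)
idx <- seq_along(counts)

# One thin vertical line per individual (type = "h"), topped with a
# distinct symbol per individual.
plot(idx, counts, type = "h",
     xlab = "Individual", ylab = "Number of cars")
points(idx[1:26], counts[1:26], pch = 0:25)
points(idx[27:45], counts[27:45], pch = letters[1:19])
```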
--
View this message in context:
http://www.nabble.com/advice-for-alternative-to-barchart-tp22795050p22795050.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 77
Date: Mon, 30 Mar 2009 16:04:40 -0700
From: Jim Porzak <jporzak@gmail.com>
Subject: Re: [R] use R Group SFBA April meeting reminder; video of Feb
k
To: Sundar Dorai-Raj <sdorairaj@gmail.com>
Cc: r-help <r-help@r-project.org>, ted.harding@manchester.ac.uk
Message-ID:
<2a9c000c0903301604l2ee19d87ld80f8680d853097d@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Since Sundar beat me to it w/ Firefox 3 test, I checked with IE 7.0 -
works fine for me there also.
-Jim
On Mon, Mar 30, 2009 at 4:00 PM, Sundar Dorai-Raj <sdorairaj@gmail.com> wrote:
> Could be that you have some sort of ad filter in your browser that's
> blocking the video? It appears just fine for me in Firefox 3.
------------------------------
Message: 78
Date: Mon, 30 Mar 2009 18:18:33 -0500
From: Daniel Viar <dan.viar@gmail.com>
Subject: Re: [R] two monitors
To: Veerappa Chetty <chettyvk@gmail.com>
Cc: r-help@r-project.org
Message-ID:
<4cca6b120903301618u29111c57n350579ddd1a17c2f@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Try Edit --> Gui Preferences --> SDI
and see if that works.
Dan Viar
Chesapeake, VA
On Mon, Mar 30, 2009 at 5:55 PM, Veerappa Chetty <chettyvk@gmail.com> wrote:
> Hi, I have set up two monitors. I am using windows XP. I would like to keep
> one window- command line in one monitor and the script and graphs in the
> second monitor. How do I set it up?
> It works for word documents simply by dragging the document. It does not
> work if I drag and drop the scripts window. Is R not compatible for this?
> Thanks.
> Chetty
>
> --
> Professor of Family Medicine
> Boston University
> Tel: 617-414-6221, Fax:617-414-3345
> emails: chettyvk@gmail.com,vchetty@bu.edu
------------------------------
Message: 79
Date: Mon, 30 Mar 2009 16:33:29 -0700
From: Elaine Jones <jones2@us.ibm.com>
Subject: [R] Can I read a file into my workspace from Rprofile.site?
To: r-help@r-project.org
Message-ID:
<OF5350A4BC.022B9A84-ON87257589.00802CE0-88257589.00812413@us.ibm.com>
Content-Type: text/plain
I am running R version 2.8.1 on Windows XP OS.
When I launch R, I would like to automatically read a file containing my
database connections, user ids, and passwords into my workspace.
I tried including this in my Rprofile.site file:
...
local({
  old <- getOption("defaultPackages")
  options(defaultPackages = c(old, "Rcmdr", "RODBC", "utils"))
})

.First <- function() {
  library(utils)
  setwd("C:/Documents and Settings/Administrator/My Documents/R")
  connections <- read.csv("connections.csv", header=TRUE)
  cat("\n Welcome to R Elaine!\n\n")
}
...
When I launch R, it does not give me any error. The working directory
appears to be set by the Rprofile.site file, but the connections object is
not in my workspace:
[[elided Yahoo spam]]
Loading required package: tcltk
Loading Tcl/Tk interface ... done
Loading required package: car
Rcmdr Version 1.4-7
> ls()
character(0)
[[elided Yahoo spam]]
**************** Elaine McGovern Jones ************************
ISC Tape and DASD Storage Products
Characterization and Failure Analysis Engineering
Phone: 408 284 4853 Internal: 3-4853
jones2@us.ibm.com
------------------------------
Message: 80
Date: Mon, 30 Mar 2009 19:41:37 -0400
From: Duncan Murdoch <murdoch@stats.uwo.ca>
Subject: Re: [R] Can I read a file into my workspace from
Rprofile.site?
To: Elaine Jones <jones2@us.ibm.com>
Cc: r-help@r-project.org
Message-ID: <49D158B1.5020308@stats.uwo.ca>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Elaine Jones wrote:
> I am running R version 2.8.1 on Windows XP OS.
>
> When I launch R, I would like to automatically read a file containing my
> database connections, user ids, and passwords into my workspace.
>
> I tried including this in my Rprofile.site file:
>
> ...
> local({
>   old <- getOption("defaultPackages")
>   options(defaultPackages = c(old, "Rcmdr", "RODBC", "utils"))
> })
>
> .First <- function() {
> library(utils)
> setwd("C:/Documents and Settings/Administrator/My Documents/R")
> connections <- read.csv("connections.csv", header=TRUE)
> cat("\n Welcome to R Elaine!\n\n")
> }
>
>
The connections variable will be local to .First, and will disappear
after that function is done. To save the variable into the global
environment, use
connections <<- read.csv("connections.csv", header=TRUE)
instead.
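An equivalent spelling, which makes the target environment explicit, is
assign():

```r
.First <- function() {
  library(utils)
  setwd("C:/Documents and Settings/Administrator/My Documents/R")
  # assign() with an explicit envir does the same job as `<<-` here,
  # and makes it obvious where the variable ends up.
  assign("connections",
         read.csv("connections.csv", header = TRUE),
         envir = .GlobalEnv)
}
```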
Duncan Murdoch
> ...
>
> When I launch R, it does not give me any error. The working directory
> appears to be set by the Rprofile.site file, but the connections object is
> not in my workspace:
>
>
[[elided Yahoo spam]]
>
> Loading required package: tcltk
> Loading Tcl/Tk interface ... done
> Loading required package: car
>
> Rcmdr Version 1.4-7
>
>
>> ls()
>>
> character(0)
>
>
[[elided Yahoo spam]]
>
> **************** Elaine McGovern Jones ************************
>
> ISC Tape and DASD Storage Products
> Characterization and Failure Analysis Engineering
> Phone: 408 284 4853 Internal: 3-4853
> jones2@us.ibm.com
>
------------------------------
Message: 81
Date: Tue, 31 Mar 2009 00:49:11 +0100 (BST)
From: (Ted Harding) <Ted.Harding@manchester.ac.uk>
Subject: Re: [R] use R Group SFBA April meeting reminder; video of Feb
k
To: Jim Porzak <jporzak@gmail.com>
Cc: r-help <r-help@r-project.org>
Message-ID: <XFMail.090331004911.Ted.Harding@manchester.ac.uk>
Content-Type: text/plain; charset=iso-8859-1
On 30-Mar-09 23:04:40, Jim Porzak wrote:
> Since Sundar beat me to it w/ Firefox 3 test, I checked with IE 7.0 -
> works fine for me there also.
>
> -Jim
Interesting! I'm using Iceweasel 2.0.0.19 (Firefox under another
[[elided Yahoo spam]]
I put on a fragrantly scented facemask and started IE up on Windows.
Going to the same URL, I now find a big "video screen" just below
the line:
The R and Science of Predictive Analytics: Four Case in R -- The Video
And it duly plays.
But at the same place in my Firefox, I only see a little button
inviting me to "Get Adobe Flash Player". But I already have that
installed for Iceweasel!. Well, maybe it needs updating. Let me
try that ... It says "Adobe Flash Player version 10.0.22.87" and
I have flashplayer_9 already there, so ... (some time later) I now
have flashplayer_10 installed, but I still get the same result.
Hmmm ....
[[elided Yahoo spam]]
Ted.
--------------------------------------------------------------------
E-Mail: (Ted Harding) <Ted.Harding@manchester.ac.uk>
Fax-to-email: +44 (0)870 094 0861
Date: 31-Mar-09 Time: 00:49:08
------------------------------ XFMail ------------------------------
------------------------------
Message: 82
Date: Tue, 31 Mar 2009 13:05:05 +1300
From: Rolf Turner <r.turner@auckland.ac.nz>
Subject: Re: [R] use R Group SFBA April meeting reminder; video of Feb
k
To: "ted.harding@manchester.ac.uk"
<ted.harding@manchester.ac.uk>
Cc: r-help <r-help@r-project.org>, Jim Porzak <jporzak@gmail.com>
Message-ID: <85BA9CAB-C8F0-4701-AF1B-5E82681FB862@auckland.ac.nz>
Content-Type: text/plain; charset="US-ASCII"; delsp=yes; format=flowed
I get to the video screen OK --- there's a large greenish sideways
triangle waiting to be clicked on. I do so; there's a message that
says it's downloading, with a little progress bar. That seems to
complete quite rapidly. Then nothing for a while. Then an error
message on the video screen saying ``Fatal error --- video source
not ready.'' Then that error message goes away. Long wait. Then
I get audio, but never any video. Give up.
I'm using Firefox on an Imac; the ``About Mozilla Firefox'' button
on the Firefox dropdown menu says I've got Mozilla 5.0, Firefox 2.0.0.2
--- whatever that means.
Bottom line --- I can't watch the video.
But that's the story of my life. ***Nothing*** ever works for me! :-)
Except R, which *mostly* works.
cheers,
Rolf Turner
On 31/03/2009, at 12:49 PM, Ted Harding wrote:
> On 30-Mar-09 23:04:40, Jim Porzak wrote:
>> Since Sundar beat me to it w/ Firefox 3 test, I checked with IE 7.0 -
>> works fine for me there also.
>>
>> -Jim
>
> Interesting! I'm using Iceweasel 2.0.0.19 (Firefox under another
[[elided Yahoo spam]]
> I put on a fragrantly scented facemask and started IE up on Windows.
> Going to the same URL, I now find a big "video screen" just below
> the line:
>
> The R and Science of Predictive Analytics: Four Case in R -- The Video
>
> And it duly plays.
>
> But at the same place in my Firefox, I only see a little button
> inviting me to "Get Adobe Flash Player". But I already have that
> installed for Iceweasel!. Well, maybe it needs updating. Let me
> try that ... It says "Adobe Flash Player version 10.0.22.87" and
> I have flashplayer_9 already there, so ... (some time later) I now
> have flashplayer_10 installed, but I still get the same result.
> Hmmm ....
[[elided Yahoo spam]]
> Ted.
>
>
>
>> On Mon, Mar 30, 2009 at 4:00 PM, Sundar Dorai-Raj
>> <sdorairaj@gmail.com>
>> wrote:
>>> Could be that you have some sort of ad filter in your browser
that's
>>> blocking the video? It appears just fine for me in Firefox 3.
>>>
>>> On Mon, Mar 30, 2009 at 3:55 PM, Ted Harding
>>> <Ted.Harding@manchester.ac.uk> wrote:
>>>> On 30-Mar-09 22:13:04, Jim Porzak wrote:
>>>>> Next week Wednesday evening, April 8th, Mike Driscoll will
be
>>>>> talking
>>>>> about "Building Web Dashboards using R"
>>>>> see: http://www.meetup.com/R-Users/calendar/9718968/ for
details &
>>>>> to
>>>>> RSVP.
>>>>>
>>>>> Also of interest, our member Ron Fredericks has just posted
a well
>>>>> edited video of the February kickoff panel discussion at
>>>>> Predictive
>>>>> Analytics World "The R and Science of Predictive
Analytics: Four
>>>>> Case
>>>>> Studies in R" with
>>>>> _ _ * Bo Cowgill, Google
>>>>> _ _ * Itamar Rosenn, Facebook
>>>>> _ _ * David Smith, Revolution Computing
>>>>> _ _ * Jim Porzak, The Generations Network
>>>>> and chaired by Michael Driscoll, Dataspora LLC
>>>>>
>>>>> see: http://www.lecturemaker.com/2009/02/r-kickoff-video/
>>>>>
>>>>> Best,
>>>>> Jim Porzak
>>>>
>>>> It could be very interesting to watch that video! However, I
have
>>>> had a close look at the web page you cite:
>>>>
>>>> _http://www.lecturemaker.com/2009/02/r-kickoff-video/
>>>>
>>>> and cannot find a link to a video. Lots of links to non-video
>>>> things, but none that I could see to a video.
>>>>
>>>> There is a link on that page at:
>>>> _How Google and Facebook are using R
>>>> _by Michael E. Driscoll | February 19, 2009
>>>>
>>>> _<http://dataspora.com/blog/predictive-analytics-using-r/>
>>>>
>>>> Following that link leads to a page, on which the first link, in:
>>>>
>>>> _<(March 26th Update: Video now available)>
>>>> _Last night, I moderated our Bay Area R Users Group kick-off
>>>> _event with a panel discussion entitled "The R and Science of
>>>> _Predictive Analytics", co-located with the Predictive Analytics
>>>> _World conference here in SF.
>>>>
>>>> leads you back to where you came from, and likewise the link at
>>>> the bottom of the page:
>>>>
>>>> _<A video of the event> is now available courtesy of Ron
>>>> _Fredericks and LectureMaker.
>>>>
>>>> Could you help by describing where on that web page it can be
>>>> found?
>>>> With thanks,
>>>> Ted.
>>>>
>>>>
>>>> --------------------------------------------------------------------
>>>> E-Mail: (Ted Harding) <Ted.Harding@manchester.ac.uk>
>>>> Fax-to-email: +44 (0)870 094 0861
>>>> Date: 30-Mar-09                                        Time: 23:55:07
>>>> ------------------------------ XFMail ------------------------------
>>>>
>>>> ______________________________________________
>>>> R-help@r-project.org mailing list
>>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>>> PLEASE do read the posting guide
>>>> http://www.R-project.org/posting-guide.html
>>>> and provide commented, minimal, self-contained, reproducible
code.
>>>>
>>>
>>
>> ______________________________________________
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>
> --------------------------------------------------------------------
> E-Mail: (Ted Harding) <Ted.Harding@manchester.ac.uk>
> Fax-to-email: +44 (0)870 094 0861
> Date: 31-Mar-09 Time: 00:49:08
> ------------------------------ XFMail ------------------------------
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
######################################################################
Attention:\ This e-mail message is privileged and confid...{{dropped:9}}
------------------------------
Message: 83
Date: Tue, 31 Mar 2009 00:05:48 +0000 (GMT)
From: "stenka1@go.com" <stenka1@go.com>
Subject: [R] RMySQL compile
To: r-help@r-project.org
Message-ID: <938853546.111396.1238457948199.JavaMail.mail@webmail01>
Content-Type: text/plain; charset="UTF-8"
I am trying to install RMySQL on OpenSolaris and need to pass some options
to configure. I tried the 3 recommended ways of doing it and nothing works;
configure cannot find the headers and the libs.
I ran configure manually and that worked fine
./configure CC=cc CFLAGS="-dalign -KPIC -xlic_lib=sunperf" \
  LDFLAGS="-L/usr/sfw/lib -L/usr/lib -L/usr/local/lib -L/opt/csw/lib" \
  CPPFLAGS="-I/usr/local/lib/R/include -I/usr/mysql/5.0/include/mysql \
    -I/opt/csw/include -I/usr/sfw/include -I/usr/X11/include \
    -I/usr/lib/gtk-2.0/include -I/usr/local/include" \
  --with-mysql-inc="/usr/mysql/include/mysql" \
  --with-mysql-lib="/usr/mysql/lib/mysql"
checking mysql.h usability... yes
checking mysql.h presence... yes
checking for mysql.h... yes
configure: creating ./config.status
config.status: creating src/Makevars
but then there is no Makefile. Please advise how to run make manually, or
help with the syntax: I tried with quotes after the second = and after the
first, and neither worked:
R CMD INSTALL \
  --configure-args=CC=cc \
  --configure-args=CFLAGS="-dalign -KPIC -xlic_lib=sunperf" \
  --configure-args=LDFLAGS="-L/usr/sfw/lib -L/usr/lib -L/usr/local/lib -L/opt/csw/lib" \
  --configure-args=CPPFLAGS="-I/usr/local/lib/R/include -I/usr/mysql/5.0/include/mysql -I/opt/csw/include -I/usr/sfw/include -I/usr/X11/include -I/usr/lib/gtk-2.0/include -I/usr/local/include" \
  --configure-args="--with-mysql-inc=/usr/mysql/include/mysql" \
  --configure-args="--with-mysql-lib=/usr/mysql/lib/mysql" \
  RMySQL_0.7-3.tar.gz   ## does not work
Stephen C. Bond
------------------------------
Message: 84
Date: Tue, 31 Mar 2009 01:58:44 +0100 (BST)
From: (Ted Harding) <Ted.Harding@manchester.ac.uk>
Subject: Re: [R] advice for alternative to barchart
To: "kerfuffle ent owners (" <pswi@ceh.ac.uk>
Cc: r-help@r-project.org
Message-ID: <XFMail.090331015844.Ted.Harding@manchester.ac.uk>
Content-Type: text/plain; charset=iso-8859-1
On 30-Mar-09 23:04:15, kerfuffle wrote:
> hi folks,
>
> I was wondering if anybody could give me some advice. I've created
> a stacked barchart, with 'car model' along the x axis, 'number of
> cars' along the y axis. There are 45 individuals involved, each of
> which can own any number of cars, of any model (eg an individual
> could own two cars of one model, and another car of a different model).
> I've got a legend by the side of the barchart which gives the name
> of the individual, which gives the colour to identify which bars
> belong to which individuals.
> The problem (as you've probably guessed) is that it's almost
> impossible to have a distinctive legend for 45 individuals. I can
> manage 30 distinctive colors, but as soon as I use shaded lines the
> number of distinct colours drops considerably because the legend
> boxes are so small. This is true even if I vary line density and
> angle. Therefore, after a long period of experimentation, I'm
> thinking of giving up on barchart.
>
> What I have in mind now is a plot where each 'bar' is a single line,
> and the top of each 'bar' is a symbol (+, *, etc). I figure it should
> be possible to find 45 different symbols. Does anyone have any
> advice? I'm sorry this is so open-ended, but I've played with
> stripchart and dotplot without a lot of joy. I figure this can't be
> that uncommon a need (barchart with a ridiculous number of groups),
> but I could well be wrong. Is there some way of altering the size
> of the legend boxes in the barchart? Using symbols in the barchart?
> Some way of using, say, 30 blocks of colour, and 15 cases of
> a dashed line?
>
> Any thoughts would be greatly appreciated.
> Paul
(I hope this has nothing to do with the image of blocked traffic
on the CEH website ... :))
Certainly, unreasonable use of barcharts is not uncommon. And it
invariably fails to communicate much! It seems you are trying to
simultaneously present comparisons between numbers of cars owned
by different owners, how these are distributed across models,
how ownership varies by model, and how ownership varies by owner.
This is overloading the communicative capabilities of the barchart.
Thinking about possible variants of the barchart idea which might
better fit your aims, I've thought of the following possibility.
You don't say how many models of car are involved, but say it
is 20. You don't say what the maximum number of cars of one model
owned by a single owner is, but say it is 5. You have 45 owners.
The design is as follows.
1. Put 20 vertical bars along the X-axis, one for each model.
2. Make them tall -- 45 times some unit (and the unit must be large
enough to be divided into 5 equal vertical parts large enough
to be easily distinguished visually).
3. Associate vertical unit i with owner i.
4. Fill in each total vertical bar with a discreet background colour.
5. For owner i and car model M, let NiM be the number of cars of
model M owned by owner i. Fill in the bottom NiM/5 fraction of
vertical block i of bar M with a vivid foreground colour.
6. At the far right, or left, level with each block i, put the
identity of owner i.
In other words, it is like a kind of "crossed bar chart", with
vertical bars corresponding to car models, and horizontal bars
corresponding to owners, and where vertical bar M meets horizontal
bar i the size of the intersection is proportional to the number
of cars of model M owned by owner i.
As described, the natural visual comparison which will be made
by the eye will be the vertical heights of the highlighted
areas corresponding to a single owner i -- i.e. the eye will
run horizontally and perceive the differences of height. Therefore
it directly tells you how model ownership varies by model, for
each owner.
It will not be easy to compare the differences of height, for
the different owners, for a single model M -- since this requires
the eye to run vertically, and it will not readily make the
comparison.
However, you could also add bars which stick out sideways from
the vertical bars, by amounts proportional to the ownership of
model M by owner i -- in the same way as the vertical divisions.
Then the eye can also run vertically and easily pick up the
different ownerships of a given model M by different owners i.
None of these gives you either
a) Comparison between models M of total ownership of a given
model over the different owners;
b) Comparison between owners i of their total numbers of cars
owned, over the different car models.
However, you can also adjoin, to the array of coloured blocks already
constructed, marginal bar charts:
A) For car models: Above the whole diagram, have a barchart with
horizontal axis and vertical bars, one for each car model M,
whose heights are proportional to the total number of cars of
model M owned by all owners together;
B) For owners: To the right of the whole diagram, have a barchart
with vertical axis and horizontal bars, whose lengths for each
owner i are proportional to the total cars owned by owner i.
The (A) answers (a), and (B) answers (b).
As to how best to achieve this in R, I'm not sure. In any case,
I don't draw sophisticated graphics like this directly in R,
since I would want totally precise control over sizes, shapes
and positions in the result. I in fact use the 'pic' component
of the 'groff' package (though these days it may be possible to
achieve the same sort of result in TeX). However (staying on topic)
I would certainly get R to generate the data array from which
the graphic would be drawn by 'pic'. Other graphics software may
also allow this to be easily and precisely done (I would not like
to recommend Excel ... ).
If you were to post the data, or a suitable set of similar artificial
data (or mail to me privately) I would be happy to try my hand at
producing the sort of thing described above.
[[elided Yahoo spam]]
Ted.
--------------------------------------------------------------------
E-Mail: (Ted Harding) <Ted.Harding@manchester.ac.uk>
Fax-to-email: +44 (0)870 094 0861
Date: 31-Mar-09 Time: 01:58:41
------------------------------ XFMail ------------------------------
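For what it's worth, the crossed-block design described above can be roughed
out in base R graphics. The following is only a sketch; the data (20 models,
45 owners, ownership counts 0 to 5, and the colour choices) are all made up
for illustration:

```r
## Rough sketch of the "crossed bar chart": one vertical bar per model,
## one unit block per owner, with the bottom N[i,M]/max_n fraction of
## owner i's block in bar M highlighted.
set.seed(42)
n_mod <- 20; n_own <- 45; max_n <- 5
N <- matrix(sample(0:max_n, n_own * n_mod, replace = TRUE,
                   prob = c(8, 3, 2, 1, 1, 1)),
            nrow = n_own, ncol = n_mod)      # N[i, M]: cars of model M owned by i
plot(NULL, xlim = c(0, n_mod), ylim = c(0, n_own),
     xlab = "car model", ylab = "owner", xaxs = "i", yaxs = "i")
for (M in seq_len(n_mod)) {
  rect(M - 0.9, 0, M - 0.1, n_own, col = "grey90", border = NA)  # background bar
  for (i in seq_len(n_own))        # highlight bottom N[i,M]/max_n of block i
    rect(M - 0.9, i - 1, M - 0.1, i - 1 + N[i, M] / max_n,
         col = "steelblue", border = NA)
}
```

The side-bars and marginal barcharts described in (A) and (B) could be added
with further rect() calls in the figure margins.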
------------------------------
Message: 85
Date: Tue, 31 Mar 2009 12:09:41 +1100
From: "Debabrata Midya" <Debabrata.Midya@commerce.nsw.gov.au>
Subject: [R] To save Trellis Plots on A3 size paper (Portrait and
Landscape)
To: <r-help@r-project.org>
Message-ID: <49D20805.5860.00D0.0@commerce.nsw.gov.au>
Content-Type: text/plain
Dear R users,
Thanks in advance.
I am Deb, Statistician at NSW Department of Commerce, Sydney.
I am using R 2.8.1 on Windows XP.
I like to save Trellis Plots on A3 size paper (Portrait and
Landscape).
Currently, I am using the following command to save a Trellis Plot in
pdf [This is an example code]:
pdf("D:/Analysis/test.pdf")
dataFile <- expand.grid(xo = c("x1", "x2", "x3"),
                        xc = c("c1", "c2", "c3", "c4", "c5", "c6"),
                        year = c("2007", "2008", "2009"),
                        quarter = c("Q1", "Q2", "Q3", "Q4"),
                        id = c(1, 2, 3))
dataFile <- dataFile[order(dataFile$xo, dataFile$xc, dataFile$year,
                           dataFile$quarter, dataFile$id), ]
dataFile$s <- runif(nrow(dataFile), 50, 100)
dataFile$time <- paste(as.character(dataFile$year),
                       as.character(dataFile$quarter), sep = ":")
barchart(s ~ time | xc + xo, data = dataFile[dataFile$xo == "x1", ],
         horizontal = FALSE,
         groups = id,
         layout = c(1, 6))
dev.off()
Once again, thank you very much for the time you have given.
I am looking forward for your reply.
Regards,
Debabrata Midya (Deb)
NSW Department of Commerce
Government Procurement Management
Level 11
McKell Building
2-24 Rawson Place
Sydney NSW 2000
Australia
******************************************************************************
This email message, including any attached files, is confidential and
intended solely for the use of the individual or entity to whom it is
addressed.
The NSW Department of Commerce prohibits the right to publish,
copy, distribute or disclose any information contained in this email,
or its attachments, by any party other than the intended recipient.
If you have received this email in error please notify the sender and
delete it from your system.
No employee or agent is authorised to conclude any binding
agreement on behalf of the NSW Department of Commerce by email. The
views or opinions presented in this email are solely those of the author
and do not necessarily represent those of the Department,
except where the sender expressly, and with authority, states them to be
the views of NSW Department of Commerce.
The NSW Department of Commerce accepts no liability for any loss or
damage arising from the use of this email and recommends that the
recipient check this email and any attached files for the presence of
viruses.
******************************************************************************
[[alternative HTML version deleted]]
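No reply to this question appears in this digest. One approach (my assumption,
not from the thread): pdf() has no named A3 paper size, but its width and
height arguments take inches, and A3 is 297 x 420 mm, so the sizes can be
given directly:

```r
## A3 dimensions converted to inches for pdf()'s width/height arguments.
a3_short <- 297 / 25.4   # ~11.69 in
a3_long  <- 420 / 25.4   # ~16.54 in

pdf("trellis_a3_portrait.pdf", width = a3_short, height = a3_long)
## ... print() the trellis object here ...
dev.off()

pdf("trellis_a3_landscape.pdf", width = a3_long, height = a3_short)
## ... print() the trellis object here ...
dev.off()
```

Remember that inside pdf()/dev.off(), lattice plots must be print()ed
explicitly when run non-interactively.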
------------------------------
Message: 86
Date: Mon, 30 Mar 2009 19:42:04 -0700 (PDT)
From: Felipe Carrillo <mazatlanmexico@yahoo.com>
Subject: Re: [R] two monitors
To: r-help@r-project.org, Veerappa Chetty <chettyvk@gmail.com>
Message-ID: <769311.279.qm@web56607.mail.re3.yahoo.com>
Content-Type: text/plain; charset=us-ascii
Hi:
I use two monitors and I didn't have to do anything special to get what you
want done. Try to print an R graph, then drag it to the second screen. If that
doesn't work, you may need to go to your settings and identify screen one and
screen two from the dialog box. Good luck.
Felipe D. Carrillo
Supervisory Fishery Biologist
Department of the Interior
US Fish & Wildlife Service
California, USA
--- On Mon, 3/30/09, Veerappa Chetty <chettyvk@gmail.com> wrote:
> From: Veerappa Chetty <chettyvk@gmail.com>
> Subject: [R] two monitors
> To: r-help@r-project.org
> Date: Monday, March 30, 2009, 3:55 PM
> Hi, I have set up two monitors. I am using windows XP. I
> would like to keep
> one window- command line in one monitor and the script and
> graphs in the
> second monitor. How do I set it up?
> It works for word documents simply by dragging the
> document. It does not
> work if I drag and drop the scripts window. Is R not
> compatible for this?
> Thanks.
> Chetty
>
> --
> Professor of Family Medicine
> Boston University
> Tel: 617-414-6221, Fax:617-414-3345
> emails: chettyvk@gmail.com,vchetty@bu.edu
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained,
> reproducible code.
------------------------------
Message: 87
Date: Mon, 30 Mar 2009 19:45:03 -0700
From: Steven McKinney <smckinney@bccrc.ca>
Subject: Re: [R] what is R equivalent of Fortran DOUBLE PRECISION ?
To: <mauede@alice.it>, <r-help@r-project.org>
Cc: "John C. Nash" <nashjc@uottawa.ca>
Message-ID:
<0BE438149FF2254DB4199E2682C8DFEB0328A683@crcmail1.BCCRC.CA>
Content-Type: text/plain; charset="iso-8859-1"
From the help page for "Foreign"
> ?Foreign
Numeric vectors in R will be passed as type double * to C (and as double
precision to Fortran)
Also, see the help page for "double"
> ?double
R has no single precision data type. All real numbers are stored in double
precision format.
The functions as.single and single are identical to as.double and double except
they set
the attribute Csingle that is used in the .C and .Fortran interface, and they
are intended
only to be used in that context.
HTH
Steven McKinney, Ph.D.
Statistician
Molecular Oncology and Breast Cancer Program
British Columbia Cancer Research Centre
email: smckinney +at+ bccrc +dot+ ca
tel: 604-675-8000 x7561
BCCRC
Molecular Oncology
675 West 10th Ave, Floor 4
Vancouver B.C.
V5Z 1L3
Canada
-----Original Message-----
From: r-help-bounces@r-project.org on behalf of mauede@alice.it
Sent: Mon 3/30/2009 3:07 AM
To: r-help@r-project.org
Cc: John C. Nash
Subject: [R] what is R equivalent of Fortran DOUBLE PRECISION ?
I noticed that R cannot understand certain Fortran real constant formats. For
instance:
c14 <- as.double( 7.785205408500864D-02)
Error: unexpected symbol in " c14 <- as.double( 7.785205408500864D"
The above "D" is used in the Fortran language to indicate the memory
storage mode, i.e. for instructing the Fortran compiler
to store such a REAL constant in DOUBLE PRECISION... am I right?
Since R cannot understand a numerical constant post-fixed by the letter
"D", I wonder how I can instruct the R interpreter to
store such a numerical constant, reserving as much memory as necessary to
accommodate a double precision number.
I noticed R accepts the following syntax, but I do not know if I have achieved
my goal this way:
> c14 <- as.double( 7.785205408500864E-02)
> typeof(c14)
[1] "double"
My questions are: what is the best precision I can get with R when dealing with
real numbers?
Is R's "double" type equivalent to Fortran DOUBLE PRECISION for internal
number representation?
Thank you very much.
Maura
[[elided Yahoo spam]]
[[alternative HTML version deleted]]
______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
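A quick check of the points in this exchange (the constant is the one from
Maura's message; everything else is base R):

```r
## R's only real type is IEEE 754 double precision, matching Fortran's
## DOUBLE PRECISION; the exponent marker in R is "e", not Fortran's "D".
c14 <- 7.785205408500864e-02
typeof(c14)                     # "double" -- the only real type R has
.Machine$double.eps             # machine epsilon for doubles, about 2.22e-16
identical(c14, as.double(c14))  # TRUE: numeric literals are already double
```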
------------------------------
Message: 88
Date: Mon, 30 Mar 2009 19:58:38 -0700 (PDT)
From: minben <minbenh@gmail.com>
Subject: [R] How to generate natural cubic spline in R?
To: r-help@r-project.org
Message-ID:
<f821254a-e11e-459e-80da-53fa7fece229@x29g2000prf.googlegroups.com>
Content-Type: text/plain; charset=ISO-8859-1
Suppose I have two var x and y,now I want to fits a natural cubic
spline in x to y,at the same time create new var containing the
smoothed values of y. How can I get it?
------------------------------
Message: 89
Date: Mon, 30 Mar 2009 23:43:38 -0400
From: David Winsemius <dwinsemius@comcast.net>
Subject: Re: [R] How to generate natural cubic spline in R?
To: minben <minbenh@gmail.com>
Cc: r-help@r-project.org
Message-ID: <CD1977E4-41E6-4CBC-84A4-B36875CC839D@comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
If one enters:
??"spline"
... You get quite a few matches. The one in the stats functions that
probably answers your specific questions is:
"splinefun {stats} R Documentation
Interpolating Splines Description
Perform cubic (or Hermite) spline interpolation of given data points,
returning either a list of points obtained by the interpolation or a
function performing the interpolation."
"splinefun returns a function with formal arguments x and deriv, the
latter defaulting to zero. This function can be used to evaluate the
interpolating cubic spline (deriv=0), or its derivatives (deriv=1,2,3)
at the points x, where the spline function interpolates the data
points originally specified. This is often more useful than spline."
Perhaps you need to review your basic intro material regarding
help.search("text") # or
??"text" # possibilities.
--
David Winsemius
On Mar 30, 2009, at 10:58 PM, minben wrote:
> Suppose I have two var x and y,now I want to fits a natural cubic
> spline in x to y,at the same time create new var containing the
> smoothed values of y. How can I get it?
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
David Winsemius, MD
Heritage Laboratories
West Hartford, CT
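To make the pointer concrete, a minimal sketch (the data here are made up):
splinefun() with method = "natural" interpolates exactly, while a natural-
spline regression via splines::ns() gives a smoothed copy of y, which is
what the question asks for:

```r
library(splines)  # ships with R

set.seed(1)
x <- seq(0, 10, length.out = 50)
y <- sin(x) + rnorm(50, sd = 0.2)

## Natural cubic spline interpolation: f() passes through every data point.
f <- splinefun(x, y, method = "natural")
all.equal(f(x), y)         # TRUE at the original points

## Natural cubic spline *regression*: fitted() is the new variable holding
## the smoothed values of y.
fit <- lm(y ~ ns(x, df = 4))
y_smooth <- fitted(fit)    # one smoothed value per observation
```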
------------------------------
Message: 90
Date: Tue, 31 Mar 2009 15:16:47 +1100
From: Gad Abraham <gabraham@csse.unimelb.edu.au>
Subject: Re: [R] Binning
To: Santosh <santosh2005@gmail.com>
Cc: r-help@r-project.org
Message-ID: <49D1992F.9020209@csse.unimelb.edu.au>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Santosh wrote:
> Dear useRs...
>
> How do you all do binning of an independent variable using R? For example,
> observations of a dependent variable at times different from a nominal time.
>
>
> Are there any R functions that help with binning?
Have a look at ?cut.
--
Gad Abraham
MEng Student, Dept. CSSE and NICTA
The University of Melbourne
Parkville 3010, Victoria, Australia
email: gabraham@csse.unimelb.edu.au
web: http://www.csse.unimelb.edu.au/~gabraham
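A minimal sketch of the ?cut suggestion applied to the question: binning
observed times to nominal times. The nominal times 0, 4, 8, 12 and the
observed values are assumptions for illustration; the break points are the
midpoints between nominal times:

```r
## Bin observed times into intervals around nominal times with cut().
obs_time <- c(0.1, 3.8, 4.3, 7.9, 11.6, 12.2)  # made-up observed times
breaks   <- c(-Inf, 2, 6, 10, Inf)             # midpoints between 0, 4, 8, 12
bin <- cut(obs_time, breaks = breaks, labels = c(0, 4, 8, 12))
nominal <- as.numeric(as.character(bin))       # numeric nominal time per obs
table(bin)                                     # counts per nominal bin
```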
------------------------------
Message: 91
Date: Tue, 31 Mar 2009 14:03:03 +0800
From: Yihui Xie <xieyihui@gmail.com>
Subject: Re: [R] How to get commands history as a character vector
instead of displaying them?
To: Wacek Kusnierczyk <Waclaw.Marcin.Kusnierczyk@idi.ntnu.no>
Cc: R Help <r-help@r-project.org>
Message-ID:
<89b6b8c90903302303l5526aa4ew653c13bc35d052eb@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Thanks, Wacek and Romain! Your solutions worked very well in RGui and
Rterm in interactive mode.
My final purpose was to obtain the file name of the postscript device
in Rweb (http://pbil.univ-lyon1.fr/Rweb/); now I found savehistory()
would not work because Rweb was non-interactive. I didn't realize it
until I try(savehistory()) and got an error message.
Now I found a solution by myself: we can list the *.ps files and pick
the most recently created (modified, visited, ...) one, e.g.
x = file.info(list.files(pattern = ".*\\.ps$"))
x = x[order(x$atime), ]
rownames(x)[nrow(x)]
Regards,
Yihui
--
Yihui Xie <xieyihui@gmail.com>
Phone: +86-(0)10-82509086 Fax: +86-(0)10-82509086
Mobile: +86-15810805877
Homepage: http://www.yihui.name
School of Statistics, Room 1037, Mingde Main Building,
Renmin University of China, Beijing, 100872, China
On Mon, Mar 23, 2009 at 6:13 PM, Wacek Kusnierczyk
<Waclaw.Marcin.Kusnierczyk@idi.ntnu.no> wrote:
> Romain Francois wrote:
>> Yihui Xie wrote:
>>> Hi Everyone,
>>>
>>> I want to get the commands history as a character vector instead of
>>> just displaying them, but the function history() just returns NULL. I
>>> checked the source code of 'history' and could not find a solution.
[[elided Yahoo spam]]
>>>
>> history eventually calls file.show, which will use the pager option to
>> determine how to show the file, so you can do something like that:
>>
>> history <- function( ... ){
>>   old.op <- options( pager = function( files, header, title, delete.file )
>>     readLines( files ) ); on.exit( options( old.op ) )
>>   utils::history(...)
>> }
>> history( pattern = "png" )
>
> i think the following is an acceptable alternative:
>
>    history = function() {
>        file = tempfile()
>        on.exit(unlink(file))
>        savehistory(file)
>        readLines(file) }
>
> the output is *lines* of text, but if you need whole executable
> expressions, you can parse the output:
>
>    1 + 1
>    ph = parse(text=history())
>    as.list(ph)
>    ph[length(ph)-1]
>    # expression(1 + 1)
>    eval(ph[length(ph)-1])
>    # [1] 2
>
>
> vQ
>
------------------------------
Message: 92
Date: Tue, 31 Mar 2009 00:14:12 -0700 (PDT)
From: Bob Roberts <quagmire54321@yahoo.com>
Subject: [R] Convert Character to Date
To: r-help@r-project.org
Message-ID: <662076.42273.qm@web112213.mail.gq1.yahoo.com>
Content-Type: text/plain
Hello,
I have a date in the format Year-Month Name (e.g. 1990-January) and R classes
it as a character. I want to convert this character into a date format, but when
I try as.Date(1990-January, "%Y-%B"), I get back NA. The function
strptime also gives me NA back. Thanks.
[[alternative HTML version deleted]]
------------------------------
Message: 93
Date: Tue, 31 Mar 2009 18:28:46 +1100
From: <Bill.Venables@csiro.au>
Subject: Re: [R] Convert Character to Date
To: <quagmire54321@yahoo.com>, <r-help@r-project.org>
Message-ID:
<1BDAE2969943D540934EE8B4EF68F95FB22F48CDF7@EXNSW-MBX03.nexus.csiro.au>
Content-Type: text/plain; charset="us-ascii"
If you want the vector to be a Date you need to specify a date at least down to
the day. Otherwise the date is not well defined and becomes <NA> as you
noted.
Perhaps the easiest thing is to give it a particular day of the month, e.g. the
first, or the 15 (the "ides"), or ...
> x <- as.Date(paste("1990-January", 1, sep="-"), format = "%Y-%B-%d")
> x
[1] 1990-01-01
Now if you want to display the date suppressing the dummy day, you can
> y <- format(x, "%Y-%B")
> y
[1] "1990-January"
Bill Venables
http://www.cmis.csiro.au/bill.venables/
-----Original Message-----
From: r-help-bounces@r-project.org [mailto:r-help-bounces@r-project.org] On
Behalf Of Bob Roberts
Sent: Tuesday, 31 March 2009 5:14 PM
To: r-help@r-project.org
Subject: [R] Convert Character to Date
Hello,
I have a date in the format Year-Month Name (e.g. 1990-January) and R classes
it as a character. I want to convert this character into a date format, but when
I try as.Date(1990-January, "%Y-%B"), I get back NA. The function
strptime also gives me NA back. Thanks.
[[alternative HTML version deleted]]
______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 94
Date: Tue, 31 Mar 2009 03:36:03 -0400
From: Gabor Grothendieck <ggrothendieck@gmail.com>
Subject: Re: [R] Convert Character to Date
To: Bob Roberts <quagmire54321@yahoo.com>
Cc: r-help@r-project.org
Message-ID:
<971536df0903310036y1bf4a18fh2fe9ddf66396cced@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
The yearmon class in the zoo package can represent year/months:
> library(zoo)
> ym <- as.yearmon("1990-January", "%Y-%B"); ym
[1] "Jan 1990"
> as.Date(ym)
[1] "1990-01-01"
On Tue, Mar 31, 2009 at 3:14 AM, Bob Roberts <quagmire54321@yahoo.com>
wrote:
> Hello,
>   I have a date in the format Year-Month Name (e.g. 1990-January) and R
> classes it as a character. I want to convert this character into a date
> format, but when I try as.Date(1990-January, "%Y-%B"), I get back NA. The
> function strptime also gives me NA back. Thanks.
>
>
>
>
>        [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
------------------------------
Message: 95
Date: Mon, 30 Mar 2009 20:16:04 -0700 (PDT)
From: MarcioRibeiro <mestat@pop.com.br>
Subject: [R] Package candisc
To: r-help@r-project.org
Message-ID: <22797571.post@talk.nabble.com>
Content-Type: text/plain; charset=us-ascii
Hi listers,
I am working on a canonical discriminant analysis, but I am having some
trouble using the candisc function... I just installed the latest R version
and installed the candisc package... But I am getting the following message
about a permission:
The downloaded packages are in
C:\Users\Marcio\AppData\Local\Temp\Rtmpz2kFUm\downloaded_packages
updating HTML package descriptions
Warning message:
In file.create(f.tg) :
cannot create file 'C:\PROGRA~1\R\R-28~1.1/doc/html/packages.html',
reason 'Permission denied'
Could anybody tell me if I am forgetting something in order to use the
candisc function!
Thanks in advance,
Marcio
--
View this message in context:
http://www.nabble.com/Package-candisc-tp22797571p22797571.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 96
Date: Tue, 31 Mar 2009 02:37:58 +0100
From: "Matthew Dowle" <mdowle@mdowle.plus.com>
Subject: [R] [R-pkgs] data.table is on CRAN (enhanced data.frame for
time series joins and more)
To: <r-packages@r-project.org>
Message-ID: <B6BD106B14A14CA1BFA74D9956C560D0@ADAM>
Content-Type: text/plain; charset="us-ascii";
Format="flowed"
Dear all,
The data.table package was released back in August 2008. This email is to
publicise its existence in response to several suggestions to do so. It
seems I didn't send a general announcement about it at the time and
therefore, perhaps not surprisingly, not many people know about it. Glancing
at some r-help threads recently supports the idea of sending a public
announcement.
The main difference between data.frame and data.table is enhanced
functionality in [.data.table where most documentation for this package
lives i.e. help("[.data.table"). Selected extracts from the package
documentation follow.
The package builds on base R functionality to reduce 2 types of time :
1. programming time (easier to write, read, debug and maintain)
2. compute time
when combining database like operations (subset, with and by) and provides
similar joins that merge provides but faster. This is achieved by using R's
column based ordered in-memory data.frame, eval within the environment of a
list (i.e. with), the [.data.table mechanism to condense the features and
compiled C to make certain operations fast.
[.data.table is like [.data.frame but i and j can be expressions of column
names directly. Furthermore i may itself be a data.table which invokes a
fast table join using binary search in O(log n) time. Allowing i to be
data.table is consistent with subsetting an n-dimension array by an n-column
matrix in base R. data.tables do not have rownames but may instead have a
key of one or more columns using setkey. This key may be used for row
indexing instead of rownames.
Examples comparing [.data.frame and [.data.table :
DF = data.frame(a=1:5, b=6:10)
DT = data.table(a=1:5, b=6:10)
tt = subset(DF, a==3)
ss = DT[a==3]   # just use the column name 'a' directly. No need to remember
                # the comma. The i argument is like the 'where' in SQL.
identical(as.data.table(tt), ss)

tt = with(subset(DF, a==3), a+b+1)
ss = DT[a==3, a+b+1]   # j is like select in SQL and the select argument of
                       # subset in base R. j can be an expression of column
                       # names directly, including a data.table of multiple
                       # expressions. Here the j expression is executed just
                       # for the rows matching the i argument.
identical(tt, ss)

# Examples above use vector scans, i.e. the "a==3" expression first creates a
# logical vector as long as the total number of rows and then evaluates a==3
# for every row.
# Examples below use binary search, invoked by passing in a data.table as the
# i argument. Joins in SQL are performed in the where clause, and the i
# argument is like where, so this seems very natural (to me anyway!)

DT = data.table(a=letters[1:5], b=6:10)
setkey(DT, a)
identical(DT[J("d")], DT[4])     # binary search to the row for 'd'

DT = data.table(id=rep(c("A","B"), each=3),
                date=c(20080501L, 20080502L, 20080506L), v=1:6)
setkey(DT, id, date)
DT["A"]                          # all 3 rows for A, since mult is "all" by default
DT[J("A",20080502L)]             # row for A where date also matches exactly
DT[J("A",20080505L)]             # NA since 5 May is missing (outer join by default)
DT[J("A",20080505L), nomatch=0]  # inner join instead
dts = c(20080501L, 20080502L, 20080505L, 20080506L, 20080507L, 20080508L)
DT[J("A",dts)]                   # 3 of the dates in dts match exactly
DT[J("A",dts), roll=TRUE]        # roll previous data forward, i.e. return the
                                 # prevailing observation
DT[J("A",dts), rolltolast=TRUE]  # roll all but the last observation forward
tables(mb=TRUE)                  # prints table names, number of rows, size in memory
Thanks to all those who have made suggestions and feedback so far. Further
comments and feedback on the package would be much appreciated.
Regards, Matthew
_______________________________________________
R-packages mailing list
R-packages@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-packages
------------------------------
Message: 97
Date: Tue, 31 Mar 2009 01:18:44 -0700 (PDT)
From: thoeb <t.hoebinger@gmail.com>
Subject: [R] Convert date to integer
To: r-help@r-project.org
Message-ID: <22800457.post@talk.nabble.com>
Content-Type: text/plain; charset=us-ascii
Hello, I have a dataframe containing dates, times and other parameters. The
times have the format "h:m", e.g. 13:00 or 5:30; R classes them as factors.
I want to change the times to integers, e.g. 13:00 -> 1300. I tried to use
"chron" to create a chronological object, but it only worked for the dates,
not for the times.
-----
Tamara Hoebinger
University of Vienna
--
View this message in context:
http://www.nabble.com/Convert-date-to-integer-tp22800457p22800457.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 98
Date: Tue, 31 Mar 2009 11:05:32 +0200 (CEST)
From: dbajic@cnb.csic.es
Subject: [R] summarize logical string
To: r-help@r-project.org
Message-ID:
<43373.150.244.85.246.1238490332.squirrel@webmail.cnb.csic.es>
Content-Type: text/plain;charset=iso-8859-1
Hello everyone,
I am a newbie, working on a gene clustering problem, and I have problems
in summarizing a logical string into number of repeats of each value. In
other words, how could I obtain from
0 1 1 1 0 0 0 0 1 1 0 1 0 0
this: 1 3 4 2 1 1 2
so a string that gives me the number of repeated values, no matter zeros
or ones.
I've been diving in the manuals and the mailing list but, nothing
interesting, apparently... I would be very grateful if anyone could give
me some advice.
Djordje Bajic
Logic of Genomic Systems Lab
Centro Nacional De Biotecnología - CSIC
Cantoblanco, Madrid, España
------------------------------
Message: 99
Date: Tue, 31 Mar 2009 11:13:11 +0200
From: Dimitris Rizopoulos <d.rizopoulos@erasmusmc.nl>
Subject: Re: [R] summarize logical string
To: dbajic@cnb.csic.es
Cc: r-help@r-project.org
Message-ID: <49D1DEA7.1080606@erasmusmc.nl>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
you need rle(), e.g.,
x <- c(0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0)
rle(x)$lengths
I hope it helps.
Best,
Dimitris
dbajic@cnb.csic.es wrote:
> Hello everyone,
>
> I am a newbie, working on a gene clustering problem, and I have problems
> in summarizing a logical string into number of repeats of each value. In
> other words, how could I obtain from
>
> 0 1 1 1 0 0 0 0 1 1 0 1 0 0
>
> this: 1 3 4 2 1 1 2
>
> so a string that gives me the number of repeated values, no matter zeros
> or ones.
>
> I've been diving in the manuals and the mailing list but, nothing
> interesting, apparently... I would be very grateful if anyone could give
> me some advice.
>
> Djordje Bajic
> Logic of Genomic Systems Lab
> Centro Nacional De Biotecnología - CSIC
> Cantoblanco, Madrid, España
>
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Dimitris Rizopoulos
Assistant Professor
Department of Biostatistics
Erasmus University Medical Center
Address: PO Box 2040, 3000 CA Rotterdam, the Netherlands
Tel: +31/(0)10/7043478
Fax: +31/(0)10/7043014
------------------------------
Message: 100
Date: Tue, 31 Mar 2009 12:20:31 +0300
From: "Schragi Schwartz" <schragas@post.tau.ac.il>
Subject: [R] Efficient calculation of partial correlations in R
To: <r-help@r-project.org>
Cc: 'Dror Hollander' <dror.hollander@gmail.com>
Message-ID: <001601c9b1e1$f0b2d6e0$d21884a0$@tau.ac.il>
Content-Type: text/plain
Hello,
I'm looking for an efficient function for calculating partial correlations.
I'm currently using the pcor.test() function, which is analogous to the
cor.test() function and can receive only single vectors as input. I'm
looking for something equivalent to the cor() function that can receive
matrices as input (which should make the calculations much more efficient).
Thanks,
Schragi
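Not part of the original post, but a sketch of one standard approach: the
full matrix of pairwise partial correlations (each pair controlling for all
remaining variables) can be read off the inverse of the covariance matrix,
which avoids any per-pair loop. The function name partial_cor below is my
own invention, not an existing package function.

```r
# Partial correlations from the precision matrix:
#   pcor(i,j | rest) = -P[i,j] / sqrt(P[i,i] * P[j,j]),  P = solve(cov(x))
partial_cor <- function(x) {
  P <- solve(cov(x))            # precision (inverse covariance) matrix
  D <- 1 / sqrt(diag(P))
  pc <- -P * outer(D, D)        # scale rows/columns and negate
  diag(pc) <- 1                 # a variable is perfectly correlated with itself
  dimnames(pc) <- list(colnames(x), colnames(x))
  pc
}

set.seed(1)
m <- matrix(rnorm(300), ncol = 3,
            dimnames = list(NULL, c("x", "y", "z")))
pc <- partial_cor(m)
pc   # symmetric; pc["x","y"] is cor(x, y) controlling for z
```

For three variables this agrees with the textbook recursion
(r_xy - r_xz r_yz) / sqrt((1 - r_xz^2)(1 - r_yz^2)), which makes a handy
cross-check.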
------------------------------
Message: 101
Date: Tue, 31 Mar 2009 14:47:52 +0530
From: "pooja arora" <pooja_arora@persistent.co.in>
Subject: [R] how to increase the limit for max.print in R
To: <r-help@r-project.org>
Message-ID: <003d01c9b1e1$91dce2a0$b596a7e0$@co.in>
Content-Type: text/plain
Hi All,
I am using the DNAcopy package in R for copy number analysis of a 500K chip.
The final output which I get from DNAcopy is too big to be printed in a
file, so I am getting an error: 'reached getOption("max.print") -- omitted
475569 rows'.
Can somebody please provide pointers on how to increase the limit for
max.print?
Thanks,
Pooja
------------------------------
Message: 102
Date: Tue, 31 Mar 2009 09:18:02 +0000
From: Steve Murray <smurray444@hotmail.com>
Subject: [R] Row/columns names within 'assign' command
To: <r-help@r-project.org>
Message-ID: <BAY135-W5649AD88B266930B94C94C888A0@phx.gbl>
Content-Type: text/plain; charset="Windows-1252"
Dear all,
I am attempting to add row and column names to a series of tables (120 in total)
which have 2 variable parts to their name. These tables are created as follows:
# Create table indexes
index <- expand.grid(year = sprintf("%04d", seq(1986, 1995)),
                     month = sprintf("%02d", 1:12))

# Read in and assign file names to individual objects with variable name
# components
for (i in seq(nrow(index))) {
  assign(paste("Fekete_", index$year[i], index$month[i], sep=''),
         read.table(file=paste("C:\\Data\\comp_runoff_hd_",
                               index$year[i], index$month[i], ".asc", sep=""),
                    header=FALSE, sep=""))
}

# Create index of file names
files <- print(ls()[1:120], quote=FALSE) # This is the best way I could
# manage to successfully attribute all the table names to a single list - I
# realise it's horrible coding (especially as it relies on the first 120
# objects stored in the memory actually being the objects I want to use)...

files
[1] "Fekete_198601" "Fekete_198602" "Fekete_198603" "Fekete_198604"
[5] "Fekete_198605" "Fekete_198606" "Fekete_198607" "Fekete_198608"
[9] "Fekete_198609" "Fekete_198610" "Fekete_198611" "Fekete_198612"
[13] "Fekete_198701" "Fekete_198702" "Fekete_198703" "Fekete_198704"
[17] "Fekete_198705" "Fekete_198706" "Fekete_198707" "Fekete_198708"
...[truncated - there are 120 in total]

# Provide column and row names according to lat/longs.
rnames <- sprintf("%.2f", seq(from = -89.75, to = 89.75, length = 360))
columnnames <- sprintf("%.2f", seq(from = -179.75, to = 179.75, length = 720))

for (i in 1:120) {
  Fekete_table <- get(paste("Fekete_", index$year[i], index$month[i], sep=''))
  colnames(Fekete_table) <- columnnames
  rownames(Fekete_table) <- rnames
  assign(paste("Fekete_", index$year[i], index$month[i], sep=''),
         colnames(Fekete_table))
}
As you can see, I'm in a bit of a muddle during the column/row name
assignments. In fact, this loop simply writes over the existing data in the
tables and replaces it with all the column name values (whilst colnames remains
NULL).
The problem I've been having is that I can't seem to tell R to assign
these column/row heading values to the colnames/rownames within an assign
command - it seems to result in errors even when I try breaking this assignment
process down into steps.
How do I go about assigning rows and columns in this way, and how do I create a
better way of indexing the file names?
Many thanks for any help offered,
Steve
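A sketch (mine, not Steve's code): one common way around the assign()/get()
muddle is to keep the 120 tables in a named list, so each table can be looked
up by name and dimnames can be set in one pass. The stand-in 3x4 matrices
below replace the real read.table() calls on the .asc files, which I don't
have; in the real code make_table() would build the file path from nm and
read it.

```r
index <- expand.grid(year = sprintf("%04d", 1986:1995),
                     month = sprintf("%02d", 1:12))
tbl_names <- paste0("Fekete_", index$year, index$month)

make_table <- function(nm) {
  # Real code: read.table() on the .asc file whose name is derived from nm.
  # A small synthetic stand-in keeps this sketch self-contained.
  d <- as.data.frame(matrix(rnorm(12), nrow = 3))
  rownames(d) <- sprintf("%.2f", seq(-89.75, by = 0.5, length.out = 3))
  colnames(d) <- sprintf("%.2f", seq(-179.75, by = 0.5, length.out = 4))
  d
}

# One list holds everything; names travel with the data, no get() needed.
fekete <- setNames(lapply(tbl_names, make_table), tbl_names)

length(fekete)             # 120 tables, one per year/month
fekete[["Fekete_198601"]]  # look any table up by name
```

With the list in hand, row/column names for all tables can be changed later
with a single lapply() instead of an assign() loop.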
------------------------------
Message: 103
Date: Tue, 31 Mar 2009 11:20:39 +0200 (CEST)
From: dbajic@cnb.csic.es
Subject: Re: [R] summarize logical string
To: "Dimitris Rizopoulos" <d.rizopoulos@erasmusmc.nl>
Cc: r-help@r-project.org
Message-ID:
<58815.150.244.85.246.1238491239.squirrel@webmail.cnb.csic.es>
Content-Type: text/plain;charset=iso-8859-1
Thank you very much, rapid and very helpful.
Best,
Djordje
> you need rle(), e.g.,
>
> x <- c(0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0)
> rle(x)$lengths
>
>
> I hope it helps.
>
> Best,
> Dimitris
>
>
> dbajic@cnb.csic.es wrote:
>> Hello everyone,
>>
>> I am a newbie, working on a gene clustering problem, and I have problems
>> in summarizing a logical string into number of repeats of each value. In
>> other words, how could I obtain from
>>
>> 0 1 1 1 0 0 0 0 1 1 0 1 0 0
>>
>> this: 1 3 4 2 1 1 2
>>
>> so a string that gives me the number of repeated values, no matter zeros
>> or ones.
>>
>> I've been diving in the manuals and the mailing list but, nothing
>> interesting, apparently... I would be very grateful if anyone could give
>> me some advice.
>>
>> Djordje Bajic
>> Logic of Genomic Systems Lab
>> Centro Nacional De Biotecnología - CSIC
>> Cantoblanco, Madrid, España
>>
>> ______________________________________________
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
> --
> Dimitris Rizopoulos
> Assistant Professor
> Department of Biostatistics
> Erasmus University Medical Center
>
> Address: PO Box 2040, 3000 CA Rotterdam, the Netherlands
> Tel: +31/(0)10/7043478
> Fax: +31/(0)10/7043014
>
------------------------------
Message: 104
Date: Tue, 31 Mar 2009 02:25:07 -0700 (PDT)
From: minben <minbenh@gmail.com>
Subject: [R] Does R support double-exponential smoothing?
To: r-help@r-project.org
Message-ID:
<a93fc941-c1c0-462e-8a86-bcb711753a33@w35g2000prg.googlegroups.com>
Content-Type: text/plain; charset=ISO-8859-1
I want to use double-exponential smoothing to forecast time series data,
but I couldn't find it in the documentation. Does R support this method?
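Not part of the digest, but for the record: base R's HoltWinters() with
gamma = FALSE fits Holt's linear trend method, i.e. double-exponential
smoothing (with beta = FALSE as well it reduces to simple exponential
smoothing). A minimal sketch on a synthetic trending series:

```r
set.seed(42)
x <- ts(cumsum(rnorm(100, mean = 0.5)))  # synthetic series with an upward trend

# gamma = FALSE drops the seasonal component, leaving level + trend:
# Holt's two-parameter (double) exponential smoothing.
fit <- HoltWinters(x, gamma = FALSE)

p <- predict(fit, n.ahead = 5)           # 5-step-ahead forecast
p
```

The fitted smoothing parameters are in fit$alpha (level) and fit$beta
(trend).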
------------------------------
Message: 105
Date: Tue, 31 Mar 2009 06:29:01 -0300
From: Bernardo Rangel Tura <tura@centroin.com.br>
Subject: Re: [R] how to increase the limit for max.print in R
To: r-help@r-project.org
Message-ID: <1238491741.7970.3.camel@R1-Thux>
Content-Type: text/plain
On Tue, 2009-03-31 at 14:47 +0530, pooja arora wrote:
> Hi All,
>
>
>
> I am using DNAcopy package in R for copy number analysis of 500K chip.
>
> The final output which I get from DNA copy is too big to be printed in a
> file.
>
> So I am getting an error as "reached getOption("max.print") -- omitted
> 475569 rows"
>
> Can somebody please provide me the pointers with how to increase the limit
> for max.print .
>
>
>
> Thanks,
>
>
>
> Pooja
Hi Pooja,
You must use the options() command, something like this:
options(max.print=5.5E5)
For more information type ?options
--
Bernardo Rangel Tura, M.D,MPH,Ph.D
National Institute of Cardiology
Brazil
------------------------------
Message: 106
Date: Tue, 31 Mar 2009 09:45:00 +0000 (UTC)
From: Dieter Menne <dieter.menne@menne-biomed.de>
Subject: Re: [R] Convert date to integer
To: r-help@stat.math.ethz.ch
Message-ID: <loom.20090331T094141-328@post.gmane.org>
Content-Type: text/plain; charset=us-ascii
thoeb <t.hoebinger <at> gmail.com> writes:
> Hello, I have a dataframe containing dates, times and other parameters.
> The times have the format "h:m", e.g. 13:00 or 5:30, R classes them as
> factors.
Probably you have read in the data from a file with read.table; check
stringsAsFactors in the docs to avoid the conversion from the beginning.
> I want to change the times to integers e.g. 13:00 -> 1300. I tried to use
> "chron" to create a chronological object, but it didn't work for the
> times (just for the dates).
If that's all (no NA?) a simple replace might work:
df = data.frame(tstr=c("13:00","5:30"))
df$tint = as.integer(gsub(":","",as.character(df$tstr)))
Dieter
------------------------------
Message: 107
Date: Tue, 31 Mar 2009 09:56:28 +0000 (UTC)
From: Dieter Menne <dieter.menne@menne-biomed.de>
Subject: Re: [R] To save Trellis Plots on A3 size paper (Portrait and
Landscape)
To: r-help@stat.math.ethz.ch
Message-ID: <loom.20090331T095617-117@post.gmane.org>
Content-Type: text/plain; charset=us-ascii
Debabrata Midya <Debabrata.Midya <at> commerce.nsw.gov.au> writes:
> I like to save Trellis Plots on A3 size paper (Portrait and
> Landscape).
>
Since a3 is not among the paper choices, you could give the width and height in
inches (brrrr...) explicitly.
Dieter
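To make that concrete (my sketch, not from Dieter's reply): A3 is 297 x 420
mm, about 11.69 x 16.54 inches, so the dimensions can be passed to pdf()
explicitly and the trellis object printed onto that device:

```r
library(lattice)

f <- tempfile(fileext = ".pdf")          # stand-in output path
pdf(f, width = 11.69, height = 16.54)    # A3 portrait, in inches
print(xyplot(mpg ~ wt, data = mtcars))   # lattice plots must be print()ed
dev.off()
# For A3 landscape, swap the two: pdf(f, width = 16.54, height = 11.69)
```

The same width/height arguments work for postscript() and the other
file-based devices.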
------------------------------
_______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
End of R-help Digest, Vol 73, Issue 32
**************************************