Hello,
Are there any R functions available for performing a serial correlation test
for short time series (e.g., series having between 10 and 14 observations)?
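(One minimal base-R starting point, not an answer from the list: the Ljung-Box
test via Box.test(), sketched here on a simulated series x; with only 10-14
observations such a test has very little power, so any p-value should be read
cautiously.)

set.seed(1)
x <- arima.sim(model = list(ar = 0.5), n = 12)  # a short simulated series
Box.test(x, lag = 2, type = "Ljung-Box")        # portmanteau test for serial correlation
acf(x, lag.max = 3, plot = FALSE)               # the sample autocorrelations themselves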
Many thanks!
Isabella R. Ghement, Ph.D.
Ghement Statistical Consulting Company
301-7031 Blundell Road, Richmond, B.C., Canada, V6Y 1J5
Tel: 604-767-1250
Fax: 604-270-3922
E-mail: isabella at ghement.ca
Web: www.ghement.ca
-----Original Message-----
From: r-help-bounces at r-project.org
[mailto:r-help-bounces at r-project.org] On Behalf Of
r-help-request at r-project.org
Sent: December 23, 2008 3:00 AM
To: r-help at r-project.org
Subject: R-help Digest, Vol 70, Issue 23
Send R-help mailing list submissions to
r-help at r-project.org
To subscribe or unsubscribe via the World Wide Web, visit
https://stat.ethz.ch/mailman/listinfo/r-help
or, via email, send a message with subject or body 'help' to
r-help-request at r-project.org
You can reach the person managing the list at
r-help-owner at r-project.org
When replying, please edit your Subject line so it is more specific
than "Re: Contents of R-help digest..."
Today's Topics:
1. row sum question (Keun-Hyung Choi)
2. Re: How can I get the interpolated data? (pinwheels)
3. Re: error bars (Gavin Simpson)
4. Re: sorting variable names containing digits (Gabor Grothendieck)
5. Re: row sum question (jim holtman)
6. row sum question (keunhchoi at gmail.com)
7. Hmisc Dotplot with confidence intervals and panel.points
problem (PSJ)
8. Re: Hmisc Dotplot with confidence intervals and panel.points
problem (Frank E Harrell Jr)
9. Re: Hmisc Dotplot with confidence intervals and panel.points
problem (PSJ)
10. AR(2) coefficient interpretation (Stephen Oman)
11. Re: row sum question (Gabor Grothendieck)
12. imputing the numerical columns of a dataframe, returning the
rest unchanged (Mark Heckmann)
13. Matching (vpas)
14. Re: Matching (Gabor Grothendieck)
15. post hoc comparisons on interaction means following lme
(Lawrence Hanser)
16. Re: Matching (Doran, Harold)
17. Treatment of Date ODBC objects in R (RODBC) (Ivan Alves)
18. Re: Treatment of Date ODBC objects in R (RODBC) (aavram at mac.com)
19. package sn (Adelchi Azzalini)
20. Re: queue simulation (Charles C. Berry)
21. Re: Globbing Files in R (Earl F Glynn)
22. Re: Globbing Files in R (Duncan Murdoch)
23. Re: Treatment of Date ODBC objects in R (RODBC) (Peter Dalgaard)
24. questions about read datafile into R (Lu, Zheng)
25. Re: questions about read datafile into R (Sarah Goslee)
26. How can I avoid nested 'for' loops or quicken the process?
(Brigid Mooney)
27. Re: How can I avoid nested 'for' loops or quicken the
process? (Charles C. Berry)
28. Error compiling R.2.8.1 with gcc 4.4 on Mac OS 10.5.6
(Mike Lawrence)
29. sem package fails when no of factors increase from 3 to 4
(Xiaoxu LI)
30. Re: queue simulation (Norm Matloff)
31. offlist Re: How can I avoid nested 'for' loops or quicken the
process? (David Winsemius)
32. Re: Error compiling R.2.8.1 with gcc 4.4 on Mac OS 10.5.6
(Peter Dalgaard)
33. Summary information by groups programming assitance
(Ranney, Steven)
34. Re: Summary information by groups programming assitance
(hadley wickham)
35. Re: Summary information by groups programming assitance
(Søren Højsgaard)
36. Re: Error compiling R.2.8.1 with gcc 4.4 on Mac OS 10.5.6
(Prof Brian Ripley)
37. Re: Treatment of Date ODBC objects in R (RODBC)
(Prof Brian Ripley)
38. Integrate function (glenn roberts)
39. newbie question on tcltk (eugen pircalabelu)
40. Error: cannot allocate vector of size 1.8 Gb (iamsilvermember)
41. question about read datafile (Lu, Zheng)
42. Re: newbie question on tcltk (Gabor Grothendieck)
43. Re: Integrate function (David Winsemius)
44. Error: cannot allocate vector of size 1.8 Gb (iamsilvermember)
45. Re: Summary information by groups programming assitance
(William Revelle)
46. Re: Error: cannot allocate vector of size 1.8 Gb (iamsilvermember)
47. Re: Integrate function (David Winsemius)
48. Re: Summary information by groups programming assitance
(Gabor Grothendieck)
49. Re: sem package fails when no of factors increase from 3 to 4
(John Fox)
50. Re: svyglm and sandwich estimator of variance (Thomas Lumley)
51. Re: question about read datafile (jim holtman)
52. Re: Summary information by groups programming assitance
(Ranney, Steven)
53. Problem in passing on an argument via ... how do I access it?
(Mark Heckmann)
54. Simulate dataset using Parallel Latent CTT model in R (Nidhivkk)
55. nlsrob fails with puzzling error message on input accepted by
nls (Oren Cheyette)
56. Re: Summary information by groups programming assitance
(hadley wickham)
57. Re: Summary information by groups programming assitance
(Gabor Grothendieck)
58. Re: Summary information by groups programming assitance
(Gabor Grothendieck)
59. Re: Problem in passing on an argument via ... how do I access
it? (David Winsemius)
60. sorting regression coefficients by p-value (Sharma, Dhruv)
61. Tabular output: from R to Excel or HTML (Stavros Macrakis)
62. Re: sorting regression coefficients by p-value (David Winsemius)
63. Re: Tabular output: from R to Excel or HTML (Tobias Verbeke)
64. Re: Tabular output: from R to Excel or HTML (David Winsemius)
65. Re: AR(2) coefficient interpretation (Stephen Oman)
66. newbie problem using Design.rcs (sp)
67. Re: QCA adn Fuzzy (ronggui)
68. Re: AR(2) coefficient interpretation (Prof Brian Ripley)
69. Re: Error: cannot allocate vector of size 1.8 Gb
(Prof Brian Ripley)
70. Borders for rectangles in lattice plot key
(Richard.Cotton at hsl.gov.uk)
----------------------------------------------------------------------
Message: 1
Date: Mon, 22 Dec 2008 10:36:22 +0900
From: "Keun-Hyung Choi" <khchoi at sfsu.edu>
Subject: [R] row sum question
To: <r-help at r-project.org>
Message-ID: <7CD8385847F744259756F6B6A816DD91 at JungranPC>
Content-Type: text/plain
Dear helpers,
I'm using R version 2.8.0.
Suppose that I have a small data set like below.
[,1] [,2] [,3] [,4]
a 1 1 0 0
b 0 1 1 0
c 1 1 1 0
d 0 1 1 1
First, I'd like a row sum that counts only the values appearing for the first
time in each row, working down from the top: if a column already showed a 1 in
an earlier row, its 1 in a later row should not be added to that row's sum.
The result should be like this:
row.sum
[1] 2 1 0 1
And if a and c were swapped, the row.sum is 3 0 0 1
Second, I'd like to randomly reorder the rows and recompute row.sum many
times (fewer than all 4! possible orderings in this case), as a kind of
simulation, storing the results in a matrix.
Thanks.
Keun-Hyung
------------------------------
Message: 2
Date: Sun, 21 Dec 2008 18:49:00 -0800 (PST)
From: pinwheels <cactus3 at 163.com>
Subject: Re: [R] How can I get the interpolated data?
To: r-help at r-project.org
Message-ID: <21122053.post at talk.nabble.com>
Content-Type: text/plain; charset=us-ascii
Thank you very much!
It's very helpful to me!
David Winsemius wrote:
>
> If you look at the CR.rsm object with str() you will see that it
> inherits from the lm class of objects. Therefore the predict.lm method
> will be available and if you further look at:
>
> ?predict.lm
>
> You see that all you need to do is give it a dataframe that has the
> variables that match up with you model terms so this is a minimal
> example:
>
> > predict(CR.rsm, newdata=data.frame(x1=84,x2=171))
> 1
> 80.58806
>
> To get the entire range that you desire (and which the plotting
> function for GSM already produced) you need:
>
> ?expand.grid
>
> z <- predict(CR.rsm, expand.grid(x1=seq(86,88,len=21),
>                                  x2=seq(175,179,len=21)))
>
> # or
> data.frame(expand.grid(x1=seq(86,88,len=21), x2=seq(175,179,len=21)),
>            z = predict(CR.rsm, expand.grid(x1=seq(86,88,len=21),
>                                            x2=seq(175,179,len=21))))
>
> Since you are narrowing the range for your prediction, it's possible
> that you already realize that the original example plot was not just
> interpolating but also extrapolating considerably beyond the available
> data in places. That dataset only had 14 observations, and the data were
> rather sketchy or non-existent in the extreme regions of the x1 cross x2
> space.
>
> I greatly value the Hmisc/Design packages' ability to
> restrict estimation and plotting to only those regions where the data
> will support estimates. I tried applying the perimeter function in
> Harrell's packages to your example, but the plot.Design function
> recognized that I was giving it a construct from a different package
> and refused to play.
>
> At any rate, HTH.
> --
> David Winsemius
> Heritage Labs
>
> On Dec 21, 2008, at 7:33 AM, pinwheels wrote:
>
>>
>> Hello,everybody!
>>
>> I am a beginner of R.
>>
>> And I want to ask a question. If anybody would help me, thank you
>> very much
>> ahead.
>> I want to plot something like a response surface, and I find the "rsm"
>> package.
>>
>> Some commands are like this:
>>
>> #code head
>> library(rsm)
>> CR = coded.data(ChemReact, x1 ~ Time, x2 ~ Temp)
>> CR.rsm = rsm(Yield ~ SO(x1,x2), data = CR)
>> summary(CR.rsm)
>> contour(CR.rsm,x1~x2)
>> #code end
>>
>> What if I want the data interpolated, what should I do?
>> For example:
>> There is a data frame like:
>>
>> xa1<-seq(86,88,len=21)
>> xa2<-seq(175,179,len=41)
>> z<- ... # referring site(xa1,xa2) from the contour plotted above
>>
>> or
>>
>> xa1 xa2 z
>> 86 175 ???
>> 86.1 175 ???
>> ... ... ...
>> 86.7 177.3 ???
>> ... .... ...
>> 88 179 ???
>>
>> or something alike.
>>
>> How could I get the z value(???) from the CR.rsm or the plotted
>> contour?
>> --
>> View this message in context:
>>
>> http://www.nabble.com/How-can-I-get-the-interpolated-data--tp21114660p21114660.html
>> Sent from the R help mailing list archive at Nabble.com.
>>
>> ______________________________________________
>> R-help at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
>
--
View this message in context:
http://www.nabble.com/How-can-I-get-the-interpolated-data--tp21114660p21122053.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 3
Date: Mon, 22 Dec 2008 12:19:24 +0000
From: Gavin Simpson <gavin.simpson at ucl.ac.uk>
Subject: Re: [R] error bars
To: "Kelly-Gerreyn B.A." <b.kelly-gerreyn at noc.soton.ac.uk>
Cc: "'r-help at r-project.org'" <r-help at
r-project.org>
Message-ID: <1229948365.2909.19.camel at desktop.localhost>
Content-Type: text/plain; charset="us-ascii"
On Fri, 2008-12-19 at 13:06 +0000, Kelly-Gerreyn B.A. wrote:
> Dear Help
>
> I'm new to R (from matlab)...using windows XP.
>
> I've tried to work out, to no avail, 4 things:
>
> 1) rotating the numbers on axes...something to do with par(str) ??
?par and its argument 'las' for basic control. There is a FAQ entry that
explains how to get finer control over the rotation:
http://cran.r-project.org/doc/FAQ/R-FAQ.html#How-can-I-create-rotated-axis-labels_003f
>
> 2) how to close a window having opened one e.g. windows(7,7)
[ click on the close button in the window title bar? ;-) ]
More seriously, if you are looking for an R function to do it, dev.off()
closes the currently active device.
dev.list() shows the devices open on your system, dev.cur() tells you
which device is currently active, and dev.set() switches between open
devices; from your question I'm guessing you have more than one open at
a time.
>
> 3) how to manipulate the key (e.g. dots, lines etc) on the legend.
> Using pch just gives me the same key for all functions on a plot.
Provide a vector of plotting characters, e.g. 'pch = 1:6'. 'pch' only
controls the plotting character; 'lwd' controls line widths and 'lty'
line types. All accept vectors of 'types' so you can specify exactly
what you need.
>
> i.e. legend("right", legend=c("Model","Obs"), pch=16) ...in this
> case both Model and Obs are filled circles!
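A minimal sketch of the vectorised call described above (hypothetical data and
symbol choices, with points for one series and a dashed line for the other):

plot(1:10, rnorm(10), pch = 16)                 # "Obs" drawn as filled circles
lines(1:10, rnorm(10), lty = 2)                 # "Model" drawn as a dashed line
legend("topright", legend = c("Obs", "Model"),
       pch = c(16, NA), lty = c(NA, 2))         # one symbol/line type per legend entry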
>
> 4) how to add error bars (SE or STD) to an xy plot
?arrows
arrows() is the easiest way to cook this up yourself from standard
graphics calls. Just draw an arrow from value+error to value-error on
the axis that has the error. For example, using dummy data:
dat <- data.frame(A = rnorm(10, mean = 3, sd = 2),
B = rnorm(10, 10, sd = 4),
C = rnorm(10, 35, sd = 6))
## something to plot
mns <- colMeans(dat)
## some uncertainty
err <- 2 * sd(dat)
## compute space for means +/- err
ylims <- range(c(mns + err, mns - err))
## plot
plot(mns, ylim = ylims)
## add error bars
arrows(1:3, mns + err, 1:3, mns - err, code = 3,
angle = 90, length = 0.1)
There are functions in several packages to simplify this process for
you. Do:
RSiteSearch("error bars", restrict="functions")
in your R session for lots of hits.
These are all fairly basic R fundamentals. Perhaps, if you haven't
already done so, take a look at the manual 'An Introduction to R' that
comes with R or one of the several user contributed documents:
http://cran.r-project.org/other-docs.html
HTH
G
>
> The last is the most important for me right now. All help much
> appreciated.
>
> Many thanks
>
> Boris
>
>
> Dr. Boris Kelly-Gerreyn
>
> Voiced with Dragon NaturallySpeaking v9 <http://www.nuance.com/naturallyspeaking/preferred/>
> Ocean Biogeochemistry & Ecosystems <http://www.noc.soton.ac.uk/obe/>
> National Oceanography Centre, Southampton <http://www.noc.soton.ac.uk/>
> Waterfront Campus, European Way
> Southampton, SO14 3ZH, UK
> Tel : +44 (0) 2380 596334
> Sec : +44 (0) 2380 596015
> Fax : +44 (0) 2380 596247
> e-m: bag'@'noc.soton.ac.uk
>
> Friday Seminar Series <http://www.noc.soton.ac.uk/nocs/friday_seminars.php>
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
--
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
Dr. Gavin Simpson [t] +44 (0)20 7679 0522
ECRC, UCL Geography, [f] +44 (0)20 7679 0565
Pearson Building, [e] gavin.simpsonATNOSPAMucl.ac.uk
Gower Street, London [w] http://www.ucl.ac.uk/~ucfagls/
UK. WC1E 6BT. [w] http://www.freshwaters.org.uk
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
------------------------------
Message: 4
Date: Mon, 22 Dec 2008 07:40:17 -0500
From: "Gabor Grothendieck" <ggrothendieck at gmail.com>
Subject: Re: [R] sorting variable names containing digits
To: "John Fox" <jfox at mcmaster.ca>
Cc: r-help at r-project.org
Message-ID:
<971536df0812220440w2090b69dq2f4683267419ab2e at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Note that mysort2 is slightly more general as it handles the case
that the strings begin with numerics:
> u <- c("51a2", "2a4")
> mysort(u)
[1] "51a2" "2a4"
> mysort2(u)
[1] "2a4"  "51a2"
On Mon, Dec 22, 2008 at 12:32 AM, John Fox <jfox at mcmaster.ca> wrote:
> Dear Gabor,
>
> Thank you (again) for this second suggestion, which does exactly what I
> want. At the risk of appearing ungrateful, and although the judgment is
> admittedly subjective, I don't find it simpler than mysort().
>
> For curiosity, I tried some timings of the two functions for the sample
> problems that I supplied:
>
>> system.time(for (i in 1:100) mysort(s))
> user system elapsed
> 1.498 0.006 1.503
>
>> system.time(for (i in 1:100) mysort2(s))
> user system elapsed
> 6.026 0.028 6.059
>
>> system.time(for (i in 1:100) mysort(t))
> user system elapsed
> 0.858 0.003 0.874
>
>> system.time(for (i in 1:100) mysort2(t))
> user system elapsed
> 2.736 0.014 2.757
>
> This is on a 2.4 GHz Core 2 Duo MacBook. I don't know of course
> whether this generalizes to other problems. I suspect that the
> recursive solution will look worse as the number of "components"
of the
> names increases, but of course names are unlikely to have a large
> number of components.
>
> Best,
> John
>
> On Sun, 21 Dec 2008 23:28:51 -0500
> "Gabor Grothendieck" <ggrothendieck at gmail.com> wrote:
>> Another possibility is to use strapply in gsubfn giving a solution
>> that is non-recursive and shorter:
>>
>> library(gsubfn)
>>
>> mysort2 <- function(s) {
>>   L <- strapply(s, "([0-9]+)|([^0-9]+)",
>>     ~ if (nchar(x)) sprintf("%9d", as.numeric(x)) else y)
>>   L2 <- t(do.call(cbind, lapply(L, ts)))
>>   L3 <- replace(L2, is.na(L2), "")
>>   ord <- do.call(order, as.data.frame(L3, stringsAsFactors = FALSE))
>>   s[ord]
>> }
>>
>>
>> First strapply breaks up each string into a character vector of the
>> numeric
>> and non-numeric components. We pad each numeric component on the
>> left with spaces using sprintf so they are all 9 wide. The next line
>> turns that
>> into a matrix L2 and then we replace the NAs giving L3. Finally we
>> order it
>> and apply the ordering, ord, to get the sorted version.
>>
>> The gsubfn home page is at:
>> http://gsubfn.googlecode.com
>>
>> Here is some sample output:
>>
>> > mysort2(s)
>> [1] "var2" "var10a2" "x1a" "x1b" "x02" "x02a" "x02b" "y1a1" "y2" "y10" "y10a1" "y10a2" "y10a10"
>> > mysort(s)
>> [1] "var2" "var10a2" "x1a" "x1b" "x02" "x02a" "x02b" "y1a1" "y2" "y10" "y10a1" "y10a2" "y10a10"
>>
>> > mysort2(t)
>> [1] "q2.1.1" "q10.1.1" "q10.2.1" "q10.10.2"
>> > mysort(t)
>> [1] "q2.1.1" "q10.1.1" "q10.2.1" "q10.10.2"
>>
>>
>> On Sun, Dec 21, 2008 at 9:57 PM, John Fox <jfox at mcmaster.ca>
wrote:
>> > Dear Gabor,
>> >
>> > Thanks for this -- I was unaware of mixedsort(). As you point out,
>> > however, mixedsort() doesn't cover all of the cases in which
I'm
>> > interested and which are handled by mysort().
>> >
>> > Regards,
>> > John
>> >
>> > On Sun, 21 Dec 2008 20:51:17 -0500
>> > "Gabor Grothendieck" <ggrothendieck at gmail.com>
wrote:
>> >> mixedsort in gtools will give the same result as mysort(s) but
>> >> differs in the case of t.
>> >>
>> >> On Sun, Dec 21, 2008 at 8:33 PM, John Fox <jfox at
mcmaster.ca>
>> wrote:
>> >> > Dear r-helpers,
>> >> >
>> >> > I'm looking for a way of sorting variable names in a
"natural"
>> >> order, when
>> >> > the names are composed of digits and other characters. I
know
>> that
>> >> this is a
>> >> > vague idea, and that sorting character strings is a
complex
>> topic,
>> >> but
>> >> > perhaps a couple of examples will clarify what I mean:
>> >> >
>> >> >> s <- c("x1b", "x1a", "x02b", "x02a", "x02", "y1a1", "y10a2",
>> >> > + "y10a10", "y10a1", "y2", "var10a2", "var2", "y10")
>> >> >
>> >> >> sort(s)
>> >> >  [1] "var10a2" "var2"    "x02"     "x02a"    "x02b"    "x1a"
>> >> >  [7] "x1b"     "y10"     "y10a1"   "y10a10"  "y10a2"   "y1a1"
>> >> > [13] "y2"
>> >> >
>> >> >> mysort(s)
>> >> >  [1] "var2"    "var10a2" "x1a"     "x1b"     "x02"     "x02a"
>> >> >  [7] "x02b"    "y1a1"    "y2"      "y10"     "y10a1"   "y10a2"
>> >> > [13] "y10a10"
>> >> >
>> >> >> t <- c("q10.1.1", "q10.2.1", "q2.1.1", "q10.10.2")
>> >> >
>> >> >> sort(t)
>> >> > [1] "q10.1.1"  "q10.10.2" "q10.2.1"  "q2.1.1"
>> >> >
>> >> >> mysort(t)
>> >> > [1] "q2.1.1"   "q10.1.1"  "q10.2.1"  "q10.10.2"
>> >> >
>> >> > Here, sort() is the standard R function and mysort() is a
>> >> replacement, which
>> >> > sorts the names into the order that seems natural to me,
at
>> least
>> >> in the
>> >> > cases that I've tried:
>> >> >
>> >> > mysort <- function(x){
>> >> >   sort.helper <- function(x){
>> >> >     prefix <- strsplit(x, "[0-9]")
>> >> >     prefix <- sapply(prefix, "[", 1)
>> >> >     prefix[is.na(prefix)] <- ""
>> >> >     suffix <- strsplit(x, "[^0-9]")
>> >> >     suffix <- as.numeric(sapply(suffix, "[", 2))
>> >> >     suffix[is.na(suffix)] <- -Inf
>> >> >     remainder <- sub("[^0-9]+", "", x)
>> >> >     remainder <- sub("[0-9]+", "", remainder)
>> >> >     if (all(remainder == "")) list(prefix, suffix)
>> >> >     else c(list(prefix, suffix), Recall(remainder))
>> >> >   }
>> >> >   ord <- do.call("order", sort.helper(x))
>> >> >   x[ord]
>> >> > }
>> >> >
>> >> > I have a couple of applications in mind, one of which is
>> >> recognizing
>> >> > repeated-measures variables in "wide"
longitudinal datasets,
>> which
>> >> often are
>> >> > named in the form x1, x2, ... , xn.
>> >> >
>> >> > mysort(), which works by recursively slicing off pairs of
>> non-digit
>> >> and
>> >> > digit strings, seems more complicated than it should have
to be,
>> >> and I
>> >> > wonder whether anyone has a more elegant solution. I
don't think
>> >> that
>> >> > efficiency is a serious issue for the applications
I'm
>> considering,
>> >> but of
>> >> > course a more efficient solution would be of interest.
>> >> >
>> >> > Thanks,
>> >> > John
>> >> >
>> >> > ------------------------------
>> >> > John Fox, Professor
>> >> > Department of Sociology
>> >> > McMaster University
>> >> > Hamilton, Ontario, Canada
>> >> > web: socserv.mcmaster.ca/jfox
>> >> >
>> >> > ______________________________________________
>> >> > R-help at r-project.org mailing list
>> >> > https://stat.ethz.ch/mailman/listinfo/r-help
>> >> > PLEASE do read the posting guide
>> >> http://www.R-project.org/posting-guide.html
>> >> > and provide commented, minimal, self-contained,
reproducible
>> code.
>> >> >
>> >
>> > --------------------------------
>> > John Fox, Professor
>> > Department of Sociology
>> > McMaster University
>> > Hamilton, Ontario, Canada
>> > http://socserv.mcmaster.ca/jfox/
>> >
>
> --------------------------------
> John Fox, Professor
> Department of Sociology
> McMaster University
> Hamilton, Ontario, Canada
> http://socserv.mcmaster.ca/jfox/
>
------------------------------
Message: 5
Date: Mon, 22 Dec 2008 07:52:12 -0500
From: "jim holtman" <jholtman at gmail.com>
Subject: Re: [R] row sum question
To: khchoi at sfsu.edu
Cc: r-help at r-project.org
Message-ID:
<644e1f320812220452i6bc902f7j2ceb5a9fe8284bb8 at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Will this do it for you:
> nrows <- 10
> ncols <- 10
> mat <- matrix(sample(0:1, nrows * ncols, TRUE), nrow=nrows, ncol=ncols)
> mat
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
[1,] 1 0 1 1 1 0 0 0 0 0
[2,] 0 0 0 0 0 0 0 1 1 0
[3,] 0 1 0 0 1 1 1 1 1 1
[4,] 0 1 1 1 0 0 0 0 1 1
[5,] 0 0 1 0 1 1 1 0 0 0
[6,] 1 0 1 0 0 0 1 0 0 0
[7,] 0 0 1 0 0 1 0 0 1 1
[8,] 1 0 1 0 0 0 1 1 1 0
[9,] 0 1 0 1 0 1 1 0 0 0
[10,] 0 1 1 1 1 0 0 0 1 0
> mask <- rep(1, ncols)
> for (i in 1:nrows){
+ print(sum(mask & mat[i,]))
+ mask <- mask & !mat[i,]
+ }
[1] 4
[1] 2
[1] 4
[1] 0
[1] 0
[1] 0
[1] 0
[1] 0
[1] 0
[1] 0
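For the second part of the question (recomputing the sums over random row
orderings), a minimal sketch along the same lines, assuming mat as defined
above and, say, 100 random orderings:

row.sum <- function(m) {
  mask <- rep(TRUE, ncol(m))
  out <- numeric(nrow(m))
  for (i in seq_len(nrow(m))) {
    out[i] <- sum(mask & m[i, ])   # count 1s in columns not yet seen above
    mask <- mask & !m[i, ]         # mark those columns as seen
  }
  out
}
res <- t(replicate(100, row.sum(mat[sample(nrow(mat)), ])))  # one row of sums per random ordering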
On Sun, Dec 21, 2008 at 8:36 PM, Keun-Hyung Choi <khchoi at sfsu.edu> wrote:
> Dear helpers,
>
>
>
> I'm using R version 2.8.0.
>
> Suppose that I have a small data set like below.
>
> [,1] [,2] [,3] [,4]
>
> a 1 1 0 0
>
> b 0 1 1 0
>
> c 1 1 1 0
>
> d 0 1 1 1
>
>
>
> First, I'd like to find row sum of values uniquely present in each row, but
> only sequentially from the top row, meaning that if the value is shown in
> the above row(s) already, the same value in the following row shouldn't be
> added into the sum.
>
> The result should be like this:
>
>
>
> row.sum
>
> [1] 2 1 0 1
>
>
>
> And if a and c were swapped, the row.sum is 3 0 0 1
>
>
>
> Second, I'd like to randomly reorder the rows, and repeat calculating
> row.sum again, for many times less than all combinations possible (4! In
> this case), kind of simulation, and store the results into a matrix.
>
> Thanks.
>
> Keun-Hyung
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Jim Holtman
Cincinnati, OH
+1 513 646 9390
What is the problem that you are trying to solve?
------------------------------
Message: 6
Date: Mon, 22 Dec 2008 22:20:11 +0900
From: keunhchoi at gmail.com
Subject: [R] row sum question
To: r-help at r-project.org
Message-ID:
<2e4258c50812220520n56d7c66au5c7929f50c0f3321 at mail.gmail.com>
Content-Type: text/plain
Dear helpers,
I'm using R version 2.8.0.
Suppose that I have a small data set like below.
[,1] [,2] [,3] [,4]
a 1 1 0 0
b 0 1 1 0
c 1 1 1 0
d 0 1 1 1
First, I'd like to find the sum of the values uniquely present in each row,
working sequentially from the top row, meaning that a value already counted
in an earlier row shouldn't enter the sum again.
The result should be like this:
row.sum
[1] 2 1 0 1
And if a and c were swapped, the row.sum is
row.sum
[1] 3 0 0 1
Second, I'd like to randomly reorder the rows, and repeat calculating
row.sum again, for many times less than all combinations possible (4! In
this case), kind of simulation, and store the results into a matrix.
Thanks.
Keun-Hyung
------------------------------
Message: 7
Date: Mon, 22 Dec 2008 15:02:34 +0100
From: "PSJ" <psjan at lycos.de>
Subject: [R] Hmisc Dotplot with confidence intervals and panel.points
problem
To: r-help at r-project.org
Message-ID: <170108016590384 at lycos-europe.com>
Content-Type: text/plain; charset="windows-1252"
Hello useRs,
I have a question regarding the function Dotplot from the Hmisc package: I
want two things:
1) confidence intervals around the dots
2) some additional "annotation" points plotted in the graphic
I can easily achieve (1) by constructing an appropriate object with Cbind.
But for (2) when I use the panel=function argument the confidence intervals
of the original plot are gone.
This is my (simplified) code:
Dotplot(
resultrow ~ Cbind(ESTIM, L95, U95),
data=results,
abline=list(v=0),
scales=list(
y=list(
labels=paste(as.character(foo$bar)),
at=results$nranalysis,
cex=0.5
)
),
xlab="some label",
ylab="",
xlim=c(-10,10),
main=paste("a title"),
subscripts = T,
panel=function(...)
{ #panel.xYplot(...)
panel.points(x=rep(9, length(toobig)), y=results$noanalysis[toobig],
pch=62)
panel.dotplot(...)
}
)
I tried a number of modifications of this code but without success.
I'm sure I'm doing something wrong here, but I cannot figure it out...
any
ideas?
Thanks in advance!
Peter
Background information:
======================
My problem is that some of the values to be plotted have a big range, so
that one cannot see what is happening near zero (which is of more interest
for me), because the x-axis is properly scaled by R. So I use xlim to
restrict drawing to -10 to 10. But then some points and confidence intervals
are no longer visible. So I decided to put a character ">" at x-value 9 for
the ones lying above 10, and the code above leads to the correct plotting of
the original data and markers in the right places, but the confidence
intervals are gone. I guess that the confidence intervals are themselves
plotted by a panel function which in some way gets overwritten by my custom
one.
System Information:
==================
> sessionInfo()
R version 2.7.2 (2008-08-25)
i386-pc-mingw32
locale:
LC_COLLATE=German_Germany.1252;LC_CTYPE=German_Germany.1252;LC_MONETARY=Germ
an_Germany.1252;LC_NUMERIC=C;LC_TIME=German_Germany.1252
attached base packages:
[1] stats graphics grDevices datasets tcltk utils methods
[8] base
other attached packages:
[1] svSocket_0.9-5 TinnR_1.0.2 R2HTML_1.59 Hmisc_3.4-4
loaded via a namespace (and not attached):
[1] cluster_1.11.11 grid_2.7.2 lattice_0.17-17 svMisc_0.9-5
[5] tools_2.7.2
------------------------------
Message: 8
Date: Mon, 22 Dec 2008 08:18:34 -0600
From: Frank E Harrell Jr <f.harrell at vanderbilt.edu>
Subject: Re: [R] Hmisc Dotplot with confidence intervals and
panel.points problem
To: PSJ <psjan at lycos.de>
Cc: r-help at r-project.org
Message-ID: <494FA1BA.3060005 at vanderbilt.edu>
Content-Type: text/plain; charset=windows-1252; format=flowed
PSJ wrote:
> Hello useRs,
> I have a question regarding the function Dotplot from the Hmisc package: I
> want two things:
> 1) confidence intervals around the dots
> 2) some additional "annotation" points plotted in the graphic
>
> I can easily achieve (1) by constructing an appropriate object with Cbind.
> But for (2) when I use the panel=function argument the confidence intervals
> of the original plot are gone.
>
> This is my (simplified) code:
>
> Dotplot(
> resultrow ~ Cbind(ESTIM, L95, U95),
> data=results,
> abline=list(v=0),
> scales=list(
> y=list(
> labels=paste(as.character(foo$bar),
> at=results$nranalysis,
> cex=0.5
> )
> ),
> xlab="some label",
> ylab="",
> xlim=c(-10,10),
> main=paste("a title"),
> subscripts = T,
> panel=function(...)
> { #panel.xYplot(...)
> panel.points(x=rep(9, length(toobig)), y=results$noanalysis[toobig], pch=62)
> panel.dotplot(...)
> }
> )
> I tried a number modifications of this code but without success.
> I'm sure I'm doing something wrong here, but I cannot figure it
out... any
ideas?>
> Thanks in advance!
> Peter
Peter,
The panel function must run panel.Dotplot(...) and not panel.dotplot.
Inside panel you can run whatever else you need.
Frank
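In Peter's simplified code that change amounts to something like the following
sketch (same hypothetical objects results and toobig as in his post):

library(Hmisc)    # provides Dotplot() and panel.Dotplot()
mypanel <- function(...) {
  panel.Dotplot(...)                         # dots plus the Cbind() confidence intervals
  panel.points(x = rep(9, length(toobig)),   # ">" markers for values beyond xlim
               y = results$noanalysis[toobig], pch = 62)
}
## then pass panel = mypanel in the Dotplot() call shown above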
>
> Background information:
> ======================
> My problem is that some of the values to be plotted have a big range, so
> that one cannot see what is happening near zero (which is of more interest
> for me), because the x-axis is properly scaled by R. So I use xlim to
> restrict drawing to -10 to 10. But then some points and confidence intervals
> are no longer visible. So I decided to put a character ">" at x-value 9 for
> the ones lying above 10, and the code above leads to the correct plotting of
> the original data and markers in the right places, but the confidence
> intervals are gone. I guess that the confidence intervals are themselves
> plotted by a panel function which in some way gets overwritten by my custom
> one.
>
> System Information:
> ==================
>> sessionInfo()
> R version 2.7.2 (2008-08-25)
> i386-pc-mingw32
>
> locale:
>
LC_COLLATE=German_Germany.1252;LC_CTYPE=German_Germany.1252;LC_MONETARY=Germ
an_Germany.1252;LC_NUMERIC=C;LC_TIME=German_Germany.1252>
> attached base packages:
> [1] stats graphics grDevices datasets tcltk utils methods
> [8] base
>
> other attached packages:
> [1] svSocket_0.9-5 TinnR_1.0.2 R2HTML_1.59 Hmisc_3.4-4
>
> loaded via a namespace (and not attached):
> [1] cluster_1.11.11 grid_2.7.2 lattice_0.17-17 svMisc_0.9-5
> [5] tools_2.7.2
>
>
> ------------------------------------------------------------------------
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
--
Frank E Harrell Jr Professor and Chair School of Medicine
Department of Biostatistics Vanderbilt University
------------------------------
Message: 9
Date: Mon, 22 Dec 2008 16:03:10 +0100
From: "PSJ" <psjan at lycos.de>
Subject: Re: [R] Hmisc Dotplot with confidence intervals and
panel.points problem
To: r-help at r-project.org
Message-ID: <94268216243811 at lycos-europe.com>
Content-Type: text/plain; charset="windows-1252"
Dear Frank,
thanks for your fast answer...
> The panel function must run panel.Dotplot(...) and not panel.dotplot.
> Inside panel you can run whatever else you need.
>
> Frank
thanks a lot, this made it work!
Cheers,
Peter
------------------------------
Message: 10
Date: Mon, 22 Dec 2008 07:06:38 -0800 (PST)
From: Stephen Oman <stephen.oman at gmail.com>
Subject: [R] AR(2) coefficient interpretation
To: r-help at r-project.org
Message-ID: <21129322.post at talk.nabble.com>
Content-Type: text/plain; charset=us-ascii
I am a beginner with R and I need help interpreting the AR results R
produces. I used 12 observations for my AR(2) model, and it turned out the
intercept was 5.23 while the first and second AR coefficients were 0.40 and
0.46. Because my raw data are in millions, the intercept seems far too small
and doesn't make sense. Did I make a mistake in my code? My code is as
follows:
r<-read.table("data.txt", dec=",", header=T)
attach(r)
fit<-arima(a, c(2,0,0))
Thank you in advance for your help.
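Not from the thread, but two things are worth checking with a sketch like
this: whether the series was read as numeric at all (dec="," only helps if
the file really uses decimal commas), and the fact that, for a model with
d = 0, arima() labels the estimated series mean as "intercept":

r <- read.table("data.txt", dec = ",", header = TRUE)
str(r)                                # is column a numeric, or was it read as a factor?
fit <- arima(r$a, order = c(2, 0, 0))
fit                                   # the reported "intercept" here is the series mean
## implied constant of the AR(2) recursion, if that is what was expected:
coef(fit)["intercept"] * (1 - coef(fit)["ar1"] - coef(fit)["ar2"])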
--
View this message in context:
http://www.nabble.com/AR%282%29-coefficient-interpretation-tp21129322p21129322.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 11
Date: Mon, 22 Dec 2008 10:48:19 -0500
From: "Gabor Grothendieck" <ggrothendieck at gmail.com>
Subject: Re: [R] row sum question
To: khchoi at sfsu.edu
Cc: r-help at r-project.org
Message-ID:
<971536df0812220748o194e5b18w5674d11d4c0b4ea9 at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Assuming DF is a data frame like this:
DF <- data.frame(V1 = c(1, 0, 1, 0), V2 = c(1, 1, 1, 1),
V3 = c(0, 1, 1, 1), V4 = c(0, 0, 0, 1))
# try this:
head(rowSums((rbind(0, cummax(DF)) < rbind(cummax(DF), 0))), -1)
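The same idea spelled out step by step (the intermediate names are only for
illustration):

cm   <- cummax(DF)                          # column-wise running maximum down the rows
prev <- rbind(0, cm)[seq_len(nrow(DF)), ]   # running maximum up to the previous row (0 before row 1)
new1 <- cm > prev                           # TRUE where a column shows its first 1 in this row
rowSums(new1)                               # first-time 1s per row: 2 1 0 1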
On Sun, Dec 21, 2008 at 8:36 PM, Keun-Hyung Choi <khchoi at sfsu.edu> wrote:
> Dear helpers,
>
>
>
> I'm using R version 2.8.0.
>
> Suppose that I have a small data set like below.
>
> [,1] [,2] [,3] [,4]
>
> a 1 1 0 0
>
> b 0 1 1 0
>
> c 1 1 1 0
>
> d 0 1 1 1
>
>
>
> First, I'd like to find row sum of values uniquely present in each row, but
> only sequentially from the top row, meaning that if the value is shown in
> the above row(s) already, the same value in the following row shouldn't be
> added into the sum.
>
> The result should be like this:
>
>
>
> row.sum
>
> [1] 2 1 0 1
>
>
>
> And if a and c were swapped, the row.sum is 3 0 0 1
>
>
>
> Second, I'd like to randomly reorder the rows, and repeat calculating
> row.sum again, for many times less than all combinations possible (4! In
> this case), kind of simulation, and store the results into a matrix.
>
> Thanks.
>
> Keun-Hyung
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
------------------------------
Message: 12
Date: Mon, 22 Dec 2008 16:38:54 +0100
From: "Mark Heckmann" <mark.heckmann at gmx.de>
Subject: [R] imputing the numerical columns of a dataframe, returning
the rest unchanged
To: <r-help at R-project.org>
Message-ID: <4069F67C506548AA93EFD9890CE6ACA7 at TCPC000>
Content-Type: text/plain; charset="us-ascii"
Hi R-experts,
how can I apply a function to each numeric column of a data frame and return
the whole data frame with changes in numeric columns only?
In my case I want to do a median imputation of the numeric columns and
retain the other columns. My dataframe (DF) contains factors, characters and
numerics.
I tried the following but that does not work:
foo <- function(x){
if(is.numeric(x)==TRUE) return(impute(x))
else(return(x))
}
sapply(DF, foo)
     day version ID   V1  V2  V3
[1,] "4" "A"     "1a" "1" "5" "5"
[2,] "4" "A"     "2a" "2" "3" "5"
[3,] "4" "B"     "3a" "3" "5" "5"
All the variables are coerced to characters now ("day" and "version" were
factors, "id" a character). I only want imputations on the numerics, but the
rest to be returned unchanged.
Is there a function available for this? If not, how can I do it?
TIA and merry x-mas,
Mark
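Not from the thread, but a minimal sketch of the usual fix: sapply()
simplifies its result to a matrix (coercing everything to character), while
lapply() plus reassignment keeps the data frame and each column's class
(impute() is assumed to be Hmisc's median imputation, as in the post):

library(Hmisc)                                  # for impute()
foo <- function(x) if (is.numeric(x)) impute(x) else x
DF[] <- lapply(DF, foo)                         # replace columns in place
str(DF)                                         # numerics imputed; factors/characters untouched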
------------------------------
Message: 13
Date: Mon, 22 Dec 2008 08:03:09 -0800 (PST)
From: vpas <vic.pascow at gmail.com>
Subject: [R] Matching
To: r-help at r-project.org
Message-ID: <21130173.post at talk.nabble.com>
Content-Type: text/plain; charset=us-ascii
I understand this is an easy question, and I have been playing around with
grep and the match function, but was hoping for a little insight:
I have one .csv with the following data:
names values
A 1
B 2
C 3
D 4
The second .csv is:
names
A
C
I am hoping to match all of the rows that appear in the second .csv, making
a new file that would look like this:
names values
A 1
C 3
Here is what I have so far:
my.data <- read.csv("rows.csv",sep=",")
my.selection <- read.csv("select.csv",sep=",")
matched <- match(my.data[,1], my.selection[,1])
my.data <- my.data[matched]
write.table(as.matrix(my.data), "select_RESULTS.txt")
Unfortunately, this is throwing errors in row numbers...
--
View this message in context:
http://www.nabble.com/Matching-tp21130173p21130173.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 14
Date: Mon, 22 Dec 2008 11:25:01 -0500
From: "Gabor Grothendieck" <ggrothendieck at gmail.com>
Subject: Re: [R] Matching
To: vpas <vic.pascow at gmail.com>
Cc: r-help at r-project.org
Message-ID:
<971536df0812220825r4c2d911bh53cfdbb2352fa0d5 at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Try this:
> Lines1 <- "names,values
+ A,1
+ B,2
+ C,3
+ D,4"
>
> Lines2 <- "names
+ A
+ C"
>
> DF1 <- read.csv(textConnection(Lines1))
> DF2 <- read.csv(textConnection(Lines2))
> merge(DF1, DF2)
  names values
1     A      1
2     C      3
On Mon, Dec 22, 2008 at 11:03 AM, vpas <vic.pascow at gmail.com>
wrote:>
> I understand this is an easy question, and have been playing around with
grep> and the match function, but was hoping for a little incite:
>
> I have one .csv with the following data:
>
> names values
> A 1
> B 2
> C 3
> D 4
>
>
> The second .csv is:
>
> names
> A
> C
>
>
> I am hoping to match all of the rows that appear in the second .csv,
making> a new file that would look like this:
>
> names values
> A 1
> C 3
>
>
> Here is what I have so far:
>
> my.data <- read.csv("rows.csv",sep=",")
> my.selection <- read.csv("select.csv",sep=",")
> matched <- match(my.data[,1], my.selection[,1])
> my.data <- my.data[matched]
> write.table(as.matrix(my.data), "select_RESULTS.txt")
>
> Unfortunately, this is throwing errors in row numbers...
>
> --
> View this message in context:
http://www.nabble.com/Matching-tp21130173p21130173.html> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
------------------------------
Message: 15
Date: Mon, 22 Dec 2008 08:31:38 -0800
From: "Lawrence Hanser" <lhanser at gmail.com>
Subject: [R] post hoc comparisons on interaction means following lme
To: "r-help at r-project.org" <r-help at r-project.org>
Message-ID:
<f95c22480812220831n22ba7d98k5404ecdb2fcec0e6 at mail.gmail.com>
Content-Type: text/plain
Dear Colleagues,
I have scoured the help files and been unable to find an answer to my
question. Please forgive me if I have missed something obvious.
I have run the following two models, where "category" has 3 levels and
"comp" has 8 levels:
mod1 <- lmer(x~category+comp+(1|id),data=impchiefsrm)
mod2 <- lmer(x~category+comp+category*comp+(1|id),data=impchiefsrm)
followed by:
anova(mod1,mod2)
The anova shows that the interaction term specified in the second model is
significant when added to the main effects model.
Now I'd like to run post hoc comparisons using glht to discern where the
interaction means differ. For example, for post hoc comparisons on the
means of the main effect of "category" I can run:
summary(glht(mod2,linfct=mcp(category="Tukey")))
But this only gives me the mean comparisons for the "category" main
effect
means. Essentially I'd like to run the following:
summary(glht(mod2,linfct=mcp(category*comp="Tukey")))
to get the mean comparisons for the interaction means. Perhaps needless to
say, this command does not work.
Can someone tell me how to run multiple comparisons among the interaction's
means?
I suspect that specifying the correct contrasts would do it, but I can't
figure out how to setup the contrasts...
Thanks,
Larry
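One approach often suggested for this (a sketch only, not from the thread) is
to code the cell means as a single factor and run Tukey comparisons on that
factor; "cell" below is a hypothetical new variable name:

library(lme4)
library(multcomp)
impchiefsrm$cell <- interaction(impchiefsrm$category, impchiefsrm$comp)  # 3 x 8 cell means
mod3 <- lmer(x ~ cell + (1 | id), data = impchiefsrm)
summary(glht(mod3, linfct = mcp(cell = "Tukey")))  # all pairwise cell-mean comparisons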
------------------------------
Message: 16
Date: Mon, 22 Dec 2008 11:40:26 -0500
From: "Doran, Harold" <HDoran at air.org>
Subject: Re: [R] Matching
To: "vpas" <vic.pascow at gmail.com>, <r-help at
r-project.org>
Message-ID:
<ED7B522EE00C9A4FA515AA71724D61EE01D47106 at DC1EXCL01.air.org>
Content-Type: text/plain; charset="us-ascii"
You don't need grep for this. Use the merge() function and make sure the
arguments all.x and all.y are considered depending on whether this is a
left or right merge.
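For example, with the two data frames DF1 and DF2 from the earlier reply:

merge(DF1, DF2)                  # inner join: only names present in both
merge(DF1, DF2, all.x = TRUE)    # left join: keep every row of DF1
merge(DF1, DF2, all.y = TRUE)    # right join: keep every row of DF2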
> -----Original Message-----
> From: r-help-bounces at r-project.org
> [mailto:r-help-bounces at r-project.org] On Behalf Of vpas
> Sent: Monday, December 22, 2008 11:03 AM
> To: r-help at r-project.org
> Subject: [R] Matching
>
>
> I understand this is an easy question, and have been playing
> around with grep and the match function, but was hoping for a
> little incite:
>
> I have one .csv with the following data:
>
> names values
> A 1
> B 2
> C 3
> D 4
>
>
> The second .csv is:
>
> names
> A
> C
>
>
> I am hoping to match all of the rows that appear in the
> second .csv, making a new file that would look like this:
>
> names values
> A 1
> C 3
>
>
> Here is what I have so far:
>
> my.data <- read.csv("rows.csv",sep=",")
> my.selection <- read.csv("select.csv",sep=",")
> matched <- match(my.data[,1], my.selection[,1])
> my.data <- my.data[matched]
> write.table(as.matrix(my.data), "select_RESULTS.txt")
>
> Unfortunately, this is throwing errors in row numbers...
>
> --
> View this message in context:
> http://www.nabble.com/Matching-tp21130173p21130173.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
------------------------------
Message: 17
Date: Mon, 22 Dec 2008 17:40:57 +0100
From: Ivan Alves <papucho at mac.com>
Subject: [R] Treatment of Date ODBC objects in R (RODBC)
To: r-help at r-project.org
Message-ID: <CB45106D-68B0-4138-9CEA-478C92136264 at mac.com>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
Dear all,
Retrieving an Oracle "Date" data type by means of RODBC (version
1.2-4) I get different classes in R depending on which operating
system I am in:
On MacOSX I get "Date" class
On Windows I get the "POSIXt" "POSIXct" class
The problem is material, as converting the "POSIXt"
"POSIXct" object
with as.Date() returns one day less ("2008-12-17 00:00:00 CET" is
returned as "2008-12-16").
I have 2 related questions:
1. Is there a way to control the conversion used by RODBC for types
"Date"? or is this controlled by the ODBC Driver (in my case the
Oracle driver in Windows and Actual on Mac OS X)?
2. What is the trick to get as.Date() to return the _intended_ date
(the date that the OS X environment "correctly" reads)?
Many thanks in advance for any guidance.
Best regards,
Ivan
------------------------------
Message: 18
Date: Mon, 22 Dec 2008 16:52:13 +0000
From: aavram at mac.com
Subject: Re: [R] Treatment of Date ODBC objects in R (RODBC)
To: "Ivan Alves" <papucho at mac.com>, r-help-bounces at
r-project.org,
r-help at r-project.org
Message-ID:
<1947895168-1229964736-cardhu_decombobulator_blackberry.rim.net-2090033294-
@bxe348.bisx.prod.on.blackberry>
Content-Type: text/plain
I tend to avoid the issue by asking Oracle for a character string
representation of the date. I use sql like this:
to_char( thedatefield, 'yyyymmdd' ) as thedate
Then in R:
d <- as.Date( as.character( thedate ), '%Y%m%d' )
Hope this helps,
Avram
------Original Message------
From: Ivan Alves
Sender: r-help-bounces at r-project.org
To: r-help at r-project.org
Sent: Dec 22, 2008 8:40 AM
Subject: [R] Treatment of Date ODBC objects in R (RODBC)
Dear all,
Retrieving an Oracle "Date" data type by means of RODBC (version
1.2-4) I get different classes in R depending on which operating
system I am in:
On MacOSX I get "Date" class
On Windows I get " "POSIXt" "POSIXct" class
The problem is material, as converting the "POSIXt"
"POSIXct" object
with as.Date() returns one day less ("2008-12-17 00:00:00 CET" is
returned as "2008-12-16").
I have 2 related questions:
1. Is there a way to control the conversion used by RODBC for types
"Date"? or is this controlled by the ODBC Driver (in my case the
Oracle driver in Windows and Actual on Mac OS X)?
2. What is the trick to get as.Date() to return the _intended_ date
(the date that the OS X environment "correctly" reads)?
Many thanks in advance for any guidance.
Best regards,
Ivan
______________________________________________
R-help at r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 19
Date: Mon, 22 Dec 2008 18:20:21 +0100
From: Adelchi Azzalini <azzalini at stat.unipd.it>
Subject: [R] package sn
To: r-help at r-project.org
Message-ID: <20081222182021.1f9adfeb.azzalini at stat.unipd.it>
Content-Type: text/plain; charset=ISO-8859-1
This message is of interest only to users of package "sn".
In early 2007, I posted an announcement on the package web site
http://azzalini.stat.unipd.it/SN/announce2.html
about a forthcoming "version 1". This will be a deeply revised version,
with substantial changes both in the internal workings and in the
calling statements. At that time, it did not make much sense to
spread the information so widely, since the appearance of "version 1"
was so far ahead in time.
"Version 1" is still under construction, but there is a fair chance
that it will be made available in 2009. So, at this stage, I thought it
appropriate to remind people that sooner or later the existing functions
may disappear, at least under their present names.
best wishes
Adelchi Azzalini
--
Adelchi Azzalini <azzalini at stat.unipd.it>
Dipart. Scienze Statistiche, Università di Padova, Italia
tel. +39 049 8274147, http://azzalini.stat.unipd.it/
------------------------------
Message: 20
Date: Mon, 22 Dec 2008 09:27:11 -0800
From: "Charles C. Berry" <cberry at tajo.ucsd.edu>
Subject: Re: [R] queue simulation
To: "Gerard M. Keogh" <GMKeogh at justice.ie>
Cc: r-help at r-project.org
Message-ID: <Pine.LNX.4.64.0812220924331.24942 at tajo.ucsd.edu>
Content-Type: text/plain; charset="iso-8859-1";
Format="flowed"
On Mon, 22 Dec 2008, Gerard M. Keogh wrote:
>
> Hi all,
>
>
> I have a multiple queueing situation I'd like to simulate to get some idea
> of the distributions - waiting times and allocations etc.
> Does R have a package available for this - many years ago there used to be
> a language called "simscript" for discrete event simulation and I was
> wondering if R has an equivalent (or hopefully with graphics, something
> better!).
>
> Apologies if there is an easy answer to this on the help - I've looked but
> didn't see anything obvious to me.
Not
    RSiteSearch("discrete event simulation", restrict = 'functions')
??
HTH,
Chuck
>
> Thanks and a Happy Christmas.
>
> Gerard
>
>
>
>
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
Charles C. Berry (858) 534-2098
Dept of Family/Preventive
Medicine
E mailto:cberry at tajo.ucsd.edu UC San Diego
http://famprevmed.ucsd.edu/faculty/cberry/ La Jolla, San Diego 92093-0901
------------------------------
Message: 21
Date: Mon, 22 Dec 2008 12:14:45 -0600
From: Earl F Glynn <efglynn at gmail.com>
Subject: Re: [R] Globbing Files in R
To: r-help at stat.math.ethz.ch
Message-ID: <giolev$sra$1 at ger.gmane.org>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Gundala Viswanath wrote:
> Typically Perl's idiom would be:
>
> __BEGIN__
> @files = glob("/mydir/*.txt");
>
> foreach my $file (@files) {
> # process the file
> }
> __END__
Something like this has been suggested in R-help before:
files <- dir()
results <- lapply(files, yourprocessing)
The dir function has path and pattern arguments to select the set of
files you want.
This works fine when there are no problems, but often I'll use a for
loop so problem files can be dealt with differently when necessary.
Perhaps something like this:
ProcessList <- dir(pattern="InPerson*")
for (i in 1:length(ProcessList))
{
filename <- ProcessList[i]
. . .
}
efg
Earl F Glynn
Overland Park, KS
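For literal shell-style globbing, closest to the Perl idiom quoted above, base
R also has Sys.glob() and glob2rx(); a minimal sketch:

files <- Sys.glob("/mydir/*.txt")                         # shell-style wildcard expansion
## or translate a wildcard into the regular expression dir()/list.files() expect:
files <- list.files("/mydir", pattern = glob2rx("*.txt"), full.names = TRUE)
results <- lapply(files, readLines)                       # e.g. process each file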
------------------------------
Message: 22
Date: Mon, 22 Dec 2008 13:22:27 -0500
From: Duncan Murdoch <murdoch at stats.uwo.ca>
Subject: Re: [R] Globbing Files in R
To: Earl F Glynn <efglynn at gmail.com>
Cc: r-help at stat.math.ethz.ch
Message-ID: <494FDAE3.2060600 at stats.uwo.ca>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
On 12/22/2008 1:14 PM, Earl F Glynn wrote:
> Gundala Viswanath wrote:
>
>> Typically Perl's idiom would be:
>>
>> __BEGIN__
>> @files = glob("/mydir/*.txt");
>>
>> foreach my $file (@files) {
>> # process the file
>> }
>> __END__
>
> Something like this has been suggested in R-help before:
>
> files <- dir()
> results <- lapply(files, yourprocessing())
>
> The dir function has path and pattern arguments to select the set of
> files you want.
>
> This works fine when there are no problems, but often I'll use a for
> loop so problem files can be dealt with differently when necessary.
>
> Perhaps something like this:
>
> ProcessList <- dir(pattern="InPerson*")
>
> for (i in 1:length(ProcessList))
Remember to use seq_along() instead: ProcessList might be length 0.
Duncan Murdoch
> {
> filename <- ProcessList[i]
> . . .
> }
>
>
> efg
>
> Earl F Glynn
> Overland Park, KS
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 23
Date: Mon, 22 Dec 2008 19:29:33 +0100
From: Peter Dalgaard <p.dalgaard at biostat.ku.dk>
Subject: Re: [R] Treatment of Date ODBC objects in R (RODBC)
To: Ivan Alves <papucho at mac.com>
Cc: r-help at r-project.org
Message-ID: <494FDC8D.5090506 at biostat.ku.dk>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Ivan Alves wrote:
> Dear all,
>
> Retrieving an Oracle "Date" data type by means of RODBC (version 1.2-4)
> I get different classes in R depending on which operating system I am in:
>
> On MacOSX I get "Date" class
> On Windows I get " "POSIXt" "POSIXct" class
>
> The problem is material, as converting the "POSIXt" "POSIXct" object
> with as.Date() returns one day less ("2008-12-17 00:00:00 CET" is
> returned as "2008-12-16").
This is in a sense correct since CET is one hour ahead of GMT (two hours
in Summer). What is a bit puzzling is that
> ISOdate(2008,12,24)
[1] "2008-12-24 12:00:00 GMT"
> class(ISOdate(2008,12,24))
[1] "POSIXt" "POSIXct"
> as.POSIXct("2008-12-24")
[1] "2008-12-24 CET"
> as.POSIXct("2008-12-24")+1
[1] "2008-12-24 00:00:01 CET"
I.e. we have two ways of converting a timeless date to POSIXct, and they
differ in noon/midnight, and in whether local timezone matters or not.
I believe Brian did this, and he usually does things for a reason....
>
> I have 2 related questions:
>
> 1. Is there a way to control the conversion used by RODBC for types
> "Date"? or is this controlled by the ODBC Driver (in my case the Oracle
> driver in Windows and Actual on Mac OS X)?
>
> 2. What is the trick to get as.Date() to return the _intended_ date (the
> date that the OS X environment "correctly" reads)?
Add 12 hours, maybe? (43200 seconds)
Or play around with the timezone, but that seems painful.
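Either of these, roughly, where x is the POSIXct value RODBC returned (the
"CET" timezone name is assumed to be available on the system):

x <- as.POSIXct("2008-12-17 00:00:00", tz = "CET")  # example of the Windows value
as.Date(x)              # "2008-12-16": as.Date() converts in UTC by default
as.Date(x + 43200)      # add half a day first, as suggested above
as.Date(x, tz = "CET")  # or tell as.Date() which timezone to use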
--
O__ ---- Peter Dalgaard Øster Farimagsgade 5, Entr.B
c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
(*) \(*) -- University of Copenhagen Denmark Ph: (+45) 35327918
~~~~~~~~~~ - (p.dalgaard at biostat.ku.dk) FAX: (+45) 35327907
------------------------------
Message: 24
Date: Mon, 22 Dec 2008 14:38:29 -0500
From: "Lu, Zheng" <Zheng.Lu at mpi.com>
Subject: [R] questions about read datafile into R
To: <r-help at r-project.org>
Message-ID:
<429BD5F1CFD42940B99E159DE356A37A0609D68A at US-BE3.corp.mpi.com>
Content-Type: text/plain
Dear all:
I have been thinking of importing the data file (.txt) below into R with
read.table(.., skip=1, header=T). But how can I deal with the repeated
"TABLE NO. 1" rows and the variable-name rows that appear in the middle of
the data file? A similar block is repeated 100 times (only 4 are shown here),
and the number of data records within each block can vary (only 4 rows are
pasted per block as an example). I appreciate your consideration and help in
this holiday season. Happy Holidays!
TABLE NO. 1
ID GID TIME OBS AMT EVID
RATE ADDL II CMT WT IPRE
3.1000E+01 1.0000E+00 0.0000E+00 0.0000E+00 1.0000E+00 1.0000E+00
0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 3.3918E+02
3.1000E+01 1.0000E+00 0.0000E+00 2.0500E+02 0.0000E+00 0.0000E+00
0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 2.6267E+02
3.1000E+01 1.0000E+00 9.6000E+01 4.2100E+02 0.0000E+00 0.0000E+00
0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 3.1781E+02
TABLE NO. 1
ID GID TIME OBS AMT EVID
RATE ADDL II CMT WT IPRE
3.1000E+01 1.0000E+00 0.0000E+00 0.0000E+00 1.0000E+00 1.0000E+00
0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 5.7557E+01
3.1000E+01 1.0000E+00 0.0000E+00 2.0500E+02 0.0000E+00 0.0000E+00
0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 8.8583E+01
3.1000E+01 1.0000E+00 9.6000E+01 4.2100E+02 0.0000E+00 0.0000E+00
0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 1.7342E+02
3.1000E+01 1.0000E+00 1.6800E+02 5.3100E+02 0.0000E+00 0.0000E+00
0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 2.0179E+02
TABLE NO. 1
ID GID TIME OBS AMT EVID
RATE ADDL II CMT WT IPRE
3.1000E+01 1.0000E+00 0.0000E+00 0.0000E+00 1.0000E+00 1.0000E+00
0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 1.4389E+02
3.1000E+01 1.0000E+00 0.0000E+00 2.0500E+02 0.0000E+00 0.0000E+00
0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 2.6147E+02
3.1000E+01 1.0000E+00 9.6000E+01 4.2100E+02 0.0000E+00 0.0000E+00
0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 2.2634E+02
3.1000E+01 1.0000E+00 1.6800E+02 5.3100E+02 0.0000E+00 0.0000E+00
0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 4.0733E+02
TABLE NO. 1
ID GID TIME OBS AMT EVID
RATE ADDL II CMT WT IPRE
3.1000E+01 1.0000E+00 0.0000E+00 0.0000E+00 1.0000E+00 1.0000E+00
0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 2.2003E+02
3.1000E+01 1.0000E+00 0.0000E+00 2.0500E+02 0.0000E+00 0.0000E+00
0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 3.2116E+02
3.1000E+01 1.0000E+00 9.6000E+01 4.2100E+02 0.0000E+00 0.0000E+00
0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 3.3642E+02
3.1000E+01 1.0000E+00 1.6800E+02 5.3100E+02 0.0000E+00 0.0000E+00
0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 4.7881E+02
...
...
...
zheng
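One possible approach, sketched here under the assumption that each record
occupies a single physical line in the actual file (the lines above are
wrapped by the mail) and with "sim.txt" as a placeholder file name: read the
file as plain text, keep only the lines that start with a number, and parse
those.
raw  <- readLines("sim.txt")                 # placeholder file name
keep <- grepl("^[[:space:]]*[0-9.+-]", raw)  # data lines start with a number;
                                             # "TABLE NO." and header lines do not
con  <- textConnection(raw[keep])
dat  <- read.table(con,
                   col.names = c("ID", "GID", "TIME", "OBS", "AMT", "EVID",
                                 "RATE", "ADDL", "II", "CMT", "WT", "IPRE"))
close(con)
str(dat)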
------------------------------
Message: 25
Date: Mon, 22 Dec 2008 15:07:10 -0500
From: "Sarah Goslee" <sarah.goslee at gmail.com>
Subject: Re: [R] questions about read datafile into R
To: "Lu, Zheng" <Zheng.Lu at mpi.com>, "r-help at
r-project.org"
<r-help at r-project.org>
Message-ID:
<efb536d50812221207m380eb1d0qf3846cc6b7cab772 at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
This kind of problem - reading a messy data file - was discussed at
great length just last week. The ideas from that discussion may
help you with your version:
http://tolstoy.newcastle.edu.au/R/e5/help/08/12/10404.html
Sarah
On Mon, Dec 22, 2008 at 2:38 PM, Lu, Zheng <Zheng.Lu at mpi.com> wrote:
> Dear all:
>
>
>
> I have been thinking to import below one data file (.txt)into R by
> read.table(..,skip=1, header=T). But How can I deal with the repeated
> rows of TABLE NO.1 and names of data variables in the middle of this
> data file. The similar block will be repeated 100 times, here only show
> 4 of them and within each block, data records also can vary, here only
> paste 4 rows for example. I appreciate your consideration and help in
> this holiday season. Happy Holiday!
--
Sarah Goslee
http://www.functionaldiversity.org
------------------------------
Message: 26
Date: Mon, 22 Dec 2008 15:20:18 -0500
From: "Brigid Mooney" <bkmooney at gmail.com>
Subject: [R] How can I avoid nested 'for' loops or quicken the
process?
To: r-help at r-project.org
Message-ID:
<85f3856f0812221220s9d936e8y5f9034c3584c3fc6 at mail.gmail.com>
Content-Type: text/plain
Hi All,
I'm still pretty new to using R - and I was hoping I might be able to get
some advice as to how to use 'apply' or a similar function instead of
using nested for loops.
Right now I have a script which uses nested for loops similar to this:
i <- 1
for(a in Alpha) { for (b in Beta) { for (c in Gamma) { for (d in Delta) {
for (e in Epsilon)
{
Output[i] <- MyFunction(X, Y, a, b, c, d, e)
i <- i+1
}}}}}
Where Output[i] is a data frame, X and Y are data frames, and Alpha, Beta,
Gamma, Delta, and Epsilon are all lists, some of which are numeric, some
logical (TRUE/FALSE).
Any advice on how to implement some sort of solution that might be quicker
than these nested 'for' loops would be greatly appreciated.
Thanks!
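One commonly suggested pattern is to lay out every parameter combination with
expand.grid() and then loop over its rows with lapply(). The sketch below uses
toy placeholder values, since Alpha, ..., Epsilon, X, Y and MyFunction are not
shown in the post:
Alpha <- 1:2; Beta <- c(TRUE, FALSE); Gamma <- 1:3         # toy placeholder values
X <- Y <- data.frame()                                     # placeholders
MyFunction <- function(X, Y, a, b, c) data.frame(a, b, c)  # stand-in for the real one
grid <- expand.grid(a = Alpha, b = Beta, c = Gamma)        # all combinations
Output <- lapply(seq_len(nrow(grid)), function(i)
    MyFunction(X, Y, grid$a[i], grid$b[i], grid$c[i]))
length(Output)   # one result per row of 'grid'
Whether this is faster than the explicit loops depends almost entirely on
MyFunction itself; the main gain is that the bookkeeping (the i counter)
disappears.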
------------------------------
Message: 27
Date: Mon, 22 Dec 2008 12:33:55 -0800
From: "Charles C. Berry" <cberry at tajo.ucsd.edu>
Subject: Re: [R] How can I avoid nested 'for' loops or quicken the
process?
To: Brigid Mooney <bkmooney at gmail.com>
Cc: r-help at r-project.org
Message-ID: <Pine.LNX.4.64.0812221228560.25834 at tajo.ucsd.edu>
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed
On Mon, 22 Dec 2008, Brigid Mooney wrote:
> Hi All,
>
> I'm still pretty new to using R - and I was hoping I might be able to get
> some advice as to how to use 'apply' or a similar function instead of
> using nested for loops.
Unfortunately, you have given nothing that is reproducible.
The details of MyFunction and the exact structure of the list objects are
crucial.
Check out the _Posting Guide_ for hints on how to formulate a question
that will elicit an answer that helps you.
HTH,
Chuck
>
> Right now I have a script which uses nested for loops similar to this:
>
> i <- 1
> for(a in Alpha) { for (b in Beta) { for (c in Gamma) { for (d in Delta) {
> for (e in Epsilon)
> {
> Output[i] <- MyFunction(X, Y, a, b, c, d, e)
> i <- i+1
> }}}}}
>
>
> Where Output[i] is a data frame, X and Y are data frames, and Alpha, Beta,
> Gamma, Delta, and Epsilon are all lists, some of which are numeric, some
> logical (TRUE/FALSE).
>
> Any advice on how to implement some sort of solution that might be quicker
> than these nested 'for' loops would be greatly appreciated.
>
> Thanks!
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
Charles C. Berry (858) 534-2098
Dept of Family/Preventive Medicine
E mailto:cberry at tajo.ucsd.edu UC San Diego
http://famprevmed.ucsd.edu/faculty/cberry/ La Jolla, San Diego 92093-0901
------------------------------
Message: 28
Date: Mon, 22 Dec 2008 16:43:43 -0400
From: "Mike Lawrence" <mike at thatmike.com>
Subject: [R] Error compiling R.2.8.1 with gcc 4.4 on Mac OS 10.5.6
To: r-help at stat.math.ethz.ch
Cc: gkhanna at umassd.edu
Message-ID:
<8ae7763a0812221243i2a88614dg8ad1818fba09747 at mail.gmail.com>
Content-Type: text/plain
Hi all,
I've encountered a build error with the latest R source (2.8.1). This is a
relatively fresh install of Mac OS X Leopard (10.5.6), latest developer tools
installed, gcc/g++/gfortran version 4.4 installed (via
http://hpc.sourceforge.net/, after which I updated the gcc & g++ symlinks to
link to the 4.4 versions; gfortran used the 4.4 version without updating the
symlink).
Ultimately I wanted to install pnmath, so, as per a previous thread (
http://www.nabble.com/Parallel-R-tt18173953.html#a18196319) I built with:
LIBS=-lgomp ./configure --with-blas='-framework vecLib'
make -j4
The configure runs without a hitch, but make fails, throwing an error
seemingly related to qdCocoa:
making qdCocoa.d from qdCocoa.m
<built-in>:0: internal compiler error: Abort trap
Below is the output of configure, followed by the output of make (error is
in the last 10 lines). Any suggestions to fix this would be greatly
appreciated.
mal:R-2.8.1 mike$ LIBS=-lgomp ./configure --with-blas='-framework vecLib'
checking build system type... i386-apple-darwin9.6.0
checking host system type... i386-apple-darwin9.6.0
loading site script './config.site'
loading build specific script './config.site'
checking for pwd... /bin/pwd
checking whether builddir is srcdir... yes
checking for working aclocal... found
checking for working autoconf... found
checking for working automake... found
checking for working autoheader... found
checking for gawk... no
checking for mawk... no
checking for nawk... no
checking for awk... awk
checking for grep that handles long lines and -e... /usr/bin/grep
checking for egrep... /usr/bin/grep -E
checking whether ln -s works... yes
checking for bison... bison -y
checking for ar... ar
checking for a BSD-compatible install... /usr/bin/install -c
checking for sed... /usr/bin/sed
checking for less... /usr/bin/less
checking for perl... /usr/bin/perl
checking whether perl version is at least 5.8.0... yes
checking for dvips... no
checking for tex... no
checking for latex... no
configure: WARNING: you cannot build DVI versions of the R manuals
checking for makeindex... no
checking for pdftex... no
checking for pdflatex... no
configure: WARNING: you cannot build PDF versions of the R manuals
checking for makeinfo... /usr/bin/makeinfo
checking whether makeinfo version is at least 4.7... yes
checking for texi2dvi... /usr/bin/texi2dvi
checking for unzip... /usr/bin/unzip
checking for zip... /usr/bin/zip
checking for gzip... /usr/bin/gzip
checking for firefox... no
checking for mozilla... no
checking for galeon... no
checking for opera... no
checking for xdg-open... no
checking for kfmclient... no
checking for gnome-moz-remote... no
checking for open... /usr/bin/open
using default browser ... /usr/bin/open
checking for acroread... no
checking for acroread4... no
checking for evince... no
checking for xpdf... no
checking for gv... no
checking for gnome-gv... no
checking for ggv... no
checking for kghostview... no
checking for open... /usr/bin/open
checking for pkg-config... no
checking for gcc... gcc
checking for C compiler default output file name... a.out
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking how to run the C preprocessor... gcc -E
checking whether gcc needs -traditional... no
checking how to run the C preprocessor... gcc -E
checking for gfortran... gfortran
checking whether we are using the GNU Fortran 77 compiler... yes
checking whether gfortran accepts -g... yes
checking for g++... g++
checking whether we are using the GNU C++ compiler... yes
checking whether g++ accepts -g... yes
checking how to run the C++ preprocessor... g++ -E
checking whether __attribute__((visibility())) is supported... no
checking whether gcc accepts -fvisibility... yes
checking whether gfortran accepts -fvisibility... yes
checking for gcc... gcc
checking whether we are using the GNU Objective C compiler... no
checking whether gcc accepts -g... no
checking for cached ObjC++ compiler... none
checking whether g++ can compile ObjC++... no
checking whether can compile ObjC++... no
checking for Objective C++ compiler... no working compiler found
checking for a sed that does not truncate output... (cached) /usr/bin/sed
checking for fgrep... /usr/bin/grep -F
checking for ld used by gcc... /usr/bin/ld
checking if the linker (/usr/bin/ld) is GNU ld... no
checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -p
checking the name lister (/usr/bin/nm -p) interface... BSD nm
checking the maximum length of command line arguments... 196608
checking whether the shell understands some XSI constructs... yes
checking whether the shell understands "+="... yes
checking for /usr/bin/ld option to reload object files... -r
checking how to recognize dependent libraries... pass_all
checking for strip... strip
checking for ranlib... ranlib
checking command to parse /usr/bin/nm -p output from gcc object... ok
checking for dsymutil... dsymutil
checking for nmedit... nmedit
checking for -single_module linker flag... yes
checking for -exported_symbols_list linker flag... yes
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking for dlfcn.h... yes
checking whether we are using the GNU C++ compiler... (cached) yes
checking whether g++ accepts -g... (cached) yes
checking how to run the C++ preprocessor... g++ -E
checking whether we are using the GNU Fortran 77 compiler... (cached) yes
checking whether gfortran accepts -g... (cached) yes
checking for objdir... .libs
checking if gcc supports -fno-rtti -fno-exceptions... no
checking for gcc option to produce PIC... -fno-common -DPIC
checking if gcc PIC flag -fno-common -DPIC works... yes
checking if gcc static flag -static works... no
checking if gcc supports -c -o file.o... yes
checking if gcc supports -c -o file.o... (cached) yes
checking whether the gcc linker (/usr/bin/ld) supports shared libraries...
yes
checking dynamic linker characteristics... darwin9.6.0 dyld
checking how to hardcode library paths into programs... immediate
checking whether stripping libraries is possible... yes
checking if libtool supports shared libraries... yes
checking whether to build shared libraries... yes
checking whether to build static libraries... no
checking for ld used by g++... /usr/bin/ld
checking if the linker (/usr/bin/ld) is GNU ld... no
checking whether the g++ linker (/usr/bin/ld) supports shared libraries...
yes
checking for g++ option to produce PIC... -fno-common -DPIC
checking if g++ PIC flag -fno-common -DPIC works... yes
checking if g++ static flag -static works... no
checking if g++ supports -c -o file.o... yes
checking if g++ supports -c -o file.o... (cached) yes
checking whether the g++ linker (/usr/bin/ld) supports shared libraries...
yes
checking dynamic linker characteristics... darwin9.6.0 dyld
checking how to hardcode library paths into programs... immediate
checking if libtool supports shared libraries... yes
checking whether to build shared libraries... yes
checking whether to build static libraries... no
checking for gfortran option to produce PIC... -fno-common
checking if gfortran PIC flag -fno-common works... yes
checking if gfortran static flag -static works... no
checking if gfortran supports -c -o file.o... yes
checking if gfortran supports -c -o file.o... (cached) yes
checking whether the gfortran linker (/usr/bin/ld) supports shared
libraries... yes
checking dynamic linker characteristics... darwin9.6.0 dyld
checking how to hardcode library paths into programs... immediate
checking for sin in -lm... yes
checking for library containing dlopen... none required
checking readline/history.h usability... yes
checking readline/history.h presence... yes
checking for readline/history.h... yes
checking readline/readline.h usability... yes
checking readline/readline.h presence... yes
checking for readline/readline.h... yes
checking for rl_callback_read_char in -lreadline... yes
checking for history_truncate_file... yes
checking whether rl_completion_matches exists and is declared... no
checking for ANSI C header files... (cached) yes
checking whether time.h and sys/time.h may both be included... yes
checking for dirent.h that defines DIR... yes
checking for library containing opendir... none required
checking for sys/wait.h that is POSIX.1 compatible... yes
checking arpa/inet.h usability... yes
checking arpa/inet.h presence... yes
checking for arpa/inet.h... yes
checking dl.h usability... no
checking dl.h presence... no
checking for dl.h... no
checking for dlfcn.h... (cached) yes
checking elf.h usability... no
checking elf.h presence... no
checking for elf.h... no
checking fcntl.h usability... yes
checking fcntl.h presence... yes
checking for fcntl.h... yes
checking floatingpoint.h usability... no
checking floatingpoint.h presence... no
checking for floatingpoint.h... no
checking fpu_control.h usability... no
checking fpu_control.h presence... no
checking for fpu_control.h... no
checking glob.h usability... yes
checking glob.h presence... yes
checking for glob.h... yes
checking grp.h usability... yes
checking grp.h presence... yes
checking for grp.h... yes
checking limits.h usability... yes
checking limits.h presence... yes
checking for limits.h... yes
checking locale.h usability... yes
checking locale.h presence... yes
checking for locale.h... yes
checking netdb.h usability... yes
checking netdb.h presence... yes
checking for netdb.h... yes
checking netinet/in.h usability... yes
checking netinet/in.h presence... yes
checking for netinet/in.h... yes
checking pwd.h usability... yes
checking pwd.h presence... yes
checking for pwd.h... yes
checking stdbool.h usability... yes
checking stdbool.h presence... yes
checking for stdbool.h... yes
checking for strings.h... (cached) yes
checking sys/param.h usability... yes
checking sys/param.h presence... yes
checking for sys/param.h... yes
checking sys/select.h usability... yes
checking sys/select.h presence... yes
checking for sys/select.h... yes
checking sys/socket.h usability... yes
checking sys/socket.h presence... yes
checking for sys/socket.h... yes
checking for sys/stat.h... (cached) yes
checking sys/resource.h usability... yes
checking sys/resource.h presence... yes
checking for sys/resource.h... yes
checking sys/time.h usability... yes
checking sys/time.h presence... yes
checking for sys/time.h... yes
checking sys/times.h usability... yes
checking sys/times.h presence... yes
checking for sys/times.h... yes
checking sys/utsname.h usability... yes
checking sys/utsname.h presence... yes
checking for sys/utsname.h... yes
checking time.h usability... yes
checking time.h presence... yes
checking for time.h... yes
checking for unistd.h... (cached) yes
checking errno.h usability... yes
checking errno.h presence... yes
checking for errno.h... yes
checking for inttypes.h... (cached) yes
checking stdarg.h usability... yes
checking stdarg.h presence... yes
checking for stdarg.h... yes
checking for stdint.h... (cached) yes
checking for string.h... (cached) yes
checking whether setjmp.h is POSIX.1 compatible... yes
checking whether sigsetjmp is declared... yes
checking whether siglongjmp is declared... yes
checking for GNU C library with version >= 2... no
checking return type of signal handlers... void
checking for pid_t... yes
checking for size_t... yes
checking whether SIZE_MAX is declared... yes
checking for blkcnt_t... yes
checking for type of socket length... socklen_t *
checking for stack_t... yes
checking for intptr_t... yes
checking for uintptr_t... yes
checking whether byte ordering is bigendian... no
checking for an ANSI C-conforming const... yes
checking for gcc option to accept ISO C99... -std=gnu99
checking for gcc -std=gnu99 option to accept ISO Standard C... (cached)
-std=gnu99
checking for inline... inline
checking for int... yes
checking size of int... 4
checking for long... yes
checking size of long... 4
checking for long long... yes
checking size of long long... 8
checking for double... yes
checking size of double... 8
checking for long double... yes
checking size of long double... 16
checking for size_t... (cached) yes
checking size of size_t... 4
checking whether we can compute C Make dependencies... yes, using gcc
-std=gnu99 -MM
checking whether gcc -std=gnu99 supports -c -o FILE.lo... yes
checking how to get verbose linking output from gfortran... -v
checking for Fortran 77 libraries of gfortran... -L/usr/local/lib
-L/usr/local/lib/gcc/i386-apple-darwin9.4.0/4.4.0
-L/usr/local/lib/gcc/i386-apple-darwin9.4.0/4.4.0/../../.. -lgfortranbegin
-lgfortran
checking how to get verbose linking output from gcc -std=gnu99... -v
checking for C libraries of gcc -std=gnu99... -lcrt1.10.5.o
-L/usr/local/lib -L/usr/local/lib/gcc/i386-apple-darwin9.4.0/4.4.0
-L/usr/local/lib/gcc/i386-apple-darwin9.4.0/4.4.0/../../.. -lgcc_s.10.5
-lSystem
checking for dummy main to link with Fortran 77 libraries... none
checking for Fortran 77 name-mangling scheme... lower case, underscore, no
extra underscore
checking whether gfortran appends underscores to external names... yes
checking whether gfortran appends extra underscores to external names... no
checking whether mixed C/Fortran code can be run... yes
checking whether gfortran and gcc -std=gnu99 agree on int and double... yes
checking whether gfortran and gcc -std=gnu99 agree on double complex... yes
checking whether g++ accepts -M for generating dependencies... yes
checking whether we can compute ObjC Make dependencies... yes, using cpp -M
checking for working Foundation implementation... no
checking whether C runtime needs -D__NO_MATH_INLINES... no
checking for xmkmf... /usr/X11/bin/xmkmf
checking whether linker supports dynamic lookup... yes
checking for off_t... yes
checking for working alloca.h... yes
checking for alloca... yes
checking whether alloca is declared... yes
checking whether expm1 exists and is declared... yes
checking whether hypot exists and is declared... yes
checking whether log1p exists and is declared... yes
checking whether log2 exists and is declared... yes
checking whether log10 exists and is declared... yes
checking whether rint exists and is declared... yes
checking for fseeko... yes
checking for ftello... yes
checking for isblank... yes
checking for matherr... yes
checking whether fcntl exists and is declared... yes
checking whether getgrgid exists and is declared... yes
checking whether getpwuid exists and is declared... yes
checking whether sigaction exists and is declared... yes
checking whether sigaltstack exists and is declared... yes
checking whether sigemptyset exists and is declared... yes
checking whether va_copy exists and is declared... yes
checking whether __va_copy exists and is declared... yes
checking whether fdopen exists and is declared... yes
checking whether popen exists and is declared... yes
checking whether setenv exists and is declared... yes
checking whether system exists and is declared... yes
checking whether unsetenv exists and is declared... yes
checking whether strcoll exists and is declared... yes
checking whether getrlimit exists and is declared... yes
checking whether getrusage exists and is declared... yes
checking whether chmod exists and is declared... yes
checking whether mkfifo exists and is declared... yes
checking whether stat exists and is declared... yes
checking whether umask exists and is declared... yes
checking whether gettimeofday exists and is declared... yes
checking whether times exists and is declared... yes
checking whether time exists and is declared... yes
checking whether access exists and is declared... yes
checking whether chdir exists and is declared... yes
checking whether execv exists and is declared... yes
checking whether ftruncate exists and is declared... yes
checking whether getcwd exists and is declared... yes
checking whether getuid exists and is declared... yes
checking whether symlink exists and is declared... yes
checking whether sysconf exists and is declared... yes
checking for putenv... yes
checking whether putenv is declared... yes
checking for vasprintf... yes
checking whether vasprintf is declared... yes
checking for mempcpy... no
checking for realpath... yes
checking whether realpath is declared... yes
checking whether glob exists and is declared... yes
checking for isnan... yes
checking whether isfinite is declared... yes
checking whether isnan is declared... yes
checking whether you have IEEE 754 floating-point arithmetic... yes
checking whether putenv("FOO") can unset an environment variable... no
checking whether putenv("FOO=") can unset an environment variable...
no
checking for nl_langinfo and CODESET... yes
checking for acosh... yes
checking for asinh... yes
checking for atanh... yes
checking for mkdtemp... yes
checking for snprintf... yes
checking for strdup... yes
checking for strncasecmp... yes
checking for vsnprintf... yes
checking whether acosh is declared... yes
checking whether asinh is declared... yes
checking whether atanh is declared... yes
checking whether mkdtemp is declared... yes
checking whether snprintf is declared... yes
checking whether strdup is declared... yes
checking whether strncasecmp is declared... yes
checking whether vsnprintf is declared... yes
checking for library containing connect... none required
checking for library containing gethostbyname... none required
checking for library containing xdr_string... none required
checking for __setfpucw... no
checking for working calloc... yes
checking for working isfinite... yes
checking for working log1p... yes
checking whether ftell works correctly on files opened for append... yes
checking for working sigaction... yes
checking whether mktime sets errno... no
checking whether C99 double complex is supported...
checking complex.h usability... yes
checking complex.h presence... yes
checking for complex.h... yes
checking for double complex... yes
checking whether cexp exists and is declared... yes
checking whether clog exists and is declared... yes
checking whether csqrt exists and is declared... yes
checking whether cpow exists and is declared... yes
checking whether ccos exists and is declared... yes
checking whether csin exists and is declared... yes
checking whether ctan exists and is declared... yes
checking whether cacos exists and is declared... yes
checking whether casin exists and is declared... yes
checking whether catan exists and is declared... yes
checking whether ccosh exists and is declared... yes
checking whether csinh exists and is declared... yes
checking whether ctanh exists and is declared... yes
checking whether cacosh exists and is declared... yes
checking whether casinh exists and is declared... yes
checking whether catanh exists and is declared... yes
checking whether C99 double complex is compatible with Rcomplex... yes
yes
checking for cblas_cdotu_sub in vecLib framework... -framework vecLib
checking for dgemm_ in -framework vecLib... yes
checking whether double complex BLAS can be used... no
checking whether the BLAS is complete... yes
checking iconv.h usability... yes
checking iconv.h presence... yes
checking for iconv.h... yes
checking for iconv... in libiconv
checking whether iconv accepts "UTF-8", "latin1" and
"UCS-*"... yes
checking for iconvlist... yes
checking wchar.h usability... yes
checking wchar.h presence... yes
checking for wchar.h... yes
checking wctype.h usability... yes
checking wctype.h presence... yes
checking for wctype.h... yes
checking whether mbrtowc exists and is declared... yes
checking whether wcrtomb exists and is declared... yes
checking whether wcscoll exists and is declared... yes
checking whether wcsftime exists and is declared... yes
checking whether wcstod exists and is declared... yes
checking whether mbstowcs exists and is declared... yes
checking whether wcstombs exists and is declared... yes
checking whether wctrans exists and is declared... yes
checking whether iswblank exists and is declared... yes
checking whether wctype exists and is declared... yes
checking whether iswctype exists and is declared... yes
checking for wctrans_t... yes
checking for mbstate_t... yes
checking for X... libraries /usr/X11/lib, headers /usr/X11/include
checking whether -R must be followed by a space... neither works
checking for gethostbyname... yes
checking for connect... yes
checking for remove... yes
checking for shmat... yes
checking for IceConnectionNumber in -lICE... yes
checking X11/Intrinsic.h usability... yes
checking X11/Intrinsic.h presence... yes
checking for X11/Intrinsic.h... yes
checking for XtToolkitInitialize in -lXt... yes
using X11 ... yes
checking for KeySym... yes
checking X11/Xmu/Atoms.h usability... yes
checking X11/Xmu/Atoms.h presence... yes
checking for X11/Xmu/Atoms.h... yes
checking for XmuInternAtom in -lXmu... yes
configure: not checking for cairo as pkg-config is not present
checking for CFStringGetSystemEncoding in CoreFoundation framework...
-framework CoreFoundation
checking for tclConfig.sh... no
checking for tclConfig.sh in library (sub)directories...
/usr/lib/tclConfig.sh
checking for tkConfig.sh... no
checking for tkConfig.sh in library (sub)directories... /usr/lib/tkConfig.sh
checking tcl.h usability... yes
checking tcl.h presence... yes
checking for tcl.h... yes
checking tk.h usability... yes
checking tk.h presence... yes
checking for tk.h... yes
checking whether compiling/linking Tcl/Tk code works... yes
checking for BSD networking... yes
checking if jpeglib version >= 6b... yes
checking for jpeg_destroy_compress in -ljpeg... yes
checking for main in -lz... yes
checking if libpng version >= 1.0.5... no
checking rpc/types.h usability... yes
checking rpc/types.h presence... yes
checking for rpc/types.h... yes
checking for rpc/xdr.h... yes
checking for XDR support... yes
checking whether zlib support needs to be compiled... yes
checking mmap support for zlib... yes
checking whether bzip2 support needs to be compiled... yes
checking whether PCRE support needs to be compiled... yes
checking tiffio.h usability... no
checking tiffio.h presence... no
checking for tiffio.h... no
checking for TIFFOpen in -ltiff... no
checking whether leap seconds are treated according to POSIX... yes
checking for setitimer... yes
checking for special C compiler options needed for large files... no
checking for _FILE_OFFSET_BITS value needed for large files... no
checking for _LARGEFILE_SOURCE value needed for large files... no
checking whether KERN_USRSTACK sysctl is supported... yes
checking for lpr... lpr
checking for paperconf... false
checking for java... /usr/bin/java
checking for javac... /usr/bin/javac
checking for javah... /usr/bin/javah
checking for jar... /usr/bin/jar
checking whether Java compiler works... yes
checking whether Java compiler works for version 1.4... yes
checking whether Java interpreter works... yes
checking Java environment...
/System/Library/Frameworks/JavaVM.framework/Versions/1.5.0/Home
checking for cached Java settings... no
checking whether JNI programs can be compiled... yes
checking for gfortran... gfortran
checking whether we are using the GNU Fortran compiler... yes
checking whether gfortran accepts -g... yes
checking whether we are using the GNU Fortran compiler... (cached) yes
checking whether gfortran accepts -g... (cached) yes
checking if libtool supports shared libraries... yes
checking whether to build shared libraries... yes
checking whether to build static libraries... no
checking for gfortran option to produce PIC... -fno-common
checking if gfortran PIC flag -fno-common works... yes
checking if gfortran static flag -static works... no
checking if gfortran supports -c -o file.o... yes
checking if gfortran supports -c -o file.o... (cached) yes
checking whether the gfortran linker (/usr/bin/ld) supports shared
libraries... yes
checking dynamic linker characteristics... darwin9.6.0 dyld
checking how to hardcode library paths into programs... immediate
checking for Fortran flag to compile .f90 files... none
checking for Fortran flag to compile .f95 files... none
checking for recommended packages... yes
checking whether NLS is requested... yes
checking whether make sets $(MAKE)... yes
checking for msgfmt... no
checking for gmsgfmt... :
checking for xgettext... no
checking for msgmerge... no
checking whether we are using the GNU C Library 2 or newer... no
checking for ranlib... (cached) ranlib
checking for simple visibility declarations... yes
checking for inline... inline
checking for stdint.h... yes
checking for stdlib.h... (cached) yes
checking for unistd.h... (cached) yes
checking for getpagesize... yes
checking for working mmap... yes
checking whether integer division by zero raises SIGFPE... yes
checking for inttypes.h... yes
checking for unsigned long long int... yes
checking for inttypes.h... (cached) yes
checking whether the inttypes.h PRIxNN macros are broken... no
checking for ld used by GCC... /usr/bin/ld
checking if the linker (/usr/bin/ld) is GNU ld... no
checking for shared library run path origin... done
checking whether imported symbols can be declared weak... no
checking pthread.h usability... yes
checking pthread.h presence... yes
checking for pthread.h... yes
checking for pthread_kill in -lpthread... yes
checking for pthread_rwlock_t... yes
checking for multithread API to use... posix
checking argz.h usability... no
checking argz.h presence... no
checking for argz.h... no
checking for inttypes.h... (cached) yes
checking for limits.h... (cached) yes
checking for unistd.h... (cached) yes
checking for sys/param.h... (cached) yes
checking for getcwd... yes
checking for getegid... yes
checking for geteuid... yes
checking for getgid... yes
checking for getuid... yes
checking for mempcpy... (cached) no
checking for munmap... yes
checking for stpcpy... yes
checking for strcasecmp... yes
checking for strdup... (cached) yes
checking for strtoul... yes
checking for tsearch... yes
checking for argz_count... no
checking for argz_stringify... no
checking for argz_next... no
checking for __fsetlocking... no
checking whether feof_unlocked is declared... yes
checking whether fgets_unlocked is declared... no
checking for iconv... yes
checking for iconv declaration...
extern size_t iconv (iconv_t cd, char * *inbuf, size_t
*inbytesleft, char * *outbuf, size_t *outbytesleft);
checking for NL_LOCALE_NAME macro... no
checking for bison... bison
checking version of bison... 2.3, ok
checking for long long int... yes
checking for long double... yes
checking for wchar_t... yes
checking for wint_t... yes
checking for intmax_t... yes
checking whether printf() supports POSIX/XSI format strings... yes
checking whether we are using the GNU C Library 2.1 or newer... no
checking for stdint.h... (cached) yes
checking for SIZE_MAX... yes
checking for stdint.h... (cached) yes
checking for CFPreferencesCopyAppValue... yes
checking for CFLocaleCopyCurrent... yes
checking for ptrdiff_t... yes
checking stddef.h usability... yes
checking stddef.h presence... yes
checking for stddef.h... yes
checking for stdlib.h... (cached) yes
checking for string.h... (cached) yes
checking for asprintf... yes
checking for fwprintf... yes
checking for putenv... (cached) yes
checking for setenv... yes
checking for setlocale... yes
checking for snprintf... (cached) yes
checking for wcslen... yes
checking whether _snprintf is declared... no
checking whether _snwprintf is declared... no
checking whether getc_unlocked is declared... yes
checking for nl_langinfo and CODESET... (cached) yes
checking for LC_MESSAGES... yes
checking for CFPreferencesCopyAppValue... (cached) yes
checking for CFLocaleCopyCurrent... (cached) yes
checking whether included gettext is requested... no
checking for GNU gettext in libc... no
checking for GNU gettext in libintl... no
checking whether to use NLS... yes
checking where the gettext function comes from... included intl directory
configure: creating ./config.status
config.status: creating Makeconf
config.status: creating Makefile
config.status: creating doc/Makefile
config.status: creating doc/html/Makefile
config.status: creating doc/html/search/Makefile
config.status: creating doc/manual/Makefile
config.status: creating etc/Makefile
config.status: creating etc/Makeconf
config.status: creating etc/Renviron
config.status: creating etc/ldpaths
config.status: creating m4/Makefile
config.status: creating po/Makefile.in
config.status: creating share/Makefile
config.status: creating src/Makefile
config.status: creating src/appl/Makefile
config.status: creating src/extra/Makefile
config.status: creating src/extra/blas/Makefile
config.status: creating src/extra/bzip2/Makefile
config.status: creating src/extra/intl/Makefile
config.status: creating src/extra/pcre/Makefile
config.status: creating src/extra/xdr/Makefile
config.status: creating src/extra/zlib/Makefile
config.status: creating src/include/Makefile
config.status: creating src/include/Rmath.h0
config.status: creating src/include/R_ext/Makefile
config.status: creating src/library/Recommended/Makefile
config.status: creating src/library/Makefile
config.status: creating src/library/base/DESCRIPTION
config.status: creating src/library/base/Makefile
config.status: creating src/library/datasets/DESCRIPTION
config.status: creating src/library/datasets/Makefile
config.status: creating src/library/graphics/DESCRIPTION
config.status: creating src/library/graphics/Makefile
config.status: creating src/library/grDevices/DESCRIPTION
config.status: creating src/library/grDevices/Makefile
config.status: creating src/library/grDevices/src/Makefile
config.status: creating src/library/grid/DESCRIPTION
config.status: creating src/library/grid/Makefile
config.status: creating src/library/grid/src/Makefile
config.status: creating src/library/methods/DESCRIPTION
config.status: creating src/library/methods/Makefile
config.status: creating src/library/methods/src/Makefile
config.status: creating src/library/profile/Makefile
config.status: creating src/library/stats/DESCRIPTION
config.status: creating src/library/stats/Makefile
config.status: creating src/library/stats/src/Makefile
config.status: creating src/library/stats4/DESCRIPTION
config.status: creating src/library/stats4/Makefile
config.status: creating src/library/splines/DESCRIPTION
config.status: creating src/library/splines/Makefile
config.status: creating src/library/splines/src/Makefile
config.status: creating src/library/tcltk/DESCRIPTION
config.status: creating src/library/tcltk/Makefile
config.status: creating src/library/tcltk/src/Makefile
config.status: creating src/library/tools/DESCRIPTION
config.status: creating src/library/tools/Makefile
config.status: creating src/library/tools/src/Makefile
config.status: creating src/library/utils/DESCRIPTION
config.status: creating src/library/utils/Makefile
config.status: creating src/main/Makefile
config.status: creating src/modules/Makefile
config.status: creating src/modules/X11/Makefile
config.status: creating src/modules/internet/Makefile
config.status: creating src/modules/lapack/Makefile
config.status: creating src/modules/vfonts/Makefile
config.status: creating src/nmath/Makefile
config.status: creating src/nmath/standalone/Makefile
config.status: creating src/scripts/Makefile
config.status: creating src/scripts/COMPILE
config.status: creating src/scripts/INSTALL
config.status: creating src/scripts/REMOVE
config.status: creating src/scripts/R.sh
config.status: creating src/scripts/Rdconv
config.status: creating src/scripts/Rprof
config.status: creating src/scripts/SHLIB
config.status: creating src/scripts/Sd2Rd
config.status: creating src/scripts/build
config.status: creating src/scripts/check
config.status: creating src/unix/Makefile
config.status: creating tests/Makefile
config.status: creating tests/Embedding/Makefile
config.status: creating tests/Examples/Makefile
config.status: creating tests/Native/Makefile
config.status: creating tools/Makefile
config.status: creating src/include/config.h
config.status: executing libtool commands
config.status: executing po-directories commands
config.status: creating po/POTFILES
config.status: creating po/Makefile
config.status: executing stamp-h commands
R is now configured for i386-apple-darwin9.6.0
Source directory: .
Installation directory: /Library/Frameworks
C compiler: gcc -std=gnu99 -g -O2
Fortran 77 compiler: gfortran -g -O2
C++ compiler: g++ -g -O2
Fortran 90/95 compiler: gfortran -g -O2
Obj-C compiler:
Interfaces supported: X11, aqua, tcltk
External libraries: readline, BLAS(vecLib)
Additional capabilities: JPEG, iconv, MBCS, NLS
Options enabled: framework, R profiling, Java
Recommended packages: yes
configure: WARNING: you cannot build DVI versions of the R manuals
configure: WARNING: you cannot build PDF versions of the R manuals
mal:R-2.8.1 mike$ make -j4
make[1]: Nothing to be done for `R'.
make[1]: Nothing to be done for `R'.
make[2]: Nothing to be done for `R'.
creating src/scripts/R.fe
mkdir ../../bin
mkdir ../../include
mkdir ../../../include/R_ext
making bzlib.d from bzlib.c
making bzcompress.d from bzcompress.c
making blocksort.d from blocksort.c
making crctable.d from crctable.c
making decompress.d from decompress.c
making huffman.d from huffman.c
making randtable.d from randtable.c
gcc -std=gnu99 -I. -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c blocksort.c -o
blocksort.o
gcc -std=gnu99 -I. -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c bzlib.c -o bzlib.o
gcc -std=gnu99 -I. -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c bzcompress.c -o
bzcompress.o
gcc -std=gnu99 -I. -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c crctable.c -o
crctable.o
gcc -std=gnu99 -I. -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c decompress.c -o
decompress.o
gcc -std=gnu99 -I. -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c huffman.c -o
huffman.o
gcc -std=gnu99 -I. -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c randtable.c -o
randtable.o
rm -f libbz2.a
ar cr libbz2.a blocksort.o bzlib.o bzcompress.o crctable.o decompress.o
huffman.o randtable.o
ranlib libbz2.a
making pcre_chartables.d from pcre_chartables.c
making pcre_compile.d from pcre_compile.c
making pcre_config.d from pcre_config.c
making pcre_exec.d from pcre_exec.c
making pcre_fullinfo.d from pcre_fullinfo.c
making pcre_get.d from pcre_get.c
making pcre_globals.d from pcre_globals.c
making pcre_info.d from pcre_info.c
making pcre_maketables.d from pcre_maketables.c
making pcre_newline.d from pcre_newline.c
making pcre_ord2utf8.d from pcre_ord2utf8.c
making pcre_refcount.d from pcre_refcount.c
making pcre_study.d from pcre_study.c
making pcre_tables.d from pcre_tables.c
making pcre_try_flipped.d from pcre_try_flipped.c
making pcre_ucd.d from pcre_ucd.c
making pcre_valid_utf8.d from pcre_valid_utf8.c
making pcre_version.d from pcre_version.c
making pcre_xclass.d from pcre_xclass.c
gcc -std=gnu99 -I. -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c pcre_chartables.c
-o pcre_chartables.o
gcc -std=gnu99 -I. -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c pcre_compile.c -o
pcre_compile.o
gcc -std=gnu99 -I. -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c pcre_config.c -o
pcre_config.o
gcc -std=gnu99 -I. -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c pcre_exec.c -o
pcre_exec.o
gcc -std=gnu99 -I. -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c pcre_fullinfo.c -o
pcre_fullinfo.o
gcc -std=gnu99 -I. -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c pcre_get.c -o
pcre_get.o
gcc -std=gnu99 -I. -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c pcre_globals.c -o
pcre_globals.o
gcc -std=gnu99 -I. -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c pcre_info.c -o
pcre_info.o
gcc -std=gnu99 -I. -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c pcre_maketables.c
-o pcre_maketables.o
gcc -std=gnu99 -I. -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c pcre_newline.c -o
pcre_newline.o
gcc -std=gnu99 -I. -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c pcre_ord2utf8.c -o
pcre_ord2utf8.o
gcc -std=gnu99 -I. -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c pcre_refcount.c -o
pcre_refcount.o
gcc -std=gnu99 -I. -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c pcre_study.c -o
pcre_study.o
gcc -std=gnu99 -I. -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c pcre_tables.c -o
pcre_tables.o
gcc -std=gnu99 -I. -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c pcre_try_flipped.c
-o pcre_try_flipped.o
gcc -std=gnu99 -I. -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c pcre_ucd.c -o
pcre_ucd.o
gcc -std=gnu99 -I. -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c pcre_valid_utf8.c
-o pcre_valid_utf8.o
gcc -std=gnu99 -I. -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c pcre_version.c -o
pcre_version.o
gcc -std=gnu99 -I. -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c pcre_xclass.c -o
pcre_xclass.o
rm -f libpcre.a
ar cr libpcre.a pcre_chartables.o pcre_compile.o pcre_config.o pcre_exec.o
pcre_fullinfo.o pcre_get.o pcre_globals.o pcre_info.o pcre_maketables.o
pcre_newline.o pcre_ord2utf8.o pcre_refcount.o pcre_study.o pcre_tables.o
pcre_try_flipped.o pcre_ucd.o pcre_valid_utf8.o pcre_version.o pcre_xclass.o
ranlib libpcre.a
making adler32.d from adler32.c
making compress.d from compress.c
making crc32.d from crc32.c
making deflate.d from deflate.c
making gzio.d from gzio.c
making infback.d from infback.c
making inffast.d from inffast.c
making inflate.d from inflate.c
making inftrees.d from inftrees.c
making trees.d from trees.c
making uncompr.d from uncompr.c
making zutil.d from zutil.c
gcc -std=gnu99 -I. -DUSE_MMAP -I. -I../../../src/include
-I../../../src/include -I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2
-c adler32.c -o adler32.o
gcc -std=gnu99 -I. -DUSE_MMAP -I. -I../../../src/include
-I../../../src/include -I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2
-c compress.c -o compress.o
gcc -std=gnu99 -I. -DUSE_MMAP -I. -I../../../src/include
-I../../../src/include -I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2
-c crc32.c -o crc32.o
gcc -std=gnu99 -I. -DUSE_MMAP -I. -I../../../src/include
-I../../../src/include -I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2
-c deflate.c -o deflate.o
gcc -std=gnu99 -I. -DUSE_MMAP -I. -I../../../src/include
-I../../../src/include -I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2
-c gzio.c -o gzio.o
gcc -std=gnu99 -I. -DUSE_MMAP -I. -I../../../src/include
-I../../../src/include -I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2
-c infback.c -o infback.o
gcc -std=gnu99 -I. -DUSE_MMAP -I. -I../../../src/include
-I../../../src/include -I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2
-c inffast.c -o inffast.o
gcc -std=gnu99 -I. -DUSE_MMAP -I. -I../../../src/include
-I../../../src/include -I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2
-c inflate.c -o inflate.o
gcc -std=gnu99 -I. -DUSE_MMAP -I. -I../../../src/include
-I../../../src/include -I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2
-c inftrees.c -o inftrees.o
gcc -std=gnu99 -I. -DUSE_MMAP -I. -I../../../src/include
-I../../../src/include -I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2
-c trees.c -o trees.o
gcc -std=gnu99 -I. -DUSE_MMAP -I. -I../../../src/include
-I../../../src/include -I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2
-c uncompr.c -o uncompr.o
gcc -std=gnu99 -I. -DUSE_MMAP -I. -I../../../src/include
-I../../../src/include -I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2
-c zutil.c -o zutil.o
rm -f libz.a
ar cr libz.a adler32.o compress.o crc32.o deflate.o gzio.o infback.o
inffast.o inflate.o inftrees.o trees.o uncompr.o zutil.o
ranlib libz.a
making bindtextdom.d from bindtextdom.c
making dcgettext.d from dcgettext.c
making dgettext.d from dgettext.c
making gettext.d from gettext.c
making finddomain.d from finddomain.c
making loadmsgcat.d from loadmsgcat.c
making textdomain.d from textdomain.c
making l10nflist.d from l10nflist.c
making explodename.d from explodename.c
making dcigettext.d from dcigettext.c
making dcngettext.d from dcngettext.c
making dngettext.d from dngettext.c
making ngettext.d from ngettext.c
making plural-exp.d from plural-exp.c
making plural.d from plural.c
making langprefs.d from langprefs.c
making localcharset.d from localcharset.c
making localename.d from localename.c
making osdep.d from osdep.c
making printf.d from printf.c
making intl-compat.d from intl-compat.c
making hash-string.d from hash-string.c
making lock.d from lock.c
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include -I.
-I/usr/local/include -DLOCALEDIR=\"\"
-DLOCALEALIAS_PATH=\"\" -DIN_LIBINTL
-DHAVE_CONFIG_H
-I/System/Library/Frameworks/CoreFoundation.framework/Headers -fPIC -g -O2
-c bindtextdom.c -o bindtextdom.o
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include -I.
-I/usr/local/include -DLOCALEDIR=\"\"
-DLOCALEALIAS_PATH=\"\" -DIN_LIBINTL
-DHAVE_CONFIG_H
-I/System/Library/Frameworks/CoreFoundation.framework/Headers -fPIC -g -O2
-c dcgettext.c -o dcgettext.o
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include -I.
-I/usr/local/include -DLOCALEDIR=\"\"
-DLOCALEALIAS_PATH=\"\" -DIN_LIBINTL
-DHAVE_CONFIG_H
-I/System/Library/Frameworks/CoreFoundation.framework/Headers -fPIC -g -O2
-c dgettext.c -o dgettext.o
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include -I.
-I/usr/local/include -DLOCALEDIR=\"\"
-DLOCALEALIAS_PATH=\"\" -DIN_LIBINTL
-DHAVE_CONFIG_H
-I/System/Library/Frameworks/CoreFoundation.framework/Headers -fPIC -g -O2
-c gettext.c -o gettext.o
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include -I.
-I/usr/local/include -DLOCALEDIR=\"\"
-DLOCALEALIAS_PATH=\"\" -DIN_LIBINTL
-DHAVE_CONFIG_H
-I/System/Library/Frameworks/CoreFoundation.framework/Headers -fPIC -g -O2
-c finddomain.c -o finddomain.o
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include -I.
-I/usr/local/include -DLOCALEDIR=\"\"
-DLOCALEALIAS_PATH=\"\" -DIN_LIBINTL
-DHAVE_CONFIG_H
-I/System/Library/Frameworks/CoreFoundation.framework/Headers -fPIC -g -O2
-c loadmsgcat.c -o loadmsgcat.o
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include -I.
-I/usr/local/include -DLOCALEDIR=\"\"
-DLOCALEALIAS_PATH=\"\" -DIN_LIBINTL
-DHAVE_CONFIG_H
-I/System/Library/Frameworks/CoreFoundation.framework/Headers -fPIC -g -O2
-c textdomain.c -o textdomain.o
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include -I.
-I/usr/local/include -DLOCALEDIR=\"\"
-DLOCALEALIAS_PATH=\"\" -DIN_LIBINTL
-DHAVE_CONFIG_H
-I/System/Library/Frameworks/CoreFoundation.framework/Headers -fPIC -g -O2
-c l10nflist.c -o l10nflist.o
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include -I.
-I/usr/local/include -DLOCALEDIR=\"\"
-DLOCALEALIAS_PATH=\"\" -DIN_LIBINTL
-DHAVE_CONFIG_H
-I/System/Library/Frameworks/CoreFoundation.framework/Headers -fPIC -g -O2
-c explodename.c -o explodename.o
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include -I.
-I/usr/local/include -DLOCALEDIR=\"\"
-DLOCALEALIAS_PATH=\"\" -DIN_LIBINTL
-DHAVE_CONFIG_H
-I/System/Library/Frameworks/CoreFoundation.framework/Headers -fPIC -g -O2
-c dcigettext.c -o dcigettext.o
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include -I.
-I/usr/local/include -DLOCALEDIR=\"\"
-DLOCALEALIAS_PATH=\"\" -DIN_LIBINTL
-DHAVE_CONFIG_H
-I/System/Library/Frameworks/CoreFoundation.framework/Headers -fPIC -g -O2
-c dcngettext.c -o dcngettext.o
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include -I.
-I/usr/local/include -DLOCALEDIR=\"\"
-DLOCALEALIAS_PATH=\"\" -DIN_LIBINTL
-DHAVE_CONFIG_H
-I/System/Library/Frameworks/CoreFoundation.framework/Headers -fPIC -g -O2
-c dngettext.c -o dngettext.o
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include -I.
-I/usr/local/include -DLOCALEDIR=\"\"
-DLOCALEALIAS_PATH=\"\" -DIN_LIBINTL
-DHAVE_CONFIG_H
-I/System/Library/Frameworks/CoreFoundation.framework/Headers -fPIC -g -O2
-c ngettext.c -o ngettext.o
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include -I.
-I/usr/local/include -DLOCALEDIR=\"\"
-DLOCALEALIAS_PATH=\"\" -DIN_LIBINTL
-DHAVE_CONFIG_H
-I/System/Library/Frameworks/CoreFoundation.framework/Headers -fPIC -g -O2
-c plural.c -o plural.o
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include -I.
-I/usr/local/include -DLOCALEDIR=\"\"
-DLOCALEALIAS_PATH=\"\" -DIN_LIBINTL
-DHAVE_CONFIG_H
-I/System/Library/Frameworks/CoreFoundation.framework/Headers -fPIC -g -O2
-c plural-exp.c -o plural-exp.o
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include -I.
-I/usr/local/include -DLOCALEDIR=\"\"
-DLOCALEALIAS_PATH=\"\" -DIN_LIBINTL
-DHAVE_CONFIG_H
-I/System/Library/Frameworks/CoreFoundation.framework/Headers -fPIC -g -O2
-c langprefs.c -o langprefs.o
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include -I.
-I/usr/local/include -DLOCALEDIR=\"\"
-DLOCALEALIAS_PATH=\"\" -DIN_LIBINTL
-DHAVE_CONFIG_H
-I/System/Library/Frameworks/CoreFoundation.framework/Headers -fPIC -g -O2
-c localcharset.c -o localcharset.o
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include -I.
-I/usr/local/include -DLOCALEDIR=\"\"
-DLOCALEALIAS_PATH=\"\" -DIN_LIBINTL
-DHAVE_CONFIG_H
-I/System/Library/Frameworks/CoreFoundation.framework/Headers -fPIC -g -O2
-c localename.c -o localename.o
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include -I.
-I/usr/local/include -DLOCALEDIR=\"\"
-DLOCALEALIAS_PATH=\"\" -DIN_LIBINTL
-DHAVE_CONFIG_H
-I/System/Library/Frameworks/CoreFoundation.framework/Headers -fPIC -g -O2
-c printf.c -o printf.o
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include -I.
-I/usr/local/include -DLOCALEDIR=\"\"
-DLOCALEALIAS_PATH=\"\" -DIN_LIBINTL
-DHAVE_CONFIG_H
-I/System/Library/Frameworks/CoreFoundation.framework/Headers -fPIC -g -O2
-c osdep.c -o osdep.o
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include -I.
-I/usr/local/include -DLOCALEDIR=\"\"
-DLOCALEALIAS_PATH=\"\" -DIN_LIBINTL
-DHAVE_CONFIG_H
-I/System/Library/Frameworks/CoreFoundation.framework/Headers -fPIC -g -O2
-c intl-compat.c -o intl-compat.o
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include -I.
-I/usr/local/include -DLOCALEDIR=\"\"
-DLOCALEALIAS_PATH=\"\" -DIN_LIBINTL
-DHAVE_CONFIG_H
-I/System/Library/Frameworks/CoreFoundation.framework/Headers -fPIC -g -O2
-c hash-string.c -o hash-string.o
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include -I.
-I/usr/local/include -DLOCALEDIR=\"\"
-DLOCALEALIAS_PATH=\"\" -DIN_LIBINTL
-DHAVE_CONFIG_H
-I/System/Library/Frameworks/CoreFoundation.framework/Headers -fPIC -g -O2
-c lock.c -o lock.o
lock.c:363: warning: braces around scalar initializer
lock.c:363: warning: (near initialization for 'fresh_once.__sig')
lock.c:363: warning: braces around scalar initializer
lock.c:363: warning: (near initialization for 'fresh_once.__sig')
lock.c:363: warning: excess elements in scalar initializer
lock.c:363: warning: (near initialization for 'fresh_once.__sig')
rm -f libintl.a
ar cr libintl.a bindtextdom.o dcgettext.o dgettext.o gettext.o finddomain.o
loadmsgcat.o textdomain.o l10nflist.o explodename.o dcigettext.o
dcngettext.o dngettext.o ngettext.o plural.o plural-exp.o langprefs.o
localcharset.o localename.o printf.o osdep.o intl-compat.o hash-string.o
lock.o
ranlib: file: libintl.a(printf.o) has no symbols
ranlib: file: libintl.a(osdep.o) has no symbols
ranlib libintl.a
ranlib: file: libintl.a(printf.o) has no symbols
ranlib: file: libintl.a(osdep.o) has no symbols
making binning.d from binning.c
making bakslv.d from bakslv.c
making cpoly.d from cpoly.c
making cumsum.d from cumsum.c
making fmin.d from fmin.c
making integrate.d from integrate.c
making fft.d from fft.c
making interv.d from interv.c
making lbfgsb.d from lbfgsb.c
making maxcol.d from maxcol.c
making machar.d from machar.c
making pretty.d from pretty.c
making rcont.d from rcont.c
making rowsum.d from rowsum.c
making stem.d from stem.c
making strsignif.d from strsignif.c
making tabulate.d from tabulate.c
making uncmin.d from uncmin.c
making zeroin.d from zeroin.c
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c bakslv.c -o bakslv.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c binning.c -o
binning.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c cpoly.c -o cpoly.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c cumsum.c -o cumsum.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c fft.c -o fft.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c fmin.c -o fmin.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c integrate.c -o
integrate.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c interv.c -o interv.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c lbfgsb.c -o lbfgsb.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c machar.c -o machar.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c maxcol.c -o maxcol.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c pretty.c -o pretty.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c rcont.c -o rcont.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c rowsum.c -o rowsum.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c stem.c -o stem.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c strsignif.c -o
strsignif.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c tabulate.c -o
tabulate.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c uncmin.c -o uncmin.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c zeroin.c -o zeroin.o
gfortran -fPIC -g -O2 -c ch2inv.f -o ch2inv.o
gfortran -fPIC -g -O2 -c chol.f -o chol.o
gfortran -fPIC -g -O2 -c dchdc.f -o dchdc.o
gfortran -fPIC -g -O2 -c dpbfa.f -o dpbfa.o
gfortran -fPIC -g -O2 -c dpbsl.f -o dpbsl.o
gfortran -fPIC -g -O2 -c dpoco.f -o dpoco.o
gfortran -fPIC -g -O2 -c dpodi.f -o dpodi.o
gfortran -fPIC -g -O2 -c dpofa.f -o dpofa.o
gfortran -fPIC -g -O2 -c dposl.f -o dposl.o
gfortran -fPIC -g -O2 -c dqrdc.f -o dqrdc.o
gfortran -fPIC -g -O2 -c dqrdc2.f -o dqrdc2.o
gfortran -fPIC -g -O2 -c dqrls.f -o dqrls.o
gfortran -fPIC -g -O2 -c dqrsl.f -o dqrsl.o
gfortran -fPIC -g -O2 -c dqrutl.f -o dqrutl.o
gfortran -fPIC -g -O2 -c dsvdc.f -o dsvdc.o
gfortran -fPIC -g -O2 -c dtrco.f -o dtrco.o
gfortran -fPIC -g -O2 -c dtrsl.f -o dtrsl.o
gfortran -fPIC -g -O2 -c eigen.f -o eigen.o
rm -f libappl.a
ar cr libappl.a bakslv.o binning.o cpoly.o cumsum.o fft.o fmin.o integrate.o
interv.o lbfgsb.o machar.o maxcol.o pretty.o rcont.o rowsum.o stem.o
strsignif.o tabulate.o uncmin.o zeroin.o ch2inv.o chol.o dchdc.o dpbfa.o
dpbsl.o dpoco.o dpodi.o dpofa.o dposl.o dqrdc.o dqrdc2.o dqrls.o dqrsl.o
dqrutl.o dsvdc.o dtrco.o dtrsl.o eigen.o
ranlib libappl.a
making d1mach.d from d1mach.c
making mlutils.d from mlutils.c
making i1mach.d from i1mach.c
making fmax2.d from fmax2.c
making fmin2.d from fmin2.c
making fprec.d from fprec.c
making fround.d from fround.c
making ftrunc.d from ftrunc.c
making sign.d from sign.c
making fsign.d from fsign.c
making imax2.d from imax2.c
making imin2.d from imin2.c
making chebyshev.d from chebyshev.c
making log1p.d from log1p.c
making expm1.d from expm1.c
making lgammacor.d from lgammacor.c
making gammalims.d from gammalims.c
making stirlerr.d from stirlerr.c
making bd0.d from bd0.c
making gamma.d from gamma.c
making lgamma.d from lgamma.c
making gamma_cody.d from gamma_cody.c
making beta.d from beta.c
making lbeta.d from lbeta.c
making polygamma.d from polygamma.c
making bessel_i.d from bessel_i.c
making bessel_j.d from bessel_j.c
making bessel_k.d from bessel_k.c
making bessel_y.d from bessel_y.c
making choose.d from choose.c
making snorm.d from snorm.c
making sexp.d from sexp.c
making dgamma.d from dgamma.c
making qgamma.d from qgamma.c
making pgamma.d from pgamma.c
making rgamma.d from rgamma.c
making dbeta.d from dbeta.c
making pbeta.d from pbeta.c
making qbeta.d from qbeta.c
making rbeta.d from rbeta.c
making dunif.d from dunif.c
making punif.d from punif.c
making qunif.d from qunif.c
making runif.d from runif.c
making dnorm.d from dnorm.c
making pnorm.d from pnorm.c
making qnorm.d from qnorm.c
making rnorm.d from rnorm.c
making dlnorm.d from dlnorm.c
making plnorm.d from plnorm.c
making qlnorm.d from qlnorm.c
making rlnorm.d from rlnorm.c
making df.d from df.c
making qf.d from qf.c
making pf.d from pf.c
making rf.d from rf.c
making dnf.d from dnf.c
making dt.d from dt.c
making pt.d from pt.c
making qt.d from qt.c
making rt.d from rt.c
making dnt.d from dnt.c
making dchisq.d from dchisq.c
making pchisq.d from pchisq.c
making qchisq.d from qchisq.c
making rchisq.d from rchisq.c
making rnchisq.d from rnchisq.c
making dbinom.d from dbinom.c
making pbinom.d from pbinom.c
making qbinom.d from qbinom.c
making rbinom.d from rbinom.c
making rmultinom.d from rmultinom.c
making dcauchy.d from dcauchy.c
making pcauchy.d from pcauchy.c
making rcauchy.d from rcauchy.c
making dexp.d from dexp.c
making qcauchy.d from qcauchy.c
making pexp.d from pexp.c
making qexp.d from qexp.c
making dgeom.d from dgeom.c
making rexp.d from rexp.c
making pgeom.d from pgeom.c
making qgeom.d from qgeom.c
making dhyper.d from dhyper.c
making rgeom.d from rgeom.c
making phyper.d from phyper.c
making qhyper.d from qhyper.c
making rhyper.d from rhyper.c
making pnbinom.d from pnbinom.c
making dnbinom.d from dnbinom.c
making qnbinom.d from qnbinom.c
making rnbinom.d from rnbinom.c
making dpois.d from dpois.c
making ppois.d from ppois.c
making qpois.d from qpois.c
making rpois.d from rpois.c
making dweibull.d from dweibull.c
making qweibull.d from qweibull.c
making pweibull.d from pweibull.c
making rweibull.d from rweibull.c
making dlogis.d from dlogis.c
making plogis.d from plogis.c
making qlogis.d from qlogis.c
making rlogis.d from rlogis.c
making dnchisq.d from dnchisq.c
making pnchisq.d from pnchisq.c
making qnchisq.d from qnchisq.c
making dnbeta.d from dnbeta.c
making pnbeta.d from pnbeta.c
making qnbeta.d from qnbeta.c
making pnf.d from pnf.c
making pnt.d from pnt.c
making qnf.d from qnf.c
making qnt.d from qnt.c
making ptukey.d from ptukey.c
making qtukey.d from qtukey.c
making toms708.d from toms708.c
making wilcox.d from wilcox.c
making signrank.d from signrank.c
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c mlutils.c -o
mlutils.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c d1mach.c -o d1mach.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c i1mach.c -o i1mach.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c fmax2.c -o fmax2.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c fmin2.c -o fmin2.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c fprec.c -o fprec.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c fround.c -o fround.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c ftrunc.c -o ftrunc.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c sign.c -o sign.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c fsign.c -o fsign.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c imax2.c -o imax2.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c imin2.c -o imin2.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c chebyshev.c -o
chebyshev.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c log1p.c -o log1p.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c expm1.c -o expm1.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c lgammacor.c -o
lgammacor.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c gammalims.c -o
gammalims.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c stirlerr.c -o
stirlerr.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c bd0.c -o bd0.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c gamma.c -o gamma.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c lgamma.c -o lgamma.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c gamma_cody.c -o
gamma_cody.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c beta.c -o beta.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c lbeta.c -o lbeta.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c polygamma.c -o
polygamma.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c bessel_i.c -o
bessel_i.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c bessel_j.c -o
bessel_j.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c bessel_k.c -o
bessel_k.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c bessel_y.c -o
bessel_y.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c choose.c -o choose.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c snorm.c -o snorm.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c sexp.c -o sexp.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c dgamma.c -o dgamma.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c pgamma.c -o pgamma.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c qgamma.c -o qgamma.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c rgamma.c -o rgamma.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c dbeta.c -o dbeta.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c pbeta.c -o pbeta.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c qbeta.c -o qbeta.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c rbeta.c -o rbeta.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c dunif.c -o dunif.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c punif.c -o punif.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c qunif.c -o qunif.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c runif.c -o runif.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c dnorm.c -o dnorm.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c pnorm.c -o pnorm.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c qnorm.c -o qnorm.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c rnorm.c -o rnorm.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c dlnorm.c -o dlnorm.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c plnorm.c -o plnorm.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c qlnorm.c -o qlnorm.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c rlnorm.c -o rlnorm.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c df.c -o df.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c pf.c -o pf.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c qf.c -o qf.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c rf.c -o rf.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c dnf.c -o dnf.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c dt.c -o dt.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c pt.c -o pt.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c qt.c -o qt.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c rt.c -o rt.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c dnt.c -o dnt.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c dchisq.c -o dchisq.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c pchisq.c -o pchisq.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c qchisq.c -o qchisq.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c rchisq.c -o rchisq.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c rnchisq.c -o
rnchisq.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c dbinom.c -o dbinom.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c pbinom.c -o pbinom.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c qbinom.c -o qbinom.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c rbinom.c -o rbinom.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c rmultinom.c -o
rmultinom.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c dcauchy.c -o
dcauchy.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c pcauchy.c -o
pcauchy.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c qcauchy.c -o
qcauchy.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c rcauchy.c -o
rcauchy.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c dexp.c -o dexp.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c pexp.c -o pexp.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c qexp.c -o qexp.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c rexp.c -o rexp.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c dgeom.c -o dgeom.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c pgeom.c -o pgeom.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c qgeom.c -o qgeom.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c rgeom.c -o rgeom.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c dhyper.c -o dhyper.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c phyper.c -o phyper.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c qhyper.c -o qhyper.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c rhyper.c -o rhyper.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c dnbinom.c -o
dnbinom.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c pnbinom.c -o
pnbinom.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c qnbinom.c -o
qnbinom.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c rnbinom.c -o
rnbinom.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c dpois.c -o dpois.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c ppois.c -o ppois.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c qpois.c -o qpois.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c rpois.c -o rpois.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c dweibull.c -o
dweibull.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c pweibull.c -o
pweibull.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c qweibull.c -o
qweibull.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c rweibull.c -o
rweibull.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c dlogis.c -o dlogis.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c plogis.c -o plogis.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c qlogis.c -o qlogis.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c rlogis.c -o rlogis.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c dnchisq.c -o
dnchisq.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c pnchisq.c -o
pnchisq.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c qnchisq.c -o
qnchisq.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c dnbeta.c -o dnbeta.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c pnbeta.c -o pnbeta.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c qnbeta.c -o qnbeta.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c pnf.c -o pnf.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c pnt.c -o pnt.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c qnf.c -o qnf.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c qnt.c -o qnt.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c ptukey.c -o ptukey.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c qtukey.c -o qtukey.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c toms708.c -o
toms708.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c wilcox.c -o wilcox.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c signrank.c -o
signrank.o
rm -rf libnmath.a
ar cr libnmath.a mlutils.o d1mach.o i1mach.o fmax2.o fmin2.o fprec.o
fround.o ftrunc.o sign.o fsign.o imax2.o imin2.o chebyshev.o log1p.o expm1.o
lgammacor.o gammalims.o stirlerr.o bd0.o gamma.o lgamma.o gamma_cody.o
beta.o lbeta.o polygamma.o bessel_i.o bessel_j.o bessel_k.o bessel_y.o
choose.o snorm.o sexp.o dgamma.o pgamma.o qgamma.o rgamma.o dbeta.o pbeta.o
qbeta.o rbeta.o dunif.o punif.o qunif.o runif.o dnorm.o pnorm.o qnorm.o
rnorm.o dlnorm.o plnorm.o qlnorm.o rlnorm.o df.o pf.o qf.o rf.o dnf.o dt.o
pt.o qt.o rt.o dnt.o dchisq.o pchisq.o qchisq.o rchisq.o rnchisq.o dbinom.o
pbinom.o qbinom.o rbinom.o rmultinom.o dcauchy.o pcauchy.o qcauchy.o
rcauchy.o dexp.o pexp.o qexp.o rexp.o dgeom.o pgeom.o qgeom.o rgeom.o
dhyper.o phyper.o qhyper.o rhyper.o dnbinom.o pnbinom.o qnbinom.o rnbinom.o
dpois.o ppois.o qpois.o rpois.o dweibull.o pweibull.o qweibull.o rweibull.o
dlogis.o plogis.o qlogis.o rlogis.o dnchisq.o pnchisq.o qnchisq.o dnbeta.o
pnbeta.o qnbeta.o pnf.o pnt.o qnf.o qnt.o ptukey.o qtukey.o toms708.o
wilcox.o signrank.o
ranlib: file: libnmath.a(mlutils.o) has no symbols
ranlib: file: libnmath.a(expm1.o) has no symbols
ranlib libnmath.a
ranlib: file: libnmath.a(mlutils.o) has no symbols
ranlib: file: libnmath.a(expm1.o) has no symbols
config.status: creating src/unix/Makefile
making dynload.d from dynload.c
making edit.d from edit.c
making stubs.d from stubs.c
making system.d from system.c
making sys-unix.d from sys-unix.c
making sys-std.d from sys-std.c
making X11.d from X11.c
making aqua.d from aqua.c
making Rembedded.d from Rembedded.c
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/X11/include -I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c
dynload.c -o dynload.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/X11/include -I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c
edit.c -o edit.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/X11/include -I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c
stubs.c -o stubs.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/X11/include -I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c
system.c -o system.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/X11/include -I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c
sys-unix.c -o sys-unix.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/X11/include -I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c
sys-std.c -o sys-std.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/X11/include -I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c
X11.c -o X11.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/X11/include -I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c
aqua.c -o aqua.o
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/X11/include -I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c
Rembedded.c -o Rembedded.o
rm -rf libunix.a
ar cr libunix.a dynload.o edit.o stubs.o system.o sys-unix.o sys-std.o X11.o
aqua.o Rembedded.o
ranlib libunix.a
gcc -std=gnu99 -I. -I../../src/include -I../../src/include
-I/usr/X11/include -I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2
-DR_HOME='"/Users/mike/Downloads/R-2.8.1"' -o Rscript \
./Rscript.c
config.status: creating src/main/Makefile
making CConverters.d from CConverters.c
making CommandLineArgs.d from CommandLineArgs.c
making Rdynload.d from Rdynload.c
making Renviron.d from Renviron.c
making RNG.d from RNG.c
making apse.d from apse.c
making apply.d from apply.c
making arithmetic.d from arithmetic.c
making array.d from array.c
making attrib.d from attrib.c
making base.d from base.c
making bind.d from bind.c
making builtin.d from builtin.c
making character.d from character.c
making coerce.d from coerce.c
making colors.d from colors.c
making complex.d from complex.c
making connections.d from connections.c
making context.d from context.c
making cov.d from cov.c
making cum.d from cum.c
making dcf.d from dcf.c
making datetime.d from datetime.c
making debug.d from debug.c
making deparse.d from deparse.c
making devices.d from devices.c
making deriv.d from deriv.c
making dotcode.d from dotcode.c
making dounzip.d from dounzip.c
making dstruct.d from dstruct.c
making duplicate.d from duplicate.c
making engine.d from engine.c
making envir.d from envir.c
making errors.d from errors.c
making format.d from format.c
making eval.d from eval.c
making fourier.d from fourier.c
making gevents.d from gevents.c
making gram.d from gram.c
making gram-ex.d from gram-ex.c
making graphics.d from graphics.c
making identical.d from identical.c
making inlined.d from inlined.c
making internet.d from internet.c
making iosupport.d from iosupport.c
making lapack.d from lapack.c
making list.d from list.c
making localecharset.d from localecharset.c
making logic.d from logic.c
making main.d from main.c
making mapply.d from mapply.c
making match.d from match.c
making memory.d from memory.c
making names.d from names.c
making model.d from model.c
making objects.d from objects.c
making optim.d from optim.c
making optimize.d from optimize.c
making options.d from options.c
making par.d from par.c
making paste.d from paste.c
making pcre.d from pcre.c
making platform.d from platform.c
making plot.d from plot.c
making plot3d.d from plot3d.c
making plotmath.d from plotmath.c
making print.d from print.c
making printarray.d from printarray.c
making printvector.d from printvector.c
making printutils.d from printutils.c
making qsort.d from qsort.c
making random.d from random.c
making regex.d from regex.c
making registration.d from registration.c
making relop.d from relop.c
making rlocale.d from rlocale.c
making scan.d from scan.c
making saveload.d from saveload.c
making seq.d from seq.c
making serialize.d from serialize.c
making size.d from size.c
making sort.d from sort.c
making source.d from source.c
making split.d from split.c
making sprintf.d from sprintf.c
making startup.d from startup.c
making subassign.d from subassign.c
making subscript.d from subscript.c
making subset.d from subset.c
making summary.d from summary.c
making sysutils.d from sysutils.c
making unique.d from unique.c
making util.d from util.c
making version.d from version.c
making vfonts.d from vfonts.c
making Rmain.d from Rmain.c
making alloca.d from alloca.c
making acosh.d from acosh.c
making asinh.d from asinh.c
making atanh.d from atanh.c
making mkdtemp.d from mkdtemp.c
making snprintf.d from snprintf.c
making strdup.d from strdup.c
making strncasecmp.d from strncasecmp.c
making vsnprintf.d from vsnprintf.c
making xspline.d from xspline.c
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c CConverters.c -o
CConverters.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c CommandLineArgs.c -o
CommandLineArgs.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c Rdynload.c -o
Rdynload.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c Renviron.c -o
Renviron.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c RNG.c -o RNG.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c apply.c -o apply.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c apse.c -o apse.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c arithmetic.c -o
arithmetic.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c array.c -o array.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c attrib.c -o attrib.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c base.c -o base.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c bind.c -o bind.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c builtin.c -o
builtin.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c character.c -o
character.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c coerce.c -o coerce.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c colors.c -o colors.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c complex.c -o
complex.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c connections.c -o
connections.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c context.c -o
context.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c cov.c -o cov.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c cum.c -o cum.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c dcf.c -o dcf.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c datetime.c -o
datetime.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c debug.c -o debug.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c deparse.c -o
deparse.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c deriv.c -o deriv.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c devices.c -o
devices.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c dotcode.c -o
dotcode.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c dounzip.c -o
dounzip.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c dstruct.c -o
dstruct.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c duplicate.c -o
duplicate.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c engine.c -o engine.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c envir.c -o envir.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c errors.c -o errors.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c eval.c -o eval.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c format.c -o format.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c fourier.c -o
fourier.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c gevents.c -o
gevents.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c gram.c -o gram.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c gram-ex.c -o
gram-ex.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c graphics.c -o
graphics.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c identical.c -o
identical.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c inlined.c -o
inlined.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c internet.c -o
internet.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c iosupport.c -o
iosupport.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c lapack.c -o lapack.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c list.c -o list.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c localecharset.c -o
localecharset.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c logic.c -o logic.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c main.c -o main.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c mapply.c -o mapply.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c match.c -o match.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c memory.c -o memory.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c model.c -o model.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c names.c -o names.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c objects.c -o
objects.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c optim.c -o optim.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c optimize.c -o
optimize.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c options.c -o
options.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c par.c -o par.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c paste.c -o paste.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c pcre.c -o pcre.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c platform.c -o
platform.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c plot.c -o plot.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c plot3d.c -o plot3d.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c plotmath.c -o
plotmath.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c print.c -o print.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c printarray.c -o
printarray.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c printvector.c -o
printvector.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c printutils.c -o
printutils.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c qsort.c -o qsort.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c random.c -o random.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c regex.c -o regex.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c registration.c -o
registration.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c relop.c -o relop.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c rlocale.c -o
rlocale.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c saveload.c -o
saveload.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c scan.c -o scan.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c seq.c -o seq.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c serialize.c -o
serialize.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c size.c -o size.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c sort.c -o sort.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c source.c -o source.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c split.c -o split.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c sprintf.c -o
sprintf.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c startup.c -o
startup.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c subassign.c -o
subassign.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c subscript.c -o
subscript.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c subset.c -o subset.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c summary.c -o
summary.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c sysutils.c -o
sysutils.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c unique.c -o unique.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c util.c -o util.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c version.c -o
version.o
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c vfonts.c -o vfonts.o
gfortran -fPIC -g -O2 -c xxxpr.f -o xxxpr.o
gcc -std=gnu99 -dynamiclib -Wl,-headerpad_max_install_names -undefined
dynamic_lookup -single_module -multiply_defined suppress -L/usr/local/lib
-install_name libR.dylib -compatibility_version 2.8.0 -current_version
2.8.1 -headerpad_max_install_names -o libR.dylib CConverters.o
CommandLineArgs.o Rdynload.o Renviron.o RNG.o apply.o apse.o arithmetic.o
array.o attrib.o base.o bind.o builtin.o character.o coerce.o colors.o
complex.o connections.o context.o cov.o cum.o dcf.o datetime.o debug.o
deparse.o deriv.o devices.o dotcode.o dounzip.o dstruct.o duplicate.o
engine.o envir.o errors.o eval.o format.o fourier.o gevents.o gram.o
gram-ex.o graphics.o identical.o inlined.o internet.o iosupport.o lapack.o
list.o localecharset.o logic.o main.o mapply.o match.o memory.o model.o
names.o objects.o optim.o optimize.o options.o par.o paste.o pcre.o
platform.o plot.o plot3d.o plotmath.o print.o printarray.o printvector.o
printutils.o qsort.o random.o regex.o registration.o relop.o rlocale.o
saveload.o scan.o seq.o serialize.o size.o sort.o source.o split.o sprintf.o
startup.o subassign.o subscript.o subset.o summary.o sysutils.o unique.o
util.o version.o vfonts.o xxxpr.o ../unix/Rembedded.o ../unix/libunix.a
../appl/libappl.a ../nmath/libnmath.a ../extra/zlib/libz.a
../extra/bzip2/libbz2.a ../extra/pcre/libpcre.a ../extra/intl/libintl.a
-framework vecLib -lgfortran -Wl,-framework -Wl,CoreFoundation -lreadline
-lm -lgomp -liconv
mkdir /Users/mike/Downloads/R-2.8.1/bin/exec
mkdir /Users/mike/Downloads/R-2.8.1/lib
gcc -std=gnu99 -I../../src/extra/zlib -I../../src/extra/bzip2
-I../../src/extra/pcre -I. -I../../src/include -I../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c Rmain.c -o Rmain.o
gcc -std=gnu99 -L/usr/local/lib -o R.bin Rmain.o -L../../lib -lR
making rotated.d from rotated.c
making dataentry.d from dataentry.c
making devX11.d from devX11.c
making rbitmap.d from rbitmap.c
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include
-I/usr/X11/include -I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c
dataentry.c -o dataentry.o
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include
-I/usr/X11/include -I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c
devX11.c -o devX11.o
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include
-I/usr/X11/include -I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c
rotated.c -o rotated.o
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include
-I/usr/X11/include -I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c
rbitmap.c -o rbitmap.o
gcc -std=gnu99 -dynamiclib -Wl,-headerpad_max_install_names -undefined
dynamic_lookup -single_module -multiply_defined suppress -L/usr/local/lib -o
R_X11.so dataentry.o devX11.o rotated.o rbitmap.o -lSM -lICE -L/usr/X11/lib
-lX11 -lXt -lXmu -ljpeg -L../../../lib -lR -Wl,-framework
-Wl,CoreFoundation
mkdir /Users/mike/Downloads/R-2.8.1/modules
making internet.d from internet.c
making Rsock.d from Rsock.c
making nanoftp.d from nanoftp.c
making nanohttp.d from nanohttp.c
making sock.d from sock.c
making sockconn.d from sockconn.c
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c Rsock.c -o Rsock.o
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c internet.c -o
internet.o
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c nanoftp.c -o
nanoftp.o
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c nanohttp.c -o
nanohttp.o
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c sock.c -o sock.o
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c sockconn.c -o
sockconn.o
gcc -std=gnu99 -dynamiclib -Wl,-headerpad_max_install_names -undefined
dynamic_lookup -single_module -multiply_defined suppress -L/usr/local/lib -o
internet.so Rsock.o internet.o nanoftp.o nanohttp.o sock.o sockconn.o
-L../../../lib -lR -Wl,-framework -Wl,CoreFoundation
making Lapack.d from Lapack.c
making vecLibg95c.d from vecLibg95c.c
gfortran -fPIC -g -O2 -ffloat-store -c dlamch.f -o dlamch.o
gfortran -fPIC -g -O2 -c dlapack0.f -o dlapack0.o
gfortran -fPIC -g -O2 -c dlapack1.f -o dlapack1.o
gfortran -fPIC -g -O2 -c dlapack2.f -o dlapack2.o
gfortran -fPIC -g -O2 -c dlapack3.f -o dlapack3.o
gfortran -fPIC -g -O2 -c dlapack4.f -o dlapack4.o
gfortran -fPIC -g -O2 -c cmplx.f -o cmplx.o
gcc -std=gnu99 -dynamiclib -Wl,-headerpad_max_install_names -undefined
dynamic_lookup -single_module -multiply_defined suppress -L/usr/local/lib -o
libRlapack.dylib dlamch.o dlapack0.o dlapack1.o dlapack2.o dlapack3.o
dlapack4.o cmplx.o -install_name libRlapack.dylib -compatibility_version
2.8.0 -current_version 2.8.1 -headerpad_max_install_names -framework vecLib
-lgfortran -L../../../lib -lR
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c Lapack.c -o Lapack.o
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c vecLibg95c.c -o
vecLibg95c.o
gfortran -fPIC -g -O2 -c vecLibg95f.f -o vecLibg95f.o
gcc -std=gnu99 -dynamiclib -Wl,-headerpad_max_install_names -undefined
dynamic_lookup -single_module -multiply_defined suppress -L/usr/local/lib -o
lapack.so Lapack.o vecLibg95c.o vecLibg95f.o -L../../../lib -lR
-Wl,-framework -Wl,CoreFoundation -L../../../lib -lRlapack -framework
vecLib -lgfortran
/Users/mike/Downloads/R-2.8.1/lib/libRlapack.dylib is unchanged
making g_alab_her.d from g_alab_her.c
making g_cntrlify.d from g_cntrlify.c
making g_fontdb.d from g_fontdb.c
making g_her_glyph.d from g_her_glyph.c
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c g_alab_her.c -o
g_alab_her.o
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c g_cntrlify.c -o
g_cntrlify.o
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c g_fontdb.c -o
g_fontdb.o
gcc -std=gnu99 -I. -I../../../src/include -I../../../src/include
-I/usr/local/include -DHAVE_CONFIG_H -fPIC -g -O2 -c g_her_glyph.c -o
g_her_glyph.o
gcc -std=gnu99 -dynamiclib -Wl,-headerpad_max_install_names -undefined
dynamic_lookup -single_module -multiply_defined suppress -L/usr/local/lib -o
vfonts.so g_alab_her.o g_cntrlify.o g_fontdb.o g_her_glyph.o -L../../../lib
-lR -Wl,-framework -Wl,CoreFoundation
mkdir ../../library
building system startup profile
mkdir ../../../library/base
mkdir ../../../library/base/R
building package 'base'
mkdir ../../../library/base/demo
mkdir ../../../library/base/po
mkdir ../../../library/base/man
building package 'tools'
mkdir ../../../library/tools
mkdir ../../../library/tools/R
mkdir ../../../library/tools/po
mkdir ../../../library/tools/man
making Rmd5.d from Rmd5.c
making text.d from text.c
making init.d from init.c
making md5.d from md5.c
make[5]: `Makedeps' is up to date.
gcc -std=gnu99 -I../../../../include -I/usr/local/include -fPIC -g -O2
-c text.c -o text.o
gcc -std=gnu99 -I../../../../include -I/usr/local/include -fPIC -g -O2
-c init.c -o init.o
gcc -std=gnu99 -I../../../../include -I/usr/local/include -fPIC -g -O2
-c Rmd5.c -o Rmd5.o
gcc -std=gnu99 -I../../../../include -I/usr/local/include -fPIC -g -O2
-c md5.c -o md5.o
gcc -std=gnu99 -dynamiclib -Wl,-headerpad_max_install_names -undefined
dynamic_lookup -single_module -multiply_defined suppress -L/usr/local/lib -o
tools.so text.o init.o Rmd5.o md5.o -L../../../../lib -lR -Wl,-framework
-Wl,CoreFoundation
mkdir ../../../../library/tools/libs
building package 'utils'
mkdir ../../../library/utils
mkdir ../../../library/utils/R
mkdir ../../../library/utils/po
mkdir ../../../library/utils/Sweave
mkdir ../../../library/utils/misc
mkdir ../../../library/utils/man
building package 'grDevices'
mkdir ../../../library/grDevices
mkdir ../../../library/grDevices/R
mkdir ../../../library/grDevices/po
mkdir ../../../library/grDevices/afm
mkdir ../../../library/grDevices/enc
mkdir ../../../library/grDevices/man
making chull.d from chull.c
making devPicTeX.d from devPicTeX.c
making devNull.d from devNull.c
making devPS.d from devPS.c
making devQuartz.d from devQuartz.c
making init.d from init.c
making qdBitmap.d from qdBitmap.c
making qdPDF.d from qdPDF.c
making qdCocoa.d from qdCocoa.m
<built-in>:0: internal compiler error: Abort trap
Please submit a full bug report,
with preprocessed source if appropriate.
See <http://gcc.gnu.org/bugs.html> for instructions.
make[4]: *** [qdCocoa.d] Error 1
make[4]: *** Waiting for unfinished jobs....
make[3]: *** [all] Error 1
make[2]: *** [R] Error 1
make[1]: *** [R] Error 1
make: *** [R] Error 1
--
Mike Lawrence
Graduate Student
Department of Psychology
Dalhousie University
www.thatmike.com
Looking to arrange a meeting? Do so at:
http://www.timetomeet.info/with/mike/
~ Certainty is folly... I think. ~
[[alternative HTML version deleted]]
------------------------------
Message: 29
Date: Tue, 23 Dec 2008 04:57:43 +0800
From: "Xiaoxu LI" <lixiaoxu at gmail.com>
Subject: [R] sem package fails when no of factors increase from 3 to 4
To: r-help at r-project.org
Message-ID:
<fb71248a0812221257t3b19e152r6f17606b98d7864a at mail.gmail.com>
Content-Type: text/plain
#### I checked through every 3 factor * 3 loading case.
#### While, 4 factor * 3 loading failed.
#### the data is 6 factor * 3 loading
require(sem);
cor18<-read.moments();
1
.68 1
.60 .58 1
.01 .10 .07 1
.12 .04 .06 .29 1
.06 .06 .01 .35 .24 1
.09 .13 .10 .05 .03 .07 1
.04 .08 .16 .10 .12 .06 .25 1
.06 .09 .02 .02 .09 .16 .29 .36 1
.23 .26 .19 .05 .04 .04 .08 .09 .09 1
.11 .13 .12 .03 .05 .03 .02 .06 .06 .40 1
.16 .09 .09 .10 .10 .02 .04 .12 .15 .29 .20 1
.24 .26 .22 .14 .06 .10 .06 .07 .08 .03 .04 .02 1
.21 .22 .29 .07 .05 .17 .12 .06 .06 .03 .12 .04 .55 1
.29 .28 .26 .06 .07 .05 .06 .15 .20 .10 .03 .12 .64 .61 1
.15 .16 .19 .18 .08 .07 .08 .10 .06 .15 .16 .07 .25 .25 .16 1
.24 .20 .16 .13 .15 .18 .19 .18 .14 .11 .07 .16 .19 .21 .22 .35 1
.14 .25 .12 .09 .11 .09 .09 .11 .21 .17 .09 .05 .21 .23 .18 .39 .48 1
mod3.1_9<-specify.model();
X1 <-> X1,TD11,NA
X2 <-> X2,TD22,NA
X3 <-> X3,TD33,NA
X4 <-> X4,TD44,NA
X5 <-> X5,TD55,NA
X6 <-> X6,TD66,NA
X7 <-> X7,TD77,NA
X8 <-> X8,TD88,NA
X9 <-> X9,TD99,NA
X1 <- xi1,LY11, NA
X2 <- xi1,LY21, NA
X3 <- xi1,LY31, NA
X4 <- xi2,LY42, NA
X5 <- xi2,LY52, NA
X6 <- xi2,LY62, NA
X7 <- xi3,LY73, NA
X8 <- xi3,LY83, NA
X9 <- xi3,LY93, NA
xi1 <-> xi1,NA,1
xi2 <-> xi2,NA,1
xi3 <-> xi3,NA,1
xi1 <-> xi2 ,PH12,NA
xi1 <-> xi3 ,PH13,NA
xi2 <-> xi3 ,PH23,NA
mod3.1_6AND10_12<-specify.model();
X1 <-> X1,TD11,NA
X2 <-> X2,TD22,NA
X3 <-> X3,TD33,NA
X4 <-> X4,TD44,NA
X5 <-> X5,TD55,NA
X6 <-> X6,TD66,NA
X10 <-> X10,TD77,NA
X11 <-> X11,TD88,NA
X12 <-> X12,TD99,NA
X1 <- xi1,LY11, NA
X2 <- xi1,LY21, NA
X3 <- xi1,LY31, NA
X4 <- xi2,LY42, NA
X5 <- xi2,LY52, NA
X6 <- xi2,LY62, NA
X10 <- xi3,LY73, NA
X11 <- xi3,LY83, NA
X12 <- xi3,LY93, NA
xi1 <-> xi1,NA,1
xi2 <-> xi2,NA,1
xi3 <-> xi3,NA,1
xi1 <-> xi2 ,PH12,NA
xi1 <-> xi3 ,PH13,NA
xi2 <-> xi3 ,PH23,NA
mod3.1_3AND7_9AND10_12<-specify.model();
X1 <-> X1,TD11,NA
X2 <-> X2,TD22,NA
X3 <-> X3,TD33,NA
X10 <-> X10,TD44,NA
X11 <-> X11,TD55,NA
X12 <-> X12,TD66,NA
X7 <-> X7,TD77,NA
X8 <-> X8,TD88,NA
X9 <-> X9,TD99,NA
X1 <- xi1,LY11, NA
X2 <- xi1,LY21, NA
X3 <- xi1,LY31, NA
X10 <- xi2,LY42, NA
X11 <- xi2,LY52, NA
X12 <- xi2,LY62, NA
X7 <- xi3,LY73, NA
X8 <- xi3,LY83, NA
X9 <- xi3,LY93, NA
xi1 <-> xi1,NA,1
xi2 <-> xi2,NA,1
xi3 <-> xi3,NA,1
xi1 <-> xi2 ,PH12,NA
xi1 <-> xi3 ,PH13,NA
xi2 <-> xi3 ,PH23,NA
mod3.4_6AND7_9AND10_12<-specify.model();
X10 <-> X10,TD11,NA
X11 <-> X11,TD22,NA
X12 <-> X12,TD33,NA
X4 <-> X4,TD44,NA
X5 <-> X5,TD55,NA
X6 <-> X6,TD66,NA
X7 <-> X7,TD77,NA
X8 <-> X8,TD88,NA
X9 <-> X9,TD99,NA
X10 <- xi1,LY11, NA
X11 <- xi1,LY21, NA
X12 <- xi1,LY31, NA
X4 <- xi2,LY42, NA
X5 <- xi2,LY52, NA
X6 <- xi2,LY62, NA
X7 <- xi3,LY73, NA
X8 <- xi3,LY83, NA
X9 <- xi3,LY93, NA
xi1 <-> xi1,NA,1
xi2 <-> xi2,NA,1
xi3 <-> xi3,NA,1
xi1 <-> xi2 ,PH12,NA
xi1 <-> xi3 ,PH13,NA
xi2 <-> xi3 ,PH23,NA
mod4<-specify.model();
X1 <-> X1,TD11,NA
X2 <-> X2,TD22,NA
X3 <-> X3,TD33,NA
X4 <-> X4,TD44,NA
X5 <-> X5,TD55,NA
X6 <-> X6,TD66,NA
X7 <-> X7,TD77,NA
X8 <-> X8,TD88,NA
X9 <-> X9,TD99,NA
X10 <-> X10,TDaa,NA
X11 <-> X11,TDbb,NA
X12 <-> X12,TDcc,NA
X1 <- xi1,LY11, NA
X2 <- xi1,LY21, NA
X3 <- xi1,LY31, NA
X4 <- xi2,LY42, NA
X5 <- xi2,LY52, NA
X6 <- xi2,LY62, NA
X7 <- xi3,LY73, NA
X8 <- xi3,LY83, NA
X9 <- xi3,LY93, NA
X10 <- xi4,LXa4,NA
X11 <- xi4,LXb4,NA
X12 <- xi4,LXc4,NA
xi1 <-> xi1,NA,1
xi2 <-> xi2,NA,1
xi3 <-> xi3,NA,1
xi4 <-> xi4,NA,1
xi1 <-> xi2 ,PH12,NA
xi1 <-> xi3 ,PH13,NA
xi2 <-> xi3 ,PH23,NA
xi4 <-> xi1,PH41,NA
xi4 <-> xi2,PH42,NA
xi4 <-> xi3,PH43,NA
summary(sem(mod3.1_9,cor18,500))$RMSEA;
summary(sem(mod3.1_6AND10_12,cor18,500))$RMSEA;
summary(sem(mod3.1_3AND7_9AND10_12,cor18,500))$RMSEA;
summary(sem(mod3.4_6AND7_9AND10_12,cor18,500))$RMSEA;
summary(sem(mod4,cor18,500))$RMSEA;##fail
[[alternative HTML version deleted]]
------------------------------
Message: 30
Date: Mon, 22 Dec 2008 13:20:36 -0800
From: Norm Matloff <matloff at cs.ucdavis.edu>
Subject: Re: [R] queue simulation
To: r-help at r-project.org
Message-ID: <20081222212036.GD9407 at laura.cs.ucdavis.edu>
Content-Type: text/plain; charset=us-ascii
> Date: 22-Dec-2008 10:11:28 GMT
> From: "Gerard M. Keogh" <GMKeogh at justice.ie>
> Subject: [R] queue simulation
> To: r-help at r-project.org
>
> Hi all,
>
>
> I have a multiple queueing situation I'd like to simulate to get some idea
> of the distributions - waiting times and allocations etc.
> Does R have a package available for this - many years ago there used to be
> a language called "simscript" for discrete event simulation and I was
> wondering if R has an equivalent (or hopefully with graphics, something
> better!).
To my knowledge, this doesn't exist, but one never knows. I look
forward to hearing the other responses.
Discrete-event simulation (DES) is generally done under one of two main
world views--event-oriented, process-oriented. The more popular is
probably the process-oriented view, but it requires something like
threading, which would be problematic in R.
It would be easy to take the event-oriented view, as it would just
require coding up some kind of priority queue routine. In fact, a
couple of weeks ago I made a note to myself to do this as an example of
how one could do linked data structures in R. Again, this view is
considered a poor way to program DES, but if you are interested, feel
free to contact me.
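As a rough illustration of what is already easy in base R (a sketch only, with arbitrary rates, not a packaged DES tool): a single-server FIFO queue can be simulated directly with the Lindley recursion, no event list needed, and the waiting-time distribution then inspected with R's usual tools:
lambda <- 1.0; mu <- 1.2            # arbitrary arrival and service rates (assumption)
n <- 1000                           # customers to simulate
set.seed(1)
arr   <- cumsum(rexp(n, lambda))    # arrival times
serv  <- rexp(n, mu)                # service times
start <- numeric(n)
start[1] <- arr[1]
for (i in 2:n)                      # each customer starts service when both they
  start[i] <- max(arr[i], start[i-1] + serv[i-1])   # and the server are free (FIFO)
wait <- start - arr                 # time spent waiting in the queue
mean(wait); hist(wait)              # summaries and graphics come for free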
I very much like (and am somewhat involved in the development of) SimPy,
a Python-based DES package. You could use SimPy for your simulation and
use RPy to access R from SimPy, to take advantage of R's graphics and
statistics facilities.
SimPy is at http://simpy.sourceforge.net/ Also, I have a tutorial on it
at http://heather.cs.ucdavis.edu/~matloff/simcourse.html
Norm Matloff
University of California, Davis
------------------------------
Message: 31
Date: Mon, 22 Dec 2008 16:32:01 -0500
From: David Winsemius <dwinsemius at comcast.net>
Subject: [R] offlist Re: How can I avoid nested 'for' loops or quicken
the process?
To: Brigid Mooney <bkmooney at gmail.com>
Cc: "r-help at R-project.org List" <r-help at r-project.org>
Message-ID: <3F7EAF23-2344-4C6B-9DDE-86B0B442931B at comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
I do agree with Dr Berry that your question failed on several grounds
in adherence to the Posting Guide, so this is off list.
Maybe this will give you guidance that you can apply to your next
question to the list:
> alist <- list("a","b","c")
> blist <- list("ab","ac","ad")
> expand.grid(alist, blist)
Var1 Var2
1 a ab
2 b ab
3 c ab
4 a ac
5 b ac
6 c ac
7 a ad
8 b ad
9 c ad
> apply( expand.grid(alist, blist), 1, function(x) paste(x[1], x[2],
sep=""))
[1] "aab" "bab" "cab" "aac"
"bac" "cac" "aad" "bad" "cad"
> clist <- list("AA","BB")
> apply(expand.grid(alist, blist, clist),1,function(x) paste(x[1],
x[2], x[3], sep=""))
[1] "aabAA" "babAA" "cabAA" "aacAA"
"bacAA" "cacAA" "aadAA" "badAA"
"cadAA" "aabBB"
[11] "babBB" "cabBB" "aacBB" "bacBB"
"cacBB" "aadBB" "badBB" "cadBB"
> dlist <- list(TRUE,FALSE)
> apply(expand.grid(alist, blist, clist, dlist),1,function(x)
paste(x[1], x[2], x[3], (x[4]), sep=""))[8:12]
[1] "badAATRUE" "cadAATRUE" "aabBBTRUE"
"babBBTRUE" "cabBBTRUE"
This could get unwieldy if the lengths of the lists are appreciable,
since the number of rows will be the product of all the lengths. On
the other hand, you could create a data frame indexed by the variables
in expand.grid's output:
> master.df <- data.frame(expand.grid(alist, blist, clist, dlist),
      results = apply(expand.grid(alist, blist, clist, dlist), 1,
                      function(x) paste(x[1], x[2], x[3], (x[4]), sep="")))
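To actually run the original poster's function over every combination (a sketch only: MyFunction, X, Y, Alpha, Beta, Gamma, Delta and Epsilon are the poster's objects, not shown here), the same expand.grid() idea can drive the calls, one per row:
grid <- expand.grid(a = Alpha, b = Beta, c = Gamma, d = Delta, e = Epsilon)
Output <- lapply(seq_len(nrow(grid)), function(i)
    MyFunction(X, Y, grid$a[[i]], grid$b[[i]], grid$c[[i]],
               grid$d[[i]], grid$e[[i]]))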
--
David Winsemius
On Dec 22, 2008, at 3:33 PM, Charles C. Berry wrote:
> On Mon, 22 Dec 2008, Brigid Mooney wrote:
>
>> Hi All,
>>
>> I'm still pretty new to using R - and I was hoping I might be able
>> to get
>> some advice as to how to use 'apply' or a similar function instead
>> of using
>> nested for loops.
>
> Unfortunately, you have given nothing that is reproducible.
>
> The details of MyFunction and the exact structure of the list
> objects are crucial.
>
> Check out the _Posting Guide_ for hints on how to formulate a
> question that will elicit an answer that helps you.
>
> HTH,
>
> Chuck
>
>
>>
>> Right now I have a script which uses nested for loops similar to
>> this:
>>
>> i <- 1
>> for(a in Alpha) { for (b in Beta) { for (c in Gamma) { for (d in
>> Delta) {
>> for (e in Epsilon)
>> {
>> Output[i] <- MyFunction(X, Y, a, b, c, d, e)
>> i <- i+1
>> }}}}}
>>
>>
>> Where Output[i] is a data frame, X and Y are data frames, and
>> Alpha, Beta,
>> Gamma, Delta, and Epsilon are all lists, some of which are numeric,
>> some
>> logical (TRUE/FALSE).
>>
>> Any advice on how to implement some sort of solution that might be
>> quicker
>> than these nested 'for' loops would be greatly appreciated.
>>
>> Thanks!
>>
>> [[alternative HTML version deleted]]
>>
>> ______________________________________________
>> R-help at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
> Charles C. Berry                             (858) 534-2098
>                                              Dept of Family/Preventive Medicine
> E mailto:cberry at tajo.ucsd.edu             UC San Diego
> http://famprevmed.ucsd.edu/faculty/cberry/   La Jolla, San Diego 92093-0901
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 32
Date: Mon, 22 Dec 2008 22:34:16 +0100
From: Peter Dalgaard <p.dalgaard at biostat.ku.dk>
Subject: Re: [R] Error compiling R.2.8.1 with gcc 4.4 on Mac OS 10.5.6
To: Mike Lawrence <mike at thatmike.com>
Cc: r-help at stat.math.ethz.ch, gkhanna at umassd.edu
Message-ID: <495007D8.9020401 at biostat.ku.dk>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Mike Lawrence wrote:
> Hi all,
> I've encountered a build error with the latest R source (2.8.1). This is a
> relatively fresh install of OS Leopard (10.5.6), latest developer tools
> installed, gcc/g++/gfortran version 4.4 installed (via
> http://hpc.sourceforge.net/, after which I updated the gcc & g++ symlinks
> to link to the 4.4 versions; gfortran used the 4.4 version without updating
> the symlink).
>
> Ultimately I wanted to install pnmath, so as per a previous thread (
> http://www.nabble.com/Parallel-R-tt18173953.html#a18196319) I built with:
>
> LIBS=-lgomp ./configure --with-blas='-framework vecLib'
> make -j4
>
> The configure runs without a hitch, but make fails, throwing an error
> seemingly related to qdCocoa:
> making qdCocoa.d from qdCocoa.m
> <built-in>:0: internal compiler error: Abort trap
>
> Below is the output of configure, followed by the output of make (error is
> in the last 10 lines). Any suggestions to fix this would be greatly
> appreciated.
Ouch. As you probably realize, this is very Mac-specific, and the actual
bug is in GCC and has nothing to do with R.
I don't think there's any obvious way to fix this, but there might be a
temporary workaround. I'd consider making the .d file by hand, so that
make doesn't even try to build it. You can probably rather easily create
one based on one of the other .d files in the same directory (.d files
are usually overkill, it is probably not necessary to get it completely
right.)
Next question is whether this GCC can handle .m files at all....
> making qdBitmap.d from qdBitmap.c
> making qdPDF.d from qdPDF.c
> making qdCocoa.d from qdCocoa.m
> <built-in>:0: internal compiler error: Abort trap
> Please submit a full bug report,
> with preprocessed source if appropriate.
> See <http://gcc.gnu.org/bugs.html> for instructions.
> make[4]: *** [qdCocoa.d] Error 1
> make[4]: *** Waiting for unfinished jobs....
> make[3]: *** [all] Error 1
> make[2]: *** [R] Error 1
> make[1]: *** [R] Error 1
> make: *** [R] Error 1
>
>
>
--
O__ ---- Peter Dalgaard Øster Farimagsgade 5, Entr.B
c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
(*) \(*) -- University of Copenhagen Denmark Ph: (+45) 35327918
~~~~~~~~~~ - (p.dalgaard at biostat.ku.dk) FAX: (+45) 35327907
------------------------------
Message: 33
Date: Mon, 22 Dec 2008 14:51:16 -0700
From: "Ranney, Steven" <steven.ranney at montana.edu>
Subject: [R] Summary information by groups programming assistance
To: <r-help at r-project.org>
Message-ID:
<677B91F53FD4074CB84B79754E8521C403CA2901 at GEMSTONES.msu.montana.edu>
Content-Type: text/plain; charset="us-ascii"
All -
I have data that looks like
         psd Species  Lake Length Weight St.weight        Wr      Wr.1    vol
432 substock     SMB Clear    150  41.00      0.01  95.12438  95.10118 0.0105
433 substock     SMB Clear    152  39.00      0.01  86.72916  86.70692 0.0105
434 substock     SMB Clear    152  40.00      3.11  88.95298  82.03689 3.2655
435 substock     SMB Clear    159  48.00      0.04  92.42095  92.34393 0.0420
436 substock     SMB Clear    159  48.00      0.01  92.42095  92.40170 0.0105
437 substock     SMB Clear    165  47.00      0.03  80.38023  80.32892 0.0315
438 substock     SMB Clear    171  62.00      0.21  94.58105  94.26070 0.2205
439 substock     SMB Clear    178  70.00      0.01  93.91912  93.90571 0.0105
440 substock     SMB Clear    179  76.00      1.38 100.15760  98.33895 1.4490
441      S-Q     SMB Clear    180  75.00      0.01  97.09330  97.08035 0.0105
442      S-Q     SMB Clear    180  92.00      0.02 119.10111 119.07522 0.0210
...
[truncated]
where psd and lake are categorical variables, with five and four
categories, respectively. I'd like to find the maximum vol and the
lengths associated with each maximum vol by each category by each lake.
In other words, I'd like to have a data frame that looks something like
Lake Category Length vol
Clear substock 152 3.2655
Clear S-Q 266 11.73
Clear Q-P 330 14.89
...
Pickerel substock 170 3.4965
Pickerel S-Q 248 10.69
Pickerel Q-P 335 25.62
Pickerel P-M 415 32.62
Pickerel M-T 442 17.25
In order to originally get this, I used
with(smb[Lake=="Clear",], tapply(vol, list(Length, psd),max))
with(smb[Lake=="Enemy.Swim",], tapply(vol, list(Length, psd),max))
with(smb[Lake=="Pickerel",], tapply(vol, list(Length, psd),max))
with(smb[Lake=="Roy",], tapply(vol, list(Length, psd),max))
and pulled the values I needed out by hand and put them into a .csv.
Unfortunately, I've got a number of other data sets upon which I'll need
to do the same analysis. Finding a programmable alternative would
provide a much easier (and likely less error prone) method to achieve
the same results. Ideally, the "Length" and "vol" data would be in a
data frame such that I could then analyze with nls.
Does anyone have any thoughts as to how I might accomplish this?
Thanks in advance,
Steven Ranney
------------------------------
Message: 34
Date: Mon, 22 Dec 2008 15:59:15 -0600
From: "hadley wickham" <h.wickham at gmail.com>
Subject: Re: [R] Summary information by groups programming assistance
To: "Ranney, Steven" <steven.ranney at montana.edu>
Cc: r-help at r-project.org
Message-ID:
<f8e6ff050812221359n41771862x5e7b46057896f3d0 at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
On Mon, Dec 22, 2008 at 3:51 PM, Ranney, Steven
<steven.ranney at montana.edu> wrote:
> All -
>
> I have data that looks like
>
> psd Species Lake Length Weight St.weight Wr
> Wr.1 vol
> 432 substock SMB Clear 150 41.00 0.01 95.12438
> 95.10118 0.0105
> 433 substock SMB Clear 152 39.00 0.01 86.72916
> 86.70692 0.0105
> 434 substock SMB Clear 152 40.00 3.11 88.95298
> 82.03689 3.2655
> 435 substock SMB Clear 159 48.00 0.04 92.42095
> 92.34393 0.0420
> 436 substock SMB Clear 159 48.00 0.01 92.42095
> 92.40170 0.0105
> 437 substock SMB Clear 165 47.00 0.03 80.38023
> 80.32892 0.0315
> 438 substock SMB Clear 171 62.00 0.21 94.58105
> 94.26070 0.2205
> 439 substock SMB Clear 178 70.00 0.01 93.91912
> 93.90571 0.0105
> 440 substock SMB Clear 179 76.00 1.38 100.15760
> 98.33895 1.4490
> 441 S-Q SMB Clear 180 75.00 0.01 97.09330
> 97.08035 0.0105
> 442 S-Q SMB Clear 180 92.00 0.02 119.10111
> 119.07522 0.0210
> ...
> [truncated]
>
> where psd and lake are categorical variables, with five and four
> categories, respectively. I'd like to find the maximum vol and the
> lengths associated with each maximum vol by each category by each lake.
> In other words, I'd like to have a data frame that looks something like
>
> Lake Category Length vol
> Clear substock 152 3.2655
> Clear S-Q 266 11.73
> Clear Q-P 330 14.89
> ...
> Pickerel substock 170 3.4965
> Pickerel S-Q 248 10.69
> Pickerel Q-P 335 25.62
> Pickerel P-M 415 32.62
> Pickerel M-T 442 17.25
>
>
> In order to originally get this, I used
>
> with(smb[Lake=="Clear",], tapply(vol, list(Length, psd),max))
> with(smb[Lake=="Enemy.Swim",], tapply(vol, list(Length,
psd),max))
> with(smb[Lake=="Pickerel",], tapply(vol, list(Length, psd),max))
> with(smb[Lake=="Roy",], tapply(vol, list(Length, psd),max))
>
> and pulled the values I needed out by hand and put them into a .csv.
> Unfortunately, I've got a number of other data sets upon which I'll
need
> to do the same analysis. Finding a programmable alternative would
> provide a much easier (and likely less error prone) method to achieve
> the same results. Ideally, the "Length" and "vol" data
would be in a
> data frame such that I could then analyze with nls.
>
> Does anyone have any thoughts as to how I might accomplish this?
You might want to have a look at the plyr package,
http://had.co.nz/plyr, which provides a set of tools to make tasks
like this easy. There are a number of similar examples in the
introductory pdf that should get you started.
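For instance (an untested sketch using the column names from your posting; 'smb' is assumed to be your data frame), plyr's ddply can keep, for every Lake x psd cell, the whole row with the largest vol, so Length stays paired with its own vol:
library(plyr)
ddply(smb, c("Lake", "psd"),
      function(d) d[which.max(d$vol), c("Length", "vol")])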
Regards,
Hadley
--
http://had.co.nz/
------------------------------
Message: 35
Date: Mon, 22 Dec 2008 23:25:46 +0100
From: Søren Højsgaard <Soren.Hojsgaard at agrsci.dk>
Subject: Re: [R] Summary information by groups programming assistance
To: "Ranney, Steven" <steven.ranney at montana.edu>,
<r-help at r-project.org>
Message-ID:
<C83C5E3DEEE97E498B74729A33F6EAEC05630089 at DJFPOST01.djf.agrsci.dk>
Content-Type: text/plain; charset="iso-8859-1"
Maybe summaryBy (or lapplyBy/splitBy) in the doBy package might help you.
Regards
Søren
________________________________
From: r-help-bounces at r-project.org on behalf of Ranney, Steven
Sent: Mon 22-12-2008 22:51
To: r-help at r-project.org
Subject: [R] Summary information by groups programming assistance
All -
I have data that looks like
psd Species Lake Length Weight St.weight Wr
Wr.1 vol
432 substock SMB Clear 150 41.00 0.01 95.12438
95.10118 0.0105
433 substock SMB Clear 152 39.00 0.01 86.72916
86.70692 0.0105
434 substock SMB Clear 152 40.00 3.11 88.95298
82.03689 3.2655
435 substock SMB Clear 159 48.00 0.04 92.42095
92.34393 0.0420
436 substock SMB Clear 159 48.00 0.01 92.42095
92.40170 0.0105
437 substock SMB Clear 165 47.00 0.03 80.38023
80.32892 0.0315
438 substock SMB Clear 171 62.00 0.21 94.58105
94.26070 0.2205
439 substock SMB Clear 178 70.00 0.01 93.91912
93.90571 0.0105
440 substock SMB Clear 179 76.00 1.38 100.15760
98.33895 1.4490
441 S-Q SMB Clear 180 75.00 0.01 97.09330
97.08035 0.0105
442 S-Q SMB Clear 180 92.00 0.02 119.10111
119.07522 0.0210
...
[truncated]
where psd and lake are categorical variables, with five and four
categories, respectively. I'd like to find the maximum vol and the
lengths associated with each maximum vol by each category by each lake.
In other words, I'd like to have a data frame that looks something like
Lake Category Length vol
Clear substock 152 3.2655
Clear S-Q 266 11.73
Clear Q-P 330 14.89
...
Pickerel substock 170 3.4965
Pickerel S-Q 248 10.69
Pickerel Q-P 335 25.62
Pickerel P-M 415 32.62
Pickerel M-T 442 17.25
In order to originally get this, I used
with(smb[Lake=="Clear",], tapply(vol, list(Length, psd),max))
with(smb[Lake=="Enemy.Swim",], tapply(vol, list(Length, psd),max))
with(smb[Lake=="Pickerel",], tapply(vol, list(Length, psd),max))
with(smb[Lake=="Roy",], tapply(vol, list(Length, psd),max))
and pulled the values I needed out by hand and put them into a .csv.
Unfortunately, I've got a number of other data sets upon which I'll need
to do the same analysis. Finding a programmable alternative would
provide a much easier (and likely less error prone) method to achieve
the same results. Ideally, the "Length" and "vol" data
would be in a
data frame such that I could then analyze with nls.
Does anyone have any thoughts as to how I might accomplish this?
Thanks in advance,
Steven Ranney
______________________________________________
R-help at r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 36
Date: Mon, 22 Dec 2008 22:27:35 +0000 (GMT)
From: Prof Brian Ripley <ripley at stats.ox.ac.uk>
Subject: Re: [R] Error compiling R.2.8.1 with gcc 4.4 on Mac OS 10.5.6
To: Peter Dalgaard <p.dalgaard at biostat.ku.dk>
Cc: r-help at stat.math.ethz.ch, gkhanna at umassd.edu
Message-ID:
<alpine.LFD.2.00.0812222216480.21026 at gannet.stats.ox.ac.uk>
Content-Type: text/plain; charset="iso-8859-1";
Format="flowed"
On Mon, 22 Dec 2008, Peter Dalgaard wrote:
> Mike Lawrence wrote:
>> Hi all,
>> I've encountered a build error with the latest R source (2.8.1). This is a
>> relatively fresh install of OS Leopard (10.5.6), latest developer tools
>> installed, gcc/g++/gfortran version 4.4 installed (via
>> http://hpc.sourceforge.net/, after which I updated the gcc & g++ symlinks
>> to link to the 4.4 versions; gfortran used the 4.4 version without
>> updating the symlink).
>>
>> Ultimately I wanted to install pnmath, so as per a previous thread (
>> http://www.nabble.com/Parallel-R-tt18173953.html#a18196319) I built with:
>>
>> LIBS=-lgomp ./configure --with-blas='-framework vecLib'
>> make -j4
>>
>> The configure runs without a hitch, but make fails, throwing an error
>> seemingly related to qdCocoa:
>> making qdCocoa.d from qdCocoa.m
>> <built-in>:0: internal compiler error: Abort trap
>>
>> Below is the output of configure, followed by the output of make (error is
>> in the last 10 lines). Any suggestions to fix this would be greatly
>> appreciated.
>
> Ouch. As you probably realize, this is very Mac-specific, and the actual
> bug is in GCC and has nothing to do with R.
>
> I don't think there's any obvious way to fix this, but there might be a
> temporary workaround. I'd consider making the .d file by hand, so that
> make doesn't even try to build it. You can probably rather easily create one
> based on one of the other .d files in the same directory (.d files are usually
> overkill, it is probably not necessary to get it completely right.)
>
> Next question is whether this GCC can handle .m files at all....
It can: at least the released versions can, provided the ObjC language is
installed. Most likely if making the .d file is an error then actual
compilation will be too.
Note that this is a non-Apple build of a non-FSF version of a non-released
version of gcc. I think the workaround is to use Apple compilers for
Mac-specific code. So either use --with-aqua with Apple compilers, or
--without-aqua with others. From memory (I am not on my Mac right now),
the latter will not build the quartz code at all: certainly
src/library/grDevices/src/Makefile suggests so.
I'd suggest reporting Mac-specific problems on R-sig-mac, and problems
with building R on experimental systems (including unreleased compilers)
to the system's developers or perhaps R-devel.
>
>
>
>> making qdBitmap.d from qdBitmap.c
>> making qdPDF.d from qdPDF.c
>> making qdCocoa.d from qdCocoa.m
>> <built-in>:0: internal compiler error: Abort trap
>> Please submit a full bug report,
>> with preprocessed source if appropriate.
>> See <http://gcc.gnu.org/bugs.html> for instructions.
>> make[4]: *** [qdCocoa.d] Error 1
>> make[4]: *** Waiting for unfinished jobs....
>> make[3]: *** [all] Error 1
>> make[2]: *** [R] Error 1
>> make[1]: *** [R] Error 1
>> make: *** [R] Error 1
>>
>>
>>
>
>
> --
> O__ ---- Peter Dalgaard Øster Farimagsgade 5, Entr.B
> c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
> (*) \(*) -- University of Copenhagen Denmark Ph: (+45) 35327918
> ~~~~~~~~~~ - (p.dalgaard at biostat.ku.dk) FAX: (+45) 35327907
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Brian D. Ripley, ripley at stats.ox.ac.uk
Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel: +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK Fax: +44 1865 272595
------------------------------
Message: 37
Date: Mon, 22 Dec 2008 22:31:23 +0000 (GMT)
From: Prof Brian Ripley <ripley at stats.ox.ac.uk>
Subject: Re: [R] Treatment of Date ODBC objects in R (RODBC)
To: Peter Dalgaard <p.dalgaard at biostat.ku.dk>
Cc: r-help at r-project.org
Message-ID:
<alpine.LFD.2.00.0812222227460.21026 at gannet.stats.ox.ac.uk>
Content-Type: text/plain; charset="iso-8859-1";
Format="flowed"
On Mon, 22 Dec 2008, Peter Dalgaard wrote:
> Ivan Alves wrote:
>> Dear all,
>>
>> Retrieving an Oracle "Date" data type by means of RODBC (version 1.2-4) I
>> get different classes in R depending on which operating system I am in:
>>
>> On MacOSX I get "Date" class
>> On Windows I get "POSIXt" "POSIXct" class
>>
>> The problem is material, as converting the "POSIXt" "POSIXct" object with
>> as.Date() returns one day less ("2008-12-17 00:00:00 CET" is returned as
>> "2008-12-16").
>
> This is in a sense correct since CET is one hour ahead of GMT (two hours
> in Summer). What is a bit puzzling is that
>
>> ISOdate(2008,12,24)
> [1] "2008-12-24 12:00:00 GMT"
>> class(ISOdate(2008,12,24))
> [1] "POSIXt" "POSIXct"
>> as.POSIXct("2008-12-24")
> [1] "2008-12-24 CET"
>> as.POSIXct("2008-12-24")+1
> [1] "2008-12-24 00:00:01 CET"
>
> I.e. we have two ways of converting a timeless date to POSIXct, and they
> differ in noon/midnight, and in whether local timezone matters or not.
>
> I believe Brian did this, and he usually does things for a reason....
Well, one is explicitly a way to set a date, and midday seems a good
choice as most timezones are within +/- 12 hours. OTOH as.POSIXct is
using a format with a missing time, and the POSIX convention is that
missing times are zero.
The difference is between no time and missing time.
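A small illustration of the two conversions, and of one way to recover the intended calendar date from a midnight POSIXct in a non-GMT zone (CET as in the original post; as.Date.POSIXct accepts a tz argument):
x <- as.POSIXct("2008-12-17", tz = "CET")  # midnight CET, as from the ODBC path
as.Date(x)                       # "2008-12-16" -- the conversion is done in GMT/UTC
as.Date(x, tz = "CET")           # "2008-12-17" -- the intended date
as.Date(ISOdate(2008, 12, 17))   # noon GMT, so also "2008-12-17"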
>
>
>>
>> I have 2 related questions:
>>
>> 1. Is there a way to control the conversion used by RODBC for types "Date"?
>> or is this controlled by the ODBC Driver (in my case the Oracle driver in
>> Windows and Actual on Mac OS X)?
>>
>> 2. What is the trick to get as.Date() to return the _intended_ date (the
>> date that the OS X environment "correctly" reads)?
>
> Add 12 hours, maybe? (43200 seconds)
>
> Or play around with the timezone, but that seems painful.
>
> --
> O__ ---- Peter Dalgaard Øster Farimagsgade 5, Entr.B
> c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
> (*) \(*) -- University of Copenhagen Denmark Ph: (+45) 35327918
> ~~~~~~~~~~ - (p.dalgaard at biostat.ku.dk) FAX: (+45) 35327907
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Brian D. Ripley, ripley at stats.ox.ac.uk
Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel: +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK Fax: +44 1865 272595
------------------------------
Message: 38
Date: Mon, 22 Dec 2008 23:07:29 +0000
From: glenn roberts <g1enn.roberts at btinternet.com>
Subject: [R] Integrate function
To: "r-help at R-project.org" <r-help at R-project.org>
Message-ID: <C575CE31.4DB%g1enn.roberts at btinternet.com>
Content-Type: text/plain
Quick one, if anyone can help please.
On use of the integration function 'integrate', how do I get the function to
return just the value, with no messages, please?
Glenn
[[alternative HTML version deleted]]
------------------------------
Message: 39
Date: Mon, 22 Dec 2008 14:57:45 -0800 (PST)
From: eugen pircalabelu <eugen_pircalabelu at yahoo.com>
Subject: [R] newbie question on tcltk
To: R-help <r-help at stat.math.ethz.ch>
Message-ID: <620247.52862.qm at web38608.mail.mud.yahoo.com>
Content-Type: text/plain; charset=us-ascii
Hi List,
Can anyone tell me how I could put the "BACK" button in the following code,
just under the "AAA" menu? I want this button to go back to the previous
page, and since it has nothing to do with the "1" and "2" buttons, I want it
somehow separated from these two buttons, but I don't know how. I searched
the web for some examples but my results were unsatisfactory.
Thank you and have a great day ahead.
#my code
library(tcltk)
rm(list=ls())
top <- tktoplevel(padx=70, pady=70)
frameOverall <- tkframe(top)
frameUpper <- tkframe(frameOverall,relief="groove",borderwidth=2)
back_but <- tkbutton (frameUpper, text = "BACK", width=20,
height=1,
default="active", overrelief="flat",anchor="w",
borderwidth=4 )
tkgrid(frameUpper)
tkgrid(frameOverall)
tkpack (frameUpper, back_but, side='left', anchor='n')
tkgrid(tklabel(top,text=" "))
fontHeading <-
tkfont.create(family="arial",size=14,weight="bold")
other_window <- function ()
{
tkdestroy(top)
tt <- tktoplevel(padx=100, pady=100)
b1 <- tkbutton (tt, text = "3.", width=20, font=fontHeading,
command=function () tkdestroy (tt) )
tkgrid(b1)
tkgrid(tklabel(tt,text=" "))
b2 <- tkbutton (tt, text = "4.", width=20, font=fontHeading,
command=function () tkdestroy (tt) )
tkgrid(b2)
tkgrid(tklabel(tt,text=" "))
}
ok.but1 <- tkbutton (top, text = "1.", width=20,
font=fontHeading,default="active", overrelief="flat",
anchor="w",command=other_window );tkgrid(ok.but1)
tkgrid(tklabel(top,text=" "))
ok.but2 <- tkbutton(top, text = "2. ", width=20, font=fontHeading,
default="active", overrelief="flat", anchor="w");
tkgrid(ok.but2 )
tkgrid(tklabel(top,text=" "))
topMenu <- tkmenu(top)
tkconfigure(top, menu = topMenu)
fileMenu <- tkmenu(topMenu, tearoff = FALSE)
openRecentMenu <- tkmenu(topMenu, tearoff = FALSE)
tkadd(openRecentMenu, "command", label = "5 ",
command = function() tkmessageBox(
message = "xxxxx", icon = "error"))
tkadd(openRecentMenu, "command", label = "6",
command = function() tkmessageBox(
message = "yyyyy", icon = "error"))
tkadd(fileMenu, "cascade", label = "BBB", menu =
openRecentMenu)
tkadd(fileMenu, "command", label = "EXIT", command =
function()
tkdestroy(tt))
tkadd(topMenu, "cascade", label = "AAA", menu = fileMenu)
# end code
------------------------------
Message: 40
Date: Mon, 22 Dec 2008 11:52:35 -0800 (PST)
From: iamsilvermember <m2chan at ucsd.edu>
Subject: [R] Error: cannot allocate vector of size 1.8 Gb
To: r-help at r-project.org
Message-ID: <21133949.post at talk.nabble.com>
Content-Type: text/plain; charset=us-ascii
> dim(data)
[1] 22283 19
> dm=dist(data, method = "euclidean", diag = FALSE, upper = FALSE,
p = 2)
Error: cannot allocate vector of size 1.8 Gb
Hi Guys, thank you in advance for helping. :-D
Recently I ran into the "cannot allocate vector of size 1.8 Gb" error. I am
pretty sure this is not a hardware limitation because it happens whether I
run the R code on a 2.0GHz Core Duo 2GB RAM Mac or on an Intel Xeon 2x2.0GHz
quad-core 8GB RAM Linux server.
I also tried to clear the workspace before running the code too, but it
didn't seem to help...
The weird thing, though, is that once in a while it will work, but then when I
run clustering on the above result
> hc=hclust(dm, method = "complete", members=NULL)
it gives me the same error...
I searched around already, but the memory.limit and memory.size methods do not
seem to help. May I know what I can do to resolve this problem?
Thank you so much for your help.
--
View this message in context:
http://www.nabble.com/Error%3A-cannot-allocate-vector-of-size-1.8-Gb-tp21133949p21133949.html
Sent from the R help mailing list archive at Nabble.com.
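(As an aside, the 1.8 Gb figure is exactly what dist() needs here: it stores the lower triangle of the distance matrix, i.e. n*(n-1)/2 double-precision values:
n <- 22283
n * (n - 1) / 2 * 8 / 2^30   # roughly 1.85 gigabytes for the "dist" object alone
so a single copy of the "dist" object is already about 1.85 Gb, and anything that duplicates it needs that much again.)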
------------------------------
Message: 41
Date: Mon, 22 Dec 2008 14:13:19 -0500
From: "Lu, Zheng" <Zheng.Lu at mpi.com>
Subject: [R] question about read datafile
To: <r-help at r-project.org>
Message-ID:
<429BD5F1CFD42940B99E159DE356A37A0609D605 at US-BE3.corp.mpi.com>
Content-Type: text/plain
Dear all:
I have been thinking of importing the data file (.txt) below into R by
read.table(.., skip=1, header=T). But how can I deal with the repeated
rows of TABLE NO. 1 and the names of the data variables in the middle of
this data file? A similar block is repeated 100 times (here only 4 are
shown), and within each block the number of data records can also vary
(here only 4 rows are pasted as an example). I appreciate your
consideration and help in
[[elided Yahoo spam]]
TABLE NO. 1
ID GID TIME OBS AMT EVID
RATE ADDL II CMT WT IPRE
3.1000E+01 1.0000E+00 0.0000E+00 0.0000E+00 1.0000E+00 1.0000E+00
0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 3.3918E+02
3.1000E+01 1.0000E+00 0.0000E+00 2.0500E+02 0.0000E+00 0.0000E+00
0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 2.6267E+02
3.1000E+01 1.0000E+00 9.6000E+01 4.2100E+02 0.0000E+00 0.0000E+00
0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 3.1781E+02
TABLE NO. 1
ID GID TIME OBS AMT EVID
RATE ADDL II CMT WT IPRE
3.1000E+01 1.0000E+00 0.0000E+00 0.0000E+00 1.0000E+00 1.0000E+00
0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 5.7557E+01
3.1000E+01 1.0000E+00 0.0000E+00 2.0500E+02 0.0000E+00 0.0000E+00
0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 8.8583E+01
3.1000E+01 1.0000E+00 9.6000E+01 4.2100E+02 0.0000E+00 0.0000E+00
0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 1.7342E+02
3.1000E+01 1.0000E+00 1.6800E+02 5.3100E+02 0.0000E+00 0.0000E+00
0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 2.0179E+02
TABLE NO. 1
ID GID TIME OBS AMT EVID
RATE ADDL II CMT WT IPRE
3.1000E+01 1.0000E+00 0.0000E+00 0.0000E+00 1.0000E+00 1.0000E+00
0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 1.4389E+02
3.1000E+01 1.0000E+00 0.0000E+00 2.0500E+02 0.0000E+00 0.0000E+00
0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 2.6147E+02
3.1000E+01 1.0000E+00 9.6000E+01 4.2100E+02 0.0000E+00 0.0000E+00
0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 2.2634E+02
3.1000E+01 1.0000E+00 1.6800E+02 5.3100E+02 0.0000E+00 0.0000E+00
0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 4.0733E+02
TABLE NO. 1
ID GID TIME OBS AMT EVID
RATE ADDL II CMT WT IPRE
3.1000E+01 1.0000E+00 0.0000E+00 0.0000E+00 1.0000E+00 1.0000E+00
0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 2.2003E+02
3.1000E+01 1.0000E+00 0.0000E+00 2.0500E+02 0.0000E+00 0.0000E+00
0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 3.2116E+02
3.1000E+01 1.0000E+00 9.6000E+01 4.2100E+02 0.0000E+00 0.0000E+00
0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 3.3642E+02
3.1000E+01 1.0000E+00 1.6800E+02 5.3100E+02 0.0000E+00 0.0000E+00
0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 4.7881E+02
...
...
...
zheng
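One rough way in (a sketch only; "output.txt" stands in for the real file name, and the two header lines are assumed to always begin with "ID" and "RATE" as in the excerpt above) is to drop the repeated TABLE/header lines and let scan() read the remaining numbers, reshaping them by the 12 column names:
lines  <- readLines("output.txt")                  # placeholder file name
is_tab <- grepl("^ *TABLE NO", lines)              # repeated "TABLE NO. 1" lines
is_hdr <- grepl("^ *(ID +GID|RATE +ADDL)", lines)  # repeated header lines
hdr    <- unlist(strsplit(lines[is_hdr][1:2], "[[:space:]]+"))
hdr    <- hdr[hdr != ""]                           # the 12 column names
vals   <- scan(textConnection(lines[!(is_tab | is_hdr)]))
dat    <- as.data.frame(matrix(vals, ncol = length(hdr), byrow = TRUE))
names(dat) <- hdr
This works whether each record sits on one physical line or is wrapped over two, since scan() ignores line breaks and the matrix is refilled 12 values at a time.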
This e-mail, including any attachments, is a confidential business
communication, and may contain information that is confidential, proprietary
and/or privileged. This e-mail is intended only for the individual(s) to
whom it is addressed, and may not be saved, copied, printed, disclosed or
used by anyone else. If you are not the(an) intended recipient, please
immediately delete this e-mail from your computer system and notify the
sender. Thank you.
[[alternative HTML version deleted]]
------------------------------
Message: 42
Date: Mon, 22 Dec 2008 18:17:35 -0500
From: "Gabor Grothendieck" <ggrothendieck at gmail.com>
Subject: Re: [R] newbie question on tcltk
Cc: R-help <r-help at stat.math.ethz.ch>
Message-ID:
<971536df0812221517t76dd9a1dhaaf79941b4c2230b at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
*** WARNING ***
Just a warning to anyone thinking of copying the code below into their
workspace --
the second line in the code below will erase your entire workspace.
On Mon, Dec 22, 2008 at 5:57 PM, eugen pircalabelu
> Hi List,
>
> Can anyone tell me how I could put the "BACK" button in the following
> code, just under the "AAA" menu? I want this button to go back to the
> previous page, and since it has nothing to do with the "1" and "2" buttons,
> I want it somehow separated from these two buttons, but I don't know how. I
> searched the web for some examples but my results were unsatisfactory.
>
> Thank you and have a great day ahead.
------------------------------
Message: 43
Date: Mon, 22 Dec 2008 18:26:38 -0500
From: David Winsemius <dwinsemius at comcast.net>
Subject: Re: [R] Integrate function
To: glenn roberts <g1enn.roberts at btinternet.com>
Cc: "r-help at R-project.org" <r-help at r-project.org>
Message-ID: <2355E69F-AA41-44B9-9F82-A2FDDEAFC124 at comcast.net>
Content-Type: text/plain; charset=WINDOWS-1252; format=flowed;
delsp=yes
If these messages you're hearing are warnings, then the answer might be:
?warnings
--
David Winsemius
On Dec 22, 2008, at 6:07 PM, glenn roberts wrote:
> Quick One if any one can help please.
>
> On use of integration function 'integrate'; how do I get the
> function to
> return just the value with no messages please
>
> Glenn
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 45
Date: Mon, 22 Dec 2008 17:54:51 -0600
From: William Revelle <lists at revelle.net>
Subject: Re: [R] Summary information by groups programming assistance
To: Søren Højsgaard <Soren.Hojsgaard at agrsci.dk>, "Ranney, Steven"
    <steven.ranney at montana.edu>, <r-help at r-project.org>
Message-ID: <p06240818c575d9227456@[192.168.1.108]>
Content-Type: text/plain; charset="iso-8859-1" ;
format="flowed"
Yet another suggestion is describe.by in the psych package.
At 11:25 PM +0100 12/22/08, Søren Højsgaard wrote:
>Maybe summaryBy (or lapplyBy/splitBy) in the doBy package might help you.
>Regards
>Søren
>
>________________________________
>
>From: r-help-bounces at r-project.org on behalf of Ranney, Steven
>Sent: Mon 22-12-2008 22:51
>To: r-help at r-project.org
>Subject: [R] Summary information by groups programming assistance
>
>
>
>All -
>
>I have data that looks like
>
> psd Species Lake Length Weight St.weight Wr
>Wr.1 vol
>432 substock SMB Clear 150 41.00 0.01 95.12438
>95.10118 0.0105
>433 substock SMB Clear 152 39.00 0.01 86.72916
>86.70692 0.0105
>434 substock SMB Clear 152 40.00 3.11 88.95298
>82.03689 3.2655
>435 substock SMB Clear 159 48.00 0.04 92.42095
>92.34393 0.0420
>436 substock SMB Clear 159 48.00 0.01 92.42095
>92.40170 0.0105
>437 substock SMB Clear 165 47.00 0.03 80.38023
>80.32892 0.0315
>438 substock SMB Clear 171 62.00 0.21 94.58105
>94.26070 0.2205
>439 substock SMB Clear 178 70.00 0.01 93.91912
>93.90571 0.0105
>440 substock SMB Clear 179 76.00 1.38 100.15760
>98.33895 1.4490
>441 S-Q SMB Clear 180 75.00 0.01 97.09330
>97.08035 0.0105
>442 S-Q SMB Clear 180 92.00 0.02 119.10111
>119.07522 0.0210
>...
>[truncated]
>
>where psd and lake are categorical variables, with five and four
>categories, respectively. I'd like to find the maximum vol and the
>lengths associated with each maximum vol by each category by each lake.
>In other words, I'd like to have a data frame that looks something like
>
>Lake Category Length vol
>Clear substock 152 3.2655
>Clear S-Q 266 11.73
>Clear Q-P 330 14.89
>...
>Pickerel substock 170 3.4965
>Pickerel S-Q 248 10.69
>Pickerel Q-P 335 25.62
>Pickerel P-M 415 32.62
>Pickerel M-T 442 17.25
>
>
>In order to originally get this, I used
>
>with(smb[Lake=="Clear",], tapply(vol, list(Length, psd),max))
>with(smb[Lake=="Enemy.Swim",], tapply(vol, list(Length, psd),max))
>with(smb[Lake=="Pickerel",], tapply(vol, list(Length, psd),max))
>with(smb[Lake=="Roy",], tapply(vol, list(Length, psd),max))
>
>and pulled the values I needed out by hand and put them into a .csv.
>Unfortunately, I've got a number of other data sets upon which I'll
need
>to do the same analysis. Finding a programmable alternative would
>provide a much easier (and likely less error prone) method to achieve
>the same results. Ideally, the "Length" and "vol" data
would be in a
>data frame such that I could then analyze with nls.
>
>Does anyone have any thoughts as to how I might accomplish this?
>
>Thanks in advance,
>
>Steven Ranney
>
>______________________________________________
>R-help at r-project.org mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.
>
>______________________________________________
>R-help at r-project.org mailing list
>https://stat.ethz.ch/mailman/listinfo/r-help
>PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>and provide commented, minimal, self-contained, reproducible code.
--
William Revelle http://personality-project.org/revelle.html
Professor http://personality-project.org/personality.html
Department of Psychology http://www.wcas.northwestern.edu/psych/
Northwestern University http://www.northwestern.edu/
Attend ISSID/ARP:2009 http://issid.org/issid.2009/
------------------------------
Message: 46
Date: Mon, 22 Dec 2008 15:57:20 -0800 (PST)
From: iamsilvermember <m2chan at ucsd.edu>
Subject: Re: [R] Error: cannot allocate vector of size 1.8 Gb
To: r-help at r-project.org
Message-ID: <21137233.post at talk.nabble.com>
Content-Type: text/plain; charset=us-ascii
[[elided Yahoo spam]]
> sessionInfo()
R version 2.7.1 (2008-06-23)
x86_64-redhat-linux-gnu
locale:
LC_CTYPE=en_US.UTF-8;LC_NUMERIC=C;LC_TIME=en_US.UTF-8;LC_COLLATE=en_US.UTF-8
;LC_MONETARY=C;LC_MESSAGES=en_US.UTF-8;LC_PAPER=en_US.UTF-8;LC_NAME=C;LC_ADD
RESS=C;LC_TELEPHONE=C;LC_MEASUREMENT=en_US.UTF-8;LC_IDENTIFICATION=C
attached base packages:
[1] stats graphics grDevices utils datasets methods base
--
View this message in context:
http://www.nabble.com/Error%3A-cannot-allocate-vector-of-size-1.8-Gb-tp21133949p21137233.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 47
Date: Mon, 22 Dec 2008 18:58:12 -0500
From: David Winsemius <dwinsemius at comcast.net>
Subject: Re: [R] Integrate function
To: David Winsemius <dwinsemius at comcast.net>
Cc: "r-help at R-project.org" <r-help at r-project.org>
Message-ID: <8EDADD97-BB6A-4896-AC82-5D90050B9847 at comcast.net>
Content-Type: text/plain; charset=WINDOWS-1252; format=flowed;
delsp=yes
.. but it turned out he wanted:
integrate(<integrand>)$value
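For example, with a familiar integrand:
integrate(dnorm, -Inf, Inf)$value   # just the number (here 1), no accompanying message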
--
David Winsemius
On Dec 22, 2008, at 6:26 PM, David Winsemius wrote:
> If these messages you're hearing are warnings, then the answer might
> be:
>
> ?warnings
>
> -- David Winsemius
>
> On Dec 22, 2008, at 6:07 PM, glenn roberts wrote:
>
>> Quick One if any one can help please.
>>
>> On use of integration function 'integrate'; how do I get the
>> function to
>> return just the value with no messages please
>>
>> Glenn
>>
>> [[alternative HTML version deleted]]
>>
>> ______________________________________________
>> R-help at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 48
Date: Mon, 22 Dec 2008 19:15:42 -0500
From: "Gabor Grothendieck" <ggrothendieck at gmail.com>
Subject: Re: [R] Summary information by groups programming assistance
To: "Ranney, Steven" <steven.ranney at montana.edu>
Cc: r-help at r-project.org
Message-ID:
<971536df0812221615r5fce9a34x1fd894827d44ae5a at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Here are two solutions assuming DF is your data frame:
# 1. aggregate is in the base of R
aggregate(DF[c("Length", "vol")], DF[c("Lake", "psd")], max)
or the following which is the same except it labels psd as Category:
aggregate(DF[c("Length", "vol")],
          with(DF, list(Lake = Lake, Category = psd)), max)
# 2. sqldf. The sqldf package allows specification using SQL notation:
library(sqldf)
sqldf("select Lake, psd as Category, max(Length), max(vol) from DF
group by Lake, psd")
There are many other good solutions too using various packages which
have already
been mentioned on this thread.
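If what is wanted is the Length that accompanies each group's largest vol (rather than the separate maxima of Length and vol), a base-R variant can keep whole rows instead (a sketch, using the same DF as above):
res <- do.call(rbind,
    lapply(split(DF, DF[c("Lake", "psd")], drop = TRUE),
           function(d) d[which.max(d$vol), c("Lake", "psd", "Length", "vol")]))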
On Mon, Dec 22, 2008 at 4:51 PM, Ranney, Steven
<steven.ranney at montana.edu> wrote:
> All -
>
> I have data that looks like
>
> psd Species Lake Length Weight St.weight Wr
> Wr.1 vol
> 432 substock SMB Clear 150 41.00 0.01 95.12438
> 95.10118 0.0105
> 433 substock SMB Clear 152 39.00 0.01 86.72916
> 86.70692 0.0105
> 434 substock SMB Clear 152 40.00 3.11 88.95298
> 82.03689 3.2655
> 435 substock SMB Clear 159 48.00 0.04 92.42095
> 92.34393 0.0420
> 436 substock SMB Clear 159 48.00 0.01 92.42095
> 92.40170 0.0105
> 437 substock SMB Clear 165 47.00 0.03 80.38023
> 80.32892 0.0315
> 438 substock SMB Clear 171 62.00 0.21 94.58105
> 94.26070 0.2205
> 439 substock SMB Clear 178 70.00 0.01 93.91912
> 93.90571 0.0105
> 440 substock SMB Clear 179 76.00 1.38 100.15760
> 98.33895 1.4490
> 441 S-Q SMB Clear 180 75.00 0.01 97.09330
> 97.08035 0.0105
> 442 S-Q SMB Clear 180 92.00 0.02 119.10111
> 119.07522 0.0210
> ...
> [truncated]
>
> where psd and lake are categorical variables, with five and four
> categories, respectively. I'd like to find the maximum vol and the
> lengths associated with each maximum vol by each category by each lake.
> In other words, I'd like to have a data frame that looks something like
>
> Lake Category Length vol
> Clear substock 152 3.2655
> Clear S-Q 266 11.73
> Clear Q-P 330 14.89
> ...
> Pickerel substock 170 3.4965
> Pickerel S-Q 248 10.69
> Pickerel Q-P 335 25.62
> Pickerel P-M 415 32.62
> Pickerel M-T 442 17.25
>
>
> In order to originally get this, I used
>
> with(smb[Lake=="Clear",], tapply(vol, list(Length, psd),max))
> with(smb[Lake=="Enemy.Swim",], tapply(vol, list(Length,
psd),max))
> with(smb[Lake=="Pickerel",], tapply(vol, list(Length, psd),max))
> with(smb[Lake=="Roy",], tapply(vol, list(Length, psd),max))
>
> and pulled the values I needed out by hand and put them into a .csv.
> Unfortunately, I've got a number of other data sets upon which I'll
need
> to do the same analysis. Finding a programmable alternative would
> provide a much easier (and likely less error prone) method to achieve
> the same results. Ideally, the "Length" and "vol" data
would be in a
> data frame such that I could then analyze with nls.
>
> Does anyone have any thoughts as to how I might accomplish this?
>
> Thanks in advance,
>
> Steven Ranney
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
------------------------------
Message: 49
Date: Mon, 22 Dec 2008 19:32:00 -0500
From: "John Fox" <jfox at mcmaster.ca>
Subject: Re: [R] sem package fails when no of factors increase from 3
to 4
To: "'Xiaoxu LI'" <lixiaoxu at gmail.com>
Cc: r-help at r-project.org
Message-ID: <002c01c96495$df098ea0$9d1cabe0$@ca>
Content-Type: text/plain; charset="us-ascii"
Dear Xiaoxu LI,
sem.mod(mod4, cor18, 500, debug=TRUE) will show you what went wrong with the
optimization. Since the three-factor solutions look reasonable, I tried
using them to get better start values for the parameters in the four-factor
model, producing the solution shown below.
As well, I noticed that your correlation matrix was given only to two
decimal places, and that some of the correlations have only one significant
digit. It's possible, though not necessarily the case, that using a more
precise correlation matrix would produce the solution more easily.
I hope this helps,
John
--------------- snip ----------------
> mod4 <- specify.model()
1: X1 <-> X1, TD11, 0.30397
2: X2 <-> X2, TD22, 0.33656
3: X3 <-> X3, TD33, 0.48680
4: X4 <-> X4, TD44, 0.62441
5: X5 <-> X5, TD55, 0.78681
6: X6 <-> X6, TD66, 0.68547
7: X7 <-> X7, TD77, 0.79154
8: X8 <-> X8, TD88, 0.67417
9: X9 <-> X9, TD99, 0.60875
10: X10 <-> X10, TDaa, 0.37764
11: X11 <-> X11, TDbb, 0.74658
12: X12 <-> X12, TDcc, 0.85765
13: X1 <- xi1, LY11, 0.83428
14: X2 <- xi1, LY21, 0.81452
15: X3 <- xi1, LY31, 0.71638
16: X4 <- xi2, LY42, 0.61285
17: X5 <- xi2, LY52, 0.46173
18: X6 <- xi2, LY62, 0.56084
19: X7 <- xi3, LY73, 0.45658
20: X8 <- xi3, LY83, 0.57082
21: X9 <- xi3, LY93, 0.62550
22: X10 <- xi4, LXa4, 0.78890
23: X11 <- xi4, LXb4, 0.50340
24: X12 <- xi4, LXc4, 0.37729
25: xi1 <-> xi1, NA, 1
26: xi2 <-> xi2, NA, 1
27: xi3 <-> xi3, NA, 1
28: xi4 <-> xi4, NA, 1
29: xi1 <-> xi2, PH12, 0.13185
30: xi1 <-> xi3, PH13, 0.17445
31: xi2 <-> xi3, PH23, 0.25125
32: xi4 <-> xi1, PH41, 0.35819
33: xi4 <-> xi2, PH42, 0.12253
34: xi4 <-> xi3, PH43, 0.22137
35:
Read 34 records
> summary(sem(mod4, cor18, 500))
Model Chisquare = 80.675 Df = 48 Pr(>Chisq) = 0.0021920
Chisquare (null model) = 1106.4 Df = 66
Goodness-of-fit index = 0.9747
Adjusted goodness-of-fit index = 0.95888
RMSEA index = 0.036935 90% CI: (0.022163, 0.050657)
Bentler-Bonnett NFI = 0.92708
Tucker-Lewis NNFI = 0.95682
Bentler CFI = 0.9686
SRMR = 0.032512
BIC = -217.63
Normalized Residuals
Min. 1st Qu. Median Mean 3rd Qu. Max.
-1.71000 -0.23300 -0.00337 0.08850 0.26700 2.13000
Parameter Estimates
Estimate Std Error z value Pr(>|z|)
TD11 0.30641 0.037053 8.2694 2.2204e-16 X1 <--> X1
TD22 0.33226 0.037158 8.9419 0.0000e+00 X2 <--> X2
TD33 0.48899 0.039007 12.5358 0.0000e+00 X3 <--> X3
TD44 0.62205 0.076640 8.1165 4.4409e-16 X4 <--> X4
TD55 0.78652 0.063364 12.4126 0.0000e+00 X5 <--> X5
TD66 0.68780 0.070102 9.8114 0.0000e+00 X6 <--> X6
TD77 0.79474 0.062019 12.8144 0.0000e+00 X7 <--> X7
TD88 0.67378 0.069039 9.7595 0.0000e+00 X8 <--> X8
TD99 0.60536 0.075437 8.0247 1.1102e-15 X9 <--> X9
TDaa 0.39902 0.094378 4.2279 2.3590e-05 X10 <--> X10
TDbb 0.74223 0.060911 12.1854 0.0000e+00 X11 <--> X11
TDcc 0.84956 0.060891 13.9523 0.0000e+00 X12 <--> X12
LY11 0.83282 0.040846 20.3895 0.0000e+00 X1 <--- xi1
LY21 0.81715 0.041065 19.8990 0.0000e+00 X2 <--- xi1
LY31 0.71485 0.042041 17.0036 0.0000e+00 X3 <--- xi1
LY42 0.61478 0.066956 9.1818 0.0000e+00 X4 <--- xi2
LY52 0.46204 0.059887 7.7152 1.1990e-14 X5 <--- xi2
LY62 0.55875 0.064082 8.7192 0.0000e+00 X6 <--- xi2
LY73 0.45306 0.058293 7.7721 7.7716e-15 X7 <--- xi3
LY83 0.57116 0.062721 9.1064 0.0000e+00 X8 <--- xi3
LY93 0.62821 0.065434 9.6007 0.0000e+00 X9 <--- xi3
LXa4 0.77523 0.069569 11.1434 0.0000e+00 X10 <--- xi4
LXb4 0.50771 0.056580 8.9733 0.0000e+00 X11 <--- xi4
LXc4 0.38786 0.056614 6.8510 7.3350e-12 X12 <--- xi4
PH12 0.13207 0.064099 2.0604 3.9361e-02 xi2 <--> xi1
PH13 0.17417 0.063512 2.7423 6.1006e-03 xi3 <--> xi1
PH23 0.25059 0.077099 3.2503 1.1529e-03 xi3 <--> xi2
PH41 0.36109 0.055310 6.5285 6.6416e-11 xi1 <--> xi4
PH42 0.12606 0.072905 1.7292 8.3780e-02 xi2 <--> xi4
PH43 0.22301 0.071781 3.1068 1.8913e-03 xi3 <--> xi4
Iterations = 14
Warning message:
In sem.mod(mod4, cor18, 500) :
The following observed variables are in the input covariance or raw-moment
matrix but do not appear in the model:
X13, X14, X15, X16, X17, X18
>
------------------------------
John Fox, Professor
Department of Sociology
McMaster University
Hamilton, Ontario, Canada
web: socserv.mcmaster.ca/jfox
> -----Original Message-----
> From: r-help-bounces at r-project.org [mailto:r-help-bounces at r-project.org]
> On Behalf Of Xiaoxu LI
> Sent: December-22-08 3:58 PM
> To: r-help at r-project.org
> Subject: [R] sem package fails when no of factors increase from 3 to 4
>
> #### I checked through every 3 factor * 3 loading case.
> #### While, 4 factor * 3 loading failed.
> #### the data is 6 factor * 3 loading
>
> require(sem);
>
> cor18<-read.moments();
> 1
> .68 1
> .60 .58 1
> .01 .10 .07 1
> .12 .04 .06 .29 1
> .06 .06 .01 .35 .24 1
> .09 .13 .10 .05 .03 .07 1
> .04 .08 .16 .10 .12 .06 .25 1
> .06 .09 .02 .02 .09 .16 .29 .36 1
> .23 .26 .19 .05 .04 .04 .08 .09 .09 1
> .11 .13 .12 .03 .05 .03 .02 .06 .06 .40 1
> .16 .09 .09 .10 .10 .02 .04 .12 .15 .29 .20 1
> .24 .26 .22 .14 .06 .10 .06 .07 .08 .03 .04 .02 1
> .21 .22 .29 .07 .05 .17 .12 .06 .06 .03 .12 .04 .55 1
> .29 .28 .26 .06 .07 .05 .06 .15 .20 .10 .03 .12 .64 .61 1
> .15 .16 .19 .18 .08 .07 .08 .10 .06 .15 .16 .07 .25 .25 .16 1
> .24 .20 .16 .13 .15 .18 .19 .18 .14 .11 .07 .16 .19 .21 .22 .35 1
> .14 .25 .12 .09 .11 .09 .09 .11 .21 .17 .09 .05 .21 .23 .18 .39 .48 1
>
> mod3.1_9<-specify.model();
> X1 <-> X1,TD11,NA
> X2 <-> X2,TD22,NA
> X3 <-> X3,TD33,NA
> X4 <-> X4,TD44,NA
> X5 <-> X5,TD55,NA
> X6 <-> X6,TD66,NA
> X7 <-> X7,TD77,NA
> X8 <-> X8,TD88,NA
> X9 <-> X9,TD99,NA
> X1 <- xi1,LY11, NA
> X2 <- xi1,LY21, NA
> X3 <- xi1,LY31, NA
> X4 <- xi2,LY42, NA
> X5 <- xi2,LY52, NA
> X6 <- xi2,LY62, NA
> X7 <- xi3,LY73, NA
> X8 <- xi3,LY83, NA
> X9 <- xi3,LY93, NA
> xi1 <-> xi1,NA,1
> xi2 <-> xi2,NA,1
> xi3 <-> xi3,NA,1
> xi1 <-> xi2 ,PH12,NA
> xi1 <-> xi3 ,PH13,NA
> xi2 <-> xi3 ,PH23,NA
>
> mod3.1_6AND10_12<-specify.model();
> X1 <-> X1,TD11,NA
> X2 <-> X2,TD22,NA
> X3 <-> X3,TD33,NA
> X4 <-> X4,TD44,NA
> X5 <-> X5,TD55,NA
> X6 <-> X6,TD66,NA
> X10 <-> X10,TD77,NA
> X11 <-> X11,TD88,NA
> X12 <-> X12,TD99,NA
> X1 <- xi1,LY11, NA
> X2 <- xi1,LY21, NA
> X3 <- xi1,LY31, NA
> X4 <- xi2,LY42, NA
> X5 <- xi2,LY52, NA
> X6 <- xi2,LY62, NA
> X10 <- xi3,LY73, NA
> X11 <- xi3,LY83, NA
> X12 <- xi3,LY93, NA
> xi1 <-> xi1,NA,1
> xi2 <-> xi2,NA,1
> xi3 <-> xi3,NA,1
> xi1 <-> xi2 ,PH12,NA
> xi1 <-> xi3 ,PH13,NA
> xi2 <-> xi3 ,PH23,NA
>
> mod3.1_3AND7_9AND10_12<-specify.model();
> X1 <-> X1,TD11,NA
> X2 <-> X2,TD22,NA
> X3 <-> X3,TD33,NA
> X10 <-> X10,TD44,NA
> X11 <-> X11,TD55,NA
> X12 <-> X12,TD66,NA
> X7 <-> X7,TD77,NA
> X8 <-> X8,TD88,NA
> X9 <-> X9,TD99,NA
> X1 <- xi1,LY11, NA
> X2 <- xi1,LY21, NA
> X3 <- xi1,LY31, NA
> X10 <- xi2,LY42, NA
> X11 <- xi2,LY52, NA
> X12 <- xi2,LY62, NA
> X7 <- xi3,LY73, NA
> X8 <- xi3,LY83, NA
> X9 <- xi3,LY93, NA
> xi1 <-> xi1,NA,1
> xi2 <-> xi2,NA,1
> xi3 <-> xi3,NA,1
> xi1 <-> xi2 ,PH12,NA
> xi1 <-> xi3 ,PH13,NA
> xi2 <-> xi3 ,PH23,NA
>
> mod3.4_6AND7_9AND10_12<-specify.model();
> X10 <-> X10,TD11,NA
> X11 <-> X11,TD22,NA
> X12 <-> X12,TD33,NA
> X4 <-> X4,TD44,NA
> X5 <-> X5,TD55,NA
> X6 <-> X6,TD66,NA
> X7 <-> X7,TD77,NA
> X8 <-> X8,TD88,NA
> X9 <-> X9,TD99,NA
> X10 <- xi1,LY11, NA
> X11 <- xi1,LY21, NA
> X12 <- xi1,LY31, NA
> X4 <- xi2,LY42, NA
> X5 <- xi2,LY52, NA
> X6 <- xi2,LY62, NA
> X7 <- xi3,LY73, NA
> X8 <- xi3,LY83, NA
> X9 <- xi3,LY93, NA
> xi1 <-> xi1,NA,1
> xi2 <-> xi2,NA,1
> xi3 <-> xi3,NA,1
> xi1 <-> xi2 ,PH12,NA
> xi1 <-> xi3 ,PH13,NA
> xi2 <-> xi3 ,PH23,NA
>
> mod4<-specify.model();
> X1 <-> X1,TD11,NA
> X2 <-> X2,TD22,NA
> X3 <-> X3,TD33,NA
> X4 <-> X4,TD44,NA
> X5 <-> X5,TD55,NA
> X6 <-> X6,TD66,NA
> X7 <-> X7,TD77,NA
> X8 <-> X8,TD88,NA
> X9 <-> X9,TD99,NA
> X10 <-> X10,TDaa,NA
> X11 <-> X11,TDbb,NA
> X12 <-> X12,TDcc,NA
> X1 <- xi1,LY11, NA
> X2 <- xi1,LY21, NA
> X3 <- xi1,LY31, NA
> X4 <- xi2,LY42, NA
> X5 <- xi2,LY52, NA
> X6 <- xi2,LY62, NA
> X7 <- xi3,LY73, NA
> X8 <- xi3,LY83, NA
> X9 <- xi3,LY93, NA
> X10 <- xi4,LXa4,NA
> X11 <- xi4,LXb4,NA
> X12 <- xi4,LXc4,NA
> xi1 <-> xi1,NA,1
> xi2 <-> xi2,NA,1
> xi3 <-> xi3,NA,1
> xi4 <-> xi4,NA,1
> xi1 <-> xi2 ,PH12,NA
> xi1 <-> xi3 ,PH13,NA
> xi2 <-> xi3 ,PH23,NA
> xi4 <-> xi1,PH41,NA
> xi4 <-> xi2,PH42,NA
> xi4 <-> xi3,PH43,NA
>
> summary(sem(mod3.1_9,cor18,500))$RMSEA;
> summary(sem(mod3.1_6AND10_12,cor18,500))$RMSEA;
> summary(sem(mod3.1_3AND7_9AND10_12,cor18,500))$RMSEA;
> summary(sem(mod3.4_6AND7_9AND10_12,cor18,500))$RMSEA;
> summary(sem(mod4,cor18,500))$RMSEA;##fail
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 50
Date: Mon, 22 Dec 2008 16:36:55 -0800 (PST)
From: Thomas Lumley <tlumley at u.washington.edu>
Subject: Re: [R] svyglm and sandwich estimator of variance
To: Roberta Pereira Niquini <robertaniquini at ensp.fiocruz.br>
Cc: r-help at r-project.org
Message-ID:
<Pine.LNX.4.64.0812221633020.5556 at homer21.u.washington.edu>
Content-Type: text/plain; charset="iso-8859-1";
Format="flowed"
On Fri, 19 Dec 2008, Roberta Pereira Niquini wrote:
> Hi,
>
> I would like to estimate coefficients using Poisson regression and then get
> standard errors that are adjusted for heteroskedasticity, using complex
> sample survey data. Then I will calculate prevalence ratios and confidence
> intervals.
> Can the sandwich estimator of variance be used when observations aren't
> independent? In my case, observations are independent across groups
> (clusters), but not necessarily within groups. Can I calculate the standard
> errors with robust variance, in complex sample survey data using R?
The standard errors that svyglm() produces are already the sandwich
estimator and already correctly handle the clustering.
Use vcov() to extract the variance-covariance matrix, if you need it, or
SE() to extract the standard errors.
-thomas
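A minimal sketch of extracting those quantities (reusing the design_tarv and
banco.glm7 objects from the quoted message below; the prevalence-ratio lines
are an editorial illustration, not part of the original reply):
library(survey)
vcov(banco.glm7)                  # design-based (sandwich) variance-covariance matrix
SE(banco.glm7)                    # the corresponding standard errors
exp(coef(banco.glm7)["x1"])       # prevalence ratio for x1
exp(confint(banco.glm7))["x1", ]  # and a design-based confidence interval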
> Outputs:
>
> design_tarv<-svydesign(ids=~X2, strata=~X3, data=banco, weights=~X4)
>
> banco.glm7 <- svyglm(y ~ x1, data = banco, family = poisson(link = "log"),
>                      design = design_tarv)
> summary(banco.glm7)
>
> Call:
> svyglm(y ~ x1, data = banco, family = poisson(link = "log"),
> design = design_tarv)
>
> Survey design:
> svydesign(ids = ~X2, strata = ~X3, data = banco,
> weights = ~X4)
>
> Coefficients:
> Estimate Std. Error t value Pr(>|t|)
> (Intercept) -0.91893 0.04696 -19.570 < 2e-16 ***
> x1 0.19710 0.06568 3.001 0.00603 **
> ---
> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
>
> (Dispersion parameter for poisson family taken to be 0.5722583)
>
> Number of Fisher Scoring iterations: 5
>
>
> library(sandwich)
>
> vcovHC(banco.glm7)
> (Intercept) x1
> (Intercept) 4.806945e-13 -4.771409e-13
> x1 -4.771409e-13 7.127168e-13
>
> sqrt(diag(vcovHC(banco.glm7, type="HC0")))
> (Intercept) x1
> 6.923295e-07 8.426314e-07
>
> # I think this result isn't correct, because standard errors are so small.
>
>
> Thank you for the help,
> Roberta Niquini.
>
>
>
>
>
>
>
> --
> ENSP - Fiocruz
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
Thomas Lumley Assoc. Professor, Biostatistics
tlumley at u.washington.edu University of Washington, Seattle
------------------------------
Message: 51
Date: Mon, 22 Dec 2008 19:49:00 -0500
From: "jim holtman" <jholtman at gmail.com>
Subject: Re: [R] question about read datafile
To: "Lu, Zheng" <Zheng.Lu at mpi.com>
Cc: r-help at r-project.org
Message-ID:
<644e1f320812221649q30ce0a6bmfd346aa4c354b80 at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Read in the data using readLines to read the complete line. Use
grep/regexpr to scan for valid lines and then convert them to numeric
by using strsplit/as.numeric.
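A minimal sketch of that approach (the file name is a placeholder, and it
assumes the valid data lines are exactly those beginning with a digit or a
sign; the 12 values of each record are then reshaped row-wise):
lines <- readLines("tables.txt")                       # placeholder file name
keep  <- grep("^ *[-+0-9]", lines, value = TRUE)       # drop "TABLE NO. 1" and header rows
vals  <- as.numeric(unlist(strsplit(sub("^ +", "", keep), "[[:space:]]+")))
dat   <- as.data.frame(matrix(vals, ncol = 12, byrow = TRUE))
names(dat) <- c("ID", "GID", "TIME", "OBS", "AMT", "EVID",
                "RATE", "ADDL", "II", "CMT", "WT", "IPRE")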
On Mon, Dec 22, 2008 at 2:13 PM, Lu, Zheng <Zheng.Lu at mpi.com> wrote:
> Dear all:
>
>
>
> I have been trying to import the data file (.txt) shown below into R with
> read.table(.., skip=1, header=T). But how can I deal with the repeated
> rows of "TABLE NO. 1" and the variable names appearing in the middle of the
> data file? A similar block is repeated 100 times (only 4 are shown here),
> and the number of data records within each block can vary (only 4 rows are
> pasted as an example). I appreciate your consideration and help in
> [[elided Yahoo spam]]
>
>
> TABLE NO. 1
>
> ID GID TIME OBS AMT EVID
> RATE ADDL II CMT WT IPRE
>
> 3.1000E+01 1.0000E+00 0.0000E+00 0.0000E+00 1.0000E+00 1.0000E+00
> 0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 3.3918E+02
>
> 3.1000E+01 1.0000E+00 0.0000E+00 2.0500E+02 0.0000E+00 0.0000E+00
> 0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 2.6267E+02
>
> 3.1000E+01 1.0000E+00 9.6000E+01 4.2100E+02 0.0000E+00 0.0000E+00
> 0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 3.1781E+02
>
> TABLE NO. 1
>
> ID GID TIME OBS AMT EVID
> RATE ADDL II CMT WT IPRE
>
> 3.1000E+01 1.0000E+00 0.0000E+00 0.0000E+00 1.0000E+00 1.0000E+00
> 0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 5.7557E+01
>
> 3.1000E+01 1.0000E+00 0.0000E+00 2.0500E+02 0.0000E+00 0.0000E+00
> 0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 8.8583E+01
>
> 3.1000E+01 1.0000E+00 9.6000E+01 4.2100E+02 0.0000E+00 0.0000E+00
> 0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 1.7342E+02
>
> 3.1000E+01 1.0000E+00 1.6800E+02 5.3100E+02 0.0000E+00 0.0000E+00
> 0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 2.0179E+02
>
> TABLE NO. 1
>
> ID GID TIME OBS AMT EVID
> RATE ADDL II CMT WT IPRE
>
> 3.1000E+01 1.0000E+00 0.0000E+00 0.0000E+00 1.0000E+00 1.0000E+00
> 0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 1.4389E+02
>
> 3.1000E+01 1.0000E+00 0.0000E+00 2.0500E+02 0.0000E+00 0.0000E+00
> 0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 2.6147E+02
>
> 3.1000E+01 1.0000E+00 9.6000E+01 4.2100E+02 0.0000E+00 0.0000E+00
> 0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 2.2634E+02
>
> 3.1000E+01 1.0000E+00 1.6800E+02 5.3100E+02 0.0000E+00 0.0000E+00
> 0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 4.0733E+02
>
> TABLE NO. 1
>
> ID GID TIME OBS AMT EVID
> RATE ADDL II CMT WT IPRE
>
> 3.1000E+01 1.0000E+00 0.0000E+00 0.0000E+00 1.0000E+00 1.0000E+00
> 0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 2.2003E+02
>
> 3.1000E+01 1.0000E+00 0.0000E+00 2.0500E+02 0.0000E+00 0.0000E+00
> 0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 3.2116E+02
>
> 3.1000E+01 1.0000E+00 9.6000E+01 4.2100E+02 0.0000E+00 0.0000E+00
> 0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 3.3642E+02
>
> 3.1000E+01 1.0000E+00 1.6800E+02 5.3100E+02 0.0000E+00 0.0000E+00
> 0.0000E+00 0.0000E+00 0.0000E+00 4.0000E+00 2.4000E-02 4.7881E+02
>
> ...
>
> ...
>
> ...
>
>
>
> zheng
>
>
>
>
>
>
>
>
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Jim Holtman
Cincinnati, OH
+1 513 646 9390
What is the problem that you are trying to solve?
------------------------------
Message: 52
Date: Mon, 22 Dec 2008 18:15:22 -0700
From: "Ranney, Steven" <steven.ranney at montana.edu>
Subject: Re: [R] Summary information by groups programming assitance
To: <r-help at r-project.org>
Message-ID:
<677B91F53FD4074CB84B79754E8521C4026C92FB at GEMSTONES.msu.montana.edu>
Content-Type: text/plain
Thank you all for your help. I appreciate the assistance. I'm thinking I
should have been more specific in my original question.
Unless I'm mistaken, all of the suggestions so far have been for maximum vol
and maximum Length by Lake and psd. I'm trying to extract the max vol by
Lake and psd along with the corresponding value of Length. So, instead of
maximum vol and maximum Length, I'd like to find the max vol and the Length
associated with that value.
Sorry for any confusion,
SR
Steven H. Ranney
Graduate Research Assistant (Ph.D)
USGS Montana Cooperative Fishery Research Unit
Montana State University
P.O. Box 173460
Bozeman, MT 59717-3460
phone: (406) 994-6643
fax: (406) 994-7479
http://studentweb.montana.edu/steven.ranney
________________________________
From: Gabor Grothendieck [mailto:ggrothendieck at gmail.com]
Sent: Mon 12/22/2008 5:15 PM
To: Ranney, Steven
Cc: r-help at r-project.org
Subject: Re: [R] Summary information by groups programming assitance
Here are two solutions assuming DF is your data frame:
# 1. aggregate is in the base of R
aggregate(DF[c("Length", "vol")], DF[c("Lake",
"psd")], max)
or the following which is the same except it labels psd as Category:
aggregate(DF[c("Length", "vol")], with(DF, list(Lake = Lake,
Category
= psd)), max)
# 2. sqldf. The sqldf package allows specification using SQL notation:
library(sqldf)
sqldf("select Lake, psd as Category, max(Length), max(vol) from DF
group by Lake, psd")
There are many other good solutions too using various packages which
have already
been mentioned on this thread.
On Mon, Dec 22, 2008 at 4:51 PM, Ranney, Steven <steven.ranney at montana.edu> wrote:
> All -
>
> I have data that looks like
>
> psd Species Lake Length Weight St.weight Wr Wr.1 vol
> 432 substock SMB Clear 150 41.00 0.01 95.12438 95.10118 0.0105
> 433 substock SMB Clear 152 39.00 0.01 86.72916 86.70692 0.0105
> 434 substock SMB Clear 152 40.00 3.11 88.95298 82.03689 3.2655
> 435 substock SMB Clear 159 48.00 0.04 92.42095 92.34393 0.0420
> 436 substock SMB Clear 159 48.00 0.01 92.42095 92.40170 0.0105
> 437 substock SMB Clear 165 47.00 0.03 80.38023 80.32892 0.0315
> 438 substock SMB Clear 171 62.00 0.21 94.58105 94.26070 0.2205
> 439 substock SMB Clear 178 70.00 0.01 93.91912 93.90571 0.0105
> 440 substock SMB Clear 179 76.00 1.38 100.15760 98.33895 1.4490
> 441 S-Q SMB Clear 180 75.00 0.01 97.09330 97.08035 0.0105
> 442 S-Q SMB Clear 180 92.00 0.02 119.10111 119.07522 0.0210
> ...
> [truncated]
>
> where psd and lake are categorical variables, with five and four
> categories, respectively. I'd like to find the maximum vol and the
> lengths associated with each maximum vol by each category by each lake.
> In other words, I'd like to have a data frame that looks something like
>
> Lake Category Length vol
> Clear substock 152 3.2655
> Clear S-Q 266 11.73
> Clear Q-P 330 14.89
> ...
> Pickerel substock 170 3.4965
> Pickerel S-Q 248 10.69
> Pickerel Q-P 335 25.62
> Pickerel P-M 415 32.62
> Pickerel M-T 442 17.25
>
>
> In order to originally get this, I used
>
> with(smb[Lake=="Clear",], tapply(vol, list(Length, psd),max))
> with(smb[Lake=="Enemy.Swim",], tapply(vol, list(Length,
psd),max))
> with(smb[Lake=="Pickerel",], tapply(vol, list(Length, psd),max))
> with(smb[Lake=="Roy",], tapply(vol, list(Length, psd),max))
>
> and pulled the values I needed out by hand and put them into a .csv.
> Unfortunately, I've got a number of other data sets upon which I'll need
> to do the same analysis. Finding a programmable alternative would
> provide a much easier (and likely less error prone) method to achieve
> the same results. Ideally, the "Length" and "vol" data would be in a
> data frame such that I could then analyze with nls.
>
> Does anyone have any thoughts as to how I might accomplish this?
>
> Thanks in advance,
>
> Steven Ranney
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
[[alternative HTML version deleted]]
------------------------------
Message: 53
Date: Tue, 23 Dec 2008 00:16:23 +0100
From: "Mark Heckmann" <mark.heckmann at gmx.de>
Subject: [R] Problem in passing on an argument via ... how do I access
it?
To: <r-help at R-project.org>
Message-ID: <979CC698961E495F8D10A8E89B72229D at TCPC000>
Content-Type: text/plain; charset="us-ascii"
Hi r-experts,
I want to check if a certain argument has been passed on in a function call
via ...
ftest <- function(x1, ...) {
if(hasArg(y2)==TRUE) print(y2)
}
Now I call the function passing y2 via ... but I cannot access or use the
object.
ftest(y2= 2, x= 1)
> error in print(y2) : object "y2" not found
What am I doing wrong here? How can I access the object y2?
TIA and Merry Christmas,
Mark
------------------------------
Message: 54
Date: Mon, 22 Dec 2008 17:24:46 -0800 (PST)
Subject: [R] Simulate dataset using Parallel Latent CTT model in R
To: r-help at r-project.org
Message-ID: <21138080.post at talk.nabble.com>
Content-Type: text/plain; charset=us-ascii
All,
I want to simulate a dataset using the Parallel Latent CTT model in R; however, I don't
know how to start. Is there anyone who has done the same? Any help on this
will be greatly appreciated.
Regards
-NK
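As a starting point, a minimal, hedged sketch of simulating parallel items
under the classical test theory model (equal true-score loadings and equal
error variances; the sample size, item count and variances are arbitrary
placeholders):
set.seed(1)
n <- 500; k <- 5                    # examinees and items (placeholders)
theta <- rnorm(n)                   # latent true scores
sigma.e <- 1                        # common error SD shared by all items
X <- matrix(theta, n, k) + matrix(rnorm(n * k, sd = sigma.e), n, k)
colnames(X) <- paste("item", 1:k, sep = "")
round(cor(X), 2)                    # roughly equal inter-item correlations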
--
View this message in context:
http://www.nabble.com/Simulate-dataset-using-Parallel-Latent-CTT-model-in-R-tp21138080p21138080.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 55
Date: Mon, 22 Dec 2008 18:57:44 -0500
From: "Oren Cheyette" <ocheyett at bonddesk.com>
Subject: [R] nlsrob fails with puzzling error message on input
accepted by nls
To: <r-help at R-project.org>
Message-ID: <D02BD122255CC64197F577C75CF172B0228D4A at mimail2.bdg.local>
Content-Type: text/plain; charset="us-ascii"
I have a nonlinear model estimation problem with ~50,000 data records
and a simple 3 parameter model (logistic type - please don't tell me
that there are linear methods for such a problem). I run nls with
constraints once to get a good initial parameter guess, then try to run
nlrob to get improved estimates. The model is well-behaved for the
parameters that come from nls - no huge values, NAs or infinities for
the values of the independent variables. But nlrob fails immediately
(on the first pass) with the error message
> pxe2 <- nlrob(dpx ~ peFnc(tradeSide, tradeSz, tcScale, szScale, alpha),
>               data=fitData, start=pxe$m$getAllPars(), trace=TRUE);
robust iteration 1
2138.747 : 2.19 2.31 0.45
Error in numericDeriv(form[[3]], names(ind), env) :
<--------------------
Missing value or an infinity produced when evaluating the model
<-------------------
With debug(), I've traced the problem to the call to nls() inside nlrob.
For reasons I haven't been able to track down, when called outside nlrob
(with algorithm='port') it runs fine. But I get the error in nlrob, even
if I include algorithm='port' in the call.
Given the size of this problem, it's extremely difficult to identify the
inputs that are causing the failure. However, a fairly simple tweak to
the error reporting would simplify the task hugely. The error message is
coming (I think) from nls.c, at line 318:
for(i = 0; i < LENGTH(ans); i++) {
    if (!R_FINITE(REAL(ans)[i]))
        error(_("Missing value or an infinity produced when evaluating the model")); /* <---------- */
}
Would it be possible to add reporting of the record causing the problem,
e.g., by modifying the error line to
error(_("Missing value or an infinity produced when evaluating the
model at record %d"), i);
(I'm not a maintainer of the package and have been using the precompiled
binaries, so I'm hesitant to try to do this myself...)
Alternatively, does anyone have a suggestion as to how to identify the
source of the trouble?
R-Version: 2.7.2.
Platform: i386 (WinXP)
Thanks.
Oren Cheyette
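One untested sketch for locating the offending records without patching nls.c:
evaluate the model function directly at the nls starting values (peFnc, fitData
and pxe are the objects named in the message above) and flag non-finite results:
pars <- as.list(pxe$m$getAllPars())
pred <- with(c(as.list(fitData), pars),
             peFnc(tradeSide, tradeSz, tcScale, szScale, alpha))
bad <- which(!is.finite(pred))   # records where the model evaluates to NA/Inf
fitData[bad, ]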
------------------------------
Message: 56
Date: Mon, 22 Dec 2008 20:00:14 -0600
From: "hadley wickham" <h.wickham at gmail.com>
Subject: Re: [R] Summary information by groups programming assitance
To: "Ranney, Steven" <steven.ranney at montana.edu>
Cc: r-help at r-project.org
Message-ID:
<f8e6ff050812221800l59da230t2f5d1766548d966f at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
On Mon, Dec 22, 2008 at 7:15 PM, Ranney, Steven <steven.ranney at montana.edu> wrote:
> Thank you all for your help. I appreciate the assistance. I'm thinking I
> should have been more specific in my original question.
>
> Unless I'm mistaken, all of the suggestions so far have been for maximum
> vol and maximum Length by Lake and psd. I'm trying to extract the max vol
> by Lake and psd along with the corresponding value of Length. So, instead
> of maximum vol and maximum Length, I'd like to find the max vol and the
> Length associated with that value.
Try which.max along with any of the solutions previously mentioned.
Hadley
--
http://had.co.nz/
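A minimal sketch of that suggestion (DF as elsewhere in the thread): pick,
within each Lake/psd group, the row holding the maximum vol together with its
Length:
picked <- lapply(split(DF, DF[c("Lake", "psd")], drop = TRUE),
                 function(d) d[which.max(d$vol), c("Lake", "psd", "Length", "vol")])
do.call(rbind, picked)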
------------------------------
Message: 57
Date: Mon, 22 Dec 2008 21:14:03 -0500
From: "Gabor Grothendieck" <ggrothendieck at gmail.com>
Subject: Re: [R] Summary information by groups programming assitance
To: "Ranney, Steven" <steven.ranney at montana.edu>
Cc: r-help at r-project.org
Message-ID:
<971536df0812221814s2ae3a9b1jc80443df532d11e4 at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Just sort the data first and then apply any of the solutions but with
tail(x, 1)
instead of max, e.g.
DFo <- DF[order(DF$Lake, DF$Length, DF$vol), ]
aggregate(DFo[c("Length", "vol")], DFo[c("Lake",
"psd")], tail, 1)
On Mon, Dec 22, 2008 at 8:15 PM, Ranney, Steven <steven.ranney at montana.edu> wrote:
> Thank you all for your help. I appreciate the assistance. I'm thinking I
> should have been more specific in my original question.
>
> Unless I'm mistaken, all of the suggestions so far have been for maximum
> vol and maximum Length by Lake and psd. I'm trying to extract the max vol
> by Lake and psd along with the corresponding value of Length. So, instead
> of maximum vol and maximum Length, I'd like to find the max vol and the
> Length associated with that value.
> Sorry for any confusion,
>
> SR
>
> Steven H. Ranney
> Graduate Research Assistant (Ph.D)
> USGS Montana Cooperative Fishery Research Unit
> Montana State University
> P.O. Box 173460
> Bozeman, MT 59717-3460
>
> phone: (406) 994-6643
> fax: (406) 994-7479
>
> http://studentweb.montana.edu/steven.ranney
> ________________________________
>
> From: Gabor Grothendieck [mailto:ggrothendieck at gmail.com]
> Sent: Mon 12/22/2008 5:15 PM
> To: Ranney, Steven
> Cc: r-help at r-project.org
> Subject: Re: [R] Summary information by groups programming assitance
>
>
> Here are two solutions assuming DF is your data frame:
>
> # 1. aggregate is in the base of R
>
> aggregate(DF[c("Length", "vol")],
DF[c("Lake", "psd")], max)
>
> or the following which is the same except it labels psd as Category:
>
> aggregate(DF[c("Length", "vol")], with(DF, list(Lake =
Lake, Category
> = psd)), max)
>
>
> # 2. sqldf. The sqldf package allows specification using SQL notation:
>
> library(sqldf)
> sqldf("select Lake, psd as Category, max(Length), max(vol) from DF
> group by Lake, psd")
>
> There are many other good solutions too using various packages which
> have already
> been mentioned on this thread.
>
> On Mon, Dec 22, 2008 at 4:51 PM, Ranney, Steven
> <steven.ranney at montana.edu> wrote:
>> All -
>>
>> I have data that looks like
>>
>> psd Species Lake Length Weight St.weight Wr Wr.1 vol
>> 432 substock SMB Clear 150 41.00 0.01 95.12438 95.10118 0.0105
>> 433 substock SMB Clear 152 39.00 0.01 86.72916 86.70692 0.0105
>> 434 substock SMB Clear 152 40.00 3.11 88.95298 82.03689 3.2655
>> 435 substock SMB Clear 159 48.00 0.04 92.42095 92.34393 0.0420
>> 436 substock SMB Clear 159 48.00 0.01 92.42095 92.40170 0.0105
>> 437 substock SMB Clear 165 47.00 0.03 80.38023 80.32892 0.0315
>> 438 substock SMB Clear 171 62.00 0.21 94.58105 94.26070 0.2205
>> 439 substock SMB Clear 178 70.00 0.01 93.91912 93.90571 0.0105
>> 440 substock SMB Clear 179 76.00 1.38 100.15760 98.33895 1.4490
>> 441 S-Q SMB Clear 180 75.00 0.01 97.09330 97.08035 0.0105
>> 442 S-Q SMB Clear 180 92.00 0.02 119.10111 119.07522 0.0210
>> ...
>> [truncated]
>>
>> where psd and lake are categorical variables, with five and four
>> categories, respectively. I'd like to find the maximum vol and the
>> lengths associated with each maximum vol by each category by each lake.
>> In other words, I'd like to have a data frame that looks something like
>>
>> Lake Category Length vol
>> Clear substock 152 3.2655
>> Clear S-Q 266 11.73
>> Clear Q-P 330 14.89
>> ...
>> Pickerel substock 170 3.4965
>> Pickerel S-Q 248 10.69
>> Pickerel Q-P 335 25.62
>> Pickerel P-M 415 32.62
>> Pickerel M-T 442 17.25
>>
>>
>> In order to originally get this, I used
>>
>> with(smb[Lake=="Clear",], tapply(vol, list(Length, psd),max))
>> with(smb[Lake=="Enemy.Swim",], tapply(vol, list(Length,
psd),max))
>> with(smb[Lake=="Pickerel",], tapply(vol, list(Length,
psd),max))
>> with(smb[Lake=="Roy",], tapply(vol, list(Length, psd),max))
>>
>> and pulled the values I needed out by hand and put them into a .csv.
>> Unfortunately, I've got a number of other data sets upon which I'll need
>> to do the same analysis. Finding a programmable alternative would
>> provide a much easier (and likely less error prone) method to achieve
>> the same results. Ideally, the "Length" and "vol" data would be in a
>> data frame such that I could then analyze with nls.
>>
>> Does anyone have any thoughts as to how I might accomplish this?
>>
>> Thanks in advance,
>>
>> Steven Ranney
>>
>> ______________________________________________
>> R-help at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
>
>
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
------------------------------
Message: 58
Date: Mon, 22 Dec 2008 21:29:13 -0500
From: "Gabor Grothendieck" <ggrothendieck at gmail.com>
Subject: Re: [R] Summary information by groups programming assitance
To: "Ranney, Steven" <steven.ranney at montana.edu>
Cc: r-help at r-project.org
Message-ID:
<971536df0812221829rb9e3act23012ab9ccacb90d at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
The sorting should have been by Lake, psd and vol (not what I had)
so it should be revised to:
DFo <- DF[order(DF$Lake, DF$psd, DF$vol), ]
aggregate(DFo[c("Length", "vol")], DFo[c("Lake",
"psd")], tail, 1)
This is the same as before except DF$psd is used in place of DF$Length
in the first line.
On Mon, Dec 22, 2008 at 9:14 PM, Gabor Grothendieck <ggrothendieck at gmail.com> wrote:
> Just sort the data first and then apply any of the solutions but with tail(x, 1)
> instead of max, e.g.
>
> DFo <- DF[order(DF$Lake, DF$Length, DF$vol), ]
> aggregate(DFo[c("Length", "vol")], DFo[c("Lake", "psd")], tail, 1)
>
>
> On Mon, Dec 22, 2008 at 8:15 PM, Ranney, Steven
> <steven.ranney at montana.edu> wrote:
>> Thank you all for your help. I appreciate the assistance. I'm thinking I
>> should have been more specific in my original question.
>>
>> Unless I'm mistaken, all of the suggestions so far have been for maximum
>> vol and maximum Length by Lake and psd. I'm trying to extract the max vol
>> by Lake and psd along with the corresponding value of Length. So, instead
>> of maximum vol and maximum Length, I'd like to find the max vol and the
>> Length associated with that value.
>> Sorry for any confusion,
>>
>> SR
>>
>> Steven H. Ranney
>> Graduate Research Assistant (Ph.D)
>> USGS Montana Cooperative Fishery Research Unit
>> Montana State University
>> P.O. Box 173460
>> Bozeman, MT 59717-3460
>>
>> phone: (406) 994-6643
>> fax: (406) 994-7479
>>
>> http://studentweb.montana.edu/steven.ranney
>> ________________________________
>>
>> From: Gabor Grothendieck [mailto:ggrothendieck at gmail.com]
>> Sent: Mon 12/22/2008 5:15 PM
>> To: Ranney, Steven
>> Cc: r-help at r-project.org
>> Subject: Re: [R] Summary information by groups programming assitance
>>
>>
>> Here are two solutions assuming DF is your data frame:
>>
>> # 1. aggregate is in the base of R
>>
>> aggregate(DF[c("Length", "vol")],
DF[c("Lake", "psd")], max)
>>
>> or the following which is the same except it labels psd as Category:
>>
>> aggregate(DF[c("Length", "vol")], with(DF,
list(Lake = Lake, Category
>> = psd)), max)
>>
>>
>> # 2. sqldf. The sqldf package allows specification using SQL notation:
>>
>> library(sqldf)
>> sqldf("select Lake, psd as Category, max(Length), max(vol) from DF
>> group by Lake, psd")
>>
>> There are many other good solutions too using various packages which
>> have already
>> been mentioned on this thread.
>>
>> On Mon, Dec 22, 2008 at 4:51 PM, Ranney, Steven
>> <steven.ranney at montana.edu> wrote:
>>> All -
>>>
>>> I have data that looks like
>>>
>>> psd Species Lake Length Weight St.weight Wr Wr.1 vol
>>> 432 substock SMB Clear 150 41.00 0.01 95.12438 95.10118 0.0105
>>> 433 substock SMB Clear 152 39.00 0.01 86.72916 86.70692 0.0105
>>> 434 substock SMB Clear 152 40.00 3.11 88.95298 82.03689 3.2655
>>> 435 substock SMB Clear 159 48.00 0.04 92.42095 92.34393 0.0420
>>> 436 substock SMB Clear 159 48.00 0.01 92.42095 92.40170 0.0105
>>> 437 substock SMB Clear 165 47.00 0.03 80.38023 80.32892 0.0315
>>> 438 substock SMB Clear 171 62.00 0.21 94.58105 94.26070 0.2205
>>> 439 substock SMB Clear 178 70.00 0.01 93.91912 93.90571 0.0105
>>> 440 substock SMB Clear 179 76.00 1.38 100.15760 98.33895 1.4490
>>> 441 S-Q SMB Clear 180 75.00 0.01 97.09330 97.08035 0.0105
>>> 442 S-Q SMB Clear 180 92.00 0.02 119.10111 119.07522 0.0210
>>> ...
>>> [truncated]
>>>
>>> where psd and lake are categorical variables, with five and four
>>> categories, respectively. I'd like to find the maximum vol and the
>>> lengths associated with each maximum vol by each category by each lake.
>>> In other words, I'd like to have a data frame that looks something like
>>>
>>> Lake Category Length vol
>>> Clear substock 152 3.2655
>>> Clear S-Q 266 11.73
>>> Clear Q-P 330 14.89
>>> ...
>>> Pickerel substock 170 3.4965
>>> Pickerel S-Q 248 10.69
>>> Pickerel Q-P 335 25.62
>>> Pickerel P-M 415 32.62
>>> Pickerel M-T 442 17.25
>>>
>>>
>>> In order to originally get this, I used
>>>
>>> with(smb[Lake=="Clear",], tapply(vol, list(Length,
psd),max))
>>> with(smb[Lake=="Enemy.Swim",], tapply(vol, list(Length,
psd),max))
>>> with(smb[Lake=="Pickerel",], tapply(vol, list(Length,
psd),max))
>>> with(smb[Lake=="Roy",], tapply(vol, list(Length,
psd),max))
>>>
>>> and pulled the values I needed out by hand and put them into a .csv.
>>> Unfortunately, I've got a number of other data sets upon which I'll need
>>> to do the same analysis. Finding a programmable alternative would
>>> provide a much easier (and likely less error prone) method to achieve
>>> the same results. Ideally, the "Length" and "vol" data would be in a
>>> data frame such that I could then analyze with nls.
>>>
>>> Does anyone have any thoughts as to how I might accomplish this?
>>>
>>> Thanks in advance,
>>>
>>> Steven Ranney
>>>
>>> ______________________________________________
>>> R-help at r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-help
>>> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>>> and provide commented, minimal, self-contained, reproducible code.
>>>
>>
>>
>>
>>
>> [[alternative HTML version deleted]]
>>
>> ______________________________________________
>> R-help at r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
------------------------------
Message: 59
Date: Mon, 22 Dec 2008 21:54:54 -0500
From: David Winsemius <dwinsemius at comcast.net>
Subject: Re: [R] Problem in passing on an argument via ... how do I
access it?
To: Mark Heckmann <mark.heckmann at gmx.de>
Cc: r-help at r-project.org
Message-ID: <7A4DB3F5-66F0-4DE4-9803-A35C472F6DAA at comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
Try:
ftest <- function(x1, ...) {
    yargs <- list(...)
    if (hasArg(y2)) print("YES")
    return(yargs)
}
> ftest(2, y2 = 3)
[1] "YES"
$y2
[1] 3
> yt <- ftest(2, y2=3)
[1] "YES"
> yt
$y2
[1] 3
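Equivalently, a small sketch in which the ... arguments are captured with
list(...) and indexed by name, so the value is directly usable:
ftest <- function(x1, ...) {
  dots <- list(...)
  if ("y2" %in% names(dots)) print(dots$y2)
  invisible(dots)
}
ftest(y2 = 2, x = 1)   # prints 2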
On Dec 22, 2008, at 6:16 PM, Mark Heckmann wrote:
> Hi r-experts,
>
>
> I want to check if a certain argument has been passed on in a
> function call
> via ...
>
> ftest <- function(x1, ...) {
> if(hasArg(y2)==TRUE) print(y2)
> }
>
> Now I call the function passing y2 via ... but I cannot access or
> use the
> object.
>
> ftest(y2= 2, x= 1)
>
>> error in print(y2) : object "y2" not found
>
> What am I doing wrong here? How can I access the object y2?
>
>
> TIA and Merry Christmas,
> Mark
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 60
Date: Mon, 22 Dec 2008 22:44:51 -0500
From: "Sharma, Dhruv" <Dhruv.Sharma at PenFed.org>
Subject: [R] sorting regression coefficients by p-value
To: <r-help at r-project.org>
Message-ID:
<BF3D95EDD37088488ADB82FBBC5689CD0193B914 at
CHN-EXMBXPR01.pfcuhq.penfed.ads>
Content-Type: text/plain
Hi,
Is there a way to get/extract a matrix of regression variable name,
coefficient, and p values?
(for lm and glm; which can be sort by p value?)
thanks
Dhruv
[[alternative HTML version deleted]]
------------------------------
Message: 61
Date: Mon, 22 Dec 2008 23:58:15 -0500
From: "Stavros Macrakis" <macrakis at alum.mit.edu>
Subject: [R] Tabular output: from R to Excel or HTML
To: r-help at r-project.org
Message-ID:
<8b356f880812222058qaf38205i5088276233248e91 at mail.gmail.com>
Content-Type: text/plain
What is the equivalent for formatted tabular output of the various very
sophisticated plotting tools in R (plot, lattice, ggplot2)?
In particular, I'd like to be able to produce formatted Excel spreadsheets
(using color, fonts, borders, etc. -- probably via Excel XML) and formatted
HTML tables (ideally through a format-independent interface), and preview
them using commands within R, just as I would do with R graphics. The
reason I'd like to produce Excel or HTML rather than (say) TeX or PDF is to
make it easy for the readers of my results to manipulate them in their own
environments (usually Excel). There are various papers on the R-project.org
website related to this topic, but I haven't been able to find any
particular package supporting this functionality. I have found information
on importing from Excel, calling R functions from Excel, calling COM
interfaces from R, writing unformatted (CSV) data to Excel, etc., but not on
producing nicely-formatted tabular output.
I wouldn't have too much trouble putting together something quick-and-dirty
to produce HTML tables, but if someone's already done it well, I'd rather
take advantage of their work. I also don't know enough about COM to do
something as simple as to cause my HTML to display in a browser window or my
XMLSS in Excel....
Thanks,
-s
[[alternative HTML version deleted]]
------------------------------
Message: 62
Date: Tue, 23 Dec 2008 00:40:45 -0500
From: David Winsemius <dwinsemius at comcast.net>
Subject: Re: [R] sorting regression coefficients by p-value
To: "Sharma, Dhruv" <Dhruv.Sharma at PenFed.org>
Cc: r-help at r-project.org
Message-ID: <5AE9F369-F2AD-404B-BD27-2D46B5386222 at comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
Assuming that you are using the example in the lm help page:
ctl <- c(4.17,5.58,5.18,6.11,4.50,4.61,5.17,4.53,5.33,5.14)
trt <- c(4.81,4.17,4.41,3.59,5.87,3.83,6.03,4.89,4.32,4.69)
group <- gl(2,10,20, labels=c("Ctl","Trt")) weight <-
c(ctl, trt)
lm.D9 <- lm(weight ~ group)
# The coefficients are just :
coef(lm.D9)
# The relevant section of str(lm.D9):
$ coefficients : num [1:2, 1:4] 5.032 -0.371 0.22 0.311 22.85 ...
..- attr(*, "dimnames")=List of 2
.. ..$ : chr [1:2] "(Intercept)" "groupTrt"
.. ..$ : chr [1:4] "Estimate" "Std. Error" "t
value" "Pr(>|t|)"
> as.data.frame(summary(lm.D9)$coefficients)
Estimate Std. Error t value Pr(>|t|)
(Intercept) 5.032 0.2202177 22.850117 9.547128e-15
groupTrt -0.371 0.3114349 -1.191260 2.490232e-01
Setting X to that object,
cbind(rownames(X), X[, c("Estimate", "Pr(>|t|)")])
is what you asked for.
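And a short sketch completing the sort-by-p-value part of the question,
continuing the same example:
X <- summary(lm.D9)$coefficients
X[order(X[, "Pr(>|t|)"]), c("Estimate", "Pr(>|t|)"), drop = FALSE]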
--
David Winsemius
On Dec 22, 2008, at 10:44 PM, Sharma, Dhruv wrote:
> Hi,
> Is there a way to get/extract a matrix of regression variable name,
> coefficient, and p values?
> (for lm and glm; which can be sort by p value?)
>
> thanks
> Dhruv
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 63
Date: Tue, 23 Dec 2008 06:41:31 +0100
From: Tobias Verbeke <tobias.verbeke at telenet.be>
Subject: Re: [R] Tabular output: from R to Excel or HTML
To: Stavros Macrakis <macrakis at alum.mit.edu>
Cc: r-help at r-project.org
Message-ID: <49507A0B.1090604 at telenet.be>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Hi Stavros,
> What is the equivalent for formatted tabular output of the various very
> sophisticated plotting tools in R (plot, lattice, ggplot2)?
For the tabulation itself the reshape package by Hadley Wickham might be
a handy tool:
http://had.co.nz/reshape/
> In particular, I'd like to be able to produce formatted Excel spreadsheets
> (using color, fonts, borders, etc. -- probably via Excel XML) and formatted
> HTML tables (ideally through a format-independent interface), and preview
> them using commands within R, just as I would do with R graphics. The
> reason I'd like to produce Excel or HTML rather than (say) TeX or PDF is to
> make it easy for the readers of my results to manipulate them in their own
> environments (usually Excel). There are various papers on the R-project.org
> website related to this topic, but I haven't been able to find any
> particular package supporting this functionality. I have found information
> on importing from Excel, calling R functions from Excel, calling COM
> interfaces from R, writing unformatted (CSV) data to Excel, etc., but not on
> producing nicely-formatted tabular output.
>
> I wouldn't have too much trouble putting together something quick-and-dirty
> to produce HTML tables, but if someone's already done it well, I'd rather
> take advantage of their work. I also don't know enough about COM to do
> something as simple as to cause my HTML to display in a browser window or my
> XMLSS in Excel....
For the HTML part of your question, you can have a look at
- hwriter
- R2HTML
Both packages are on CRAN. See also
http://www.ebi.ac.uk/~gpau/hwriter/
HTH,
Tobias
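A minimal sketch with R2HTML (one of the packages mentioned above), writing a
table and previewing it from within R; the file name is a placeholder:
library(R2HTML)
tab <- head(mtcars)                 # any data frame or matrix
HTML(tab, file = "table.html")      # writes an HTML table
browseURL("table.html")             # preview in the default browser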
------------------------------
Message: 64
Date: Tue, 23 Dec 2008 00:45:00 -0500
From: David Winsemius <dwinsemius at comcast.net>
Subject: Re: [R] Tabular output: from R to Excel or HTML
To: "Stavros Macrakis" <macrakis at alum.mit.edu>
Cc: r-help at r-project.org
Message-ID: <C095DC66-2D8E-498E-9485-295E0275581B at comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
You should be looking at odfWeave. It has support for the OpenOffice
table formatting and once those are created, the conversion to Excel
should proceed smoothly.
--
David Winsemius
On Dec 22, 2008, at 11:58 PM, Stavros Macrakis wrote:
> What is the equivalent for formatted tabular output of the various
> very
> sophisticated plotting tools in R (plot, lattice, ggplot2)?
>
> In particular, I'd like to be able to produce formatted Excel
> spreadsheets
> (using color, fonts, borders, etc. -- probably via Excel XML) and
> formatted
> HTML tables (ideally through a format-independent interface), and
> preview
> them using commands within R, just as I would do with R graphics. The
> reason I'd like to produce Excel or HTML rather than (say) TeX or
> PDF is to
> make it easy for the readers of my results to manipulate them in
> their own
> environments (usually Excel). There are various papers on the R-
> project.org
> website related to this topic, but I haven't been able to find any
> particular package supporting this functionality. I have found
> information
> on importing from Excel, calling R functions from Excel, calling COM
> interfaces from R, writing unformatted (CSV) data to Excel, etc.,
> but not on
> producing nicely-formatted tabular output.
>
> I wouldn't have too much trouble putting together something quick-
> and-dirty
> to produce HTML tables, but if someone's already done it well, I'd
> rather
> take advantage of their work. I also don't know enough about COM to
> do
> something as simple as to cause my HTML to display in a browser
> window or my
> XMLSS in Excel....
>
> Thanks,
>
> -s
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
------------------------------
Message: 65
Date: Mon, 22 Dec 2008 17:49:26 -0800 (PST)
From: Stephen Oman <stephen.oman at gmail.com>
Subject: Re: [R] AR(2) coefficient interpretation
To: r-help at r-project.org
Message-ID: <21138255.post at talk.nabble.com>
Content-Type: text/plain; charset=us-ascii
As I urgently need your help, let me modify my question. I imported the
following data set into R and ran the statements I mentioned in my previous
reply:
Year Month Period a b c
1 2008 Jan 2008-Jan 105,536,785 9,322,074 9,212,111
2 2008 Feb 2008-Feb 137,239,037 10,986,047 11,718,202
3 2008 Mar 2008-Mar 130,237,985 10,653,977 11,296,096
4 2008 Apr 2008-Apr 133,634,288 10,582,305 11,729,520
5 2008 May 2008-May 161,312,530 13,486,695 13,966,435
6 2008 Jun 2008-Jun 153,091,141 12,635,693 13,360,372
7 2008 Jul 2008-Jul 176,063,906 13,882,619 15,202,934
8 2008 Aug 2008-Aug 193,584,660 14,756,116 16,083,263
9 2008 Sep 2008-Sep 180,894,120 13,874,154 14,524,268
10 2008 Oct 2008-Oct 196,691,055 14,998,119 15,802,627
11 2008 Nov 2008-Nov 184,977,893 13,748,124 14,328,875
and the AR result is
Call:
arima(x = a, order = c(2, 0, 0))
Coefficients:
ar1 ar2 intercept
0.4683 0.4020 5.8654
s.e. 0.2889 0.3132 2.8366
sigma^2 estimated as 4.115: log likelihood = -24.04, aic = 56.08
The minimum amount of a is more than 100 million, yet the intercept is 5.86
based on the result above.
If I placed all values into the formula then Xt = 5.8654 + 0.4683*(184,977,893)
+ 0.4020*(196,691,055) = 165,694,957.27. Do you think that makes sense? Did
I interpret the result incorrectly?
Also, I submitted the following statement for the prediction of the next period:
> predict<-predict(fit, n.ahead=1)
> predict
It returned the value of 9.397515 shown below, and I have no idea how to
interpret this value. Please help.
$pred
Time Series:
Start = 12
End = 12
Frequency = 1
[1] 9.397515
$se
Time Series:
Start = 12
End = 12
Frequency = 1
[1] 2.028483
Stephen Oman wrote:
>
> I am a beginner in using R and I need help in the interpretation of AR
> result by R. I used 12 observations for my AR(2) model and it turned out
> the intercept showed 5.23 while first and second AR coefficients showed
> 0.40 and 0.46. It is because my raw data are in million so it seems the
> intercept is too small and it doesn't make sense. Did I make any mistake
> in my code? My code is as follows:
>
> r<-read.table("data.txt", dec=",", header=T)
> attach(r)
> fit<-arima(a, c(2,0,0))
>
> Thank you for your help first.
>
>
--
View this message in context:
http://www.nabble.com/AR%282%29-coefficient-interpretation-tp21129322p21138255.html
Sent from the R help mailing list archive at Nabble.com.
------------------------------
Message: 66
Date: Mon, 22 Dec 2008 20:38:12 -0800 (PST)
Subject: [R] newbie problem using Design.rcs
To: r-help at r-project.org
Message-ID: <780080.14954.qm at web51505.mail.re2.yahoo.com>
Content-Type: text/plain; charset=us-ascii
Hi,
I read data from a file. I'm trying to understand how to use Design.rcs by
using simple test data first. I use 1000 integer values (1,...,1000) for x
(the predictor) with some noise (x+.02*x) and I set the response variable
y=x. Then, I try rcs and ols as follows:
m = ( sqrt(y1) ~ ( rcs(x1,3) ) ); #I tried without sqrt also
f = ols(m, data=data_train.df);
print(f);
[I plot original x1,y1 vectors and the regression as in
y <- coef2[1] + coef2[2]*x1 + coef2[3]*x1*x1]
But this gives me a VERY bad fit:
"
Linear Regression Model
ols(formula = m, data = data_train.df)
n Model L.R. d.f. R2 Sigma
1000 4573 2 0.9897 0.76
Residuals:
Min 1Q Median 3Q Max
-4.850930 -0.414008 -0.009648 0.418537 3.212079
Coefficients:
Value Std. Error t Pr(>|t|)
Intercept 5.90958 0.0672612 87.86 0
x1 0.03679 0.0002259 162.88 0
x1' -0.01529 0.0002800 -54.60 0
Residual standard error: 0.76 on 997 degrees of freedom
Adjusted R-Squared: 0.9897
"
[[elided Yahoo spam]]
Sincerely,
sp
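One likely source of the poor fit is the plotting step rather than ols(): with
rcs(x1, 3) the two coefficients (x1 and x1' in the output) multiply a
restricted-cubic-spline basis, not x and x^2. An untested sketch, reusing the
f, x1 and y1 objects from the message, of evaluating the fit directly:
yhat <- predict(f, data.frame(x1 = x1))      # fitted values of sqrt(y1)
ord <- order(x1)
plot(x1, y1)
lines(x1[ord], (yhat^2)[ord], col = "red")   # back-transform the sqrt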
------------------------------
Message: 67
Date: Tue, 23 Dec 2008 15:13:37 +0800
From: ronggui <ronggui.huang at gmail.com>
Subject: Re: [R] QCA adn Fuzzy
To: "Adrian DUSA" <dusa.adrian at gmail.com>
Cc: r-help at r-project.org
Message-ID:
<38b9f0350812222313j429cde4p54d03bc873a8f73c at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1
Dear Gott and Prof Adrian DUSA ,
I am learning fuzzy set QCA and recently, I just write a function to
construct a truthTable, which can be passed to QCA:::eqmcc to do the
Boolean minimization. The function is here:
http://code.google.com/p/asrr/source/browse/trunk/R/fs_truthTable.R
and the help page is:
http://code.google.com/p/asrr/source/browse/trunk/man/fs_truthTable.rd
and the example dataset from Ragin (2009) is here
http://code.google.com/p/asrr/source/browse/trunk/data/Lipset_fs.rda
Best
On Wed, Mar 8, 2006 at 2:13 AM, Adrian DUSA <dusa.adrian at gmail.com> wrote:
> Dear Prof. Gott,
>
> On Monday 06 March 2006 14:37, R Gott wrote:
>> Does anybody know of aything that will help me do Quantitiative
>> Comparative Analysis (QCA) and/or Fuzzy set analysis?? Or failing that
>> Quine?
>> ta
>> rg
>> Prof R Gott
>> Durham Univesrity
>> UK
>
> There is a package called QCA which (in its first release) performs only
> crisp set analysis. I am currently adapting a Graphical User Interface, but
> the functions are nevertheless useful.
> For fuzzy set analysis, please consider Charles Ragin's web site
> http://www.u.arizona.edu/%7Ecragin/fsQCA/index.shtml
> which offers a software (still not complete, though). Also to consider
> is a good software called Tosmana (http://www.tosmana.org/) which does
> multi-value
> QCA.
> I am considering writing the inclusion algorithms in the next releases of my
> package, but it is going to take a little while. Any contributions and/or
> feedback are more than welcome.
>
> I hope this helps you,
> Adrian
>
>
> --
> Adrian DUSA
> Romanian Social Data Archive
> 1, Schitu Magureanu Bd
> 050025 Bucharest sector 5
> Romania
> Tel./Fax: +40 21 3126618 \
> +40 21 3120210 / int.101
>
> ______________________________________________
> R-help at stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> [[elided Yahoo spam]]
> http://www.R-project.org/posting-guide.html
--
HUANG Ronggui, Wincent
Tel: (00852) 3442 3832
PhD Candidate, City University of Hong Kong
Website: http://ronggui.huang.googlepages.com/
RQDA project: http://rqda.r-forge.r-project.org/
------------------------------
Message: 68
Date: Tue, 23 Dec 2008 08:06:31 +0000 (GMT)
From: Prof Brian Ripley <ripley at stats.ox.ac.uk>
Subject: Re: [R] AR(2) coefficient interpretation
To: Stephen Oman <stephen.oman at gmail.com>
Cc: r-help at r-project.org
Message-ID: <alpine.LFD.2.00.0812230800500.5888 at gannet.stats.ox.ac.uk>
Content-Type: TEXT/PLAIN; format=flowed; charset=US-ASCII
You forgot to RTFM. From ?arima
Different definitions of ARMA models have different signs for the
AR and/or MA coefficients. The definition used here has
'X[t] = a[1]X[t-1] + ... + a[p]X[t-p] + e[t] + b[1]e[t-1] + ... + b[q]e[t-q]'
and so the MA coefficients differ in sign from those of S-PLUS.
Further, if 'include.mean' is true (the default for an ARMA
model), this formula applies to X - m rather than X.
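As a small worked sketch of what that means for the quoted fit (the
coefficients below are from the output above; the two lagged values are
purely hypothetical and on the same scale as the fitted series):
m <- 5.8654; a1 <- 0.4683; a2 <- 0.4020   # the "intercept" is the mean m
x1 <- 9.3; x2 <- 9.1                      # hypothetical last two observations
m + a1 * (x1 - m) + a2 * (x2 - m)         # one-step forecast, comparable to predict()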
Since you have not yet produced a reproducible example (at least in a
single email), we don't have enough information to reproduce your results.
But I hope we are not fitting AR(2) models to (potentially seasonal) time
series of length 11.
On Mon, 22 Dec 2008, Stephen Oman wrote:
>
> As I need your urgent help so let me modify my question. I imported the
> following data set to R and run the statements i mentioned in my previous
> reply
> Year Month Period a b c
> 1 2008 Jan 2008-Jan 105,536,785 9,322,074 9,212,111
> 2 2008 Feb 2008-Feb 137,239,037 10,986,047 11,718,202
> 3 2008 Mar 2008-Mar 130,237,985 10,653,977 11,296,096
> 4 2008 Apr 2008-Apr 133,634,288 10,582,305 11,729,520
> 5 2008 May 2008-May 161,312,530 13,486,695 13,966,435
> 6 2008 Jun 2008-Jun 153,091,141 12,635,693 13,360,372
> 7 2008 Jul 2008-Jul 176,063,906 13,882,619 15,202,934
> 8 2008 Aug 2008-Aug 193,584,660 14,756,116 16,083,263
> 9 2008 Sep 2008-Sep 180,894,120 13,874,154 14,524,268
> 10 2008 Oct 2008-Oct 196,691,055 14,998,119 15,802,627
> 11 2008 Nov 2008-Nov 184,977,893 13,748,124 14,328,875
>
> and the AR result is
> Call:
> arima(x = a, order = c(2, 0, 0))
>
> Coefficients:
> ar1 ar2 intercept
> 0.4683 0.4020 5.8654
> s.e. 0.2889 0.3132 2.8366
>
> sigma^2 estimated as 4.115: log likelihood = -24.04, aic = 56.08
>
> The minimum mount of a is more than 100 million and the intercept is 5.86
> based on the result above.
> If I placed all values into the formula then Xt = 5.8654 + 0.4683*(184,977,893)
> + 0.4020*(196,691,055) = 165,694,957.27. Do you think that makes sense? Did
> I interpret the result incorrectly?
>
> Also, i submit the following statement for the prediction of next period
>
>> predict<-predict(fit, n.ahead=1)
>> predict
>
> it came out the value of 9.397515 below and I have no idea about how to
> interpret this value. Please help.
>
> $pred
> Time Series:
> Start = 12
> End = 12
> Frequency = 1
> [1] 9.397515
>
> $se
> Time Series:
> Start = 12
> End = 12
> Frequency = 1
> [1] 2.028483
>
>
>
> Stephen Oman wrote:
>>
>> I am a beginner in using R and I need help in the interpretation of AR
>> result by R. I used 12 observations for my AR(2) model and it turned out
>> the intercept showed 5.23 while first and second AR coefficients showed
>> 0.40 and 0.46. It is because my raw data are in million so it seems the
>> intercept is too small and it doesn't make sense. Did I make any mistake
>> in my code? My code is as follows:
>>
>> r<-read.table("data.txt", dec=",", header=T)
>> attach(r)
>> fit<-arima(a, c(2,0,0))
>>
>> Thank you for your help first.
>>
>>
>
> --
> View this message in context:
> http://www.nabble.com/AR%282%29-coefficient-interpretation-tp21129322p21138255.html
> Sent from the R help mailing list archive at Nabble.com.
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
--
Brian D. Ripley, ripley at stats.ox.ac.uk
Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel: +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK Fax: +44 1865 272595
------------------------------
Message: 69
Date: Tue, 23 Dec 2008 08:15:19 +0000 (GMT)
From: Prof Brian Ripley <ripley at stats.ox.ac.uk>
Subject: Re: [R] Error: cannot allocate vector of size 1.8 Gb
To: iamsilvermember <m2chan at ucsd.edu>
Cc: r-help at r-project.org
Message-ID: <alpine.LFD.2.00.0812230810440.5888 at gannet.stats.ox.ac.uk>
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed
On Mon, 22 Dec 2008, iamsilvermember wrote:
>
>> dim(data)
> [1] 22283 19
>
>> dm=dist(data, method = "euclidean", diag = FALSE, upper =
FALSE, p = 2)
> Error: cannot allocate vector of size 1.8 Gb
That would be an object of size 1.8Gb.
See ?"Memory-limits"
>
>
>
> Hi Guys, thank you in advance for helping. :-D
>
> Recently I ran into the "cannot allocate vector of size 1.8GB"
error. I
am> pretty sure this is not a hardware limitation because it happens no matter
I> ran the R code in a 2.0Ghz Core Duo 2GB ram Mac or on a Intel Xeon
2x2.0Ghz> quard-core 8GB ram Linux server.
Why? Both will have a 3GB address-space limit unless the Xeon box is
64-bit. And this works on my 64-bit Linux boxes.
> I also tried to clear the workspace before running the code too, but it
> didn't seem to help...
>
> The weird thing, though, is that once in a while it will work, but then when I
> run clustering on the above result
>> hc=hclust(dm, method = "complete", members=NULL)
> it give me the same error...
See ?"Memory-limits" for the first part.
> I searched around already, but the memory.limit/memory.size methods do not
> seem to help. May I know what I can do to resolve this problem?
What are you going to do with an agglomerative hierarchical clustering of
22283 objects? It will not be interpretible.
> Thank you so much for your help.
--
Brian D. Ripley, ripley at stats.ox.ac.uk
Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel: +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK Fax: +44 1865 272595
------------------------------
Message: 70
Date: Tue, 23 Dec 2008 09:45:34 +0000
From: Richard.Cotton at hsl.gov.uk
Subject: [R] Borders for rectangles in lattice plot key
To: r-help at r-project.org
Message-ID:
<OF8D8703FA.9ED9C406-ON80257528.0034142C-80257528.00359CAB at hsl.gov.uk>
Content-Type: text/plain; charset="US-ASCII"
Hopefully an easy question. When drawing a rectangles in a lattice plot
key, how do you omit the black borders?
Here is an example adapted from one on the xyplot help page:
bar.cols <- c("red", "blue")
key.list <- list(
space="top",
rectangles=list(col=bar.cols),
text=list(c("foo", "bar"))
)
barchart(
yield ~ variety | site,
data = barley,
groups = year,
layout = c(1,6),
ylab = "Barley Yield (bushels/acre)",
scales = list(x = list(abbreviate = TRUE, minlength = 5)),
col=bar.cols,
border="transparent",
key=key.list
)
Notice the black borders around the rectangles in the key.
I checked to see if there was an undocumented border component for the
rectangles component of key that I could set to "transparent" or FALSE,
but no luck. I also tried setting lwd=0 on the rectangles component but
that didn't change anything either.
Regards,
Richie.
Mathematical Sciences Unit
HSL
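One possibility worth trying (an untested sketch; it assumes the legend
rectangles take their border from the superpose.polygon settings when the
legend is built with auto.key, and it reuses bar.cols from the message above):
barchart(
  yield ~ variety | site, data = barley, groups = year,
  layout = c(1, 6),
  ylab = "Barley Yield (bushels/acre)",
  scales = list(x = list(abbreviate = TRUE, minlength = 5)),
  border = "transparent",
  par.settings = list(superpose.polygon =
                        list(col = bar.cols, border = "transparent")),
  auto.key = list(space = "top", text = c("foo", "bar"),
                  rectangles = TRUE, points = FALSE)
)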
------------------------------
_______________________________________________
R-help at r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
End of R-help Digest, Vol 70, Issue 23