similar to: Apparent bug in summary.data.frame() with columns of Date class and NA's present

Displaying 20 results from an estimated 10000 matches similar to: "Apparent bug in summary.data.frame() with columns of Date class and NA's present"

2013 Apr 05
1
mixed formatting of integer and numeric (e.g., by summary.default())
Hello, eveRybody, I've been trying to find the origin of the following formatting "inconsistency": e.g., look at the number of digits in summary.default()'s output when NAs occur: in my example below the number of NA's is displayed as an integer, the rest as numeric (floating-point numbers): > summary.default( c( 1:2, NA)) Min. 1st Qu. Median Mean 3rd Qu.
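A minimal reproduction of the formatting difference, using only base R (the exact printed layout varies across R versions):

    x <- c(1:2, NA)
    s <- summary.default(x)    # same as summary(x) for an atomic vector
    s                          # the NA count prints as a whole number, the quantiles with decimals
    str(s)                     # internally everything, including the NA count, is stored as numeric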
2013 Sep 10
1
[PATCH] show vector length in summary()
(summary.default): show the vector length in addition to quantiles diff -u -i -p -F '^(def' -b -w -B /home/sds/src/R-3.0.1/src/library/base/R/summary.R.old /home/sds/src/R-3.0.1/src/library/base/R/summary.R --- /home/sds/src/R-3.0.1/src/library/base/R/summary.R.old 2013-03-05 18:02:33.000000000 -0500 +++ /home/sds/src/R-3.0.1/src/library/base/R/summary.R 2013-09-10 10:19:02.682946339
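The patch itself is only partially shown above. As a rough illustration of the idea (not the posted diff), the same information can be obtained without modifying base R:

    summarize_with_length <- function(x, ...) {
      # append the vector length to the usual summary() output
      c(summary(x, ...), Length = length(x))
    }
    summarize_with_length(c(1:10, NA))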
2013 Mar 12
5
extract values
Hello all! I have a problem extracting values greater than, for example, 1820. I try this code: x[x[,1]>1820,]->x1 Please help me! Thank you! The data structure is: structure(c(2.576, 1.728, 3.434, 2.187, 1.928, 1.886, 1.2425, 1.23, 1.075, 1.1785, 1.186, 1.165, 1.732, 1.517, 1.4095, 1.074, 1.618, 1.677, 1.845, 1.594, 1.6655, 1.1605, 1.425, 1.099, 1.007, 1.1795, 1.3855, 1.4065, 1.138, 1.514,
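Assuming x is the matrix or data frame begun by the (truncated) structure() call and its first column holds the variable being thresholded, the usual idiom is:

    x1 <- x[x[, 1] > 1820, , drop = FALSE]   # keep rows whose first column exceeds 1820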
2000 Dec 19
1
Bug in glm.fit() or plot.lm() (PR#778)
Here's a bug one of my students noticed. When you call plot() on a glm object, plot.lm gets called. The second plot it shows is supposed to give a normal QQ plot of the standardized deviance residuals, but it doesn't. The glm object created by glm.fit returns something (the IRLS weights?) in fit$weights which plot.lm takes as observation weights, so you get strange residuals in the QQ
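A workaround sketch (the model below is only a stand-in for a fitted glm object) is to build the Q-Q plot from the standardized deviance residuals directly instead of relying on plot.lm:

    fit <- glm(cbind(ncases, ncontrols) ~ agegp + tobgp + alcgp,
               family = binomial, data = esoph)
    r <- rstandard(fit)        # standardized deviance residuals
    qqnorm(r); qqline(r)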
2013 Mar 13
2
merge data
Hello all! I have a problem with R. I am trying to merge data like this: structure(c(2.1785, 1.868, 2.1855, 2.5175, 2.025, 2.435, 1.809, 1.628, 1.327, 1.3485, 1.4335, 2.052, 2.2465, 2.151, 1.7945, 1.79, 1.6055, 1.616, 1.633, 1.665, 2.002, 2.152, 1.736, 1.7985, 1.9155, 1.7135, 1.548, 1.568, 1.713, 2.079, 1.875, 2.12, 2.072, 1.906, 1.4645, 1.3025, 1.407, 1.5445, 1.437, 1.463, 1.5235, 1.609, 1.738, 1.478,
2010 Aug 24
3
odd behavior of "summary" function
Hello All, Using the standard "summary" function in R, I ran across some odd behavior that I cannot understand. Easy to reproduce: typing summary(c(6,207936)) yields: Min. 1st Qu. Median Mean 3rd Qu. Max. 6 51990 104000 104000 156000 207900 None of these values are correct except for the minimum. If I perform "quantile(c(6,
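The quantiles are not wrong; summary() simply formats its output to a few significant digits by default, while quantile() does not. A quick check in base R:

    x <- c(6, 207936)
    summary(x)                # rounded for display
    summary(x, digits = 12)   # same quantiles, shown in full
    quantile(x)               # quantile() agrees with the unrounded values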
2013 Apr 07
2
group data in classes
Hello all! I have a problem grouping my data (years) into 10-year classes. For example: year 1598 -> decade 1590-1600, 1599 -> 1590-1600, 1600 -> 1590-1600, 1601 -> 1600-1610. My data is like this: [1] 1598 1599 1600 1601 1602 1603 1604 1605 1606 1607 1608 1609 1610 1611 1612 [16] 1613 1614 1615 1616 1617 1618 1619 1620 1621 1622 1623 1624 1625 1626 1627 [31] 1628 1629 1630 1631 1632 1633
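A sketch with cut(), assuming (as in the example above) that a year such as 1600 stays in the 1590-1600 class, i.e. right-closed intervals:

    years  <- 1598:1633
    breaks <- seq(1590, 1640, by = 10)
    decade <- cut(years, breaks = breaks, right = TRUE,
                  labels = paste(head(breaks, -1), tail(breaks, -1), sep = "-"))
    data.frame(year = years, decade = decade)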
2008 Nov 04
1
perform Kruskal-Wallis test without using the built-in command in R
Hi, again I am stuck on my presentation, and I have never learned R before in my life, but this needs to be done, so please help me out as a favour: http://www.nabble.com/file/p20333155/kew.dat kew.dat Run this in R and this comes up: Month Year Rain 1 Jan 1900 74.400000 2 Feb 1900 80.500000 3 Mar 1900 23.600000 4 Apr 1900 23.600000 5 May 1900 25.100000 6
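A sketch of the Kruskal-Wallis statistic computed directly from ranks (no tie correction, unlike kruskal.test()), assuming the Rain and Month columns of the kew.dat file mentioned in the post:

    kw_stat <- function(y, g) {
      g  <- factor(g)
      r  <- rank(y)                      # ranks of the pooled observations
      N  <- length(y)
      Ri <- tapply(r, g, sum)            # rank sum per group
      ni <- tapply(r, g, length)         # group sizes
      H  <- 12 / (N * (N + 1)) * sum(Ri^2 / ni) - 3 * (N + 1)
      c(statistic = H,
        p.value   = pchisq(H, df = nlevels(g) - 1, lower.tail = FALSE))
    }
    # kew <- read.table("kew.dat", header = TRUE)
    # kw_stat(kew$Rain, kew$Month)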
2005 Apr 28
3
have to point it out again: a distribution question
Stock returns and other financial data have often been found to be heavy-tailed. Even Cauchy distributions (without even a first absolute moment) have been entertained as models. Your qq function subtracts numbers on the scale of a normal (0,1) distribution from the input data. When the input data are scaled so that they are insignificant compared to 1, say, then you get essentially the
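A sketch of the point about reference distributions: comparing heavy-tailed data against Cauchy rather than normal quantiles (the simulated vector is only a stand-in for returns data):

    set.seed(1)
    x <- rcauchy(500)                                 # stand-in for heavy-tailed returns
    qqnorm(x)                                         # strongly non-linear against a normal reference
    qqplot(qcauchy(ppoints(length(x))), x,
           xlab = "Cauchy quantiles", ylab = "Sample quantiles")
    abline(0, 1)                                      # roughly linear against a Cauchy reference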
2006 Oct 12
1
Should NA's in summary() output always be reported???
Consider > summary(1:5) Min. 1st Qu. Median Mean 3rd Qu. Max. 1 2 3 3 4 5 > summary(c(1:5,NA)) Min. 1st Qu. Median Mean 3rd Qu. Max. NA's 1 2 3 3 4 5 1 Wouldn't it be more consistent if "NA's" were also reported in the first case? Regards Søren
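A small base-R wrapper that always reports an NA count, along the lines the post suggests:

    summary_with_na <- function(x) {
      s <- summary(x)
      if (!("NA's" %in% names(s))) s <- c(s, "NA's" = 0)   # always include an NA count
      s
    }
    summary_with_na(1:5)           # now shows NA's = 0
    summary_with_na(c(1:5, NA))    # NA's = 1, as before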
2013 Mar 29
1
problem with data
Hello all! I have a problem with my data in R. When I want to plot the following data, I have a problem with the y scale. The maximum value is ca. 10 degrees, but in R it is about 100. I use this code: fasy<-read.table("gridd1.txt",sep="\t",dec=",",header=T,row.names=1) # here are the years: x <- as.numeric(rownames(fasy)) # extract a series that you want to plot: y
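One common cause of an unexpected y scale is a column being read as text (e.g. a decimal-mark mismatch) and later coerced. A sketch of the checks, reusing the file and object names from the post:

    fasy <- read.table("gridd1.txt", sep = "\t", dec = ",", header = TRUE, row.names = 1)
    str(fasy)                      # every series should be numeric, not character/factor
    x <- as.numeric(rownames(fasy))
    y <- fasy[[1]]                 # the series to plot
    range(y, na.rm = TRUE)         # should be on the expected scale (about 10, not 100)
    plot(x, y, type = "l")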
2017 May 28
3
Rounding in print.summaryDefault()
Dear all I am happy that summary.default() no longer rounds since R 3.4.0. However, in R 3.4.0, in a few cases, print.summaryDefault() rounds the mean value (and the median value) differently on my GNU/Linux machine and on my colleague's MS-Windows machine. Here is a small (simplified) reproducible example: R> a <- 1234568.01 + c(0:1) R> summary(a) Output on MS-Windows (expected
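The example, completed with the print step (the platform-dependent output is deliberately not reproduced here):

    a <- 1234568.01 + c(0:1)
    s <- summary(a)
    s                             # formatting is chosen by print.summaryDefault()
    print(s, digits = 15)         # enough digits to show the stored, unrounded values
    format(mean(a), digits = 15)  # the unrounded mean, for comparison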
2017 May 28
1
Rounding in print.summaryDefault()
Might this be related to the Linux version? I'm testing on one of our university servers, and they tend to be deprived of regular updates sometimes... (Dirk, sorry for sending you this twice.) > Sys.info() sysname "Linux" release
2006 Jan 04
1
Difficulty with 'merge'
Dear R-helpers, Happy New Year to all the helpful members of the list. Here is the behavior I'm looking for: > v1 <- c("a","b","c") > n1 <- c(0, 1, 2) > v2 <- c("c", "a", "b") > n2 <- c(0, 1 , 2) > (f1 <- data.frame(v1, n1)) v1 n1 1 a 0 2 b 1 3 c 2 > (f2 <- data.frame(v2, n2))
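With the two data frames defined as above, the key columns have different names, so merge() needs by.x/by.y (a sketch of the presumed intent):

    v1 <- c("a", "b", "c"); n1 <- c(0, 1, 2)
    v2 <- c("c", "a", "b"); n2 <- c(0, 1, 2)
    f1 <- data.frame(v1, n1)
    f2 <- data.frame(v2, n2)
    merge(f1, f2, by.x = "v1", by.y = "v2")   # one row per letter, carrying both n1 and n2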
2004 Sep 21
2
Ever see a Stata import problem like this?
Greetings Everybody: I generated a 1.2MB .dta file based on the General Social Survey with Stata 8 for Linux. The file can be re-opened with Stata, but when I bring it into R, it says all the values are missing for most of the variables. This dataset is called "morgen.dta" and I dropped a copy online in case you are interested: http://www.ku.edu/~pauljohn/R/morgen.dta It looks like this
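A sketch of the usual import with the foreign package; turning off factor conversion is a common first check when labelled Stata variables arrive as all-missing:

    library(foreign)
    d1 <- read.dta("morgen.dta")                           # value labels become factors
    d2 <- read.dta("morgen.dta", convert.factors = FALSE)  # keep the raw numeric codes
    colSums(is.na(d1))                                     # which variables look empty?
    colSums(is.na(d2))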
2004 Mar 03
5
get.hist.quote - is great, but am I missing something?
I find it's just great to be able to say: library(tseries) x <- get.hist.quote(instrument="ongc.ns") and it gets a full time-series of the stock price of the symbol ongc.ns from Yahoo quote. However, once my hopes have been raised by such beauty :-) I get disappointed when I do > plot(x) and the annotation is horrible! The x axis is not labelled as dates. The default
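A sketch of getting a date-labelled axis by asking get.hist.quote() for a zoo series (whether the download still works depends on the data provider):

    library(tseries)
    library(zoo)
    x <- get.hist.quote(instrument = "ongc.ns", quote = "Close", retclass = "zoo")
    plot(x, xlab = "Date", ylab = "Close")   # plot.zoo labels the x axis with dates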
2005 May 31
1
apply the function "factor" to multiple columns
I have a case where I would like to change multiple columns containing numbers to factors. I can change each column one at a time as in: TEMP.FACT$EXPOS01<-factor(TEMP.FACT$EXPOS01,levels=c(1,2,3),labels=c("None","Low Impact","MedHigh Imp")) TEMP.FACT$EXPOS02<-factor(TEMP.FACT$EXPOS02,levels=c(1,2,3),labels=c("None","Low
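The repeated per-column calls can be collapsed into one lapply() over the relevant columns (a sketch, reusing the data frame name and the labels from the first statement above):

    expos_cols <- grep("^EXPOS", names(TEMP.FACT), value = TRUE)
    TEMP.FACT[expos_cols] <- lapply(TEMP.FACT[expos_cols], factor,
                                    levels = c(1, 2, 3),
                                    labels = c("None", "Low Impact", "MedHigh Imp"))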
2011 Oct 31
1
reshape2: Lost Values Between melt() and dcast()
I am working with 5 subset streams from my source data frame; three of them cast successfully with dcast(), but two fail: jerritt.cast <- dcast(jerritt.melt, site + sampdate ~ param) Aggregation function missing: defaulting to length and winters.cast <- dcast(winters.melt, site + sampdate ~ param) Aggregation function missing: defaulting to length Yet both data frames have the values in their
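"Aggregation function missing" means some site + sampdate + param combinations occur more than once in the melted data. A sketch for locating the duplicates and, if aggregation is actually wanted, naming it explicitly (assuming melt()'s default "value" column):

    library(reshape2)
    # how many rows share each site/sampdate/param combination?
    n <- with(jerritt.melt, ave(value, site, sampdate, param, FUN = length))
    head(jerritt.melt[n > 1, ])                      # the offending rows, if any

    # or aggregate explicitly instead of defaulting to length
    jerritt.cast <- dcast(jerritt.melt, site + sampdate ~ param,
                          value.var = "value", fun.aggregate = mean)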
2012 Feb 23
1
Sexpr not getting expanded in Sweave
An Sweave file, 'test.Rnw': \documentclass{article} \title{Sweave minimal} \author{MK} \begin{document} \maketitle We try Sweave: <<1>>= data(airquality) summary(airquality) x <- airquality[1, 1] @ I try Sexpr: \Sexpr{x} We plot: \begin{center} <<2, fig=TRUE, echo=FALSE >>= boxplot(Ozone ~ Month, data = airquality) @ \end{center} \end{document} I check the
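\Sexpr{x} is substituted at the Sweave step, not by LaTeX, so the file that gets typeset must be the generated test.tex. A minimal processing sketch:

    Sweave("test.Rnw")           # writes test.tex with \Sexpr{x} replaced by the value of x
    tools::texi2pdf("test.tex")  # then typeset the generated file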
2012 Oct 30
2
issues with krige function
Greetings all, I ran into a strange problem with the krige function from geoR. The problem I am having is that while the krige function seems to run without error, the resulting predicted values are all NAs. Given the size of the datasets I am working with, I can't attach them, but I can provide snippets of the datasets. > casedata station year month day obs mpe bias type
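A first debugging step (base R only, using the casedata frame shown in the post) is to check the inputs for missing values, since NA coordinates or covariates typically propagate to NA predictions:

    colSums(is.na(casedata))   # NA counts per column of the observation data
    # repeat the same check on the prediction grid / covariate data passed to the kriging call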