search for: 0.227

Displaying 20 results from an estimated 49 matches for "0.227".

1999 Oct 25
2
leaps: XHAUST returned error code -999
Hi there. This problem has been dogging me for a bit, and I'm trying to figure out why. When running the subsets function in the leaps library, R is giving me the following error message:

> lvodsub <- subsets(pred, resp$LVOD)
Warning message:
XHAUST returned error code -999 in: leaps.exhaustive(a, really.big = really.big)

but this still happens if I add the really.big option:
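For reference (not part of the original post), a minimal sketch of how an exhaustive subset search is usually run with the current leaps interface; pred and resp$LVOD are the poster's objects, and the nbest/nvmax values are only illustrative:

  library(leaps)
  ## exhaustive best-subset search; really.big = TRUE is required once the
  ## predictor matrix is wide, and nvmax caps the largest subset size examined
  fit <- regsubsets(x = pred, y = resp$LVOD, nbest = 1, nvmax = 8,
                    really.big = TRUE)
  summary(fit)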
2013 Feb 23
2
assign index to colnames(matrix)
Hello, I’m trying to follow the syntax of a script from a journal website. In order to create a regression formula used later in the script, the regression matrix must have column names “X1”, “X2”, etc. I have tried to assign these column names to my matrix ScoutRSM.mat using a for loop, but I don’t know how to interpret the error message. Suggestions? Thanks, Paul
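A sketch (not from the thread) of the usual vectorised way to build such names for the poster's matrix ScoutRSM.mat; note that indexed assignment inside a loop only works once column names already exist:

  ## assign all names at once (no loop needed)
  colnames(ScoutRSM.mat) <- paste0("X", seq_len(ncol(ScoutRSM.mat)))

  ## if a loop is kept, the names must exist before indexed assignment works
  colnames(ScoutRSM.mat) <- character(ncol(ScoutRSM.mat))
  for (i in seq_len(ncol(ScoutRSM.mat))) {
    colnames(ScoutRSM.mat)[i] <- paste0("X", i)
  }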
2018 May 31
2
mysterious rounding digits output
Well pointed out, Jim! It is unfortunate that the documentation for options(digits=...) does not mention that these are *significant digits* and not *decimal places* (which is what Joshua seems to want): "'digits': controls the number of digits to print when printing numeric values." On the face of it, printing the value "0.517" of 'ccc' looks like printing 4
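A small illustration of that distinction (not from the thread): options(digits = ...) sets significant digits for printing, while sprintf()/format() give fixed decimal places.

  options(digits = 3)        # 3 *significant* digits for printing
  print(0.517)               # 0.517
  print(123.4567)            # 123      -- no decimals survive at all
  print(0.0012345)           # 0.00123
  sprintf("%.3f", 123.4567)  # "123.457" -- fixed *decimal places* instead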
2018 May 31
0
mysterious rounding digits output
>>>>> Ted Harding >>>>> on Thu, 31 May 2018 07:10:32 +0100 writes: > Well pointed out, Jim! > It is unfortunate that the documentation for options(digits=...) > does not mention that these are *significant digits* > and not *decimal places* (which is what Joshua seems to want): Since R 3.4.0 the help on ?options *does* say
2008 Sep 03
1
R puts '+' within my numbers
Hello, my test.R file contains two huge arrays (>3000 entries), from which R needs to calculate the Pearson correlation. If I look at the file, the numbers look correct. If I run R with

R < test.R --no-save

I see things like this: 0.723, 0.838, 1.002, 0.364, 0.357, 0.227, 0.982+ , 0.963, 0.535, 1.214, 1.270, 0.832, 1.033, 0.632, 2.482, 1.239, 0.743, 1.077, 0.962, 1.052, 1.075, 1.427, 1.395,
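As a side note (not from the thread): the '+' is R's continuation prompt being echoed together with the input when a script is piped into the interactive front end; it is not part of the numbers. A couple of hedged ways to avoid the echo:

  ## run the script non-interactively, so prompts are never printed:
  ##   Rscript test.R
  ## or, from within R, source it without echoing the input
  source("test.R", echo = FALSE)
  ## or change the continuation prompt to something harmless
  options(continue = " ")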
2018 May 31
0
mysterious rounding digits output
Hi Joshua, Because there are no values in column ddd less than 1.

itemInfo[3, "ddd"] <- 0.3645372
itemInfo
           aaa   bbb   ccc   ddd    eee
skill    1.396 6.225 0.517 5.775  2.497
predict  1.326 5.230 0.462 5.116 -2.673
waiting  1.117 4.948    NA 0.365     NA
complex  1.237 4.170 0.220 4.713  5.642
novelty  1.054 4.005 0.442 4.260  2.076
creative 1.031 3.561 0.362 3.689
2018 May 31
3
mysterious rounding digits output
R version 3.5.0 (2018-04-23) -- "Joy in Playing"
Platform: x86_64-pc-linux-gnu (64-bit)

options(digits=3)
itemInfo <- structure(list(
  "aaa" = c(1.39633732316667, 1.32598263816667, 1.11658324066667,
            1.23651072616667, 1.05368679983333, 1.03100737383333,
            0.9630728395, 0.7483865045, 0.620086646166667,
            0.5411017985, 0.496397607833333, 0.459528044666667,
            0.427877047833333,
2012 Oct 03
1
How to draw a graph after model selection?
I am very new to R; I basically used SPSS to do my model selection, for which I used a generalized linear model. My best model is P = D + T + L + T*L, and there is a parameter table in the SPSS output, so I suppose I have to use the coefficients (column B) in that table (as attached) when I draw my graph in R. I want to draw a graph in R where the x-axis is D, using the model and the relevant
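A minimal sketch of doing this entirely in R rather than from the SPSS coefficients; it assumes a data frame 'dat' with columns P, D, T and L (names taken from the model above), and the gaussian family is only a guess:

  fit <- glm(P ~ D + T + L + T:L, data = dat, family = gaussian)
  ## predicted P over the observed range of D, holding T and L at their means
  nd <- data.frame(D = seq(min(dat$D), max(dat$D), length.out = 100),
                   T = mean(dat$T), L = mean(dat$L))
  plot(nd$D, predict(fit, newdata = nd, type = "response"),
       type = "l", xlab = "D", ylab = "predicted P")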
2003 Apr 24
1
write.table problem
Dear R helpers, I have been using the loadings function from the multiv library and I get the typical output (see below). When I try to export these results to a file using write.table() I get the following error message "Error in as.data.frame.default(x[[i]], optional = TRUE) : can't coerce loadings into a data.frame" Any idea why write.table is doing that and any
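One common workaround (a sketch, not verified against the multiv package): strip the "loadings" class so write.table() sees a plain matrix; 'ld' here stands for whatever object the loadings function returned:

  ## coerce the loadings object to an ordinary numeric matrix first
  m <- unclass(ld)
  write.table(m, file = "loadings.txt", quote = FALSE, sep = "\t")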
2008 Mar 25
1
Subset of matrix
Dear R users, I have a big matrix like

        6021  1188   790   290  1174  1015  1990  6613  6288 100714
6021       1 0.658 0.688 0.474 0.262 0.163 0.137  0.32 0.252  0.206
1188   0.658     1 0.917 0.245 0.331 0.122 0.148 0.194 0.168  0.171
790    0.688 0.917     1 0.243  0.31 0.122  0.15  0.19 0.171  0.174
290    0.474
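The excerpt cuts off before the actual question, but for reference, basic named-matrix subsetting on a matrix like the one above (called 'm' here, purely for illustration):

  ## a sub-block selected by row and column names
  m[c("6021", "1188"), c("790", "290")]
  ## rows whose correlation with column "6021" exceeds some threshold
  m[m[, "6021"] > 0.5, ]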
2010 Dec 21
2
please Help me on a repeated measures anova
I am currently working on a draft of an aquatic bioassessment. The conditions tested are the following: ER, river water; T, dechlorinated water control; T + 0.5, dechlorinated water control + 0.5 mg/L of malate; T + 1, dechlorinated water control + 1 g/L of malate; ED, dechlorinated water control; SED ER, river water + sediment; SED ED, dechlorinated water + sediment. The endpoint is AChE in muscle (fish fillet). The production of
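A heavily hedged sketch of one standard repeated-measures layout in base R; all names (dat, AChE, condition, time, fish) are placeholders, since the excerpt does not show the poster's actual variables:

  ## between-subject factor 'condition', within-subject factor 'time',
  ## one AChE measurement per fish per time point
  fit <- aov(AChE ~ condition * time + Error(fish/time), data = dat)
  summary(fit)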
2024 Dec 17
1
R_CheckUserInterrupt() can be a performance bottleneck within GUIs
tl;dr: R_CheckUserInterrupt() can be a performance bottleneck within GUIs. This also affects functions in the 'stats' package, which could be improved by changing the position of calls to R_CheckUserInterrupt(). Dear all, Recently I was puzzled because some code in a package under development, which consisted almost entirely of a .Call() to a function written in
2019 Jul 30
1
[PATCH net-next v5 0/5] vsock/virtio: optimizations to increase the throughput
On Tue, Jul 30, 2019 at 11:54:53AM -0400, Michael S. Tsirkin wrote: > On Tue, Jul 30, 2019 at 05:43:29PM +0200, Stefano Garzarella wrote: > > This series tries to increase the throughput of virtio-vsock with slight > > changes. > > While I was testing the v2 of this series I discovered a huge use of memory, > > so I added patch 1 to mitigate this issue. I put it in this
2011 Jan 31
4
Select rows with distinct values in a column and other conditions
My data frame looks like:

SightingID PA1 PA2 PlotID InOverlap Area1
      2001   1 -99    392         Y  0.22
      2002   1 -99    388         Y 0.253
      2008   1  NA    104         N 0.344
      2010   1  NA     71         N 0.185
      2012   1  NA     61         N 0.166
      2013   1  NA     61         N 0.227
      2014   1  NA     62
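A sketch of the usual base-R idiom for this kind of selection, with 'df' standing in for the data frame above:

  ## keep one row per distinct PlotID (the first sighting encountered)
  first_per_plot <- df[!duplicated(df$PlotID), ]
  ## combine with further conditions, e.g. only plots in the overlap zone
  subset(first_per_plot, InOverlap == "Y" & PA1 == 1)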
2024 Dec 17
1
R_CheckUserInterrupt() can be a performance bottleneck within GUIs
A more generic solution would be for R to throttle calls to R_CheckUserInterrupt(), because it makes no sense to check 1000 times per second whether a user has interrupted, but it is difficult for the caller to know when R_CheckUserInterrupt() was last called, or to do it regularly without overdoing it. Here is a simple patch: https://github.com/r-devel/r-svn/pull/125 See also:
2017 Jun 17
1
(no subject)
I have 4 years of data and, for each year, I have initialized the value, so now, for fitting the model, I want to remove the initial value and build the model on the remaining data set. Could anyone help with this? I want to fit a linear model based on the fourth column and the 13th column, but I need to remove the initial value for each year and each treatment (in the second column I have 1:36). Thank you,
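A hedged sketch of one way to drop the first observation of every year-by-treatment group and then fit the model; 'dat' and the column positions follow the description above, but which columns hold year and treatment is an assumption:

  ## within-group row index per year (assumed column 1) and treatment (column 2);
  ## keep everything except the first row of each group
  idx  <- ave(seq_len(nrow(dat)), dat[[1]], dat[[2]], FUN = seq_along)
  keep <- idx > 1
  ## linear model relating the 4th column to the 13th column on the trimmed data
  fit <- lm(dat[keep, 4] ~ dat[keep, 13])
  summary(fit)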
2006 Jun 14
2
lmer binomial model overestimating data?
Hi folks, Warning: I don't know if the result I am getting makes sense, so this may be a statistics question. The fitted values from my binomial lmer mixed model seem to consistently overestimate the cell means, and I don't know why. I assume I am doing something stupid. Below I include code, and a binary image of the data is available at this link:
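Not the poster's code (the original used lmer with a binomial family; in current lme4 the equivalent call is glmer): a small sketch of comparing observed cell means with average fitted probabilities, using placeholder names y, treatment and subject:

  library(lme4)
  ## binomial mixed model: fixed treatment effect, random intercept per subject
  fit <- glmer(y ~ treatment + (1 | subject), data = dat, family = binomial)
  ## average observed response vs. average fitted probability per treatment cell
  aggregate(data.frame(observed = dat$y, fitted = fitted(fit)),
            by = list(treatment = dat$treatment), FUN = mean)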
2008 Aug 08
2
aggregate
Dear All- I have a dataset that is comprised of the following:

      doy   yr mon day hr hgt1 hgt2 hgt3    co21    co22    co23   sig1   sig2  sig3    dif flag
244.02083 2005  09  01 00  2.6  9.5 17.8 375.665 373.737 373.227  3.698  1.107 0.963 -0.509  PRE
 244.0625 2005  09  01 01  2.6  9.5 17.8  393.66 384.773 379.466 15.336 11.033  5.76 -5.307  PRE
244.10417 2005  09  01 02  2.6  9.5 17.8 411.162 397.866 387.755  6.835   5.61 6.728
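For illustration only (the actual question is cut off above): the list interface of aggregate() on data like this, assuming it is stored in a data frame 'dat':

  ## daily means of the three CO2 columns, grouped by year / month / day
  aggregate(dat[, c("co21", "co22", "co23")],
            by = list(yr = dat$yr, mon = dat$mon, day = dat$day),
            FUN = mean, na.rm = TRUE)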
2024 Dec 18
2
R_CheckUserInterrupt() can be a performance bottleneck within GUIs
It seems benign, but it has implications, since checking the time is actually not a cheap operation: adding just a time check alone incurs a penalty of ca. 700% compared with the time it takes to call R_CheckUserInterrupt(). Generally, it makes no sense to check for interrupts at every iteration - you'll find code like if (++i % 10000 == 0) R_CheckUserInterrupt(); in loops to make sure it's not called
2024 Dec 17
1
R_CheckUserInterrupt() can be a performance bottleneck within GUIs
This seems like a great idea. Would it help to escalate this to a post on R-bugzilla, so it is less likely to fall through the cracks? On 12/17/24 09:51, Jeroen Ooms wrote: > A more generic solution would be for R to throttle calls to > R_CheckUserInterrupt(), because it makes no sense to check 1000 times > per second if a user has interrupted, but it is difficult for the > caller to