search for: 0.69

Displaying 20 results from an estimated 177 matches for "0.69".

2012 Apr 08
2
xyplot() does not plot legends with "relation=free" scales
Hi all, I have a problem with lattice: xyplot() won't draw some of my axis labels when the relation component of scales is set to "free". For example, in the plot below, I would want it to also show: 1. the labels E1,...E6 below the 10th panel (i.e. 3rd row, 2nd column), just as is now done below the 12th panel, and 2. the labels (2,4,6,8) at the top of panels 1
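A minimal sketch of the call pattern being discussed; the data, panel labels, and layout below are invented for illustration and do not reproduce the original plot:

library(lattice)

## Toy data: 12 panels, 6 x-positions each (made up for illustration)
d <- expand.grid(x = 1:6, panel = paste0("P", 1:12))
d$y <- rnorm(nrow(d))

## xyplot with per-panel ("free") scales, the setting the question is about
xyplot(y ~ x | panel, data = d,
       layout = c(4, 3),
       scales = list(relation = "free"))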
2016 Apr 22
2
Finding Highest value in groups
Hi, I have two columns in a data frame. The first column is an "ID" assigned to each group of my data (the same ID marks one group). From the second column, I want to identify the highest value within each group and assign the same ID to that highest value. Right now the data looks like: ID Value 1 0.69 1 0.31 2 0.01 2 0.99 3 1.00 4 NA 4
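A minimal reproducible sketch of the situation described, using only the complete ID/Value pairs visible in the snippet, and one base-R way to keep the highest value per group:

## Complete ID/Value pairs from the message (the last group is cut off)
dta <- data.frame(ID    = c(1, 1, 2, 2, 3, 4),
                  Value = c(0.69, 0.31, 0.01, 0.99, 1.00, NA))

## Flag, within each ID, the row holding the group's highest value
## (an all-NA group, like ID 4 here, triggers a warning and is dropped)
is_max <- with(dta, ave(Value, ID, FUN = function(v) v == max(v, na.rm = TRUE)))
dta[!is.na(dta$Value) & is_max == 1, ]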
2010 Nov 26
2
get list index
Hi R-users, I have a list: mylist <- list(c(0.79, 0.92, 0.91, 0.86, 0.96, 0.96, 0.95, 0.94, 0.99), c(0.28, 0.45, 0.59, 0.69, 0.80, 0.87, 0.95, 0.94, 0.98), c(0.29, 0.39, 0.59, 0.69, 0.68, 0.80, 0.93, 0.95, 0.98)) Is there a way to find the index of the list element that contains the lowest value among all elements? As the lowest value in each element is the first, the
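A short sketch of one way to get the index; the list is copied from the message, and which.min() over the per-element minima is a suggestion, not necessarily the answer given in the thread:

mylist <- list(c(0.79, 0.92, 0.91, 0.86, 0.96, 0.96, 0.95, 0.94, 0.99),
               c(0.28, 0.45, 0.59, 0.69, 0.80, 0.87, 0.95, 0.94, 0.98),
               c(0.29, 0.39, 0.59, 0.69, 0.68, 0.80, 0.93, 0.95, 0.98))

## Index of the list element that contains the overall lowest value
which.min(sapply(mylist, min))   # 2, because 0.28 is the smallest value anywhere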
2008 May 02
0
Adaptive design code
I have been trying to create code to calculate the power for an adaptive design with a survival endpoint according to the method of Schafer and Muller ('Modification of the sample size and the schedule of interim analyses in survival trials based on interim inspections,' Stats in Med, 2001). This design allows for the sample size to be increased (if necessary) based on an interim look at
2016 Apr 22
0
Finding Highest value in groups
Assuming your data frame is in a variable x: > require(dplyr) > x %>% group_by(ID) %>% summarise(maxVal = max(Value,na.rm=TRUE)) On Fri, 2016-04-22 at 13:51 +0000, Saba Sehrish via R-help wrote: > Hi > > > I have two columns in data frame. First column is based on "ID" assigned to each group of my data (similar ID depicts one group). From second column, I
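A self-contained version of the suggested dplyr pipeline, with a toy data frame standing in for the poster's data:

library(dplyr)

x <- data.frame(ID    = c(1, 1, 2, 2, 3, 4),
                Value = c(0.69, 0.31, 0.01, 0.99, 1.00, NA))

## max() with na.rm = TRUE returns -Inf (with a warning) for an all-NA group
x %>%
  group_by(ID) %>%
  summarise(maxVal = max(Value, na.rm = TRUE))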
2016 Apr 22
1
Finding Highest value in groups
Since the aggregate S3 method for class formula already has na.action = na.omit, ## S3 method for class 'formula' aggregate(formula, data, FUN, ..., subset, na.action = na.omit) I think that to deal with NAs it is enough to use: aggregate(Value~ID, dta, max) Moreover, passing na.rm = FALSE/TRUE makes no difference: aggregate(Value~ID, dta, max, na.rm=FALSE)
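A self-contained version of the aggregate() approach, with toy data in place of `dta` from the reply:

dta <- data.frame(ID    = c(1, 1, 2, 2, 3, 4),
                  Value = c(0.69, 0.31, 0.01, 0.99, 1.00, NA))

## The formula method's default na.action (na.omit) drops the NA row,
## so an all-NA group such as ID 4 simply does not appear in the result
aggregate(Value ~ ID, dta, max)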
2010 Oct 12
2
R Profiling
Dear All, I need to do some very basic R profiling, something along the lines of: run this whole script five times and return the average completion time. I do not want (at this stage) to delve into the details of the percentage of the time spent in which function and doing what. Which tools should I use? Any recommendation is welcome. Best Regards Lorenzo
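A minimal sketch of one way to do exactly this with base R's system.time(); "my_script.R" is a placeholder for the actual script:

## Run the script 5 times and report the mean elapsed time in seconds
times <- replicate(5, system.time(source("my_script.R"))["elapsed"])
mean(times)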
2012 Aug 11
3
help counting in data
Hi, I have this data: > X [1] 5.79 1579.52 2323.70 68.85 426.07 110.29 108.29 1067.60 17.05 22.66 [11] 21.02 175.88 139.07 144.12 20.46 43.40 194.90 47.30 7.74 0.40 [21] 82.85 9.88 89.29 215.10 1.75 0.79 15.93 3.91 0.27 0.69 [31] 100.58 27.80 13.95 53.24 0.96 4.15 0.19 0.78 8.01 31.75 [41] 7.35 6.50
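The snippet is cut off before the counting rule is stated, so the following is only a guess at the intent (counting how many values fall in given ranges), shown on the first ten values from the message:

X <- c(5.79, 1579.52, 2323.70, 68.85, 426.07, 110.29, 108.29, 1067.60, 17.05, 22.66)

## Count how many values fall below 10, between 10 and 100, and above 100
table(cut(X, breaks = c(0, 10, 100, Inf)))

## Or count values above a single threshold
sum(X > 100)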
2008 Jun 18
4
inverse cumsum
I have a matrix like this: 1985 1.38 1.27 1.84 2.10 0.59 3.47 1986 1.05 1.13 1.21 1.54 0.21 2.14 1987 1.33 1.21 1.77 1.44 0.27 2.85 1988 1.86 1.06 2.33 2.14 0.55 1.40 1989 2.10 0.65 2.74 2.43 1.19 1.45 1990 1.55 0.00 1.59 1.94 0.99 2.14 1991 0.92
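The message is cut off, but if "inverse cumsum" means recovering the original increments from running totals, diff() with the first element kept does it; a sketch on a toy vector and on the columns of a toy matrix (not the matrix from the message):

## Undo cumsum() on a vector: keep the first element, then take differences
x   <- c(1.38, 1.05, 1.33, 1.86, 2.10)
cs  <- cumsum(x)
inv <- c(cs[1], diff(cs))
all.equal(inv, x)    # TRUE

## Column-wise version for a matrix of running totals
m <- apply(matrix(runif(12), nrow = 4), 2, cumsum)
apply(m, 2, function(col) c(col[1], diff(col)))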
2008 Jun 02
4
NOT-SO-SIMPLE function!
I am trying to set up a function which processes my data according to the following rules: 1. if (x[i]==0) NA 2. if (x[i]>0) log(x[i]/(number of consecutive zeros immediately preceding it +1)) The data this will apply to include a variety of whole numbers not limited to 1 & 0, a number of which may appear consecutively and not separated by zeros. Below is an example with a detailed
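A sketch of one way to implement the two rules as stated, tracking the run of zeros immediately preceding each element with a simple loop; this is one reading of the rules, not the solution from the thread:

f <- function(x) {
  out   <- numeric(length(x))
  zeros <- 0                            # consecutive zeros seen so far
  for (i in seq_along(x)) {
    if (x[i] == 0) {
      out[i] <- NA                      # rule 1: zeros become NA
      zeros  <- zeros + 1
    } else {
      out[i] <- log(x[i] / (zeros + 1)) # rule 2
      zeros  <- 0                       # reset the run after a non-zero value
    }
  }
  out
}

f(c(3, 0, 0, 2, 5, 0, 1))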
2006 Jul 20
2
how to print table with more columns per row?
When printing a table it is broken at some point (depending on how long the associated names are) >>> see example below. Is there a way to control the number of columns being printed for a given chunk of the table? Best regards, Ryszard > z5 AAAAAAA BBBBBBB CCCCCCC DDDDDDD EEEEEEE FFFFFFF GGGGGGG HHHHHHH IIIIIII AAAAAAA 1.00 -0.69 -0.54 -0.88 NA NA NA
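The usual lever for this is the console line width; a small sketch on a toy matrix (whether it is enough here depends on how long the names are):

## Widen the printed line so more columns fit per chunk
old <- options(width = 200)
m <- matrix(round(rnorm(81), 2), 9, 9,
            dimnames = list(LETTERS[1:9], LETTERS[1:9]))
print(m)
options(old)   # restore the previous width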
2007 Jan 06
2
Bootstrapping Confidence Intervals for Medians
I apologize for this post. I am new to R (two days) and I have tried and tried to calculate confidence intervals for medians. Can someone help me? Here is my data: institution1 0.21 0.16 0.32 0.69 1.15 0.9 0.87 0.87 0.73 The first four observations compose group 1 and observations 5 through 9 compose group 2. I would like to create a bootstrapped 90% confidence interval on the difference of
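A sketch of one way to bootstrap a 90% interval for the difference of group medians with the boot package, using the nine values from the message split into the two groups as described; the percentile interval is one of several choices:

library(boot)

g1 <- c(0.21, 0.16, 0.32, 0.69)          # observations 1-4
g2 <- c(1.15, 0.90, 0.87, 0.87, 0.73)    # observations 5-9

dat <- data.frame(y     = c(g1, g2),
                  group = rep(c(1, 2), c(length(g1), length(g2))))

## Statistic: difference of group medians, resampling within each group
med_diff <- function(d, idx) {
  d <- d[idx, ]
  median(d$y[d$group == 1]) - median(d$y[d$group == 2])
}

set.seed(1)
b <- boot(dat, med_diff, R = 2000, strata = dat$group)
boot.ci(b, conf = 0.90, type = "perc")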
2010 Jan 30
2
question about time series objects
Hi All, I have a very simple question about a time series object: how to access values for a particular year and quarter (say)? Suppose, following http://www.stat.pitt.edu/stoffer/tsa2/R_time_series_quick_fix.htm I have read in data as a time series; here is how it looks: Qtr1 Qtr2 Qtr3 Qtr4 1960 0.71 0.63 0.85 0.44 1961 0.61 0.69 0.92 0.55 . . . . .
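A short sketch with a made-up quarterly series (only the first two years match the values shown); window() is the standard way to pull out particular years or quarters:

## Quarterly series starting in 1960 Q1; values after 1961 are invented
z <- ts(c(0.71, 0.63, 0.85, 0.44,
          0.61, 0.69, 0.92, 0.55,
          0.72, 0.77, 0.90, 0.60),
        start = c(1960, 1), frequency = 4)

window(z, start = c(1961, 2), end = c(1961, 2))   # the single value for 1961 Q2
window(z, start = c(1961, 1), end = c(1961, 4))   # all of 1961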
2008 Apr 09
1
simple intro to cluster analysis using R
I am looking for a simple introduction to cluster analysis using R that would be understandable to a novice in statistics. Or, could someone perhaps help me understand how to proceed in my analysis? I am very new to both statistics and R, but am trying hard to avoid having to use SPSS as everyone around me... I have a dataset on people presenting their opinions on different religious
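As a starting point, a very small sketch of a standard workflow (hierarchical clustering on scaled numeric variables), run on a built-in dataset rather than the opinion data, which is not reproduced here:

## Hierarchical clustering on a built-in numeric dataset
d  <- scale(USArrests)                  # standardise the variables
hc <- hclust(dist(d), method = "ward.D2")
plot(hc)                                # dendrogram
cutree(hc, k = 3)                       # assign each observation to one of 3 clusters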
2003 Apr 11
2
princomp with not non-negative definite correlation matrix
$ R --version R 1.6.1 (2002-11-01). I would like to perform principal components analysis on a 16x16 correlation matrix, [princomp(covmat = x) where x is the correlation matrix], but the problem is that princomp complains that it is not non-negative definite. I called eigen() on the correlation matrix and found that one of the eigenvalues is close to zero and negative (-0.001832311). Is there any way
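One common workaround is to nudge the matrix to the nearest positive semi-definite one before calling princomp(), either with Matrix::nearPD() or by clamping the small negative eigenvalue by hand; a sketch of the latter on a toy rank-deficient correlation matrix standing in for the 16x16 one:

## Toy correlation matrix with (near-)zero and possibly negative eigenvalues
set.seed(1)
x <- cor(matrix(rnorm(40), nrow = 5))   # 8 variables, only 5 observations

## Clamp negative eigenvalues at zero, rebuild, and rescale to a correlation matrix
e     <- eigen(x, symmetric = TRUE)
x_psd <- e$vectors %*% diag(pmax(e$values, 0)) %*% t(e$vectors)
x_psd <- cov2cor(x_psd)

princomp(covmat = x_psd)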
2008 Sep 26
2
ANOVA between & within variance
Hi, is there an option to calculate the 'within' and 'between' group variances for a simple ANOVA (aov) model (2 groups, 1 trait, normally distributed)? Or do I have to calculate them from the Sum Sq? Thanks for your time and greetings, gregor -- Gregor Rolshausen PhD Student; University of Freiburg, Germany e-mail: gregor.rolshausen at biologie.uni-freiburg.de tel. :
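A minimal sketch of where these quantities sit in an aov fit, on toy data with 2 groups; the between- and within-group mean squares are the 'Mean Sq' column of the ANOVA table, i.e. Sum Sq divided by the degrees of freedom:

## Toy data: 2 groups, 1 normally distributed trait
set.seed(1)
d <- data.frame(group = rep(c("A", "B"), each = 20),
                trait = c(rnorm(20, 0), rnorm(20, 1)))

fit <- aov(trait ~ group, data = d)
summary(fit)              # 'Mean Sq': first row = between groups, second = within (residual)

## Pulling the two mean squares out programmatically
tab <- summary(fit)[[1]]
tab$"Mean Sq"             # c(between, within)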
2017 Oct 05
4
dealing with a messy dataset
Dear R-users, I am facing a quite common and basic problem when it comes to dealing with datasets, but I cannot find a satisfying answer so far. I have a messy dataset of galaxies like this: And XVIII 000214.5+450520 0.69 17 9 0.00 -8.7 26.8 6.44 6.78 < 6.65 -44 0.5 MESSIER031 0.6 1.54 PAndAS-03 000356.4+405319 0.10 17 0.00 -3.6 27.8 4.38
2006 Dec 08
2
A smal fitting problem...
Dear R-helpers, I'm not really familiar with R, but it seems like a nice software tool, so I've decided to try using it. Here is my problem, which I just can't figure out: I'd like to do a least-squares fit of a horizontal (a = 0) line y = ax + b through some data points x = (3,4,5,6,7,8) y = (0.62, 0.99, 0.83, 0.69, 0.76, 0.82) How would I find b? All the best, Ked
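For a horizontal line the least-squares intercept is just the mean of y; a tiny sketch with the points from the message, via mean() and, equivalently, an intercept-only lm():

x <- c(3, 4, 5, 6, 7, 8)
y <- c(0.62, 0.99, 0.83, 0.69, 0.76, 0.82)

## With the slope fixed at a = 0, least squares gives b = mean(y)
mean(y)

## The same answer from an intercept-only linear model
coef(lm(y ~ 1))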
2006 Oct 04
0
One-arm survival sample estimates
A few months ago, I posted a query regarding code for a sample size estimate for a one-arm survival trial. Below is some code I created to calculate such an estimate - perhaps it may be of some use. # cox.pow computes sample size for a one-arm survival trial. med.0 is the null median survival, med.a is the alternative median survival, a.time is the accrual time, and f.time is the follow up
2011 Nov 01
1
low sigma in lognormal fit of gamlss
Hi, I'm playing around with gamlss and don't entirely understand the sigma result from an attempted lognormal fit. In the example below, I've created lognormal data with mu=10 and sigma=2. When I try a gamlss fit, I get an estimated mu=9.947 and sigma=0.69. The mu estimate seems in the ballpark, but sigma is very low. I get similar results on repeated trials and with Normal and
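One likely explanation, not confirmed in the snippet, is that gamlss reports the sigma coefficient on its default log link scale, and exp(0.69) is about 2; a hypothetical reconstruction of the setup described:

library(gamlss)

## Lognormal data with meanlog = 10 and sdlog = 2, as described in the message
set.seed(1)
y <- rlnorm(1000, meanlog = 10, sdlog = 2)
m <- gamlss(y ~ 1, family = LOGNO)

coef(m, what = "sigma")        # on the log link scale, roughly log(2) = 0.69
exp(coef(m, what = "sigma"))   # back-transformed: close to the simulated sigma of 2
fitted(m, "sigma")[1]          # fitted sigma on the response scale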