Displaying 20 results from an estimated 149 matches for "2.34".
2009 Nov 13
2
why the same values cannot be judged to be the same in R
Hi Rusers,
I have found that sometimes the same values cannot be judged to be the same in
R. Does anybody know the problem? I think I ignored some minor detail. Thanks.
Here is the example.
############
data1<-matrix(data=c(1,1.2,1.3,"3/23/2004",1,1.5,2.3,"3/22/2004",2,0.2,3.3,"4/23/2004",3,1.5,1.3,"5/22/2004"),nrow=4,ncol=4,byrow=TRUE)
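A hedged guess at the two usual causes, since the excerpt stops after the matrix() call: mixing numbers and strings in matrix() coerces everything to character, and floating-point values that print alike can still differ (R FAQ 7.31). A minimal sketch of both:
# 1. mixing numbers and strings coerces the whole matrix to character
data1 <- matrix(c(1, 1.2, 1.3, "3/23/2004"), nrow = 1)
class(data1[1, 2])        # "character", not "numeric"
# 2. floating-point values that print identically may still differ
a <- 0.1 + 0.2
b <- 0.3
a == b                    # FALSE because of rounding error
all.equal(a, b)           # TRUE; prefer isTRUE(all.equal(...)) over == for such comparisons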
2010 Sep 06
1
size limit of string/parse a string and convert to vector
Hi,
I have a loop as follows,
dataStr <- character(0)
repeat{
fstr<-read.socket(sockfd)
if(fstr=="")
break
dataStr<-paste(dataStr,fstr)
}
At what point does dataStr stop accepting data (get full)? I'm sending millions of records over the socket and need to know if all of it can go into dataStr.
Also, in case all of it cannot go into dataStr, I need to parse each
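A hedged sketch of one way to sidestep the limit: a single character string tops out at roughly 2^31 - 1 bytes, and repeated paste() on one long string is slow, so collect the chunks in a list and split afterwards. sockfd and the whitespace separator are assumptions, since the original post is truncated:
chunks <- list()
i <- 0L
repeat {
  fstr <- read.socket(sockfd)          # sockfd assumed to be an open socket
  if (fstr == "") break
  i <- i + 1L
  chunks[[i]] <- fstr                  # grow a list, not one huge string
}
dataStr <- paste(unlist(chunks), collapse = "")
vec <- strsplit(dataStr, "[[:space:]]+")[[1]]   # parse into a vector, assuming whitespace-separated records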
2005 Aug 30
1
graphics
Hello,
I guess I have a very simple problem, though up to now I couldn't solve it:
I want to plot two datasets within one plot, as plot(x) provides for
one dataset (type="b", that is: points connected by lines).
Example data 'x':
Befragung1 Befragung2 Befragung3 Geschlecht
2.25 2.34 1.78 weiblich
1.34 3.45 2.23 maennlich
The two rows of the example above
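A minimal sketch with matplot(), which draws one points-and-lines series per column; the two example rows are re-entered by hand here, and the transpose puts the three surveys on the x-axis:
x <- data.frame(Befragung1 = c(2.25, 1.34),
                Befragung2 = c(2.34, 3.45),
                Befragung3 = c(1.78, 2.23),
                Geschlecht = c("weiblich", "maennlich"))
matplot(t(x[, 1:3]), type = "b", pch = 1:2, lty = 1:2,
        xlab = "Befragung", ylab = "Wert")
legend("topright", legend = x$Geschlecht, pch = 1:2, lty = 1:2, col = 1:2)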
2008 May 17
1
tapply and grouping
Hello all,
I have a df like this:
w <- c(1.20, 1.34, 2.34, 3.12, 2.89, 4.67, 2.43,
2.89, 1.99, 3.45, 2.01, 2.23, 1.45, 1.59)
g <- rep(c("a", "b"), each=7)
df <- data.frame(g, w)
df
# 1. Mean for each group
tapply(df$w, df$g, function(x) mean(x))
# 2. Range for each group - fix value 0.15
tapply(df$w, df$g,
function(x)
x[(x > mean(x) -
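A hedged completion of the truncated call, assuming the aim is to keep the values of each group that lie within 0.15 of the group mean (df as constructed above):
tapply(df$w, df$g, function(x) x[x > mean(x) - 0.15 & x < mean(x) + 0.15])
# or, to get the plain range of each group instead:
tapply(df$w, df$g, range)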
2009 Aug 27
5
Transform data for repeated measures
I have a dataset that I'm trying to rearrange for a repeated measures analysis:
It looks like:
patient basefev1 fev11h fev12h fev13h fev14h fev15h fev16h fev17h fev18h drug
201 2.46 2.68 2.76 2.50 2.30 2.14 2.40 2.33 2.20 a
202 3.50 3.95 3.65 2.93 2.53 3.04 3.37 3.14 2.62 a
203 1.96 2.28 2.34 2.29 2.43 2.06 2.18 2.28 2.29 a
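A minimal sketch with base reshape(), re-entering only the first two patients shown above; fev is an assumed name for the wide data frame:
fev <- data.frame(patient = c(201, 202),
                  basefev1 = c(2.46, 3.50), fev11h = c(2.68, 3.95),
                  fev12h = c(2.76, 3.65), fev13h = c(2.50, 2.93),
                  fev14h = c(2.30, 2.53), fev15h = c(2.14, 3.04),
                  fev16h = c(2.40, 3.37), fev17h = c(2.33, 3.14),
                  fev18h = c(2.20, 2.62), drug = c("a", "a"))
long <- reshape(fev,
                varying   = c("basefev1", paste0("fev1", 1:8, "h")),
                v.names   = "fev",        # one measurement column in long form
                timevar   = "hour",
                times     = 0:8,          # baseline = hour 0
                idvar     = "patient",
                direction = "long")
head(long[order(long$patient, long$hour), ])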
2017 Jun 28
4
Extraneous full stop in csv read
I ran into a puzzling minor behaviour I would like to understand.
Reading in a csv file, I find an extraneous "." after a column header,
"in" [short for "inches"] thus, "in.". Is this due to "in" being
reserved? I initially blamed this on RStudio or on processing the data
through LibreCalc. However, the same result occurs in a console R
session.
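A hedged explanation plus sketch: read.csv() passes headers through make.names() by default, and "in" is a reserved word in R, so it gains a trailing dot; check.names = FALSE keeps the header exactly as written (the file name below is a placeholder):
make.names("in")                           # "in."
# dat <- read.csv("file.csv", check.names = FALSE)
# names(dat)                               # keeps "in"; refer to it as dat[["in"]] or dat$`in`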
2006 Jun 06
4
Please help me regarding a maths round-up function...
Hi,
I have some values displayed on my webpage like
1.22333333333
2.33333344444
2.33377777777
etc.
Here I want to display the values correct to 2 decimal places,
i.e. 1.22333333333 should be displayed as 1.22,
2.33333344444 should be displayed as 2.33,
2.33777777777 should be displayed as 2.34,
and so on.
How do I do this in Ruby?
Is there any function?
Thanks in advance.
Prash
--
2007 Jan 14
1
Questions about paste and assign
Hi,
I would like to assign a value to a member b of the list a in position 3, by calling:
assign( target, 2.34, 3)
My question is what the "target" should be. I tried target <- paste("a", $, "b") and something else,
but haven't got the right answer yet.
BTW, if I attached a list named
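A hedged sketch of what the poster seems to want (the list a and member b are taken from the question): assign() binds whole objects to names, so it cannot write into "position 3 of member b" directly; indexing does that instead.
a <- list(b = c(1, 2, 3))
a$b[3] <- 2.34                  # or a[["b"]][3] <- 2.34
# If the list's name itself has to be built with paste(), go through get()/assign():
nm  <- paste0("a")              # hypothetical constructed name
tmp <- get(nm)
tmp$b[3] <- 2.34
assign(nm, tmp)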
2006 Jun 19
4
Query: How to add a trendline (straight line) to a graph
How do I add a trendline (i.e. a straight line passing through the maximum
number of points) to a graph?
I have worked on the data given below.
Please tell me how to add a trendline to the graph.
The script is as follows
=================================== start ===================================
# The data is as follows
data <- c( 0.01, 0.02, 0.04, 0.13, 0.17 , 0.19 , 0.21 , 0.27 , 0.27 ,
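A minimal sketch, assuming an ordinary least-squares trendline over the index is wanted; only the values visible in the truncated excerpt are used:
data <- c(0.01, 0.02, 0.04, 0.13, 0.17, 0.19, 0.21, 0.27, 0.27)
x <- seq_along(data)
plot(x, data, type = "p")
abline(lm(data ~ x), col = "red")       # straight trendline fitted by least squares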
2012 Apr 12
2
How to calculate the "McFadden R-square" for LOGIT model?
Dear all, can somebody please help me calculate the "McFadden
R-square" for a LOGIT model? The corresponding definition can be found
here:
http://publib.boulder.ibm.com/infocenter/spssstat/v20r0m0/index.jsp?topic=%2Fcom.ibm.spss.statistics.help%2Falg_plum_statistics_rsq_mcfadden.htm
Here is my data:
Data <- structure(c(1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1,
0, 0, 1, 1,
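A minimal sketch of McFadden's pseudo R-squared for a binomial glm(); the formula y ~ x and the column names are placeholders, since the excerpt cuts the data off:
# fit  <- glm(y ~ x, family = binomial, data = Data)
# null <- update(fit, . ~ 1)                         # intercept-only model
# 1 - as.numeric(logLik(fit)) / as.numeric(logLik(null))   # McFadden pseudo R-squared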
2007 Mar 24
2
Two Problems while trying to aggregate a dataframe
Hello!
Given is an Excel sheet with currently 11,000 rows and 9 columns. I want
to work with the data in R. The contents are similar to my following
example.
I have a list with an ID number, a personal name and two kinds of
loan values. I want to aggregate the list so that for each person only one
row remains and the loan values are added up.
First I tried some commands with tapply but had no
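A minimal sketch with aggregate(); the column names and values are made up, since the real sheet is not shown:
loans <- data.frame(id    = c(1, 1, 2),
                    name  = c("Smith", "Smith", "Jones"),
                    loan1 = c(100, 50, 70),
                    loan2 = c(10, 20, 5))
aggregate(cbind(loan1, loan2) ~ id + name, data = loans, FUN = sum)   # one row per person, loans summed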
2009 Nov 06
1
problem merging data
Hi there,
data1<-matrix(data=c(1,1.2,1.3,"3/23/2004",1,1.5,2.3,"3/22/2004",2,0.2,3.3,"4/23/2004",3,1.5,1.3,"5/22/2004"),nrow=4,ncol=4,byrow=TRUE)
data1<-data.frame(data1)
names(data1)<-c("areaid","x","y","date")
data1
areaid x y date
1 1 1.2 1.3 3/23/2004
2 1 1.5 2.3 3/22/2004
3 2
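A hedged note, since the excerpt stops before the merge itself: because data1 was built from a character matrix, every column (including areaid, x and y) is character/factor, which commonly breaks joins on a numeric key. Converting the key first usually helps; data2 stands for the unseen second table:
data1$areaid <- as.numeric(as.character(data1$areaid))
# merge(data1, data2, by = "areaid")    # data2 is the second table, not shown in the excerpt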
2008 Sep 01
3
how to read multiple lines per case
How can I read a space-delimited file, where the data values for each case
are folded before column 80, and so appear on two lines for each case?
The first few cases look like this
loc type bio H2S sal Eh7 pH buf P K Ca Mg Na Mn Zn Cu NH4
OI DVEG 676 -610 33 -290 5.00 2.34 20.238 1441.67 2150.00 5169.05 35184.5
14.2857 16.4524 5.02381 59.524
OI DVEG 516 -570 35 -268 4.75 2.66 15.591 1299.19
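A hedged sketch with scan(), which reads fields rather than lines, so a record folded over two physical lines is read correctly; the file name is a placeholder:
# vars <- c("loc", "type", "bio", "H2S", "sal", "Eh7", "pH", "buf", "P", "K",
#           "Ca", "Mg", "Na", "Mn", "Zn", "Cu", "NH4")
# what <- setNames(c(list(character(), character()), rep(list(numeric()), 15)), vars)
# dat  <- as.data.frame(scan("linthurst.txt", what = what, skip = 1))  # skip the header line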
2011 Jan 23
2
Problem with combining two data frames.
Dear all,
I have a problem with combining two data frames.
....
I have the first data frame:
GPAX THAI MATH SCINCE SOCIAL HEALT ART CAREER LANGUAGE
1227 2.99 3.32 2.50 2.64 3.05 3.60 3.72 3.57 2.62
1704 2.81 2.56 2.48 2.86 3.22 3.19 3.55 3.20 2.51
617 2.18 1.90 1.97 2.06 2.38 3.50 3.54 2.33 1.70
876 2.82 3.14 2.73 2.46 2.71 3.11 3.04 3.24 2.90
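A hedged sketch, since the second data frame is not shown: if both tables are keyed by the student number kept in the row names, merge() can join on row names; first and second are placeholder names:
# combined <- merge(first, second, by = "row.names")   # keyed join on the row names
# or, if the rows line up one-to-one in the same order:
# combined <- cbind(first, second)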
2012 Jul 06
3
estimating NA values against selected slots
Dear R Users,
Could you please help me on the following issue?
I have a really large yearly data set. For each year I have
365 flow values. Some of the flow values are not known, and that's why you will
see NA written in those slots. I wanted to know: is there a way that I can
estimate those values? I tried the approx command, but it seems of little help for
the kind of issue I am up against.
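A minimal sketch of linear interpolation across the gaps with approx(); the six-value flow vector is only a toy stand-in for the 365 daily values:
flow <- c(3.1, 2.9, NA, NA, 2.4, 2.6)                       # toy example with missing days
idx  <- seq_along(flow)
flow_filled <- approx(idx[!is.na(flow)], flow[!is.na(flow)], xout = idx)$y
flow_filled
# the zoo package offers the same idea as a one-liner: zoo::na.approx(flow)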
2009 Oct 19
2
how to get rid of 2 for-loops and optimize runtime
Short: get rid of the loops I use and optimize runtime
Dear all,
I want to calculate, for each row, the amount of one month ago. I use a matrix with 2100 rows and 22 columns (which is still a very small matrix; the number of rows of other matrices can easily be more than 100000).
Table before
Year month quarter yearmonth Service ... Amount
2009 9 Q3 092009 A ...
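A heavily hedged sketch, since the table is truncated: one common replacement for the two loops is to build a "one month earlier" key and look the amounts up with match(); tab and the column names are assumptions based on the header shown:
# prevkey <- paste(ifelse(tab$month == 1, tab$Year - 1, tab$Year),
#                  ifelse(tab$month == 1, 12, tab$month - 1), tab$Service)
# thiskey <- paste(tab$Year, tab$month, tab$Service)
# tab$AmountPrevMonth <- tab$Amount[match(prevkey, thiskey)]   # NA where no previous month exists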
2010 Dec 09
1
Extremely poor write performance, but read appears to be okay
Hello,
I'm writing from the other side of the world from where my systems are,
so details are coming in slowly. We have a 6TB OCFS2 volume across 20 or
so nodes, all running OEL5.4 with ocfs2-1.4.4. The system has worked
fairly well for the last 6-8 months. Something has happened over the
last few weeks which has driven write performance nearly to a halt.
I'm not sure how to proceed, and
2007 Jul 13
1
Correlation matrix
I have a model with 5 parameters that I am optimising where the (best)
value of the objective function is negative. I would like to use the
Hessian matrix (from genoud and/or optim functions) to construct the
covariance and correlation matrices.
This is the code that I am using:
est <- out$par # Parameter estimates
H <- out$hessian # Hessian
V <-
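A hedged completion of the truncated line, assuming out$hessian is the Hessian of a negative log-likelihood (the usual case when optim() minimises):
# V  <- solve(H)          # covariance matrix of the 5 parameter estimates
# se <- sqrt(diag(V))     # standard errors
# R  <- cov2cor(V)        # correlation matrix
# if the objective was a log-likelihood that was maximised, invert -H instead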
2012 Apr 12
1
Seeking help with LOGIT model
Dear all, I am fitting a LOGIT model on this Data:
Data <- structure(c(1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1,
0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1,
0, 1, 1, 0, 1, 0, 47, 58, 82, 100, 222, 164, 161, 70, 219, 81,
209, 182, 185, 104, 126, 192, 95, 245, 97, 177, 125, 56, 85,
199, 298, 145, 78, 144, 178, 146, 132, 98, 120, 148, 123, 282,
79, 34, 104,
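A heavily hedged sketch of the fit itself, assuming Data ends up as a two-column matrix with a 0/1 response and one numeric covariate, which the truncated structure() call suggests:
# fit <- glm(Data[, 1] ~ Data[, 2], family = binomial)   # logistic (LOGIT) regression
# summary(fit)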
2008 Apr 17
1
survreg() with frailty
Dear R-users,
I have noticed small discrepancies between the estimate of the
variance of the frailty reported by the print method for survreg() and the
'theta' component included in the fitted object:
# Examples in R-2.6.2 for Windows
library(survival) # version 2.34-1 (2008-03-31)
# discrepancy
fit1 <- survreg(Surv(time, status) ~ rx + frailty(litter), rats)
fit1
fit1$history[[1]]$theta