similar to: help with e+01 number abbreviations

Displaying 20 results from an estimated 90 matches similar to: "help with e+01 number abbreviations"

2012 Mar 28
2
Data extraction
Dear ReXperts, I have the text file output below. I need to extract the T, QC, QO, QO-QC and WT columns for the data between T = 10 and T = 150. Any ideas? Thanks in advance. ======================================================================================== 1 D C ---CAT-- T THETA QC QO QO-QC QC/QO WT FSD 8 1 0 1.0000E+01
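A minimal sketch of one way to pull those columns out, assuming the numeric rows line up with the eight columns guessed from the visible header (the file name output.txt is hypothetical):

    lines <- readLines("output.txt")                               # hypothetical file name
    num   <- grepl("^[[:space:]0-9.Ee+-]+$", lines) & grepl("[0-9]", lines)
    dat   <- read.table(text = lines[num])                         # numeric rows only
    names(dat) <- c("T", "THETA", "QC", "QO", "QO.QC", "QC.QO",    # syntactic stand-ins for
                    "WT", "FSD")                                   # QO-QC and QC/QO
    subset(dat, T >= 10 & T <= 150, select = c(T, QC, QO, QO.QC, WT))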
2009 Apr 02
1
Updating a data frame
Folks, Updating values in a table is simple in SAS or SQL, but I've racked my brain looking for an elegant solution in R to no avail thus far. Certainly this is a common need that's been solved in dozens of different ways. Given an initial dataframe nn and a smaller dataframe of updates uu, I'd like to replace the values in nn <- expand.grid('a'=1:4, 'b'=1:3)
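One common idiom, sketched on invented values: build a key from the shared columns and use match() to overwrite only the rows present in uu (the value column val is made up):

    nn <- expand.grid(a = 1:4, b = 1:3)
    nn$val <- 0
    uu <- data.frame(a = c(2, 4), b = c(1, 3), val = c(10, 20))    # hypothetical updates
    key_nn <- paste(nn$a, nn$b)                                    # match on the (a, b) key
    key_uu <- paste(uu$a, uu$b)
    nn$val[match(key_uu, key_nn)] <- uu$val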
2006 Aug 16
2
adding multiple fitted curves to xyplot graph
Hello RHelpers, This may already have been answered, but despite days of scouring through the archives I haven't found it. My goal is to add multiple fitted curves to a plot. An example data set (a data frame named df in the following code) is: x1 y1 factor1 4 1298.25 0.00000000 1 5 1393.25 0.00000000 1 6 1471.50
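For the simplest case, a straight-line fit per level of factor1, lattice can do this directly; anything fancier needs a custom panel function. A sketch on invented data:

    library(lattice)
    set.seed(1)
    df <- data.frame(x1      = rep(4:13, 2),
                     y1      = c(90 * (4:13), 70 * (4:13)) + rnorm(20, sd = 25),  # made-up data
                     factor1 = factor(rep(1:2, each = 10)))
    # type = "p" draws the points, type = "r" adds a least-squares line per group
    xyplot(y1 ~ x1, data = df, groups = factor1, type = c("p", "r"), auto.key = TRUE)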
2012 Oct 18
4
speeding read.table
R 2.15.1 OS X Colleagues, I am reading a 1 GB file into R using read.table. The file consists of 100 tables, each of which is headed by two lines of characters. The first of these lines is: TABLE NO. 1 The second is a list of column headers. For example: TABLE NO. 1 COL1 COL2 COL3 COL4 COL5 COL6 COL7 COL8 COL9 COL10
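The usual levers for reading one such table quickly are pre-declared column classes, a row-count hint and disabled comment scanning; the repeated headers of the later tables still have to be stripped first (as in the entries below). A sketch assuming a file block1.tab holding a single table with 10 numeric columns (file name, column count and row bound are all made up):

    dat <- read.table("block1.tab",                       # hypothetical single-table file
                      skip         = 2,                   # skip "TABLE NO. 1" and the header line
                      colClasses   = rep("numeric", 10),  # assumes 10 numeric columns
                      nrows        = 1100000,             # generous upper bound on rows
                      comment.char = "")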
2008 Sep 05
0
help for color parameter
Dear all: I attached the dataset with this email and post the code below. My question is about 'col=temp.col' for the lines and pch in my code. I have 4 IDs, 10 DIDs, and each ID includes different DIDs; for example, the first ID has 3 DIDs, so the colors are the first three colors (black, red, green) in the first plot, but in the second plot, why does the color change to pink, which is
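Without the attachment it is hard to be sure, but the usual cause is that colors are assigned by position within each plot rather than tied to the DID values. A generic sketch that fixes one color per DID by name (the DID values here are invented):

    DID  <- factor(c("d1", "d2", "d3", "d7", "d9"))          # invented DID values
    cols <- setNames(rainbow(nlevels(DID)), levels(DID))     # one fixed color per DID
    # in every plot, look the color up by value, not by position
    plot(1:5, col = cols[as.character(DID)], pch = 19)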
2012 Nov 28
3
Speeding reading of large file
R 2.15.1 OS X and Windows Colleagues, I have a file that looks like this: TABLE NO. 1 PTID TIME AMT FORM PERIOD IPRED CWRES EVID CP PRED RES WRES 2.0010E+03 3.9375E-01 5.0000E+03 2.0000E+00 0.0000E+00 0.0000E+00 0.0000E+00 1.0000E+00 0.0000E+00 0.0000E+00 0.0000E+00 0.0000E+00 2.0010E+03 8.9583E-01
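One sketch of a single-pass approach: read the lines, drop the two header lines of every table, and parse the remainder in one call (column names copied from the excerpt; the file name is made up):

    lines <- readLines("run1.tab")                            # hypothetical file name
    keep  <- !grepl("^TABLE|^ *PTID", lines)                  # drop both header lines of each block
    dat   <- read.table(text = lines[keep], colClasses = "numeric",
                        col.names = c("PTID","TIME","AMT","FORM","PERIOD","IPRED",
                                      "CWRES","EVID","CP","PRED","RES","WRES"))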
2008 Dec 22
1
questions about read datafile into R
Dear all: I have been trying to import the data file (.txt) below into R with read.table(..., skip=1, header=T). But how can I deal with the repeated rows of TABLE NO.1 and the variable names in the middle of the data file? A similar block is repeated 100 times (only 4 are shown here), and the data records within each block can also vary (only 4 rows are pasted as an example). I appreciate
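An alternative sketch for interleaved header rows: read everything as character with fill = TRUE, keep only the rows whose first field parses as a number, then convert (the file name is hypothetical and the column count is left to read.table):

    raw   <- read.table("data.txt", skip = 1, fill = TRUE, colClasses = "character")
    first <- suppressWarnings(as.numeric(raw[[1]]))
    dat   <- raw[!is.na(first), ]              # keep rows whose first field is numeric
    dat[] <- lapply(dat, as.numeric)           # convert every column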
2008 Dec 22
1
question about read datafile
Dear all: I have been trying to import the data file (.txt) below into R with read.table(..., skip=1, header=T). But how can I deal with the repeated rows of TABLE NO.1 and the variable names in the middle of the data file? A similar block is repeated 100 times (only 4 are shown here), and the data records within each block can also vary (only 4 rows are pasted as an example). I appreciate
2004 Apr 14
4
Non-Linear Regression Problem
Dear all, I was wondering if there is any way I could do a "Grid Search" on a parameter space using R (as SAS 6.12 and higher can) to start the Gauss-Newton linearization least-squares method when I have NO prior information about the parameter. W. N. Venables and B. D. Ripley (2002) "Modern Applied Statistics with S", 4th ed., pages 216-7 have a topic
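A grid search for nls() starting values can be emulated by scoring a grid of candidate starts on the residual sum of squares and handing the best one to nls(); the model and data below are invented purely for illustration:

    set.seed(1)
    x <- 1:20
    y <- 5 * exp(0.15 * x) + rnorm(20)                       # invented data, model y ~ a*exp(b*x)
    rss  <- function(p) sum((y - p[1] * exp(p[2] * x))^2)
    grid <- expand.grid(a = seq(1, 10, by = 1), b = seq(0.05, 0.5, by = 0.05))
    best <- grid[which.min(apply(grid, 1, rss)), ]           # best starting pair on the grid
    nls(y ~ a * exp(b * x), start = as.list(best))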
2005 Nov 15
3
Darstellung mit Nachkommastellen (displaying numbers with decimal places)
Hi! I have a rather stupid question (I think): Is there ANY option that makes R display numeric values not as "1e-8" but as "0.00000001" by default? And I need the outcome to stay truly numeric, so formatC(...), which produces a character or something like that, won't be acceptable. Any help on this would be appreciated, thanx. Marc
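For printing, the scipen option biases R away from scientific notation while the object stays numeric; a small sketch:

    x <- 1e-8
    x                        # prints as 1e-08 by default
    options(scipen = 100)    # penalise scientific notation heavily
    x                        # now prints as 0.00000001; x is unchanged and still numeric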
2008 May 25
1
How to write a package based on nlme
Dear R Helpers, I am trying to write a small package based on nlme, but my code does not work. R always gives this message: Error in eval(expr, envir, enclos) : object "y" not found where y is the response variable. Please help me out! This is my code: require(nlme) AMPmixed<-function(y, x, S1=c("asymptotic","logistic"), random,data,
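An 'object "y" not found' error from eval() usually means the model formula was built from bare names that are not columns looked up in data. One sketch of building the formulas from character arguments, shown on the Orthodont data shipped with nlme (the wrapper fit_by_name is made up, not the poster's AMPmixed):

    library(nlme)
    fit_by_name <- function(yname, xname, groupname, data) {
      fixed  <- reformulate(xname, response = yname)      # e.g. distance ~ age
      random <- as.formula(paste("~ 1 |", groupname))     # e.g. ~ 1 | Subject
      lme(fixed = fixed, random = random, data = data)
    }
    fit_by_name("distance", "age", "Subject", Orthodont)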
2001 Jun 26
5
breaks in hist()
I was using the hist() function to create a frequency table of some network traffic data. The range in values is rather large, from 0 till just under 10e12. Calling hist(x, breaks=c(0,1000,1e6,1e9,1e12),plot=F,freq=T) causes hist() to return : $breaks [1] -1.0000e+05 1.0100e+05 1.1000e+06 1.0001e+09 1.0000e+12 Is this recalculation of the breaks by hist() intended? Maarten van Gelder.
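hist() fuzzes the supplied breaks slightly before counting (and, at least in the R version quoted there, returned the fuzzed values); if the exact bin edges matter, cut() plus table() counts against them directly. A sketch on invented traffic volumes:

    set.seed(1)
    x <- 10^runif(1000, 0, 12)                              # invented traffic volumes
    edges <- c(0, 1000, 1e6, 1e9, 1e12)
    table(cut(x, breaks = edges, include.lowest = TRUE))    # exact counts for the requested bins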
2002 Apr 01
3
svd, La.svd (PR#1427)
(I tried to send this earlier, but it doesn't seem to have come through, due to problems on my system) Hola: Both cannot be correct: > m <- matrix(1:4, 2) > svd(m) $d [1] 5.4649857 0.3659662 $u [,1] [,2] [1,] -0.5760484 -0.8174156 [2,] -0.8174156 0.5760484 $v [,1] [,2] [1,] -0.4045536 0.9145143 [2,] -0.9145143 -0.4045536 > La.svd(m) $d [1]
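Both results can be right at once: singular vectors are only determined up to sign, and La.svd() returns vt (the transpose of v) where svd() returns v. A quick check that both factorisations reproduce m:

    m  <- matrix(1:4, 2)
    s1 <- svd(m)                                      # u, d, v
    s2 <- La.svd(m)                                   # u, d, vt = t(v)
    max(abs(s1$u %*% diag(s1$d) %*% t(s1$v) - m))     # ~ 1e-15
    max(abs(s2$u %*% diag(s2$d) %*% s2$vt  - m))      # ~ 1e-15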
2006 Jun 12
1
r's optim vs. matlab's fminsearch
Hi, I'm having a problem converting a Matlab program into R. The R code works almost all the time, but about 4% of the time R's optim function gets stuck on a local minimum whereas matlab's fminsearch function does not (or at least fminsearch finds a better minimum than optim). My understanding is that both functions default to Nelder-Mead optimization, but what's different about
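One difference is the stopping rule: fminsearch stops on absolute tolerances for x and f, while optim's Nelder-Mead uses a relative tolerance on the function value, so tightening reltol and restarting from the first answer often closes the gap. A sketch on a standard test function, not the poster's problem:

    rosenbrock <- function(p) (1 - p[1])^2 + 100 * (p[2] - p[1]^2)^2
    fit <- optim(c(-1.2, 1), rosenbrock, method = "Nelder-Mead",
                 control = list(reltol = 1e-12, maxit = 5000))
    fit <- optim(fit$par, rosenbrock, method = "Nelder-Mead",       # cheap restart
                 control = list(reltol = 1e-12, maxit = 5000))
    fit$par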
2006 Jul 11
2
0* log(0) should be zero but NaN
Dear R-users >prob <- c(0.5,0.4,0.3,0.1,0.0) >cal <- prob * log(prob,base=2) >cal [1] -0.5000000 -0.5287712 -0.5210897 -0.3321928 NaN Is there any way to change NaN to zero? I did come up with this by applying Ripley's reply to my previous question cal <-prob*log(pmax(prob,0.00000001),base=2) Any suggestion? Thank you Taka
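Since p*log(p) tends to 0 as p goes to 0, a tiny helper that returns 0 exactly at p = 0 avoids both the NaN and the pmax() workaround; a sketch:

    plogp <- function(p, base = 2) ifelse(p == 0, 0, p * log(p, base = base))
    prob  <- c(0.5, 0.4, 0.3, 0.1, 0.0)
    plogp(prob)                        # last element is 0 instead of NaN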
2007 Jan 31
2
Bug in 'pchisq' for x=0.0 (PR#9485)
The function 'pchisq' from the 'stats' library gives a wrong result if the argument equals exactly zero: # Upper tail of central 1-df chi^2 distribution > pchisq(1 , 1, ncp=0, lower.tail = F, log.p = FALSE) [1] 0.3173105 > pchisq(0.5 , 1, ncp=0, lower.tail = F, log.p = FALSE) [1] 0.4795001 > pchisq(0.01 , 1, ncp=0, lower.tail = F, log.p = FALSE) [1]
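For reference, the upper-tail probability of any chi-square distribution at x = 0 is exactly 1, so that is the value both calls below should agree on (the PR concerns the branch taken when ncp = 0 is given explicitly):

    pchisq(0, df = 1, lower.tail = FALSE)             # 1
    pchisq(0, df = 1, ncp = 0, lower.tail = FALSE)    # should also be 1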
2011 Mar 30
3
optim and optimize are not finding the right parameter
Dear all, I have a function that predicts DV based on one predictor pred: pred<-c(0,3000000,7800000,15600000,23400000,131200000) DV<-c(0,500,1000,1400,1700,1900) ## I define Function 1 that computes the predicted value based on pred values and parameters a and b: calc_DV_pred <- function(a,b) { DV_pred <- rep(0,(length(pred))) for(i in 1:length(DV_pred)){ DV_pred[i] <- a *
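A generic sketch of the usual pattern: optim() wants a single vector of parameters (optimize() is one-dimensional only), so wrap both parameters in one argument of a least-squares objective. The Emax-style form a*pred/(b + pred) below is only a placeholder, not the poster's truncated model:

    pred <- c(0, 3000000, 7800000, 15600000, 23400000, 131200000)
    DV   <- c(0, 500, 1000, 1400, 1700, 1900)
    sse  <- function(par) {
      a <- par[1]; b <- par[2]
      sum((DV - a * pred / (b + pred))^2)              # placeholder model
    }
    optim(c(a = 2000, b = 2e7), sse,
          control = list(parscale = c(1000, 1e7), maxit = 2000))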
2009 Nov 20
1
Bessel function with large index value
I am looking for a method of dealing with the modified Bessel function K_\nu(x) for large \nu. The besselK function implementation of this allows for dealing with large values of x by allowing for exponential scaling, but there is no facility for dealing with large \nu. What would work for me would be an lbesselK function in the manner of lgamma which returned the log of K_\nu(x) for large
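There is no built-in log-scale option for the order, but K grows with the order, so the forward recurrence K_{v+1}(x) = K_{v-1}(x) + (2v/x)K_v(x) is stable and can carry an explicit log scale factor. A hedged sketch for integer orders (the name lbesselK and the rescaling threshold are made up):

    lbesselK <- function(x, nu) {              # log K_nu(x), integer nu >= 4 assumed
      lscale <- -x                             # expon.scaled besselK returns exp(x) * K
      k0 <- besselK(x, 2, expon.scaled = TRUE)
      k1 <- besselK(x, 3, expon.scaled = TRUE)
      for (v in 3:(nu - 1)) {
        k2 <- k0 + (2 * v / x) * k1            # K_{v+1} = K_{v-1} + (2v/x) K_v
        k0 <- k1; k1 <- k2
        if (k1 > 1e200) {                      # rescale before overflow, remember the log
          lscale <- lscale + log(k1); k0 <- k0 / k1; k1 <- 1
        }
      }
      log(k1) + lscale
    }
    lbesselK(2, 200)                           # finite; besselK(2, 200) itself overflows to Inf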
2010 Aug 12
0
Error: evaluation nested too deeply
Hi guys, I have code in R that was working well, but when I decrease the epsilon value (indicated in the code) I get this error: Error: evaluation nested too deeply: infinite recursion / options(expressions=)? Any help please. y = 6.8; w = 7.4; z = 5.7; muy = 7; muw = 7; muz = 6; sigmay = 0.8; sigmaz = 0.76; sigmaw = 0.3; betayx = 0.03; betayz = 0.3; betayw = 0.67 s =
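Without the recursive part of the code it is hard to say more, but the error means a function keeps calling itself past R's nesting limit; raising options(expressions=) only postpones it, and the durable fix is to rewrite the self-call as a loop. A generic illustration, not the poster's function:

    count_down_rec  <- function(n) if (n == 0) 0 else count_down_rec(n - 1)   # deep recursion
    count_down_iter <- function(n) { while (n > 0) n <- n - 1; n }            # same result, flat
    count_down_iter(1e6)      # fine; count_down_rec(1e6) hits the nesting limit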
2013 Jul 20
1
BH correction with p.adjust
Dear List, I have been trying to use p.adjust() to do BH multiple test correction and have gotten some unexpected results. I thought that the equation for this was: pBH = p*n/i where p is the original p value, n is the number of tests and i is the rank of the p value. However when I try and recreate the corrected p from my most significant value it does not match up to the one computed by the
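What p.adjust() adds on top of p*n/i is the step-up constraint: the BH-adjusted value for the i-th smallest p is the minimum of p_j*n/j over all j >= i, capped at 1, so the most significant p value can end up smaller than p*n/1. A sketch that reproduces p.adjust exactly on invented p values:

    set.seed(1)
    p <- sort(runif(10)^2)                         # invented p values, smallest first
    n <- length(p); i <- seq_len(n)
    naive <- p * n / i                             # the bare formula
    bh    <- pmin(1, rev(cummin(rev(p * n / i))))  # enforce monotonicity from the largest p down
    all.equal(bh, p.adjust(p, method = "BH"))      # TRUE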