search for: 0.46

Displaying 20 results from an estimated 278 matches for "0.46".

2012 Jul 30
2
distance matrix and hclustering
Dear R Users, I am very new to R. I want your help on an issue regarding a distance matrix and cluster analysis. I have discharge data for 4 rivers (a, b, c, d) in 4 vectors, each having 364 values:
> dput(qmu)
structure(list(a = c(0.26, 0.25, 0.25, 0.25, 0.24, 0.23, 0.22, 0.21, 0.21, 0.21, 0.2, 0.19, 0.19, 0.19, 0.19, 0.18, 0.18, 0.18, 0.17, 0.17, 0.17, 0.17, 0.17, 0.17, 0.17, 0.17, 0.17, 0.17, 0.17, 0.17,
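A minimal sketch of the clustering step being asked about, assuming qmu is the data frame of the four discharge series (the runif() stand-ins below replace the real dput() data): dist() measures distances between rows, so the frame is transposed to get one row per river.

# Stand-in data; replace with the real qmu shown above
qmu <- data.frame(a = runif(364), b = runif(364),
                  c = runif(364), d = runif(364))
d <- dist(t(qmu), method = "euclidean")   # 4 x 4 distances between rivers
hc <- hclust(d, method = "average")       # hierarchical clustering
plot(hc)                                  # dendrogram of the four rivers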
2013 Mar 06
3
About basic logical operators
Hello everyone, I have a basic question regarding logical operators.
> x <- seq(-1, 1, by = 0.02)
> x
  [1] -1.00 -0.98 -0.96 -0.94 -0.92 -0.90 -0.88 -0.86 -0.84 -0.82 -0.80 -0.78
 [13] -0.76 -0.74 -0.72 -0.70 -0.68 -0.66 -0.64 -0.62 -0.60 -0.58 -0.56 -0.54
 [25] -0.52 -0.50 -0.48 -0.46 -0.44 -0.42 -0.40 -0.38 -0.36 -0.34 -0.32 -0.30
 [37] -0.28 -0.26 -0.24 -0.22 -0.20 -0.18 -0.16
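The snippet cuts off before the actual question, but equality tests on a seq() vector are a classic pitfall: 0.02 has no exact binary representation, so a test like x == -0.46 can come back all FALSE. A small sketch of the tolerance-based alternative, under that assumption about the question:

x <- seq(-1, 1, by = 0.02)
x == -0.46                        # may be all FALSE due to floating point
which(abs(x - (-0.46)) < 1e-8)    # tolerance-based test finds element 28
isTRUE(all.equal(x[28], -0.46))   # all.equal() also compares with a tolerance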
2018 May 30
2
Evaluation failure of IAPWS95 functions in a rowwise manner (tidyverse style)
I'm trying to use the IAPWS95 package with the tidyverse packages. For some reason, the function is not outputting the correct rho. A minimal example with results is below. I've also included the definition of the DTp function from the IAPWS95 library.
====================================
library(IAPWS95)
library(tidyverse)
initial <- data.frame(T=c(279,294),p=c(0.46,0.46))
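If DTp() only handles scalar T and p (the next result shows it collapsing vector input to a single value), calling it on whole columns inside mutate() gives wrong rho values. A hedged sketch of the row-at-a-time workaround, in both styles:

library(IAPWS95)
library(tidyverse)

initial <- data.frame(T = c(279, 294), p = c(0.46, 0.46))

# Tidyverse: evaluate DTp() once per row
initial %>%
  rowwise() %>%
  mutate(rho = DTp(T = T, p = p)) %>%
  ungroup()

# Base-R equivalent
initial$rho <- mapply(DTp, T = initial$T, p = initial$p)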
2018 May 30
0
Evaluation failure of IAPWS95 functions in a rowwise manner (tidyverse style)
Hi Shawn, I don't think it has anything to do with the tidyverse. If you keep simplifying your example, you'll get all the way down to
> DTp(T=c(279,294),p=c(0.46,0.46))
[1] 1000.12283
--Ista
On Wed, May 30, 2018 at 2:14 PM, Shawn Way <SWay at meco.com> wrote:
> I'm trying to use the IAPWS95 package with the tidyverse packages. For some reason, the function is not
2012 Sep 28
2
Converting array to matrix
Hi, I have a 3d array as below. I want to turn it into a matrix of p = 50 (rows) and n = 20 (columns) holding the coverage values. The code before the array is:
library(binom)
Loading required package: lattice
pi.seq <- seq(from = 0.01, to = 0.5, by = 0.01)
no.seq <- seq(from = 5, to = 100, by = 5)
cp.all <- binom.coverage(p = pi.seq, n = no.seq, conf.level = 0.95, method = "exact")
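A hedged sketch of the reshaping, assuming cp.all can be coerced to a data frame with columns p, n, and coverage (the poster describes a 3d array, so the exact shape is uncertain); xtabs() then pivots it into the 50 x 20 matrix, rows = p, columns = n:

library(binom)
pi.seq <- seq(from = 0.01, to = 0.5, by = 0.01)   # 50 values
no.seq <- seq(from = 5, to = 100, by = 5)         # 20 values
cp.all <- binom.coverage(p = pi.seq, n = no.seq,
                         conf.level = 0.95, method = "exact")

cov.mat <- xtabs(coverage ~ p + n, data = as.data.frame(cp.all))
dim(cov.mat)   # should be 50 20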
2005 Oct 28
0
chkrootkit 0.46 reboots FreeBSD 5.4-RELEASE-p8
Hello, Please don't use chkrootkit 0.46 on production machines. The "chkproc" process sends a SIGXFSZ (25) signal to init, which interprets this signal as a "disaster" and reboots after a 30 s sleep. I'm contacting the chkrootkit maintainer to get this problem fixed. Sorry, Cordeiro
2008 Mar 16
1
pretty formatting of lists
Hello, is there already a function in any R package which does source-code formatting of deparsed lists? Let's create the following list:
x <- list(a = round(rnorm(3), 2), b = round(rnorm(3), 2))
xx <- c(aa = round(rnorm(30)), f = function(a) a + b, list(x, x))
Now, I want to deparse it in a way that yields something like:
list( aa = c(0.25, 0.18, 0.84, -1.25, 0.09,
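A minimal sketch of one way to get multi-line output: deparse() already returns a character vector of source lines, width.cutoff controls where they break, and cat() prints them one per line.

x <- list(a = round(rnorm(3), 2), b = round(rnorm(3), 2))

cat(deparse(x, width.cutoff = 40), sep = "\n")  # reflowed list() source

# formatR (if installed) can tidy the result further:
# formatR::tidy_source(text = deparse(x))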
2013 May 15
1
x and y lengths differ
I have a problem with R. I am trying to compute the confidence interval for my df. When I create the plot I get this error: Error in xy.coords(x, y, xlabel, ylabel, log) : 'x' and 'y' lengths differ. I use this code:
library(dplR)
df.rwi <- detrend(rwl = df, method = "Spline", nyrs = NULL)
write.table(df.rwi, file = "rwi.txt", quote = FALSE, row.names = TRUE)
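The error means plot() received x and y vectors of unequal length. A hedged sketch of a sanity check, assuming detrend() keeps the years as row names of df.rwi (df is the poster's ring-width data):

yrs <- as.numeric(rownames(df.rwi))      # x axis: one year per row
stopifnot(length(yrs) == nrow(df.rwi))   # fails early if lengths differ
plot(yrs, df.rwi[[1]], type = "l", xlab = "Year", ylab = "RWI")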
2010 Jul 05
4
To detect the location of duplicate values
Dear R family, I have a question about how to detect duplicate numeric observations. Suppose that I have a two-variable dataset:
order  value
 1     0.52
 2     0.23
 3     0.43
 4     0.21
 5     0.32
 6     0.32
 7     0.32
 8     0.32
 9     0.32
10     0.12
11     0.46
12     0.09
13     0.32
14     0.25
Could you help me indicate where the duplicate observations in a row (e.g., 0.32) are? best, moohwan
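A minimal sketch using duplicated(), with the values typed in from the table above:

value <- c(0.52, 0.23, 0.43, 0.21, 0.32, 0.32, 0.32, 0.32, 0.32,
           0.12, 0.46, 0.09, 0.32, 0.25)

which(duplicated(value))                    # 6 7 8 9 13: repeats of earlier values
which(value %in% value[duplicated(value)])  # 5 6 7 8 9 13: every position of 0.32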
2007 Oct 22
2
Help interpreting output of Rprof
Hello there, I am not quite sure how to interpret the output of Rprof (the output I was staring at follows). I poked around the web a bit for documentation, but without much success. I guess that if I want to figure out what takes so long in my code, the 2nd table, $by.total, and its total.pct column (pct = percent) are the most helpful. What does it mean that [ or [.data.frame is
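For reference, a minimal profiling round-trip; in $by.total, total.pct charges a call with all the time spent in its callees, so a large share for [ or [.data.frame usually means heavy data-frame indexing somewhere below it (toy workload, not the poster's code):

Rprof("profile.out")                  # start sampling the call stack
d <- data.frame(x = rnorm(1e5))
for (i in 1:200) d[d$x > 0, ]         # [.data.frame will dominate the profile
Rprof(NULL)                           # stop profiling

s <- summaryRprof("profile.out")
head(s$by.total)                      # sorted by total.pct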
2011 May 03
1
Unexp. behavior from boot with multiple statistics
I am attempting to use package boot to summarize and compare the performance of three models. I'm using R 2.13.0 in a Win32 environment. My statistic function returns a vector of 6 values, 3 of which are error rates for different models, and 3 are pairwise differences between those error rates. It looks like:
multiEst <- function(dat, i) {
  ....
  c(E1, E2, E3, E2 - E1, E3 - E1, E3 - E2)
}
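A self-contained sketch of the vector-statistic pattern: boot stores one column per returned element in b$t, and boot.ci() selects an element with index= (toy statistic, not the poster's models):

library(boot)

stat <- function(dat, i) {
  d <- dat[i]
  c(mean(d), median(d), mean(d) - median(d))   # 3 statistics per replicate
}

set.seed(1)
b <- boot(rnorm(100), stat, R = 999)
b$t0                                   # observed values, one per statistic
head(b$t)                              # replicates: one column per element
boot.ci(b, type = "perc", index = 3)   # CI for the 3rd element (the difference)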
2010 Jun 28
1
Stacking several vectors from the list
Hi everybody, I'm working on very messy data. I have tried to clean it up in SAS and SAS/IML, but there is not enough info on how to handle certain things in SAS, so I have turned to R. The task itself should be rather simple, so I was wondering if someone could help me out. The original .csv has dimensions [1] 7138 6338, with funds and the corresponding dates and observations for each
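Assuming the cleaned-up data can be put into a named list of value vectors (fundA and fundB below are hypothetical), stack() is the one-liner that stacks them into long form:

funds <- list(fundA = c(1.2, 1.3), fundB = c(0.9, 1.0, 1.1))
long <- stack(funds)   # columns: values, ind (the fund name as a factor)
long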
2013 Jan 24
4
sorting/grouping/classification problem?
Hi, I'm a database admin for a database which manages chromatographic results of products during stability studies. I use R for reporting the results in MS Word through R2wd. But now I think I need your help. Suppose we have the following data frame:
ID   rrt  Mnd  Result
 1  0.45    0    0.10
 1  0.48    0    0.30
 1  1.24    0    0.50
 2  0.45    3    0.20
 2  0.48    3    0.60
 2  1.22    3    0.40
 3
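One hedged way to match peaks across time points: treat rrt values that differ by less than a tolerance (0.05 here, an assumed value) as the same compound, via single-linkage clustering on rrt:

df <- data.frame(ID     = c(1, 1, 1, 2, 2, 2),
                 rrt    = c(0.45, 0.48, 1.24, 0.45, 0.48, 1.22),
                 Mnd    = c(0, 0, 0, 3, 3, 3),
                 Result = c(0.10, 0.30, 0.50, 0.20, 0.60, 0.40))

# Peaks whose rrt values sit within 0.05 of each other share a group
df$peak <- cutree(hclust(dist(df$rrt), method = "single"), h = 0.05)
df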
2019 May 14
3
Joining coordinates in a plane with a line
Good morning
--
Francisco Maturana Miranda
Dr. in Territorial Planning, Urbanism and Spatial Dynamics
Associate Professor, Department of Geography
Universidad Alberto Hurtado
www.fmaturana.cl
[[alternative HTML version deleted]]
2007 Mar 05
4
Identifying last record in individual growth data over different time intervalls
Hi, I have a list t which contains size measurements of individual plants, identified by the field "plate". It contains, among others, a field "year" indicating the year in which the individual was measured, and the "height". The number of measurements ranges from 1 to 4 measurements in different years. My problem is that I need the LAST measurement. I only
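A minimal sketch of picking the last measurement per individual, using hypothetical stand-in data with the fields named in the post:

meas <- data.frame(plate  = c("p1", "p1", "p2", "p2", "p2"),
                   year   = c(2001, 2003, 2001, 2002, 2004),
                   height = c(10, 14, 8, 9, 12))

meas <- meas[order(meas$plate, meas$year), ]            # sort each plant by year
last <- meas[!duplicated(meas$plate, fromLast = TRUE), ]  # keep the latest row
last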
2008 Oct 22
2
suboptimal lp solutions
Hi list, I want to find the total maximum resources I can spend given a set allocation proportion and some simple budget constraints. However, I get suboptimal results via lp and friends (i.e. lp in lpSolve, and simplex in linprog and boot). For example:
library(lpSolve)
proportions = c( 0.46, 0.28, 0.26)
constraints = c( 352, 75, 171)
lp(objective.in = proportions, const.mat =
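A hedged completion of the truncated call, assuming each resource simply gets its own upper bound (const.mat = identity). Note that if spending must follow the proportions exactly, the model collapses to a single scale variable, and the optimum is min(constraints / proportions):

library(lpSolve)
proportions <- c(0.46, 0.28, 0.26)
constraints <- c(352, 75, 171)

sol <- lp(direction    = "max",
          objective.in = proportions,
          const.mat    = diag(3),          # assumed: one bound per resource
          const.dir    = rep("<=", 3),
          const.rhs    = constraints)
sol$solution

min(constraints / proportions)   # total scale if proportions are held fixed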
2008 Jan 25
0
For-Loop faster than vectorized code?
Dear R-Users, I am working on a Hierarchical Bayes model and tried to replace the inner for-loop (which loops over a list with n.observations elements) with truly vectorized code (where I calculated everything based on ONE dataset over all respondents). However, when comparing the performance of the two alternatives, I found that the code with the for-loop was actually faster! In order to
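A small timing harness for checking such claims; system.time() on both variants makes the comparison concrete (toy squaring task, not the poster's model):

x <- rnorm(1e6)

f_loop <- function(x) {                 # explicit loop, preallocated output
  out <- numeric(length(x))
  for (i in seq_along(x)) out[i] <- x[i]^2
  out
}
f_vec <- function(x) x^2                # vectorized version

system.time(f_loop(x))
system.time(f_vec(x))
# A loop can win when the "vectorized" version allocates large intermediates.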
2013 Oct 22
2
doveadm: Fatal: open(/dev/tty)
I received this message today, and remembered, you can't do that...
$ doveadm pw -s SHA512-CRYPT
Enter new password:
doveadm(dan): Fatal: open(/dev/tty) failed: No such file or directory
It seems that if you have no tty, you can't create a password. Surely there is a better way to do this? Looking at the code, it's trying to open the tty and turn off echo. For the
2011 Sep 06
2
subsetting tables
Hi guys, one of those questions where you need a real human instead of a search engine, so it would be great if you could help. I have a matrix of z-scores which I would like to filter, sometimes column-wise, sometimes row-wise. The data look like this:
   Allstar  hsa.let.7a  hsa.let.7a.1  hsa.let.7a.2
2     0.87        0.79         -0.57          1.07
3     0.67       -1.14         -0.78         -0.95
4
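A minimal sketch of both filters with apply(), using stand-in data shaped like the table above; |z| > 2 is an assumed cutoff:

m <- matrix(rnorm(20), nrow = 5,
            dimnames = list(2:6, c("Allstar", "hsa.let.7a",
                                   "hsa.let.7a.1", "hsa.let.7a.2")))

m[apply(abs(m) > 2, 1, any), , drop = FALSE]   # rows with any |z| > 2
m[, apply(abs(m) < 2, 2, all), drop = FALSE]   # columns with all |z| < 2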
2010 Jul 05
2
to remove duplicate values
Dear R family, Suppose I have two series:
order  value
 1     0.52
 2     0.23
 3     0.43
 4     0.21
 5     0.32
 6     0.32
 7     0.32
 8     0.32
 9     0.32
10     0.12
11     0.46
12     0.09
13     0.32
14     0.25
For these two series, I figured out how to detect the locations of duplicate values. The next step is to remove the repeated values, except for a repeated value that is not next to the others. In other words, while keeping the
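A minimal sketch of dropping only consecutive repeats, so the isolated 0.32 at position 13 survives:

value <- c(0.52, 0.23, 0.43, 0.21, 0.32, 0.32, 0.32, 0.32, 0.32,
           0.12, 0.46, 0.09, 0.32, 0.25)

keep <- c(TRUE, value[-1] != value[-length(value)])  # TRUE where value changes
value[keep]   # 0.52 0.23 0.43 0.21 0.32 0.12 0.46 0.09 0.32 0.25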