similar to: problem with "unique" function

Displaying 20 results from an estimated 100 matches similar to: "problem with "unique" function"

2017 Jul 28
0
problem with "unique" function
Most likely, previous computations have ended up giving slightly different values of, say, 0.13333. A pragmatic way out is to round to, say, 5 digits before applying unique. In this particular case, it seems like all numbers are multiples of 1/30, so another idea could be to multiply by 30, round, and divide by 30. -pd > On 28 Jul 2017, at 17:17 , li li <hannah.hlx at gmail.com> wrote:
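A minimal sketch of both suggestions (the values here are illustrative; the original data is not shown in the snippet):

  x <- c(0.1333333, 4/30, 13/30, 0.4333333)  # near-duplicates from different computations
  unique(x)                    # reports 4 values: each pair differs in the last bits
  unique(round(x, 5))          # round to 5 digits first -> 2 values
  unique(round(x * 30) / 30)   # snap to multiples of 1/30 -> 2 values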
1999 Nov 30
3
model.tables
[The message body was a non-text attachment (3126 bytes), scrubbed by the list software and archived at https://stat.ethz.ch/pipermail/r-help/attachments/19991130/5cb00c0f/attachment.pl]
2006 Nov 13
0
Confidence intervals for relative risk
Wolfgang, It is common to handle relative risk problems using Poisson regression. In your example you have 8 events out of 508 tries, and 0/500 in the second data set.
> tdata <- data.frame(y=c(8,0), n=c(508,500), group=1:0)
> fit <- glm(y ~ group + offset(log(n)), data=tdata, family=poisson)
Because of the zero, the standard beta/se(beta) confidence intervals don't work.
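A runnable version of that setup (same numbers as the post); the degenerate standard errors show why the usual Wald interval fails here:

  tdata <- data.frame(y = c(8, 0), n = c(508, 500), group = 1:0)
  fit <- glm(y ~ group + offset(log(n)), data = tdata, family = poisson)
  summary(fit)$coefficients  # the zero count pushes the estimates to the boundary,
                             # so beta +/- 1.96*se is meaningless
  exp(coef(fit)[["group"]])  # the point estimate of the relative risk diverges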
2004 Jun 24
0
tree model with at most one split point per variable
I would like to create a tree model with at most one split point per variable using tree, rpart or another routine. It's OK if a variable enters at more than one node, but if it does then all splits for that variable should be at the same point. The idea is that I want to be able to summarize the data as binary factors with the chosen split points. I don't want to have three levels or more
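The snippet ends before any answer; one workaround in the spirit of the question (entirely an assumption, using rpart and the built-in iris data) is to pick one cut point per variable up front, for example from a depth-1 stump, and then grow the tree on the binary recodings:

  library(rpart)
  one_cut <- function(x, y) {                  # cut point chosen by a depth-1 stump
    rpart(y ~ x, control = rpart.control(maxdepth = 1))$splits[1, "index"]
  }
  cuts <- sapply(iris[1:4], one_cut, y = iris$Species)
  bin  <- as.data.frame(Map(function(x, cp) factor(x > cp), iris[1:4], cuts))
  bin$Species <- iris$Species
  fit <- rpart(Species ~ ., data = bin)        # every variable now has one split point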
2008 Mar 23
2
scaling problems in "optim"
Dear R users, I am trying to figure out the control parameters in "optim", especially "fnscale" and "parscale". In the R documentation:
------------------------------------------------------
fnscale: An overall scaling to be applied to the value of fn and gr during optimization. If negative, turns the problem into a maximization problem. Optimization is performed on
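An illustrative sketch of the two controls (toy function, not from the thread):

  f <- function(p) (p[1] - 1)^2 + (1e4 * p[2] - 3)^2    # p[2] lives on a ~1e-4 scale
  optim(c(0, 0), f)$par                                  # default scaling may stall on p[2]
  optim(c(0, 0), f, control = list(parscale = c(1, 1e-4)))$par  # tell optim each scale
  optim(c(0, 0), function(p) -f(p),                      # fnscale = -1 flips minimization
        control = list(fnscale = -1))$par                # into maximization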
2009 Sep 30
5
Rounding error in seq(...)
Hi, Today I was flabbergasted to see something that looks like a rounding error in the very basic seq function in R.
> a = seq(0.1,0.9,by=0.1)
> a
[1] 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9
> a[1] == 0.1
[1] TRUE
> a[2] == 0.2
[1] TRUE
> a[3] == 0.3
[1] FALSE
It turns out that the alternative
> a = (1:9)/10
works just fine. Are there any good guides out there on how to deal
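The standard advice (R FAQ 7.31) is to compare doubles with a tolerance rather than with ==:

  a <- seq(0.1, 0.9, by = 0.1)
  a[3] == 0.3                    # FALSE: 0.3 has no exact binary representation
  isTRUE(all.equal(a[3], 0.3))   # TRUE: comparison with a tolerance
  abs(a[3] - 0.3) < 1e-8         # the same idea spelled out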
2012 Sep 07
7
Producing a table with mean values
Hi All, I have a data set with three size classes (pico, nano and micro) and 12 different sites (Seamounts). I want to produce a table with the mean and standard deviation values for each site.
  Seamount       Pico    Nano     Micro    Total_Ch
1 Off_Mount 1  0.0691  0.24200  0.00100  0.31210
2 Off_Mount 1  0.0938  0.00521  0.02060  0.11961
3 Off_Mount 1  0.1130  0.20000  0.06620  0.37920
4 Off_Mount 1
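A sketch using only the rows visible in the snippet; aggregate() produces the per-site table of means and standard deviations:

  dat <- data.frame(Seamount = "Off_Mount 1",
                    Pico  = c(0.0691, 0.0938, 0.1130),
                    Nano  = c(0.24200, 0.00521, 0.20000),
                    Micro = c(0.00100, 0.02060, 0.06620))
  aggregate(cbind(Pico, Nano, Micro) ~ Seamount, data = dat,
            FUN = function(x) c(mean = mean(x), sd = sd(x)))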
2020 Sep 14
3
Interpretation of the output of a GLM
Dear community, I have some doubts that I think are very basic, but this is my first foray into GLMs. I am fitting a binomial model to germination data. The model is very simple: I have a factor "Condicion" with two levels, "a" and "b" (soil moisture level). I also have an explanatory variable "HF" (cold hours = stratification) that
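A hypothetical sketch of the model being described (simulated germination counts; "Condicion" and "HF" as in the post, everything else assumed):

  set.seed(1)
  d <- data.frame(Condicion = factor(rep(c("a", "b"), each = 20)),
                  HF = rep(seq(0, 400, length.out = 20), 2))
  d$germ <- rbinom(40, 50, plogis(-2 + 0.01 * d$HF + 0.8 * (d$Condicion == "b")))
  fit <- glm(cbind(germ, 50 - germ) ~ Condicion + HF, data = d, family = binomial)
  summary(fit)   # coefficients are on the log-odds scale; exp(coef(fit)) gives odds ratios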
2008 Jul 28
1
Mixed model question.
I continue to struggle with mixed models. The square zero version of the problem that I am trying to deal with is as follows: A number (240) of students are measured (tested; for reading comprehension) on 6 separate occasions. Initially (square zero) I want to treat the test time as a factor (with 6 levels). The students are of course ``random effects''. Later I want to look at
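A sketch of that square-zero model in lme4 (simulated scores; the variable names are assumptions):

  library(lme4)
  set.seed(1)
  reading <- data.frame(student  = factor(rep(1:240, each = 6)),
                        occasion = factor(rep(1:6, times = 240)))
  reading$score <- 50 + rnorm(240, sd = 5)[reading$student] +
                   (0:5)[reading$occasion] + rnorm(1440, sd = 2)
  fit <- lmer(score ~ occasion + (1 | student), data = reading)  # occasion fixed, student random
  summary(fit)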
2008 Apr 06
1
row by row similarity
Hello all, and thanks in advance for any advice. I am very new to R and have searched for my question but have not come up with anything quite like what I would like to do. My problem is: I have a data set of individuals (rows) and values for behaviours (columns). I would like to know the proportion of shared behaviours for all possible pairs of individuals. The sum of shared behaviours divided by
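The snippet cuts off mid-formula, so both common denominators are sketched here (assuming a 0/1 matrix with rows = individuals and columns = behaviours):

  set.seed(1)
  beh <- matrix(rbinom(5 * 8, 1, 0.5), nrow = 5,
                dimnames = list(paste0("ind", 1:5), paste0("b", 1:8)))
  shared <- tcrossprod(beh)          # shared[i, j] = behaviours shown by both i and j
  shared / ncol(beh)                 # proportion out of all behaviours
  either <- outer(rowSums(beh), rowSums(beh), "+") - shared
  shared / either                    # Jaccard-style: out of behaviours either one shows
                                     # (an all-zero individual would give NaN here)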
2024 Feb 27
4
converting MATLAB -> R | element-wise operation
So, trying to convert a very long, somewhat technical bit of lin alg MATLAB code to R. Most of it is working, but I ran into a stumbling block that is probably simple enough for someone to explain. Basically, trying to 'line up' MATLAB results from an element-wise division of a matrix by a vector with R output. Here is a simplified version of the MATLAB code I'm translating: NN = [1,
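Judging from the output shown in the reply further down this page, the example is NN = [1 2 3; 4 5 6] divided element-wise by lambda = (2, 3, 4); the direct R translations would be:

  NN <- matrix(1:6, nrow = 2, byrow = TRUE)   # MATLAB: NN = [1 2 3; 4 5 6]
  lambda <- c(2, 3, 4)
  t(t(NN) / lambda)            # divide each column j by lambda[j]
  sweep(NN, 2, lambda, "/")    # same result, stated explicitly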
2008 Dec 30
5
Downloading data from Economagic
I was trying to download data from Economagic [http://www.economagic.com/em-cgi/data.exe/libor/day-ussnon], using the following code:
library(fImport)
dat2 = economagicSeries("libor/day-ussnon", frequency = "daily")
Here I see that the data are not complete: the downloaded data start from "2007-12-31", whereas the actual data are available from 2001. Secondly, how do I convert that data
2009 Nov 01
1
package lme4
Hi R Users, When I use package lme4 for mixed model analysis, I can't tell which of the random-effect variables are significant and which are not. Here are my data and results:
Data:
Rice <- data.frame(
  Yield = c(8,7,4,9,7,6,9,8,8,8,7,5,9,9,5,7,7,8,8,8,4,8,6,4,8,8,9),
  Variety = rep(rep(c("A1","A2","A3"), each = 3), 3),
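One standard way to judge a single random term is a likelihood-ratio test between nested fits. The Rice data above is truncated, so a Block factor is assumed here purely to make the sketch runnable:

  library(lme4)
  Rice <- data.frame(
    Yield   = c(8,7,4,9,7,6,9,8,8,8,7,5,9,9,5,7,7,8,8,8,4,8,6,4,8,8,9),
    Variety = rep(rep(c("A1", "A2", "A3"), each = 3), 3),
    Block   = factor(rep(1:3, each = 9)))   # assumed: the snippet cuts off here
  full    <- lmer(Yield ~ 1 + (1 | Variety) + (1 | Block), data = Rice, REML = FALSE)
  reduced <- lmer(Yield ~ 1 + (1 | Block), data = Rice, REML = FALSE)
  anova(reduced, full)   # LR test for the Variety variance component
                         # (with so few levels, a singular-fit warning is possible)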
2012 Jul 04
1
Error in hclust?
Dear R users, I have noted a difference in the merge distances given by hclust using centroid method. For the following data: x<-c(1009.9,1012.5,1011.1,1011.8,1009.3,1010.6) and using Euclidean distance, hclust using centroid method gives the following results:
> x.dist <- dist(x)
> x.aah <- hclust(x.dist, method="centroid")
> x.aah$merge
     [,1] [,2]
[1,]   -3   -6
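The usual resolution here is documented in ?hclust: the "centroid" (and "median") methods expect *squared* Euclidean distances, and the merge heights then come back on the squared scale:

  x <- c(1009.9, 1012.5, 1011.1, 1011.8, 1009.3, 1010.6)
  x.aah <- hclust(dist(x)^2, method = "centroid")
  x.aah$merge           # merge order
  sqrt(x.aah$height)    # heights back on the original distance scale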
2004 Mar 29
1
Interpreting knn Results
Maybe you should show your colleague how to access help pages in R? Right in ?knn, it says:
prob: If this is true, the proportion of the votes for the winning class are returned as attribute 'prob'.
So 1.0 means all three NNs are of the 'winning' (i.e., predicted) class, and 0.66667 means 2 out of the 3 NNs are of the winning class, etc. Andy
> From: Ko-Kang
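A small sketch with class::knn (simulated data) showing the 'prob' attribute described above:

  library(class)
  set.seed(1)
  train <- rbind(matrix(rnorm(20), 10), matrix(rnorm(20, mean = 2), 10))
  cl    <- factor(rep(c("a", "b"), each = 10))
  test  <- matrix(rnorm(10, mean = 1), 5)
  pred  <- knn(train, test, cl, k = 3, prob = TRUE)
  attr(pred, "prob")   # 1.0 = all 3 NNs agree; 0.667 = 2 of 3 voted for the winner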
2024 Feb 27
2
[External] converting MATLAB -> R | element-wise operation
> t(t(NN)/lambda)
     [,1]      [,2] [,3]
[1,]  0.5 0.6666667 0.75
[2,]  2.0 1.6666667 1.50
R fills matrices column by column by default; MATLAB's bracket notation enters them row by row.
> On Feb 27, 2024, at 14:54, Evan Cooch <evan.cooch at gmail.com> wrote:
>
> So, trying to convert a very long, somewhat technical bit of lin alg
> MATLAB code to R. Most of it is working, but I ran into a stumbling block
2001 Oct 23
1
summary of aov fit on a contrast basis
Hello, In a book (David W. Stockburger, "Multivariate Statistics: Concepts, Models, and Applications", chapter 12 "Contrasts, Special and Otherwise", available online at http://www.psychstat.smsu.edu/multibook) I've found some examples of doing analysis of variance on a contrast basis. I attach my solution (in R, the book uses SPSS) to this problem. Am I computing the
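The attached solution is not shown in the snippet; a generic sketch of contrast-based ANOVA in R (simulated data; the contrast names are illustrative):

  set.seed(1)
  g <- factor(rep(c("a", "b", "c"), each = 4))
  y <- rnorm(12) + c(1, 2, 4)[g]
  contrasts(g) <- cbind(AvsBC = c(2, -1, -1),   # planned contrasts
                        BvsC  = c(0, 1, -1))
  fit <- aov(y ~ g)
  summary.lm(fit)   # one t test per contrast instead of the omnibus F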
2009 Nov 16
2
^ operator
Hi, I want to apply the ^ operator to a vector, but it is applied correctly to some of the elements while generating NaN for others. Why is it not able to calculate -6.108576e-05^(1/3) even though it exists?
> tmp
[1] -6.108576e-05  4.208762e-05  3.547092e-05  7.171101e-04 -1.600269e-03
> tmp^(1/3)
[1]        NaN 0.03478442 0.03285672 0.08950802        NaN
> -6.108576e-05^(1/3)
[1]
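The standard explanation: a non-integer power of a negative number is undefined in real arithmetic, so R returns NaN; -6.108576e-05^(1/3) only "works" because unary minus has lower precedence than ^, so it parses as -(6.108576e-05^(1/3)). A signed cube root handles the negatives:

  tmp <- c(-6.108576e-05, 4.208762e-05, 3.547092e-05, 7.171101e-04, -1.600269e-03)
  sign(tmp) * abs(tmp)^(1/3)   # signed cube root, defined for negative values too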
2009 Jun 14
1
time function behavior for ts class objects
Hi all- I am trying to use the time function for ts class objects and do not understand the return value. I want to use it to set up a time trend in arima fits. It does not seem to return a correct linear sequence that matches the underlying time series. I am running: R version 2.8.1 (2008-12-22). For example:
R> ## create a time series
R> x <- rnorm(24)
R> (xts <-
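A sketch of what time() returns for a ts object and how it is typically fed to arima() as a trend (illustrative monthly series, not the poster's data):

  x <- ts(rnorm(24), start = c(2000, 1), frequency = 12)
  time(x)                        # 2000.000, 2000.083, ...: a linear sequence in years
  trend <- as.numeric(time(x))   # plain numeric covariate
  arima(x, order = c(1, 0, 0), xreg = trend)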
2024 Feb 27
2
converting MATLAB -> R | element-wise operation
Why anything but sweep? The fundamental data type in Matlab is a matrix... they don't have vectors, they have Nx1 matrices and 1xM matrices. R vectors don't have any concept of "row" vs. "column". Straight division is always elementwise with recycling as needed, and matrices are really vectors in column-major order: the matrix
1 2 3
4 5 6
is really 1 4 2 5 3 6, and when you do
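A demonstration of the recycling behaviour described above, on the thread's matrix:

  NN <- matrix(1:6, nrow = 2, byrow = TRUE)    # [1 2 3; 4 5 6]
  as.vector(NN)                  # 1 4 2 5 3 6: column-major storage
  NN / c(2, 3, 4)                # recycles down the columns, usually not what was meant
  sweep(NN, 2, c(2, 3, 4), "/")  # divides each column j by the j-th value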