search for: incomparability

Displaying 20 results from an estimated 74 matches for "incomparability".

2008 Sep 12
1
match and incomparables
Hello, I was playing around with the newly implemented 'incomparables' argument in 'match' and realized the argument does not behave anything like I expected. Can someone explain what is going on here? Sorry if I'm misreading the documentation. > match(1:3, 1:3, incomparables=1) [1] NA 2 3 # This seems right, the 1 in 'x' is 'incomparable' >
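A minimal sketch of the behaviour as the current help page describes it, for reference (the second call is my own illustration, not taken from the post): any value of 'x' that appears in 'incomparables' is treated as unmatchable and is assigned the 'nomatch' value.

    ## Values of 'x' listed in 'incomparables' never match and get 'nomatch'.
    match(1:3, 1:3, incomparables = 1)
    # [1] NA  2  3
    match(c(1, 2, 1, 3), 1:3, incomparables = c(1, 3), nomatch = 0L)
    # [1] 0 2 0 0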
2009 Mar 30
1
duplicated fails to raise correct errors (PR#13632)
Full_Name: Wacek Kusnierczyk Version: 2.8.0 and 2.10.0 r48242 OS: Ubuntu 8.04 Linux 32 bit Submission from: (NULL) (129.241.110.161) In the following code: duplicated(data.frame(), incomparables=NA) # Error in if (!is.logical(incomparables) || incomparables) .NotYetUsed("incomparables != FALSE") : # missing value where TRUE/FALSE needed the raised error is clearly not the
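For context, the error arises because 'incomparables = NA' is logical, so the test collapses to 'if (FALSE || NA)', i.e. 'if (NA)', which itself fails. A sketch of a guard that cannot evaluate to NA and so reaches the intended message (my own suggestion, not necessarily the fix that was committed):

    incomparables <- NA
    ## is.logical(NA) is TRUE, so the original condition becomes if (NA) and
    ## errors before .NotYetUsed() is reached. identical() never returns NA:
    if (!identical(incomparables, FALSE))
      .NotYetUsed("incomparables != FALSE")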
2004 Aug 19
0
suggesting a new feature for unique()
Dear R-devel, May I suggest that a new feature be added to a couple of unique() methods? Sometimes it's useful to have the indices of the original data that the unique elements come from, so that the original data can be recreated from the unique()ed data. I suggest that an `index' argument be added for unique. Below is a suggested patch against R/src/library/base/R/duplicated.R: ***
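The proposed patch is not reproduced here, but the effect being asked for can already be sketched with match(): the indices returned by match(x, unique(x)) rebuild the original vector from its unique values (an illustration of the idea only, not the suggested implementation).

    x   <- c("b", "a", "b", "c", "a")
    u   <- unique(x)       # the unique()ed data
    idx <- match(x, u)     # index of each original element within 'u'
    identical(u[idx], x)   # TRUE: the original data is recreated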
2007 Nov 02
0
applying duplicated, unique and match to lists?
Dear R developers, While improving duplicated.array() and friends and developing equivalents for the new ff package for large datasets I came across two questions: 1) is it safe to use duplicated.default(), unique.default() and match() on arbitrary lists? If so, we can speed up duplicated.array and friends considerably by using list() instead of paste(collapse="\r") 2) while
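As a rough illustration of the two approaches being weighed (my own sketch, not code from the post or from ff): deduplicating matrix rows via the paste(collapse = "\r") key that duplicated.array() currently uses, versus handing a plain list of rows to duplicated().

    m <- matrix(sample(1:3, 30, replace = TRUE), ncol = 3)

    ## Current approach: collapse each row to a single string key.
    dup_paste <- duplicated(apply(m, 1, paste, collapse = "\r"))

    ## List-based approach: duplicated.default() on a list of row vectors.
    dup_list <- duplicated(lapply(seq_len(nrow(m)), function(i) m[i, ]))

    identical(dup_paste, dup_list)   # TRUE for this example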
2001 Nov 20
2
is match slow?
I'm doing m <- match(matriz, origen, 0) where matriz is a 270x900 matrix and origen an 11675-element vector, and it is taking a very long time. Is match a function implemented in C? If not, would C code be faster? Thanks Agus Dr. Agustin Lobo Instituto de Ciencias de la Tierra (CSIC) Lluis Sole Sabaris s/n 08028 Barcelona SPAIN tel 34 93409 5410 fax 34 93411 0012 alobo at ija.csic.es
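match() is implemented in C (it hashes 'table' once and then looks each element up), so the call itself is normally fast; a rough timing sketch with made-up data of the sizes mentioned (not the poster's actual data):

    matriz <- matrix(sample(1e5, 270 * 900, replace = TRUE), nrow = 270)
    origen <- sample(1e5, 11675)
    system.time(m <- match(matriz, origen, nomatch = 0))
    ## typically well under a second on current hardware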
2010 Jun 29
2
POSIXlt matching bug
I came across the below mis-feature/bug using match with POSIXlt objects (from strptime) in R 2.11.1 (though this appears to be an old issue). > x <- as.POSIXlt(Sys.Date()) > table <- as.POSIXlt(Sys.Date()+0:5) > length(x) [1] 1 > x %in% table # I expect TRUE [1] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE > match(x, table) # I expect 1 [1] NA NA NA NA NA NA NA NA
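The length-9 result is the giveaway: a POSIXlt object is internally a list of nine (or more) components, and match() ends up comparing those components. A sketch of the usual workaround, converting to POSIXct before matching:

    x     <- as.POSIXlt(Sys.Date())
    table <- as.POSIXlt(Sys.Date() + 0:5)

    ## Compare the atomic representation instead of the list-of-components form.
    as.POSIXct(x) %in% as.POSIXct(table)      # TRUE
    match(as.POSIXct(x), as.POSIXct(table))   # 1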
2016 Jun 08
1
Trivial patch for merge.Rd
Hi all, After replying to r-help earlier today on the merge()-related thread, I noted a trivial grammatical error in the description for the 'suffixes' argument in its help file. A patch against the current SVN trunk version of merge.Rd in ..library/base/man is attached and pasted here: --- merge1.Rd 2016-06-08 13:34:35.000000000 -0500 +++ merge2.Rd 2016-06-08 14:03:34.000000000
2008 May 08
1
[PATCH] Typo in 'unique' help page (PR#11401)
--- src/library/base/man/unique.Rd | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/src/library/base/man/unique.Rd b/src/library/base/man/unique.Rd index a8397c7..4664a34 100644 --- a/src/library/base/man/unique.Rd +++ b/src/library/base/man/unique.Rd @@ -29,7 +29,7 @@ unique(x, incomparables = FALSE, \dots) \item{x}{a vector or a data frame or an array or
2010 Jan 18
0
Fix for bug in match()
Hello all, I posted the following bug last week: # These calls work correctly: match(c("A", "B", "C"), c("A","C"), incomparables=NA) # okay match(c("A", "B", "C"), "A") # okay match("A", c("A", "B"), incomparables=NA) # okay # This one causes R to hang: match(c("A",
2004 Apr 12
1
question on isoMDS
Hello everyone, I have a question on isoMDS. My data set (of vegetation) with 210 samples looks like this: Rotfoehrenau Lavendelweidenau Silberweidenau .... 067_Breg.7 0 2 0 .... 071_Dona.4 0 2 6 .... ... I want to do an isoMDS-analysis with the dissimilarity index
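The excerpt is cut off before the dissimilarity index is named, so the following is only a generic sketch of the workflow with an invented site-by-species matrix and an arbitrary distance; substitute whatever dissimilarity is actually wanted (Bray-Curtis via vegan::vegdist is a common choice for vegetation data).

    library(MASS)

    ## Hypothetical abundance matrix: 210 samples by 25 species.
    veg <- matrix(rpois(210 * 25, lambda = 2), nrow = 210,
                  dimnames = list(sprintf("site%03d", 1:210), paste0("sp", 1:25)))

    d   <- dist(veg, method = "manhattan")   # placeholder dissimilarity
    fit <- isoMDS(d, k = 2)                  # non-metric MDS in 2 dimensions
    head(fit$points)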
2015 Jan 23
1
:: and ::: as .Primitives?
Hi, On 01/23/2015 07:01 AM, luke-tierney at uiowa.edu wrote: > On Thu, 22 Jan 2015, Michael Lawrence wrote: > >> On Thu, Jan 22, 2015 at 11:44 AM, <luke-tierney at uiowa.edu> wrote: >>> >>> For default methods there ought to be a way to create those so the >>> default method is computed at creation or load time and stored in an >>>
2011 Oct 05
1
unique possible bug
Hi, I am trying to read in a rather large list of transactions using the arules library. It seems the coerce method to dgCMatrix calls unique() somewhere. unique.c throws an error when n > 536870912; however, when 4*n was modified to 2*n in 2004, the overflow protection should have changed from 2^29 to 2^30, right? If so, how would I change it in my copy? Do I have to recompile
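For what it is worth, the arithmetic in the question checks out if the count in unique.c is held in a signed 32-bit int (that assumption is mine; I have not re-read the source): 4 * n exceeds INT_MAX once n reaches 2^29 = 536870912, and 2 * n once n reaches 2^30.

    .Machine$integer.max   # 2147483647 = 2^31 - 1
    2^29                   # 536870912:  4 * n no longer fits once n reaches 2^29
    2^30                   # 1073741824: 2 * n no longer fits once n reaches 2^30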
2012 Dec 08
1
namespace S3 and S4 generic imports cannot both be satisfied:
PkgA wishes to write a method for 'unique' on S4 class 'A'. ?Methods indicates that one should setGeneric("unique") setClass("A") unique.A <- function(x, incomparables=FALSE, ...) {} setMethod(unique, "A", unique.A) Both S3 and S4 methods need to be exported in the NAMESPACE import(methods) S3method(unique, A)
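A self-contained sketch of the pattern described above, with a toy slot added so the example runs (the class contents and the NAMESPACE lines beyond those quoted are my own assumptions):

    library(methods)

    setClass("A", slots = c(values = "numeric"))

    ## S3 method, reused as the body of the S4 method.
    unique.A <- function(x, incomparables = FALSE, ...) {
      new("A", values = unique(x@values, incomparables = incomparables, ...))
    }

    setGeneric("unique")
    setMethod("unique", "A", unique.A)

    a <- new("A", values = c(1, 2, 2, 3))
    unique(a)@values   # 1 2 3

    ## NAMESPACE would then carry both registrations:
    ##   import(methods)
    ##   S3method(unique, A)
    ##   exportMethods(unique)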
2010 Sep 17
0
Merging data frames on a variety of columns
Hello, This is a semi-complicated question about comparing two datasets, probably using merge, but I am open to other ideas. I have a large frame of information about companies. It's over 30,000 rows and looks something like... df1 <- identifier1 identifier2 name other_name year H34 C56 ACME ACME_LTD 2001 H34
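The excerpt stops before the second data frame is shown, so here is only a sketch of merging on a combination of key columns, with a hypothetical df2 and made-up values:

    df1 <- data.frame(identifier1 = c("H34", "H34"),
                      identifier2 = c("C56", "C56"),
                      name        = c("ACME", "ACME"),
                      other_name  = c("ACME_LTD", "ACME_LTD"),
                      year        = c(2001, 2002))

    ## Hypothetical second table keyed the same way.
    df2 <- data.frame(identifier1 = "H34", year = 2001, revenue = 42)

    ## Merge on the column combination that identifies a company-year;
    ## use by.x / by.y instead if the key columns are named differently.
    merge(df1, df2, by = c("identifier1", "year"))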
2011 Sep 28
3
[LLVMdev] Greedy Register Allocation in LLVM 3.0
On Sep 27, 2011, at 12:11 AM, Leo Romanoff wrote: > > > It is true that names do not always reflect the essence. But on the other hand, there is a lot of ongoing research on register allocation (and compilers in general), and it looks like more and more such efforts choose LLVM as a platform for experimentation. Quite a few results and comparisons have been published. So, it would be nice
2012 Sep 27
3
Keep rows in a dataset if one value in a column is duplicated
Hi, I have a data set of observations by either one person or a pair of people. I want to only keep the pair observations, and was using the code below until it gave me the error " $ operator is invalid for atomic vectors". I am just beginning to learn R, so I apologize if the code is really rough. Basically I want to keep all the rows in the data set for which the value of
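The original code and column names are not shown, so this is only a base-R sketch of the underlying idea with a hypothetical id column shared by the two members of a pair: keep the rows whose id occurs more than once.

    obs <- data.frame(id    = c(1, 2, 2, 3, 4, 4),
                      value = c(10, 11, 12, 13, 14, 15))

    ## ave() counts how many rows share each id; pair observations count 2.
    pairs_only <- obs[ave(obs$id, obs$id, FUN = length) > 1, ]
    pairs_only

    ## Equivalent one-liner: obs[obs$id %in% obs$id[duplicated(obs$id)], ]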
2010 Mar 24
1
Deleting duplicate rows in a matrix at random
Hello, I am relatively new to R, and I've run into a problem formatting my data for input into the package RankAggreg. I have a matrix of gene titles and P-values (weights) in two columns: KCTD12 4.06904E-22 UNC93A 9.91852E-22 CDKN3 1.24695E-21 CLEC2B 4.71759E-21 DAB2 1.12062E-20 HSPB1 1.23125E-20 ... The data contains many, many duplicate gene titles, and I need to remove all but one of
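One way to get the "at random" part with base R, sketched on a tiny invented matrix (the column layout is assumed from the excerpt): shuffle the rows first, then drop later duplicates of each gene title, so the surviving row per gene is a random one.

    m <- cbind(gene = c("KCTD12", "UNC93A", "KCTD12", "CDKN3", "UNC93A"),
               pval = c("4.07e-22", "9.92e-22", "1.25e-21", "4.72e-21", "1.12e-20"))

    set.seed(1)                       # reproducible shuffle
    m2 <- m[sample(nrow(m)), ]        # put the rows in random order
    m2[!duplicated(m2[, "gene"]), ]   # one randomly chosen row per gene title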
2011 Sep 28
0
[LLVMdev] Greedy Register Allocation in LLVM 3.0
On Sep 28, 2011, at 3:08 PM, Chris Lattner wrote: > > On Sep 27, 2011, at 12:11 AM, Leo Romanoff wrote: > >> >> >> It is true that names do not always reflect the essence. But on the other hand, there is a lot of ongoing research on register allocation (and compilers in general), and it looks like more and more such efforts choose LLVM as a platform for
2006 Jun 03
10
Ruby on Rails on MacBook
Hi, I'm trying to set up Ruby on Rails following Apple's tutorial with ruby 1.8.4 and mysql 5.0.22. But every time I ran 'rake migrate' I got the following access denied error: Access denied for user 'root'@'localhost' (using password: YES) After turning on the --trace switch, it showed the error happened at the following
2010 Jan 16
2
Extracting only Unique Rows based on only 1 Column
To Whomever is Interested, I have spent several days searching the web, help files, the R wiki and the archives of this mailing list for a solution to this problem, but nonetheless I apologize in advance if I have missed something obvious. The problem is this: I have a 5-column data frame with about 4.2 million rows, and want to create a new (and hopefully much smaller) data frame that
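duplicated() on the single key column is usually all that is needed; a small sketch with made-up column names (the real frame reportedly has about 4.2 million rows and five columns):

    df <- data.frame(key = c("a", "b", "a", "c", "b"),
                     v1  = 1:5, v2 = 6:10, v3 = 11:15, v4 = 16:20)

    ## Keep the first row seen for each distinct value of 'key'.
    df[!duplicated(df$key), ]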