Displaying 4 results from an estimated 4 matches for "sdcols".
2002 May 11 (1 reply)
deleting invariant rows and cols in a matrix
Greetings,
I couldn't find any existing function that would allow me to scan a
matrix and eliminate invariant rows and columns, so I have started to
write a simple routine from scratch. The following code fails because
the array index goes out of bounds, for reasons you'll see shortly.
Start with some data
x <- read.table("myex.dat",header=T)
x
v1 v2 v3 v4 v5 id
1
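A minimal sketch (not the poster's routine) of one way to do this without the index running out of bounds: build logical keep-masks first and subset once, rather than deleting rows and columns while looping over them.
# Sketch only, assuming x is the data frame read above; rows are compared
# after the usual coercion of a data frame row to a common type.
keep.cols <- sapply(x, function(col) length(unique(col)) > 1)
x <- x[, keep.cols, drop = FALSE]
keep.rows <- apply(x, 1, function(row) length(unique(row)) > 1)
x <- x[keep.rows, , drop = FALSE]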
2020 Sep 24 (1 reply)
How to use `[` without evaluating the arguments.
...which(colnames(colData) %in% colIDs)
lockBinding('colIDs', internals)
# Assemble the pseudo row and column names for the LongTable
.pasteColons <- function(...) paste(..., collapse=':')
rowData[, `:=`(.rownames=mapply(.pasteColons, transpose(.SD))), .SDcols=internals$rowIDs]
colData[, `:=`(.colnames=mapply(.pasteColons, transpose(.SD))), .SDcols=internals$colIDs]
return(.LongTable(rowData=rowData, colData=colData,
                  assays=assays, metadata=metadata,
                  .intern=internals))
}
I have also defined a subset...
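For anyone puzzled by the .SDcols idiom above, here is a small self-contained sketch with toy data (the column names drug and dose are invented, standing in for internals$rowIDs):
library(data.table)
rowData <- data.table(drug = c("a", "b"), dose = c(1, 2), extra = c("x", "y"))
rowIDs <- c("drug", "dose")             # stands in for internals$rowIDs
.pasteColons <- function(...) paste(..., collapse = ':')
# .SDcols restricts .SD to the ID columns; transpose() turns it into a
# list of rows, and each row is collapsed into a "drug:dose" key
rowData[, .rownames := mapply(.pasteColons, transpose(.SD)), .SDcols = rowIDs]
rowData$.rownames                       # "a:1" "b:2"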
2013 Mar 13 (3 replies)
loop in a data.table
Hi everyone,
I have a data.table called "data" with many columns, which I want to
group by column1 using data.table, given how fast it is.
The problem with looping over a data.table is that data.table does not
accept quoted column names (e.g. "col2" instead of col2).
I found a workaround, which is to use get("col2"); it works fine, but the
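A small illustration (column names invented) of the two usual ways to refer to columns by quoted name inside a data.table:
library(data.table)
data <- data.table(column1 = c("g1", "g1", "g2"), col2 = 1:3, col3 = 4:6)

# 1) get(): look a single column up by its quoted name inside j
data[, mean(get("col2")), by = column1]

# 2) .SDcols: pass the quoted names as a character vector and loop over .SD
cols <- c("col2", "col3")
data[, lapply(.SD, mean), by = column1, .SDcols = cols]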
2012 Sep 14 (3 replies)
aggregate() runs out of memory
I have a large data.frame Z (2,424,185,944 bytes, 10,256,441 rows, 17 columns).
I want to get the result of
table(aggregate(Z$V1, FUN = length, by = list(id=Z$V2))$x)
Alas, aggregate has been running for ~30 minutes, RSS is 14G, VIRT is
24.3G, and there is no end in sight.
Both V1 and V2 are characters (not factors).
Is there anything I could do to speed this up?
Thanks.
--
Sam Steingold
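A hedged sketch of a common alternative, assuming the goal is only the distribution of per-V2 group sizes (length(V1) per group is just the size of that group):
# Base R: tabulate V2 once, then tabulate the resulting counts
table(table(Z$V2))

# Or with data.table, which is typically far lighter on memory here
library(data.table)
DT <- as.data.table(Z)
table(DT[, .N, by = V2]$N)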