Displaying 20 results from an estimated 4000 matches similar to: "lapply and aggregate function"
2010 Oct 12
5
aggregate with cumsum
Hello everybody,
Data is
myd <- data.frame(id1=rep(c("a","b","c"),each=3),id2=rep(1:3,3),val=rnorm(9))
I want to get a cumulative sum within each level of id1. Trying aggregate does not work:
myd$pcum <- aggregate(myd[,c("val")],list(orig=myd$id1),cumsum)
Please suggest a solution. In reality the data frame is huge, so looping with for and subsetting is not a
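A minimal hedged sketch of one loop-free approach (not from the thread itself): ave() applies a function within each group and returns a vector aligned with the original rows, so the cumulative sums can be attached directly.
myd <- data.frame(id1 = rep(c("a", "b", "c"), each = 3),
                  id2 = rep(1:3, 3),
                  val = rnorm(9))
## ave() splits val by id1, applies cumsum per group, and restores the row order
myd$pcum <- ave(myd$val, myd$id1, FUN = cumsum)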
2012 Jan 20
1
--link-dest doesn't work if target file exists (but needs updating)
Using:
# rsync --version
rsync version 3.0.7 protocol version 30
Copyright (C) 1996-2009 by Andrew Tridgell, Wayne Davison, and others.
Web site: http://rsync.samba.org/
Capabilities:
64-bit files, 64-bit inums, 32-bit timestamps, 64-bit long ints,
socketpairs, hardlinks, symlinks, IPv6, batchfiles, inplace,
append, ACLs, xattrs, iconv, symtimes
rsync comes with ABSOLUTELY NO
2013 Mar 20
3
highlight overlapping region of two densities
Hi all.
I would like to highlight overlapping regions of two densities and I could
not find a way to do it.
Here is the sample code:
myd <- c(2,4,5, 4,3,2,2,3,3,3,2,3,3,4,2,4,3,3,3,2,2.5,
2, 3,3, 2.3, 3, 3, 2, 3)
myd1 <- myd-2
plot(range(density(myd)$x, density(myd1)$x), range(density(myd)$y,
density(myd1)$y), type = "n")
lines(density(myd), col=1, lwd=4)
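One hedged way to shade the overlap (reusing the myd and myd1 vectors from the snippet above): evaluate both densities on a common grid, take the pointwise minimum, and fill that region with polygon().
d1 <- density(myd)
d2 <- density(myd1)
xs <- seq(min(d1$x, d2$x), max(d1$x, d2$x), length.out = 512)
## put both densities on the same x grid, zero outside their support
y1 <- approx(d1$x, d1$y, xout = xs, yleft = 0, yright = 0)$y
y2 <- approx(d2$x, d2$y, xout = xs, yleft = 0, yright = 0)$y
ylo <- pmin(y1, y2)                      # the overlapping region
plot(xs, pmax(y1, y2), type = "n", xlab = "x", ylab = "density")
lines(d1, col = 1, lwd = 4)
lines(d2, col = 2, lwd = 4)
polygon(c(xs, rev(xs)), c(ylo, rep(0, length(xs))), col = "grey", border = NA)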
2008 Jul 08
6
Question: Beginner stuck in a R cycle
Dear All,
I have a database of 200 observations named myD.
In the data frame there is a column named code (with codes varying from 1 to 77), a column named "prevalence" containing quantitative measurements, and a column named Pr_mean with no values.
I would like to set up a cycle to compute the average of the prevalence values for each different code and store the averages under the
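A hedged sketch with made-up toy data (column names taken from the description above): ave() writes each group mean back into Pr_mean without an explicit loop, and aggregate() gives the per-code summary table.
myD <- data.frame(code = rep(1:3, times = 4),
                  prevalence = runif(12),
                  Pr_mean = NA)
## per-row group means, aligned with the original rows
myD$Pr_mean <- ave(myD$prevalence, myD$code, FUN = mean)
## or a per-code summary table
aggregate(prevalence ~ code, data = myD, FUN = mean)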
2007 Mar 09
1
dendrogram again
Hi all,
ok, I know I can cut a dendrogram, which I did.
All I get is three objects that are dendrograms themselves.
For example:
myd$upper, myd$lower[[1]], myd$lower[[2]]
and so on. Of course I can plot them separately now,
but the lower parts still have hundreds of branches. I'll need a 30"
widescreen to view the whole picture.
What I'd like to do is group the lower branches, so that I get a
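One hedged way to collapse the detail (toy data, not the poster's): cut the dendrogram at a height h and plot only the upper part, so every lower branch shows up as a single leaf.
hc <- hclust(dist(matrix(rnorm(200), ncol = 2)))   # toy clustering
dend <- as.dendrogram(hc)
parts <- cut(dend, h = 2)            # parts$upper plus a list parts$lower
plot(parts$upper, leaflab = "none")  # one leaf per collapsed lower branch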
2024 Aug 06
1
[PATCH] Add SM3 secure hash algorithm
Add OSCCA SM3 secure hash algorithm (OSCCA GM/T 0004-2012 SM3).
---
Makefile.in | 2 +-
configure.ac | 2 +-
digest-libc.c | 11 ++
digest-openssl.c | 1 +
digest.h | 3 +-
mac.c | 1 +
sm3.c | 320 +++++++++++++++++++++++++++++++++++++++++++++++
sm3.h | 51 ++++++++
8 files changed, 388 insertions(+), 3 deletions(-)
create mode
2011 Oct 05
2
aggregate function with a dataframe for both "x" and "by"
I have 2 dataframes. "mydata" contains numerical data. "mybys" contains
information on the "group" each row of the data is in. I wish to aggregate
each column in mydata using the corresponding column in mybys.
Please see the example below. What is a more elegant or "better" way to
accomplish this task?
Thanks!
mydata =
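The snippet is truncated here; below is a hedged sketch with made-up toy frames of the shape described. mapply() pairs column i of mydata with column i of mybys and tabulates the group means, one result per column pair.
mydata <- data.frame(v1 = rnorm(6), v2 = rnorm(6))
mybys  <- data.frame(g1 = rep(c("a", "b"), 3),
                     g2 = rep(c("x", "y"), each = 3))
## data frames are lists of columns, so mapply walks the column pairs
mapply(function(x, g) tapply(x, g, mean), mydata, mybys, SIMPLIFY = FALSE)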
2006 Apr 27
1
scope of variable/object ?
Hi,
I must be missing something here... Essentially, a short piece of code works if it's standalone, but doesn't work if it's divided into two functions.
The code that works is:
################### WORKS ###############
library(pamr)
set.seed(120)
x <- matrix(rnorm(1000*20),ncol=20)
y <- sample(c(1:4),size=20,replace=TRUE)
mydata <- list(x=x,y=y)
2020 May 04
2
"Earlyclobber" but for a subset of the inputs
Hi all,
I'm working on a target whose registers have equal-sized subregisters and
all of those subregisters can be named (or the other way round: registers
can be grouped into super registers).
So for instance we've got 16 registers W (as in wide) W0..W15 and 32
registers N (as in narrow) N0..N31. This way, W0 is made by grouping N0 and
N1, W1 is N2 and N3, W2 is N4 and N5, ..., W15 is
2008 Aug 15
1
Strange error message from geoR's likfit() lik. max. func.
Comrades:
I am getting the error message
Error in ldots[[which(MET)]] : attempt to select less than one element
when I try to fit the geostatistical model with the likfit() function of
geoR.
I have tried with old data for which likfit() successfully maximised the
likelihood in previous versions of geoR, and yet the current version
fails.
I have tried in Windows Vista and Windows XP (I haven't
2005 May 13
2
not deleting from the root
I have a bit of an issue with rsync. I am using it to keep directories in
sync with another server for backup.
Here is the server config
[w1]
path = /w1
comment = w1 web dir
[w2]
path = /w2
comment = w2 web dir
Now on the client i run this command
rsync -avv --delete --force domain.com::w1/ /w1/
It will NOT delete anything that is no longer on the server... for
example, on the server/client there
2009 Jan 09
2
rpart with interval censored data crashes R
Hi Everyone,
This example code results in R 'crashing'; that is, the R application closes
with no warnings or error messages.
#-----------------------
myD <- read.table(stdin(), header=TRUE, nrows=20)
Broth Salt pH Temp N Y Growth
1 310 9.0 2.92 10 90.0 NA 0
2 615 6.0 7.82 30 1.0 2 1
3 217 2.0 7.34 10 7.0 8
2005 Dec 01
1
Snow & rpvm
At office, using the internal LAN at my disposal, I'm having a go at parallel
computing - to begin with - with pvm, rpvm & snow.
The two boxes are as follows
Remote machine uffbsd:
CPU: Intel(R) Pentium(R) 4 CPU 2.00GHz (1994.13-MHz 686-class CPU)
Origin = "GenuineIntel" Id = 0xf24 Stepping = 4
real memory = 260046848 (248 MB)
This machine NbBSD:
CPU: Mobile Intel(R)
2003 Sep 19
2
extracting the levels of a subset of data
Hi,
> tmpdata<-subset(myd, TYPE=="A")
> levels(tmpdata$TYPE)
> [1] "A" "B" "C"
I'd like to get only "A" as output...
Thanks for your help
Marc
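A hedged sketch of the usual fix (toy data; droplevels() is available in current R, older versions can re-factor the column): subset() keeps the unused factor levels, so drop them afterwards.
myd <- data.frame(TYPE = factor(c("A", "A", "B", "C")))
tmpdata <- subset(myd, TYPE == "A")
levels(droplevels(tmpdata$TYPE))   # "A" only
levels(factor(tmpdata$TYPE))       # equivalent without droplevels()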
2008 Jan 29
2
Using Predict and GLM
Dear R Help,
I read through the archives pretty extensively before sending this
email, as it seemed there were several threads on using predict with
GLM. However, while my issue is similar to previous posts (cannot
get it to predict using new data), none of the suggested fixes are
working.
The important bits of my code:
set.seed(644)
n0=200 #number of observations
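A hedged sketch of the most common pitfall (toy model, not the poster's code): predict() only uses new data when newdata is a data frame whose column names match the variables in the model formula; a frequent symptom otherwise is getting predictions for the original data instead.
set.seed(644)
n0 <- 200
dat <- data.frame(x = rnorm(n0))
dat$y <- rbinom(n0, 1, plogis(0.5 + dat$x))
fit <- glm(y ~ x, family = binomial, data = dat)
newd <- data.frame(x = seq(-2, 2, by = 0.5))   # column name matches the formula
predict(fit, newdata = newd, type = "response")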
2007 Aug 31
2
memory.size help
I keep getting the 'memory.size' error message when I run a program I have
been writing. It always says it cannot allocate a vector of a certain size. I
believe the error comes from the code fragment below, where I have multiple
arrays that could be taking up space. Does anyone know a good way around
this?
w1 <- outer(xk$xk1, data[,x1], function(y,z) abs(z-y))
w2 <- outer(xk$xk2,
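A hedged sketch of one workaround (toy sizes, and an invented downstream reduction): instead of allocating the full outer() matrix at once, walk over data[, x1] in blocks, compute the |z - y| block, reduce it, and discard it before the next block.
xk1  <- seq(0, 1, length.out = 200)   # stands in for xk$xk1
vals <- runif(1e5)                    # stands in for data[, x1]
block_size <- 5000
idx <- split(seq_along(vals), ceiling(seq_along(vals) / block_size))
min_dist <- unlist(lapply(idx, function(i) {
  w <- outer(xk1, vals[i], function(y, z) abs(z - y))  # one small block only
  apply(w, 2, min)                                     # reduce, then drop w
}))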
2018 Feb 25
2
include
HI Jim and all,
I want to add one more condition: include the Col2 and col3 values if they are
not already in Col1.
Here is the data
mydat <- read.table(textConnection("Col1 Col2 col3
K2 X1 NA
Z1 K1 K2
Z2 NA NA
Z3 X1 NA
Z4 Y1 W1"),header = TRUE,stringsAsFactors=FALSE)
The desired output would be
Col1 Col2 col3
1 X1 0 0
2 K1 0 0
3 Y1 0 0
4 W1 0 0
6 K2 X1
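A hedged sketch of the exclusion part of the request (using the mydat defined above): collect the Col2/col3 values, drop the NAs, and keep only those not already present in Col1.
vals <- unique(na.omit(unlist(mydat[, c("Col2", "col3")])))
setdiff(vals, mydat$Col1)   # "X1" "K1" "Y1" "W1"; K2 is dropped because it is in Col1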
2007 May 18
3
lapply not reading arguments from the correct environment
Hello,
I am facing a problem with lapply which I *think* may be a bug.
This is the most basic function in which I can reproduce it:
myfun <- function()
{
foo = data.frame(1:10,10:1)
foos = list(foo)
fooCollumn=2
cFoo = lapply(foos,subset,select=fooCollumn)
return(cFoo)
}
I am building a list of dataframes, in each of which I want to keep
only column
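A hedged sketch of the usual workaround: subset() evaluates its select argument non-standardly, so inside lapply() it may not find fooCollumn; plain [ indexing avoids the problem.
myfun2 <- function() {
  foo  <- data.frame(1:10, 10:1)
  foos <- list(foo)
  fooCollumn <- 2
  ## index the column directly instead of relying on subset()'s select
  lapply(foos, function(f) f[, fooCollumn, drop = FALSE])
}
myfun2()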
2018 Feb 25
0
include
Jim has been exceedingly patient (and may well continue to be so), but this smells like "failure to launch". At what point will you start showing your (failed) attempts at solving your own problems so we can help you work on your specific weaknesses and become self-sufficient?
--
Sent from my phone. Please excuse my brevity.
On February 25, 2018 7:55:55 AM PST, Val <valkremk at
2018 Feb 25
0
include
Hi Val,
My fault - I assumed that the NA would be first in the result produced
by "unique":
mydat <- read.table(textConnection("Col1 Col2 col3
Z1 K1 K2
Z2 NA NA
Z3 X1 NA
Z4 Y1 W1"),header = TRUE,stringsAsFactors=FALSE)
val23<-unique(unlist(mydat[,c("Col2","col3")]))
napos<-which(is.na(val23))
preval<-data.frame(Col1=val23[-napos],