Sounds like plyr, although I have never used it. If you want help
with code, you need to provide a reproducible example.
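Untested, but from the plyr docs something like this should do the
same grouped sum (there is also a base-R rowsum() sketch below your
quoted message):

library(plyr)
# split on the four grouping columns, then sum the measurement
# columns (8:167 in the layout you describe) within each group
Final.Data.Short <- ddply(Merge.FinalSubset,
                          .(Location, Measure, Site, Label),
                          function(d) colSums(d[, 8:167]))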
On Wed, Nov 5, 2008 at 5:11 AM, Bert Jacobs
<bert.jacobs at figurestofacts.be> wrote:
>
> Hi,
>
> I've written the following line of code to make a summary of some data:
>
> Final.Data.Short <- as.data.frame(aggregate(Merge.FinalSubset[, 8:167],
>     list(Location = Merge.FinalSubset$Location,
>          Measure  = Merge.FinalSubset$Measure,
>          Site     = Merge.FinalSubset$Site,
>          Label    = Merge.FinalSubset$Label),
>     FUN = sum))
>
> Where "Merge.FinalSubset" is a dataframe of 2640 rows and 167
columns
>
> The result "Final.Data.Short" is a dataframe of 890 rows and 164
columns
>
> This operation currently takes more than a minute. I was wondering
> whether there are ways to reduce this time, either through other code
> or by splitting the original data frame into smaller pieces, running
> several separate aggregations, and recombining the results afterwards?
>
> Thx for helping me out
>
> Bert
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
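PS: since you are just summing, base R's rowsum() might also be worth
timing; it does grouped sums in compiled code. Untested sketch, with
the caveat that the four grouping variables end up pasted together as
row names rather than as separate columns:

# one grouping factor built from the four columns
grp <- interaction(Merge.FinalSubset$Location,
                   Merge.FinalSubset$Measure,
                   Merge.FinalSubset$Site,
                   Merge.FinalSubset$Label,
                   drop = TRUE)
Final.Data.Short <- rowsum(Merge.FinalSubset[, 8:167], group = grp)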
--
Stephen Sefick
Research Scientist
Southeastern Natural Sciences Academy
Let's not spend our time and resources thinking about things that are
so little or so large that all they really do for us is puff us up and
make us feel like gods. We are mammals, and have not exhausted the
annoying little problems of being mammals.
-K. Mullis