Displaying 17 results from an estimated 10000 matches similar to: "Handling large data sets via scan()"
2005 Feb 19
2
Memory Fragmentation in R
I have a data set of roughly 700MB which during processing grows up to
2GB (I'm using a 4GB Linux box). After the work is done I clean up (rm())
and the state is returned to 700MB. Yet I find I cannot run the same
routine again, as it claims to be unable to allocate memory even though
gcinfo() claims there is 1.1GB left.
At the start of the second time
===============================
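A minimal sketch of the reported pattern (the sizes are illustrative, not
taken from the post):
x <- numeric(90e6)       # ~700MB of doubles
tmp <- numeric(170e6)    # processing grows the session towards 2GB
rm(tmp); gc()            # frees the R objects again
y <- numeric(170e6)      # can still fail if the freed pages are fragmented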
2005 Feb 24
1
Do environments make copies?
I am using environments to avoid making copies (by keeping references).
But it seems like there is a hidden copy going on somewhere - for
example in the code fragment below, I am creating a reference to "y"
(of size 500MB) and storing the reference in object "data". But when I
save "data" and then restore it in another R session, gc() claims it is
using twice the
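A minimal sketch of the pattern being described (names follow the post,
sizes are illustrative); note that save() serializes the contents of an
environment, so the reference does not keep the 500MB object out of the
saved image:
e <- new.env()
e$y <- numeric(65e6)               # ~500MB held in the environment
data <- list(ref = e)              # "data" holds only a reference to e
save(data, file = "data.RData")    # ...but y itself is written out here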
2013 Jan 11
3
split & rbind (cast) dataframe
Hi,
I would like to split a data frame based on one column and then
combine the two resulting data frames by rows (like rbind). Here is a small example:
# The original data frame
df1 <- data.frame(col1 = c("A","A","B","B"), col2 = c(1:4), col3 = c(1:4))
# The data frame as it could look afterwards
df2 <- data.frame(A.col2 = c(1,2), A.col3 = c(1,2), B.col2 = c(3,4),
                  B.col3 = c(3,4))
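A minimal base-R sketch of one way to produce the df2 shown from df1,
splitting on col1 and binding the pieces side by side:
parts <- split(df1[, c("col2", "col3")], df1$col1)  # one data frame per level
df2 <- do.call(cbind, parts)  # names of the list become the A./B. prefixes
rownames(df2) <- NULL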
2008 Feb 10
2
reshape
Dear colleagues,
I'd like to reshape a data frame from long format to wide format, but
I do not quite get what I want. Here is an example of the data I
have (dat):
sp <- c("a", "a", "a", "a", "b", "b", "b", "c", "d", "d", "d", "d")
tr <- c("A",
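The example data is cut off above, so the value column below is
hypothetical, but a base-R long-to-wide reshape of data shaped like this
generally looks like:
# assume dat has columns sp, tr and a measurement column val
wide <- reshape(dat, idvar = "sp", timevar = "tr", direction = "wide")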
2008 Feb 08
1
reshape question
I know there are a lot of reshape questions on the mailing list, but I
haven't been able to find an answer to this particular issue.
I am trying to get a data frame structured like this:
> sub <- rep(1:5)
> ta1 <- rep(1,5)
> ta2 <- rep(2,5)
> tb1<- rep(3,5)
> tb2 <- rep(4,5)
> DF <- data.frame(sub,ta1,ta2,tb1,tb2)
> DF
sub ta1 ta2 tb1 tb2
1
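The printout and the rest of the post are cut off, but a hedged guess at
the kind of wide-to-long restructuring base R's reshape() can do with this DF:
long <- reshape(DF, idvar = "sub",
                varying = list(c("ta1", "ta2"), c("tb1", "tb2")),
                v.names = c("ta", "tb"),
                timevar = "time", direction = "long")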
2004 May 13
3
EXT3 performance on Large (multi-TeraByte) RAID
Has anyone experienced a significant degradation in ext3 performance when using it on a multi-terabyte RAID? As part of an experimental setup, I hooked up three 300GB drives and made an EXT3 RAID5 out of them, using the entire space on each drive, and started throwing in a large number of files in the size range 3KB to 50KB. Then I deleted the RAID and created a new one, but this time I used
2000 Feb 02
1
Large data sets and aggregation
I've noticed quite a few messages relating to large data sets bedeviling
R users, and having just had to program my way through one that actually
caused a "Bus error" when I tried to read it in, I'd like to ask two
questions.
1) Are there any facilities for aggregation of data in R?
(I admit that this will not do much for the large data set problem
immediately)
2) Is there any
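On question 1, base R does have aggregation facilities; a minimal sketch
(the column names here are hypothetical):
aggregate(value ~ group, data = df, FUN = mean)  # mean of value per group
tapply(df$value, df$group, mean)                 # the same, without a formula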
2016 Apr 13
4
recreating extensions.conf from live dialplan ?
On 4/13/16 11:57 AM, A J Stiles wrote:
> You could try
> *CLI> dialplan show
Between my older backup and dialplan show, I guess that's my best shot.
Thanks :D
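For scripting this, the same dump can be captured non-interactively, e.g.
asterisk -rx "dialplan show" > dialplan-dump.txt, though the output still
has to be translated back into extensions.conf syntax by hand.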
2011 Jan 05
3
How to 'explode' a matrix
Hi everyone,
I'm looking for a way to 'explode' a matrix like this:
> matrix(1:4,2,2)
[,1] [,2]
[1,] 1 3
[2,] 2 4
into a matrix like this:
> matrix(c(1,1,2,2,1,1,2,2,3,3,4,4,3,3,4,4),4,4)
[,1] [,2] [,3] [,4]
[1,] 1 1 3 3
[2,] 1 1 3 3
[3,] 2 2 4 4
[4,] 2 2 4 4
My current kludge is this:
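One compact alternative to a kludge here is the Kronecker product with a
matrix of ones, which replicates each element into a 2x2 block:
m <- matrix(1:4, 2, 2)
kronecker(m, matrix(1, 2, 2))   # the 4x4 matrix above; same as m %x% matrix(1, 2, 2)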
2010 Feb 05
6
large scale paging
Has anyone done any large scale intercom deployments with Asterisk? I've
been asked about building a system to one-way page 500 phones
simultaneously from a single server.
My concerns are:
- My limited math capabilities suggest 41 Mbps of RTP traffic, which
seems like a lot, plus Asterisk would be taking a single input stream
and exploding it out to 500 endpoints.
- There are 500
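For reference, the bandwidth estimate is plausible: G.711 RTP at 20ms
packetization is 64kbps of payload plus IP/UDP/RTP overhead, roughly
80-90kbps per stream on the wire, and 500 x ~85kbps is about 42Mbps.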
2008 Nov 05
2
Memory limits for large data sets
Hello,
I have several very large data sets (1-7 million observations, sometimes hundreds of variables) that I'm trying to work with in R, and memory seems to be a big issue. I'm currently using a 2 GB Windows setup, but might have the option to run R on a server remotely. Windows R seems basically limited to 2 GB of memory if I'm right; is there the possibility to go much beyond that
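A hedged note on the limit as it stood when this was posted: 32-bit Windows
builds of R could query and, within what the OS allows, raise the cap via
memory.limit():
memory.limit()             # current limit, in MB
memory.limit(size = 3000)  # request more, if the OS can provide it
Moving to a 64-bit OS, or to the remote server, is what actually lifts the
~2GB ceiling.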
2005 Nov 08
2
Ruby equivalent of PHP Explode / Implode
Anyone know what the Ruby equivalents of PHP's explode and implode are
for arrays?
- Jim
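For reference, the usual Ruby equivalents are String#split (for PHP's
explode) and Array#join (for implode).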
2018 Apr 20
4
[PATCH] kvmalloc: always use vmalloc if CONFIG_DEBUG_VM
On Thu 19-04-18 12:12:38, Mikulas Patocka wrote:
[...]
> From: Mikulas Patocka <mpatocka at redhat.com>
> Subject: [PATCH] kvmalloc: always use vmalloc if CONFIG_DEBUG_VM
>
> The kvmalloc function tries to use kmalloc and falls back to vmalloc if
> kmalloc fails.
>
> Unfortunately, some kernel code has bugs - it uses kvmalloc and then
> uses DMA-API on the returned
2010 Aug 12
1
help using scan on large matrix (caveats to what has been discussed before)
Dear all,
I have a few points that I am unsure about when using scan(). I know that it
is covered in the intro to R, and has also been discussed here:
http://www.mail-archive.com/r-help at r-project.org/msg04869.html
but nevertheless, I cannot get it to work.
I have a potentially very large matrix that I need to read in (35MB). I
am about to run it on a server with 16GB of memory etc., so I hope it
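A minimal sketch of the usual advice for scan() on a numeric matrix (the
file name and dimensions are hypothetical):
# declaring the type and count up front avoids repeated reallocation
v <- scan("big_matrix.txt", what = numeric(), n = 1000 * 5000)
m <- matrix(v, nrow = 1000, byrow = TRUE)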
2007 Aug 16
2
[LLVMdev] Changing basic blocks
On Wed, 15 Aug 2007, Emílio Wuerges wrote:
> --
> int total = BB->size();
> std::vector<MachineInstr*> positionmap(total);
> for (int i = 0; i< total; ++i)
> positionmap.push_back(BB->remove(BB->begin()));
> for(int i = 0; i< total; ++i)
> BB->push_back(positionmap[i]);
> --
This doesn't do what you think. This line:
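(The reply is truncated, but the bug it points at is visible in the
fragment: constructing positionmap with an initial size of total creates
total null entries, and the push_back calls then append after them, so the
vector ends up with 2*total elements whose first half are null pointers;
constructing it empty and calling reserve(total) would give the intended
behaviour.)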
2010 Dec 08
2
Parallel Scan of Large File
Is it possible to parallel scan a large file into a character vector in 1M
chunks using scan() with the "doMC" package? Furthermore, can I specify the
tasks for each child?
i.e. I'm working on a Linux box with 8 cores and would like to scan in 8M
records at a time (all 8 cores scan 1M records each) from a file with 40M
records total.
file <-
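The post is cut off, but a hedged sketch of the kind of chunked read it
describes (the file name and chunk size are hypothetical, and each worker
still has to seek past all the earlier lines):
library(doMC)
library(foreach)
registerDoMC(8)
chunk <- 1e6
res <- foreach(i = 0:7) %dopar% {
  scan("records.txt", what = character(), sep = "\n",
       skip = i * chunk, nlines = chunk, quiet = TRUE)
}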
2009 Dec 01
4
median for time data
Hi everybody
How do I calculate the median and average of a column of time data like
this: "8:50:10"? I also need to plot the time difference between two columns.
Thanks a lot
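A minimal base-R sketch of one way to do this (the sample times are
hypothetical):
times <- c("8:50:10", "9:05:00", "8:45:30")
secs <- sapply(strsplit(times, ":"),
               function(x) sum(as.numeric(x) * c(3600, 60, 1)))
median(secs)   # median, in seconds since midnight
mean(secs)     # the average, likewise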