Displaying 20 results from an estimated 20000 matches similar to: "vector maximum length"
2011 Aug 30
1
"Negative length vector" error in simple merge
Hi,
I'm trying to take a vector (length almost 2,000,000) and merge it with a
data frame of the same number of rows, matching solely by position rather
than by any key columns.
The vector is called "offense", and the data frame is just called "data". I
went with the simplest option:
merge(data,offense)
but it always gives me the same error:
Error in
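The excerpt is cut off at the error, but the likely cause is that merge()
with no shared column names performs a full cross join, so two objects of
roughly 2,000,000 rows imply about 4 x 10^12 result rows, and the internal
length computation overflows into a "negative length vectors are not
allowed" error. A minimal sketch of a positional alternative (object names
from the post, contents invented):

## Stand-ins for the poster's objects
data    <- data.frame(x = rnorm(2e6))
offense <- rnorm(2e6)

## merge(data, offense) would attempt a cross join of
## nrow(data) * length(offense) rows -- far past any vector limit.
## To combine by position instead, attach the vector as a column:
data$offense <- offense        # or: result <- cbind(data, offense)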
2013 Jul 01
3
Asterisk 1.8.20 AGI function SAY DATETIME does not play anything when mode in say.conf is changed to "new"
Hi
I am using the following say.conf file. It's the default file that comes
with the Asterisk installation.
When I call the SAY DATETIME AGI function, it simply returns without
playing the date and time, whereas with mode=old it works. Is this a bug,
or is mode=new not implemented for the SAY DATETIME AGI function?
[general]
mode=new ; method for playing numbers and dates
;
2012 Feb 01
3
A Billion Files on OCFS2 -- Best Practices?
We have an application that has many processing threads writing more than a
billion files ranging from 2KB to 50KB, with 50% under 8KB (currently there
are 700 million files). The files are never deleted or modified; they are
written once, and read infrequently. The files are hashed so that they are
evenly distributed across ~1,000,000 subdirectories up to 3 levels deep,
with up to 1000 files
2008 Jun 17
4
maximum MDT inode count
For future filesystem compatibility, we are wondering if there are any
Lustre MDT filesystems in existence that have 2B or more total inodes?
This is fairly unlikely, because it would require an MDT filesystem
that is > 8TB in size (which isn't even supported yet) and/or has been
formatted with specific options to increase the total number of inodes.
This can be checked with
2007 Jun 17
2
Flint failed to deliver indexing performance to Quartz.
Flint has failed to deliver indexing performance comparable to Quartz.
I am proposing to remove Flint as the default database and put Quartz
back as the default. The catch is not that the Flint database is smaller
and faster during searches than Quartz, which is what the developers were
concerned with when measuring; rather, they neglected to measure
performance when creating large indexes.
The truth is that Flint
2006 Feb 01
2
memory limit in aov
I want to do an unbalanced anova on 272,992 observations with 405
factors including 2-way interactions between 1 of these factors and
the other 404. After fitting only 11 factors and their interactions I
get error messages like:
Error: cannot allocate vector of size 1433066 Kb
R(365,0xa000ed68) malloc: *** vm_allocate(size=1467461632) failed
(error code=3)
R(365,0xa000ed68) malloc: ***
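The output above is truncated, but the failed allocation is itself
informative (my own arithmetic, not from the thread): 1,433,066 Kb is a
single block of about 183 million doubles, which at 272,992 observations
corresponds to a model matrix of roughly 670 columns; the full set of 404
interactions would need far more.

bytes <- 1433066 * 1024     # matches the vm_allocate size up to page rounding
bytes / 8                   # ~1.83e8 doubles requested in one block
(bytes / 8) / 272992        # ~672 implied model-matrix columns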
2003 Nov 22
6
zlib missing when installing openssh-3.7.1p2
"Pacelli, Louis M, ALABS" wrote:
>
> Hi,
> I apologize for sending in this problem via email, but I had trouble using bugzilla.
Please use openssh-unix-dev at mindrot.org for problems with OpenSSH Portable
(i.e. anything that's not OpenBSD).
> I'm trying to install openssh-3.7.1p2
> When I run the configure step, I get the following message:
>
>
2010 Sep 10
11
Large directory performance
We have been struggling with our Lustre performance for some time now, especially with large directories. I recently did some informal benchmarking (on a live system, so I know the results are not scientifically valid) and noticed a huge drop in the performance of reads (stat operations) past 20k files in a single directory. I'm using bonnie++, disabling IO testing (-s 0) and just creating, reading,
2010 Dec 09
2
Error in vector("integer", length) : vector size cannot be NA
Hello,
I have uploaded a csv file that looks like this:
> gc
alpha_id beta_id
1 142053 1
2 9454 1
3 295618 2
4 42691 2
5 389224 3
6 9455 3
The alpha_id contains 310660 unique values and the beta_id contains 17431
unique values. The number of rows adds up to more than 1.3 million. Now I
want to convert
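The message is truncated, but this error usually means a size computation
overflowed to NA: a full alpha-by-beta cross-tabulation would need
310,660 x 17,431 (about 5.4 x 10^9) cells, well past the 2^31 - 1
vector-length limit. A hedged sketch of a sparse alternative, assuming the
goal was an incidence matrix (the Matrix package is my suggestion, not
from the post):

library(Matrix)
## gc is the data frame from the post, with columns alpha_id and beta_id
i <- match(gc$alpha_id, unique(gc$alpha_id))  # row index per observation
j <- match(gc$beta_id,  unique(gc$beta_id))   # column index per observation
m <- sparseMatrix(i = i, j = j, x = 1L)       # 310660 x 17431, ~1.3M nonzeros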
2005 Sep 12
13
Skype purchased by Ebay 2.6 Billion
Good news for service providers in my opinion, Ebay will likely start
alienating Skype users like they did with Paypal users.
"SkypeSucks.com" domain already taken, shucks....
http://www.kesq.com/Global/story.asp?S=3837895
--
Cory J Andrews
Partner / Purchasing
+++++++++++++++
VOIPSupply.com - Everything you need for VOIP
454 Sonwil Drive
Buffalo, NY 14225
+++++++++++++++
tf voice
2009 Jan 04
2
Bring India together
Look, ma... spam! We dun never seen that 'n before.
N.
Sunkara RaviPrakash wrote:
>
> Hi,
>
> Imagine a billion Indians together.
>
> Already 3 million Indians have chosen Indyarocks.com to bring India
> together.
>
> I am already part of it and dont be surprised if you find most of your
> other friends too :). Also you can send Unlimited Free SMS to your
2011 Apr 15
1
simulations with very large number of iterations (1 billion)
Hello R-help list
I'm trying to run 1 billion iterations of code that draws from random
distributions to implement a data-generating process and then computes
various estimators, which are recorded for later comparison of their
performance. I have two questions about how to achieve this:
1. the most important: on my laptop, R gives me an error message
saying that it cannot
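The error text is cut off, but the usual failure is preallocating a
billion-element result vector (about 8 GB as doubles) rather than the loop
itself. A minimal sketch, assuming only summary statistics of the
simulated values are needed, is to work in chunks and keep running sums:

n_rep <- 1e9                 # total draws wanted
chunk <- 1e6                 # draws held in memory at once
s <- 0; s2 <- 0              # running sums for mean and variance
for (k in seq_len(n_rep / chunk)) {
  x  <- rnorm(chunk)         # stand-in for the data generating process
  s  <- s  + sum(x)
  s2 <- s2 + sum(x^2)
}
mean_hat <- s / n_rep
var_hat  <- s2 / n_rep - mean_hat^2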
2008 Nov 02
5
R newbie: how to replace string/regular expression
Hello;
I am an R newbie and would like to know a correct and efficient method for
doing string replacement.
I have a large data set where I want to convert currency strings carrying
an "M", "B", or "K" suffix (million, billion, thousand; either case) to
numeric values in millions. That is, replace 209.7B with 209.7 * 1e3 and
100.00K with 100.00 * 1e-3, and so on.
d <- c("120.0M", "11.01m",
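The data vector above is cut off; a hedged sketch of one way to do the
conversion (the function name and multipliers are mine, assuming values
are wanted in millions and suffixes may be upper- or lowercase):

to_millions <- function(x) {
  mult   <- c(k = 1e-3, m = 1, b = 1e3)       # thousand/million/billion, in millions
  suffix <- tolower(substring(x, nchar(x)))   # last character of each element
  value  <- as.numeric(substring(x, 1, nchar(x) - 1))
  unname(value * mult[suffix])
}
to_millions(c("120.0M", "11.01m", "209.7B", "100.00K"))
# [1]    120.00     11.01 209700.00      0.10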
2010 Feb 17
1
R 64-bit memory.limit() is max of ?
Currently using R 2.9.2
Win XP
Goal: scale up computing capacity for large datasets (1-5 million records)
I realize under 32 bit versions of R the memory.limit is maxed at 4GB.
Q:
1. What are the limits under 64 bit versions of R? Are those limits OS
dependent?
2. Are there limits to the size of individual objects?
3. Are there limits or problems in using functions such as lm(),
glm(),
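For rough sizing (my own arithmetic, not from the thread): numeric data
costs 8 bytes per element, so 5 million records of 20 numeric variables is
only about 760 MB before any copies made by lm() or glm(). Note also that
at the time even 64-bit builds capped a single vector at 2^31 - 1 elements:

5e6 * 20 * 8 / 2^20         # ~763 MB for the raw numeric data
.Machine$integer.max        # 2147483647, the per-vector element cap in R 2.x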
2003 Nov 25
2
zlib/openssl/openssh for Solaris
Darren,
I went to install zlib/openssl and openssh on one of my Sun
servers (Solaris 2.7), and they would not install. Is there a website
where I can get Sun versions of these products?
Thanks,
Lou
-----Original Message-----
From: Darren Tucker [mailto:dtucker at zip.com.au]
Sent: Saturday, November 22, 2003 9:35 PM
To: Pacelli, Louis M, ALABS
Cc: OpenSSH Devel List
Subject: Re: zlib missing when
2004 Dec 15
3
Massive clustering job?
Hi,
I have ~40,000 rows in a database, each with an id and 20 additional
columns of count data.
I want to cluster the rows based on these count vectors.
There are ~1.6 billion possible 'distances' between pairs of vectors
(cells in my distance matrix), so I need to do something smart.
Can R somehow handle this?
My first thought was to index the database with
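The excerpt ends mid-sentence; one standard way to sidestep the n x n
distance matrix is a centroid-based method that only computes distances
from each row to k centers. A minimal sketch with base R's kmeans on
invented stand-in data:

counts <- matrix(rpois(40000 * 20, lambda = 5), ncol = 20)  # stand-in counts
km <- kmeans(scale(counts), centers = 10, nstart = 5)       # memory is O(n*k)
table(km$cluster)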
2009 Jul 29
2
cannot allocate a vector with 1920165909 length
Dear Rusers,
Running the following gives an error saying that R cannot allocate a
vector of length 1920165909:
a <- expand.grid(se1=0:100/100, sp1=0:100/100, se2=0:100/100, sp2=0:100/100,
DR=0:100/100)
How can I solve this? Maybe setwd(dir) can help; I am not very sure about it.
Any ideas about it?
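The reported length is itself the clue (my own arithmetic): the grid has
101^5 = 10,510,100,501 rows, and 1,920,165,909 is exactly 101^5 mod 2^32,
i.e. the length computation overflowed; the full grid would also need
about 390 GB as doubles. One workaround, sketched below, is to fix two of
the parameters per pass and work with 101^3-row slices:

101^5                       # 10,510,100,501 combinations requested
101^5 %% 2^32               # 1,920,165,909 -- the length in the error message
101^5 * 5 * 8 / 2^30        # ~391 GB for the full 5-column grid of doubles
for (se1 in 0:100/100) for (sp1 in 0:100/100) {
  g <- expand.grid(se2 = 0:100/100, sp2 = 0:100/100, DR = 0:100/100)
  ## ... evaluate whatever depends on (se1, sp1, g), then discard g ...
}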
2012 Apr 26
1
Dataset maximum size?
Hi all,
I have an interesting project coming up, but the datasets are way bigger
than anything I've used before with R. I'll end up with a dataset with about
45,000,000 records, each with 3 columns. I'll probably want to add some more
columns for analysis if I can. My client can't deal with such big files, so
she is sending the data to me in chunks of "only" 4,500,000
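For scale (my own estimate, not from the thread): 45,000,000 rows of 3
numeric columns is only about 1 GB in memory, comfortably within a 64-bit
R session, so the chunks can simply be read and stacked. A minimal sketch
with hypothetical chunk file names:

45e6 * 3 * 8 / 2^30                      # ~1 GB for the full numeric dataset
files  <- sprintf("chunk%02d.csv", 1:10) # hypothetical file names
pieces <- lapply(files, read.csv,
                 colClasses = c("integer", "numeric", "numeric"))
big <- do.call(rbind, pieces)            # ~45M rows, 3 columns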
2010 Feb 17
8
Use of R in clinical trials
Dear all,
There have been a variety of discussions on the R list regarding the use of R in clinical trials. The following post from the STATA list provides an interesting opinion regarding why SAS remains so popular in this arena: http://www.stata.com/statalist/archive/2008-01/msg00098.html
Regards,
-Cody Hamilton