Displaying 20 results from an estimated 10663 matches for "hugeness".
2010 Aug 24
2
Extract rows from a list object
Dear list members,
I need to create a table from a huge list object; this list consists of
matrices of the same size (but with different content).
The resulting n tables should contain the same rows from all matrices.
For example:
n <- 23
x <- array(1:20, dim=c(n,6))
huge.list <- list()
for (i in 1:1000) {
  huge.list[[i]] <- x
}
# One of the 1000 matrices
huge.list[[1]][1:4, 1:6]
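A minimal sketch of one possible approach (assumed, not the answer given in the thread): lapply() extracts the same rows from every matrix in the list, and do.call(rbind, ...) stacks them into a single table.
rows <- 1:4                                        # rows wanted from every matrix
same.rows <- lapply(huge.list, function(m) m[rows, , drop = FALSE])
one.table <- do.call(rbind, same.rows)             # 1000 * length(rows) rows, 6 columns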
2006 Apr 04
2
Return function from function with minimal environment
Hi,
this relates to the question "How to set a former environment?" asked
yesterday. What is the best way to return a function with a
minimal environment from a function? Here is a dummy example:
foo <- function(huge) {
  scale <- mean(huge)
  function(x) { scale * x }
}
fcn <- foo(1:10e5)
The problem with this approach is that the environment of 'fcn' does
not
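A minimal sketch of one common workaround (assumed here, not necessarily the answer given in the thread): drop the large argument from the evaluation environment before returning the closure, so only 'scale' is kept.
foo2 <- function(huge) {
  scale <- mean(huge)
  rm(huge)                      # drop the large argument; the closure only needs 'scale'
  function(x) { scale * x }
}
fcn2 <- foo2(1:10e5)
ls(environment(fcn2))           # only "scale" remains in the returned function's environment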
2007 Dec 11
0
3 commits - libswfdec/swfdec_as_context.c libswfdec/swfdec_movie.c test/trace
libswfdec/swfdec_as_context.c                           |  2 +-
libswfdec/swfdec_movie.c                                |  2 +-
test/trace/Makefile.am                                  | 15 +++++++++++++++
test/trace/crash-0.5.4-13491-stack-overflow-5.swf       |binary
test/trace/crash-0.5.4-13491-stack-overflow-5.swf.trace |  1 +
test/trace/crash-0.5.4-13491-stack-overflow-6.swf
2006 Sep 20
3
Splitting a huge vector
Dear R users,
I have a huge vector that I would like to split into
unequal slices. However, the only way I can do this is
to create another huge vector to define the groups
that are used to split the original vector, e.g.
# my vector is this
a.vector <- seq(2, by=5, length=100)
# indices where I would like to slice my vector
cut.values <- c(30, 50, 100, 109, 300, 601, 803)
# so I have to
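A minimal sketch of one possible answer (assumed, not from the thread): findInterval() derives the grouping directly from the cut points, and split() then produces the unequal slices.
groups <- findInterval(a.vector, cut.values)   # slice index for each element
chunks <- split(a.vector, groups)              # list of unequal slices
str(chunks)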
2008 Jun 25
1
huge data?
Hi Jay Emerson,
Our intention is primarily to optimize "R" to utilize the parallel
processing capabilities of the Cell BE processor (has any work been done in this
area?).
We have huge pages (of size 1 MB and 16 MB) available in the system and, as you
pointed out, our data is also in the GB range. So the idea is that if vectors of
this huge size are allocated from huge pages, the performance will
2010 Sep 29
1
cor() alternative for huge data set
Hi,
I have a data set of around 43000 probes (rows) and have to calculate a correlation matrix. When I run the cor function in R, it throws an error message about a RAM shortage, which is unsurprising with such a huge number of rows. I do not see a sensible way to cut down this huge number of entities; is there an alternative to Pearson correlation, or a calculation with other dist() methods (euclidean), that
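A minimal sketch of one possible workaround (assumed, not from the thread): correlate the probes in row blocks and write each block of the correlation matrix to disk, so the full 43000 x 43000 matrix never has to sit in RAM at once; 'x' stands for the probes-by-samples matrix and the block size is only illustrative.
chunked_cor <- function(x, block = 2000, out_dir = "cor_blocks") {
  dir.create(out_dir, showWarnings = FALSE)
  xt <- t(x)                                   # cor() correlates columns, so transpose once
  starts <- seq(1, nrow(x), by = block)
  for (s in starts) {
    idx <- s:min(s + block - 1, nrow(x))
    blk <- cor(xt[, idx, drop = FALSE], xt)    # this block of rows against all rows
    saveRDS(blk, file.path(out_dir, sprintf("cor_%05d.rds", s)))
  }
  invisible(out_dir)
}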
2005 Jun 06
3
Reading huge chunks of data from MySQL into Windows R
Dear List,
I'm trying to use R under Windows on a huge database in MySQL via ODBC
(technical reasons for this...). Now I want to read tables with some
160,000,000 entries into R. I would be glad if anyone out there has
some good hints on what to consider concerning memory management. I'm not
sure about the best methods for reading such huge files into R. For the
moment I split the whole
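A minimal sketch of one chunked approach (assumptions: the RODBC package, a DSN named "mydsn" and a table named "big_table"; none of these come from the thread): fetch the rows in fixed-size pieces with LIMIT/OFFSET so the 160,000,000 entries never have to fit into memory at once, keeping only a per-chunk summary.
library(RODBC)
ch     <- odbcConnect("mydsn")
chunk  <- 1000000L
offset <- 0L
repeat {
  sql  <- sprintf("SELECT * FROM big_table LIMIT %d OFFSET %d", chunk, offset)
  part <- sqlQuery(ch, sql)
  if (nrow(part) == 0) break
  # ... summarise 'part' here and keep only the summary ...
  offset <- offset + chunk
}
odbcClose(ch)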
2005 Mar 25
2
Very HUGE binaries!?
Hi!
I've been compiling Samba 3.0.x on a Solaris 2.6 server using GCC
3.4.1 without any problem until recently... The problem started with the 3.0.12
version and reproduced in 3.0.13. Doing "configure" and then "make"
with these two versions produces VERY HUGE binaries! I'm talking about more
than 50-megabyte binaries in some cases... With 3.0.11 and before I had
much
2006 Mar 17
2
> 1TB filesystems with ZFS and 32-bit Solaris?
Solaris in 32-bit mode has a 1TB device limit. UFS filesystems in 32-bit
mode also have a 1TB limit, even if using a logical volume manager to
span smaller than 1TB devices.
So, what kind of limit does ZFS have when running under 32-bit Solaris?
--
Erik Trimble
Java System Support
Mailstop: usca14-102
Phone: x17195
Santa Clara, CA
2012 Jan 10
0
ltp hugemmap02 fails on ocfs2
Hi Tiger,
ltp-20120104 hugemmap02 testcase fails on ocfs2 filesystem with both UEK
2.6.39-100.0.18 and RHEL 2.6.18-300.el5:
hugemmap02 1 TCONF : huge mmap failed to test the scenario
hugemmap02 1 TCONF : huge mmap failed to test the scenario
hugemmap02 0 TWARN : tst_rmdir:
rmobj(/mnt/ocfs2/ltp-mQdlAx5411/hugSJXB0B) failed:
lstat(/mnt/ocfs2/ltp-mQdlAx5411/hugSJXB0B) failed;
2011 Jun 04
2
Completely disable local keyboard input in Syslinux / Extlinux?
Hello list,
I'm trying to reuse a fairly old PC based embedded system. It has no
video output at all, no VGA, nothing older, and no keyboard / mouse
connectors. The console is on a standard RS232 interface, including BIOS
output and the minimal BIOS loader. The BIOS emulates / redirects text
output to and keyboard input from the console RS232 interface, but
unfortunately, this emulation
2020 Jul 07
3
hex editor for huge files
hexpeek: a hex editor for huge files
Occasionally I need to work with huge binary files. Over the years I've
tried many different tools and never found one that was exactly what I
wanted. In my experience most hex editors either (1) do not work well
with 4GB+ files or (2) require the user to learn a curses interface and
are not scriptable.
So
2023 Feb 10
1
syncing huge files/devices: bmapfs
Hi,
recently I had to sync really huge files (VM images), represented
either as block devices (on one or both sides) or as regular files.
It *seemed* that Rsync didn't work well with block devices (the
--copy-devices option didn't work for me; maybe I'm stupid or it's broken in
the Rsync that ships with Debian bullseye). And additionally I ran into
performance and resource issues
2006 Mar 16
3
Converting huge mbox to Dovecot mbox + indexes
Migrating from UW IMAP/Qpopper mbox to Dovecot mbox.
I have ~500 users, some with HUGE mboxes (500 MB-1 GB).
Is there a script to create the Dovecot indexes at night
to help speed up the migration process?
Any ideas?
Thanks
Bertrand Leboeuf
leboeuf at emt.inrs.ca
2005 Feb 04
2
rsync huge tar files
Hi folks,
Are there any tricks known to let rsync operate on huge tar
files?
I've got a local tar file (e.g. 2GByte uncompressed) that is
rebuilt each night (with just some tiny changes, of course),
and I would like to update the remote copies of this file
without extracting the tar files into temporary directories.
Any ideas?
Regards
Harri
2003 Sep 16
1
how to identify huge downloads ?
hello ...
how can I identify huge downloads on a link so that I can automatically move them to a low-priority queue? Something like a combination of the rate and duration of a session.
Thanks
2010 Jan 23
8
The directory that I am trying to clean up is huge
The directory that I am trying to clean up is huge. Every time I get this
error msg:
-bash: /usr/bin/find: Argument list too long
Please advise
Anas
2017 Mar 10
4
[PATCH v7 kernel 3/5] virtio-balloon: implementation of VIRTIO_BALLOON_F_CHUNK_TRANSFER
On Fri, Mar 10, 2017 at 07:37:28PM +0800, Wei Wang wrote:
> On 03/09/2017 10:14 PM, Matthew Wilcox wrote:
> > On Fri, Mar 03, 2017 at 01:40:28PM +0800, Wei Wang wrote:
> > > From: Liang Li <liang.z.li at intel.com>
> > > 1) allocating pages (6.5%)
> > > 2) sending PFNs to host (68.3%)
> > > 3) address translation (6.1%)
> > > 4) madvise
2005 Jun 06
1
Re: Reading huge chunks of data from MySQL into Windows R
In my (limited) experience R is more powerful concerning data manipulation. An example: I have a vector holding user ids, and some user ids can appear more than once. Doing SELECT COUNT(DISTINCT userid) on MySQL takes approx. 15 min; doing length(unique(userid)) in R takes (almost) no time...
So I think the other way round will serve best: Do everything in R and avoid using SQL on the database...
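A tiny illustration of that comparison (hypothetical data, not from the original post):
userid <- sample(1:10000, 1e6, replace = TRUE)   # a million ids, many repeated
length(unique(userid))                           # R counterpart of SELECT COUNT(DISTINCT userid)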