Displaying 20 results from an estimated 30000 matches similar to: "Cannot Allocate Memory"
2018 Jan 15 | 1 | Aw: Re: Lmtp Memory Limit
1999 Oct 21 | 2 | problems with memory allocation
I hope that someone has had a similar problem and will be able to help us:
We have installed the R package on a Digital workstation with 500 MB of RAM,
running under the Unix operating system. The package works fine, but when we
try to start the program with more than 120 MB (vsize --120M), the workstation
refuses to allocate this memory. The message that we get is:
Fatal error:
2018 Jan 14 | 2 | Lmtp Memory Limit
Hi,
I am using Dovecot 2.2.33.2 on CentOS 7.4.
Since I upgraded from CentOS 7.2 to CentOS 7.4 (without upgrading Dovecot), my
Dovecot sieve-pipe scripts crash with an out-of-memory error:
Out of memory (allocated 262144) (tried to allocate 8793 bytes)
Are there memory limits in Dovecot or Sieve? Can I change this value?
Kernel limits:
[root@xxx software]# ulimit -a
core file size
2002 Feb 17 | 1 | how to allocate more memory
Hello,
I am new to R (and I am not a statistician). I experienced a problem when
running an application.
######################
> exp.ravglogpm_express(x, method="ravglogpm", normalize=T,
+   normalize.method="quantile", span=3/5,
+   choose.subset=T, subset.size=5000, verbose=T, maxit=2)
Performing quantile normalization.
Error: cannot allocate vector of size
2010 Nov 23 | 2 | Error: cannot allocate vector of size x Gb (64-bit ... yet again)
Hello,
I am facing the dreaded "Error: cannot allocate vector of size x Gb" and don't
understand enough about R (or operating system) memory management to diagnose
and solve the problem -- despite studying previous posts and relevant R help --
e.g.:
"Error messages beginning cannot allocate vector of size indicate a failure to
obtain memory, either because the size exceeded
2004 Jun 19 | 2 | DU and Hard Links?
Hi,
I'm doing a 30-day rotational backup using rsync.
If I go to the root of the backup directory and use du --max-depth=1 -h, it
gives me the actual space being taken up by each incremental directory, the
space taken by the current directory, and then the total of all of them.
For example:
44G   /Current
1G    /06-20-2004
750M  /06-19-2004
...
70G   Total
But what I would like to
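As background for the hard-link question, here is a minimal C sketch (not from
the thread; the file names on the command line are illustrative) that uses
stat(2) to check whether two paths refer to the same inode. Rotational rsync
backups often hard-link unchanged files (e.g. via --link-dest), and du charges
the shared blocks only to the first directory it walks:

/* Check whether two paths are hard links to the same file.
 * Build: cc -o linkcheck linkcheck.c */
#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s FILE1 FILE2\n", argv[0]);
        return 2;
    }

    struct stat a, b;
    if (stat(argv[1], &a) != 0 || stat(argv[2], &b) != 0) {
        perror("stat");
        return 1;
    }

    /* Hard links share one inode on one device, so their data blocks exist
     * only once on disk, which is why du does not count them twice. */
    if (a.st_dev == b.st_dev && a.st_ino == b.st_ino)
        printf("same inode %lu, link count %lu: blocks are shared\n",
               (unsigned long) a.st_ino, (unsigned long) a.st_nlink);
    else
        printf("different inodes: each path has its own blocks\n");
    return 0;
}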
2005 Dec 16 | 2 | out of memory on dovecot alpha5
Hello,
Once a day I get these errors in dovecot.log:
===========================================
dovecot: Dec 16 14:06:04 Error: auth-worker(default): Out of memory (Needed 52 bytes)
dovecot: Dec 16 14:06:04 Error: auth-worker(default): sql(login,<ip>): Password query failed: MySQL client ran out of memory
dovecot: Dec 16 14:06:06 Info: imap-login: Disconnected: user=<login>,
2007 Jul 19 | 1 | RAM, swap, Error: cannot allocate vector of size, Linux:
Dear Community,
I am very new to the world of Linux and R, and I have stumbled upon a problem that I cannot seem to resolve on my own. Here is the relevant background:
I am working on 64-bit Fedora Core 6 and using R version 2.5.1. I have 3.8 GB of RAM and 1.9 GB of swap. As far as I can see, this particular OS build imposes no restrictions on the amount of memory that R can use.
2007 Aug 27 | 3 | rsync out of memory at 8 MB although ulimit is 512MB
Hello again,
I encountered something strange. At first I thought not enough memory was
allowed through ulimit, but ulimit is now set to (almost) 512 MB and rsync
still runs out of memory at 8 MB. Can anyone tell me why?
This is my configuration:
rsync version 2.6.2
from AIX 5.3 to SuSE Linux 9 (which also has rsync 2.6.2)
ulimit -a on AIX (source):
-------------------------
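One way to see which limits the rsync process actually inherits (as opposed to
what the interactive shell reports) is to query them from inside a process; a
minimal, generic C sketch, not from the thread, with an illustrative program
name:

/* Print the memory-related resource limits this process sees.
 * Build: cc -o showlimits showlimits.c */
#include <stdio.h>
#include <sys/resource.h>

static void show(const char *name, int resource)
{
    struct rlimit rl;
    if (getrlimit(resource, &rl) != 0) {
        perror(name);
        return;
    }
    printf("%-13s soft=%lld hard=%lld (-1 means unlimited)\n", name,
           rl.rlim_cur == RLIM_INFINITY ? -1LL : (long long) rl.rlim_cur,
           rl.rlim_max == RLIM_INFINITY ? -1LL : (long long) rl.rlim_max);
}

int main(void)
{
    show("RLIMIT_DATA", RLIMIT_DATA);    /* heap / data segment */
    show("RLIMIT_AS", RLIMIT_AS);        /* total address space */
    show("RLIMIT_STACK", RLIMIT_STACK);  /* stack */
    return 0;
}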
2002 Jun 27 | 3 | UsePrivilegeSeparation: "fatal: xrealloc: out of memory"
I just upgraded from OpenSSH 2.5.2p2 to 3.4p1 to take advantage of privilege
separation. After installation, when a user tries to log in, he gets dropped
almost immediately. In the server's /var/log/messages:
Jun 26 20:15:04 sclp3 sshd[6433]: Accepted password for jason from 128.165.148.66 port 41871 ssh2
Jun 26 20:15:12 sclp3 jason[110]: sshd[6444]: fatal: xrealloc: out of memory (new_size
2017 Mar 15 | 1 | Error: memory exhausted (limit reached?)
Hi,
I first posted this message on r-help but was redirected here.
I am encountering a strange memory error, and I'd like some help determining whether I'm doing something wrong or there is a bug in recent R versions.
I'm currently working on a DeepNet project at home, on an old PC with 4 GB of RAM running Ubuntu 16.04.
For efficiency reasons, I preprocessed my dataset and stored it as
2008 Aug 04 | 1 | pam max locked memory issue after updating to 5.2 and rebooting
We were previously running 5.1 x86_64 and recently updated to 5.2 using yum.
Under 5.1 we had problems when running jobs using Torque, and the solution had
been to add the following entries to the files noted:
"* soft memlock unlimited" in /etc/security/limits.conf
"session required pam_limits.so" in /etc/pam.d/{rsh,sshd}
This changed the max
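A quick way to confirm that a raised memlock limit actually reaches a login
session (rather than only sitting in limits.conf) is to query and exercise it
from a small program; a generic C sketch, not specific to Torque or this
setup, with illustrative names:

/* Report RLIMIT_MEMLOCK and try to lock 1 MiB of memory.
 * Build: cc -o memlockcheck memlockcheck.c */
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_MEMLOCK, &rl) == 0)
        printf("RLIMIT_MEMLOCK: soft=%lld hard=%lld bytes\n",
               (long long) rl.rlim_cur, (long long) rl.rlim_max);

    /* mlock() failing with ENOMEM or EPERM usually means the pam_limits
     * settings did not apply to this session. */
    size_t len = 1 << 20;
    void *buf = malloc(len);
    if (buf == NULL)
        return 1;
    if (mlock(buf, len) == 0) {
        printf("mlock of %zu bytes succeeded\n", len);
        munlock(buf, len);
    } else {
        perror("mlock");
    }
    free(buf);
    return 0;
}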
2006 Jun 23 | 7 | malloc small pieces of memory
Hi,
I have a problem with Wine. I have one program that allocates many small
pieces of memory with malloc(). "Many" means over 2 million, "small" means 8
to 128 bytes in size.
Running this program on Windows gives me ~300 MB of memory usage (according to
one of the values in the Task Manager). On Linux, top shows me a physical
usage of
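Part of the difference with millions of tiny allocations is per-block
allocator overhead. A rough, glibc-specific C sketch of how to measure it
(not from the thread; the counts and sizes mirror the numbers above but are
illustrative):

/* Compare requested bytes with what the allocator actually hands back for
 * many tiny blocks. Build: cc -o tinyalloc tinyalloc.c */
#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>   /* malloc_usable_size() is a glibc extension */

int main(void)
{
    const size_t count = 2000000;   /* "over 2 million" blocks */
    const size_t request = 16;      /* each nominally 8 to 128 bytes */
    size_t requested = 0, usable = 0, allocated = 0;

    void **blocks = malloc(count * sizeof(void *));
    if (blocks == NULL)
        return 1;

    for (size_t i = 0; i < count; i++) {
        blocks[i] = malloc(request);
        if (blocks[i] == NULL)
            break;
        allocated++;
        requested += request;
        /* The usable payload is often larger than the request; the
         * allocator's per-chunk header is additional overhead not shown. */
        usable += malloc_usable_size(blocks[i]);
    }

    printf("%zu blocks: requested %zu bytes, usable %zu bytes\n",
           allocated, requested, usable);

    for (size_t i = 0; i < allocated; i++)
        free(blocks[i]);
    free(blocks);
    return 0;
}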
2008 Mar 18 | 0 | how to make virtual machine in xen allocate memory automatically?
Hi,
I recently installed Xen 3.0 on a CentOS 5.1 server with one CentOS 5.1
virtual machine, and I configured the virtual machine with 2 vCPUs, a minimum
memory allocation of 256 MB, and a maximum memory allocation of 750 MB. The
virtual machine runs fine when idle, with 256 MB allocated.
Today I installed an application called dotCMS, and it:
Using JAVA_OPTS: -Ddotserver=dotcms_1.5.1
2014 Dec 18 | 2 | segfault when trying to allocate a large vector
Dear R contributors,
I'm running into trouble when trying to allocate a large (but in theory
viable) vector in C code bound to R through .Call(). Here is some sample code
summarizing the problem:
SEXP test() {
    int size = 10000000;
    double largevec[size];   /* variable-length array: ~80 MB on the C stack */
    memset(largevec, 0, size * sizeof(double));
    return R_NilValue;
}
If size is small enough (up to 10^6), everything is
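The array above lives on the C stack, which is usually limited to a few
megabytes, so ~80 MB of doubles overflows it. A sketch of the same function
using R's transient allocator R_alloc() instead (the function name test_heap
is illustrative, not from the post):

#include <string.h>
#include <R.h>
#include <Rinternals.h>

/* Allocate the working buffer on the heap via R_alloc(); R reclaims this
 * memory automatically when the .Call() returns. */
SEXP test_heap(void)
{
    R_xlen_t size = 10000000;
    double *largevec = (double *) R_alloc(size, sizeof(double));

    memset(largevec, 0, size * sizeof(double));
    /* ... use largevec here ... */
    return R_NilValue;
}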
2008 Oct 16 | 3 | strict memory
Hello all:
We are running 5.2 at our university. We have several student processes that
take up too much memory. Our system has 64 GB of RAM, and some processes take
close to 32-48 GB. This is causing many problems for other users. I was
wondering if there is a way to restrict memory usage per process: if a process
goes over 32 GB, simply kill it. Any thoughts or ideas?
TIA
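The per-process mechanism that ulimit and pam_limits build on is setrlimit();
here is a minimal C sketch of capping a job's address space before launching
it (the program name and the 32 GiB cap are illustrative; allocations beyond
the cap fail rather than the process being killed outright):

/* Run COMMAND with its virtual address space capped.
 * Build: cc -o runcapped runcapped.c
 * Usage: ./runcapped COMMAND [ARGS...] */
#include <stdio.h>
#include <unistd.h>
#include <sys/resource.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s COMMAND [ARGS...]\n", argv[0]);
        return 2;
    }

    /* Cap the address space at 32 GiB; the limit is inherited across exec. */
    struct rlimit rl;
    rl.rlim_cur = 32ULL * 1024 * 1024 * 1024;
    rl.rlim_max = rl.rlim_cur;
    if (setrlimit(RLIMIT_AS, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }

    execvp(argv[1], &argv[1]);
    perror("execvp");
    return 1;
}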
2019 Aug 22 | 2 | Re: RLIMIT_MEMLOCK in container environment
On Thu, Aug 22, 2019 at 12:01 PM Laine Stump <laine@redhat.com> wrote:
>
> On 8/22/19 10:56 AM, Ihar Hrachyshka wrote:
> > On Thu, Aug 22, 2019 at 2:24 AM Daniel P. Berrangé <berrange@redhat.com> wrote:
> >>
> >> On Wed, Aug 21, 2019 at 01:37:21PM -0700, Ihar Hrachyshka wrote:
> >>> Hi all,
> >>>
> >>> KubeVirt uses
2006 Dec 18 | 6 | mongrel_cluster: selective restarts
Hi list,
I have tried to reach Bradley (the author of mongrel_cluster) by mail but have
not gotten a response, so I'll try through this channel:
I have 'developed'[1] a small extension to mongrel_cluster that allows
selective restart of any one listener in a configuration that contains more
than one listener, using a command like
mongrel_rails
2018 Jan 23 | 2 | Panic: data stack: Out of memory when allocating bytes
On Tue, Jan 23, 2018 at 14:03:27 -0500, Josef 'Jeff' Sipek wrote:
> On Tue, Jan 23, 2018 at 18:21:38 +0100, Thomas Robers wrote:
> > Hello,
> >
> > I'm using Dovecot 2.3 and sometimes I get this:
> >
> > --- snip ---
> > Jan 23 14:23:13 mail dovecot: imap(bob@tutech.de)<4880><PDqibHFjMvrAqG1n>:
> > Panic: data stack: Out of
2007 Apr 02 | 1 | Out of memory in make_file
Hello,
I did some searching through the rsync archives at mail-archive.com and didn't
find anything less than 2 years old. I am hoping there have been some
developments since then, or that I can look toward Debian for clues to this
problem.
I am trying to rsync some pretty large directories. These directories have
about 100-200 users in them. Sometimes we get this on a single