Displaying 20 results from an estimated 8000 matches similar to: "RHEL 5.1 beta, Dovecot 1.0.3: error while loading shared libraries?"
2008 Oct 08
0
issues with "write.table"
Dear R gurus and users,
I'm having problems with the use of write.table.
I have a 28-variable data frame created at each cycle of a loop; it can contain between 2000 and 3000 rows per cycle.
After each cycle the data frame is written out to a file with the "append=TRUE" option and then removed from memory.
These are the relevant lines:
> data2 <-
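A minimal sketch of that append-in-a-loop pattern, runnable from a shell through R's stdin (file name, frame contents and cycle count are made up); one common pitfall is that write.table() repeats the header on every append unless col.names is suppressed:

R --vanilla --quiet <<'EOF'
out <- "results.txt"                              # hypothetical output file
for (i in 1:3) {                                  # stand-in for the real loop
  data2 <- data.frame(cycle = i, x = rnorm(5))    # stand-in for the 28-variable frame
  write.table(data2, file = out,
              append = file.exists(out),          # overwrite on the first cycle, append afterwards
              sep = "\t", row.names = FALSE,
              col.names = !file.exists(out))      # write the header only once
  rm(data2)                                       # drop the frame, as in the original loop
}
EOF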
2018 Jan 15
0
Lmtp Memory Limit
On 14.01.2018 09:11, Thomas Manninger wrote:
> Hi,
>
> I am using Dovecot 2.2.33.2 on CentOS 7.4.
>
> Since I upgraded from CentOS 7.2 to CentOS 7.4 (without upgrading Dovecot), my Dovecot sieve-pipe scripts crash with an out-of-memory error:
> Out of memory (allocated 262144) (tried to allocate 8793 bytes)
>
> Are there memory limits in Dovecot or Sieve? Can I change
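In Dovecot 2.x the limit behind this message is usually the per-service address-space cap (default_vsz_limit / vsz_limit) that Dovecot itself enforces, not the kernel ulimits. A hedged sketch of raising it just for LMTP, assuming the stock dovecot.conf still pulls in local.conf via !include_try (the value is only an example):

cat >> /etc/dovecot/local.conf <<'EOF'
# raise the address-space limit for the lmtp service only
service lmtp {
  vsz_limit = 512M
}
EOF
doveadm reload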
2007 Nov 12
2
login_process_size too small on x86_64 Fedora/RHEL
Hello,
as described in [253363], login_process_size of 32M is not enough on 64-bit
versions of Fedora and RHEL (and possibly other distributions as well).
[253363] https://bugzilla.redhat.com/show_bug.cgi?id=253363
As I understand it, there's an intersegment gap between read-only/executable
and writable segments of shared libs. That's probably a security feature or
something.
I found a
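For Dovecot 1.x that limit lives directly in dovecot.conf; a hedged example (the value is in megabytes, and 64 is just the figure commonly suggested for 64-bit builds):

# in /etc/dovecot.conf (Dovecot 1.x), then restart dovecot
login_process_size = 64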
2020 Sep 16
0
dovecot 2.2.36.4 problem with ulimit
Hi
Limits:
Where everything was working fine:
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 257970
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe
2018 Jan 14
2
Lmtp Memory Limit
Hi,
I am using Dovecot 2.2.33.2 on CentOS 7.4.
Since I upgraded from CentOS 7.2 to CentOS 7.4 (without upgrading Dovecot), my Dovecot sieve-pipe scripts crash with an out-of-memory error:
Out of memory (allocated 262144) (tried to allocate 8793 bytes)
Are there memory limits in Dovecot or Sieve? Can I change this value?
Kernel limits:
[root at xxx software]# ulimit -a
core file size
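Before touching anything it helps to see which limit is actually in play; two read-only checks, assuming Dovecot 2.2's doveconf is on the PATH:

doveconf default_vsz_limit     # global default for per-process address space
doveconf -a | grep vsz_limit   # per-service overrides, if any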
2020 Sep 16
1
dovecot 2.2.36.4 problem with ulimit
Hi,
perhaps this?
> with new debian9:
> open files (-n) 1024
Regards
Urban
On 16.09.20 at 12:57, Maciej Milaszewski wrote:
> Hi
> Limits:
>
> Where everything was working fine:
>
> core file size          (blocks, -c) 0
> data seg size           (kbytes, -d) unlimited
> scheduling priority             (-e) 0
> file size               (blocks,
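One way to confirm what the running master actually inherited, rather than what a fresh login shell reports (Linux /proc layout assumed):

cat /proc/$(pidof dovecot)/limits | grep -i 'open files'   # limit of the running master
ulimit -n                                                  # limit of a login shell, for comparison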
2001 Oct 05
0
"File size limit exceeded" when running /sbin/mke2fs -j /dev/sdb1
Hi!
I have a problem making an ext3 FS on a new disk. When I run mke2fs, it stops
and gives me: "File size limit exceeded". Is this a known issue?
I'm running linux-2.4.10 with the ext3 patch, and e2fsprogs-1.25 freshly compiled.
Cheers,
Vita
Appended are the outputs of the following programs:
bash /usr/src/linux/scripts/ver_linux
/sbin/mke2fs -m0 -v -j /dev/sdb1
fdisk -l /dev/sdb
strace
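"File size limit exceeded" is the shell's wording for SIGXFSZ, i.e. the process ran into its file-size rlimit, so that is a cheap thing to rule out before suspecting the ext3 patch itself. A hedged check, run as the user who invokes mke2fs:

ulimit -f                          # "unlimited" expected; anything else caps file size
ulimit -f unlimited                # lift it for this shell (needs a sufficient hard limit)
/sbin/mke2fs -m0 -v -j /dev/sdb1   # retry the same invocation as above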
2018 Jan 15
1
Aw: Re: Lmtp Memory Limit
2006 Mar 23
1
AIX 5.1 rsync large file
Hello,
I have inherited a setup where there are 2 AIX 5.1 systems in 2 separate
sites. There are large database files that are backed up from each site to
the other via rsync. Currently, it is using rsync version 2.5.4. It does
it via ssh with the options -avz. This has all been merrily plodding along
for some time. There is one file that is over 45 GB, and it started having
trouble with that
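A hedged variant of the transfer command for that one huge file (paths and host are made up): keeping partial transfers and dropping -z often helps with multi-GB, poorly compressible database files, though upgrading rsync on both ends is the more durable fix.

rsync -av --partial --progress -e ssh /db/huge_file.dbf backuphost:/db/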
2007 Aug 27
3
rsync out of memory at 8 MB although ulimit is 512MB
Hello again,
I encountered something puzzling. At first I thought there was not
enough memory allowed through ulimit. ulimit is now set to
(almost) 512 MB, but rsync still runs out of memory at 8 MB.
Can anyone tell me why?
That's my configuration:
rsync version 2.6.2
from AIX 5.3 to SuSE Linux 9 (also has rsync 2.6.2)
ulimit -a (AIX)
ulimit -a AIX (source):
-------------------------
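Worth remembering that ulimit settings are per-process and inherited, so they have to be raised in the very shell (or remote login environment) that launches the rsync which is failing; a sketch with hypothetical paths:

ulimit -d unlimited                 # data segment
ulimit -m unlimited                 # resident set size, where the OS honours it
ulimit -a | grep -E 'data|memory'   # confirm before starting the transfer
rsync -avz /src/dir/ user@target:/dst/dir/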
2002 Sep 09
0
Paul Hass: re 2.5.5 fork
>Is there more than one rsync running? In 2.5.4 a failure in another rsync
>process could kill your rsync. I haven't studied the code recently, but I
>don't think there are any calls to fork() after it has started transferring
>files.
I recall
bash$ ps -aux
showed 2 rsync processes, maybe three when the rsync was executing on the
other console. I have plenty more
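To see how many rsync processes are actually alive at a given moment (the bracket keeps grep itself out of the listing):

ps aux | grep '[r]sync'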
2015 Jan 22
2
[PATCH] increase fd_limit to max_client_limit automatically
Hi, with a low soft limit on file descriptors, dovecot 2.2.15 warns on
startup:
Warning: fd limit (ulimit -n) is lower than required under max. load
(256 < 1000), because of default_client_limit
It could try increasing the limit first, and only report the warning if that
fails. I'm attaching a patch that does just this.
Without the patch, the soft fd limit is kept at whatever it
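Until a patch like that is merged, the manual equivalent is to raise the soft limit in the environment that starts Dovecot, sized to whatever default_client_limit reports; a hedged sketch:

doveconf default_client_limit   # e.g. 1000
ulimit -n 1000                  # must stay within the hard limit (ulimit -Hn)
dovecot                         # start Dovecot from this shell / init script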
2017 Mar 29
2
cannot login to imap under load
Hi Steffen,
On 29-03-17 12:38, Steffen Kaiser wrote:
> On Tue, 28 Mar 2017, Gerard Ranke wrote:
>
>> dovecot: master: Error: service(imap): fork() failed: Resource
>> temporarily unavailable
>> dovecot: master: Error: service(imap): command startup failed,
>> throttling for 2 secs
>
> check out the ulimits for the Dovecot process.
>
> -- Steffen Kaiser
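fork() failing with "Resource temporarily unavailable" usually points at a process or thread ceiling rather than file descriptors; some quick read-only checks on Linux (stock sysctl names):

ulimit -u                   # per-user process limit in the session that runs dovecot
sysctl kernel.pid_max       # system-wide PID ceiling
sysctl kernel.threads-max   # system-wide thread ceiling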
2017 Aug 23
0
socketpair failed: Too many open files on Debian 9
You probably need to increase ulimit -n
Aki
On 23.08.2017 14:10, Patrick Westenberg wrote:
> Hi @all,
>
> after re-installing one of my two frontends/proxy-servers I get the
> following error messages after some time (sometimes after 1h, sometimes
> after 24h):
>
>
> 11:23:55 imap-login: Error: socketpair() failed: Too many open files
> 11:23:55 imap-login: Error:
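The direct form of that advice, for the shell or init script that launches Dovecot (65536 is only an example value):

ulimit -n          # current soft limit (Debian's default is 1024)
ulimit -Hn         # hard limit; the soft limit cannot exceed this without root
ulimit -n 65536    # raise it, then restart dovecot from this environment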
2008 Aug 04
1
pam max locked memory issue after updating to 5.2 and rebooting
We were previously running 5.1 x86_64 and recently updated to 5.2
using yum. Under 5.1 we were having problems when running jobs using
torque, and the solution had been to add the following entries to the
files noted below:
"* soft memlock unlimited" in /etc/security/limits.conf
"session required pam_limits.so" in /etc/pam.d/{rsh,sshd}
This changed the max
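After making those two changes it is worth verifying that a fresh PAM session really picks them up (the node name is hypothetical):

ssh node01 'ulimit -l'   # expect "unlimited" once limits.conf and pam_limits apply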
2007 Dec 02
1
mbox File too large error
Hello,
I have a rather large (~400MB) INBOX file in mbox format. When using
deliver, it complains in the logs:
Dec 2 16:47:13 petole deliver(niko): write() failed with mbox file /home/niko/mail/INBOX: File too large
I do not see anything related in the configuration. And ulimit shows
nothing special:
$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d)
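"File too large" is EFBIG: deliver ran into a file-size rlimit imposed by whatever invokes it, not a Dovecot setting. If the MTA is Postfix (an assumption), its mailbox_size_limit, about 50 MB by default, is applied to the delivery command and is the usual culprit:

postconf mailbox_size_limit            # stock value is 51200000 (~50 MB)
postconf -e 'mailbox_size_limit = 0'   # 0 disables the limit (assumes Postfix is the MTA)
postfix reload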
2017 Aug 23
0
socketpair failed: Too many open files on Debian 9
Hello,
Are you using systemd? Maybe you have to edit the unit file for the
dovecot service and increase the file limit:
LimitNOFILE=infinity
Hajo
On 23.08.2017 at 14:21, Patrick Westenberg wrote:
> I haven't done this on the old, working machine.
>
> So there must be a difference between Debian 7 and 9 in how open files
> are handled?
>
> Regards
> Patrick
>
>
>
> Aki
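On systemd hosts the cleanest way to do that is a drop-in override rather than editing the packaged unit file; a sketch (the value is arbitrary):

mkdir -p /etc/systemd/system/dovecot.service.d
cat > /etc/systemd/system/dovecot.service.d/limits.conf <<'EOF'
[Service]
LimitNOFILE=65536
EOF
systemctl daemon-reload
systemctl restart dovecot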
2015 Jan 26
0
imap-login: Fatal: pipe() failed: Too many open files
On 26.01.15 at 02:24, Edgar Pettijohn wrote:
> Sorry, I didn't scroll to the bottom to see the dovecot -n output. I'm
> assuming FreeBSD has an /etc/login.conf similar to OpenBSD's. If so, you
> may need to do something similar to this:
>
> dovecot:\
> :openfiles-cur=512:\
> :openfiles-max=2048:\
> :tc=daemon:
>
>
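On FreeBSD the new class only takes effect after rebuilding the login.conf database and putting the dovecot user into it; a hedged sketch using the class name from the snippet above:

cap_mkdb /etc/login.conf        # rebuild /etc/login.conf.db
pw usermod dovecot -L dovecot   # assign the "dovecot" login class to the dovecot user
service dovecot restart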
2017 May 25
2
Re: can't establish more than 1000 connections with virsh
On 2017-05-25 18:37, Daniel P. Berrange wrote:
> On Thu, May 25, 2017 at 06:20:51PM +0800, dw wrote:
>> Hi:
>>
>> I'm trying to connect to libvirtd with virsh from a remote PC, but can
>> only establish 1000 connections.
>>
>> If I try more connections, I get:
>>
>> "error: failed to connect to the hypervisor
>>
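Two ceilings worth checking when the count stops at a suspiciously round number like 1000: libvirtd's own client limits and the fd limit its unit gets, since every client connection costs at least one descriptor. Read-only checks, assuming a systemd host with the stock config path:

grep -E 'max_clients|max_queued_clients' /etc/libvirt/libvirtd.conf   # server-side caps
systemctl show libvirtd.service -p LimitNOFILE                        # fd limit of the daemon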
2014 Nov 09
0
taskprocessor fails to allocate memory
I keep getting this error
[Nov 8 22:51:31] ERROR[8192]: taskprocessor.c:614
__allocate_taskprocessor: Unable to start taskprocessor listener for
taskprocessor bbe08c34-9d1c-4e5f-8ae0-0cc75289caca
[Nov 8 22:51:31] ERROR[8192]: taskprocessor.c:245
default_listener_shutdown: pthread_join(): Cannot allocate memory
[Nov 8 22:51:31] ERROR[8192]: taskprocessor.c:614
__allocate_taskprocessor: Unable to
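"Cannot allocate memory" from the pthread calls here is often a thread or address-space ceiling rather than genuine RAM exhaustion; some quick checks against the running Asterisk (Linux tooling assumed):

ps -o nlwp= -p $(pidof asterisk)   # threads currently in use
ulimit -u                          # per-user process/thread limit
ulimit -v                          # virtual address-space cap, if any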