similar to: Too many open files

Displaying 20 results from an estimated 10000 matches similar to: "Too many open files"

2016 Oct 08
0
Too many open files
These all look like workarounds. I would suggest using a proper solution, such as systemd, which is present in Ubuntu 16.04 by default and where you can raise system limits per system service just by tweaking its config file. m. On 8 October 2016 at 11:37:59, Chen Wei Hsu (cwhsu1984 at gmail.com) wrote: Hi all, I am trying to stream for over 1k users on Ubuntu 16.04. I notice that when
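A minimal sketch of the systemd approach described above, assuming the streaming daemon runs as a systemd service named icecast2.service (the unit name is an assumption; substitute the real one):
systemctl edit icecast2.service        # creates a drop-in override instead of editing the packaged unit
# add these lines in the editor that opens:
[Service]
LimitNOFILE=65536
# then apply:
systemctl daemon-reload
systemctl restart icecast2.service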
2016 Oct 08
0
Too many open files
On 20 Sep 2016, at 3:10, Chen Wei Hsu wrote: > Hi all, > I am trying to stream for over 1k users on Ubuntu 16.04. I notice that when the stream connection count goes over 1024, I get a warning like this: > WARN connection/_accept_connection accept() failed with error 24: Too many open files > Tried these configs and reboot, it won't work! >
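One hedged way to check whether a raised limit actually reached the running daemon (limits set in a login shell do not propagate to an already-running service) is to read the process's limits from /proc; the process name icecast is an assumption here:
pid=$(pidof icecast)                   # daemon binary name assumed
grep 'Max open files' /proc/$pid/limits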
2019 Mar 24
2
Maximum Listeners.
Hello there, I've configured my server for a maximum of 50000 open files.
[root at scast1 ~]# ulimit -a
... open files (-n) 50000
... max user processes (-u) 65535
While I'm doing Load Test 1, my server only reaches ~1015 listeners. I've set this in /etc/security/limits.conf:
icecast hard nofile 50000
icecast soft nofile 60000
icecast soft nproc 65535
icecast
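Two hedged observations on the quoted limits.conf lines: the soft value (60000) is above the hard value (50000), which setrlimit() normally rejects, and pam_limits only takes effect on a PAM login, not for a daemon started by init or systemd. A quick check of what the icecast user actually gets (assumes pam_limits is in the su PAM stack):
su -s /bin/sh -c 'echo soft=$(ulimit -Sn) hard=$(ulimit -Hn)' icecast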
2017 May 25
2
Re: can't establish more than 1000 connections with virsh
On 2017-05-25 18:37, Daniel P. Berrange wrote: > On Thu, May 25, 2017 at 06:20:51PM +0800, dw wrote: >> Hi: >> I'm trying to connect to libvirtd with virsh from a remote PC, but can only establish 1000 connections. >> If I try more connections, it prompts: >> "error: failed to connect to the hypervisor >>
2014 Jul 17
2
ulimit warning when restarting
When restarting Dovecot 2.2.10 (via atrpms) on RHEL 6, I get the error: Warning: fd limit (ulimit -n) is lower than required under max. load (1024 < 4096), because of default_client_limit # doveconf default_internal_user default_internal_user = dovecot Should dovecot print this warning based on $default_internal_user, or based on root? As root: # ulimit -n 1024 As user dovecot: $ ulimit -n
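The warning names the two knobs involved; a hedged sketch of each (the file path is the distribution-typical one and the values are illustrative, not taken from the poster's setup):
# /etc/dovecot/conf.d/10-master.conf  (typical path)
default_client_limit = 256     # lower what Dovecot sizes itself for, or...
# ...raise the descriptor limit in the environment that starts Dovecot, e.g. its init script:
ulimit -n 4096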
2015 Nov 25
2
limits.conf and AD domain groups
I am using a member server with AD as my source of accounts. ssh logins work great. Yesterday, one of my students wanted to see what a fork bomb was, and so now I need to put ulimits in place. Attempts to use AD domain groups fail. So I'm not sure whether this is an issue for samba+winbind or for /etc/security/limits.conf and pam. Here's what I have added in limits.conf # -- fix fork bomb
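For reference, limits.conf addresses a group with a leading '@', and for an AD group the name has to match exactly what winbind reports. A hedged sketch (the group name and value are made up for illustration):
getent group students            # confirm the AD group resolves through winbind
# /etc/security/limits.conf
@students    hard    nproc    200    # '@' targets a group; 200 is illustrative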
2017 May 25
2
can't establish more than 1000 connections with virsh
Hi: I'm trying to connect to libvirtd with virsh from a remote PC, but can only establish 1000 connections. If I try more connections, I get: "error: failed to connect to the hypervisor error: Failed to open file '/etc/libvirt/libvirt.conf': Too many open files" I tried from another PC and got the same message. Anybody know why? Thanks!
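The "Too many open files" here appears to be hit on the client side, since each open connection holds descriptors and the default per-process limit is commonly 1024. A hedged sketch of checking and raising it in the shell that drives the connections (the server side has its own cap, max_clients in /etc/libvirt/libvirtd.conf):
ulimit -n          # current soft limit, often 1024
ulimit -n 4096     # raise it for this shell, up to the hard limit shown by ulimit -Hn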
2009 Jul 15
2
"limit -n XXX" does NOT allow on CENTOS 4.X???
We have CentOS 4.7 on a Dell server. Our /etc/security/limits.conf is already set up as:
* soft nproc 2047
* hard nproc 16384
* soft nofile 4096
* hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 8192
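A hedged note on what "does NOT allow" usually means here: a non-root user can only raise the soft limit up to the hard limit that pam_limits granted at login, so the limits.conf change needs a fresh login before ulimit can go higher:
ulimit -Hn           # hard ceiling for the current session
ulimit -S -n 8192    # non-root may raise the soft limit only up to that ceiling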
2014 May 08
2
Processes launched from rc*.d and ulimit -n
I'm running Fedora Directory Server on some boxes in a multi-master arrangement. The problem is that when dirsrv is launched from init (on boot), the maximum number of allowed file descriptors (ulimit -n) is only 4096. That means that the slapd process can only accept ~4k connections, and it needs to accept ~10k or so. The value for nofile for all users in /etc/security/limits.conf (and
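Services spawned by init never pass through pam_limits, so /etc/security/limits.conf simply does not apply to them; the usual workaround is to raise the limit inside the init script (or an environment file it sources) before the daemon starts. A hedged sketch:
# near the top of /etc/init.d/dirsrv, before slapd is started (placement is an assumption)
ulimit -n 16384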
2008 Dec 10
3
Segfault on antispam plugin
Hi Johannes Berg, I put the antispam plugin to work (some days ago) and now my imap daemon dies with a segfault. I don't have anything (wrong) in the logs, just a lot of segfaults... Dec 10 15:37:21 curie kernel: printk: 22 messages suppressed. Dec 10 15:37:21 curie kernel: imap[4774]: segfault at 8 rip 2afe7fe7d7ff rsp 7fff2b9bdab0 error 6 Dec 10 15:37:21 curie kernel: imap[4779]: segfault at 8
2009 Dec 08
2
No ulimit for user
Hi, I'm trying to remove any limit on open files for a user; I've set username nofiles to unlimited in /etc/security/logins.conf, but now I get "could not open session" if I try to su to the user. singhh - nofile unlimited I think this is related to PAM, so I've modified /etc/pam.d/su and /etc/pam.d/login to use pam_limits.so: # cat /etc/pam.d/su
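"Could not open session" after that change is a known symptom: nofile cannot actually be set to unlimited on Linux (the hard limit is capped by the kernel), so pam_limits fails and PAM aborts the session. A hedged alternative is a large finite value:
cat /proc/sys/fs/nr_open          # kernel ceiling for nofile on reasonably modern kernels
# /etc/security/limits.conf
singhh    -    nofile    1048576    # finite value instead of 'unlimited'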
2004 Sep 13
2
CentOS 3.1: sshd and pam /etc/security/limits.conf file descriptor settings problem
Why can't non-uid 0 users have more than 1024 file descriptors when logging in via ssh? I'm trying to allow a user to have a hard limit of 8192 file descriptors(system defaults to 1024) via the following setting in /etc/security/limits.conf: jdoe hard nofile 8192 But when jdoe logs in via ssh and does 'ulimit -Hn' he gets '1024' as a response. If he tries to
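On releases of that vintage the usual cause is that the sshd PAM stack never invokes pam_limits, so limits.conf is never consulted for ssh logins. A hedged sketch of the common fix (exact stack contents vary by release):
# /etc/pam.d/sshd
session    required    pam_limits.so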
2014 Apr 23
2
Ulimit problem - CentOS 5.10
Running across some curious stuff with ulimit on CentOS 5.10. We have a non CentOS packaged version of Asterisk (using their packages) that we start at boot time with a typical RC script. Recently it started whining that it couldn't open enough file handles. As we dug further into this, it appears that at boot time, it inherits ulimit from init, which is pretty low: 1024. We've set
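Besides raising the limit in the RC script itself, Asterisk can raise its own descriptor limit at startup through the maxfiles option; this is a hedged sketch and assumes a build recent enough to support it:
; /etc/asterisk/asterisk.conf
[options]
maxfiles = 32768    ; Asterisk calls setrlimit() for itself when it starts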
2010 Apr 10
2
ulimit
I need to change the ulimit to 16384 (ulimit -n 16384) on boot on CentOS 5.4 64 bit. How do I do that? I've been searching and have yet to find a good answer. I tried to do it in rc.local, but it appears to happen too late there. Matt
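One hedged option that runs early enough on a sysvinit box is /etc/initscript, which init(8) executes for every process it spawns (see man 5 initscript), so the limit is inherited by all services:
# /etc/initscript
ulimit -n 16384
eval exec "$4"     # mandatory: actually run the command init asked for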
2005 Feb 24
2
permanent ulimit -n on CentOS 3.4
Hi! A question from a novice. I have to permanently increase the number of open files (ulimit -n 16384 and ulimit -Hn 16384) for some application. I built a custom kernel based on https://www.redhat.com/docs/manuals/enterprise/RHEL-3-Manual/sysadmin-guide/s1-custom-kernel-modularized.html and the application documentation (written for RH 9); there were no errors during all the makes, but I get a panic during the
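A hedged note: raising open-file limits normally needs no kernel rebuild; the system-wide ceiling is a sysctl and the per-user limit lives in limits.conf (the user name oracle is an assumption, taken only as an example application user):
# /etc/sysctl.conf -- system-wide ceiling on open file handles
fs.file-max = 65536
sysctl -p          # apply without a reboot
# /etc/security/limits.conf -- per-process limit for the application user
oracle    soft    nofile    16384
oracle    hard    nofile    16384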
2015 Aug 14
4
persistent change of max_stack_depth
Hi Thomas, > Could anybody point me in the right direction for setting the kernel > parameter, max_stack_depth, to 10240 for database tuning? > > I have currently set it by running 'ulimit -s 10240' but this does not > survive a reboot. > > Thanks for the response, I've been nosing around that file recently but noted the first two lines; #This file sets the
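A hedged sketch of making the stack setting survive reboots via limits.conf rather than a one-off 'ulimit -s' (the postgres user name is an assumption; the stack item is in KiB, so 10240 = 10 MiB):
# /etc/security/limits.conf
postgres    soft    stack    10240
postgres    hard    stack    10240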
2008 Feb 25
4
1.1rc1: Maximum number of mail processes exceeded
I'm getting "Maximum number of mail processes exceeded" messages when 512 imap Processes are active. Dovecot reports: Warning: fd limit 1024 is lower than what Dovecot can use under full load (more than 1712). Either grow the limit or change login_max_processes_count and max_mail_processes settings But in my /var/service/dovecot/run script I use:
#!/bin/sh
mkdir /var/core
chmod
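Since dovecot here is supervised from a run script rather than started via a PAM login, the descriptor limit has to be raised inside that script before dovecot is exec'd. A hedged sketch (the binary path and foreground flag are assumptions, not taken from the poster's script):
#!/bin/sh
# /var/service/dovecot/run
ulimit -n 4096
exec /usr/sbin/dovecot -F    # -F keeps dovecot in the foreground for the supervisor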
2016 Oct 13
2
Openfile Issue
[root at abc asterisk]# lsof -u 50771 | wc -l
0
BTW, I'm using CentOS 6.5 > > Date: Thu, 13 Oct 2016 10:20:19 -0400 >> From: Dovid Bender <dovid at telecurve.com> >> To: Asterisk Users Mailing List - Non-Commercial Discussion <asterisk-users at lists.digium.com> >> Subject: Re: [asterisk-users] Openfile Issue >> Message-ID:
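If counting by UID with lsof returns nothing, a hedged and more direct check is to count the descriptors of the specific process (pidof usage assumes the daemon binary is named asterisk):
pid=$(pidof asterisk)
ls /proc/$pid/fd | wc -l     # open descriptors held by that process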
2018 Feb 15
3
wbinfo -U id gives different users on same dc
Sure there is: install Debian, follow my howto and you will have success. But you're using a .local domain, and that's a name reserved for Apple's mDNS (zeroconf) and should not be used (same for .lan). https://wiki.samba.org/index.php/FAQ#Can_I_Use_the_.local_Top-level_Domain_for_My_AD_DNS_Zone.3F So the info is good; that's not the problem, finding it is. Can you post your /etc/hosts
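For reference, the /etc/hosts layout the reply is asking to see generally follows the Samba wiki pattern; a hedged sketch with made-up names and addresses:
127.0.0.1      localhost
192.168.1.5    dc1.samdom.example.com    dc1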
2018 Feb 14
3
wbinfo -U id gives different users on same dc
RID solved my problem. But while reading the docs I saw new things and changed my smb.conf completely. I have read about almost every parameter, but I'm still not 100% sure. Can you do me one last favor? Can you tell me whether I have any problem with the new smb.conf? Kernel: Linux 4.14.13-1-ARCH Filesystem: zfs-linux 0.7.5.4.14.13.1-1 Thank you so much for your help. --------------------- [global]
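Since the snippet is cut off at [global], here is a hedged sketch of what an idmap "rid" member-server configuration typically looks like (the domain name and ranges are placeholders, not the poster's values):
# smb.conf fragment, illustrative only
[global]
   security = ads
   workgroup = EXAMPLE
   idmap config * : backend = tdb
   idmap config * : range = 3000-7999
   idmap config EXAMPLE : backend = rid
   idmap config EXAMPLE : range = 10000-999999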