File Descriptor Limit Preventing Connections
Problem
In Linux, it is not uncommon for high-traffic applications to exhaust all available file descriptors, which prevents new connections from being accepted. If you encounter connectivity errors with no apparent cause, check your catalina.out log file for repeating socket accept failures that report a "Too many open files" error, as in the example below.
24-Apr-2021 20:00:57.691 SEVERE [http-nio-28080-Acceptor-0] org.apache.tomcat.util.net.NioEndpoint$Acceptor.run Socket accept failed
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
at org.apache.tomcat.util.net.NioEndpoint$Acceptor.run(NioEndpoint.java:455)
at java.lang.Thread.run(Thread.java:748)
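Before focusing on a single process, it can help to get a rough sense of the limits in play. As a quick first check (standard Linux commands, not part of the original log review), the current shell's per-process limits and the kernel-wide file handle usage can be inspected with:

ulimit -n                      # soft open-file limit for the current shell
ulimit -Hn                     # hard open-file limit for the current shell
cat /proc/sys/fs/file-max      # system-wide maximum number of open file handles
cat /proc/sys/fs/file-nr       # allocated handles, unused handles, and the maximum

Keep in mind that a service such as Tomcat or ActiveMQ runs with its own limits, which the steps below read directly from /proc.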
Solution
To address this issue, follow the steps below:
Identify the PID (Process ID) of the affected process. In the example provided, the PID is "1664" for the ActiveMQ process.
[codeglitch@localhost ~]$ ps -ef | grep activemq
activemq 1664 1 0 May03 ? 00:01:44 /usr/bin/java -Xms64M -Xmx1G -Djava.util.logging.config.file=logging.properties -Djava.security.auth.login.config=/opt/activemq//conf/login.config -Dcom.sun.management.jmxremote -Djava.awt.headless=true -Djava.io.tmpdir=/opt/activemq//tmp -Dactivemq.classpath=/opt/activemq//conf:/opt/activemq//../lib/: -Dactivemq.home=/opt/activemq/ -Dactivemq.base=/opt/activemq/ -Dactivemq.conf=/opt/activemq//conf -Dactivemq.data=/opt/activemq//data -jar /opt/activemq//bin/activemq.jar start
codegli+ 33170 29722 0 21:29 pts/0 00:00:00 grep --color=auto activemq
[codeglitch@localhost ~]$
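Alternatively, pgrep (from the procps package) prints just the matching PID; the pattern below assumes the process command line contains "activemq":

pgrep -f activemq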
Check the file descriptor limits for the identified process by reading /proc/<PID>/limits; the row of interest is "Max open files".
[codeglitch@localhost ~]$ cat /proc/1664/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        unlimited            unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             11060                11060                processes
Max open files            262144               262144               files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       11060                11060                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us
[codeglitch@localhost ~]$
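The same limits can also be read with prlimit from util-linux; for example, the following prints only the open-file limit for the process. This is a convenient cross-check rather than a required step:

sudo prlimit --pid 1664 --nofile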
Use the lsof command to determine the number of file descriptors being used by the process.
[codeglitch@localhost ~]$ sudo lsof -a -p 1664 | wc -l
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs
Output information may be incomplete.
314
[codeglitch@localhost ~]$
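Note that lsof -p reports more than open file descriptors (it also lists entries such as the working directory and memory-mapped files), and the count above includes a header line. For a count of file descriptors alone, the numbered entries under /proc/<PID>/fd can be counted directly, which is a standard Linux technique rather than part of the original steps:

sudo ls /proc/1664/fd | wc -l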
Adjust the file descriptor limit according to the instructions specific to your Linux distribution and version.
Increasing the file descriptor limit mitigates the "Too many open files" error and restores connectivity. The exact procedure depends on your distribution and on how the process is started (for example, as a systemd service or from a login shell), so refer to your distribution's documentation for the appropriate method; a sketch of two common approaches follows.
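As an illustration only — the unit name activemq, the user name activemq, and the limit value 524288 are assumptions, not values taken from this article — on a systemd-based distribution the limit for a service is typically raised with a drop-in override, while limits for login sessions are set through /etc/security/limits.conf.

For a systemd-managed service, create a drop-in override (sudo systemctl edit activemq) containing:

[Service]
LimitNOFILE=524288

then reload the unit files and restart the service:

sudo systemctl daemon-reload
sudo systemctl restart activemq

For processes started from a login shell, add entries like these to /etc/security/limits.conf and start a new session:

activemq soft nofile 524288
activemq hard nofile 524288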