Folks,

I'm using Gluster 3.4 on CentOS 6. It's a very simple two-server, two-brick (replica 2) setup. The volume holds many small files across a reasonably large directory tree, though I'm not sure whether that plays a role. The FUSE client is being used.

I can live with Gluster's small-file performance limitations, but the problem I'm having is that file descriptor usage on the glusterfs servers just keeps growing; I'm not sure whether it will ever level off. No rebalance has been or is running, and the applications on the two client servers are not leaving files open.

I've tuned the Linux limits on the glusterfs servers, via /proc, to allow over 1 million per-process file descriptors, but that doesn't seem to be enough. This volume hit the FD max some time ago and had to be recovered. I thought it was a fluke, so I started watching the open FD count, and I can see that it's growing again.

# gluster volume top users open
Brick: node-75:/storage/users
Current open fds: 765651, Max open fds: 1048558, Max openfd time: 2013-10-02 22:26:18.327010

Brick: node-76:/storage/users
Current open fds: 768936, Max open fds: 768938, Max openfd time: 2013-10-28 17:11:04.184964

Clients:

# cat /proc/sys/fs/file-nr
5100    0       1572870

# cat /proc/sys/fs/file-nr
2550    0       1572870

Looking for thoughts or suggestions here. Has anyone else encountered this? Is the recommended solution simply to set a ridiculously high per-process and global file descriptor max?

-Joel
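When tracking a suspected leak like this, a small shell helper makes the trend easier to log than eyeballing raw /proc output. This is a generic sketch, not Gluster-specific: the `pidof glusterfsd` lookup is an assumption (adjust for your distro), while the three fields of /proc/sys/fs/file-nr (allocated handles, allocated-but-unused handles, system-wide max) are standard Linux semantics.

```shell
#!/bin/sh
# Report system-wide and per-process file-descriptor usage.
# /proc/sys/fs/file-nr fields: allocated, allocated-but-unused, system max.

read allocated unused max < /proc/sys/fs/file-nr
echo "system: $allocated allocated of $max (unused: $unused)"

# Per-process count for the brick daemon, if it is running.
# Needs root to read /proc/<pid>/fd for a root-owned daemon;
# "pidof glusterfsd" is an assumption about the process name.
pid=$(pidof glusterfsd 2>/dev/null | awk '{print $1}')
if [ -n "$pid" ]; then
    fds=$(ls /proc/"$pid"/fd 2>/dev/null | wc -l)
    limit=$(awk '/Max open files/ {print $4}' /proc/"$pid"/limits)
    echo "glusterfsd pid $pid: $fds open fds (soft limit: $limit)"
fi
```

Run from cron every few minutes and log the output; a count that only ever climbs while the client applications are idle points at a server-side leak rather than client load.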
There was an fd leak in 3.4.0; if you are running that release, you might consider upgrading to 3.4.1 or disabling the open-behind feature: https://bugzilla.redhat.com/show_bug.cgi?id=991622
_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users
Pranith Kumar Karampuri
2013-Oct-29 05:31 UTC
[Gluster-users] Excessive file descriptor usage
hi Joel,
     I fixed some fd leaks in the open-behind xlator after the 3.4.0 release. Could you run "gluster volume set <volname> performance.open-behind off" and check whether the open-fd count still keeps increasing?

Commit information:

commit 8c1304b03542eefbbff82014827fc782c3c3584f
Author: Pranith Kumar K <pkarampu at redhat.com>
Date:   Sat Aug 3 08:27:27 2013 +0530

    performance/open-behind: Fix fd-leaks in unlink, rename

    Change-Id: Ia8d4bed7ccd316a83c397b53b9c1b1806024f83e
    BUG: 991622
    Signed-off-by: Pranith Kumar K <pkarampu at redhat.com>
    Reviewed-on: http://review.gluster.org/5493
    Tested-by: Gluster Build System <jenkins at build.gluster.com>
    Reviewed-by: Anand Avati <avati at redhat.com>

Pranith
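The suggested workaround and a way to verify it can be sketched as follows. The `gluster volume set` syntax is from Pranith's message and the volume name "users" is from the `top` output in the original post; the `current_fds` helper name and the `command -v` guard are illustrative additions.

```shell
#!/bin/sh
# Workaround sketch: disable open-behind, then watch the brick fd counters.
# Substitute your own volume name for "users".

# Pull the per-brick numbers out of lines such as
# "Current open fds: 765651, Max open fds: 1048558, ..."
current_fds() {
    awk -F'[:,]' '/Current open fds/ { gsub(/ /, "", $2); print $2 }'
}

# Only attempt the gluster calls where the CLI is actually present.
if command -v gluster >/dev/null 2>&1; then
    gluster volume set users performance.open-behind off
    gluster volume top users open | current_fds
fi
```

Sampling the counter before and after the change (and again some hours later) shows whether the leak has stopped: with the workaround in place the current-fds figure should plateau or fall as stale fds are released, instead of climbing toward the process limit.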