Good morning Btrfs list,

I have been loading a btrfs file system via a script that rsyncs data
files from an NFS-mounted directory. The script runs well, but after
several days (moving about 10TB) rsync reports that it is sending the
file list and then stops moving data because btrfs balks, saying "too
many files open". A simple umount/mount fixes the problem. What am I
flushing when I remount that would affect this, and is there a way to
clear it without a remount?

Once again, thanks for any assistance.

Jim
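P.S. For context, the loader is essentially a loop of this shape. This
is only a sketch; the paths, options, and error handling here are
illustrative, not the actual script:

    #!/bin/bash
    # Copy each top-level directory from the NFS mount onto the btrfs
    # volume, logging any rsync failure and moving on to the next one.
    for dir in /mnt/nfs/data/*/; do
        rsync -a "$dir" /mnt/btrfs/data/ || echo "rsync failed: $dir" >&2
    done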
On Wed, 05 Oct 2011 11:24:27 -0400 Jim <jim@webstarts.com> wrote:

> I have been loading a btrfs file system via a script that rsyncs data
> files from an NFS-mounted directory. [...] rsync reports that it is
> sending the file list and then stops moving data because btrfs balks,
> saying "too many files open".

Are you sure it's a btrfs problem? Check "ulimit -n"; see "help ulimit"
(assuming you use bash).

--
With respect,
Roman
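P.S. To be concrete, something like this shows the per-process limits
in bash (the numbers below are typical defaults, not your values):

    $ ulimit -n      # soft limit on open file descriptors per process
    1024
    $ ulimit -Hn     # hard limit the soft limit can be raised up to
    4096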
Checked ulimit, and process limits are not the issue here. Rsync never
has more than 15 instances running, and even accounting for children
and other processes they wouldn't approach the limit. The error does
seem to be with btrfs, as I can't ls the file system while this
condition exists; ls also returns "too many files open", and "btrfs sub
list" shows the same condition. Actually, there should be no files open
after the script has failed (the script keeps running, it just reports
the errors). Something is either reporting files as open or holding
them open, and a remount flushes this and the fs is back to normal.
Very confusing.

Jim

On 10/05/2011 11:32 AM, Jim wrote:
> Thanks very much for the idea. I will check and get back.
> Jim
>
> On 10/05/2011 11:31 AM, Roman Mamedov wrote:
>> Are you sure it's a btrfs problem? Check "ulimit -n"; see "help
>> ulimit" (assuming you use bash).
Well, I hate to grasp for a flyswatter when a hammer might be better,
but what's /proc/sys/fs/file-nr show? The first number is your
currently open files, the last one is your maximum (as dictated by
/proc/sys/fs/file-max), and the middle one is allocated-but-unused file
handles. If the first is anywhere near your max, it's probably a fine
time to check out lsof; looking at where the disparity lies will
probably offer some insights, I imagine.

$.02,

-Ken

On Wed, 05 Oct 2011 11:54:35 -0400 Jim <jim@webstarts.com> wrote:

> [...] Something is either reporting files as open or holding them
> open, and a remount flushes this and the fs is back to normal. Very
> confusing.
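P.S. A sample reading, to show the layout (the first two numbers here
are made up; the layout is what matters):

    $ cat /proc/sys/fs/file-nr
    1952    0       3255380
    # in use | allocated-but-unused | system-wide max (fs.file-max)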
Ken,

That was a great $.02, more like a nickel. file-max is 3255380,
currently open files are 832, and the allocated-but-unused handle count
is 0. I am unfamiliar with how this part of the fs works, so how can I
increase file handles?

Thanks,
Jim

On 10/05/2011 12:07 PM, Ken D'Ambrosio wrote:
> Well, I hate to grasp for a flyswatter when a hammer might be better,
> but what's /proc/sys/fs/file-nr show? The first number is your
> currently open files, the last one is your maximum (as dictated by
> /proc/sys/fs/file-max), and the middle one is allocated-but-unused
> file handles. [...]
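P.S. From the reading I have been doing, if the system-wide ceiling
ever does turn out to be the bottleneck, it can be raised with sysctl.
The value below is arbitrary, just to show the mechanism:

    # takes effect immediately on the running kernel
    sysctl -w fs.file-max=6553600
    # equivalent: echo 6553600 > /proc/sys/fs/file-max
    # persist across reboots by adding "fs.file-max = 6553600"
    # to /etc/sysctl.conf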
OK, I have been studying up, and I am as confused as ever. Google has
totally conflicting descriptions, and the latest article I found was
from 2007 (enough Google ranting). I believe that I am looking at file
descriptors, not files. lsof shows about 4000 files open. If I read
/proc/sys/fs/file-nr correctly, I am using 832 handles of about 3M
available, but the "free handles" field is 0. With so many available,
does the kernel allocate dynamically? The articles I read were mostly
talking about 2.4 kernels; I have compiled 3.1.0-rc4 on a CentOS 6
base, and I assume things have changed since 2.4 :). Bottom line: am I
out of descriptors? I don't understand this.

Jim

On 10/05/2011 12:07 PM, Ken D'Ambrosio wrote:
> [...] If the first is anywhere near your max, it's probably a fine
> time to check out lsof; looking at where the disparity lies will
> probably offer some insights, I imagine.
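P.S. For anyone following along, the rough cross-check I am using is
below (/mnt/btrfs stands in for the actual mount point). Note that lsof
prints one line per descriptor per process, so its count overstates the
number of distinct open file handles:

    $ lsof /mnt/btrfs | wc -l    # entries open on the btrfs mount
    $ lsof | wc -l               # roughly 4000 entries here
    $ cat /proc/sys/fs/file-nr   # 832  0  3255380 on this box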