We do backups using rsync --link-dest. On one of our volumes, we just hit a limit in ext3 which generated the error:

    rsync: link "..." => ... failed: Too many links (31)

This appears to be related to a limit on the number of directory entries to which an inode may be connected. In other words, it's a limit on the number of hard links that can exist to a given file. This limit is apparently 32000.

This isn't specifically an rsync problem, of course. I can recreate it with judicious use of "cp -Rl", for example. But any site using --link-dest as heavily as we are - and ext3 - is vulnerable to this. So I thought I'd share our experience.

This is admittedly an extreme case: we have a lot of snapshots preserved for this volume, and the files failing are under /usr/lib/locale, where a lot of hard linking already occurs.

I've thought of two solutions: (1) deliberately breaking linking (and therefore wasting disk space) or (2) using a different file system.

This is running on CentOS 5, so xfs was there to be tried. I've had positive experiences with xfs in the past, and from what I have read this limit does not exist in that file system. I've tried it out, and - so far - the problem has been avoided. There are inodes with up to 32868 links at the moment on the xfs copy of this volume.

I'm curious, though, what thoughts others might have.

I did wonder, for example, whether rsync should, when faced with this error, fall back on creating a copy. But should rsync include behavior that exists only to work around a file system limit? Perhaps only as a command line option (i.e. definitely not the default behavior)?

Thanks...

- Andrew
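[To make the limit concrete, here is a small, hypothetical C probe - not from the original report - that keeps hard-linking a single file until link() fails. On ext3 it should stop with EMLINK at the 32000-link ceiling mentioned above, while on xfs it will run far past that. Run it in a scratch directory, since it leaves the link files behind.]

    /* Hypothetical probe: hard-link one file repeatedly until the
     * filesystem refuses, to observe the per-inode link limit. */
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        const char *target = "linktest.dat";
        char name[64];
        long n;

        int fd = open(target, O_CREAT | O_WRONLY, 0644);
        if (fd < 0) { perror("open"); return 1; }
        close(fd);

        for (n = 1; ; n++) {
            snprintf(name, sizeof(name), "linktest.%ld", n);
            if (link(target, name) != 0) {
                if (errno == EMLINK)
                    printf("hit link limit after %ld extra links\n", n - 1);
                else
                    perror("link");
                break;
            }
        }
        return 0;
    }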
On Wed, 30 Jun 2010 01:43:02 +0000, Andrew Gideon wrote:

> I've thought of two solutions: (1) deliberately breaking linking (and
> therefore wasting disk space) or (2) using a different file system.
>
> This is running on CentOS 5, so xfs was there to be tried. I've had
> positive experiences with xfs in the past, and from what I have read
> this limit does not exist in that file system. I've tried it out, and -
> so far - the problem has been avoided. There are inodes with up to
> 32868 links at the moment on the xfs copy of this volume.
>
> I'm curious, though, what thoughts others might have.
>
> I did wonder, for example, whether rsync should, when faced with this
> error, fall back on creating a copy. But should rsync include behavior
> that exists only to work around a file system limit? Perhaps only as a
> command line option (i.e. definitely not the default behavior)?

I know it's been a while, but I thought I'd follow up on this.

First: the problem is occurring with yum databases. A change was introduced a while back that saves space under /var/lib/yum by hard-linking at least some files (e.g. the changed_by files). This isn't the only situation where our backups are failing due to "too many links", but it is the most reliable failure.

The yum change included a fallback: if the linking failed, a new file is created. I mention this because I'm wondering (see above) if this is an appropriate solution for rsync. Apparently, it is so for yum.

Second: xfs does seem to completely eliminate this issue. I don't quite trust xfs as much as I do ext3, so we're only using it where the "too many links" problem occurred. But as our systems are upgraded to the new yum, this will be more and more of our backup volumes.

So I'm still wondering if an rsync-centric solution, perhaps similar to yum's fallback, is appropriate.

- Andrew
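[For what it's worth, the fallback being discussed could look roughly like the sketch below. This is not rsync's or yum's actual code, only an illustration of the idea under discussion: try link(), and only when the filesystem refuses with EMLINK fall back to copying the file, spending the disk space the link would have saved. A real implementation would also need to carry over permissions, ownership, and times, which this sketch skips.]

    /* Illustrative "link, else copy" fallback, not taken from rsync or yum. */
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    static int copy_file(const char *src, const char *dst)
    {
        char buf[65536];
        ssize_t n;
        int in = open(src, O_RDONLY);
        if (in < 0) return -1;
        int out = open(dst, O_WRONLY | O_CREAT | O_EXCL, 0644);
        if (out < 0) { close(in); return -1; }
        while ((n = read(in, buf, sizeof(buf))) > 0) {
            if (write(out, buf, n) != n) { n = -1; break; }
        }
        close(in);
        close(out);
        return n < 0 ? -1 : 0;
    }

    int link_or_copy(const char *src, const char *dst)
    {
        if (link(src, dst) == 0)
            return 0;               /* linked: no extra space used */
        if (errno != EMLINK)
            return -1;              /* some other failure: report it */
        return copy_file(src, dst); /* limit hit: make a real copy */
    }

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s src dst\n", argv[0]);
            return 2;
        }
        if (link_or_copy(argv[1], argv[2]) != 0) {
            perror(argv[2]);
            return 1;
        }
        return 0;
    }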