I just ran this again and got this error:

leelab/NCBI_Data_old/GenBank/htg
write failed on leelab/NCBI_Data_old/GenBank/htg : Error 0
rsync error: error in file IO (code 11) at receiver.c(243)

Received signal 16. (no core)
rsync error: received SIGUSR1 or SIGINT (code 20) at rsync.c(229)

The command I am running is:

/usr/local/bin/rsync -auv --delete --rsh=/usr/bin/ssh lpgfs104:/share/group/* /share/group/

> An update on this problem... I get the error below (and the error I
> reported previously) when running rsync 2.5.2 compiled from source. I saw
> different behavior when I used the rsync 2.5.2 binary compiled on Solaris
> 2.5.1 by Dave Dykstra. That binary complained of "Value too large for
> defined data type" whenever it encountered a large file (over 2GB), but
> did not exit. The impression I got was that the Solaris 2.5.1 binary did
> not support or even try to support files over 2 GB, where the binary
> compiled on Solaris 7 or 8 *thinks* it can support large files but fails,
> since it exits as soon as it encounters the large file.
>
> So the problem still remains: rsync is dying when it encounters a large
> file. One person suggested using --exclude, but this only matches against
> file names, not file sizes. (I can't do "--exclude=size>2GB" for example.)
>
> Questions I still have:
>
> - Is rsync supposed to support files >2GB on Solaris 7 and Solaris 8?
>
> - If so, what is causing the errors I am seeing? Is there something I can
>   do at compile time?
>
> - If not, is there a way for it to skip large files gracefully so that at
>   least the rsync process completes?
>
> leelab/NCBI_Data_old/GenBank/htg
> write failed on leelab/NCBI_Data_old/GenBank/htg : Error 0
> rsync error: error in file IO (code 11) at receiver.c(243)
>
> Received signal 16. (no core)
> rsync: connection unexpectedly closed (23123514 bytes read so far)
> rsync error: error in rsync protocol data stream (code 12) at io.c(140)
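[Archive note: rsync 2.5.x has no size-based exclude, but the question above can be worked around by generating the exclude list externally. This is an untested sketch, not anything from the thread; it assumes GNU find's `-size +2G` syntax (the Solaris find of that era would need 512-byte blocks, roughly `-size +4194304`), and the list must be built from the *sender's* tree, since rsync matches excludes against the sender's file list.]

```shell
# Sketch: build a list of files over 2 GB, relative to the transfer
# root, for use with rsync's --exclude-from.  GNU find assumed.
src=${SRC_DIR:-.}    # transfer root (on the sending host)
( cd "$src" && find . -type f -size +2G | sed 's|^\./||' ) > bigfiles.txt

# Then, hypothetically, something like:
#   rsync -auv --delete --rsh=/usr/bin/ssh --exclude-from=bigfiles.txt \
#       lpgfs104:/share/group/ /share/group/
```

The `sed` strips the leading `./` so the entries match the relative paths rsync sees.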
The SIGUSR1 or SIGINT is just a secondary message in this case that you can
ignore. The receiver side of rsync splits into two processes, and that's
just the message that the second one prints after the first one kills it
off because it had a problem. The real problem is your write failure.

I don't have any experience with using >2GB files on Solaris 7 or 8, so
hopefully somebody else can help you with that or you can figure out the
problem yourself. The Solaris tools I distribute inside my company are all
compiled on 2.5.1 (because I need to support users on that OS version and
up), so I'm stuck with the 32-bit limit.

- Dave Dykstra

On Tue, Feb 12, 2002 at 11:31:55AM -0500, Granzow, Doug (NCI) wrote:
> [quoted text trimmed; see the message above]
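[Archive note: on Solaris, signal 16 is SIGUSR1, so the "Received signal 16" line and the "received SIGUSR1 or SIGINT" line are the same teardown Dave describes, not a second failure. A toy shell sketch of the two-process pattern, with hypothetical names; the signal number printed differs by platform, e.g. 10 on Linux:]

```shell
# Toy model of the receiver-side teardown: one process hits an error
# and kills its sibling with SIGUSR1; the sibling's death message is
# secondary noise.  (SIGUSR1 is signal 16 on Solaris, 10 on Linux.)
sleep 30 &                 # stand-in for the second receiver process
sibling=$!

# ...the first process detects a write failure, then tears down the
# sibling, exactly as rsync's receiver does:
kill -USR1 "$sibling"
status=0
wait "$sibling" || status=$?   # shell reports 128 + signal number
echo "sibling terminated by signal $((status - 128))"
```

So when debugging, chase the first error ("write failed ... Error 0"), not the signal message.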
Well, I found my problem. After doing many trusses, modifying rsync source
code to generate various debugging information, and learning my way around
a few parts of rsync a lot better than an end user should, I determined
that the problem has nothing to do with rsync whatsoever. :)

The problem was that the destination filesystem (a vxfs filesystem) was
not configured to support large files. You were right, Dave, when you
wrote "The real problem is your write failure." I enabled large file
support on vxfs, and rsync is now happily copying files of all sizes.

To answer my own questions from my last message (for the benefit of
someone searching through the mailing list archive):

1) Yes, rsync supports files over 2GB when compiled on Solaris 7 or 8.

2) The problem was that the destination filesystem (a Veritas File
   System) was not configured to support files over 2GB.

3) I didn't find a way to do this, but I no longer have any need to.

Thanks,
Doug

> -----Original Message-----
> From: Dave Dykstra [mailto:dwd@bell-labs.com]
> Sent: Tuesday, February 12, 2002 11:54 AM
> To: Granzow, Doug (NCI)
> Cc: 'rsync@lists.samba.org'
> Subject: Re: large file error is now SIGUSR1 or SIGINT error
>
> [quoted reply trimmed; see Dave's message above]
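[Archive note: for anyone hitting the same wall, it's worth checking whether the destination filesystem itself accepts files past 2 GB before blaming rsync. A hedged probe sketch, with a hypothetical `DEST_DIR` variable; a VxFS mounted without large-file support fails this write with EFBIG, which is what surfaced as rsync's "write failed ... : Error 0" above:]

```shell
# Probe whether a filesystem accepts files over 2 GB by seeking one
# byte past the 2 GB boundary in a scratch sparse file.
dest=${DEST_DIR:-.}            # destination filesystem to test
probe="$dest/.largefile_probe"

if dd if=/dev/zero of="$probe" bs=1 count=1 seek=2147483648 2>/dev/null
then
    echo "large files OK on $dest"
else
    echo "$dest rejects files over 2 GB (enable large file support)"
fi
rm -f "$probe"
```

As for the fix Doug applied: from memory of the VxFS admin documentation, enabling it on a mounted filesystem looked like `fsadm -F vxfs -o largefiles /mount_point`, but the exact syntax varies by Veritas version, so verify against your own docs.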