When uploading a file via SFTP, the destination file is overwritten in real time rather than after the transfer has finished. In other words, if it takes you 60 seconds to upload the new file, the old file is unusable for those 60 seconds. This is a major problem if the file you are overwriting is, for example, a server script accessed hundreds or thousands of times per minute. Of course, it's an even bigger problem if the connection is interrupted mid-transfer. I suggest that files transferred via SSH be saved to a temp folder until the transfer is complete, and only then should the original file be overwritten.
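The pattern being requested can be sketched locally: write the new contents to a temporary file on the same filesystem, then rename it over the target. POSIX rename() is atomic, so readers see either the complete old file or the complete new one, never a partial write. Paths and contents here are illustrative.

```shell
# Illustrative target; in the SFTP scenario this would be the remote file.
target=/tmp/demo-script.sh
printf 'old version\n' > "$target"

# Stage the "upload" in a temp file in the same directory (same filesystem,
# so the final mv is a single atomic rename, not a copy).
tmp=$(mktemp "${target}.XXXXXX")
printf 'new version\n' > "$tmp"

# Atomically replace the old file with the fully written new one.
mv -f "$tmp" "$target"
cat "$target"
```

The key constraint is that the temp file and the target share a filesystem; a cross-filesystem mv degrades to copy-then-delete and loses atomicity.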
On 08/08/2009 10:31 PM, dawg wrote:
> I suggest that files transferred via SSH should be saved to a temp
> folder until the transfer is complete, and only then would the original
> file be overwritten.

I believe this is already possible with sftp:

  put important-file important-file.XXXX
  rename important-file.XXXX important-file

Would this workflow solve your problem? The only primitive I see missing for a complete/robust implementation would be a mktemp equivalent as an sftp primitive (to avoid collisions in the choice of temporary filenames), but that seems like a very different feature request.

--dkg
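The put/rename workflow above can be automated with sftp's batch mode (-b), so a scripted upload never leaves the target in a half-written state. The hostname, filenames, and temp suffix below are assumptions for illustration; sftp's rename fails if the target exists on older servers, so the suffix must be unique.

```shell
# Write an sftp batch file that uploads to a temp name, then renames it
# into place only after the transfer has completed.
cat > /tmp/atomic-put.batch <<'EOF'
put important-file important-file.tmp
rename important-file.tmp important-file
EOF

# Run it against the server (hypothetical user/host; uncomment to use):
# sftp -b /tmp/atomic-put.batch user@host

cat /tmp/atomic-put.batch
```

With -b, sftp aborts on the first failed command, so an interrupted put never triggers the rename and the original file is left untouched.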
dawg wrote:
> This is a major problem if the file you are overwriting, for
> example, is a server script which is accessed hundreds or thousands
> of times per minute.

I would suggest that you investigate a more robust routine than using SFTP for updating your production services. You want to find an atomic system feature which allows you to switch between two files, or two sets of files. I suggest that you look into the application server (httpd if this is a web server) for this. With Apache, one way would be:

  Create new site folder
  Upload new contents
  Change virtualhost configuration to point to new site folder
  apachectl graceful  # will not abort any current connections

If you do not have permission to do this on your current server, either look for another server, look at other alternatives (rsync can do what you request, and can run over ssh), or simply use the suggested workaround of manually renaming the file two times. Maybe you can even script some SFTP clients to do it automatically.

//Peter
Actually, the sshd could check whether there is enough disk space while uploading the files and make the choice automatically.

On 08/09/2009 10:11 AM, dawg wrote:
> On 08/09/2009 12:57 AM, Ben Lindstrom wrote:
>>
>> I very much disagree. Making "put" hold two copies of a file by
>> default would cause people transferring large files to have failures.
>
> That's ridiculous. The one in a million case versus the 999,999 in a
> million cases? Disk space is rarely an issue that would affect this.
> It could be made an option to be toggled via config in any case.
>
>> It isn't a bug. There is no defect to report. You are requesting a
>> new feature. And I can see making a request for a mktemp(3)
>> feature. However, this would be specialized to OpenSSH's servers,
>> and FileZilla may prefer a more universal feature set.
>
> Sure it is. The current implementation corrupts the existing file for
> a period of time.
>
> Anyway, I will waste no more breath on this. Thanks for taking the
> time to reply.
>
>> This is definitely a client-side problem and not a server side.
>>
>> - Ben
It doesn't matter how you intend the software to be used; it matters how it is used and how the end user or developer expects it to behave. I couldn't be bothered to read the documentation, and hence I expect the software to give me a pony... However, and much to my dismay, I have transferred several thousand files, and still no pony!!! This is clearly a bug!