I have been losing my mind over this problem, so any ideas are welcome.

When running non-interactive jobs on remote machines, I want output to stderr and to stdout to remain separate in two streams, just as if the jobs were run locally. I also want remotely running jobs to die when I press Ctrl-C, just as if they were run locally. At first glance these goals do not seem to be mutually exclusive, but once you get into the details it turns out they more or less are.

  ssh example.com ls no_such_file

prints "ls: no_such_file: No such file or directory" on stderr.

  ssh -tt example.com sleep 1000

runs sleep on the remote machine, and pressing Ctrl-C kills it.

But:

  ssh -tt example.com ls no_such_file

prints "ls: no_such_file: No such file or directory" on STDOUT, and

  ssh example.com sleep 1000

runs sleep on the remote machine, but pressing Ctrl-C will NOT kill it.

So 'ssh -tt' does the right thing for Ctrl-C but the wrong thing for stderr, while plain 'ssh' does the right thing for stderr but the wrong thing for Ctrl-C.

I have asked on StackExchange: http://unix.stackexchange.com/questions/134139/stderr-over-ssh-t and the solution there:

* works for: suse, debian, mandriva, scosysv, ubuntu, unixware, redhat, raspberrypi
* does not kill the remote job for: tru64, hurd, miros, freebsd, openbsd, netbsd, qnx, dragonfly
* blocks for: solaris, centos (sometimes works), openindiana, irix, aix, hpux

Is there a way to avoid mixing stderr with stdout while still having remote jobs killed when I press Ctrl-C (as 'ssh -tt' does)?

Background: GNU Parallel uses 'ssh -tt' to propagate Ctrl-C, which makes it possible to kill remotely running jobs. But data sent to STDERR should continue to arrive on STDERR at the receiving end.

/Ole
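The merging effect can be reproduced locally without any ssh at all, which makes the problem easier to poke at. The sketch below uses 'script' from util-linux to allocate a pseudo-tty for a command, much as 'ssh -tt' does on the remote side; the flags are Linux-specific, so treat this as a demonstration rather than a portable recipe.

```shell
# Through ordinary pipes the two streams stay separate:
out=$(sh -c 'echo out; echo err >&2' 2>/dev/null)
echo "pipes keep stdout alone: $out"

# Through a single pty both streams come back on one fd,
# exactly like 'ssh -tt' (Linux util-linux 'script' assumed):
merged=$(script -qc 'sh -c "echo out; echo err >&2"' /dev/null)
echo "pty merges both: $merged"
```

Once both streams have passed through the one pty fd, there is no information left to separate them again, which is the heart of the problem.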
Ron Frederick
2014-Jul-17 05:17 UTC
Allow for passing Ctrl-C and don't mix stderr with stdout
On Jul 16, 2014, at 3:24 AM, Ole Tange <tange at gnu.org> wrote:

> I have been losing my mind over this problem, so any ideas are welcome.
>
> When running non-interactive jobs on remote machines, I want output to
> stderr and to stdout to remain separate in two streams just as if they
> were run locally. I also want jobs running remotely to die when I
> press Ctrl-C just as if they were run locally.

With OpenSSH, you can only get separate stdout and stderr streams if you don't allocate a pseudo-tty when you open up the SSH session. If you allocate a tty, the remote program can still have separate stdout and stderr, but whichever of those is directed to the pseudo-tty will be output by the local ssh client on stdout, and nothing will go to the local stderr. Running with '-tt' forces a pseudo-tty allocation, and that loses the stdout/stderr independence for the reason explained in the stackexchange.com response: there is only a single pseudo-tty fd to read the output from.

If you want separate stdout & stderr, you need to find a different way to trigger the interrupt signal. The SSH protocol provides a way for a client to send signals to the remote system without relying on a pseudo-tty, but I don't think OpenSSH implements this feature yet (on either the client or server side). With it, it might be possible to add a '~' SSH escape sequence which you could use instead of Ctrl-C, similar to the way '~B' sends a break request or '~R' triggers key renegotiation. Without it, though, I can't think of an easy way to accomplish what you are asking.
-- 
Ron Frederick
ronf at timeheart.net
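In the absence of protocol-level signals, one alternative in the spirit of Ron's reply is to kill the remote job explicitly rather than relying on a pty to deliver SIGINT. The sketch below simulates this locally (no ssh involved): the job runs without a tty so stderr stays separate, records its PID, and a SIGINT trap kills it by PID. In the real remote case, 'sh -c' would be an 'ssh example.com ...' invocation and the trap would kill the PID over a second ssh connection; the pidfile scheme here is purely illustrative.

```shell
pidfile=$(mktemp)

# Stand-in for: ssh example.com 'echo $$ > pidfile; exec sleep 1000'
sh -c "echo \$\$ > '$pidfile'; exec sleep 1000" &
job=$!

# Wait until the job has recorded its PID.
until [ -s "$pidfile" ]; do sleep 0.1; done

# On Ctrl-C, kill the job by PID (over a second ssh in the remote case).
trap 'kill "$(cat "$pidfile")" 2>/dev/null' INT

kill -INT $$        # simulate the user pressing Ctrl-C
wait "$job" 2>/dev/null
echo "job gone"
```

Because no pty is ever allocated, stdout and stderr of the job remain separate streams throughout; the cost is the extra bookkeeping of tracking remote PIDs yourself.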