Displaying 20 results from an estimated 20000 matches similar to: "[Bug 1014] SCP slow bandwidth with Solaris8 on n240"
2005 Oct 26
1
[Bug 1014] SCP slow bandwidth with Solaris8 on n240
http://bugzilla.mindrot.org/show_bug.cgi?id=1014
dtucker at zip.com.au changed:
What                        |Removed |Added
----------------------------------------------------------------------------
Attachment #884 is obsolete |0       |1
------- Comment #4 from dtucker at zip.com.au 2005-10-26 18:20 -------
Created
2005 Apr 12
3
[Bug 1014] SCP slow bandwidth with Solaris8 on n240
http://bugzilla.mindrot.org/show_bug.cgi?id=1014
Summary: SCP slow bandwidth with Solaris8 on n240
Product: Portable OpenSSH
Version: 3.7.1p2
Platform: ix86
OS/Version: SunOS
Status: NEW
Severity: normal
Priority: P2
Component: scp
AssignedTo: openssh-bugs at mindrot.org
ReportedBy:
2013 Jan 02
1
ssh / scp slow on 10GBE
Hello list,
right now an SSH tunnel / scp reaches just around 76Mb/s on my E5 Xeon
using AES-NI, but openssl reaches around 600-700Mb/s using the aes128-cbc cipher.
As far as I understand http://www.psc.edu/index.php/hpn-ssh, this is due
to very small buffers in ssh / scp.
Is there any work on this? Like autotuning the buffer size? Are there
plans to integrate the HPN patches?
Greets,
Stefan
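A quick way to see whether the cipher or the SSH flow-control buffers are the limiting factor is to time the cipher locally, outside of ssh. The sketch below is illustrative only and assumes a recent version of the third-party Python "cryptography" package (not something discussed in the thread); on AES-NI hardware it will typically report a rate far above the observed scp throughput.

    # Illustrative sketch (not from the thread): time raw AES encryption in
    # process to compare the cipher's local rate against the observed scp rate.
    # Assumes a recent version of the third-party "cryptography" package.
    import os
    import time
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def local_aes_rate_mib_s(total_mib: int = 256) -> float:
        """Encrypt total_mib MiB with AES-128-CTR and return the rate in MiB/s."""
        key, nonce = os.urandom(16), os.urandom(16)
        encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
        chunk = os.urandom(1 << 20)              # 1 MiB of plaintext per call
        start = time.perf_counter()
        for _ in range(total_mib):
            encryptor.update(chunk)
        encryptor.finalize()
        return total_mib / (time.perf_counter() - start)

    if __name__ == "__main__":
        print(f"local AES-128-CTR rate: ~{local_aes_rate_mib_s():.0f} MiB/s")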
2015 Aug 11
2
rsync stuck at +- 50 MB/s, cp and scp are +- 200 MB/s
Hi,
I tried different ciphers like arcfour, but always with the same result. BTW: googling shows some similar questions, and they are stuck at roughly the same speed.
But none of those solutions helped me.
/Götz
> On 11.08.2015 at 12:14, Eero Volotinen <eero.volotinen at iki.fi> wrote:
>
> Usually the problem is in the encryption.
>
> try cipher arcfour or apply hpn patches to
2006 Mar 25
1
High Performance SSH/SCP - HPN-SSH when?
Hi,
http://www.psc.edu/networking/projects/hpn-ssh/
Clearly, the HPN patches significantly boost throughput performance.
This enhancement is entirely from tuning the SSH buffer sizes.
Alex Tavcar
2007 Aug 21
1
High Performance SSH/SCP - HPN-SSH
Dear CentOS lovers,
Could you consider including the patch
http://www.psc.edu/networking/projects/hpn-ssh/
for openssh, maybe as CentOS Plus packages?
It has a great speed impact on long-distance (high-delay) transfers.
Regards,
Yuji Tsuchimoto
2005 Jun 17
3
New Set of High Performance Networking Patches Available
http://www.psc.edu/networking/projects/hpn-ssh/
Mike Stevens and I just released a new set of high performance
networking patches for OpenSSH 3.9p1, 4.0p1, and 4.1p1. These patches
will provide the same set of functionality across all 3 revisions. New
functionality includes
1) HPN performance even without both sides of the connection being HPN
enabled. As long as the bulk data flow is in the
2015 Aug 11
0
rsync stuck at +- 50 MB/s, cp and scp are +- 200 MB/s
Usually the problem is in the encryption.
try cipher arcfour or apply hpn patches to ssh. (
http://www.psc.edu/index.php/hpn-ssh)
--
Eero
2015-08-11 12:37 GMT+03:00 Götz Reinicke - IT Koordinator <
goetz.reinicke at filmakademie.de>:
> Hi,
>
> I have two servers, connected to the LAN by 10Gb, with 10Gb and DAS
> hardware raid.
>
> Each system can read and write locally or to the
2011 Feb 06
3
OpenSSH could be faster... then why don't they patch it??
https://www.psc.edu/networking/projects/hpn-ssh/hpn-v-ssh-tput.jpg
"SCP and the underlying SSH2 protocol implementation in OpenSSH is network performance limited by statically defined internal flow control buffers. These buffers often end up acting as a bottleneck for network throughput of SCP, especially on long and high bandwith network links. Modifying the ssh code to allow the buffers
2006 May 19
1
New HPN Patch Released
The HPN12 patch available from
http://www.psc.edu/networking/projects/hpn-ssh addresses performance
issues with bulk data transfer over high bandwidth delay paths. By
adjusting internal flow control buffers to better fit the outstanding
data capacity of the path, significant improvements in bulk data
throughput performance are achieved.
In other words, transfers over the internet are a lot
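The "outstanding data capacity of the path" referred to above is the bandwidth-delay product: link rate multiplied by round-trip time. A back-of-envelope helper in Python (the figures are hypothetical):

    # Illustration only: the buffer needed to keep a path full is the
    # bandwidth-delay product (BDP) = link rate x round-trip time.
    def bdp_bytes(bandwidth_mbit_s: float, rtt_ms: float) -> int:
        """Bandwidth-delay product in bytes for a rate in Mbit/s and an RTT in ms."""
        return int(bandwidth_mbit_s * 1e6 / 8 * rtt_ms / 1e3)

    print(bdp_bytes(100, 50))     # 100 Mbit/s at 50 ms -> 625,000 bytes (~610 KiB)
    print(bdp_bytes(1000, 70))    # 1 Gbit/s at 70 ms   -> 8,750,000 bytes (~8.3 MiB)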
2007 Mar 12
0
HPN patch now available for OpenSSH 4.6
The HPN patch set has been updated to work with OpenSSH 4.6. This patch
can help improve performance of bulk data transfers when using SSH, SCP,
or SFTP. Please see http://www.psc.edu/networking/projects/hpn-ssh
for more information.
The patch is available from the above address or directly with
http://www.psc.edu/networking/projects/hpn-ssh/openssh-4.6p1-hpn12v16.diff.gz
If you have any
2005 Sep 08
1
HPN Patch for OpenSSH 4.2p1 Available
Howdy,
As a note, we now have HPN patch for OpenSSH 4.2 at
http://www.psc.edu/networking/projects/hpn-ssh/
It's still part of the last set of patches (HPN11), so there aren't any
additional changes in the code. It patches, configures, compiles, and
passes the make tests without a problem. I've not done extensive testing of
this version of openssh, but I don't foresee any problems.
I
2006 Mar 16
0
New Version of HPN-SSH Patch
[NB: General information regarding HPN-SSH can be found at
http://www.psc.edu/networking/projects/hpn-ssh ]
This is a beta release of HPN12 but I'd like to get some user
experiences with it if anyone is so inclined. This version of the HPN
patch more closely conforms to the openssh nomenclature and coding
style; it eliminates the use of command-line switches in favor of -o
options, it
2005 Mar 25
1
New HPN patch released for 3.9
We've released a new HPN (High Performance Network) patch for OpenSSH
3.9p1. We've made two major changes: first off, we backed out all of
the modifications we made to buffer.c. It turns out that it just wasn't
necessary once we fixed a nagging bug in channels.c. I also made a
minor change to the buffer sizes in the source and sink functions in
scp.c. Increasing the size of both
2007 Nov 15
0
Extended Server Logging Patch
At the request of a coworker looking for more information about our SSH
users, I developed a patch that provides extended logging capability for
SSHD. It's been written with an eye towards machine parsing. This patch
will write the following information to the standard system log:
remote ip, remote port, & remote user name
protocol number and client version information
Encryption method, MAC
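The excerpt above lists the logged fields but not the exact log format, so the parsing sketch below assumes a purely hypothetical key=value layout; it only illustrates the kind of machine parsing the patch is aiming at.

    # Hypothetical example: the real log format produced by the patch is not shown
    # in this excerpt, so a key=value layout is assumed here for illustration only.
    def parse_sshd_log_line(line: str) -> dict:
        """Split a 'key=value key=value ...' style sshd log line into a dict."""
        return dict(field.split("=", 1) for field in line.split())

    sample = ("rip=192.0.2.10 rport=52514 ruser=alice "
              "proto=2 client=OpenSSH_4.7 cipher=aes128-ctr mac=hmac-md5")
    print(parse_sshd_log_line(sample)["cipher"])    # -> aes128-ctr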
2009 Feb 17
1
Support for merging LPK and hpn-ssh into mainline openssh?
Hello
Are there plans to merge the hpn-ssh
(http://www.psc.edu/networking/projects/hpn-ssh/) and the LPK
(http://code.google.com/p/openssh-lpk/) patches into mainline openssh?
Adding lpk has been logged as a bug in bugzilla as
They are two patches that I always apply, as the performance boost from
hpn-ssh is substantial to say the least, and centralisation of the
authorized_keys into an LDAP server
2007 May 05
1
[Bug 1311] Performance on high BDP networks
http://bugzilla.mindrot.org/show_bug.cgi?id=1311
Summary: Performance on high BDP networks
Product: Portable OpenSSH
Version: 4.6p1
Platform: All
OS/Version: All
Status: NEW
Severity: normal
Priority: P2
Component: Build system
AssignedTo: bitbucket at mindrot.org
ReportedBy: imorgan at
2008 Jan 29
0
Available: Multi-threaded AES-CTR Cipher
On multiple-core systems, OpenSSH is limited to using a single core for
all operations. On these systems this can result in a transfer being
processor-bound even though additional CPU resources exist. In order to
open up this bottleneck, we've developed a multi-threaded version of
the AES-CTR cipher. Unlike CBC mode, since there is no dependency
between cipher blocks in CTR mode we
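The key property being exploited is that in CTR mode, keystream block i depends only on the key and the initial counter plus i, so blocks can be encrypted out of order or in parallel. A minimal Python sketch of that independence (assuming the third-party "cryptography" package; this is not the patch's actual C implementation):

    # Sketch only (not the HPN patch code): CTR keystream block i depends only on
    # the key and counter+i, so blocks can be encrypted independently in parallel.
    import os
    from concurrent.futures import ThreadPoolExecutor
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    BLOCK = 16  # AES block size in bytes

    def encrypt_block(key: bytes, counter0: int, i: int, block: bytes) -> bytes:
        """Encrypt the i-th 16-byte block with its own counter value."""
        ctr = ((counter0 + i) % (1 << 128)).to_bytes(16, "big")
        enc = Cipher(algorithms.AES(key), modes.CTR(ctr)).encryptor()
        return enc.update(block) + enc.finalize()

    key = os.urandom(16)
    counter0 = int.from_bytes(os.urandom(16), "big")
    data = os.urandom(BLOCK * 64)
    blocks = [data[i * BLOCK:(i + 1) * BLOCK] for i in range(64)]

    # Encrypt the blocks in parallel threads, each with its own counter...
    with ThreadPoolExecutor(max_workers=4) as pool:
        parallel = b"".join(pool.map(lambda args: encrypt_block(key, counter0, *args),
                                     enumerate(blocks)))

    # ...and compare against one sequential CTR stream: the output is identical.
    stream = Cipher(algorithms.AES(key),
                    modes.CTR(counter0.to_bytes(16, "big"))).encryptor()
    assert parallel == stream.update(data) + stream.finalize()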
2006 Sep 29
0
HPN-SSH for OpenSSH 4.4p1 Available
This is a preliminary release and as such should be used at your own
risk. In my testing the application builds under OS X and Linux, passes
the regression tests, and file transfer tests on our test connections
exhibited a 1600% increase in performance
(1.4MB/s versus 20.9MB/s at a 46ms RTT).
This patch (hpn12v10) is available from
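As a back-of-envelope reading of the figures quoted above, the sustained rate multiplied by the 46 ms RTT gives the amount of data that must be in flight, i.e. the effective window:

    # Back-of-envelope from the figures quoted above: data in flight = rate x RTT.
    def implied_window_kib(rate_mb_s: float, rtt_s: float) -> float:
        """Effective window (KiB) implied by a sustained rate at a given RTT."""
        return rate_mb_s * 1e6 * rtt_s / 1024

    print(f"{implied_window_kib(1.4, 0.046):.0f} KiB")    # ~63 KiB at 1.4 MB/s
    print(f"{implied_window_kib(20.9, 0.046):.0f} KiB")   # ~939 KiB at 20.9 MB/s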
2008 Feb 07
0
HPN-SSH: HPN13v1 Released
Ben Bennett and I (both researchers at the Pittsburgh Supercomputing
Center) have released the HPN13v1 patch set for OpenSSH 4.7p1. Primarily
this release incorporates the previously announced multi-threaded
AES-CTR mode cipher, which will allow users to make better use of
multi-core environments. In our test environments we've seen upwards of
a 100% improvement in throughput performance