similar to: any more efficient way to transfer snapshot between two hosts than ssh tunnel?

Displaying 20 results from an estimated 800 matches similar to: "any more efficient way to transfer snapshot between two hosts than ssh tunnel?"

2010 May 20
13
send/recv over ssh
I know I'm probably doing something REALLY stupid... but for some reason I can't get send/recv to work over ssh. I just built a new media server and I'd like to move a few filesystems from my old server to my new server, but I keep getting strange errors... At first I'd see something like this: pfexec: can't get real path of
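For context, a working send/recv over ssh usually takes the shape below. This is a minimal sketch, not the poster's actual command; the pool, snapshot, and host names are hypothetical placeholders, and it assumes pfexec (or equivalent delegations via zfs allow) works on the remote side as well.

    # Snapshot the filesystem, then stream it to the new server over ssh.
    # "media/movies", "migrate", "newbox", and "tank" are placeholder names.
    pfexec zfs snapshot media/movies@migrate
    pfexec zfs send media/movies@migrate | \
        ssh newbox pfexec zfs recv -d tank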
2010 Jul 19
22
zfs send to remote any ideas for a faster way than ssh?
I've tried ssh with blowfish and scp with arcfour; both are CPU-limited long before the 10G link is. I've also tried mbuffer, but I get broken-pipe errors partway through the transfer. I'm open to ideas for faster ways to either zfs send directly or through a compressed file of the zfs send output. For the moment I: zfs send > pigz, scp the gz file with arcfour to the
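One approach that often comes up in these threads is to take ssh out of the data path entirely and stream over a raw TCP socket, compressing with pigz on the way. A rough sketch, assuming a trusted network; host names and the port are placeholders, and nc option syntax varies between netcat variants:

    # Receiver (start first): listen, decompress, feed zfs recv.
    nc -l 9090 | pigz -d | zfs recv -F tank/backup

    # Sender: stream the snapshot, compress on all cores, push over TCP.
    zfs send tank/data@snap | pigz | nc receiver-host 9090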
2010 Nov 18
9
WarpDrive SLP-300
http://www.lsi.com/channel/about_channel/whatsnew/warpdrive_slp300/index.html Good stuff for ZFS. Fred
2011 May 24
2
ndmp?
When I search around, I see that nexenta has ndmp, and solaris 10 does not, and there was at least some talk about supporting ndmp in opensolaris ... So ... Is ndmp present in solaris 11 express? Is it an installable 3rd party package? How would you go about supporting ndmp if you wanted to?
2013 Jan 02
1
ssh / scp slow on 10GbE
Hello list, right now an SSH tunnel / scp reaches just around 76Mb/s on my E5 Xeon using AES-NI, but openssl reaches around 600-700Mb/s using the aes-128-cbc cipher. As far as I understand http://www.psc.edu/index.php/hpn-ssh this is due to very small buffers in ssh / scp. Is there any work on this? Like autotuning the buffer size? Are there plans to integrate the HPN patches? Greets, Stefan
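A quick way to confirm the cipher itself is not the bottleneck is to benchmark it directly and then compare against an actual scp run forced onto the same cipher; the file and host names below are placeholders:

    # Benchmark AES-128-CBC via the EVP interface (uses AES-NI if available).
    openssl speed -evp aes-128-cbc

    # Compare with a real transfer using the same cipher.
    scp -c aes128-cbc bigfile user@remotehost:/tmp/

If openssl is fast but scp is slow, the limit is ssh's flow-control buffering rather than the crypto, which is exactly what the HPN patches target.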
2005 Jun 17
3
New Set of High Performance Networking Patches Available
http://www.psc.edu/networking/projects/hpn-ssh/ Mike Stevens and I just released a new set of high performance networking patches for OpenSSH 3.9p1, 4.0p1, and 4.1p1. These patches will provide the same set of functionality across all 3 revisions. New functionality includes 1) HPN performance even without both sides of the connection being HPN enabled. As long as the bulk data flow is in the
2008 Dec 08
5
How to use mbuffer with zfs send/recv
>> How do I compile mbuffer for our system? Thanks to Mike Futerko for help with the compile; I now have it installed OK. >> And what syntax do I use to invoke it within the zfs send/recv? Still looking for answers to this one. Any example syntax, gotchas etc. would be much appreciated. -- Kind regards, Jules free. open. honest. love. kindness. generosity. energy. frenetic.
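The invocation that usually gets posted in answer to this looks like the sketch below: mbuffer runs on both ends and speaks TCP directly, so ssh never touches the data. The pool/snapshot names, port, and buffer size (-m) are placeholders to tune; one gotcha is that the receiver must be started first or the sender's connection is refused.

    # Receiving side: listen on TCP 9090, buffer 1 GB in RAM, feed zfs recv.
    mbuffer -I 9090 -m 1G | zfs recv tank/filesystem

    # Sending side: stream the snapshot into mbuffer aimed at the receiver.
    zfs send tank/filesystem@snap | mbuffer -O receiver:9090 -m 1G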
2010 Feb 02
7
Help needed with zfs send/receive
Hi folks, I'm having (as the title suggests) a problem with zfs send/receive. Command line is like this: pfexec zfs send -Rp tank/tsm@snapshot | ssh remotehost pfexec zfs recv -v -F -d tank This works like a charm as long as the snapshot is small enough. When it gets too big (meaning somewhere between 17G and 900G), I get ssh errors (can't read from remote host). I tried
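"Can't read from remote host" on very long transfers is often a connection dropped by an idle timeout somewhere on the path rather than a ZFS problem. One low-risk experiment is to add ssh keepalives to the same command; the interval values here are arbitrary:

    # Application-level keepalive every 60s; give up after 10 missed replies.
    pfexec zfs send -Rp tank/tsm@snapshot | \
        ssh -o ServerAliveInterval=60 -o ServerAliveCountMax=10 \
            remotehost pfexec zfs recv -v -F -d tank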
2008 Nov 06
45
'zfs recv' is very slow
hi, I have two systems, A (Solaris 10 update 5) and B (Solaris 10 update 6). I'm using 'zfs send -i' to replicate changes on A to B. However, the 'zfs recv' on B is running extremely slowly. If I run the zfs send on A and redirect output to a file, it sends at 2MB/sec. But when I use 'zfs send
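A standard way to localise this kind of slowdown is to time each stage in isolation: producing the stream, moving the bytes, and consuming the stream. A sketch with placeholder dataset and snapshot names:

    # 1) How fast can A produce the incremental stream?
    time zfs send -i tank/fs@old tank/fs@new > /var/tmp/incr.zstream

    # 2) How fast can the wire move it? (any bulk copy will do)
    scp /var/tmp/incr.zstream B:/var/tmp/

    # 3) How fast can B consume it? (run on B)
    time zfs recv tank/fs < /var/tmp/incr.zstream

Whichever stage dominates the total tells you whether to tune the sender, the network, or the receiver.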
2009 Jan 07
9
'zfs recv' is very slow
On Wed 07/01/09 20:31, Carsten Aulbert <carsten.aulbert at aei.mpg.de> sent: > Brent Jones wrote: > > Using mbuffer can speed it up dramatically, but this seems like a hack without addressing a real problem with zfs send/recv. Trying to send any meaningful sized snapshots from say an X4540 takes up to 24 hours, for as little as 300GB
2010 Jun 25
11
Maximum zfs send/receive throughput
It seems we are hitting a boundary with zfs send/receive over a network link (10Gb/s). We can see peak values of up to 150MB/s, but on average about 40-50MB/s are replicated. This is far from the bandwidth that a 10Gb link can offer. Is it possible that ZFS is giving replication too low a priority / throttling it too much?
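Before concluding that replication is throttled, it is worth measuring the raw TCP path with the same buffering tool: if mbuffer-to-mbuffer saturates the link but zfs send through the same buffers does not, the limit is on the ZFS side. A sketch; the host name, port, and sizes are placeholders:

    # Raw network test, no ZFS involved (start the receiver first).
    mbuffer -I 9090 -m 1G > /dev/null              # on the receiver
    mbuffer -i /dev/zero -O receiver:9090 -m 1G    # on the sender; Ctrl-C
                                                   # after a while and read
                                                   # the rate mbuffer prints

    # Then the real replication through the same buffers, for comparison.
    zfs send tank/fs@snap | mbuffer -O receiver:9090 -m 1G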
2001 Dec 08
1
HTB Message Storm HTB Delay <large number> > 5 secs
Hello, I've set up a simple system. It seems to work for a short while, but now I've got batches of 100s of these messages. Also I can't connect through that box any more. It's as if forwarding died. Has anyone any advice? Regards, John
2012 Jul 11
1
Decoding a continuous stream
Hi, I've been trying to decode a FLAC audio stream. I have a reader which sends raw byte data to my FLAC wrapper class. Only once the decode function below returns will the reader send new data. Hence I want to decode until the stream is empty, but I will add new data to the stream once it is empty. void MyFlacCoder::decode(char *data, int bytes) { mBuffer = input;
2010 May 28
21
expand zfs for OpenSolaris running inside vm
hello all, I have constrained disk space (only 8GB) while running the OS inside a VM. Now I want to add more. It is easy to add for the VM, but how can I grow the filesystem in the OS? I cannot use autoexpand because it isn't implemented in my system: $ uname -a SunOS sopen 5.11 snv_111b i86pc i386 i86pc If it was snv_171 it would be great, right? Doing the following: o added new virtual HDD (it becomes
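On builds new enough to have the autoexpand pool property, growing into an enlarged virtual disk is straightforward; on older builds, zpool online -e after growing the disk may achieve the same. A sketch with placeholder pool and device names:

    # Let the pool absorb new capacity automatically when a device grows
    # (requires a build that has the autoexpand property).
    zpool set autoexpand=on mypool

    # Or expand a single device explicitly after enlarging the virtual HDD.
    zpool online -e mypool c0t1d0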
2009 Nov 20
2
ZFS Send Priority and Performance
I have several X4540 Thor systems with one large zpool that replicate data to a backup host via zfs send/recv. The process works quite well when there is little to no usage on the source systems. However, when the source systems are under load, replication slows to a near crawl. Without load, replication usually streams along near 1 Gbps, but drops down to anywhere between 0 - 5000
2007 Nov 09
1
HPN SSH
Hello, I know that this has been asked before; I just wanted to mention that I, too, would like to see the HPN SSH functionality incorporated into standard OpenSSH. Would there be technical disadvantages to integrating the changes? I know we are all pretty busy, but I would certainly spend time to help, e.g. with testing, documentation, etc. Cheers --pwo -- Peter W. Osel - http://pwo.de/ - pwo
2011 Feb 06
3
OpenSSH could be faster... then why don't they patch it??
https://www.psc.edu/networking/projects/hpn-ssh/hpn-v-ssh-tput.jpg "SCP and the underlying SSH2 protocol implementation in OpenSSH are network-performance limited by statically defined internal flow-control buffers. These buffers often end up acting as a bottleneck for network throughput of SCP, especially on long and high-bandwidth network links. Modifying the ssh code to allow the buffers
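With HPN-patched OpenSSH on both ends, the enlarged buffers are used automatically, and the patch set adds its own knobs. The options below exist only in HPN builds, not stock OpenSSH, so treat the exact names and units as assumptions to verify against your build's documentation:

    # HPN-specific tuning knobs (not present in stock OpenSSH).
    scp -oTcpRcvBufPoll=yes -oHPNBufferSize=16384 bigfile user@remotehost:/tmp/

    # HPN's NONE cipher: authentication stays encrypted, bulk data does not.
    # Only sensible on trusted links or for already-encrypted payloads.
    scp -oNoneEnabled=yes -oNoneSwitch=yes bigfile user@remotehost:/tmp/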
2009 Feb 17
1
Support for merging LPK and hpn-ssh into mainline openssh?
Hello, are there plans to merge hpn-ssh (http://www.psc.edu/networking/projects/hpn-ssh/) and LPK (http://code.google.com/p/openssh-lpk/) into mainline openssh? Adding LPK has been logged as a bug in bugzilla. They are two patches that I always apply, as the performance boost from hpn-ssh is substantial to say the least, and centralisation of the authorized_keys into an LDAP server
2010 Oct 02
3
out of HDD space - zfs degraded
Overnight I was running a zfs send | zfs receive (both within the same system / zpool). The system ran out of space, a drive went offline, and the system is degraded. This is a raidz2 array running on FreeBSD 8.1-STABLE #0: Sat Sep 18 23:43:48 EDT 2010. The following logs are also available at http://www.langille.org/tmp/zfs-space.txt <- no line wrapping This is what was running: #
2006 May 19
1
New HPN Patch Released
The HPN12 patch available from http://www.psc.edu/networking/projects/hpn-ssh addresses performance issues with bulk data transfer over high bandwidth-delay paths. By adjusting internal flow-control buffers to better fit the outstanding data capacity of the path, significant improvements in bulk data throughput are achieved. In other words, transfers over the internet are a lot