similar to: Decoding a continues stream

Displaying 20 results from an estimated 200 matches similar to: "Decoding a continues stream"

2009 Nov 20
2
ZFS Send Priority and Performance
I have several X4540 Thor systems, each with one large zpool, that replicate data to a backup host via zfs send/recv. The process works quite well when there is little to no usage on the source systems. However, when the source systems are under load, replication slows to a near crawl. Without load, a replication stream usually runs near 1 Gbps, but it drops to anywhere between 0 and 5000
2008 Dec 08
5
How to use mbuffer with zfs send/recv
>> How do I compile mbuffer for our system? Thanks to Mike Futerko for help with the compile; I now have it installed OK. >> And what syntax do I use to invoke it within the zfs send/recv? Still looking for answers to this one? Any example syntax, gotchas, etc. would be much appreciated. -- Kind regards, Jules free. open. honest. love. kindness. generosity. energy. frenetic.
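For what it's worth, a minimal sketch of invoking mbuffer inside a send/recv pipeline over ssh (the dataset names, host name, and buffer sizes are placeholders, not from the thread):

    zfs send tank/fs@snap | mbuffer -s 128k -m 1G \
        | ssh backuphost 'mbuffer -s 128k -m 512M | zfs recv -F tank/fs'

The buffers on both ends smooth out the bursty producer/consumer behaviour of send and recv; a reasonable starting point is -m sized to a few seconds of link throughput.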
2010 Feb 02
7
Help needed with zfs send/receive
Hi folks, I'm having (as the title suggests) a problem with zfs send/receive. The command line is like this: pfexec zfs send -Rp tank/tsm@snapshot | ssh remotehost pfexec zfs recv -v -F -d tank This works like a charm as long as the snapshot is small enough. When it gets too big (meaning somewhere between 17G and 900G), I get ssh errors (can't read from remote host). I tried
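One hedged workaround for long-running transfers that die mid-stream: replicate in smaller incremental steps from the last snapshot the receiver already holds, and keep the ssh session alive (the @prev snapshot name is a placeholder):

    pfexec zfs send -R -I tank/tsm@prev tank/tsm@snapshot \
        | ssh -o ServerAliveInterval=30 remotehost pfexec zfs recv -v -F -d tank

Each run then moves far less data than a full stream, so a dropped connection costs less rework.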
2010 Jul 19
22
zfs send to remote any ideas for a faster way than ssh?
I've tried ssh blowfish and scp arcfour; both are CPU-limited long before the 10G link is. I've also tried mbuffer, but I get broken pipe errors partway through the transfer. I'm open to ideas for faster ways to do either zfs send directly or through a compressed file of the zfs send output. For the moment I: zfs send > pigz, scp arcfour the gz file to the
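In the same spirit, a sketch that skips both ssh and the intermediate file by piping through pigz and a raw TCP socket (host, port, and dataset are placeholders; nc option syntax varies between netcat flavours):

    # on the receiver
    nc -l 9090 | pigz -d | zfs recv -F tank/fs
    # on the sender
    zfs send tank/fs@snap | pigz | nc -w 30 receiver 9090

This is unencrypted, so it only makes sense on a trusted link.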
2009 Jan 07
9
''zfs recv'' is very slow
On Wed 07/01/09 20:31, Carsten Aulbert carsten.aulbert at aei.mpg.de sent: > Brent Jones wrote: > > Using mbuffer can speed it up dramatically, but this seems like a hack without addressing a real problem with zfs send/recv. Trying to send any meaningful-sized snapshot from, say, an X4540 takes up to 24 hours, for as little as 300GB
2010 Oct 02
3
out of HDD space - zfs degraded
Overnight I was running a zfs send | zfs receive (both within the same system / zpool). The system ran out of space, a drive went offline, and the system is degraded. This is a raidz2 array running on FreeBSD 8.1-STABLE #0: Sat Sep 18 23:43:48 EDT 2010. The following logs are also available at http://www.langille.org/tmp/zfs-space.txt <- no line wrapping. This is what was running: #
2012 Dec 14
12
any more efficient way to transfer snapshot between two hosts than ssh tunnel?
Assuming a secure and trusted environment, we want to get the maximum transfer speed without the overhead of ssh. Thanks. Fred
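One commonly suggested answer is mbuffer's built-in TCP mode, which removes ssh from the pipeline entirely (host, port, and dataset names are placeholders):

    # on the receiver, started first
    mbuffer -s 128k -m 1G -I 9090 | zfs recv -F tank/fs
    # on the sender
    zfs send tank/fs@snap | mbuffer -s 128k -m 1G -O receiver:9090

Plain TCP carries no encryption overhead, which is exactly the point in a trusted environment.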
2010 May 20
13
send/recv over ssh
I know I'm probably doing something REALLY stupid... but for some reason I can't get send/recv to work over ssh. I just built a new media server and I'd like to move a few filesystems from my old server to my new one, but for some reason I keep getting strange errors... At first I'd see something like this: pfexec: can't get real path of
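The "pfexec: can't get real path of" error suggests the remote side cannot resolve the command it was asked to run; a hedged guess at a fix is to spell out the full path on the remote end (pool and host names are placeholders):

    zfs send mypool/fs@snap | ssh newserver 'pfexec /usr/sbin/zfs recv -d mypool'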
2008 Nov 06
45
''zfs recv'' is very slow
Hi, I have two systems, A (Solaris 10 update 5) and B (Solaris 10 update 6). I'm using 'zfs send -i' to replicate changes on A to B. However, the 'zfs recv' on B is running extremely slowly. If I run the zfs send on A and redirect output to a file, it sends at 2MB/sec. But when I use 'zfs send
2010 Jun 25
11
Maximum zfs send/receive throughput
It seems we are hitting a boundary with zfs send/receive over a network link (10Gb/s). We can see peak values of up to 150MB/s, but on average about 40-50MB/s are replicated. This is far from the bandwidth that a 10Gb link can offer. Is it possible that ZFS is giving replication too low a priority, or throttling it too much?
2001 Dec 08
1
HTB Message Storm HTB Delay <large number> > 5 secs
Hello, I've set up a simple system. It seems to work for a short while, but now I've got batches of hundreds of these messages. Also, I can't connect through that box any more; it's as if forwarding died. Has anyone any advice? Regards, John
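For context, a minimal HTB setup of the sort such a "simple system" usually means, as a baseline for comparison (device name and rates are placeholders):

    tc qdisc add dev eth0 root handle 1: htb default 20
    tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit
    tc class add dev eth0 parent 1:1 classid 1:20 htb rate 10mbit ceil 100mbit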
2010 Jan 05
2
FLAC C API / Visual Studio 2008 FILE* Issue
Hello, I am currently learning the FLAC C API and had the code working with FLAC__stream_decoder_init_file. However, since I need Unicode filename support, I tried _wfopen_s in combination with FLAC__stream_decoder_init_FILE; however, I get a runtime crash as soon as I call FLAC__stream_decoder_process_until_end_of_stream. The same code (partially taken from the examples) is working
2010 Jan 05
3
FLAC C API / Visual Studio 2008 FILE* Issue
I managed to get around it. I used the stream functions and provided my own callbacks for reading and writing. What's strange is that what I've done is just copy the contents of the read/write/seek/tell/eof callbacks from the sources into my application, and it works just fine, no glitches. When I use the built-in implementation, it just crashes without any reason. It's not a problem to
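The workaround described here (application-side callbacks over an application-owned FILE*, so the FILE* never crosses the DLL/CRT boundary) looks roughly like the sketch below; the file name and the minimal callbacks are illustrative, not the poster's code:

    #include <stdio.h>
    #include <FLAC/stream_decoder.h>

    /* Read callback: pulls bytes from a FILE* the application owns, so the
       FILE* never crosses into libFLAC (avoiding a CRT-mismatch crash). */
    static FLAC__StreamDecoderReadStatus read_cb(const FLAC__StreamDecoder *dec,
        FLAC__byte buffer[], size_t *bytes, void *client_data)
    {
        FILE *f = (FILE *)client_data;
        if (*bytes == 0)
            return FLAC__STREAM_DECODER_READ_STATUS_ABORT;
        *bytes = fread(buffer, 1, *bytes, f);
        if (*bytes == 0)
            return feof(f) ? FLAC__STREAM_DECODER_READ_STATUS_END_OF_STREAM
                           : FLAC__STREAM_DECODER_READ_STATUS_ABORT;
        return FLAC__STREAM_DECODER_READ_STATUS_CONTINUE;
    }

    static FLAC__StreamDecoderWriteStatus write_cb(const FLAC__StreamDecoder *dec,
        const FLAC__Frame *frame, const FLAC__int32 *const buffer[], void *cd)
    {
        /* consume decoded samples here */
        return FLAC__STREAM_DECODER_WRITE_STATUS_CONTINUE;
    }

    static void error_cb(const FLAC__StreamDecoder *dec,
        FLAC__StreamDecoderErrorStatus status, void *cd)
    {
        fprintf(stderr, "decode error: %s\n",
                FLAC__StreamDecoderErrorStatusString[status]);
    }

    int main(void)
    {
        FILE *f = NULL;
        if (_wfopen_s(&f, L"in.flac", L"rb") != 0 || f == NULL)
            return 1;
        FLAC__StreamDecoder *dec = FLAC__stream_decoder_new();
        /* seek/tell/length/eof callbacks are optional and may be NULL */
        FLAC__stream_decoder_init_stream(dec, read_cb, NULL, NULL, NULL, NULL,
                                         write_cb, NULL, error_cb, f);
        FLAC__stream_decoder_process_until_end_of_stream(dec);
        FLAC__stream_decoder_finish(dec);
        FLAC__stream_decoder_delete(dec);
        fclose(f);
        return 0;
    }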
2011 Sep 15
1
decoder lost after processing
Hi, I'm writing a simple FLAC-playing program, and I've basically modified the example C decoder code to use libao. The example code works just fine, but when I use libao, after calling FLAC__stream_decoder_process_until_end_of_stream(), decoder points to an inaccessible area of memory (0x2). This invariably causes a segmentation fault when anything else thereafter uses the decoder (i.e.
2004 Sep 10
2
Storing FLAC in Matroska
First, thank you for your answers. I am using the following code to try to simply decode a FLAC file and write the decoded data to a raw PCM file. The resulting file is just noise and pops, so is the decoded data in a different format than PCM? struct flacData { FILE *inputFile; FILE *outputFile; char *filename; }; FLAC__StreamDecoderReadStatus flac_DecoderReadCallback(const FLAC__StreamDecoder
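Most likely yes: libFLAC delivers decoded audio to the write callback as non-interleaved 32-bit samples, one array per channel, so writing them out verbatim produces noise. A sketch of a write callback that interleaves and narrows the samples (it assumes a 16-bit stream and a little-endian host, and reuses the struct flacData above):

    /* Write callback sketch: interleave per-channel FLAC__int32 samples and
       narrow them to 16-bit little-endian PCM before writing the raw file. */
    FLAC__StreamDecoderWriteStatus flac_DecoderWriteCallback(
        const FLAC__StreamDecoder *decoder, const FLAC__Frame *frame,
        const FLAC__int32 *const buffer[], void *client_data)
    {
        struct flacData *data = (struct flacData *)client_data;
        unsigned i, ch;
        for (i = 0; i < frame->header.blocksize; i++) {
            for (ch = 0; ch < frame->header.channels; ch++) {
                FLAC__int16 sample = (FLAC__int16)buffer[ch][i];
                fwrite(&sample, sizeof sample, 1, data->outputFile);
            }
        }
        return FLAC__STREAM_DECODER_WRITE_STATUS_CONTINUE;
    }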
2010 May 28
6
zfs send/recv reliability
After looking through the archives I haven't been able to assess the reliability of a backup procedure which employs zfs send and recv. Currently I'm attempting to create a script that will allow me to write a zfs stream to a tape via tar, like below. # zfs send -R pool@something | tar -c > /dev/tape I'm primarily concerned with the possibility
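As an aside, tar expects file arguments rather than a raw stream on stdin, so a more direct sketch is to block the stream onto tape with dd (device name and pool are placeholders):

    # write
    zfs send -R pool@something | dd of=/dev/tape bs=1M
    # restore
    dd if=/dev/tape bs=1M | zfs recv -d pool

The usual caveat raised in these threads still applies: a single corrupted bit can make an entire zfs stream unreceivable, which is exactly the reliability concern being asked about.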
2010 Oct 14
0
AMD/Supermicro machine - AS-2022G-URF
Sorry for the long post, but I know people trying to decide on hardware often want to see details about what others are using. I have the following AS-2022G-URF machine running OpenGaryIndiana[1] that I am starting to use. I successfully transferred a deduped zpool with 1.x TB of files and 60 or so zfs filesystems using mbuffer from an old 134 system with 6 drives - it ran at about 50MB/s or
2006 Aug 02
10
[PATCH 0/6] htb: cleanup
The HTB scheduler code is a mess; this patch set does some basic housecleaning. The first four should cause no code change, but the last two need more testing. -- Stephen Hemminger <shemminger@osdl.org> "And in the Packet there writ down that doome"
2004 Sep 10
2
Error initializing flac stream decoder.
I've cross-compiled flac for the armv4l processor (rio receiver), and I'm trying to start up a decode thread: #include <FLAC/stream_decoder.h> .... FLAC__StreamDecoder *flac = NULL; flac = FLAC__stream_decoder_new(); if (flac == NULL) { printf("[DECODE] Unable to initialize flac object\n");
2004 Sep 10
2
Error initializing flac stream decoder.
Thanks for that email. The one-line change I made is this: from #define FLAC__MAX_RICE_PARTITION_ORDER (15u) to #define FLAC__MAX_RICE_PARTITION_ORDER (6u), and that seemed to make decoder_new() happy, but it's promptly crashing after making a call to the read callback (below), then to the meta callback. The meta callback did nothing but print a string and return. I removed it, and