search for: incompressible

Displaying 20 results from an estimated 21 matches for "incompressible".

2010 Sep 20
0
No subject
+0100 From: Daniel Schall <tinc-devel at mon-clan.de> Date: Thu, 6 Jan 2011 17:00:35 +0100 Subject: [PATCH] Improved PMTU discovery diff --git a/lib/dropin.c b/lib/dropin.c index 52fb5b8..2b803b1 100644 --- a/lib/dropin.c +++ b/lib/dropin.c @@ -165,8 +165,8 @@ #endif #ifdef HAVE_MINGW -int usleep(long usec) { - Sleep(usec / 1000); - return 0; -} +//int usleep(long usec) { +//
2003 Aug 27
1
performance suggestion: sparse files
...of zeros. I worked around the problem by adding -z to compress the stream first (blocks of zeros compress remarkably well), and that made the virtual disk image transfer go much faster. Of course, all of the .tgzs and .tbzs in the same transfer got slower waiting on the source CPU to compress the incompressible. The obvious solution is to <music type=organ register=bass>change the protocol</music>, but that seems like a scary thing to do for a performance tweak. What about an option for "really-crappy-compression"? Something really cheezy (RLE) that can decide in a hurry whether to...
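A minimal sketch of the kind of "really-crappy-compression" suggested above (hypothetical C, not rsync code): a byte-wise run-length pass that gives up as soon as the data stops shrinking, so incompressible input costs almost nothing.

/* Hypothetical sketch, not rsync code: byte-wise RLE that bails out early
 * when the input is not compressing, so incompressible data stays cheap. */
#include <stddef.h>
#include <stdint.h>
#include <sys/types.h>

/* Returns the encoded length, or -1 to mean "send the data raw". */
ssize_t cheap_rle(const uint8_t *in, size_t n, uint8_t *out, size_t cap)
{
    size_t i = 0, o = 0;
    while (i < n) {
        size_t run = 1;
        while (i + run < n && in[i + run] == in[i] && run < 255)
            run++;
        if (o + 2 > cap)
            return -1;                 /* encoded form would not fit */
        out[o++] = (uint8_t)run;       /* run length */
        out[o++] = in[i];              /* repeated byte */
        i += run;
        if (i >= 4096 && o > i)        /* decide "in a hurry": it is expanding */
            return -1;
    }
    return o < n ? (ssize_t)o : -1;    /* only worth it if it actually shrank */
}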
2014 Apr 14
4
[Bug 10552] New: Sender checksum calculation significantly slower with compression enabled
...dentical (or nearly so) but have different modification times takes significantly longer than without the -z option. It looks like the entire file is compressed as the checksum is calculated even when no data needs to be transmitted to the receiver. To replicate: Create test folders/files (using incompressible data to maximize effect): mkdir a b dd if=/dev/urandom of=a/a.tst bs=1M count=250 cp a/a.tst b/ run rsync without compression: touch a/a.tst time rsync -av a/ b run rsync with compression: touch a/a.tst time rsync -avz a/ b The second time with the -z option will take significantly longer, even...
2011 May 23
5
Variable Bit Rate
Is FLAC a variable bit rate format when streamed? If so, how can it be truly lossless? -- Dennis Brunnenmeyer Director of Engineering CEDAR RIDGE SYSTEMS 15019 Rattlesnake Road Grass Valley, CA 95945-8710 Office: 1 (530) 477-9015 Mobile: 1 (530) 320-9025 eMail: dennisb /at/ chronometrics /dot/ com http://www.chronometrics.com/crs/index.html <http://www.chronometrics.com/crs/index.html>
2013 Oct 25
1
LZ4 compression in openssh
I see. From reading that wikipedia article, I'm wondering what gets compressed when compression is enabled in openssh. Is it the ciphertext or the cleartext? Regards, Mark On Fri, 2013-10-25 at 15:47 -0400, Daniel Kahn Gillmor wrote: > On 10/25/2013 03:23 PM, Mark E. Lee wrote: > > Thanks for the response, what kind of problematic interactions would > > occur (other than
2011 Apr 21
0
Simple xen devel project: try out new compression algorithm (with tmem)
...rmail/devel/2011-April/015113.html From: Zeev Tarantov <zeev.tarantov at gmail.com> Google's Snappy data compression library is a faster alternative to LZO, optimized for x86-64. On compressible input it compresses ~2.5x faster than LZO and decompresses ~1.5-2x faster than LZO. On incompressible input, it skips over the input ~100x faster than LZO and decompresses ~4x faster than LZO. It is released under a BSD license. This is a kernel port from user space C++ code. The current intended use is with zram (see next patch in series). Signed-off-by: Zeev Tarantov <zeev.tarantov at gmail.com>...
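For a feel of the interface, the user-space C bindings look roughly as below (a sketch against snappy-c.h, not the kernel port the patch series adds); on incompressible input the output stays about as large as the input, which is where the fast skip path pays off.

/* Sketch using the user-space snappy-c bindings, not the kernel port from
 * the patch series.  Build with -lsnappy. */
#include <stdio.h>
#include <stdlib.h>
#include <snappy-c.h>

int main(void)
{
    size_t in_len = 1 << 20;                 /* 1 MiB of test data */
    char *in = malloc(in_len);
    for (size_t i = 0; i < in_len; i++)
        in[i] = (char)rand();                /* pseudo-random, i.e. near-incompressible */

    size_t out_len = snappy_max_compressed_length(in_len);
    char *out = malloc(out_len);

    if (snappy_compress(in, in_len, out, &out_len) == SNAPPY_OK)
        printf("in=%zu out=%zu ratio=%.2f\n",
               in_len, out_len, (double)in_len / (double)out_len);

    free(in);
    free(out);
    return 0;
}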
2012 Aug 22
3
opus lossless?
Hi All, Is it possible to make Opus/CELT a lossless coder if I allow a sufficiently high bit rate? We considered using FLAC, but FLAC's latency is well beyond the acceptable range. Thanks
2011 Jan 05
1
PMTU Discovery
Dear Guus, while improving the PMTU Discovery algorithm, I found the following behavior in the method "send_udppacket": 1) The code checks whether the data size is smaller than the MTU, i.e. whether it fits into a single UDP packet. If not, the packet is sent via TCP. 2) The data is compressed, changing its size. (Usually making it smaller, but that's not always
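The point, in sketch form (illustrative C only, not tinc's actual send_udppacket(); zlib stands in for whatever compressor is configured): the UDP-versus-TCP decision has to look at the size after compression, because that is what ends up on the wire.

/* Illustrative only -- not tinc's send_udppacket().  The point is the
 * ordering: compress first, then compare the on-wire size with the PMTU. */
#include <string.h>
#include <zlib.h>

enum { SEND_UDP, SEND_TCP };

int choose_transport(const unsigned char *payload, uLong len,
                     unsigned char *wire, uLong wire_cap,
                     uLong *wire_len, uLong pmtu)
{
    *wire_len = wire_cap;
    if (compress2(wire, wire_len, payload, len, Z_DEFAULT_COMPRESSION) != Z_OK) {
        if (len > wire_cap)
            return SEND_TCP;            /* cannot even stage the raw packet */
        memcpy(wire, payload, len);     /* fall back to sending uncompressed */
        *wire_len = len;
    }
    /* Decide on the size that will actually be sent, not the original size. */
    return *wire_len <= pmtu ? SEND_UDP : SEND_TCP;
}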
2010 Nov 26
2
PMTU Discovery Question
Hi Guus, while checking the source code, I stumbled upon PMTU Discovery. I've got a question regarding the process of sending/receiving PMTU packets. As I understand it, the packet flow is like this: 1. Tinc creates a packet with a specific payload length to send as a PMTU probe. (The data part is just some random bytes.) 2. This packet gets compressed and sent
2013 Nov 07
2
Segfaults on connection loss
Hi there, I'm seeing quite frequent segfaults around check_dead_connections() and terminate_connection() when the TCP meta connection to a node times out (or is e.g. firewalled); it usually happens when there's heavy packet loss: Program terminated with signal 11, Segmentation fault. #0 edge_del (e=0x1b71ba0) at edge.c:96 96 avl_delete(e->from->edge_tree, e); (gdb)
2012 Apr 12
2
Details about compression and extents
Hello, I'm currently trying to understand how compression in btrfs works. I could not find any detailed description of it, so here are my questions. 1. How is it decided what to compress and what not? After a quick test with a 2 GB image file, I've looked into the extents of that file with find-new, and it turned out that only some of the first extents were compressed. The file was
2020 Sep 08
3
[PATCH 0/5] ZSTD compression support for OpenSSH
...MB 82.6MB/s 00:06 | Transferred: sent 144496, received 55714260 bytes, in 6.6 seconds | Bytes per second: sent 21789.3, received 8401454.6 | debug1: compress outgoing: raw data 46014, compressed 61226, factor 1.33 | debug1: compress incoming: raw data 563267187, compressed 55281740, factor 0.10 incompressible data zlib | CPU, sshd 70%, ssh 14% | u 100% 300MB 22.5MB/s 00:13 | Transferred: sent 57068, received 315112228 bytes, in 13.5 seconds | Bytes per second: sent 4236.6, received 23393315.6 | debug1: compress outgoing: raw data 24981, compressed 11877, factor 0.48 | debug1: compress incoming: raw...
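For reference, the one-shot libzstd call that such a patch builds on looks roughly like this (a sketch of the plain library API only; the OpenSSH packet layer would need the streaming interface):

/* Sketch of one-shot libzstd compression, not the OpenSSH integration from
 * the patch.  Build with -lzstd. */
#include <stdio.h>
#include <stdlib.h>
#include <zstd.h>

int main(void)
{
    const char src[] = "example payload; real traffic would be SSH packet data";
    size_t src_len = sizeof(src);

    size_t bound = ZSTD_compressBound(src_len);   /* worst-case output size */
    void *dst = malloc(bound);

    size_t n = ZSTD_compress(dst, bound, src, src_len, 3 /* compression level */);
    if (ZSTD_isError(n)) {
        fprintf(stderr, "zstd: %s\n", ZSTD_getErrorName(n));
        return 1;
    }
    printf("raw %zu -> compressed %zu bytes\n", src_len, n);
    free(dst);
    return 0;
}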
2012 Feb 13
10
[RFB] add LZ4 compression method to btrfs
Hi, so here it is, the LZ4 compression method inside btrfs. The patchset is based on top of current Chris' for-linus + Andi's snappy implementation + the fixes from Li Zefan. Passes xfstests and stress tests. I haven't measured performance on a wide range of hardware or workloads; I rather wanted to publish the patches before I get distracted again. I'd like to ask
2011 May 23
3
Variable Bit Rate
...o dunkle Flammen"). That's also why the format can't help but be VBR: different pieces of sound contain different amounts of information per second, and therefore have different compression ratio (and compression ratios can --- very rarely --- go above even 1: white noise is completely incompressible, when you add FLAC metadata you end up with a FLAC file bigger than the source WAV) -- Dennis Brunnenmeyer Director of Engineering CEDAR RIDGE SYSTEMS 15019 Rattlesnake Road Grass Valley, CA 95945-8710 Office: 1 (530) 477-9015 Mobile: 1 (530) 320-9025 eMail: dennisb /at/ chronometrics /dot/ co...
2015 May 10
3
Packet compression benchmark
...rk, then this is what you are interested in. If, on the other hand, you are bandwidth limited, then the compression ratio is what matters most. I also used four different types of data: zero bytes (perfectly compressible, but otherwise useless), HTML, binary executables, and random data (totally incompressible). The test was done with a single thread, using the libraries provided by the distribution. CPU: Intel(R) Core(TM) i7-5930K CPU @ 3.50GHz OS: Debian unstable, updated 2015-05-10 zlib: 1.2.8.dfsg-2+b1 LZO: 2.0.8-1.2 LZ4: 0.0~r122-2 Packet size 1451: Algorithm Zero bytes...
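The shape of such a benchmark, shown here for zlib only (a sketch; the posted results also cover LZO and LZ4): compress the same 1451-byte packet repeatedly in a single thread and compare throughput and compressed size for zero-filled versus random payloads.

/* Benchmark sketch in the spirit of the test above, zlib only.
 * Build with -lz. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <zlib.h>

#define PACKET 1451
#define ROUNDS 100000

static void bench(const char *name, const unsigned char *pkt)
{
    unsigned char out[PACKET + 128];   /* comfortably above compressBound(PACKET) */
    uLongf out_len = 0;
    clock_t t0 = clock();
    for (int i = 0; i < ROUNDS; i++) {
        out_len = sizeof(out);
        compress2(out, &out_len, pkt, PACKET, Z_DEFAULT_COMPRESSION);
    }
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
    printf("%-12s %4lu bytes compressed, %.0f packets/s\n",
           name, (unsigned long)out_len, ROUNDS / secs);
}

int main(void)
{
    static unsigned char zeros[PACKET];          /* zero-filled: perfectly compressible */
    unsigned char noise[PACKET];
    for (int i = 0; i < PACKET; i++)
        noise[i] = (unsigned char)rand();        /* stands in for "random data" */

    bench("zero bytes", zeros);
    bench("random", noise);
    return 0;
}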
2003 Sep 18
2
[Fwd: Re: FreeBSD Security Advisory FreeBSD-SA-03:12.openssh]
Roger Marquis wrote: > [snip] > >It takes all of 2 seconds to generate a ssh 2 new session on a >500Mhz cpu (causing less than 20% utilization). Considering that >99% of even the most heavily loaded servers have more than enough >cpu for this task I don't really see it as an issue. > >Also, by generating a different key for each session you get better >entropy,
2012 Jun 23
9
[PATCH 0/5] btrfs: lz4/lz4hc compression
WARNING: This is not compatible with the previous lz4 patchset. If you're using experimental compression that isn't in mainline kernels, be prepared to back up and restore or decompress before upgrading, and have backups in case it eats data (which appears not to be a problem any more, but has been during development). These patches add lz4 and lz4hc compression
2008 Nov 29
75
Slow death-spiral with zfs gzip-9 compression
I am [trying to] perform a test prior to moving my data to Solaris and ZFS. Things are going very poorly. Please suggest what I might do to understand what is going on, file a meaningful bug report, fix it, whatever! Both to learn what the compression ratio could be, and to induce a heavy load to expose issues, I am running with compress=gzip-9. I have two machines, both identical 800MHz P3 with
2020 Sep 05
8
[PATCH 0/5] ZSTD compression support for OpenSSH
I added ZSTD support to OpenSSH roughly a year ago and I've been playing with it ever since. The nice part is that ZSTD achieves reasonable compression (like zlib) but consumes little CPU, so it is unlikely that compression becomes the bottleneck of a transfer. The compression overhead (CPU) is negligible even when uncompressed data is tunneled over the SSH connection (SOCKS proxy, port