search for: compressible

Displaying 20 results from an estimated 8988 matches for "compressible".

2009 Jul 15
3
Conflicting perl packages?
A plain, ordinary yum update of CentOS 5.3 spits out a bunch of transaction check errors regarding the packages perl-IO-Compress-Zlib-2.015-1.el5.rf.noarch and perl-IO-Compress-2.020-1.el5.rf.noarch, the latter of which is supposed to replace perl-IO-Compress-Base-2.015-1.el5.rf.noarch: Transaction Check Error: file /usr/lib/perl5/vendor_perl/5.8.8/IO/Compress/Adapter/Deflate.pm from install of
2011 Dec 22
0
[linux test] 10593: tolerable FAIL - PUSHED
flight 10593 linux real [real] http://www.chiark.greenend.org.uk/~xensrcts/logs/10593/ Failures :-/ but no regressions. Tests which are failing intermittently (not blocking): test-amd64-amd64-xl-sedf 13 guest-localmigrate.2 fail pass in 10581 test-i386-i386-pv 16 guest-start.2 fail pass in 10570 test-amd64-i386-rhel6hvm-amd 7 redhat-install
2018 Aug 02
3
Default compression level for -compress-debug-info=zlib?
Folks, I'd like to get experts' opinions on which compression level is suitable for lld's -compress-debug-section=zlib option, which lets the linker compress .debug_* sections using zlib. Currently, lld uses compression level 9, which produces the smallest output in exchange for a longer link time. My question is: is this what people actually want? We didn't consciously choose
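The size/time trade-off this thread asks about is easy to measure outside the linker. A minimal sketch using plain zlib from Python (illustrative only, not lld's code path; the input file name is hypothetical):

```python
import time
import zlib

# Hypothetical dump of a .debug_* section; any large, compressible file works.
data = open("debug_info.bin", "rb").read()

for level in (1, 6, 9):
    start = time.perf_counter()
    out = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    print(f"level {level}: {len(data)} -> {len(out)} bytes in {elapsed:.3f}s")
```

On typical DWARF-like data, level 1 is usually several times faster than level 9 for only a modestly larger output, which is the heart of the question.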
2013 Nov 22
0
[xen-unstable test] 22084: tolerable FAIL - PUSHED
flight 22084 xen-unstable real [real] http://www.chiark.greenend.org.uk/~xensrcts/logs/22084/ Failures :-/ but no regressions. Tests which did not succeed, but are not blocking: test-amd64-amd64-xl-pcipt-intel 9 guest-start fail never pass test-armhf-armhf-xl 5 xen-boot fail never pass test-amd64-i386-xend-winxpsp3 16 leak-check/check
2018 Aug 02
2
Default compression level for -compress-debug-info=zlib?
...(i) / decompression (ii) bandwidth. For spinning drives it *might* be a win but for SATA and especially PCIe / NVMe SSD it could be a CPU bottleneck? Though we should also bear in mind that compression can be pipelined with writes in i) and debug info loading could be lazy in ii) (e.g. for highly compressible data we've generally seen ~10MiB/s output bandwidth on single thread i7 @3.2GHz memory to memory for zlib9 with 32KiB window, that doesn't stack up well against modern IO) How is the compression implemented in lld? Is it chunked and therefore parallelizable (and able to be pipelined with I...
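The chunking idea raised here can be sketched directly: if each chunk becomes an independent zlib stream, chunks can be compressed by worker threads and written as they complete. A minimal illustration, assuming the consumer accepts per-chunk streams rather than one continuous stream (this is a sketch of the idea, not how lld implements it):

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

CHUNK = 1 << 20  # 1 MiB per chunk; each becomes an independent zlib stream

def compress_chunk(chunk: bytes) -> bytes:
    # CPython's zlib releases the GIL while deflating, so threads overlap.
    return zlib.compress(chunk, 9)

def compress_chunked(data: bytes) -> list[bytes]:
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    with ThreadPoolExecutor() as pool:
        return list(pool.map(compress_chunk, chunks))
```

Decompression has to mirror the chunking (decompress each piece, then concatenate), which is the compatibility cost of this approach.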
2003 May 10
3
benchmarking rsync's -z compression utility
Hi, Is there a way in which rsync's -z compression (zlib) utility can be benchmarked? I'm trying to compare the compression ratio between rsync and external compression tools like gzip and bzip2. Are there any advantages to using rsync's internal compression mechanism, specified with the -z option, compared to solely applying external compression, e.g. bzip2, to the files and invoking
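A rough way to run the ratio comparison being asked for, assuming plain zlib at its default level approximates rsync -z (rsync's stream compressor is zlib-based, though it also reuses history across matched blocks, so this is only an approximation):

```python
import bz2
import zlib
from pathlib import Path

payload = Path("sample.tar").read_bytes()  # hypothetical test file

for name, packed in (("zlib level 6 (~ rsync -z)", zlib.compress(payload, 6)),
                     ("bzip2 level 9", bz2.compress(payload, 9))):
    print(f"{name}: {len(packed) / len(payload):.3f} of original size")
```

bzip2 usually wins on ratio; rsync's advantage is that -z only compresses the literal data it actually has to send over the wire.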
2018 Aug 02
2
Default compression level for -compress-debug-info=zlib?
Also not an expert, but would it make sense for this to be configurable at a fine-grained level, perhaps with another option, or an extension to the compress-debug-sections switch interface? That way users who care about the finer details can configure it themselves. And we should pick sensible options for the default. James On 2 August 2018 at 11:08, Pavel Labath via llvm-dev < llvm-dev at
2018 Aug 02
3
Default compression level for -compress-debug-info=zlib?
...;> drives it *might* be a win but for SATA and especially PCIe / NVMe SSD it >> could be a CPU bottleneck? Though we should also bear in mind that >> compression can be pipelined with writes in i) and debug info loading could >> be lazy in ii) >> >> (e.g. for highly compressible data we've generally seen ~10MiB/s output >> bandwidth on single thread i7 @3.2GHz memory to memory for zlib9 with 32KiB >> window, that doesn't stack up well against modern IO) >> >> How is the compression implemented in lld? Is it chunked and therefore >> par...
2018 Aug 02
3
Default compression level for -compress-debug-info=zlib?
...SATA and especially PCIe / NVMe SSD it >>>> could be a CPU bottleneck? Though we should also bear in mind that >>>> compression can be pipelined with writes in i) and debug info loading could >>>> be lazy in ii) >>>> >>>> (e.g. for highly compressible data we've generally seen ~10MiB/s output >>>> bandwidth on single thread i7 @3.2GHz memory to memory for zlib9 with 32KiB >>>> window, that doesn't stack up well against modern IO) >>>> >>>> How is the compression implemented in lld? Is it ch...
2011 Sep 18
5
Inefficient storing of ISO images with compress=lzo
I've noticed that: - with x86-64 Fedora 15 DVD install images: - du -sh <ROOT VOLUME> was 36 GB - btrfs df | grep -i data showed over 40 GB used - without them: - du -sh <ROOT VOLUME> is 34 GB - btrfs df | grep -i data showed less than 34 GB used It seems that ISO files are considered compressible while they may not be (and the penalty is severe: 3x). Regards
2019 Jun 25
5
About rsync over SSH and compression
Rsync supports the capability of compressing data before sending. So does OpenSSH. It would probably be a waste of resources and time to enable both compression capabilities at the same time, but it is not clear to me whether, in general, it makes better sense to enable rsync compression or SSH compression. My first thought would be that SSH compression might yield better results, on the
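The "waste of resources" intuition is easy to demonstrate: deflate output is close to incompressible, so layering a second compressor on top burns CPU for essentially no gain. A small sketch (the dictionary path is an assumption; any compressible text works):

```python
import zlib

raw = open("/usr/share/dict/words", "rb").read()

once = zlib.compress(raw, 6)    # e.g. rsync -z
twice = zlib.compress(once, 6)  # e.g. ssh -C applied on top

print(f"raw {len(raw)}, once {len(once)}, twice {len(twice)}")
# 'twice' is barely smaller than 'once', and can even be larger.
```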
2013 Jan 28
2
Not respected repo priorities...
Ah ah, the demo effect... just after I said I did not have many issues with repos... ^_^ A colleague installed some packages (for Percona) and since then a server insists on replacing 2 base packages with 2 rfx packages, even though I gave a lower priority to rfx... Installed: perl-IO-Compress-Base-2.020-127.el6.x86_64 Installed: 1:perl-Compress-Raw-Zlib-2.020-127.el6.x86_64 Installed:
2012 Apr 12
2
Details about compression and extents
Hello, I'm currently trying to understand how compression in btrfs works. I could not find any detailed description of it, so here are my questions. 1. How is it decided what to compress and what not? After a quick test with a 2g image file, I've looked into the extents of that file with find-new and it turned out that only some of the first extents were compressed. The file was
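On question 1, the behaviour described (only early extents compressed) is consistent with a probe-style heuristic: try compressing the start of the file, and if the ratio is poor, mark the whole file as not worth compressing. A minimal user-space sketch of that idea (not btrfs's actual kernel code; its per-extent logic and thresholds differ):

```python
import zlib

PROBE = 128 * 1024   # sample size, roughly one extent's worth
THRESHOLD = 0.9      # give up unless we save at least ~10%

def worth_compressing(path: str) -> bool:
    with open(path, "rb") as f:
        sample = f.read(PROBE)
    if not sample:
        return False
    return len(zlib.compress(sample, 3)) / len(sample) < THRESHOLD
```

An ISO full of already-compressed packages would fail such a probe, which ties back to the compress=lzo result reported earlier in these results.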
2007 Jul 05
17
ZFS Compression algorithms - Project Proposal
Below follows a proposal for a new opensolaris project. Of course, this is open to change, since I just wrote down some ideas I had months ago while researching the topic as a graduate student in Computer Science, and since I'm not an opensolaris/ZFS expert at all. I would really appreciate any suggestions or comments. PROJECT PROPOSAL: ZFS Compression Algorithms. The main purpose of
2007 Jul 05
5
FLAC: getting compression level using metaflac
Why isn't the compression level added in a metadata block by the flac encoder itself (just like the encoder version)? In this way all programs that read the file can see what compression level was used. thx 2007/7/4, Scot Thompson <scot.thompson@cox.net>: > > This has been asked many times. The answer is no. I suggest saving the > compression level into a tag for future
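The tag workaround suggested here is straightforward to script. A sketch using the third-party mutagen library (pip install mutagen); the tag name is a made-up convention, since FLAC defines no standard field for this:

```python
from mutagen.flac import FLAC  # third-party: pip install mutagen

audio = FLAC("track.flac")
audio["COMPRESSION_LEVEL"] = "8"  # hypothetical Vorbis-comment key
audio.save()
```

Note that the level only affects encoding effort and file size, never the decoded audio, which is part of why the encoder does not need to record it.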
2013 Mar 12
3
Flac compression levels?
"Using FLAC binary : /Users/Marcus/flac/test/../src/flac/flac Original file size 441044 bytes. Compression level 1, file size 421393 bytes. Compression level 2, file size 421393 bytes. Compression level 3, file size 373613 bytes. Compression level 4, file size 369517 bytes. Compression level 5, file size 369517 bytes. Compression level 6, file size 369517 bytes. Compression
2009 Jun 15
33
compression at zfs filesystem creation
Hi, I just installed 2009.06 and found that compression isn't enabled by default when filesystems are created. Does it make sense to have an RFE open for this? (I'll open one tonight if need be.) We keep telling people to turn on compression. Are there any situations where turning on compression doesn't make sense, like rpool/swap? What about rpool/dump? Thanks, ~~sa
2020 Sep 08
3
[PATCH 0/5] ZSTD compression support for OpenSSH
...82.6MB/s 00:06 | Transferred: sent 144496, received 55714260 bytes, in 6.6 seconds | Bytes per second: sent 21789.3, received 8401454.6 | debug1: compress outgoing: raw data 46014, compressed 61226, factor 1.33 | debug1: compress incoming: raw data 563267187, compressed 55281740, factor 0.10 incompressible data zlib | CPU, sshd 70%, ssh 14% | u 100% 300MB 22.5MB/s 00:13 | Transferred: sent 57068, received 315112228 bytes, in 13.5 seconds | Bytes per second: sent 4236.6, received 23393315.6 | debug1: compress outgoing: raw data 24981, compressed 11877, factor 0.48 | debug1: compress incoming: raw...
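The "factor" lines in the quoted log are simply compressed/raw byte ratios. Reproducing the zlib-vs-zstd comparison outside SSH is easy with the third-party zstandard bindings (pip install zstandard; the payload file is hypothetical):

```python
import zlib
import zstandard

data = open("payload.bin", "rb").read()

z = zlib.compress(data, 6)
s = zstandard.ZstdCompressor(level=3).compress(data)

print(f"zlib factor: {len(z) / len(data):.2f}")
print(f"zstd factor: {len(s) / len(data):.2f}")
```

zstd's draw in a patch series like this is less the ratio than the much higher throughput at comparable ratios.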
2013 May 07
4
[Bug 9864] New: Allow permanent compression of destination files
https://bugzilla.samba.org/show_bug.cgi?id=9864 Summary: Allow permanent compression of destination files Product: rsync Version: 3.1.0 Platform: All OS/Version: All Status: NEW Severity: enhancement Priority: P5 Component: core AssignedTo: wayned at samba.org ReportedBy: me at haravikk.com
2003 Dec 11
2
read.spss question warning compression bias
Hello again. I have a file from SPSS in .sav format. When I run library(foreign) cvar<-as.data.frame(read.spss("c:\\NDRI\\cvar\\data\\cvar2rev3.sav")) I get a warning: Warning message: c:\NDRI\cvar\data\cvar2rev3.sav: Compression bias (0) is not the usual value of 100. The data appear to be OK, but I am concerned. (I tried searching the archives and the documentation for data
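For background on the warning itself: in SPSS's bytecode-compressed .sav format, a one-byte code c stores the numeric value c - bias, and the bias is conventionally 100 (so codes 1..251 cover -99..151). A bias of 0 merely shifts that window, which is consistent with the data reading back fine. The arithmetic, per the sav format as documented by the PSPP project:

```python
USUAL_BIAS, SEEN_BIAS = 100, 0

code = 105  # example one-byte code from a compressed record
print(code - USUAL_BIAS)  # 5   -> value under the conventional bias
print(code - SEEN_BIAS)   # 105 -> what the same code means when bias is 0
```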