Displaying 20 results from an estimated 8987 matches for "compression".
2009 Jul 15
3
Conflicting perl packages?
A plain, ordinary yum update of CentOS 5.3 spits out a bunch of
transaction check errors regarding the packages
perl-IO-Compress-Zlib-2.015-1.el5.rf.noarch and
perl-IO-Compress-2.020-1.el5.rf.noarch which is supposed to replace
perl-IO-Compress-Base-2.015-1.el5.rf.noarch:
Transaction Check Error:
file /usr/lib/perl5/vendor_perl/5.8.8/IO/Compress/Adapter/Deflate.pm
from install of
2011 Dec 22
0
[linux test] 10593: tolerable FAIL - PUSHED
flight 10593 linux real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/10593/
Failures :-/ but no regressions.
Tests which are failing intermittently (not blocking):
test-amd64-amd64-xl-sedf 13 guest-localmigrate.2 fail pass in 10581
test-i386-i386-pv 16 guest-start.2 fail pass in 10570
test-amd64-i386-rhel6hvm-amd 7 redhat-install
2018 Aug 02
3
Default compression level for -compress-debug-info=zlib?
Folks,
I'd like to get expert's opinion on which compression level is suitable for
lld's -compress-debug-section=zlib option, which lets the linker compress
.debug_* sections using zlib.
Currently, lld uses compression level 9 which produces the smallest output
in exchange for a longer link time. My question is, is this what people
actually want? We didn...
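A quick way to get a feel for the size/time trade-off is to run representative data through the different zlib levels with Python's stdlib `zlib` module. The input below is a synthetic stand-in for `.debug_*` contents (repetitive mangled names plus some incompressible bytes), not real DWARF, so treat the numbers as illustrative only:

```python
import os
import time
import zlib

# Synthetic stand-in for .debug_* data: repetitive mangled names plus
# some incompressible bytes. Real DWARF will behave differently.
data = b"_ZN4llvm15SmallVectorImplIiE9push_backERKi" * 4000 + os.urandom(64 * 1024)

for level in (1, 6, 9):
    t0 = time.perf_counter()
    out = zlib.compress(data, level)
    dt = (time.perf_counter() - t0) * 1000
    print(f"level {level}: {len(data)} -> {len(out)} bytes in {dt:.1f} ms")
```

Level 6 (zlib's own default) typically lands close to level 9 in output size while taking noticeably less time, which is the crux of the question above.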
2013 Nov 22
0
[xen-unstable test] 22084: tolerable FAIL - PUSHED
flight 22084 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/22084/
Failures :-/ but no regressions.
Tests which did not succeed, but are not blocking:
test-amd64-amd64-xl-pcipt-intel 9 guest-start fail never pass
test-armhf-armhf-xl 5 xen-boot fail never pass
test-amd64-i386-xend-winxpsp3 16 leak-check/check
2018 Aug 02
2
Default compression level for -compress-debug-info=zlib?
...nded advantage for compressing the debug sections? - (i)
Improved link time through smaller IO / (ii) Improved Load / startup time
for the debugger / (iii) Smaller exe with debug info for distribution /
disk space?
For (i) and (ii), how much this is worth depends on the balance of storage
bandwidth to compression (i) / decompression (ii) bandwidth. For spinning
drives it *might* be a win, but for SATA SSDs and especially PCIe/NVMe SSDs it
could be a CPU bottleneck? Though we should also bear in mind that
compression can be pipelined with writes in i) and debug info loading could
be lazy in ii)
(e.g. for highly...
2003 May 10
3
benchmarking rsync's -z compression utility
Hi,
Is there a way in which rsync's -z compression (zlib) utility can be
benchmarked?
I'm trying to compare the compression ratio between rsync and external
compression tools like gzip and bzip2.
Are there any advantages to using rsync's internal compression mechanism
specified with the -z option compared to solely applying external
com...
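One way to approximate the comparison without wiring up rsync itself is to run the same stdlib codecs over a sample payload. rsync's -z uses zlib's deflate, the same algorithm gzip wraps, so `zlib` below stands in for both; substitute your own file's bytes for a realistic benchmark:

```python
import bz2
import zlib

# Sample payload; read a real file's bytes here to benchmark your own data.
data = b"rsync benchmark sample line with some repetition\n" * 2000

for name, compressed in [
    ("zlib/gzip (deflate, level 6)", zlib.compress(data, 6)),
    ("bzip2 (level 9)", bz2.compress(data, 9)),
]:
    print(f"{name}: {len(data)} -> {len(compressed)} bytes "
          f"(ratio {len(data) / len(compressed):.1f}x)")
```

Note that this only measures compression ratio; rsync's -z also interacts with its delta-transfer algorithm, which a file-at-a-time comparison like this cannot capture.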
2018 Aug 02
2
Default compression level for -compress-debug-info=zlib?
...ace? That way users who care about the
finer details can configure it themselves. And we should pick sensible
options for the default.
James
On 2 August 2018 at 11:08, Pavel Labath via llvm-dev <
llvm-dev at lists.llvm.org> wrote:
> I don't claim to be an expert, but I did some zlib compression
> benchmarks in the past. IIRC, my conclusion from that was that the
> "DEFAULT" zlib level (6) is indeed a very good default for a lot of
> cases -- it does not generate much larger outputs, while being
> significantly faster than the max level. This all depends on the data
&...
2018 Aug 02
3
Default compression level for -compress-debug-info=zlib?
...distribute executables
with debug info widely
> I think it is at least less important than (1).
Agreed.
> I think (1) is definitely the case, and that's also true for a
distributed build system with which a lot of object files are copied
between machines.
> My suggestion was to use compression level 9 when both -O2 and
-compress-debug-section=zlib are specified.
Ok great, I'm less concerned if it still requires an explicit
-compress-debug-section=zlib
even with -O2 (I thought you were proposing to add it to -O2).
Still for informational / advisory purposes it would be good for us to
pro...
2018 Aug 02
3
Default compression level for -compress-debug-info=zlib?
Not really. As well as some sensitivity to the input data, the overall
performance of the link with compression will depend on how this is
implemented in lld: how is it parallelized? How is it chunked? Is it
effectively pipelined with IO?
Or: I wouldn't feel comfortable making a recommendation to our
end-users on whether to use this option or not based on my existing
extensive benchmarking...
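To make the parallelization/chunking point concrete, here is a minimal sketch, not lld's actual implementation, that deflates fixed-size chunks on a thread pool. CPython's `zlib` releases the GIL during compression, so the chunks genuinely run in parallel, at the cost of losing cross-chunk matches; each chunk is a complete zlib stream, so finished chunks can also be written out (or decompressed) immediately, pipelining compression with IO:

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def compress_chunked(data: bytes, chunk_size: int = 1 << 20, level: int = 6):
    """Compress independent chunks in parallel. Each result is a complete
    zlib stream, so chunks can be written or decompressed as they finish."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda c: zlib.compress(c, level), chunks))

payload = b"some fairly repetitive section contents\n" * 100_000
parts = compress_chunked(payload)
# Round-trip: decompress each chunk independently and reassemble.
assert b"".join(zlib.decompress(p) for p in parts) == payload
```

The chunk size is the knob here: smaller chunks mean more parallelism and better pipelining but a worse ratio, since matches cannot cross chunk boundaries.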
2011 Sep 18
5
Inefficient storing of ISO images with compress=lzo
I've noticed that:
- with x86-64 Fedora 15 DVD install images:
  - du -sh <ROOT VOLUME> was 36 GB
  - btrfs df | grep -i data showed over 40 GB used
- without:
  - du -sh <ROOT VOLUME> is 34 GB
  - btrfs df | grep -i data showed less than 34 GB used
It seems that ISO files are considered compressible while they may not be (and the penalty is severe - 3x).
Regards
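As I understand it, btrfs applies a heuristic to skip data that does not compress well (and `compress-force` exists to override it). The sketch below is only an illustrative stand-in for such a check, not the kernel's logic: deflate a small sample at the cheapest level and skip the file when the sample barely shrinks:

```python
import os
import zlib

def looks_compressible(chunk: bytes, threshold: float = 0.9) -> bool:
    """Cheap pre-check (illustrative only): compress a small sample at
    the fastest level and skip compression when it barely shrinks."""
    sample = chunk[:4096]
    if not sample:
        return False
    return len(zlib.compress(sample, 1)) < threshold * len(sample)

print(looks_compressible(b"ABCD" * 2048))     # repetitive data -> True
print(looks_compressible(os.urandom(8192)))   # random, ISO-like data -> False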
2019 Jun 25
5
About rsync over SSH and compression
Rsync supports the capability of compressing data before sending. So does
OpenSSH. It would probably be a waste of resources and time to enable
both compression capabilities at the same time, but it is not clear to me
whether, in general, it makes better sense to enable rsync compression or
SSH compression.
My first thought would be that SSH compression might yield better results,
on the ground that SSH will try to cram as much data as possible in a
chann...
2013 Jan 28
2
Not respected repo priorities...
Ah ah, the demo effect... just after I said I did not have many issues with repos... ^_^
A colleague installed some packages (for Percona) and since then a server insists on replacing 2 base packages with 2 rfx packages, even when I gave a lower priority to rfx...
Installed: perl-IO-Compress-Base-2.020-127.el6.x86_64
Installed: 1:perl-Compress-Raw-Zlib-2.020-127.el6.x86_64
Installed:
2012 Apr 12
2
Details about compression and extents
Hello,
I'm currently trying to understand how compression in btrfs works. I
could not find any detailed description of it, so here are my
questions.
1. How is it decided what to compress and what not? After a quick test
with a 2 GB image file, I looked into the extents of that file with
find-new, and it turned out that only some of the first exte...
2007 Jul 05
17
ZFS Compression algorithms - Project Proposal
...ct. Of course,
this is open to change, since I just wrote down some ideas I had months
ago while researching the topic as a graduate student in Computer
Science, and since I'm not an opensolaris/ZFS expert at all. I would
really appreciate any suggestions or comments.
PROJECT PROPOSAL: ZFS Compression Algorithms.
The main purpose of this project is the development of new
compression schemes for the ZFS file system. We plan to start with
the development of a fast implementation of a Burrows-Wheeler
Transform (BWT) based algorithm. The BWT is an outstanding tool
and the currently known lossless compr...
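For readers unfamiliar with the transform, a toy BWT and its inverse fit in a few lines. The naive O(n^2 log n) rotation sort below is purely for illustration; a real implementation such as the one proposed would build on suffix arrays:

```python
def bwt(s: str) -> str:
    """Burrows-Wheeler Transform via sorted rotations (toy version)."""
    s += "\0"  # unique sentinel that sorts before every other character
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def ibwt(last: str) -> str:
    """Invert the transform by repeatedly prepending and re-sorting."""
    table = [""] * len(last)
    for _ in range(len(last)):
        table = sorted(last[i] + table[i] for i in range(len(last)))
    return next(row for row in table if row.endswith("\0"))[:-1]

print(ibwt(bwt("banana")))  # banana
```

The transform itself compresses nothing; its value is that it clusters identical characters together, which makes a following move-to-front and entropy-coding stage (as in bzip2) far more effective.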
2007 Jul 05
5
FLAC: getting compression level using metaflac
Why isn't the compression level added in a metadata block by the flac
encoder itself (just like the encoder version)? In this way all programs
that read the file can see what compression level was used.
thx
2007/7/4, Scot Thompson <scot.thompson@cox.net>:
>
> This has been asked many times. The answer is no....
2013 Mar 12
3
Flac compression levels?
"Using FLAC binary : /Users/Marcus/flac/test/../src/flac/flac
Original file size 441044 bytes.
Compression level 1, file size 421393 bytes.
Compression level 2, file size 421393 bytes.
Compression level 3, file size 373613 bytes.
Compression level 4, file size 369517 bytes.
Compression level 5, file size 369517 bytes.
Compression level 6, file size 369517 bytes.
Compression level 7, file siz...
2009 Jun 15
33
compression at zfs filesystem creation
Hi,
I just installed 2009.06 and found that compression isn't enabled by default when filesystems are created. Does it make sense to have an RFE open for this? (I'll open one tonight if need be.) We keep telling people to turn on compression. Are there any situations where turning on compression doesn't make sense, like rpool/...
2020 Sep 08
3
[PATCH 0/5] ZSTD compression support for OpenSSH
...ed to OpenBSD
> base, and I don't know if that would be accepted. Do you have any
> performance numbers for zstd in this application?
A keystroke here is 10 bytes of raw data, which zstd usually compresses
into 10 bytes while zlib manages to squeeze into 5 bytes. This leads
to a better compression ratio for zlib in ssh's accounting (visible in
verbose mode after the connection terminates). The data length that will be
transferred over the wire is the same for 5- and 10-byte payloads after the
crypto part (with padding and so on).
Regarding statistics, do you have anything specific in mind?
A...
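The 10-vs-5-byte behaviour comes from ssh keeping a single compression stream alive across packets, so later keystrokes can back-reference earlier ones. A stdlib sketch of that per-packet streaming pattern (an illustration, not OpenSSH's code; the payload is a hypothetical repeated keystroke echo):

```python
import zlib

comp = zlib.compressobj(6)
packets = [b"ls -la\r\n"] * 5  # hypothetical repeated keystroke payloads

for i, pkt in enumerate(packets):
    # Z_SYNC_FLUSH emits everything buffered so far without ending the
    # stream, which is what lets each packet be sent immediately.
    out = comp.compress(pkt) + comp.flush(zlib.Z_SYNC_FLUSH)
    print(f"packet {i}: {len(pkt)} raw -> {len(out)} on the wire")
```

After the first packet pays the stream-header cost, repeated payloads shrink to a few bytes because they become back-references into the shared history, something one-shot compression of each packet could never achieve.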
2013 May 07
4
[Bug 9864] New: Allow permanent compression of destination files
https://bugzilla.samba.org/show_bug.cgi?id=9864
Summary: Allow permanent compression of destination files
Product: rsync
Version: 3.1.0
Platform: All
OS/Version: All
Status: NEW
Severity: enhancement
Priority: P5
Component: core
AssignedTo: wayned at samba.org
ReportedBy: me at haravikk...
2003 Dec 11
2
read.spss question warning compression bias
Hello again
I have a file from SPSS in .sav format.
when I run
library(foreign)
cvar<-as.data.frame(read.spss("c:\\NDRI\\cvar\\data\\cvar2rev3.sav"))
I get a warning
Warning message:
c:\NDRI\cvar\data\cvar2rev3.sav: Compression bias (0) is not the usual
value of 100.
The data appear to be OK, but I am concerned.
(I tried searching the archives and the documentation for data import
and export, but saw nothing.)
Thanks as always
Peter
Peter L. Flom, PhD
Assistant Director, Statistics and Data Analysis Core
Center for D...