Displaying 17 results from an estimated 17 matches for "pbzip2".
2015 Mar 02
2
[LLVMdev] clang change function name
Hi,
I compile a .cpp with cmd:
clang++ -emit-llvm -c -g -O0 -w pbzip2.cpp -o pbzip2.bc -lbz2
llvm-dis pbzip2.bc
One function in the .cpp is consumer_decompress. However, when I look
inside pbzip2.ll, the function name has been changed to "define i8*
@_Z19consumer_decompressPv(i8* %q) #0 {"
Why does clang add a "_Z19" prefix and a "Pv" suffix?
Thanks,
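The "_Z19…Pv" form is Itanium C++ ABI name mangling: "_Z" marks a mangled C++ name, "19" is the length of the identifier that follows, and "Pv" encodes the single "pointer to void" parameter. A minimal sketch of decoding just this one pattern (not a general demangler; real tools are c++filt and llvm-cxxfilt):

```python
# Minimal illustration of the Itanium C++ ABI mangling seen above.
# This is NOT a general demangler; it only handles the
# _Z<length><name>Pv shape of @_Z19consumer_decompressPv.
def demangle_simple(mangled):
    if not mangled.startswith("_Z"):
        return mangled          # unmangled (e.g. extern "C" symbols)
    rest = mangled[2:]
    # Leading digits give the length of the source-level identifier.
    i = 0
    while i < len(rest) and rest[i].isdigit():
        i += 1
    if i == 0:
        return mangled          # special names not handled in this sketch
    name_len = int(rest[:i])
    name = rest[i:i + name_len]
    params = rest[i + name_len:]
    # "Pv" encodes a single "pointer to void" parameter.
    args = "(void*)" if params == "Pv" else "(...)"
    return name + args

print(demangle_simple("_Z19consumer_decompressPv"))
# -> consumer_decompress(void*)
```

In practice, piping the symbol through `c++filt` gives the same answer without any custom code.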
2015 Jan 15
4
Request to speed up save()
Hi,
I am dealing with very large datasets and it takes a long time to save a
workspace image.
The options to save compressed data are: "gzip", "bzip2" or "xz", the
default being gzip. I wonder if it's possible to include the pbzip2
(http://compression.ca/pbzip2/) algorithm as an option when saving.
"PBZIP2 is a parallel implementation of the bzip2 block-sorting file
compressor that uses pthreads and achieves near-linear speedup on SMP
machines. The output of this version is fully compatible with bzip2
v1.0.2 or newe...
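The compatibility claim holds because pbzip2 compresses independent input blocks and concatenates the resulting bzip2 streams, which any bzip2 decompressor reads back to back. A minimal Python sketch of the same block-parallel idea (illustrative only, not pbzip2 itself):

```python
import bz2
from concurrent.futures import ThreadPoolExecutor

def parallel_bz2(data, block_size=900_000, workers=4):
    """Compress fixed-size blocks in parallel and concatenate the
    resulting bzip2 streams, mimicking pbzip2's output format.
    (bz2 releases the GIL during compression, so threads overlap.)"""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return b"".join(pool.map(bz2.compress, blocks))

data = b"example payload " * 100_000   # ~1.6 MB -> two blocks
compressed = parallel_bz2(data)
# A plain bzip2 reader recovers the original, because the output is
# just several ordinary bzip2 streams back to back:
assert bz2.decompress(compressed) == data
```

`bz2.decompress` accepts concatenated streams (Python 3.3+), which mirrors why stock bunzip2 can read pbzip2 output.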
2015 Mar 02
2
[LLVMdev] clang change function name
Got it, thanks. But my pass uses the function name to locate the
function. Can I disable mangling in clang?
Best,
Haopeng
On 3/1/15 10:44 PM, John Criswell wrote:
> On 3/1/15 11:38 PM, Haopeng Liu wrote:
>> Hi,
>>
>> I compile a .cpp with cmd:
>> clang++ -emit-llvm -c -g -O0 -w pbzip2.cpp -o pbzip2.bc -lbz2
>> llvm-dis pbzip2.bc
>>
>> One function in the .cpp is consumer_decompress. However, when I look
>> inside pbzip2.ll, the function name has been changed to "define i8*
>> @_Z19consumer_decompressPv(i8* %q) #0 {"
>>
>> Why does clang add a &...
2015 Jan 15
0
Request to speed up save()
...wrote:
>
> Hi,
>
> I am dealing with very large datasets and it takes a long time to save a workspace image.
>
> The options to save compressed data are: "gzip", "bzip2" or "xz", the default being gzip. I wonder if it's possible to include the pbzip2 (http://compression.ca/pbzip2/) algorithm as an option when saving.
>
> "PBZIP2 is a parallel implementation of the bzip2 block-sorting file compressor that uses pthreads and achieves near-linear speedup on SMP machines. The output of this version is fully compatible with bzip2 v1.0.2 o...
2010 Aug 05
3
[LLVMdev] a problem when using postDominatorTree
...] %bb8 {7,20}
> [3] %bb7 {8,9}
> [3] %bb2 {10,11}
> [3] %bb6 {12,13}
> [3] %bb5 {14,19}
> [4] %bb4 {15,16}
> [4] %bb3 {17,18}
> 0 opt 0x085643e8
> Stack dump:
> 0. Program arguments: opt
> -load=/home/a_i/llvm/llvm-2.7/Release/lib/ConsDumper.so -consdumper -f
> -o pbzip2_2s.bc pbzip2.bc -debug
> 1. Running pass 'dump constraints' on module 'pbzip2.bc'.
> Segmentation fault
> I have no idea what causes this. Does anyone know the reason?
Not yet. However I would love to find the reason.
Can you reproduce this with the development version o...
2009 Dec 04
30
ZFS send | verify | receive
If there were a "zfs send" datastream saved someplace, is there a way to
verify the integrity of that datastream without doing a "zfs receive" and
occupying all that disk space?
I am aware that "zfs send" is not a backup solution, due to its vulnerability
to even a single bit error, its lack of granularity, and other reasons.
However ... there is an attraction to "zfs send" as an augmentation to the
2010 Aug 05
0
[LLVMdev] a problem when using postDominatorTree
...> [3] %bb2 {10,11}
>> [3] %bb6 {12,13}
>> [3] %bb5 {14,19}
>> [4] %bb4 {15,16}
>> [4] %bb3 {17,18}
>> 0 opt 0x085643e8
>> Stack dump:
>> 0. Program arguments: opt
>> -load=/home/a_i/llvm/llvm-2.7/Release/lib/ConsDumper.so -consdumper -f
>> -o pbzip2_2s.bc pbzip2.bc -debug
>> 1. Running pass 'dump constraints' on module 'pbzip2.bc'.
>> Segmentation fault
>> I have no idea what causes this. Does anyone know the reason?
>
> Not yet. However I would love to find the reason.
>
> Can you reproduce this...
2010 Aug 05
1
[LLVMdev] a problem when using postDominatorTree
...3] %bb6 {12,13}
>>> [3] %bb5 {14,19}
>>> [4] %bb4 {15,16}
>>> [4] %bb3 {17,18}
>>> 0 opt 0x085643e8
>>> Stack dump:
>>> 0. Program arguments: opt
>>> -load=/home/a_i/llvm/llvm-2.7/Release/lib/ConsDumper.so -consdumper -f
>>> -o pbzip2_2s.bc pbzip2.bc -debug
>>> 1. Running pass 'dump constraints' on module 'pbzip2.bc'.
>>> Segmentation fault
>>> I have no idea what causes this. Does anyone know the reason?
>>>
>> Not yet. However I would love to find the reason.
&g...
2013 Apr 10
3
Logging SIP connection status for review
Is anyone using something to log SIP results (connected/not, latency) that
they really like? We do some logging using simple scripts writing the
results of sip show peers to a text file if customers report issues, but it
would be nice to have a tool that logs all the time and lets us do some
better reporting. For example, graphs of latency in a time range, or a
list of unreachable phones within
2010 Aug 05
0
[LLVMdev] a problem when using postDominatorTree
...ry {5,6}
[2] %bb8 {7,20}
[3] %bb7 {8,9}
[3] %bb2 {10,11}
[3] %bb6 {12,13}
[3] %bb5 {14,19}
[4] %bb4 {15,16}
[4] %bb3 {17,18}
0 opt 0x085643e8
Stack dump:
0. Program arguments: opt -load=/home/a_i/llvm/llvm-2.7/Release/lib/ConsDumper.so -consdumper -f -o pbzip2_2s.bc pbzip2.bc -debug
1. Running pass 'dump constraints' on module 'pbzip2.bc'.
Segmentation fault
I have no idea what causes this. Does anyone know the reason?
Thanks a lot~
Regards,
--Wenbin
2016 Dec 15
1
Parallel compression support for saving to rds/rdata files?
Hi,
I have tried to follow the instructions in the ``save`` documentation and
it doesn't seem to work (see below):
mydata <- do.call(rbind, rep(iris, 10000))
con <- pipe("pigz -p8 > fname.gz", "wb");
save(mydata, file = con); close(con) # This runs
R.utils::gunzip("fname.gz", "fname.RData", overwrite = TRUE)
load("fname.RData") #
2012 May 24
0
Btrfs and more compression algorithms
...er Btrfs-devs that I've forgotten,
is there a chance we'll see xz file-compression support in Btrfs
anytime soon?
I'm sure folks have been waiting for additional compression support
besides gzip and lzo (bzip2 seems out of the question due to its slowness;
there's pbzip2, but that's not included in the kernel).
This would be a really nice bonus, with processors getting faster
and SSD usage more and more widespread - add an efficient
implementation and
we would have a fast, extremely efficient and feature-rich filesystem.
My current situation is t...
2009 Jan 26
1
Backup methods for an Oracle DB
...ing disk-based backup. I've tried scp'ing the files
directly to my backup server, but the operation is too long (120 min).
I tried generating a tar.gz directly to my backup server via SSH. A
decent 40 minutes, using mgzip (multi-threaded gzip), 70 Gigs. 120
minutes for a tar.bz2, using pbzip2 (parallel bzip2) (54 Gigs). A tar
sent directly to my backup server is quite huge (318 Gigs). It is then
taken to tape on the regular nightly backup.
My concerns are:
- Time needed to perform backup (downtime).
- Time needed to do a recovery.
For the backup, sending a tar.gz directly to the...
2010 Nov 06
4
obtaining non-packaged software
I have been using Fedora on my home desktop for close to a year, and
I am happy with it; nevertheless I am considering switching to a
slower-moving distro.
CentOS + EPEL put together have fewer packages than Fedora. Moreover,
RPM Fusion has fewer packages for EL than for Fedora. I am wondering
how I can install on my PC applications for which packages do not
exist in one of the above-mentioned
2012 Oct 05
24
Building an On-Site and Off-Site ZFS server, replication question
Good morning.
I am in the process of planning a system which will have 2 ZFS servers, one
on site, one off site. The on site server will be used by workstations and
servers in house, and most of that will stay in house. There will, however,
be data i want backed up somewhere else, which is where the offsite server
comes in... This server will be sitting in a Data Center and will have some
storage
2010 Jul 19
22
zfs send to remote any ideas for a faster way than ssh?
I've tried ssh blowfish and scp arcfour. Both are CPU-limited long before the 10g link is.
I've also tried mbuffer, but I get broken-pipe errors part way through the transfer.
I'm open to ideas for faster ways to either zfs send directly or send through a compressed file of the zfs send output.
For the moment I:
zfs send > pigz
scp (arcfour) the gz file to the
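The CPU bottleneck in these setups is usually the cipher, not the wire, which is why people fall back to compressor-plus-plain-TCP (pigz piped into netcat or mbuffer) inside a trusted network. A loopback-only Python sketch of that shape, with zlib standing in for pigz and a raw socket standing in for netcat (everything here is illustrative, not ZFS tooling, and carries no encryption):

```python
import socket, threading, zlib

def send_compressed(data, host, port, chunk=64 * 1024):
    """Stream data through a zlib compressor into a plain TCP socket
    (pigz and netcat would play these roles on a real 10G link)."""
    comp = zlib.compressobj()
    with socket.create_connection((host, port)) as s:
        for i in range(0, len(data), chunk):
            s.sendall(comp.compress(data[i:i + chunk]))
        s.sendall(comp.flush())

def recv_decompressed(srv):
    """Accept one connection on an already-listening socket and
    decompress everything received."""
    dec = zlib.decompressobj()
    conn, _ = srv.accept()
    out = bytearray()
    with conn:
        while chunk := conn.recv(64 * 1024):
            out += dec.decompress(chunk)
    out += dec.flush()
    return bytes(out)

# Loopback demo standing in for "zfs send | pigz | nc receiver":
srv = socket.create_server(("127.0.0.1", 0))   # port 0 picks a free port
port = srv.getsockname()[1]
payload = b"zfs stream bytes " * 100_000
got = []
t = threading.Thread(target=lambda: got.append(recv_decompressed(srv)))
t.start()
send_compressed(payload, "127.0.0.1", port)
t.join()
srv.close()
assert got[0] == payload
```

Because the channel is unencrypted, this shape only makes sense on a trusted private link.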
2011 Feb 16
2
RE: [PATCH V2 0/3] drivers/staging: zcache: dynamic page cache/swap compression
...;> applied (first 2 patch(sets)) from list)
> >>>>
> >>>>
> >>>> in my case I tried to extract / play back a 1.7 GiB tarball of my
> >>>> portage-directory (lots of small files and some tar.bzip2
> archives)
> >>>> via pbzip2 or 7z when the error happened and the message was shown
> >>>>
> >>>> Due to KMS sound (webradio streaming) was still running but I
> couldn't
> >>>> continue work (X switching to kernel output) so I did the magic
> sysrq
> >>>...