search for: lzop

Displaying 20 results from an estimated 44 matches for "lzop".

2009 Dec 04
30
ZFS send | verify | receive
If there were a "zfs send" datastream saved someplace, is there a way to verify the integrity of that datastream without doing a "zfs receive" and occupying all that disk space? I am aware that "zfs send" is not a backup solution, due to vulnerability of even a single bit error, and lack of granularity, and other reasons. However ... There is an attraction to "zfs send" as an augmentation to the
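The question above can in fact be answered without a full receive. A minimal sketch, assuming a reasonably recent OpenZFS (the tool is `zstream`; older releases shipped it as `zstreamdump`) and a hypothetical stream file path:

```shell
# Walk the saved stream and verify each record's checksum without
# creating any datasets; corrupt records make the tool error out:
zstream dump -v < /backup/stream.zfs > /dev/null \
  && echo "stream looks intact"

# A dry-run receive (-n) also checks the stream against the target
# pool without writing anything; the target name is hypothetical:
zfs receive -n -v rpool/verify < /backup/stream.zfs
```

Note this only proves the stream is internally consistent; it does not change the fundamental fragility of `zfs send` as a backup format that the poster mentions.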
2009 May 26
4
Oops on a converted ext4 system
I converted an ext4 filesystem with btrfs-convert, mounted it, and wanted to do "lzop -d ...". The result was an immediate Oops (btrfs is on LVM, on dm-crypt, on /dev/sdb which is USB-connected). mini-904.img.lzo dentry_open failed BUG: unable to handle kernel paging request at ffffffcd IP: [<c01b5f36>] fput+0x6/0x30 *pde = 00575067 *pte = 00000000 Oops: 0002 [#1] SMP...
2020 Nov 17
2
Best practice preparing for disk restoring system
...ezilla, write into their archive directories. It's impressive. > > If you zero out all free space on all of your HDD partitions (dd bs=1M if=/dev/zero of=/path/deleteme; rm /path/deleteme) or use 'fstrim' for SSD's, you could use dd to image with fast & light compression (lzop or my current favorite, pzstd) and get maximum benefit of a bit-by-bit archival copy. > > > On 11/16/20 11:02 PM, H wrote: >> Short of backing up entire disks using dd, I'd like to collect all required information to make sure I can restore partitions, disk information, UUIDs and...
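The dd-plus-compressor pipeline described above looks roughly like this; device and file names are hypothetical, lzop/pzstd must be installed, and this is a sketch rather than a tested recipe for your disks:

```shell
# Zero free space first so the image compresses well (HDDs;
# use fstrim on the mounted filesystem for SSDs instead):
dd bs=1M if=/dev/zero of=/mnt/part/deleteme; rm /mnt/part/deleteme

# Image the partition through a fast, light compressor:
dd bs=1M if=/dev/sdb1 | lzop -1c > /backup/sdb1.img.lzo
# or: dd bs=1M if=/dev/sdb1 | pzstd -o /backup/sdb1.img.zst

# Restore is the same pipe reversed:
lzop -dc /backup/sdb1.img.lzo | dd bs=1M of=/dev/sdb1
```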
2004 Mar 09
1
rsync compression options
hi ! is the "only" compression in rsync zlib compression ? i would like to rsync huge files, which compress very well - i wondered how i could use lzop as a compression option. unfortunately, rsync has no option "--use-compress-program=/path/to/bzip2,lzop,compress,whatever....", like gnu tar has. is this just a missing feature which just has to be done, or would that just be impossible to implement due to "architecture" of rsy...
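For the record, later rsync releases address this: since 3.2.0 the codec is negotiable via `--compress-choice` (zstd, lz4, zlibx, zlib), though an arbitrary external program still cannot be plugged in. A hedged sketch, with host and paths hypothetical:

```shell
# Modern rsync (>= 3.2.0): pick a cheaper codec in-protocol:
rsync -a --compress --compress-choice=zstd hugefiles/ remotehost:/dest/

# Older rsync: pre-compress in a pipe instead, gnu-tar style
# (loses rsync's delta transfer, but uses lzop as asked):
tar cf - hugefiles | lzop -c | ssh remotehost 'lzop -dc | tar xf - -C /dest'
```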
2009 Aug 03
1
DO NOT REPLY [Bug 6603] New: Improve --skip-compress default values
...fault list of suffixes that will not be compressed is this (several of these are newly added for 3.0.0): gz/zip/z/rpm/deb/iso/bz2/t[gb]z/7z/mp[34]/mov/avi/ogg/jpg/jpeg SUGGESTION Please also add these compressor program extensions to the default list: *.lzo lzop(1) *.rzip rzip(1) *.lzma lzma(1) *.rar rar compressed *.ace ace compressed Archive can also be encrypted for backup purposes, so if possible exclude also these extensions by default: *.tar.gz.gpg *.tar.bz2.gpg *.tar.lzma.gpg *.tar.lzop.gpg -- Configure bugmail: https://bug...
2009 Mar 27
7
is zpool export/import | faster than rsync or cp
I need to move data from one zpool to another, lock stock and barrel, Being from linux background my instinct was to use rsync. But then I remembered seeing the `export/import options in man zpool.. And I've seen mention of them here too, but didn't pay attention since I'd noticed no need yet. Now I'm wondering if the export/import sub commands might not be
2020 Nov 17
2
Best practice preparing for disk restoring system
Short of backing up entire disks using dd, I'd like to collect all required information to make sure I can restore partitions, disk information, UUIDs and anything else required in the event of losing a disk. So far I am collecting information from: - fdisk -l - blkid - lsblk - grub2-efi.cfg - grub - fstab Hoping that this would supply me with /all/ information to restore a system - with the
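One way to script that collection (paths and device names are assumptions; run as root and keep the output somewhere off the disks it describes). `sfdisk -d` is worth adding to the poster's list because its dump can be replayed to recreate the partition table:

```shell
mkdir -p /root/restore-info
fdisk -l > /root/restore-info/fdisk.txt
blkid    > /root/restore-info/blkid.txt
lsblk -o NAME,SIZE,TYPE,FSTYPE,UUID,MOUNTPOINT > /root/restore-info/lsblk.txt
cp /etc/fstab /etc/default/grub /root/restore-info/ 2>/dev/null
# Replayable partition-table dump; restore later with:
#   sfdisk /dev/sda < /root/restore-info/sda.sfdisk
sfdisk -d /dev/sda > /root/restore-info/sda.sfdisk
```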
2011 Jan 25
7
flush-btrfs-1 hangs when building openwrt
Hi, Since update to 2.6.37 I can't build openwrt on my btrfs buildroot anymore. I'm not sure if this is related to the other flush-btrfs-1 thread. plenty of diskspace is free: /dev/mapper/cruor-build 97G 68G 27G 73% /opt/build It always hangs when openwrt builds the ext4 image and runs tune2fs on it.
2016 Mar 16
1
Re: Improving supermin appliance startup time (lkvm/qboot)
...than libguestfs though. Another thought: why does guestfish need to boot the appliance more than once? Could virsh save/restore or managedsave/stop/start be used? The guestfs appliance seems to be around 80MB saved [*] (perhaps ballooning could help shrink this, or it could be compressed with lz4/lzop). [*] I copied the XML and changed some things in it, cause the original failed to save with: error: Requested operation is not valid: domain is marked for auto destroy Best regards, -- Edwin Török | Co-founder and Lead Developer Skylable open-source object storage: reliable, fast, secure http:...
2006 Feb 08
2
RSYNC via pipe/socket ?
...t use an existing remote-shell channel... ---------- so, this gives me some hope that this _could_ be possible somehow. unfortunately i don't have a clue, how to write the mentioned wrapper script. i think of something like this: rsync -a -dontConnect2RemoteHostButPipeToSTDOUT ./localdata2sync | lzop -c | netcat remothost 12345 where on remotehosts netcat listens to port 12345 and passes the data to lzop and then to rsync..... otoh, thinking about this, i'm not sure if a pipe could work, because rsync does bidirectional communication with the "remote" end. is something like this...
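rsync can already speak over a bare TCP socket in daemon mode, which avoids hand-rolling a netcat wrapper and sidesteps the bidirectional-pipe problem the poster worries about. A sketch; the module name, port, and rsyncd.conf contents are assumptions:

```shell
# on remotehost, with /etc/rsyncd.conf defining a module "data" -> /srv/data:
rsync --daemon --port=12345 --config=/etc/rsyncd.conf

# locally, talk the daemon protocol straight over TCP; -z gives zlib
# compression in-protocol (not lzop, but no ssh channel is needed):
rsync -az ./localdata2sync/ rsync://remotehost:12345/data/
```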
2012 Aug 14
7
[PATCH 0/7] Add tar compress, numericowner, excludes flags.
https://bugzilla.redhat.com/show_bug.cgi?id=847880 https://bugzilla.redhat.com/show_bug.cgi?id=847881 This patch series adds various optional arguments to the tar-in and tar-out commands. Firstly (1/7) an optional "compress" flag is added to select compression. This makes the calls tgz-in/tgz-out/txz-in/txz-out deprecated, and expands the range of compression types available.
2010 Jun 15
3
about rsyncing of block devices
..."s"} else {print "c" . $_}' dev1 | perl -ne 'BEGIN{$/=\1} if ($_ eq"s") {$s++} else {if ($s) { seek STDOUT,$s*1024,1; $s=0}; read ARGV,$buf,1024; print $buf}' 1<> dev2 And if dev2 is on a remote host, run the 1st and last perl over ssh (add some lzop or gzip compression to save bandwidth if need be): ssh remote " perl -'MDigest::MD5 md5' -ne 'BEGIN{\$/=\1024};print md5(\$_)' dev2 | lzop -c" | lzop -dc | perl -'MDigest::MD5 md5' -ne 'BEGIN{$/=\1024};$b=md5($_); read STDIN,$a,16;if ($a eq $b) {print &...
2020 Nov 17
0
Best practice preparing for disk restoring system
...h as clonezilla, write into their archive directories. It's impressive. If you zero out all free space on all of your HDD partitions (dd bs=1M if=/dev/zero of=/path/deleteme; rm /path/deleteme) or use 'fstrim' for SSD's, you could use dd to image with fast & light compression (lzop or my current favorite, pzstd) and get maximum benefit of a bit-by-bit archival copy. On 11/16/20 11:02 PM, H wrote: > Short of backing up entire disks using dd, I'd like to collect all required information to make sure I can restore partitions, disk information, UUIDs and anything else re...
2020 Nov 18
0
Best practice preparing for disk restoring system
...heir archive directories. It's impressive. >> >> If you zero out all free space on all of your HDD partitions (dd >bs=1M if=/dev/zero of=/path/deleteme; rm /path/deleteme) or use >'fstrim' for SSD's, you could use dd to image with fast & light >compression (lzop or my current favorite, pzstd) and get maximum >benefit of a bit-by-bit archival copy. >> >> >> On 11/16/20 11:02 PM, H wrote: >>> Short of backing up entire disks using dd, I'd like to collect all >required information to make sure I can restore partitions, dis...
2009 Aug 23
23
incremental backup with zfs to file
FULL backup to a file zfs snapshot -r rpool@0908 zfs send -Rv rpool@0908 > /net/remote/rpool/snaps/rpool.0908 INCREMENTAL backup to a file zfs snapshot -i rpool@0908 rpool@090822 zfs send -Rv rpool@090822 > /net/remote/rpool/snaps/rpool.090822 As I understand the latter gives a file with changes between 0908 and 090822. Is this correct? How do I restore those files? I know
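To answer the restore question with a sketch (flags per the zfs-receive man page; whether `-F` is appropriate depends on the target pool's state): replay the full stream first, then the incremental on top of the snapshot it is based on:

```shell
# full stream; -F rolls the target back if needed, -d keeps the
# stream's dataset names, -u leaves the datasets unmounted:
zfs receive -Fdu rpool < /net/remote/rpool/snaps/rpool.0908
# incremental; it only applies if rpool@0908 still exists on the target:
zfs receive -du rpool < /net/remote/rpool/snaps/rpool.090822
```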
2020 Nov 18
2
Best practice preparing for disk restoring system
...ve directories. It's impressive. >>> If you zero out all free space on all of your HDD partitions (dd >> bs=1M if=/dev/zero of=/path/deleteme; rm /path/deleteme) or use >> 'fstrim' for SSD's, you could use dd to image with fast & light >> compression (lzop or my current favorite, pzstd) and get maximum >> benefit of a bit-by-bit archival copy. >>> >>> On 11/16/20 11:02 PM, H wrote: >>>> Short of backing up entire disks using dd, I'd like to collect all >> required information to make sure I can restore pa...
2002 Feb 09
1
Rsync -> TAR
If you're planning to rsync it over, tar it up, and delete the directory tree, you should just tar|gzip it on the work system and catch that in a file on the other end. Tim Conway tim.conway@philips.com 303.682.4917 Philips Semiconductor - Longmont TC 1880 Industrial Circle, Suite D Longmont, CO 80501 Available via SameTime Connect within Philips, n9hmg on AIM perl -e 'print
2012 Aug 30
2
[PATCH v2] daemon: collect list of called external commands
...--- a/daemon/compress.c +++ b/daemon/compress.c @@ -27,6 +27,12 @@ #include "daemon.h" #include "actions.h" +GUESTFSD_EXT_CMD(str_compress, compress); +GUESTFSD_EXT_CMD(str_gzip, gzip); +GUESTFSD_EXT_CMD(str_bzip2, bzip2); +GUESTFSD_EXT_CMD(str_xz, xz); +GUESTFSD_EXT_CMD(str_lzop, lzop); + /* Has one FileOut parameter. */ static int do_compressX_out (const char *file, const char *filter, int is_device) @@ -118,15 +124,15 @@ get_filter (const char *ctype, int level, char *ret, size_t n) reply_with_error ("compress: cannot use optional level parameter with this...
2010 Jul 19
22
zfs send to remote any ideas for a faster way than ssh?
I've tried ssh blowfish and scp arcfour. both are CPU limited long before the 10g link is. I've also tried mbuffer, but I get broken pipe errors part way through the transfer. I'm open to ideas for faster ways to either zfs send directly or through a compressed file of the zfs send output. For the moment I; zfs send > pigz scp arcfour the file gz file to the
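One commonly suggested shape for this, as a sketch (hosts, ports, and pool names are hypothetical, and the link must be trusted since nothing is encrypted): move the bytes with mbuffer, whose large buffer also tends to cure exactly the broken-pipe stalls described above, and spend spare CPU on a cheap codec like lzop:

```shell
# on the receiver:
mbuffer -s 128k -m 1G -I 9090 | lzop -dc | zfs receive -Fdu tank

# on the sender:
zfs send -R rpool@snap | lzop -c | mbuffer -s 128k -m 1G -O receiver:9090
```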
2012 Aug 30
1
[PATCH] collect list of called external commands
...0644 --- a/daemon/compress.c +++ b/daemon/compress.c @@ -27,6 +27,12 @@ #include "daemon.h" #include "actions.h" +GUESTFS_EXT_CMD(str_compress, compress); +GUESTFS_EXT_CMD(str_gzip, gzip); +GUESTFS_EXT_CMD(str_bzip2, bzip2); +GUESTFS_EXT_CMD(str_xz, xz); +GUESTFS_EXT_CMD(str_lzop, lzop); + /* Has one FileOut parameter. */ static int do_compressX_out (const char *file, const char *filter, int is_device) @@ -118,15 +124,15 @@ get_filter (const char *ctype, int level, char *ret, size_t n) reply_with_error ("compress: cannot use optional level parameter with this...