
Displaying 20 results from an estimated 60000 matches similar to: "Slow death-spiral with zfs gzip-9 compression"

2009 Dec 04
30
ZFS send | verify | receive
If there were a "zfs send" datastream saved someplace, is there a way to verify the integrity of that datastream without doing a "zfs receive" and occupying all that disk space? I am aware that "zfs send" is not a backup solution, due to its vulnerability to even a single bit error, its lack of granularity, and other reasons. However ... There is an attraction to "zfs send" as an augmentation to the
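One partial answer discussed in threads like this is to pipe the saved stream through zstreamdump, which walks the stream records and validates their checksums without receiving anything into a pool. A hedged sketch, assuming zstreamdump is present on the build in use; the stream file path is hypothetical:

  # Validate the record checksums of a saved send stream without
  # doing a "zfs receive" (stream path is hypothetical)
  zstreamdump -v < /backup/tank-20091204.zstream > /dev/null \
      && echo "stream checksums verified"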
2009 Jun 15
33
compression at zfs filesystem creation
Hi, I just installed 2009.06 and found that compression isn't enabled by default when filesystems are created. Does it make sense to have an RFE open for this? (I'll open one tonight if need be.) We keep telling people to turn on compression. Are there any situations where turning on compression doesn't make sense, like rpool/swap? What about rpool/dump? Thanks, ~~sa
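For readers landing here, the property can be set at filesystem creation time or inherited from a parent dataset; swap and dump zvols are the usual places where compression is left off. A minimal sketch with hypothetical dataset names:

  # Enable compression on the pool root so new filesystems inherit it
  zfs set compression=on rpool
  # Or enable it only for one filesystem at creation time
  zfs create -o compression=on rpool/export/data
  # swap and dump are typically left alone; check what the installer chose
  zfs get compression rpool/swap rpool/dump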
2009 Sep 24
5
Checksum property change does not change pre-existing data - right?
My understanding is that if I run "zfs set checksum=<different>" to change the algorithm, this will change the checksum algorithm for all FUTURE data blocks written, but will not in any way change the checksum for previously written data blocks. I need to corroborate this understanding. Could someone please point me to a document that states this? I have searched and searched
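That understanding matches the documented behaviour: the property applies only to blocks written after the change, and existing blocks keep their old checksums until they are rewritten. A hedged illustration with hypothetical dataset and file names:

  # Only blocks written after this command use the new algorithm
  zfs set checksum=sha256 tank/data
  # Previously written blocks keep their original checksums until the
  # data is rewritten, for example by copying it or by send/receive
  cp /tank/data/report.db /tank/data/report.db.rechecksummed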
2008 Feb 15
38
Performance with Sun StorageTek 2540
Under Solaris 10 on a 4 core Sun Ultra 40 with 20GB RAM, I am setting up a Sun StorageTek 2540 with 12 300GB 15K RPM SAS drives and connected via load-shared 4Gbit FC links. This week I have tried many different configurations, using firmware managed RAID, ZFS managed RAID, and with the controller cache enabled or disabled. My objective is to obtain the best single-file write performance.
2009 Dec 27
7
How to destroy your system in a funny way with ZFS
Hi all, I installed another OpenSolaris (snv_129) in VirtualBox 3.1.0 on Windows because snv_130 doesn't boot anymore after installation of the VirtualBox guest additions. Older builds before snv_129 were running fine too. I like some features of this OS, but now I've ended up with something funny. I installed default snv_129, installed guest additions -> reboot, set
2007 Apr 18
33
LZO compression?
Hi, I don't know if this has been discussed before, but have you thought about adding LZO compression to ZFS? One zfs-fuse user has provided a patch which implements LZO compression, and he claims better compression ratios *and* better speed than lzjb. The miniLZO library is licensed under the GPL, but the author specifically says that other licenses are available by request. Has this
2009 Aug 23
3
zfs send/receive and compression
Is there a mechanism by which you can perform a zfs send | zfs receive and not have the data uncompressed and recompressed at the other end? I have a gzip-9 compressed filesystem that I want to back up to a remote system and would prefer not to have to recompress everything again at such great computational expense. If this doesn't exist, how would one go about creating an RFE for
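At the time this was asked there was no such mechanism: the stream carried uncompressed records and the receiver recompressed them. Modern OpenZFS adds a compressed-send option; a hedged sketch assuming a current OpenZFS build, with hypothetical dataset and host names:

  # -c sends blocks as stored on disk, so gzip-9 data is neither
  # decompressed on the sending side nor recompressed on receipt
  zfs send -c tank/archive@backup | ssh backuphost zfs receive -u backup/archive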
2009 Jun 10
13
Apple Removes Nearly All Reference To ZFS
http://hardware.slashdot.org/story/09/06/09/2336223/Apple-Removes-Nearly-All-Reference-To-ZFS
2010 Apr 29
39
Best practice for full system backup - equivalent of ufsdump/ufsrestore
I'm looking for a way to back up my entire system, the rpool zfs pool, to an external HDD so that it can be recovered in full if the internal HDD fails. Previously with Solaris 10 using UFS I would use ufsdump and ufsrestore, which worked so well that I was very confident in it. Now ZFS doesn't have an exact replacement for this, so I need to find a best practice to replace it.
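The pattern usually suggested in these threads is a recursive snapshot plus "zfs send -R" to a file or an external pool. A minimal sketch with hypothetical snapshot and path names; note that restoring a bootable root pool additionally requires recreating the pool and reinstalling the boot blocks, which is not shown here:

  # Recursive snapshot of the root pool and all its children
  zfs snapshot -r rpool@fullbackup
  # Stream all filesystems, properties and snapshots to the external disk
  zfs send -R rpool@fullbackup > /mnt/external/rpool.fullbackup.zfs
  # Later, restore into a newly created pool:
  #   zfs receive -Fd rpool < /mnt/external/rpool.fullbackup.zfs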
2010 Apr 10
41
Secure delete?
Hi all, Is it possible to securely delete a file from a zfs dataset/zpool once it's been snapshotted, meaning "delete (and perhaps overwrite) all copies of this file"? Best regards roy -- Roy Sigurd Karlsbakk (+47) 97542685 roy at karlsbakk.net http://blogg.karlsbakk.net/ -- In all pedagogy it is essential that the curriculum be presented intelligibly. That is an elementary imperative
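There is no per-file secure delete in ZFS: blocks stay referenced by every snapshot taken while the file existed, and even freed blocks are not overwritten. A hedged sketch of what removal actually involves, with hypothetical dataset, file and snapshot names:

  # Removing the file only affects the live filesystem
  rm /tank/data/secret.txt
  # Every snapshot that still references the old blocks must be destroyed
  zfs list -t snapshot -r tank/data
  zfs destroy tank/data@2010-04-01
  # Even then the freed blocks are not scrubbed; overwriting free space or
  # whole-disk encryption is the usual answer for true secure delete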
2009 Aug 23
23
incremental backup with zfs to file
FULL backup to a file zfs snapshot -r rpool@0908 zfs send -Rv rpool@0908 > /net/remote/rpool/snaps/rpool.0908 INCREMENTAL backup to a file zfs snapshot -i rpool@0908 rpool@090822 zfs send -Rv rpool@090822 > /net/remote/rpool/snaps/rpool.090822 As I understand the latter gives a file with changes between 0908 and 090822. Is this correct? How do I restore those files? I know
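The incremental step is normally written with "zfs send -i" (or -I) between the two snapshots rather than "zfs snapshot -i", and restoring means receiving the full stream first and the incremental stream on top of it. A hedged sketch using the file paths from the post and a hypothetical target pool name:

  # Receive the full stream into a scratch/backup pool first
  zfs receive -Fd backup < /net/remote/rpool/snaps/rpool.0908
  # Then apply the incremental stream on top of it
  zfs receive -d backup < /net/remote/rpool/snaps/rpool.090822
  # Individual files can then be copied back out of the received
  # filesystems or their .zfs/snapshot directories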
2010 Jan 19
5
ZFS/NFS/LDOM performance issues
[Cross-posting to ldoms-discuss] We are occasionally seeing massive time-to-completions for I/O requests on ZFS file systems on a Sun T5220 attached to a Sun StorageTek 2540 and a Sun J4200, and using a SSD drive as a ZIL device. Primary access to this system is via NFS, and with NFS COMMITs blocking until the request has been sent to disk, performance has been deplorable. The NFS server is a
2010 Jan 28
16
Large scale ZFS deployments out there (>200 disks)
While thinking about ZFS as the next-generation filesystem without limits, I am wondering if the real world is ready for this kind of incredible technology ... I'm actually speaking of hardware :) ZFS can handle a lot of devices. Once the import bug (http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6761786) is fixed it should be able to handle a lot of disks. I want to
2010 Jun 25
13
OCZ Vertex 2 Pro performance numbers
Now the test for the Vertex 2 Pro. This was fun. For more explanation please see the thread "Crucial RealSSD C300 and cache flush?" This time I made sure the device is attached via 3GBit SATA. This is also only a short test; I'll retest after some weeks of usage. Cache enabled, 32 buffers, 64k blocks: linear write, random data: 96 MB/s; linear read, random data: 206 MB/s; linear
2007 May 02
41
gzip compression throttles system?
I just had a quick play with gzip compression on a filesystem and the result was the machine grinding to a halt while copying some large (.wav) files to it from another filesystem in the same pool. The system became very unresponsive, taking several seconds to echo keystrokes. The box is a maxed out AMD QuadFX, so it should have plenty of grunt for this. Comments? Ian
2009 Jul 24
6
When writing to SLOG at full speed all disk IO is blocked
Hello all... I'm seeing this behaviour in an old build (89), and I just want to hear from you whether there is some known bug about it. I'm aware of the "picket fencing" problem, and that ZFS does not always choose correctly whether writing to the slog is better or not (considering whether we would get better throughput from the disks). But I did not find anything about 100% slog activity (~115MB/s) blocks
2008 Jul 31
9
Terrible zfs performance under NFS load
Hello, We have an S10U5 server using ZFS to share out NFS exports. While using an NFS mount as the syslog log destination for 20 or so busy mail servers, we have noticed that throughput becomes severely degraded after a short while. I have tried disabling the ZIL and turning off cache flushing, and I have not seen any changes in performance. The servers are only pushing about 1MB/s of constant
2012 Jan 07
14
zfs defragmentation via resilvering?
Hello all, I understand that relatively high fragmentation is inherent to ZFS due to its COW and possible intermixing of metadata and data blocks (of which metadata path blocks are likely to expire and get freed relatively quickly). I believe it was sometimes implied on this list that such fragmentation of "static" data can currently be combated only by zfs send-ing existing
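The rewrite-by-send approach alluded to here amounts to copying a dataset within (or across) pools so that its blocks are laid down again sequentially. A hedged sketch with hypothetical dataset names:

  # Snapshot the fragmented dataset and receive it as a new dataset;
  # the received copy is written out fresh, which re-lays-out its blocks
  zfs snapshot tank/data@rewrite
  zfs send tank/data@rewrite | zfs receive tank/data.rewritten
  # After verifying the copy, destroy the original and rename the new one:
  #   zfs destroy -r tank/data && zfs rename tank/data.rewritten tank/data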
2009 Dec 16
27
zfs hanging during reads
Hi, I hope there's someone here who can possibly provide some assistance. I've had this read problem now for the past 2 months and just can't get to the bottom of it. I have a home snv_111b server, with a zfs raid pool (4 x Samsung 750GB SATA drives). The motherboard is a ASUS M2N68-CM (4 SATA ports) with an Athlon LE1620 single core CPU and 4GB of RAM. I am using it
2011 Oct 04
6
zvol space consumption vs ashift, metadata packing
I sent a zvol from host a to host b, twice. Host b has two pools, one ashift=9, one ashift=12. I sent the zvol to each of the pools on b. The original source pool is ashift=9, and an old revision (2009_06, because it's still running xen). I sent it twice because something strange happened on the first send, to the ashift=12 pool. "zfs list -o space" showed figures at