similar to: zfs diff performance

Displaying 20 results from an estimated 2000 matches similar to: "zfs diff performance"

2010 Jan 27
13
zfs destroy hangs machine if snapshot exists - workaround found
Hi, I was suffering for weeks from the following problem: a zfs dataset contained an automatic snapshot (monthly) that used 2.8 TB of data. The dataset was deprecated, so I chose to destroy it after I had deleted some files; eventually it was completely blank besides the snapshot that still locked 2.8 TB on the pool. 'zfs destroy -r pool/dataset' hung the machine within seconds
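The workaround itself is cut off in the excerpt, but one plausible sequence consistent with the description (pool, dataset, and snapshot names are placeholders) is to release the snapshot's space before removing the dataset:

    zfs list -t snapshot -r pool/dataset    # find the snapshot pinning the 2.8 TB
    zfs destroy pool/dataset@monthly        # destroy the snapshot on its own first
    zfs destroy pool/dataset                # then remove the now-empty dataset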
2011 Jan 28
8
ZFS Dedup question
I created a zfs pool with dedup with the following settings: zpool create data c8t1d0; zfs create data/shared; zfs set dedup=on data/shared. The thing I was wondering about is that ZFS seems to dedup only at the file level and not the block level. When I make multiple copies of a file to the store I see an increase in the dedup ratio, but when I copy similar files the ratio stays at 1.00x. -- This
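For what it's worth, ZFS dedup does operate at the block level (on recordsize-sized blocks), but blocks only match when their contents are byte-identical, so similar-but-not-identical files rarely share any blocks. A minimal sketch reproducing the observation, reusing the poster's pool and dataset names:

    zpool create data c8t1d0
    zfs create data/shared
    zfs set dedup=on data/shared
    cp big.iso /data/shared/a && cp big.iso /data/shared/b   # identical copies dedup
    zpool list data                                          # DEDUP column climbs above 1.00x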
2011 Aug 10
9
zfs destroy snapshot takes hours.
Hi, I am facing an issue with zfs destroy: it takes almost 3 hours to delete a snapshot of size 150G. Could you please help me resolve this issue and explain why zfs destroy takes this much time? Taking a snapshot, by contrast, completes within a few seconds. I have tried removing an old snapshot first, but the problem is the same. =========================== I am using : Release : OpenSolaris
2010 Aug 03
2
When is the L2ARC refreshed if on a separate drive?
I'm running a mirrored pair of 2 TB SATA drives as my data storage drives on my home workstation, a Core i7-based machine with 10 GB of RAM. I recently added a sandforce-based 60 GB SSD (OCZ Vertex 2, NOT the pro version) as an L2ARC to the single mirrored pair. I'm running B134, with ZFS pool version 22, with dedup enabled. If I understand correctly, the dedup table should be in
2008 Jun 24
1
zfs primarycache and secondarycache properties
Moved from PSARC to zfs-code...this discussion is separate from the case. Eric kustarz wrote: > > On Jun 23, 2008, at 1:20 PM, Darren Reed wrote: > >> eric kustarz wrote: >>> >>> On Jun 23, 2008, at 1:07 PM, Darren Reed wrote: >>> >>>> Tim Haley wrote: >>>>> .... >>>>> primarycache=all | none | metadata
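For context, these two per-dataset properties control what ZFS may cache in RAM (primarycache) and on an L2ARC device (secondarycache); each accepts all, none, or metadata. A small illustration with a hypothetical dataset name:

    zfs set primarycache=metadata tank/scratch    # keep only metadata in the ARC
    zfs set secondarycache=all tank/scratch       # let everything spill to the L2ARC
    zfs get primarycache,secondarycache tank/scratch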
2010 Feb 18
3
improve metadata performance
We have a SunFire X4500 running Solaris 10U5 which does about 5-8k nfs ops, of which about 90% are metadata. In hindsight it would have been significantly better to use a mirrored configuration, but we opted for 4 x (9+2) raidz2 at the time. We cannot take the downtime necessary to change the zpool configuration. We need to improve the metadata performance with little to no money. Does anyone
2010 Jul 21
5
slog/L2ARC on a hard drive and not SSD?
Hi, Out of pure curiosity, I was wondering what would happen if one tried to use a regular 7200 RPM (or 10K) drive as slog or L2ARC (or both)? I know these roles are designed with SSDs in mind, and I know it's possible to use anything you want as cache. So would ZFS benefit from it? Would it be the same? Would it slow down? I guess it would slow things down, because it would be trying to
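Mechanically nothing prevents it; ZFS accepts any block device for either role. A sketch with placeholder device names:

    zpool add tank log c2t0d0      # slog on a spinning disk
    zpool add tank cache c2t1d0    # L2ARC on a spinning disk

Whether it helps comes down to whether the extra device's latency beats the main pool's; for a lone 7200 RPM drive fronting a multi-disk pool, it generally will not.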
2010 Feb 08
17
ZFS ZIL + L2ARC SSD Setup
I have some questions about the choice of SSDs to use for ZIL and L2ARC. I'm trying to build an OpenSolaris iSCSI SAN out of a whitebox system, which is intended to be used as a backup SAN during storage migration, so it's built on a tight budget. The system currently has 4GB RAM, 3GHz Core2-Quad and 8x 500GB WD REII SATA HDDs attached to an Areca 8port ARC-1220 controller
2010 Sep 14
9
dedicated ZIL/L2ARC
We are looking into the possibility of adding dedicated ZIL and/or L2ARC devices to our pool. We are looking at getting 4 x 32GB Intel X25-E SSD drives. Would this be a good solution for slow write speeds? We are currently sharing out different slices of the pool to Windows servers using COMSTAR and fibre channel. We are currently getting around 300MB/sec performance with 70-100% disk busy.
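If the four drives were used this way, one common arrangement is a mirrored log pair plus two independent cache devices (device names are placeholders):

    zpool add tank log mirror c3t0d0 c3t1d0    # mirrored slog; protects in-flight sync writes
    zpool add tank cache c3t2d0 c3t3d0         # two L2ARC devices (cache vdevs are not mirrored)

Worth noting: a slog only accelerates synchronous writes, so whether it helps a 300MB/sec bulk-write bottleneck depends on how much of that load is sync.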
2010 Dec 08
5
very slow boot: stuck at mounting zfs filesystems
Hello list, I'm having trouble with a server holding a lot of data. After a few months of uptime, it is currently rebooting from a lockup (reason unknown so far) but it is taking hours to boot up again. The boot process is stuck at the stage where it says: mounting zfs filesystems (1/5). The machine responds to pings and keystrokes. I can see disk activity; the disk LEDs blink one after
2010 Jul 21
5
L2ARC and ZIL on same SSD?
Are there any drawbacks to partitioning an SSD into two parts and using L2ARC on one partition and ZIL on the other? Any thoughts?
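It is certainly possible; a sketch using Solaris-style slices (slice names are placeholders):

    zpool add tank log c4t0d0s0      # small slice for the ZIL
    zpool add tank cache c4t0d0s1    # the rest as L2ARC

The usual caveat is that ZIL commits and L2ARC fills then contend for one device's write channel, so each role sees less than the drive's full throughput.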
2011 Feb 12
1
existing performance data for on-disk dedup?
Hello. I am looking to see if performance data exists for on-disk dedup. I am currently in the process of setting up some tests based on input from Roch, but before I get started, thought I'd ask here. Thanks for the help, Janice
2010 Jul 02
14
NexentaStor 3.0.3 vs OpenSolaris - Patches more up to date?
I see in NexentaStor's announcement of Community Edition 3.0.3 they mention some backported patches in this release. Aside from their management features / UI, what is the core OS difference if we move to Nexenta from OpenSolaris b134? These dedup bugs are my main frustration - if a staff member does a rm * in a directory with dedup you can take down the whole storage server - all with
2019 Apr 24
1
Was: Re: Are linux distros redundant?, is zfs
Benjamin Smith wrote: > On Wednesday, April 24, 2019 11:25:00 AM PDT Andrew Holway wrote: > >>> Btw, right now, we've just built a new server as Ubuntu, because my >>> manager wants to use it to test zfs, including its ability to a) act >>> as a RAID, directly, without an underlying RAID, and b) encrypt the >>> whole thing natively.
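The two abilities being tested correspond to standard commands in ZFS on Linux 0.8 and later; a sketch under that assumption, with placeholder device and dataset names:

    zpool create tank raidz2 sdb sdc sdd sde    # ZFS acting as the RAID layer directly
    zfs create -o encryption=on -o keyformat=passphrase tank/secure    # native at-rest encryption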
2012 Dec 20
3
Pool performance when nearly full
Hi I know some of this has been discussed in the past but I can't quite find the exact information I'm seeking (and I'd check the ZFS wikis but the websites are down at the moment). Firstly, which is correct, free space shown by "zfs list" or by "zpool iostat" ? zfs list: used 50.3 TB, free 13.7 TB, total = 64 TB, free = 21.4% zpool iostat: used
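The short answer is that the two measure different things: zpool iostat (and zpool list) report raw pool capacity including raidz parity, while zfs list reports space usable by datasets after parity and reservations, so for "how full am I" the zfs list figures are the relevant ones. The excerpt's arithmetic checks out on that basis: 13.7 TB free of 64 TB usable is 13.7 / 64 ≈ 21.4%.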
2011 Jun 30
1
cross platform (freebsd) zfs pool replication
Hi, I have two servers running: FreeBSD with a zpool v28 and a Nexenta (OpenSolaris b134) running zpool v26. Replication (with zfs send/receive) from the Nexenta box to the FreeBSD box works fine, but I have a problem accessing my replicated volume. When I type and tab-autocomplete the command cd /remotepool/us (for /remotepool/users), I get a panic. check the panic @
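For reference, the replication pipeline described would typically look like this (host, pool, and snapshot names are placeholders; this sketches the setup, not a diagnosis of the panic):

    zfs snapshot -r tank@repl-1
    zfs send -R tank@repl-1 | ssh freebsd-host zfs receive -Fdu remotepool

Receiving a stream from an older pool version (v26) on a newer one (v28) is the supported direction.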
2010 May 10
1
Daily snapshots as replacement for incremental backups
Hello, I have a situation where a zfs file server holding lots of graphic files cannot be backed up daily with a full backup. My idea was initially to run a full backup on Sunday through the LTO library onto dedicated tapes, then have an incremental backup run on daily tapes. Brainstorming on this led me to the idea that I could actually stop thinking about incremental backups (that may always
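The snapshot-based scheme maps naturally onto incremental send streams (dataset, snapshot, and path names are hypothetical):

    zfs snapshot tank/graphics@sun                          # the Sunday full
    zfs send tank/graphics@sun > /tape/full.zfs
    zfs snapshot tank/graphics@mon                          # each weekday
    zfs send -i @sun tank/graphics@mon > /tape/mon-incr.zfs

Each -i stream carries only the blocks changed since the previous snapshot, which is exactly the role of an incremental tape.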
2012 Jul 30
10
encfs on top of zfs
Dear ZFS-Users, I want to switch to ZFS, but still want to encrypt my data. Native encryption for ZFS was added in "ZFS Pool Version Number 30 <http://en.wikipedia.org/wiki/ZFS#Release_history>", but I'm using ZFS on FreeBSD with version 28. My question is how would encfs (FUSE encryption) affect ZFS-specific features like data integrity and deduplication? Regards
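Roughly: encfs sits above ZFS as a FUSE layer, so ZFS only ever sees ciphertext. A sketch (paths are hypothetical):

    zfs create tank/encrypted
    encfs /tank/encrypted /home/user/clear    # ciphertext lives in the ZFS dataset

ZFS checksumming still protects the stored ciphertext end to end, but deduplication is largely defeated, since with encfs's default per-file IVs identical plaintext blocks encrypt to different ciphertext.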
2011 Jul 13
4
How about 4KB disk sectors?
So, what is the story about 4KB disk sectors? Should such disks be avoided with ZFS? Or, no problem? Or, need to modify some config file before usage?
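The issue is alignment: ZFS fixes a vdev's sector size as ashift at creation time (9 for 512-byte sectors, 12 for 4KB), and a 4KB drive that reports 512-byte sectors can end up with ashift=9 and degraded write performance. On platforms whose zpool supports the property (ZFS on Linux, FreeBSD), a hedged sketch with a placeholder device name:

    zpool create -o ashift=12 tank da0    # force 4KB alignment at creation
    zdb -C tank | grep ashift             # verify what the vdev recorded

On OpenSolaris-era builds there was no such override, hence the question about config-file workarounds.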
2010 Mar 18
2
lazy zfs destroy
OK I have a very large zfs snapshot I want to destroy. When I do this, the system nearly freezes during the zfs destroy. This is a Sun Fire X4600 with 128GB of memory. Now this may be more of a function of the IO device, but let's say I don't care that this zfs destroy finishes quickly. I actually don't care, as long as it finishes before I run out of disk space. So a
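What the poster is asking for was later implemented as the async_destroy pool feature, which frees a destroyed dataset's blocks in the background instead of in one long, system-hogging transaction. On a feature-flags release (not the build in this thread, and with a hypothetical pool name), it looks like:

    zpool get feature@async_destroy tank    # shows "enabled" or "active"
    zfs destroy -r tank/huge                # returns quickly; space drains over time
    zpool get freeing tank                  # how much space is still being reclaimed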