similar to: ZFS and free space

Displaying 20 results from an estimated 30000 matches similar to: "ZFS and free space"

2006 Jun 22
2
ZFS throttling - how does it work?
Hi zfs-discuss, I have some questions about throttling on ZFS. 1) I know that throttling activates while one sync is waiting for another. (http://blogs.sun.com/roller/page/roch?entry=the_dynamics_of_zfs) Is it possible to throttle only selected processes (e.g. nfsd)? 2) How can I obtain some statistics about it? I want to know how often throttling activates on my host, etc. 3) Is it
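
For question 2, a minimal sketch using the DTrace fbt provider, assuming the throttle point is a txg_delay() kernel function (the symbol name is an assumption and may differ by build; verify it on your system first):

    # count throttle events per process name; txg_delay is an assumed
    # symbol for the throttle entry point -- confirm it on your build
    dtrace -n 'fbt::txg_delay:entry { @[execname] = count(); }'
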
2006 Nov 03
27
# devices in raidz.
for s10u2, documentation recommends 3 to 9 devices in raidz. what is the basis for this recommendation? i assume it is performance and not failure resilience, but i am just guessing... [i know, recommendation was intended for people who know their raid cold, so it needed no further explanation] thanks... oz -- ozan s. yigit | oz at somanetworks.com | 416 977 1414 x 1540 I have a hard time
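
For illustration, a pool within that recommended width might be created like this (device names are invented):

    # 5-device raidz vdev, inside the 3-to-9 device guideline
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0
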
2012 Dec 20
3
Pool performance when nearly full
Hi, I know some of this has been discussed in the past but I can't quite find the exact information I'm seeking (and I'd check the ZFS wikis but the websites are down at the moment). Firstly, which is correct, free space shown by "zfs list" or by "zpool iostat"? zfs list: used 50.3 TB, free 13.7 TB, total = 64 TB, free = 21.4% zpool iostat: used
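
One likely source of the discrepancy, for what it's worth: on raidz pools, zpool list/iostat report raw space including parity, while zfs list reports usable space after parity. Comparing the two views (pool name assumed to be "tank"):

    # raw pool space, parity included
    zpool list tank
    # usable space as seen by datasets, parity excluded
    zfs list -o name,used,avail tank
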
2006 Jan 04
8
Using same ZFS under different kernel versions
I built two zfs filesystems using b29 (from brandz). I then re-installed solaris express b28, preserving the zfs filesystems. When I tried to "zpool import" my zfs filesystems I got a kernel panic: > debugging crash dump vmcore.0 (32-bit) from blackbird > operating system: 5.11 snv_28 (i86pc) > panic message: > ZFS: bad checksum (read on /dev/dsk/c1d0p0 off 24d5e000: zio
2012 Jan 07
14
zfs defragmentation via resilvering?
Hello all, I understand that relatively high fragmentation is inherent to ZFS due to its COW and possible intermixing of metadata and data blocks (of which metadata path blocks are likely to expire and get freed relatively quickly). I believe it was sometimes implied on this list that such fragmentation for "static" data can currently be combated only by zfs send-ing existing
2009 Dec 03
5
L2ARC in clusters
Hi, When deploying ZFS in a cluster environment it would be nice to be able to have some SSDs as local drives (not on SAN), so that when a pool switches over to the other node, ZFS would pick up that node's local disk drives as L2ARC. To better clarify what I mean, let's assume there is a 2-node cluster with 1x 2540 disk array. Now let's put 4x SSDs in each node (as internal/local drives). Now
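
For reference, attaching a node-local SSD as L2ARC is already a one-liner once the pool is imported; the open question above is doing it automatically on failover (device name invented):

    # add a local SSD as an L2ARC cache device to the imported pool
    zpool add tank cache c2t0d0
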
2007 Sep 18
3
ZFS and encryption
Hello zfs-discuss, I wonder if ZFS will be able to take any advantage of Niagara's built-in crypto? -- Best regards, Robert Milkowski mailto:rmilkowski at task.gda.pl http://milek.blogspot.com
2006 Oct 13
24
Self-tuning recordsize
Would it be worthwhile to implement heuristics to auto-tune 'recordsize', or would that not be worth the effort? -- Regards, Jeremy
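
Today the tuning is manual and per-dataset, e.g. matching a database page size (dataset name and size are illustrative):

    # match recordsize to an 8K database page size before loading data
    zfs set recordsize=8K tank/db
    zfs get recordsize tank/db
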
2007 Sep 14
3
space allocation vs. thin provisioning
Short question: I'm curious as to how ZFS manages space (free and used) and how its usage interacts with thin provisioning provided by HDS arrays. Is there any effort to minimize the number of provisioned disk blocks that receive writes, so as not to negate any space benefits that thin provisioning may give? Background & more detailed questions: In Jeff Bonwick's blog[1], he
2006 Mar 10
3
pool space reservation
What is a use case for setting a reservation on the base pool object? Say I have a pool of 3 100GB drives dynamically striped (pool size of 300GB), and I set the reservation to 200GB. I don't see any commands that let me ever reduce a pool's size, so how is the 200GB reservation used? Related question: is there a plan in the future to allow me to replace the 3 100GB drives with 2
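
For context, a reservation on the base pool object is a reservation on the pool's root dataset, e.g.:

    # guarantee 200GB of pool space to the root dataset and its descendants
    zfs set reservation=200G tank
    zfs get reservation tank
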
2007 Feb 05
6
snapdir visible recursively throughout a dataset
Is there an existing RFE for, what I'll wrongly call, "recursively visible snapshots"? That is, .zfs in directories other than the dataset root. Frankly, I don't need it available in all directories, although it'd be nice, but I do have a need for making it visible 1 dir down from the dataset root. The problem is that while ZFS and Zones work smoothly
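
The closest existing knob is the snapdir property, which only controls whether .zfs is visible (rather than merely reachable) at the dataset root, not in subdirectories (dataset name assumed):

    # expose the .zfs directory in listings at the dataset root
    zfs set snapdir=visible tank/home
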
2018 May 30
8
[PATCH v4 0/2] drm/nouveau: tegra: Detach from ARM DMA/IOMMU mapping
From: Thierry Reding <treding at nvidia.com> An unfortunate interaction between the 32-bit ARM DMA/IOMMU mapping code and Tegra SMMU driver changes to support IOMMU groups introduced a boot-time regression on Tegra124. This was caught very late because none of the standard configurations that are tested on Tegra enable the ARM DMA/IOMMU mapping code, since it is not needed. The reason for
2001 Dec 16
3
Arima
I did a regression with ARMA errors using arima0 with ari<-arima0(y,order=c(2,0,2),xreg=reg1,delta=-1) or ari<-arima0(y,order=c(2,0,2),xreg=reg1) where reg1 is the matrix of the regressors, and when I look at diag(ari$var.coef) I get negative terms. Do you know what this means? I tried changing transform.pars to 0 or 1 but this crashes R on Windows. Is it possible to test the significance
2006 Jun 12
3
zfs destroy - destroying a snapshot
Hello zfs-discuss, I'm writing a script to take snapshots automatically and destroy old ones. I think it would be great to add another option to zfs destroy so that only snapshots can be destroyed. Something like: zfs destroy -s SNAPSHOT so if something other than a snapshot is provided as an argument, zfs destroy wouldn't actually destroy it. That way it would
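
Until such an option exists, a wrapper script can approximate it by refusing anything that is not a snapshot name; a minimal sketch:

    #!/bin/sh
    # destroy_snap.sh -- only run zfs destroy if the argument names a snapshot
    case "$1" in
      *@*) zfs destroy "$1" ;;
      *)   echo "refusing to destroy non-snapshot: $1" >&2; exit 1 ;;
    esac
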
2010 Sep 09
37
resilver = defrag?
A) Resilver = Defrag. True/false? B) If I buy larger drives and resilver, does defrag happen? C) Does zfs send | zfs receive mean it will defrag? -- This message posted from opensolaris.org
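
On (C): send/receive rewrites every block on the receiving side, which is why it is often suggested as the practical "defrag" path; a sketch (pool and dataset names assumed):

    # rewrite a dataset's blocks by sending it to a new location
    zfs snapshot tank/data@migrate
    zfs send tank/data@migrate | zfs receive newpool/data
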
2006 Mar 30
39
Proposal: ZFS Hot Spare support
As mentioned last night, we've been reviewing a proposal for hot spare support in ZFS. Below you can find a current draft of the proposed interfaces. This has not yet been submitted for ARC review, but comments are welcome. Note that this does not include any enhanced FMA diagnosis to determine when a device is "faulted". This will come in a follow-on project, of which some
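
For reference, the spare syntax that eventually shipped looks like this (device names invented):

    # create a mirrored pool with one hot spare
    zpool create tank mirror c1t0d0 c1t1d0 spare c1t2d0
    # or add a spare to an existing pool
    zpool add tank spare c1t3d0
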
2018 May 30
4
[PATCH v3 0/2] drm/nouveau: tegra: Detach from ARM DMA/IOMMU mapping
From: Thierry Reding <treding at nvidia.com> An unfortunate interaction between the 32-bit ARM DMA/IOMMU mapping code and Tegra SMMU driver changes to support IOMMU groups introduced a boot-time regression on Tegra124. This was caught very late because none of the standard configurations that are tested on Tegra enable the ARM DMA/IOMMU mapping code, since it is not needed. The reason for
2008 Jan 15
4
Moving zfs to an iSCSI EqualLogic LUN
We have a mirror setup in ZFS that's 73GB (two internal disks on a Sun Fire V440). We are going to attach this system to an EqualLogic box, and will attach an iSCSI LUN of about 200GB from the EqualLogic box to the V440. The EqualLogic box is configured as a hardware RAID 50 (two hot spares for redundancy). My question is what's the best approach to moving the ZFS
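
One common approach, sketched under assumptions (device names invented; the resilver must complete before detaching): attach the iSCSI LUN as an extra side of the mirror, then retire the internal disks:

    # grow the mirror onto the iSCSI LUN, then retire the internal disks
    zpool attach tank c1t0d0 c3t0d0    # c3t0d0 = the iSCSI LUN (invented name)
    # wait until 'zpool status tank' shows the resilver is complete, then:
    zpool detach tank c1t0d0
    zpool detach tank c1t1d0
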
2008 May 14
2
vdev cache - comments in the source
Hello zfs-code, http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/vdev_cache.c

  72 * All i/os smaller than zfs_vdev_cache_max will be turned into
  73 * 1<<zfs_vdev_cache_bshift byte reads by the vdev_cache (aka software
  74 * track buffer). At most zfs_vdev_cache_size bytes will be kept in each
  75 * vdev's vdev_cache.

While it
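
On a live system those tunables can be inspected with mdb; a sketch (variable names taken from the quoted source; /D prints them as 32-bit decimal):

    # print the current vdev cache tunables from the running kernel
    echo "zfs_vdev_cache_max/D"    | mdb -k
    echo "zfs_vdev_cache_bshift/D" | mdb -k
    echo "zfs_vdev_cache_size/D"   | mdb -k
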
2008 Jan 18
7
how to relocate a disk
Hi, I'd like to move a disk from one controller to another. This disk is part of a mirror in a zfs pool. How can one do this without having to export/import the pool or reboot the system? I tried taking it offline and online again, but then zpool says the disk is unavailable. Trying a zpool replace didn't work because it complains that the "new" disk is part of a