Displaying 20 results from an estimated 2000 matches similar to: "zpool create -f not applicable to hot spares"
2007 Sep 18
3
ZFS and encryption
Hello zfs-discuss,
I wonder if ZFS will be able to take any advantage of Niagara's
built-in crypto?
--
Best regards,
Robert Milkowski mailto:rmilkowski at task.gda.pl
http://milek.blogspot.com
2007 Apr 24
2
software RAID vs. HW RAID - part III
Hello zfs-discuss,
http://milek.blogspot.com/2007/04/hw-raid-vs-zfs-software-raid-part-iii.html
--
Best regards,
Robert Milkowski mailto:rmilkowski at task.gda.pl
http://milek.blogspot.com
2007 Apr 22
7
slow sync on zfs
Hello zfs-discuss,
There is relatively low traffic to the pool, but sync takes too long to
complete and other operations are also not that fast.
Disks are on 3510 array. zil_disable=1.
bash-3.00# ptime sync
real 1:21.569
user 0.001
sys 0.027
During sync zpool iostat and vmstat look like:
f3-1 504G 720G 370 859 995K 10.2M
misc 20.6M 52.0G 0 0
2006 Jul 15
2
zvol of files for Oracle?
Hello zfs-discuss,
What would you rather propose for ZFS+Oracle from a performance
standpoint - zvols or just plain files?
--
Best regards,
Robert mailto:rmilkowski at task.gda.pl
http://milek.blogspot.com
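To make the comparison concrete, here is a minimal sketch of the two layouts, assuming a hypothetical pool named tank and an 8 KB database block size; it is only an illustration, not a tuning recommendation:
# zfs create -V 32g tank/oradata_vol           (zvol: block device under /dev/zvol/dsk/tank/)
# zfs create -o recordsize=8k tank/oradata     (filesystem: recordsize matched to the db block size)
# zfs set mountpoint=/u01/oradata tank/oradata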
2007 Jan 09
2
ZFS Hot Spare Behavior
I physically removed a disk (c3t8d0 used by ZFS 'pool01') from a 3310 JBOD connected to a V210 running s10u3 (11/06) and 'zpool status' reported this:
# zpool status
pool: pool01
state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action: Attach the
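For reference, a minimal sketch of the usual follow-up once the disk is reseated or swapped; the device names below reuse the ones from the post, and the replacement disk name is hypothetical:
# zpool status -x                              (list pools that are not healthy)
# zpool online pool01 c3t8d0                   (after reinserting the same disk)
# zpool replace pool01 c3t8d0 c3t9d0           (or swap in a different disk)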
2006 Jun 12
3
zfs destroy - destroying a snapshot
Hello zfs-discuss,
I'm writing a script to take snapshots automatically and destroy old
ones. I think it would be great to add another option to zfs destroy
so that only snapshots can be destroyed. Something like:
zfs destroy -s SNAPSHOT
so that if something other than a snapshot is provided as an argument,
zfs destroy wouldn't actually destroy it.
That way it would
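Until such an option exists, a minimal sketch of a guard the script can apply on its own (the script name is hypothetical); it relies on the fact that only snapshot names contain an '@', so anything else is refused:
#!/bin/sh
# destroy-snap.sh -- only ever pass snapshot names to zfs destroy
target="$1"
case "$target" in
*@*) zfs destroy "$target" ;;
*)   echo "refusing to destroy non-snapshot: $target" >&2
     exit 1 ;;
esac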
2006 May 16
3
ZFS snv_b39 and S10U2
Hello zfs-discuss,
Just to be sure - if I create ZFS filesystems on snv_39 and then
later simply want to import that pool on S10U2, can I safely
assume it will just work (i.e., nothing was added or changed in the
on-disk format in the last few snv releases that is not going to be
in U2)?
I want to put some data on ZFS right now (I have to do it now) and
later I want to
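A minimal sketch of how to check this before committing the data (pool name hypothetical): compare the pool's on-disk version against what the target release supports, then export and import as usual.
# zpool upgrade -v          (on-disk versions supported by the running release)
# zpool upgrade             (reports pools not yet at the latest version)
# zpool export tank         (on snv_39, before moving the storage)
# zpool import tank         (on S10U2)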
2006 Aug 17
7
in-kernel gzip compression
Hello zfs-discuss,
Is someone actually working on it? Or on any other algorithms?
Any dates?
--
Best regards,
Robert mailto:rmilkowski at task.gda.pl
http://milek.blogspot.com
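For what it is worth, once gzip support did land in later builds, enabling it became a one-property change; a minimal sketch, assuming a hypothetical dataset and a release that has the feature:
# zfs set compression=gzip tank/data              (or gzip-1 ... gzip-9 to trade CPU for ratio)
# zfs get compression,compressratio tank/data     (check the effect)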
2009 Aug 04
2
flowadm -i 1 - shows only first flow
Hi,
OSOL, b118
> milek@r600:~# flowadm show-flow
> FLOW        LINK    IPADDR   PROTO  PORT   DSFLD
> local_25    iwh0    --       tcp    25     --
> local_22    iwh0    --       tcp    22     --
> milek@r600:~# flowadm show-flow -s -i 1
> FLOW        IPACKETS RBYTES  IERRORS
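For context, a sketch of how flows like the two shown above are typically created (the link name is taken from the post, the attribute values are an assumption); the complaint in this thread is that the interval output then stops after the first flow:
# flowadm add-flow -l iwh0 -a transport=tcp,local_port=25 local_25
# flowadm add-flow -l iwh0 -a transport=tcp,local_port=22 local_22
# flowadm show-flow -s -i 1                    (per-second statistics, expected for every flow)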
2007 Dec 12
1
6604198 - single thread for compression
Hello zfs-discuss,
http://sunsolve.sun.com/search/document.do?assetkey=1-1-6604198-1
Is there a patch for S10? I thought it had been fixed.
--
Best regards,
Robert mailto:rmilkowski at task.gda.pl
http://milek.blogspot.com
2008 May 14
2
vdev cache - comments in the source
Hello zfs-code,
http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/vdev_cache.c
* All i/os smaller than zfs_vdev_cache_max will be turned into
* 1<<zfs_vdev_cache_bshift byte reads by the vdev_cache (aka software
* track buffer). At most zfs_vdev_cache_size bytes will be kept in each
* vdev's vdev_cache.
While it
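A minimal sketch of how to inspect the tunables that comment refers to on a live system, and how they are usually overridden; the values in the annotations are the defaults I would expect for that era, so treat them as assumptions:
# echo 'zfs_vdev_cache_max/D' | mdb -k         (reads below this size get inflated ...)
# echo 'zfs_vdev_cache_bshift/D' | mdb -k      (... to 1<<bshift bytes, e.g. 1<<16 = 64K)
# echo 'zfs_vdev_cache_size/D' | mdb -k        (per-vdev cache limit)
A persistent override goes into /etc/system, e.g.:
set zfs:zfs_vdev_cache_max=16384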
2008 Apr 29
0
zpool attach vs. zpool iostat
Hello zfs-discuss,
S10U4+patches, SPARC
If I attach a disk to a vdev in a pool to get a mirrored configuration,
then during the resilver zpool iostat 1 reports only reads being
done from the pool and basically no writes. If I do zpool iostat -v 1
I can see it is writing to the new device; however, at the pool and
mirror/vdev level it is still reporting only reads.
If during resilvering reads
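A minimal sketch of the sequence being described, with hypothetical pool and device names; the per-device write counters only show up with -v:
# zpool attach tank c0t0d0 c0t1d0              (turn the single disk into a mirror)
# zpool iostat tank 1                          (pool level: mostly reads during resilver)
# zpool iostat -v tank 1                       (per vdev: writes to the newly attached disk)
# zpool status tank                            (resilver progress)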
2006 Mar 30
39
Proposal: ZFS Hot Spare support
As mentioned last night, we've been reviewing a proposal for hot spare
support in ZFS. Below you can find a current draft of the proposed
interfaces. This has not yet been submitted for ARC review, but
comments are welcome. Note that this does not include any enhanced FMA
diagnosis to determine when a device is "faulted". This will come in a
follow-on project, of which some
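The interfaces that eventually shipped look roughly like the sketch below (device names hypothetical): spares are listed at pool creation or added later, and are swapped in manually with zpool replace:
# zpool create tank mirror c1t0d0 c1t1d0 spare c1t2d0
# zpool add tank spare c1t3d0                  (add another spare later)
# zpool replace tank c1t1d0 c1t2d0             (activate a spare in place of a failing disk)
# zpool remove tank c1t3d0                     (unused spares can be removed again)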
2009 Dec 03
5
L2ARC in clusters
Hi,
When deploying ZFS in a cluster environment it would be nice to be able
to have some SSDs as local drives (not on the SAN), so that when a pool switches
over to the other node ZFS would pick up that node's local disk drives as
L2ARC.
To clarify what I mean, let's assume there is a 2-node cluster with
1x 2540 disk array.
Now let's put 4x SSDs in each node (as internal/local drives). Now
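For reference, the manual equivalent of what is being asked for here, i.e. what an administrator (or a cluster agent script) would run after a failover, with hypothetical local SSD names on the importing node:
# zpool import tank                            (after the resource group moves to this node)
# zpool add tank cache c1t0d0 c1t1d0           (attach this node's local SSDs as L2ARC)
# zpool remove tank c1t0d0 c1t1d0              (drop them again before switching back)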
2006 Nov 03
27
# devices in raidz.
for s10u2, documentation recommends 3 to 9 devices in raidz. what is the
basis for this recommendation? i assume it is performance and not failure
resilience, but i am just guessing... [i know, recommendation was intended
for people who know their raid cold, so it needed no further explanation]
thanks... oz
--
ozan s. yigit | oz at somanetworks.com | 416 977 1414 x 1540
I have a hard time
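For illustration only, two layouts that stay inside the recommended 3-to-9-device range (device names hypothetical): a single 5-disk raidz, or ten disks split into two 5-disk raidz vdevs rather than one wide one.
# zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0
# zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
                    raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0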
2005 Dec 22
9
truncating aggregation output only
Hello dtrace-discuss,
Sometimes I want to run a script for some time and every n seconds
output the top N entries. trunc() isn't suitable here as it also removes
keys/values. I want this because, over time, if I use sum(), entries which
are normally truncated could actually have made it to the top.
Maybe a printa() extension, something like printa(@b[10]), to output the
top 10?
--
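A minimal sketch of the pattern being discussed (the probe and aggregation are only an example): every 10 seconds it truncates to the top 10 and prints them, which is exactly where the complaint applies, because the dropped keys lose their accumulated sum() and cannot climb back later:
# dtrace -n '
    syscall::write:entry { @bytes[execname] = sum(arg2); }
    tick-10sec           { trunc(@bytes, 10); printa(@bytes); }'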
2007 Feb 03
4
Which label a ZFS/ZPOOL device has ? VTOC or EFI ?
Hi All,
The zpool/zfs commands write an EFI label on a device if we create a zpool/ZFS filesystem on it. Is that true?
I formatted a device with a VTOC label and created a ZFS file system on it.
Which label does the device have now - the old VTOC or EFI?
After creating the ZFS file system on the VTOC-labeled disk, I am seeing the following warning messages.
Feb 3 07:47:00 scoobyb
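A sketch of what usually decides this (device names hypothetical): handing zpool a whole disk makes ZFS write an EFI label, while handing it a slice leaves the existing VTOC label in place; the label can be inspected afterwards.
# zpool create tank c2t1d0                     (whole disk: ZFS relabels it with EFI)
# zpool create tank c2t1d0s0                   (a slice: the existing VTOC label is kept)
# prtvtoc /dev/rdsk/c2t1d0s2                   (inspect the partition table / label afterwards)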
2007 Apr 19
14
Experience with Promise Tech. arrays/JBODs?
Greetings,
In looking for inexpensive JBOD and/or RAID solutions to use with ZFS, I've
run across the recent "VTrak" SAS/SATA systems from Promise Technologies,
specifically their E-class and J-class series:
E310f FC-connected RAID:
http://www.promise.com/product/product_detail_eng.asp?product_id=175
E310s SAS-connected RAID:
2007 Feb 05
6
snapdir visible recursively throughout a dataset
Is there an existing RFE for what I'll wrongly call "recursively visible snapshots"? That is, .zfs in directories other than the dataset root.
Frankly, I don't need it available in all directories, although it'd be nice, but I do have a need for making it visible 1 dir down from the dataset root. The problem is that while ZFS and Zones work smoothly
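For reference, today the .zfs directory can only be toggled per dataset at its root; a minimal sketch with a hypothetical dataset, showing exactly the limitation the RFE is about:
# zfs set snapdir=visible tank/export          (unhide .zfs at the dataset root)
# ls /tank/export/.zfs/snapshot                (works at the root ...)
# ls /tank/export/home/.zfs                    (... but not one directory down - hence the RFE)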
2007 Sep 08
1
zpool degraded status after resilver completed
I am curious why zpool status reports a pool to be in the DEGRADED state
after a drive in a raidz2 vdev has been successfully replaced. In this
particular case drive c0t6d0 was failing, so I ran:
zpool offline home c0t6d0
zpool replace home c0t6d0 c8t1d0
and after the resilvering finished the pool reports a degraded state.
Hopefully this is incorrect. At this point is the vdev in question
now has
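A sketch of the usual next steps while this is being investigated (pool and device names as in the post); whether clearing is appropriate depends on what status actually reports:
# zpool status -v home                         (see which device the DEGRADED state points at)
# zpool clear home                             (reset error counters once the hardware is trusted)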