Displaying 20 results from an estimated 1000 matches similar to: "ZFS and encryption"
2007 Apr 24
2
software RAID vs. HW RAID - part III
Hello zfs-discuss,
http://milek.blogspot.com/2007/04/hw-raid-vs-zfs-software-raid-part-iii.html
--
Best regards,
Robert Milkowski mailto:rmilkowski at task.gda.pl
http://milek.blogspot.com
2007 Sep 17
2
zpool create -f not applicable to hot spares
Hello zfs-discuss,
If you do 'zpool create -f test A B C spare D E' and D or E contains a
UFS filesystem, then despite -f the zpool command will complain that
there is a UFS file system on D.
Workaround: create a test pool with -f on D and E, destroy it, and
then create the first pool with D and E as hot spares.
I've tested it on s10u3 + patches - can someone confirm
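A sketch of the described workaround, using the device names from the example above (the scratch pool name is arbitrary):
  # wipe the old UFS labels by forcing a throwaway pool onto D and E
  zpool create -f scratch D E
  zpool destroy scratch
  # now the real pool accepts D and E as hot spares
  zpool create -f test A B C spare D E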
2006 Jul 15
2
zvol of files for Oracle?
Hello zfs-discuss,
What would you rather propose for ZFS + Oracle from a performance
standpoint - zvols or just files?
--
Best regards,
Robert mailto:rmilkowski at task.gda.pl
http://milek.blogspot.com
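For reference, the two layouts being compared could be set up roughly as below; the pool name, sizes and the 8k recordsize (matching a common Oracle block size) are illustrative assumptions, not part of the question above.
  # option 1: a zvol, presented to Oracle as a raw device under /dev/zvol
  zfs create -V 100G tank/oravol01
  # option 2: a filesystem holding ordinary datafiles
  zfs create -o recordsize=8k tank/oradata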
2006 May 16
3
ZFS snv_b39 and S10U2
Hello zfs-discuss,
Just to be sure - if I create ZFS filesystems on snv_39 and later just
want to import that pool on S10U2, can I safely assume it will just
work (I mean, nothing was added or changed in the on-disk format in the
last few snv releases that is not going to be in U2)?
I want to put some data on ZFS right now (I have to do it now) and
later I want to
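The message is cut off here. One way to reduce the risk before such a move is to compare on-disk version support on both hosts; the commands below are non-destructive (zpool upgrade with no pool argument only reports):
  # on each host: which on-disk versions this ZFS build supports
  zpool upgrade -v
  # list pools whose on-disk format differs from the current version
  zpool upgrade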
2006 Jun 12
3
zfs destroy - destroying a snapshot
Hello zfs-discuss,
I'm writing a script to take snapshots automatically and destroy old
ones. I think it would be great to add another option to zfs destroy
so that only snapshots can be destroyed. Something like:
zfs destroy -s SNAPSHOT
so if something other than a snapshot is provided as an argument,
zfs destroy wouldn't actually destroy it.
That way it would
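Until something like that exists, a wrapper can check the dataset type itself before destroying; a minimal sketch (the target name is passed as the first argument):
  #!/bin/sh
  # refuse to destroy anything that is not a snapshot
  snap="$1"
  type=`zfs get -H -o value type "$snap"` || exit 1
  if [ "$type" = "snapshot" ]; then
          zfs destroy "$snap"
  else
          echo "$snap is not a snapshot, not destroying it" >&2
          exit 1
  fi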
2007 Apr 22
7
slow sync on zfs
Hello zfs-discuss,
Relatively low traffic to the pool but sync takes too long to complete
and other operations are also not that fast.
Disks are on 3510 array. zil_disable=1.
bash-3.00# ptime sync
real 1:21.569
user 0.001
sys 0.027
During sync zpool iostat and vmstat look like:
f3-1 504G 720G 370 859 995K 10.2M
misc 20.6M 52.0G 0 0
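The output is cut off above. For anyone digging into a similar case, one way to see how long each pool sync actually takes is to time the kernel's spa_sync() routine with dtrace; this is a sketch of that idea, not something from the original post:
  dtrace -n '
  fbt::spa_sync:entry  { self->ts = timestamp; }
  fbt::spa_sync:return /self->ts/ {
          @["spa_sync time (ns)"] = quantize(timestamp - self->ts);
          self->ts = 0;
  }'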
2009 Aug 04
2
flowadm -i 1 - shows only first flow
Hi,
OSOL, b118
> milek@r600:~# flowadm show-flow
> FLOW        LINK     IPADDR    PROTO  PORT  DSFLD
> local_25    iwh0     --        tcp    25    --
> local_22    iwh0     --        tcp    22    --
> milek@r600:~# flowadm show-flow -s -i 1
> FLOW        IPACKETS  RBYTES  IERRORS
2006 Mar 10
3
pool space reservation
What is a use case of setting a reservation on the base pool object?
Say I have a pool of 3 100GB drives dynamically striped (pool size of 300GB), and I set the reservation to 200GB. I don't see any commands that let me ever reduce a pool's size, so how is the 200GB reservation used?
Related question: is there a plan in the future to allow me to replace the 3 100GB drives with 2
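For what it is worth, the reservation being discussed is just the reservation property set on the pool's top-level dataset; roughly (pool name made up):
  zfs set reservation=200G tank
  zfs get reservation,available tank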
2008 May 14
2
vdev cache - comments in the source
Hello zfs-code,
http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/vdev_cache.c
72 * All i/os smaller than zfs_vdev_cache_max will be turned into
73 * 1<<zfs_vdev_cache_bshift byte reads by the vdev_cache (aka software
74 * track buffer). At most zfs_vdev_cache_size bytes will be kept in each
75 * vdev's vdev_cache.
While it
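The quote is cut off above. For anyone who wants to see the values those comments describe on a live system, the tunables can be read with mdb, using the symbol names from the source excerpt:
  # /D prints a 32-bit decimal, /E a 64-bit one; which applies depends on the variable's type
  echo "zfs_vdev_cache_max/D"  | mdb -k
  echo "zfs_vdev_cache_size/E" | mdb -k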
2006 Aug 17
7
in-kernel gzip compression
Hello zfs-discuss,
Is someone actually working on it? Or any other algorithms?
Any dates?
--
Best regards,
Robert mailto:rmilkowski at task.gda.pl
http://milek.blogspot.com
2007 Oct 12
5
ZFS on EMC Symmetrix
If anyone is running this configuration, I have some questions for you about Page83 data errors.
2007 Feb 05
6
snapdir visable recursively throughout a dataset
Is there an existing RFE for, what I'll wrongly call, "recursively visible snapshots"? That is, .zfs in directories other than the dataset root.
Frankly, I don't need it available in all directories, although it'd be nice, but I do have a need for making it visible 1 dir down from the dataset root. The problem is that while ZFS and Zones work smoothly
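For reference, the existing control is the snapdir property, which only affects the dataset root; the RFE above is about getting the same thing further down (dataset name made up):
  # make .zfs visible (rather than hidden) at the root of the dataset
  zfs set snapdir=visible tank/home
  ls /tank/home/.zfs/snapshot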
2007 Dec 12
1
6604198 - single thread for compression
Hello zfs-discuss,
http://sunsolve.sun.com/search/document.do?assetkey=1-1-6604198-1
Is there a patch for S10? I thought it had been fixed.
--
Best regards,
Robert mailto:rmilkowski at task.gda.pl
http://milek.blogspot.com
2008 Mar 14
8
xcalls - mpstat vs dtrace
HI,
T5220, S10U4 + patches
mdb -k
> ::memstat
While the above is running (it takes some time; ideally ::memstat -n 4 to use 4 threads could be useful), mpstat 1 shows:
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
48 0 0 1922112 9 0 0 8 0 0 0 15254 6 94 0 0
So about 2 million xcalls per second.
Let's check with dtrace:
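The one-liner itself is cut off above; a sketch of the sort of probe that attributes cross-calls to their senders (not necessarily what the original post ran):
  dtrace -n 'sysinfo:::xcalls { @[execname, stack()] = count(); }'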
2009 Dec 03
5
L2ARC in clusters
Hi,
When deploying ZFS in a cluster environment it would be nice to be able
to have some SSDs as local drives (not on the SAN), so that when a pool
switches over to the other node ZFS would pick up that node's local disk
drives as L2ARC.
To better clarify what I mean, let's assume there is a 2-node cluster with
1x 2540 disk array.
Now let's put 4x SSDs in each node (as internal/local drives). Now
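The message is cut off here. For context, attaching a node-local SSD as L2ARC is a single command on whichever node currently owns the pool; the open question above is having this happen automatically after a failover (pool and device names made up):
  # on the node that has the pool imported
  zpool add tank cache c1t2d0
  # after a switchover, the new owner would have to remove this entry and
  # re-add the cache using its own local SSD
  zpool remove tank c1t2d0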
2007 Aug 14
2
restore lost pool after vtoc re-label
hi all,
I've been using a SAN LUN as the sole member of a zpool with one additional ZFS filesystem. This is a flat SAN fabric, so this LUN was available to other systems on the fabric, and one of them came up with "wrong magic number" for several drives; as best I can tell, the vtoc for my zpool LUN was overwritten on that host via format labeling to correct the error.
2006 Nov 03
27
# devices in raidz.
For S10U2, the documentation recommends 3 to 9 devices in a raidz. What is the
basis for this recommendation? I assume it is performance and not failure
resilience, but I am just guessing... [I know, the recommendation was intended
for people who know their RAID cold, so it needed no further explanation]
thanks... oz
--
ozan s. yigit | oz at somanetworks.com | 416 977 1414 x 1540
I have a hard time
2006 Dec 21
12
Difference between ZFS and UFS with one LUN from a SAN
All,
I understand that ZFS gives you more error correction when using two LUNs from a SAN. But does it provide fewer features than UFS does on one LUN from a SAN (i.e., is it less stable)?
Thanks,
Shawn
2007 Feb 18
7
Zfs best practice for 2U SATA iSCSI NAS
Is there a best practice guide for using zfs as a basic rackable small
storage solution?
I'm considering zfs with a 2U 12-disk Xeon-based server system vs
something like a second-hand FAS250.
The target environment is a mixture of Xen or VI hosts via iSCSI and nfs/cifs.
Being able to take snapshots of running (or maybe paused) xen iscsi
vols and re-export them for cloning and remote backup
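The message is cut off here. The snapshot/clone/re-export flow being described maps onto something like the following (dataset names invented; shareiscsi was the property used to export a zvol as an iSCSI target at the time):
  # snapshot a running xen volume, clone it, and export the clone over iscsi
  zfs snapshot tank/xen/vm01@backup
  zfs clone tank/xen/vm01@backup tank/xen/vm01-clone
  zfs set shareiscsi=on tank/xen/vm01-clone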
2006 Jun 22
2
ZFS throttling - how does it work?
Hi zfs-discuss,
I have some questions about throttling in ZFS:
1) I know that throttling kicks in while one sync is waiting for another. (http://blogs.sun.com/roller/page/roch?entry=the_dynamics_of_zfs)
Is it possible to throttle only selected processes (e.g. nfsd)?
2) How can I obtain some statistics about it? I want to know how often throttling kicks in on my host, etc.
3) Is it
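The list is cut off here. On question 2, a rough way to get such statistics is to count how often threads block waiting for the next transaction group to open, which is one place the write throttle shows up; this is an assumption about the mechanism, not an official counter:
  # count, per process, entries into txg_wait_open() until Ctrl-C
  dtrace -n 'fbt::txg_wait_open:entry { @[execname] = count(); }'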