similar to: software RAID vs. HW RAID - part III

Displaying 20 results from an estimated 2000 matches similar to: "software RAID vs. HW RAID - part III"

2007 Sep 18
3
ZFS and encryption
Hello zfs-discuss, I wonder if ZFS will be able to take any advantage of Niagara's built-in crypto? -- Best regards, Robert Milkowski mailto:rmilkowski at task.gda.pl http://milek.blogspot.com
2007 Sep 17
2
zpool create -f not applicable to hot spares
Hello zfs-discuss, If you do 'zpool create -f test A B C spare D E' and D or E contains a UFS filesystem, then despite -f the zpool command will complain that there is a UFS file system on D. Workaround: create a test pool with -f on D and E, destroy it, and then create the first pool with D and E as hot spares. I've tested it on s10u3 + patches - can someone confirm
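A sketch of the workaround as described, with hypothetical device names:
    # zpool create -f dummy c1t3d0 c1t4d0    (forces past the UFS labels on the would-be spares)
    # zpool destroy dummy
    # zpool create test c1t0d0 c1t1d0 c1t2d0 spare c1t3d0 c1t4d0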
2006 Jul 15
2
zvol of files for Oracle?
Hello zfs-discuss, What would you rather propose for ZFS+Oracle from a performance standpoint - zvols or just files? -- Best regards, Robert mailto:rmilkowski at task.gda.pl http://milek.blogspot.com
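For reference, the two alternatives being weighed look roughly like this (pool and dataset names hypothetical; the recordsize tuning is a common suggestion, not part of the question):
    # zfs create -V 20g tank/oravol       (zvol, appears under /dev/zvol/dsk/tank/oravol)
    # zfs create tank/oradata             (filesystem holding ordinary datafiles)
    # zfs set recordsize=8k tank/oradata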
2006 May 16
3
ZFS snv_b39 and S10U2
Hello zfs-discuss, Just to be sure - if I create ZFS filesystems on snv_39 and later just import that pool on S10U2, can I safely assume it will just work (i.e. nothing was added or changed in the on-disk format in the last few snv releases that is not going to be in U2)? I want to put some data on ZFS right now (I have to do it now) and later I want to
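One way to approach the compatibility question is to compare the on-disk pool versions each release supports before moving the pool (pool name hypothetical):
    # zpool upgrade -v        (lists the pool versions this release understands)
    # zpool export mypool     (on the snv_39 host)
    # zpool import mypool     (on the S10U2 host)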
2008 May 14
2
vdev cache - comments in the source
Hello zfs-code, http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/vdev_cache.c
 72 * All i/os smaller than zfs_vdev_cache_max will be turned into
 73 * 1<<zfs_vdev_cache_bshift byte reads by the vdev_cache (aka software
 74 * track buffer). At most zfs_vdev_cache_size bytes will be kept in each
 75 * vdev's vdev_cache. While it
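The tunables named in that comment can be set from /etc/system; the values below are believed to be the defaults of that era and are shown for illustration only, not as recommendations:
    * vdev cache tunables (illustrative values)
    set zfs:zfs_vdev_cache_max=16384
    set zfs:zfs_vdev_cache_bshift=16
    set zfs:zfs_vdev_cache_size=10485760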
2006 Jun 12
3
zfs destroy - destroying a snapshot
Hello zfs-discuss, I'm writing a script to take snapshots automatically and destroy old ones. I think it would be great to add another option to zfs destroy so that only snapshots can be destroyed. Something like: zfs destroy -s SNAPSHOT so that if something other than a snapshot is provided as an argument, zfs destroy wouldn't actually destroy it. That way it would
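For comparison, snapshots are destroyed today by naming them explicitly; the -s form below is the proposal from this message, not an existing option:
    # zfs destroy tank/home@2006-06-12       (works today, but would also destroy a misnamed filesystem)
    # zfs destroy -s tank/home@2006-06-12    (proposed: refuse anything that is not a snapshot)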
2007 Apr 22
7
slow sync on zfs
Hello zfs-discuss, Relatively low traffic to the pool, but sync takes too long to complete and other operations are also not that fast. Disks are on a 3510 array. zil_disable=1. bash-3.00# ptime sync real 1:21.569 user 0.001 sys 0.027 During sync zpool iostat and vmstat look like: f3-1 504G 720G 370 859 995K 10.2M misc 20.6M 52.0G 0 0
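One way to confirm the zil_disable setting mentioned above on a live system is via mdb (output format approximate):
    # echo zil_disable/D | mdb -k
    zil_disable:    1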
2009 Aug 04
2
flowadm -i 1 - shows only first flow
Hi, OSOL, b118
> milek at r600:~# flowadm show-flow
> FLOW        LINK    IPADDR    PROTO  PORT   DSFLD
> local_25    iwh0    --        tcp    25     --
> local_22    iwh0    --        tcp    22     --
> milek at r600:~# flowadm show-flow -s -i 1
> FLOW        IPACKETS    RBYTES    IERRORS
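For context, flows like local_25 and local_22 would typically have been created along these lines (link name iwh0 taken from the output above, otherwise illustrative):
    # flowadm add-flow -l iwh0 -a transport=tcp,local_port=25 local_25
    # flowadm add-flow -l iwh0 -a transport=tcp,local_port=22 local_22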
2006 Aug 17
7
in-kernel gzip compression
Hello zfs-discuss, Is someone actually working on it? Or any other algorithms? Any dates? -- Best regards, Robert mailto:rmilkowski at task.gda.pl http://milek.blogspot.com
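Once in-kernel gzip support eventually landed, it was exposed as a per-dataset property; a minimal example (dataset name hypothetical):
    # zfs set compression=gzip tank/data     (gzip-1 .. gzip-9 select a specific level)
    # zfs get compressratio tank/data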
2007 Dec 12
1
6604198 - single thread for compression
Hello zfs-discuss, http://sunsolve.sun.com/search/document.do?assetkey=1-1-6604198-1 Is there a patch for S10? I thought it had been fixed. -- Best regards, Robert mailto:rmilkowski at task.gda.pl http://milek.blogspot.com
2009 Dec 03
5
L2ARC in clusters
Hi, When deploying ZFS in a cluster environment it would be nice to be able to have some SSDs as local drives (not on the SAN), so that when a pool switches over to the other node ZFS would pick up that node's local disk drives as L2ARC. To better clarify what I mean, let's assume there is a 2-node cluster with 1x 2540 disk array. Now let's put 4x SSDs in each node (as internal/local drives). Now
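On a single node, local SSDs are attached to a pool as L2ARC with zpool add; the question above is what should happen to such devices when the pool fails over (device names hypothetical):
    # zpool add tank cache c2t0d0 c2t1d0
    # zpool iostat -v tank      (cache devices appear in their own section)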
2006 Mar 10
3
pool space reservation
What is the use case for setting a reservation on the base pool object? Say I have a pool of 3 100GB drives dynamically striped (pool size of 300GB), and I set the reservation to 200GB. I don't see any commands that let me ever reduce a pool's size, so how is the 200GB reservation used? Related question: is there a plan in the future to allow me to replace the 3 100GB drives with 2
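For reference, a reservation on the top-level dataset is set and inspected like this (pool name hypothetical); the question above is what such a reservation actually accomplishes:
    # zfs set reservation=200g tank
    # zfs get reservation,available tank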
2007 Oct 12
5
ZFS on EMC Symmetrix
If anyone is running this configuration, I have some questions for you about Page83 data errors.
2007 Feb 05
6
snapdir visible recursively throughout a dataset
Is there an existing RFE for, what I'll wrongly call, "recursively visible snapshots"? That is, .zfs in directories other than the dataset root. Frankly, I don't need it available in all directories, although it'd be nice, but I do have a need for making it visible 1 dir down from the dataset root. The problem is that while ZFS and Zones work smoothly
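The closest existing knob is the snapdir property, which only controls .zfs at each dataset root; the request here is for visibility further down the tree (dataset name hypothetical):
    # zfs set snapdir=visible tank/zones/z1
    # ls /tank/zones/z1/.zfs/snapshot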
2007 May 12
3
zfs and jbod-storage
Hi. I'm managing an HDS storage system which is slightly larger than 100 TB, of which we have used approx. 3/4. We use VxFS. The storage system is attached to a Solaris 9 SPARC host via a fibre switch. The storage is shared via NFS to our webservers. If I were to replace VxFS with ZFS I could use raidz(2) instead of the built-in hardware RAID controller. Are there any JBOD-only storage
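A raidz2 pool over JBOD disks of the kind being considered would be created roughly like this (disk names and vdev widths are illustrative):
    # zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
                   raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0
    # zfs create tank/web
    # zfs set sharenfs=on tank/web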
2007 Aug 14
2
restore lost pool after vtoc re-label
Hi all, I've been using a SAN LUN as the sole member of a zpool with one additional ZFS filesystem. This is a flat SAN fabric, so this LUN was available to other systems on the fabric, and one of them came up with "wrong magic number" for several drives, and, as best I can tell, the vtoc for my zpool LUN was overwritten on that host via format labeling to correct the error.
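A first diagnostic step in this situation is to check whether any ZFS labels survived on the LUN and whether the pool is still importable (device path hypothetical):
    # zdb -l /dev/rdsk/c4t...d0s0    (prints the four vdev labels, if any remain)
    # zpool import                   (scans devices for importable pools)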
2008 Apr 29
0
zpool attach vs. zpool iostat
Hello zfs-discuss, S10U4+patches, SPARC. If I attach a disk to a vdev in a pool to get a mirrored configuration, then during the resilver 'zpool iostat 1' will report only reads being done from the pool and basically no writes. If I do 'zpool iostat -v 1' I can see it is writing to the new device, however at the pool and mirror/vdev level it is still reporting only reads. If during resilvering reads
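The sequence being described, for reference (pool and device names hypothetical):
    # zpool attach tank c0t0d0 c0t1d0    (start resilvering onto the new disk)
    # zpool iostat tank 1                (pool level: mostly reads during resilver)
    # zpool iostat -v tank 1             (per-vdev: writes to the new device are visible)
    # zpool status tank                  (resilver progress)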
2006 Jul 30
6
zfs mount stuck in zil_replay
Hello ZFS, System was rebooted and after reboot server again System is snv_39, SPARC, T2000 bash-3.00# ptree 7 /lib/svc/bin/svc.startd -s 163 /sbin/sh /lib/svc/method/fs-local 254 /usr/sbin/zfs mount -a [...] bash-3.00# zfs list|wc -l 46 Using df I can see most file systems are already mounted. > ::ps!grep zfs R 254 163 7 7 0 0x4a004000
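To see where the zfs mount process (pid 254 above) is blocked, its kernel stack can be pulled with mdb; this is a standard debugging idiom, not something from the original message:
    # echo "0t254::pid2proc | ::walk thread | ::findstack -v" | mdb -k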
2008 Jan 18
7
how to relocate a disk
Hi, I'd like to move a disk from one controller to another. This disk is part of a mirror in a zfs pool. How can one do this without having to export/import the pool or reboot the system? I tried taking it offline and online again, but then zpool says the disk is unavailable. Trying a zpool replace didn't work because it complains that the "new" disk is part of a
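For reference, the fallback the poster is trying to avoid relies on the fact that an import locates pool members by their on-disk labels rather than by device path (pool name hypothetical):
    # zpool export tank
    (move the disk to the other controller)
    # zpool import tank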
2006 Nov 03
27
# devices in raidz.
for s10u2, documentation recommends 3 to 9 devices in raidz. what is the basis for this recommendation? i assume it is performance and not failure resilience, but i am just guessing... [i know, recommendation was intended for people who know their raid cold, so it needed no further explanation] thanks... oz -- ozan s. yigit | oz at somanetworks.com | 416 977 1414 x 1540 I have a hard time
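The usual illustration of that recommendation is to build several narrower raidz vdevs rather than one wide one (disk names hypothetical):
    # zpool create tank raidz c0d0 c0d1 c0d2 c0d3 c0d4 \
                   raidz c1d0 c1d1 c1d2 c1d3 c1d4
    rather than a single 10-wide raidz top-level vdev.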