similar to: storage type for ZFS

Displaying 19 results from an estimated 20000 matches similar to: "storage type for ZFS"

2006 Aug 21
12
SCSI synchronize cache cmd
Hi, I work on a support team for the Sun StorEdge 6920 and have a question about the use of the SCSI sync cache command in Solaris and ZFS. We have a bug in our 6920 software that exposes us to a memory leak when we receive the SCSI sync cache command: 6456312 - SCSI Synchronize Cache Command is flawed. It will take some time for this bug fix to roll out to the field, so we need to understand
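A workaround often discussed on this list for arrays that mishandle SYNCHRONIZE CACHE is to stop ZFS from issuing cache flushes at all. A minimal sketch, assuming the zfs_nocacheflush tunable from later Solaris builds (it may not exist on the kernel in question), and assuming the array cache is battery-backed:

    # /etc/system -- keep ZFS from sending SCSI SYNCHRONIZE CACHE to the array.
    # Assumes non-volatile (battery-backed) array cache; the tunable name is
    # from later Solaris builds and may not exist on older kernels.
    set zfs:zfs_nocacheflush = 1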
2007 Jul 02
3
ZFS and VXVM/VXFS
We are looking at the alternatives to VXVM/VXFS. One of the features we liked in Veritas, apart from the obvious ones, is the ability to call the disks by name and group them into a disk group. Especially in a SAN-based environment where the disks may be shared by multiple machines, it is much easier to manage them by disk group names than by cxtxdx numbers. Does zfs offer such
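For comparison, ZFS pools fill roughly the same role as VxVM disk groups: devices are grouped under a pool name and moved between hosts by that name. A sketch with illustrative pool and device names:

    # Group SAN LUNs under a named pool, much like a VxVM disk group
    zpool create oradg c4t600A0B80000F5EE2d0 c4t600A0B80000F5EE3d0

    # Hand the group to another host that sees the same LUNs
    zpool export oradg      # roughly: vxdg deport
    zpool import oradg      # roughly: vxdg import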
2007 Dec 28
14
Help needed ZFS vs Veritas Comparison
Hi everyone; I will soon be making a presentation comparing ZFS against Veritas Storage Foundation. Do we have any document comparing features? regards, Mertol Ozyoney, Storage Practice - Sales Manager, Sun Microsystems, TR Istanbul
2009 Mar 04
5
Oracle database on zfs
Hi, I am wondering if there is a guideline on how to configure ZFS on a server with an Oracle database. We are experiencing some slowness on writes to the ZFS filesystem: it takes about 530 ms to write 2 KB of data. We are running Solaris 10 u5 127127-11, and the back-end storage is a RAID5 EMC EMX. This is a small database with about 18 GB of storage allocated. Are there tunable parameters that we can apply to
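A common first step for Oracle on ZFS is matching the dataset recordsize to the database block size before creating datafiles; a sketch, with dataset names assumed:

    # Match recordsize to Oracle's db_block_size (commonly 8 KB).
    # recordsize only affects files written after the change.
    zfs create -o recordsize=8k tank/oradata
    zfs create tank/oralog        # redo logs can keep the 128K default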
2007 Nov 29
10
ZFS write time performance question
Hi, this is a ZFS performance question regarding SAN traffic. We are trying to benchmark ZFS vs. VxFS file systems, and I get the following performance results. Test setup: Solaris 10 11/06; dual-port QLogic HBA with SFCSM (for ZFS) and DMP (for VxFS); Sun Fire V490 server; LSI RAID 3994 on the back end. ZFS record size: 128KB (default). VxFS block size: 8KB (default). The only thing
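For a fair comparison it can help to align the ZFS recordsize with the 8 KB VxFS block size and drive both filesystems with the same load; a crude sketch (mount points and sizes are illustrative):

    # Align ZFS recordsize with the VxFS block size under test
    zfs set recordsize=8k tank/bench

    # Same streaming write against both filesystems
    time dd if=/dev/zero of=/tank/bench/file bs=128k count=16384
    time dd if=/dev/zero of=/vxfs/bench/file bs=128k count=16384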
2006 Oct 16
11
Configuring a 3510 for ZFS
Hi folks, A colleague and I are currently involved in a prototyping exercise to evaluate ZFS against our current filesystem. We are looking at the best way to arrange the disks in a 3510 storage array. We have been testing with the 12 disks on the 3510 exported as "nraid" logical devices. We then configured a single ZFS pool on top of this, using two raid-z arrays. We are getting
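The layout described, twelve exported devices carved into two raid-z vdevs in one pool, might be created along these lines (controller and target numbers are assumptions):

    # One pool striped across two 6-disk raid-z vdevs
    zpool create tank \
        raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
        raidz c2t6d0 c2t7d0 c2t8d0 c2t9d0 c2t10d0 c2t11d0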
2007 Feb 12
17
NFS/ZFS performance problems - txg_wait_open() deadlocks?
Hi. The system is snv_56 sun4u sparc SUNW,Sun-Fire-V440, with zil_disable=1. We see many operations from nfs clients to that server running really slowly (like 90 seconds for unlink()). It's not a problem with the network, and there's also plenty of CPU available. Storage isn't saturated either. First strange thing: normally on that server nfsd has about 1500-2500 threads. I did
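For reference, the zil_disable=1 setting mentioned above was usually applied either live via mdb or persistently in /etc/system; a sketch:

    # Live (affects filesystems mounted after the change):
    echo zil_disable/W0t1 | mdb -kw

    # Persistent, in /etc/system:
    set zfs:zil_disable = 1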
2006 Nov 03
27
# devices in raidz.
For s10u2, documentation recommends 3 to 9 devices in raidz. What is the basis for this recommendation? I assume it is performance and not failure resilience, but I am just guessing... [I know, the recommendation was intended for people who know their RAID cold, so it needed no further explanation] thanks... oz -- ozan s. yigit | oz at somanetworks.com | 416 977 1414 x 1540 I have a hard time
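The guidance is usually read as "prefer several narrow raid-z vdevs over one wide one", since a raid-z vdev delivers roughly the small-read IOPS of a single disk; an illustrative sketch with twelve disks:

    # Two 6-disk raid-z vdevs: reads can be served from both in parallel
    zpool create tank \
        raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
        raidz c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0

    # A single 12-wide raidz vdev would hold more data, but every small
    # read would touch all twelve disks.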
2007 Jan 10
2
using veritas dmp with ZFS (but not vxvm)
We have some HDS storage that isn't supported by mpxio, so we have to use Veritas DMP to get multipathing. What's the recommended way to use DMP storage with ZFS? I want to use DMP but get at the multipathed virtual LUNs at as low a level as possible, to avoid using vxvm as much as possible. I figure there's no point in having overhead from two volume managers if we can avoid it. Has anyone
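One way this has been attempted is to point zpool directly at the DMP metanodes under /dev/vx/dmp, skipping VxVM volumes entirely; a hedged sketch (the exact node names are assumptions, and whether ZFS accepts them cleanly is worth testing first):

    # Build the pool on DMP metanodes; no VxVM disk group or volume layer
    zpool create tank /dev/vx/dmp/c2t50060E80d0s2 /dev/vx/dmp/c2t50060E81d0s2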
2006 Dec 22
6
Re: Difference between ZFS and UFS with one LUN froma SAN
This may not be the answer you're looking for, but I don't know if it's something you've thought of. If you're pulling a LUN from an expensive array, with multiple HBAs in the system, why not run mpxio? If you ARE running mpxio, there shouldn't be an issue with a path dropping. I have the setup above in my test lab and pull cables
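On Solaris 10, MPxIO is enabled host-wide with stmsboot, after which the multiple paths collapse into a single scsi_vhci device that ZFS sits on; a sketch:

    # Enable MPxIO on supported HBAs (updates vfstab, prompts for a reboot)
    stmsboot -e

    # After the reboot, show the mapping from old paths to multipathed devices
    stmsboot -L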
2007 Apr 22
7
slow sync on zfs
Hello zfs-discuss, Relatively low traffic to the pool, but sync takes too long to complete and other operations are also not that fast. Disks are on a 3510 array. zil_disable=1.

bash-3.00# ptime sync
real     1:21.569
user        0.001
sys         0.027

During sync, zpool iostat and vmstat look like:

              capacity     operations    bandwidth
pool        used  avail   read  write   read  write
f3-1        504G   720G    370    859   995K  10.2M
misc       20.6M  52.0G      0      0
2007 Dec 17
1
HA-NFS AND HA-ZFS
We are currently running sun cluster 3.2 on solaris 10u3. We are using ufs/vxvm 4.1 as our shared file systems. However, I would like to migrate to HA-NFS on ZFS. Since there is no conversion process from UFS to ZFS other than copy, I would like to migrate on my own time. To do this I am planning to add a new zpool HAStoragePlus resource to my existing HA-NFS resource group. This way I can migrate
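In Sun Cluster 3.2, an additional zpool can be attached to the existing resource group with an HAStoragePlus resource along these lines (resource, group, and pool names are illustrative):

    # Register the resource type once per cluster, if not already done
    clresourcetype register SUNW.HAStoragePlus

    # Add the new zpool to the existing HA-NFS resource group
    clresource create -g nfs-rg -t SUNW.HAStoragePlus \
        -p Zpools=tank tank-hasp-rs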
2006 Dec 21
12
Difference between ZFS and UFS with one LUN from a SAN
All, I understand that ZFS gives you more error correction when using two LUNs from a SAN. But does it provide fewer features than UFS does on one LUN from a SAN (i.e., is it less stable)? Thanks, Shawn
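One mitigation often suggested for single-LUN pools: ZFS can keep redundant copies of each data block within the LUN, so checksum errors stay repairable even without a second device. A sketch, assuming a build recent enough to have the copies property:

    # Two copies of every data block on the single LUN
    # (repairs latent block errors; does not survive loss of the LUN)
    zfs set copies=2 tank/data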
2008 Mar 20
7
ZFS panics solaris while switching a volume to read-only
Hi, I just found out that ZFS triggers a kernel panic while switching a mounted volume into read-only mode. The system is attached to a Symmetrix, and all ZFS I/O goes through PowerPath. I ran some I/O-intensive stuff on /tank/foo and switched the device into read-only mode at the same time (symrdf -g bar failover -establish). ZFS went 'bam' and triggered a panic: WARNING: /pci at
2006 Dec 12
1
ZFS Storage Pool advice
This question is concerning ZFS. We have a Sun Fire V890 attached to an EMC disk array. Here's our plan to incorporate ZFS: on our EMC storage array we will create 3 LUNs. Now, how should ZFS be used for the best performance? What I'm trying to ask is: if you have 3 LUNs and you want to create a ZFS storage pool, would it be better to have a storage pool per LUN or combine the 3
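The two layouts under discussion look like this in zpool terms (the emcpower device names are assumptions for PowerPath-managed LUNs):

    # Option A: one pool striped across all three LUNs
    zpool create tank emcpower0c emcpower1c emcpower2c

    # Option B: one pool per LUN
    zpool create tank1 emcpower0c
    zpool create tank2 emcpower1c
    zpool create tank3 emcpower2c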
2008 Aug 22
2
zpool autoexpand property - HowTo question
I noted this PSARC thread with interest: Re: zpool autoexpand property [PSARC/2008/353 Self Review]. It so happens that during a recent disk upgrade on a laptop, I migrated a zpool off of one partition onto a slightly larger one, and I'd like to somehow tell zfs to grow the zpool to fill the new partition. So, what's the best way to do this? (and is it
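Before the autoexpand property, the usual way to pick up a grown device was an export/import cycle; with the PSARC/2008/353 bits it becomes a pool property. A sketch of both, pool name assumed:

    # Older builds: re-read the (now larger) partition on import
    zpool export tank
    zpool import tank

    # With autoexpand support: grow automatically as devices expand
    zpool set autoexpand=on tank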
2007 Sep 25
23
device alias
Hi. I'd like to request a feature be added to zfs. Currently, on SAN-attached disk, zpool shows up with a big WWN for the disk. If ZFS (or the zpool command in particular) had a text field for arbitrary information, it would be possible to add something indicating which LUN on which array the disk in question might be. This would make troubleshooting and general
2006 Nov 01
56
ZFS/iSCSI target integration
Rick McNeal and I have been working on building support for sharing ZVOLs as iSCSI targets directly into ZFS. Below is the proposal I'll be submitting to PSARC. Comments and suggestions are welcome. Adam

---8<---

iSCSI/ZFS Integration

A. Overview

The goal of this project is to couple ZFS with the iSCSI target in Solaris, specifically to make it as easy to create and export ZVOLs
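For context, this integration shipped as the shareiscsi property on ZVOLs; a minimal sketch:

    # Create a ZVOL and export it as an iSCSI target with one property
    zfs create -V 10g tank/iscsivol
    zfs set shareiscsi=on tank/iscsivol

    # Confirm the target exists
    iscsitadm list target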
2007 Apr 26
7
device name changing
Hi. If I create a zpool with the following command:

zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7

and after a reboot the device names are for some reason changed so that da2 and da5 are swapped (either by altering the LUN setting on the storage, or by switching cables/swapping disks, etc.), how will zfs handle that? Will it simply acknowledge that all devices are present and the pool is
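ZFS identifies pool members by on-disk labels rather than device paths, so a swap like this is normally sorted out at import time; a sketch:

    # After the renaming, an export/import re-matches labels to device names
    zpool export tank
    zpool import tank     # members are found by label, not by path

    # If devices now live elsewhere, point the scan at that directory
    zpool import -d /dev tank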