
Displaying 20 results from an estimated 40000 matches similar to: "ZFS stripe over EMC write performance."

2010 Aug 12
2
EMC migration and zfs
We are going to be migrating to a new EMC frame using Open Replicator. ZFS is sitting on volumes that are running MPxIO, so the controller number/disk number is going to change when we reboot the server. I would like to know if anyone has done this: will the ZFS filesystems "just work" and find the new disk id numbers when we go to import the pool? Our process would be: zfs
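A minimal sketch of the usual export/import flow for this kind of migration (the pool name "tank" is hypothetical); ZFS identifies pool members by the labels written on the disks, not by controller/disk numbers, so renamed devices are normally found on import:

    # cleanly export the pool before the cutover
    zpool export tank
    # ...migrate the LUNs with Open Replicator, reboot; device names change...
    # import rescans device labels and picks up the new names
    zpool import tank
    # -d restricts the scan to a specific device directory, if ever needed
    zpool import -d /dev/dsk tank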
2007 Jun 15
3
zfs and EMC
Hi there, I see strange behavior when I create a ZFS pool on an EMC PowerPath pseudo device. I can create a pool on emcpower0a but not on emcpower2a; zpool core dumps with invalid argument .... ???? That's my second machine with PowerPath and ZFS; the first one works fine, even zfs/powerpath and failover ... Is there anybody who has the same failure and a solution? :) Greets Dominik
2010 Apr 19
4
upgrade zfs stripe
Hi there, since I am really new to ZFS, I have two important questions for starting. I have a NAS up and running ZFS in stripe mode with 2x 1.5TB HDDs. My question, for future-proofing: could I add just another drive to the pool and have ZFS integrate it flawlessly? And second, could this HDD also be a different size than 1.5TB? So could I put in a 2TB drive and integrate it? Thanks in advance
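For reference, a minimal sketch of growing a striped pool (pool name "tank" and device c2t2d0 are hypothetical); mixed drive sizes are allowed, since each disk contributes its own capacity as a separate top-level vdev:

    # append the new drive as another top-level vdev in the stripe
    zpool add tank c2t2d0
    # confirm the new layout and the added capacity
    zpool status tank
    zpool list tank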
2012 Feb 23
1
default cluster.stripe-block-size for striped volumes on 3.0.x vs 3.3 beta (128kb), performance change if i reduce to a smaller block size?
Hi, I've been migrating data from an old striped 3.0.x Gluster install to a 3.3 beta install. I copied all the data from the old Gluster striped volume to a regular XFS partition (4K block size), and it totaled 9.2TB. With the old setup I used the following option in a "volume stripe" block in the configuration file on a client: volume stripe type cluster/stripe option
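The truncated volfile block above presumably continued along these lines; this is only a sketch, and the subvolume names are hypothetical:

    volume stripe
      type cluster/stripe
      option block-size 128KB      # the value under discussion
      subvolumes brick1 brick2     # hypothetical subvolume names
    end-volume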
2007 Apr 23
0
striping with ZFS and SAN (AMS500 HDS)
I have SAN storage from HDS (AMS500), and I want to stripe across LUNs from the storage. I don't want any RAID-5 (because I already have it on the disks in the HDS storage). I only want to stripe across 2 LUNs (25GB each) that come from different controllers and different fibre channel ports (dual HBA). I tested the performance by writing 512MB into the ZFS, with 2 LUNs striped (each LUN from a different
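A plain two-LUN dynamic stripe of the kind described is just a pool with two top-level vdevs; a sketch with hypothetical device names and a rough 512MB write test:

    # each LUN becomes a top-level vdev; ZFS stripes new blocks across both
    zpool create tank c3t0d0 c4t0d0
    # crude sequential write test, 512MB
    dd if=/dev/zero of=/tank/testfile bs=1024k count=512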
2010 Aug 09
2
ZFS with EMC PowerPath
On some machines running PowerPath, there are sometimes issues after an update/upgrade of the PowerPath software: the pseudo devices get remapped and change names. ZFS appears to handle it OK, but it then sometimes references half native device names and half emcpower pseudo device names, so the SA can easily be confused. Is there a way to tell ZFS which device names
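One commonly suggested workaround is to export the pool and re-import it while restricting the device scan to the emcpower pseudo devices, for example via a directory of symlinks (pool name and paths hypothetical):

    zpool export tank
    # collect only the pseudo devices in one directory
    mkdir /emcdev
    ln -s /dev/dsk/emcpower* /emcdev/
    # import then records only emcpower names in the pool config
    zpool import -d /emcdev tank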
2006 Dec 12
1
ZFS Storage Pool advice
This question concerns ZFS. We have a Sun Fire V890 attached to an EMC disk array. Here's our plan to incorporate ZFS: on our EMC storage array we will create 3 LUNs. Now, how should ZFS be used for the best performance? What I'm trying to ask is: if you have 3 LUNs and you want to create a ZFS storage pool, would it be better to have a storage pool per LUN or combine the 3
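The two layouts being weighed, sketched with hypothetical LUN names:

    # option A: one pool spanning all three LUNs (dynamic stripe, more spindles per pool)
    zpool create tank lun0 lun1 lun2
    # option B: one pool per LUN (isolated pools, managed separately)
    zpool create tank0 lun0
    zpool create tank1 lun1
    zpool create tank2 lun2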
2007 Feb 22
0
ZFS vs UFS performance Using Different Raid Configurations
Since most of our customers are predominantly UFS based, we would like to use the same configuration and compare ZFS performance, so that we can announce support for ZFS. We're planning to measure the performance of a ZFS file system vs. a UFS file system. Please look at the following scenario and let us know if this is a good performance measurement criterion.
2007 Oct 12
5
ZFS on EMC Symmetrix
If anyone is running this configuration, I have some questions for you about Page83 data errors.
2007 Apr 18
2
zfs block allocation strategy
Hi, quoting from the ZFS docs: "The SPA allocates blocks in a round-robin fashion from the top-level vdevs. A storage pool with multiple top-level vdevs allows the SPA to use dynamic striping to increase disk bandwidth. Since a new block may be allocated from any of the top-level vdevs, the SPA implements dynamic striping by spreading out writes across all available top-level vdevs." Now,
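The quoted behavior can be observed directly: with more than one top-level vdev, per-vdev statistics show writes being spread across all of them. A sketch with hypothetical names:

    # two top-level vdevs, so the SPA round-robins new block allocations
    zpool create tank c1t0d0 c1t1d0
    # watch the per-vdev write distribution while the pool is under load
    zpool iostat -v tank 5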
2007 Sep 06
0
Zfs with storedge 6130
On 9/4/07 4:34 PM, "Richard Elling" <Richard.Elling at Sun.COM> wrote:
> Hi Andy,
> my comments below...
> note that I didn't see zfs-discuss at opensolaris.org in the CC for the
> original...
>
> Andy Lubel wrote:
>> Hi All,
>>
>> I have been asked to implement a zfs based solution using storedge 6130 and
>> I'm chasing my own
2008 Nov 04
0
QUESTIONS from EMC: EFI and SMI Disk Labels
All, my apologies in advance for the wide distribution - it was recommended that I contact these aliases but if there is a more appropriate one, please let me know... I have received the following EFI disk-related questions from the EMC PowerPath team who would like to provide more complete support for EFI disks on Sun platforms... I would appreciate help in answering these questions...
2009 Sep 08
4
Can ZFS simply concatenate LUNs (eg no RAID0)?
Hi, I have a disk array that is providing striped LUNs to my Solaris box, hence I'd like to simply concatenate those LUNs without adding another layer of striping. Is this possible with ZFS? As far as I understood, if I use zpool create myPool lun-1 lun-2 ... lun-n I will get RAID0-style striping where each data block is split across all "n" LUNs. If that's
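A clarifying sketch (names hypothetical): ZFS offers dynamic striping rather than either classic RAID0 or true concatenation; each LUN becomes a top-level vdev and every block is written whole to one vdev, not split across all of them:

    # dynamic stripe across the LUNs; each block lands on a single vdev
    zpool create myPool lun-1 lun-2 lun-3
    # "concat-style" growth later: append another LUN as a new top-level vdev
    zpool add myPool lun-4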
2007 Dec 03
2
Help replacing dual identity disk in ZFS raidz and SVM mirror
Hi, We have a number of 4200s set up using a combination of an SVM 4-way mirror and a ZFS raidz stripe. Each disk (of 4) is divided up like this:

    /       6GB   UFS  s0
    Swap    8GB        s1
    /var    6GB   UFS  s3
    Metadb  50MB  UFS  s4
    /data   48GB  ZFS  s5

For SVM we do a 4-way mirror on /, swap, and /var, so we have 3 SVM mirrors: d0=root (submirrors d10, d20, d30, d40), d1=swap (submirrors d11, d21, d31, d41)
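A rough sketch of the usual replacement sequence for a disk serving both SVM submirrors and a raidz member (device, metadevice, and pool names follow the layout above but are hypothetical; assumes c1t0d0 failed and c1t1d0 is a healthy mirror member):

    # detach the failed disk's submirrors and delete its metadb replica
    metadetach d0 d10
    metadetach d1 d11
    metadb -d c1t0d0s4
    # ...physically replace the disk, then copy the partition table...
    prtvtoc /dev/rdsk/c1t1d0s2 | fmthard -s - /dev/rdsk/c1t0d0s2
    # recreate the metadb replica and reattach the submirrors to resync
    metadb -a c1t0d0s4
    metattach d0 d10
    metattach d1 d11
    # rebuild the raidz member on the new disk
    zpool replace datapool c1t0d0s5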
2009 Jan 24
3
zfs read performance degrades over a short time
I appear to be seeing the performance of a local ZFS file system degrading over a short period of time. My system configuration:

    32-bit Athlon 1800+ CPU
    1 GByte of RAM
    Solaris 10 U6 (SunOS filer 5.10 Generic_137138-09 i86pc i386 i86pc)
    2x 250 GByte Western Digital WD2500JB IDE hard drives
    1 zfs pool (striped with the two drives, 449 GBytes total)

1 hard drive has
2007 Oct 02
1
zfs in san
I am planning to use ZFS with fibre-attached SAN disk from EMC Symmetrix arrays. Based on a note in the admin guide, it appears that even though the Symmetrix will handle the hardware RAID, it is still advisable to create a ZFS mirror on the host side to take full advantage of ZFS's self-healing/error checking and correcting. Is this true? Additionally, I am wondering how the zfs
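What the admin guide note boils down to: with host-side redundancy ZFS can not only detect checksum errors but repair them from the other half of the mirror. A minimal sketch with hypothetical Symmetrix LUN names:

    # host-side mirror across two array LUNs; ZFS can self-heal bad blocks
    zpool create tank mirror c5t0d0 c6t0d0
    # without host-side redundancy ZFS detects corruption but cannot repair it
    zpool status -v tank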
2009 Apr 23
1
ZFS SMI vs EFI performance using filebench
I have been testing the performance of ZFS vs. UFS using filebench. The setup is a V240, 4GB RAM, 2 CPUs @ 1503MHz, 1 320GB _SAN_-attached LUN, and a ZFS mirrored root disk. Our SAN is a top-notch NVRAM-based SAN. There are lots of discussions about using ZFS with SAN based storage, and it seems ZFS is designed to perform best with dumb disks (JBODs). The tests I ran support this observation, and
2006 Apr 28
4
ZFS RAID-Z for Two-Disk Workstation Setup?
After reading the ZFS docs it does appear that RAID-Z can be used on a two-disk system, and I was wondering if the system would basically work like Intel's Matrix RAID for two disks? Intel Matrix RAID info: http://www.intel.com/design/chipsets/matrixstorage_sb.htm http://techreport.com/reviews/2005q1/matrix-raid/index.x?pg=1 My focus with this thread is some
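For what it's worth, zpool does accept a two-disk raidz (a sketch, device names hypothetical); usable capacity is one disk's worth, which makes it behave much like a mirror rather than like Matrix RAID's mixed RAID0/RAID1 regions:

    # two-disk raidz: one data column plus one parity column
    zpool create tank raidz c1t0d0 c1t1d0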
2007 Apr 23
14
concatenation & stripe - zfs?
I want to configure my ZFS like this:

    concatenation_stripe_pool:
      concatenation lun0_controller0 lun1_controller0
      concatenation lun2_controller1 lun3_controller1

1. Is there any option to implement this in ZFS?
2. Is there another way to get the same configuration?
thanks
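ZFS has no concatenation vdev, so the closest mappings of the layout above are a flat dynamic stripe over all four LUNs or, if the controller pairing is meant to add redundancy, a stripe of two mirrors (a sketch reusing the hypothetical LUN names):

    # closest equivalent: dynamic stripe across all four LUNs (no redundancy)
    zpool create concatenation_stripe_pool lun0_controller0 lun1_controller0 lun2_controller1 lun3_controller1
    # alternative: dynamic stripe over two mirrored cross-controller pairs
    zpool create mirrored_pool mirror lun0_controller0 lun2_controller1 mirror lun1_controller0 lun3_controller1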
2007 Jul 05
1
ZFS on CLARiiON SAN Hardware?
Does anyone on the list have definitive information on whether ZFS works with CLARiiON devices?

    bash-3.00# uname -a
    SunOS XXXXXXX 5.10 Generic_118833-33 sun4u sparc SUNW,Sun-Fire-V245
    bash-3.00# powermt display dev=all
    Pseudo name=emcpower0a
    CLARiiON ID=APM00033500540 [XXXXXXX]
    Logical device ID=600601607C550E00F25F4629AFBEDB11 [LUN 61]
    state=alive; policy=BasicFailover; priority=0; queued-IOs=0