similar to: Re: Difference between ZFS and UFS with one LUN from a SAN

Displaying 20 results from an estimated 10000 matches similar to: "Re: Difference between ZFS and UFS with one LUN from a SAN"

2006 Dec 21
12
Difference between ZFS and UFS with one LUN from a SAN
All, I understand that ZFS gives you more error correction when using two LUNs from a SAN. But does it provide fewer features than UFS does on one LUN from a SAN (i.e. is it less stable)? Thanks, Shawn
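For context, on a single LUN ZFS can still detect corruption through block checksums, but it has no second copy to heal from unless you ask for one. A minimal sketch, with a hypothetical device name c2t0d0 and dataset tank/data:

# zpool create tank c2t0d0
# zfs create tank/data
# zfs set copies=2 tank/data

The copies property stores each block twice within the same LUN, which helps against localized corruption but not against losing the LUN itself.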
2009 Oct 09
22
Does ZFS work with SAN-attached devices?
Hi All, It's been a while since I touched ZFS. Is the below still the case with ZFS and a hardware RAID array? Do we still need to provide two LUNs from the hardware RAID and then have ZFS mirror those two LUNs? http://www.opensolaris.org/os/community/zfs/faq/#hardwareraid Thanks, Shawn
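The two-LUN arrangement the FAQ describes would look something like this sketch (LUN device names are placeholders):

# zpool create tank mirror c2t0d0 c3t0d0
# zpool status tank

With a mirror, ZFS can repair a block that fails its checksum from the other side; on a single hardware-RAID LUN it can only detect the damage.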
2007 Jul 12
9
Again ZFS with expanding LUNs!
Hello, I know that you had this discussion a few days ago, but I'm in the installation phase of our new production servers and I intend to migrate the data from UFS volumes to ZFS volumes in the near future. For doing this I must be ABSOLUTELY sure that I can resize the SAN LUNs, because during the last 4 years I have had to double the LUN size every year. I tried to resize a test volume
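On later ZFS builds, picking up a grown LUN is a pool property plus, if needed, a per-device nudge; a hedged sketch with placeholder names:

# zpool set autoexpand=on tank
# zpool online -e tank c2t0d0

The autoexpand property and the -e flag arrived in later OpenSolaris builds than this thread; at the time, an export/import after relabeling was the usual workaround.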
2007 Jul 13
28
ZFS and powerpath
How much fun can you have with a simple thing like powerpath? Here's the story: I have a (remote) system with access to a couple of EMC LUNs. Originally, I set it up with mpxio and created a simple zpool containing the two LUNs. It's now been reconfigured to use powerpath instead of mpxio. My problem is that I can't import the pool. I get: pool: ###### id:
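One thing worth trying in this situation: zpool import scans /dev/dsk by default, so if the PowerPath pseudo-devices live in a different directory you can point the import at it explicitly. A sketch, assuming a pool named tank (that PowerPath presents emcpower* nodes is an assumption here, and names vary by install):

# ls /dev/dsk/emcpower*
# zpool import -d /dev/dsk tank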
2007 Apr 23
14
concatenation & stripe - zfs?
I want to configure my ZFS like this:
concatination_stripe_pool:
  concatenation: lun0_controller0 lun1_controller0
  concatenation: lun2_controller1 lun3_controller1
1. Is there any option to implement this in ZFS? 2. Is there any other way to get the same configuration? thanks
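ZFS has no concat vdev type: every top-level vdev participates in dynamic striping. The closest equivalent is to give the pool all four LUNs as plain vdevs and let ZFS spread allocations across both controllers; a sketch, with the poster's LUN names standing in for real devices:

# zpool create concatination_stripe_pool \
    lun0_controller0 lun1_controller0 \
    lun2_controller1 lun3_controller1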
2010 Feb 13
2
Oracle Performance - ZFS vs UFS (Jason King)
> There is of course the caveat of using raw devices with databases (it becomes harder to track usage, especially as the number of LUNs increases, slightly less visibility into their usage statistics at the OS level). However perhaps now someone can implement the CR I filed a long time ago to add ASM support to libfstyp.so that would allow zfs, mkfs, format, etc. to
2006 Oct 16
11
Configuring a 3510 for ZFS
Hi folks, A colleague and I are currently involved in a prototyping exercise to evaluate ZFS against our current filesystem. We are looking at the best way to arrange the disks in a 3510 storage array. We have been testing with the 12 disks on the 3510 exported as "nraid" logical devices. We then configured a single ZFS pool on top of this, using two raid-z arrays. We are getting
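A layout along the lines described, with the 12 exported disks split into two six-disk raid-z vdevs, might look like this sketch (device names are placeholders):

# zpool create tank \
    raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
    raidz c2t6d0 c2t7d0 c2t8d0 c2t9d0 c2t10d0 c2t11d0

ZFS then stripes dynamically across the two raid-z vdevs.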
2006 Jun 21
2
ZFS and Virtualization
Hi experts, I have a few issues about ZFS and virtualization. Virtualization and performance: when filesystem traffic occurs on a zpool containing only spindles dedicated to this zpool, I/O can be distributed evenly. When the zpool is located on a LUN sliced from a RAID group shared by multiple systems, the capability of doing I/O from this zpool will be limited. Avoiding or limiting I/O to
2006 Sep 13
10
Snapshots and backing store
Hi, There's something really bizarre in the ZFS snapshot specs: "Uses no separate backing store." Hmm... if I want to dedicate one physical volume somewhere in my SAN as THE snapshot backing store... it becomes impossible to do! Really bad. Is there any chance of a "backing-store-file" option in a future release? In the same vein, it would be great to
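For reference, ZFS snapshots are copy-on-write within the pool itself: a snapshot initially consumes no extra space and simply pins the blocks it references, which is why there is no separate backing store to configure. A quick sketch, with hypothetical dataset names:

# zfs snapshot tank/home@before-upgrade
# zfs list -t snapshot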
2008 Feb 08
4
List of supported multipath drivers
Where can I find a list of supported multipath drivers for ZFS? Keith McAndrew Senior Systems Engineer Northern California SUN Microsystems - Data Management Group Keith.McAndrew at SUN.com
2007 Jul 04
3
zfs dynamic lun expansion
Hi, I had 2 LUNs in a ZFS mirrored config. I increased the size of both LUNs by x gig and offlined/onlined the individual LUNs in the zpool, and also tried an export/import of the zpool, but I am unable to see the increased size. What would I need to do to see the increased size? Or is it not possible yet?
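On releases that support it, the per-device expansion step is explicit; neither offline/online nor export/import by itself re-reads the grown label. A sketch with placeholder names (zpool online -e arrived in later builds than the one this poster was likely running):

# zpool online -e tank c4t0d0
# zpool online -e tank c5t0d0
# zpool list tank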
2011 May 19
2
Faulted Pool Question
I just got a call from another of our admins, as I am the resident ZFS expert, and they have opened a support case with Oracle, but I figured I'd ask here as well, as this forum often provides better, faster answers :-) We have a server (M4000) with 6 FC-attached SE-3511 disk arrays (some behind a 6920 DSP engine). There are many LUNs, all about 500 GB and mirrored via ZFS. The LUNs
2007 Apr 19
14
Experience with Promise Tech. arrays/jbod's?
Greetings, In looking for inexpensive JBOD and/or RAID solutions to use with ZFS, I've run across the recent "VTrak" SAS/SATA systems from Promise Technologies, specifically their E-class and J-class series: E310f FC-connected RAID: http://www.promise.com/product/product_detail_eng.asp?product_id=175 E310s SAS-connected RAID:
2008 Jan 15
4
Moving zfs to an iSCSI Equallogic LUN
We have a mirror setup in ZFS that's 73GB (two internal disks on a Sun Fire V440). We are going to attach this system to an EqualLogic box, and will attach an iSCSI LUN from the EqualLogic box to the V440 of about 200GB. The EqualLogic box is configured as hardware RAID 50 (two hot spares for redundancy). My question is what's the best approach to moving the ZFS
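One common approach (a sketch, not necessarily the thread's conclusion, with placeholder device names): attach the iSCSI LUN as a third side of the existing mirror, let it resilver, then detach the internal disks:

# zpool attach tank c1t0d0 c4t0d0   (c4t0d0 = the new iSCSI LUN)
# zpool status tank                 (wait for the resilver to complete)
# zpool detach tank c1t0d0
# zpool detach tank c1t1d0

Note this leaves the pool on a single unmirrored LUN, relying on the EqualLogic RAID 50 for redundancy.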
2007 Mar 28
6
ZFS and UFS performance
We are running Solaris 10 11/06 on a Sun V240 with 2 CPUs and 8 GB of memory. This V240 is attached to a 3510 FC that has 12 x 300 GB disks. The 3510 is configured as HW RAID 5 with 10 disks and 2 spares, and it's exported to the V240 as a single LUN. We create iso images of our product in the following way (high-level):
# mkfile 3g /isoimages/myiso
# lofiadm -a /isoimages/myiso
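The snippet ends at lofiadm; the usual lofi sequence continues along these lines (a sketch; the lofi device number is whatever lofiadm prints):

# mkfile 3g /isoimages/myiso
# lofiadm -a /isoimages/myiso
/dev/lofi/1
# newfs /dev/rlofi/1
# mount /dev/lofi/1 /mnt
(copy the product tree into /mnt, then unmount and use the image)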
2006 Sep 17
2
ZFS layout on hardware RAID-5?
Greetings, I followed closely the thread "ZFS and Storage", and other discussions about using ZFS on hardware RAID arrays, since we are deploying ZFS in a similar situation here. I'm sure I'm oversimplifying, but the consensus for general filesystem-type storage needs, as I've read it, tends toward doing ZFS RAID-Z (or RAID-Z2) on LUNs consisting of hardware
2007 Jan 10
2
using veritas dmp with ZFS (but not vxvm)
We have some HDS storage that isn't supported by mpxio, so we have to use Veritas DMP to get multipathing. What's the recommended way to use DMP storage with ZFS? I want to use DMP but get at the multipathed virtual LUNs at as low a level as possible, to avoid using VxVM as much as possible. I figure there's no point in having overhead from 2 volume managers if we can avoid it. Has anyone
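In principle one could build the pool directly on the DMP metanodes and skip VxVM volumes entirely; a sketch, with the node names under /dev/vx/dmp purely illustrative (actual names depend on the enclosure):

# zpool create tank /dev/vx/dmp/disk_0 /dev/vx/dmp/disk_1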
2006 Sep 28
13
jbod questions
Folks, We are in the process of purchasing new SANs that our mail server runs on (JES3). We have moved our mailstores to ZFS and continue to have checksum errors -- they are corrected, which already improves on the UFS inode errors that required a system shutdown and fsck. So I am recommending that we buy small JBODs, do raidz2, and let ZFS handle the raiding of these boxes. As we need more
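A raidz2-on-JBOD layout of the sort being recommended might look like this sketch (a seven-disk example with a hot spare; device names are placeholders):

# zpool create mailpool \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
    spare c2t6d0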
2007 Oct 24
1
memory issue
Hello, I received the following question from a company I am working with: We are having issues with our early experiments with ZFS with volumes mounted from a 6130. Here is what we have and what we are seeing: T2000 (geronimo) on the fibre with a 6130. 6130 configured with UFS volumes mapped and mounted on several other hosts. It's the only host using ZFS volume (only
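Memory issues with early ZFS often came down to the ARC growing to consume most of RAM; the excerpt cuts off before the actual symptom, so this is only a guess. The common mitigation was to cap the ARC in /etc/system (value in bytes; takes effect after a reboot):

set zfs:zfs_arc_max = 0x100000000

This example caps the ARC at 4 GB.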
2009 Sep 08
4
Can ZFS simply concatenate LUNs (eg no RAID0)?
Hi, I have a disk array that is providing striped LUNs to my Solaris box. Hence I'd like to simply concat those LUNs without adding another layer of striping. Is this possible with ZFS? As far as I understand, if I use zpool create myPool lun-1 lun-2 ... lun-n I will get a RAID0 striping where each data block is split across all "n" LUNs. If that's
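Worth noting: ZFS dynamic striping is not classic RAID0. Each block is written whole to a single top-level vdev, and blocks are distributed across vdevs, rather than each block being chunked across all n LUNs. There is still no pure concat mode, but the per-vdev distribution can be observed; a sketch reusing the poster's LUN names:

# zpool create myPool lun-1 lun-2 lun-3
# zpool iostat -v myPool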