similar to: List of supported multipath drivers

Displaying 20 results from an estimated 800 matches similar to: "List of supported multipath drivers"

2013 Jan 07
5
mpt_sas multipath problem?
Greetings, We're trying out a new JBOD here. Multipath (mpxio) is not working, and we could use some feedback and/or troubleshooting advice. The OS is oi151a7, running on an existing server with a 54TB pool of internal drives. I believe the server hardware is not relevant to the JBOD issue, although the internal drives do appear to the OS with multipath device names (despite the fact
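(Not from the thread, but a minimal first-checks sketch, assuming a stock illumos layout: confirm MPxIO is enabled for the mpt_sas driver and rebuild the device paths.)

    # is multipathing still disabled for the SAS HBA driver?
    grep mpxio-disable /kernel/drv/mpt_sas.conf
    # enable MPxIO for mpt_sas only and update vfstab/dump config (reboot required)
    stmsboot -D mpt_sas -e
    # after reboot, multipathed LUNs should appear as single scsi_vhci nodes
    mpathadm list lu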
2007 Apr 19
14
Experience with Promise Tech. arrays/JBODs?
Greetings, In looking for inexpensive JBOD and/or RAID solutions to use with ZFS, I've run across the recent "VTrak" SAS/SATA systems from Promise Technologies, specifically their E-class and J-class series: E310f FC-connected RAID: http://www.promise.com/product/product_detail_eng.asp?product_id=175 E310s SAS-connected RAID:
2012 Apr 06
6
Seagate Constellation vs. Hitachi Ultrastar
Happy Friday, List! I'm spec'ing out a Thumper-esque solution and having trouble finding my favorite Hitachi Ultrastar 2TB drives at a reasonable post-flood price. The Seagate Constellations seem pretty reasonable given the market circumstances, but I don't have any experience with them. Anybody using these in their ZFS systems, and have you had good luck? Also, if
2008 Apr 04
10
ZFS and multipath with iSCSI
We're currently designing a ZFS fileserver environment with iSCSI based storage (for failover, cost, ease of expansion, and so on). As part of this we would like to use multipathing for extra reliability, and I am not sure how we want to configure it. Our iSCSI backend only supports multiple sessions per target, not multiple connections per session (and my understanding is that the
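(A hedged sketch of one common layout on a Solaris initiator, not the thread's conclusion: open multiple sessions per target and let MPxIO aggregate them; option behavior per iscsiadm(1M).)

    # ask the initiator for two sessions per target (MS/T)
    iscsiadm modify initiator-node -c 2
    # confirm both sessions came up, then check the aggregated vhci LUN
    iscsiadm list target -v
    mpathadm list lu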
2007 Dec 15
4
Is round-robin I/O correct for ZFS?
I'm testing an iSCSI multipath configuration on a T2000 with two disk devices provided by a NetApp filer. Both the T2000 and the NetApp have two Ethernet interfaces for iSCSI, going to separate switches on separate private networks. The scsi_vhci devices look like this in `format': 1. c4t60A98000433469764E4A413571444B63d0 <NETAPP-LUN-0.2-50.00GB>
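(For reference, a sketch of where the policy lives, assuming stock paths: scsi_vhci's global default is set in its driver.conf, and mpathadm reports what a given LUN is actually using.)

    # /kernel/drv/scsi_vhci.conf -- global default load-balancing policy
    load-balance="round-robin";

    # then verify per LUN (device name taken from the format output above)
    mpathadm show lu /dev/rdsk/c4t60A98000433469764E4A413571444B63d0s2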
2010 Jun 21
0
Seriously degraded SAS multipathing performance
I'm seeing seriously degraded performance with round-robin SAS multipathing. I'm hoping you guys can help me achieve full throughput across both paths. My System Config: OpenSolaris snv_134 2 x E5520 2.4 GHz Xeon Quad-Core Processors 48 GB RAM 2 x LSI SAS 9200-8e (eight-port external 6Gb/s SATA and SAS PCIe 2.0 HBA) 1 x Mellanox 40 Gb/s dual port card PCIe 2.0 1 x JBOD:
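(Not the thread's answer; a diagnostic sketch. First rule out a dead or degraded path, then measure aggregate throughput; one knob often suggested for streaming SAS workloads is switching scsi_vhci's policy from round-robin to logical-block, semantics per scsi_vhci(7D).)

    # both initiator ports should show Path State: OK (device name is an example)
    mpathadm show lu /dev/rdsk/c0tWWNd0s2
    # watch per-device MB/s while a streaming test load runs
    iostat -xnM 5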
2007 Feb 17
8
ZFS with SAN disks and multipathing
Hi, I just deployed ZFS on a SAN-attached disk array and it's working fine. How do I get the dual-pathing advantage of the disks (like DMP in Veritas)? Can someone point me to the correct doc and setup? Thanks in Advance. Rgds Vikash Gupta
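(A short sketch of the MPxIO route, Solaris's rough DMP equivalent; commands per stmsboot(1M).)

    stmsboot -e    # enable MPxIO for supported HBAs, then reboot when prompted
    stmsboot -L    # map the old cXtYdZ names to the new scsi_vhci names
    zpool status   # pools follow along; disks show up under single c.t<WWN>d0 names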
2009 Jan 09
7
Desperate question about MPXIO with ZFS-iSCSI
I'm trying to set up an iSCSI connection (with MPxIO) between my Vista64 workstation and a ZFS storage machine running OpenSolaris 10 (I forget the exact version). On the ZFS machine, I have two NICs. NIC #1 is 192.168.1.102, and NIC #2 is 192.168.2.102. The NICs are connected to two separate switches serving two separate IP spaces. On my Vista64 machine, I also have two NICs connected in
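(From memory and unverified against that OpenSolaris build, so treat as a hypothetical sketch: on the target side, the old iscsitadm syntax bound a target to both NICs via two target portal groups; 'mytarget' is a placeholder name.)

    iscsitadm create tpgt 1
    iscsitadm modify tpgt -i 192.168.1.102 1
    iscsitadm create tpgt 2
    iscsitadm modify tpgt -i 192.168.2.102 2
    iscsitadm modify target -p 1 mytarget
    iscsitadm modify target -p 2 mytarget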
2012 Oct 03
14
Changing rpool device paths/drivers
Hello all, It has often been asked and discussed on the list how to change rpool HDDs from AHCI to IDE mode and back, with the modern routine involving reconfiguration of the BIOS, bootup from separate live media, a simple import and export of the rpool, and bootup from the rpool. The documented way is to reinstall the OS upon HW changes. Both are inconvenient, to say the least.
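(The live-media routine mentioned above, condensed into a sketch; the import/export pair is what rewrites the cached device paths.)

    # booted from live media, not from the rpool itself:
    zpool import -f rpool    # import under the new controller/driver paths
    zpool export rpool       # export records the corrected paths in the labels
    # reboot from the rpool; some builds also want the boot archive refreshed
    # from within the booted BE: bootadm update-archive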
2010 May 28
21
expand zfs for OpenSolaris running inside vm
Hello, all. I have constrained disk space (only 8GB) while running the OS inside a VM. Now I want to add more. It is easy to add for the VM, but how can I grow the filesystem in the OS? I cannot use autoexpand because it isn't implemented in my system: $ uname -a SunOS sopen 5.11 snv_111b i86pc i386 i86pc If it were snv_171 it would be great, right? Doing the following: o added a new virtual HDD (it becomes
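(One workaround from the pre-autoexpand era, sketched with placeholder pool/device names: resilver onto a larger virtual disk, then reimport so the new size is picked up.)

    # second, larger virtual HDD attached to the VM as c7t1d0 (example name)
    zpool replace tank c7t0d0 c7t1d0   # resilver onto the bigger disk
    # old builds notice the extra space only after a fresh import
    zpool export tank && zpool import tank
    # for a root pool, do the export/import from live media, or just reboot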
2008 Jul 09
8
Using zfs boot with MPxIO on T2000
Here is what I have configured: T2000 with OBP 4.28.6 2008/05/23 12:07 with 2 - 72 GB disks as the root disks OpenSolaris Nevada Build 91 Solaris Express Community Edition snv_91 SPARC Copyright 2008 Sun Microsystems, Inc. All Rights Reserved. Use is subject to license terms. Assembled 03 June 2008
2006 Dec 22
6
Re: Difference between ZFS and UFS with one LUN froma SAN
This may not be the answer you're looking for, but I don't know if it's something you've thought of. If you're pulling a LUN from an expensive array, with multiple HBAs in the system, why not run mpxio? If you ARE running mpxio, there shouldn't be an issue with a path dropping. I have the setup above in my test lab and pull cables
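(A quick way to watch that failover actually happens during such cable pulls, sketched with a placeholder device name:)

    mpathadm list lu                         # Total Path Count vs Operational Path Count
    mpathadm show lu /dev/rdsk/c0tWWNd0s2    # per-path Path State while a cable is out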
2006 Dec 21
1
Dtrace + mpxio
Hi! I am using MPxIO for multipathing across four HBAs. Now I need to know how much I/O per second each HBA card is serving. With EMC PowerPath installed, we could get this using powermt display every=5. Is there any way I can use DTrace to extract this output? I guess MPxIO just shows one device name to the OS, so dev_pathname won't be able to tell me per-HBA I/O. Regds,
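(A hedged sketch, not a known-good recipe: the io provider confirms the single vhci node, and per-HBA counting needs fbt probes below the vhci, whose function names vary by driver; the start-routine name below is a placeholder to be filled in from the probe listing.)

    # shows one scsi_vhci pathname per LUN -- exactly the limitation described
    dtrace -n 'io:::start { @[args[1]->dev_pathname] = count(); } tick-5sec { printa(@); trunc(@); }'
    # find the HBA driver's SCSA start routine, then count entries into it
    dtrace -ln 'fbt:qlc::entry' | grep -i start        # qlc = example FC HBA driver
    dtrace -n 'fbt:qlc:REPLACE_WITH_START_FUNC:entry { @ = count(); }'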
2007 Jan 10
2
using veritas dmp with ZFS (but not vxvm)
We have some HDS storage that isn't supported by mpxio, so we have to use Veritas DMP to get multipathing. What's the recommended way to use DMP storage with ZFS? I want to use DMP but get at the multipathed virtual LUNs at as low a level as possible, to avoid using VxVM as much as possible. I figure there's no point in having the overhead of two volume managers if we can avoid it. Has anyone
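(Sketch only, and the thread is asking exactly whether this is sane: point zpool straight at the DMP metanodes and keep the disks out of VxVM disk groups; metanode names are examples, see vxdisk list / vxdmpadm getdmpnode for the real ones.)

    zpool create tank /dev/vx/dmp/hds0_0 /dev/vx/dmp/hds0_1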
2007 Jul 13
28
ZFS and powerpath
How much fun can you have with a simple thing like powerpath? Here's the story: I have a (remote) system with access to a couple of EMC LUNs. Originally, I set it up with mpxio and created a simple zpool containing the two LUNs. It's now been reconfigured to use powerpath instead of mpxio. My problem is that I can't import the pool. I get: pool: ###### id:
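(A hedged guess at the usual fix, with example device names: make zpool import scan the powerpath pseudo-devices instead of the stale mpxio paths.)

    # scan everything, including the emcpowerNc nodes, for pool labels
    zpool import -d /dev/dsk
    # or restrict the scan to the powerpath devices only
    mkdir /tmp/pp && ln -s /dev/dsk/emcpower0c /tmp/pp/ && zpool import -d /tmp/pp <pool>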
2011 May 30
13
JBOD recommendation for ZFS usage
Dear all, Sorry if it's kind of off-topic for the list, but after talking to lots of vendors I'm running out of ideas... We are looking for JBOD systems which (1) hold 20+ 3.5" SATA drives, (2) are rack mountable, (3) have all the nice hot-swap stuff, (4) allow 2 hosts to connect via SAS (4+ lanes per host) and see all available drives as disks, no RAID volume. In a
2010 Nov 05
3
ZFS vs mpxio vs cfgadm in Solaris.
Folks, I'm trying to figure out whether we should give ZFS / mpxio a shot on one of our research servers, or simply skip it (as we have previously). In Nov 2009 Cindy responded to a thread concerning ZFS device issues, cfgadm, and mpxio: http://mail.opensolaris.org/pipermail/zfs-discuss/2009-November/033496.html I've got an x2270 with the Sun EZ-SAS HBA and external SATA
2012 Dec 04
2
[releng_8 tinderbox] failure on sparc64/sparc64
TB --- 2012-12-04 23:10:18 - tinderbox 2.9 running on freebsd-legacy2.sentex.ca TB --- 2012-12-04 23:10:18 - FreeBSD freebsd-legacy2.sentex.ca 9.0-RELEASE FreeBSD 9.0-RELEASE #0: Tue Jan 3 07:46:30 UTC 2012 root at farrell.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC amd64 TB --- 2012-12-04 23:10:18 - starting RELENG_8 tinderbox run for sparc64/sparc64 TB --- 2012-12-04 23:10:18 - cleaning
2011 Aug 16
2
solaris 10u8 hangs with message Disconnected command timeout for Target 0
Hi, My Solaris storage server hangs. I go to the console and the messages[1] below are displayed there. I can't log in on the console, and it seems the I/O is totally blocked. The system is Solaris 10u8 on a Dell R710 with a Dell MD3000 disk array. 2 HBA cables connect the server and the MD3000. The symptom is random. It would be very appreciated if anyone can help me out. Regards, Ding [1] Aug 16
2006 Jun 21
2
ZFS and Virtualization
Hi experts, I have a few questions about ZFS and virtualization: Virtualization and performance: When filesystem traffic occurs on a zpool containing only spindles dedicated to that zpool, I/O can be distributed evenly. When the zpool is located on a LUN sliced from a RAID group shared by multiple systems, the capability of doing I/O from this zpool will be limited. Avoiding or limiting I/O to