similar to: Experience with Promise Tech. arrays/jbod's?

Displaying 20 results from an estimated 600 matches similar to: "Experience with Promise Tech. arrays/jbod's?"

2013 Jan 07
5
mpt_sas multipath problem?
Greetings, We're trying out a new JBOD here. Multipath (mpxio) is not working, and we could use some feedback and/or troubleshooting advice. The OS is oi151a7, running on an existing server with a 54TB pool of internal drives. I believe the server hardware is not relevant to the JBOD issue, although the internal drives do appear to the OS with multipath device names (despite the fact
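A quick way to see whether mpxio has actually claimed the JBOD paths on illumos/Solaris is to compare the pre/post-MPxIO device mappings and list the multipathed logical units; a minimal sketch (the device name below is a placeholder, not taken from the thread):

    # list non-STMS to STMS device name mappings
    stmsboot -L

    # list the logical units mpxio currently manages, with path counts
    mpathadm list lu

    # show detail (path states, initiator ports) for one LU; name is hypothetical
    mpathadm show lu /dev/rdsk/c0t5000C500ABCDEF01d0s2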
2012 Apr 06
6
Seagate Constellation vs. Hitachi Ultrastar
Happy Friday, List! I'm spec'ing out a Thumper-esque solution and having trouble finding my favorite Hitachi Ultrastar 2TB drives at a reasonable post-flood price. The Seagate Constellations seem pretty reasonable given the market circumstances but I don't have any experience with them. Anybody using these in their ZFS systems and have you had good luck? Also, if
2008 Feb 08
4
List of supported multipath drivers
Where can I find a list of supported multipath drivers for ZFS? Keith McAndrew Senior Systems Engineer Northern California SUN Microsystems - Data Management Group Keith.McAndrew at SUN.com 916 715 8352 Cell CONFIDENTIALITY NOTICE The information contained in this transmission may contain privileged and confidential information of SUN
2008 Apr 12
5
Newbie question: ZFS on Xserve RAID with Solaris 10
--Apologies if you get two copies of this message - it was submitted for moderation and hasn't appeared on the list in two days, so I'm resubmitting. Hi all, Just a quick question. Is it possible to utilise an Apple Xserve RAID as an array for use with ZFS with RAID-Z in Solaris? I've seen various mentions of 'zfs' and 'xserve
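Assuming the Xserve RAID is set to export its disks (or slices) as plain LUNs that Solaris can see, building a RAID-Z pool on them is an ordinary zpool create; a sketch with hypothetical device names:

    # the cXtYdZ names are placeholders for whatever format/devfsadm reports
    zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0
    zpool status tank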
2006 Jul 28
20
3510 JBOD ZFS vs 3510 HW RAID
Hi there. Is it fair to compare the two solutions using Solaris 10 U2 and a commercial database (SAP SD scenario)? The cache on the HW RAID helps, and the CPU load is lower... but the solution costs more and you _might_ not need the performance of the HW RAID. Has anybody with access to these units done a benchmark comparing the performance and, with the price list in hand, come to a conclusion?
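For a rough JBOD-vs-HW-RAID comparison, a simple sequential write plus zpool iostat gives a first-order number before running the real SAP/database load; a minimal sketch (pool name and sizes are arbitrary, and this says nothing about random I/O):

    # sequential write of ~8 GB; large enough to get past the ARC/controller cache
    dd if=/dev/zero of=/tank/testfile bs=1024k count=8192

    # watch per-vdev throughput while the test runs
    zpool iostat -v tank 5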
2008 Apr 04
10
ZFS and multipath with iSCSI
We're currently designing a ZFS fileserver environment with iSCSI based storage (for failover, cost, ease of expansion, and so on). As part of this we would like to use multipathing for extra reliability, and I am not sure how we want to configure it. Our iSCSI backend only supports multiple sessions per target, not multiple connections per session (and my understanding is that the
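One common way to get two independent paths when the array only supports multiple sessions per target is to reach the same target through two portals and let mpxio coalesce them; a sketch for the Solaris initiator side (addresses are made up):

    # enable MPxIO for the iSCSI initiator (Solaris 10 era: /kernel/drv/iscsi.conf)
    #   mpxio-disable="no";

    # discover the same target via two portals on the two networks
    iscsiadm add discovery-address 192.168.10.1:3260
    iscsiadm add discovery-address 192.168.20.1:3260
    iscsiadm modify discovery --sendtargets enable

    # rebuild device nodes and confirm a single multipathed LU appears
    devfsadm -i iscsi
    mpathadm list lu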
2006 Dec 22
6
Re: Difference between ZFS and UFS with one LUN froma SAN
This may not be the answer you're looking for, but I don't know if it's something you've thought of. If you're pulling a LUN from an expensive array, with multiple HBAs in the system, why not run mpxio? If you ARE running mpxio, there shouldn't be an issue with a path dropping. I have the setup above in my test lab and pull cables
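If the array is on the mpxio support list, turning it on is essentially a one-liner plus a reboot, and ZFS itself does not care which path name the pool devices come back under; a sketch:

    # enable MPxIO for supported HBAs (the tool will ask to reboot)
    stmsboot -e

    # after the reboot, verify how the old cXtYdZ names map to the new scsi_vhci names
    stmsboot -L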
2006 Dec 21
1
Dtrace + mpxio
Hi! I am using MPxIO for multipathing across four HBAs. Now I need to know how much I/O per second each HBA card is serving. With EMC PowerPath installed, we could get this using powermt display every=5. Is there any way I can use DTrace to extract this output? I guess MPxIO just shows one device name to the OS, so dev_pathname won't be able to tell me per-HBA I/O. Regds,
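The io provider only sees the single scsi_vhci device that mpxio presents, so it cannot split traffic per HBA by itself, but it is still the easiest way to get an I/O and byte count per device; a sketch (interval and aggregation keys are arbitrary, and per-path accounting would need probes in the HBA driver layer instead):

    # count I/Os and sum bytes per device, printed every 5 seconds
    dtrace -n '
        io:::start
        {
            @iops[args[1]->dev_statname]  = count();
            @bytes[args[1]->dev_statname] = sum(args[0]->b_bcount);
        }
        tick-5sec
        {
            printa("%-20s %@d I/Os\n", @iops);
            printa("%-20s %@d bytes\n", @bytes);
            trunc(@iops); trunc(@bytes);
        }'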
2008 Feb 15
38
Performance with Sun StorageTek 2540
Under Solaris 10 on a 4 core Sun Ultra 40 with 20GB RAM, I am setting up a Sun StorageTek 2540 with 12 300GB 15K RPM SAS drives and connected via load-shared 4Gbit FC links. This week I have tried many different configurations, using firmware managed RAID, ZFS managed RAID, and with the controller cache enabled or disabled. My objective is to obtain the best single-file write performance.
2008 Feb 01
2
Un/Expected ZFS performance?
I'm running Postgresql (v8.1.10) on Solaris 10 (Sparc) from within a non-global zone. I originally had the database "storage" in the non-global zone (e.g. /var/local/pgsql/data on a UFS filesystem) and was getting performance of "X" (e.g. from a TPC-like application: http://www.tpc.org). I then wanted to try relocating the database storage from the zone (UFS
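For the ZFS side of this kind of test, the usual pattern is to create a dedicated dataset for the database and hand it to the non-global zone; a sketch with made-up pool, dataset, and zone names (recordsize=8k is a commonly suggested, not mandatory, Postgres tuning):

    # create a dataset for the database files
    zfs create -o recordsize=8k tank/pgdata

    # delegate it to the zone so the zone can manage it
    zonecfg -z pgzone "add dataset; set name=tank/pgdata; end"

    # (re)boot the zone, then mount/point the database at it from inside
    zoneadm -z pgzone reboot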
2007 Jan 10
2
using veritas dmp with ZFS (but not vxvm)
We have some HDS storage that isn't supported by mpxio, so we have to use Veritas DMP to get multipathing. What's the recommended way to use DMP storage with ZFS? I want to use DMP but get at the multipathed virtual LUNs at as low a level as possible, to avoid using VxVM as much as possible. I figure there's no point in having the overhead of two volume managers if we can avoid it. Has anyone
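Since zpool accepts full paths to block devices, one possible approach (untested here, device names made up) is to point it straight at the DMP metanodes under /dev/vx/dmp rather than layering VxVM volumes underneath:

    # DMP pseudo-device names depend on the enclosure naming scheme; placeholders below
    zpool create tank mirror /dev/vx/dmp/hds9500_0 /dev/vx/dmp/hds9500_1
    zpool status tank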
2006 Apr 13
1
device-mapper multipath
I am attempting to get multipath working with device-mapper (CentOS 4.2 and 4.3). It works on EVERY install of mine from RH (also v4.2, 4.3), but the same multipath.conf imported to all my installs of CentOS does not work. Note that I have tested a working 4.2 configuration file from RH on CentOS 4.2 and a working 4.3 configuration (it changed slightly) on CentOS 4.3. Neither worked. Our production
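For comparison, a stripped-down RHEL/CentOS 4-era multipath.conf and the commands to reload it; everything below is a generic default rather than anything taken from this thread:

    # /etc/multipath.conf (minimal sketch; defaults and devices sections omitted)
    # RHEL/CentOS 4 still calls this section devnode_blacklist (renamed blacklist in 5)
    devnode_blacklist {
            devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
            devnode "^hd[a-z]"
    }

    # reload the config and list the resulting maps
    service multipathd restart
    multipath -v2
    multipath -ll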
2007 Jul 13
28
ZFS and powerpath
How much fun can you have with a simple thing like powerpath? Here's the story: I have a (remote) system with access to a couple of EMC LUNs. Originally, I set it up with mpxio and created a simple zpool containing the two LUNs. It's now been reconfigured to use powerpath instead of mpxio. My problem is that I can't import the pool. I get: pool: ###### id:
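When a pool was built on mpxio names and the host is switched to PowerPath, zpool import sometimes needs to be pointed at only the PowerPath pseudo-devices so it does not trip over duplicate labels on the native paths; a sketch (directory and names are hypothetical, and the masked pool name stays whatever '######' really is):

    # list what zpool can import from the default device directory
    zpool import

    # scan a directory containing only the emcpower pseudo-device nodes
    mkdir /tmp/emcdevs
    ln -s /dev/dsk/emcpower* /tmp/emcdevs/
    zpool import -d /tmp/emcdevs <pool-name-or-id>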
2012 Oct 03
14
Changing rpool device paths/drivers
Hello all, The question of "how to change rpool HDDs from AHCI to IDE mode" and back has often been asked and discussed on the list, with the modern routine involving reconfiguration of the BIOS, bootup from separate live media, a simple import and export of the rpool, and bootup from the rpool. The documented way is to reinstall the OS upon HW changes. Both are inconvenient, to say the least.
2011 May 30
13
JBOD recommendation for ZFS usage
Dear all Sorry if it's kind of off-topic for the list, but after talking to lots of vendors I'm running out of ideas... We are looking for JBOD systems which (1) hold 20+ 3.5" SATA drives (2) are rack mountable (3) have all the nice hot-swap stuff (4) allow 2 hosts to connect via SAS (4+ lines per host) and see all available drives as disks, no RAID volume. In a
2009 Jan 09
7
Desperate question about MPXIO with ZFS-iSCSI
I'm trying to set up an iSCSI connection (with MPxIO) between my Vista64 workstation and a ZFS storage machine running OpenSolaris 10 (forget the exact version). On the ZFS machine, I have two NICs. NIC #1 is 192.168.1.102, and NIC #2 is 192.168.2.102. The NICs are connected to two separate switches serving two separate IP spaces. On my Vista64 machine, I also have two NICs connected in
2010 Nov 05
3
ZFS vs mpxio vs cfgadm in Solaris.
Folks, I'm trying to figure out whether we should give ZFS / mpxio a shot on one of our research servers, or simply skip it (as we have previously). In Nov 2009 Cindy responded to a thread concerning ZFS device issues, cfgadm, and mpxio: http://mail.opensolaris.org/pipermail/zfs-discuss/2009-November/033496.html I've got an x2270 with the Sun EZ-SAS HBA and external SATA
2008 Jul 09
8
Using zfs boot with MPxIO on T2000
Here is what I have configured: T2000 with OBP 4.28.6 2008/05/23 12:07 with 2 - 72 GB disks as the root disks OpenSolaris Nevada Build 91 Solaris Express Community Edition snv_91 SPARC Copyright 2008 Sun Microsystems, Inc. All Rights Reserved. Use is subject to license terms. Assembled 03 June 2008
2006 Jun 21
2
ZFS and Virtualization
Hi experts, I have a few questions about ZFS and virtualization: Virtualization and performance: When filesystem traffic occurs on a zpool containing only spindles dedicated to this zpool, I/O can be distributed evenly. When the zpool is located on a LUN sliced from a RAID group shared by multiple systems, the capability of doing I/O from this zpool will be limited. Avoiding or limiting I/O to
2007 Jul 18
1
Converting exisitng ZFS pool to MPxIO
We have a Sun v890, and I'm interested in converting an existing ZFS zpool from c#t#d# to MPxIO. % zpool status pool: data state: ONLINE status: ONLINE scrub: scrub completed with 0 errors on Sun Jul 15 10:58:33 2007 config: NAME STATE READ WRITE CKSUM data ONLINE 0 0 0 mirror ONLINE 0 0 0 c1t2d0
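ZFS identifies vdevs by their on-disk labels rather than by the cXtYdZ path, so the usual conversion is simply to enable MPxIO and let the pool come back under the new scsi_vhci names; a sketch using the pool name shown above:

    # export the pool and enable MPxIO (the tool will ask for a reboot)
    zpool export data
    stmsboot -e

    # after the reboot, re-import and confirm the vdevs now show the long WWN-style names
    zpool import data
    zpool status data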