On 07/11/2013 21:14, Todor Petkov wrote:
> Hello,
>
> I have an IBM Storwize V3700 storage system, connected to 2 IBM x3550 M4
> servers via fibre channel. The servers have QLogic ISP2532-based 8Gb
> Fibre Channel to PCI Express HBA cards and run CentOS 5.10.
>
> When I export a volume to the servers, each of them sees the volume
> twice, i.e. as /dev/sdb and /dev/sdc, with the same size.
>
> Previously I have installed many systems with the IBM DS3500 series of
> storage, and the servers see one disk per export. I am using the MPP
> drivers from this package:
>
> http://support.netapp.com/NOW/public/apbu/oemcp/apbu_lic.cgi/public/apbu/oemcp/09.03.0C05.0652/rdac-LINUX-09.03.0C05.0652-source.tar.gz
>
> I found a page on the IBM site saying to configure multipath (I never
> did that for the DS3500 series). When I did, a new device appeared,
> /dev/dm-7, but my goal is to have a single /dev/sdX type of device and
> no device mapper. I read that Storwize supports DMP RDAC and DS supports
> MPP RDAC, but does anyone else have experience with such a setup and can
> give advice/hints?
>
> Thanks in advance.
It's been a while since I set this up, and this is on XIV, not a V3700
(which we also have, but it only has VMware connected to it), but this is
a RHEL 5.10 box, so it is reasonably comparable.
This is, IMO, normal behaviour for a multipath device. For example, on
one of our boxes, if I run:
[root@server ~]# multipath -ll
mpath0 (20017380011ea0c74) dm-2 IBM,2810XIV
[size=224G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
\_ 3:0:0:1 sdb 8:16 [active][ready]
\_ 3:0:1:1 sdc 8:32 [active][ready]
\_ 3:0:2:1 sdd 8:48 [active][ready]
\_ 3:0:3:1 sde 8:64 [active][ready]
\_ 3:0:4:1 sdf 8:80 [active][ready]
\_ 3:0:5:1 sdg 8:96 [active][ready]
\_ 4:0:0:1 sdh 8:112 [active][ready]
\_ 4:0:1:1 sdi 8:128 [active][ready]
\_ 4:0:2:1 sdj 8:144 [active][ready]
\_ 4:0:3:1 sdk 8:160 [active][ready]
\_ 4:0:4:1 sdl 8:176 [active][ready]
\_ 4:0:5:1 sdm 8:192 [active][ready]
You can see there are 12 sdX devices, but they all map to just one LUN.
With multipathd installed and running, this all maps to a single volume
under /dev/mpath/mpath0, which I then use LVM to manage:
  --- Physical volume ---
  PV Name               /dev/mpath/mpath0
  VG Name               vg_data
  PV Size               224.00 GB / not usable 4.00 MB
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              57343
  Free PE               0
  Allocated PE          57343
  PV UUID               GY4ekC-KuXE-LyW6-kiHB-F9g6-ivB2-BD01Ih
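In case it helps, the rough shape of the setup on a CentOS/RHEL 5 box is
below. This is only a sketch, not our actual config: the blacklist entry
and the lv_data name are placeholders, so adjust them to your own
environment and check "multipath -ll" for your real devices/WWIDs.

# Install and enable the multipath tools
yum install device-mapper-multipath
chkconfig multipathd on

# /etc/multipath.conf - minimal illustrative example
defaults {
        user_friendly_names yes
}
blacklist {
        devnode "^sda$"      # keep multipath away from the local boot disk
}

# Start the daemon and check what it sees
service multipathd start
multipath -ll

# Put LVM on the multipath device rather than on /dev/sdb or /dev/sdc
pvcreate /dev/mpath/mpath0
vgcreate vg_data /dev/mpath/mpath0
lvcreate -n lv_data -l 100%FREE vg_data
mkfs.ext3 /dev/vg_data/lv_data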
This all works fine and allows us to lose paths to the SAN without
disruption to the servers. There is no reason to be using /dev/sdX
devices to address your underlying hardware; in fact it is considered
bad practice, as there is no assurance that when the server next boots
it will detect the hardware in the same order. You should really be
using UUIDs or device labels to address your storage, as those stay the
same between boots.
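For example, on the filesystem side it looks something like this (the
UUID and mount point below are made-up placeholders; get the real UUID
from blkid on your own volume):

# Find the UUID of the filesystem on the logical volume
blkid /dev/vg_data/lv_data

# /etc/fstab - mount by UUID (or LABEL=) instead of /dev/sdX;
# the UUID here is a placeholder
UUID=0a1b2c3d-1111-2222-3333-444455556666  /data  ext3  defaults  0 0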
Tris