similar to: Xen and Linux-RDAC driver for Engenio storage controller

Displaying results from an estimated 4000 matches similar to: "Xen and Linux-RDAC driver for Engenio storage controller"

2008 Feb 18
1
kernel-devel: compile MPP/RDAC kernel module
Hi CentOS users, I am running CentOS 4.6 with the latest updates. For I/O multipathing I need to use kernel 2.6.9-55 (IBM support matrix). Trying to compile the Engenio Linux RDAC driver[0], I fail to install kernel-devel for this exact kernel (with up2date); up2date always installs the newest one. # make clean ; make V=1 -C /lib/modules/2.6.9-55.ELsmp/build M=/root/tmp/linuxrdac-09.02.B5.15
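One workaround sometimes used in this situation (a sketch, not taken from the thread) is to pull the kernel-devel package that matches the running kernel straight from the CentOS vault instead of letting up2date pick the newest, then point the RDAC makefile at that build tree. The vault URL and package file name below are assumptions for an i386 install of the 2.6.9-55.ELsmp kernel mentioned above:

# uname -r                                                   # confirm the exact running kernel, here 2.6.9-55.ELsmp
# wget http://vault.centos.org/4.6/os/i386/CentOS/RPMS/kernel-smp-devel-2.6.9-55.EL.i686.rpm
# rpm -ivh kernel-smp-devel-2.6.9-55.EL.i686.rpm             # devel tree for the SMP kernel, installed alongside the newer one
# cd /root/tmp/linuxrdac-09.02.B5.15
# make clean
# make V=1 -C /lib/modules/2.6.9-55.ELsmp/build M=$(pwd)     # build against the matching tree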
2010 Jul 20
1
RDAC for IBM DS4700
Hi all, I have a problem with my servers. I use two HP blade servers with RHEL 4.6 installed on them, and an IBM DS4700 connected to them. Those servers run RHCS (Red Hat Cluster Suite) with GFS to handle an Oracle database. Yesterday, one partition from the storage suddenly went missing. I called IBM and they suggested using RDAC. The question is, why should I use IBM RDAC for
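Before deciding between IBM's RDAC (MPP) driver and dm-multipath, it can help to see how the DS4700 LUNs are currently being presented. A minimal check with stock RHEL 4 tools (a sketch; the thread does not say whether dm-multipath is installed):

# cat /proc/scsi/scsi     # every SCSI device the HBAs report, one entry per path
# multipath -ll           # multipath maps and path states, if multipathd is running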
2007 Sep 06
0
Zfs with storedge 6130
On 9/4/07 4:34 PM, "Richard Elling" <Richard.Elling at Sun.COM> wrote: > Hi Andy, > my comments below... > note that I didn't see zfs-discuss at opensolaris.org in the CC for the > original... > > Andy Lubel wrote: >> Hi All, >> >> I have been asked to implement a zfs based solution using storedge 6130 and >> I'm chasing my own
2011 Sep 05
2
[Xen-API] XCP - How to compile IBM Mpp-rdac driver in XCP
Hi all! I need to compile the IBM MPP-RDAC driver in XCP 1.0 to use with the IBM DS4700 storage. Is there any way to do this? -- Rogério da Costa
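There are no XCP-specific build steps in this snippet, but the general out-of-tree module pattern is the same as in the CentOS post above: a kernel build tree matching the running dom0 kernel, with the module makefile pointed at it. The paths below are placeholders, not XCP documentation:

# uname -r                                                  # note the exact dom0 kernel version
# make -C /lib/modules/$(uname -r)/build M=$(pwd) modules   # assumes matching kernel headers are present in dom0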
2006 Jan 27
2
Do I have a problem? (longish)
Hi, to keep the story short, here is the situation. I have 4 disks in a ZFS/SVM config: c2t9d0 9G, c2t10d0 9G, c2t11d0 18G, c2t12d0 18G. c2t11d0 is divided in two: selecting c2t11d0 [disk formatted] /dev/dsk/c2t11d0s0 is in use by zpool storedge. Please see zpool(1M). /dev/dsk/c2t11d0s1 is part of SVM volume stripe:d11. Please see metaclear(1M). /dev/dsk/c2t11d0s2 is in use by zpool storedge. Please
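A quick way to cross-check which slices ZFS and SVM each claim (standard Solaris commands, shown as a sketch rather than as output from the original post):

# zpool status storedge              # lists the c2t11d0sN slices the pool uses
# metastat -p d11                    # shows the slices behind the SVM stripe d11
# prtvtoc /dev/rdsk/c2t11d0s2        # prints the slice table so overlaps are visible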
2013 Nov 07
1
IBM Storwize V3700 storage - device names
Hello, I have IBM Storwize V3700 storage connected to 2 IBM x3550 M4 servers via Fibre Channel. The servers have QLogic ISP2532-based 8Gb Fibre Channel to PCI Express HBA cards and run CentOS 5.10. When I export a volume to the servers, each of them sees the volume twice, i.e. as /dev/sdb and /dev/sdc, with the same size. Previously I have installed many systems with the IBM DS3500 series of
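Seeing the same LUN as both /dev/sdb and /dev/sdc is expected with two FC paths and no multipath layer on top. A minimal dm-multipath setup on CentOS 5 (a sketch; the V3700 may need its own device section, which is not shown here) looks like:

# yum install device-mapper-multipath
# vi /etc/multipath.conf              # remove or narrow the default devnode "*" blacklist
# modprobe dm-multipath
# service multipathd start
# chkconfig multipathd on
# multipath -ll                       # the volume should now appear once under /dev/mapper/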
2008 Jan 22
0
zpool attach problem
On a V240 running s10u4 (no additional patches), I had a pool which looked like this: # zpool status pool: pool01 state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM pool01 ONLINE 0 0 0 mirror
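For context, zpool attach takes the pool name, an existing member of the vdev, and the new device to mirror onto it; a generic sketch with placeholder device names (not the ones from the V240):

# zpool attach pool01 c1t2d0 c1t3d0    # mirror c1t3d0 onto the existing c1t2d0
# zpool status pool01                  # watch the resilver progress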
2015 Feb 05
0
UC multipathd
>-----Original Message----- >From: centos-bounces at centos.org [mailto:centos-bounces at centos.org] On >Behalf Of Alexander Dalloz >Sent: 04 February 2015 22:44 >To: CentOS mailing list >Subject: Re: [CentOS] multipathd > >On 04.02.2015 at 15:02 Rushton Martin wrote: >> Our cluster was supplied with two IBM DS3400 RAID arrays connected >> with fibre channel.
2015 Feb 04
0
multipathd
On 04.02.2015 at 15:02 Rushton Martin wrote: > Our cluster was supplied with two IBM DS3400 RAID arrays connected with > fibre channel. Both are old and one is failing so we bought an IBM > V3700 to replace it. The V3700 complained that we were using IBM's > RDAC driver (true) and we were advised to change to using Linux > multipath. I've done that but the default
2015 Feb 04
0
multipathd
Our cluster was supplied with two IBM DS3400 RAID arrays connected with fibre channel. Both are old and one is failing, so we bought an IBM V3700 to replace it. The V3700 complained that we were using IBM's RDAC driver (true) and we were advised to change to using Linux multipath. I've done that, but the default configuration for the DS3400s is: device { vendor
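For reference, the stanza quoted above typically continues along these lines; the values here are illustrative rather than copied from the thread, and the authoritative built-in defaults can be dumped with multipathd -k followed by 'show config' (on older multipath-tools the prio line is spelled differently):

device {
        vendor                  "IBM"
        product                 "1726-4xx  FAStT"
        hardware_handler        "1 rdac"
        path_grouping_policy    group_by_prio
        prio                    rdac
        path_checker            rdac
        failback                immediate
}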