Displaying 20 results from an estimated 2000 matches similar to: "Boot from FC SAN CentOS 5.1 x86-64"
2011 Mar 14
2
Libvirt with multipath devices and/or FC on NPIV
Hello,
I am trying to find out a best practice for a specific scenario.
First of all I would like to know the proper way to set up
multipath, and whether the host or the guest should take care of it. Right now I have
a setup where I have one multipath which sets my host to boot from FC SAN. I
have another multipathed LUN in the host which is essentially a dm which I
attached to a guest, however
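One common split is to let the host own the multipathing and hand the guest a plain virtio disk backed by the dm device. A minimal sketch, assuming the second multipathed LUN shows up as /dev/mapper/mpathb on the host and the guest is named guest1 (both names hypothetical):

# virsh attach-disk guest1 /dev/mapper/mpathb vdb --persistent

The guest then sees a single /dev/vdb and never has to know about the paths; the alternative (exposing both paths and running multipathd inside the guest) is also possible, but then the guest has to carry the multipath configuration itself.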
2011 Sep 05
2
[Xen-API] XCP - How to compile IBM Mpp-rdac driver in XCP
Hi all!
I need to compile the IBM MPP-RDAC driver in XCP 1.0 to use it with the IBM
DS4700 storage.
Is there any way to do this?
--
Rogério da Costa
2007 Apr 15
1
Multipath-root (mpath) problems with CentOS 5
Hi list!
I have a server with a dual-port QLogic iSCSI HBA. I set up the same LUN for
both ports, and boot the CentOS installer with "linux mpath".
The installer detects multipathing fine, and creates the mpath0 device for the root disk.
Installation goes fine, and after the install the system boots up and works
from the multipath root device.
After install the setup is like this:
LUN 0 on
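For reference, after such an install the paths behind the mpath0 root device can be inspected with the stock multipath tools (exact output varies by release):

# multipath -ll
# dmsetup ls --target multipath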
2010 Jul 20
1
RDAC for IBM DS4700
Hi all,
I have a problem with my servers. I use two HP blade servers with RHEL 4.6
installed on them, and an IBM DS4700 connected to them.
Those servers are running RHCS (Red Hat Cluster Suite) with GFS to handle an
Oracle database.
Yesterday, one partition from the storage suddenly went missing. I called
IBM and they suggested using RDAC.
The question is: why should I use IBM RDAC for
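For what it's worth, if you stay on device-mapper-multipath rather than IBM's MPP/RDAC driver, DS4700-class arrays are normally driven through the rdac hardware handler in /etc/multipath.conf. The stanza below is only a sketch: the product string and the prio_callout path are assumptions, and the exact keywords differ between RHEL 4 and RHEL 5, so check them against your installed multipath-tools.

# cat >> /etc/multipath.conf <<'EOF'
devices {
        device {
                vendor                  "IBM"
                product                 "1814"
                hardware_handler        "1 rdac"
                path_checker            rdac
                path_grouping_policy    group_by_prio
                prio_callout            "/sbin/mpath_prio_rdac /dev/%n"
                failback                immediate
        }
}
EOF
# service multipathd restart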
2007 Apr 23
14
concatination & stripe - zfs?
I want to configure my ZFS like this:
concatenation_stripe_pool:
  concatenation
    lun0_controller0
    lun1_controller0
  concatenation
    lun2_controller1
    lun3_controller1
1. Is there any option to implement this in ZFS?
2. Is there another way to get the same configuration?
thanks
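As far as I know ZFS has no plain concatenation vdev: top-level vdevs are always dynamically striped, and each vdev is either a single disk or a mirror/raidz group. A rough equivalent of the layout above, assuming the LUNs appear as the hypothetical devices c0t0d0 ... c1t1d0, is simply a four-disk stripe:

# zpool create constripe c0t0d0 c0t1d0 c1t0d0 c1t1d0
# zpool status constripe

ZFS spreads writes across all four top-level vdevs, so you get striping across both controllers, but not the two-level concatenation-then-stripe structure sketched above.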
2012 May 31
1
NPIV setup?
I'm missing something.
The purpose of NPIV (as I understand it) is to give a guest OS an HBA that
it can scan, play with new luns, etc all without making changes to the
physical server(s) the guest is living in currently.
However, I can't find a way to either have the guest's XML config create
the HBA or for the physical server to successfully GIVE the HBA to the
guest. I can give
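One approach that may be what you're after, sketched with hypothetical names (scsi_host5 as the parent physical HBA): create an NPIV vHBA on the host with virsh nodedev-create, then pass the LUNs that appear behind it to the guest as disks, rather than giving the guest the HBA itself.

# cat > vhba.xml <<'EOF'
<device>
  <parent>scsi_host5</parent>
  <capability type='scsi_host'>
    <capability type='fc_host'/>
  </capability>
</device>
EOF
# virsh nodedev-create vhba.xml
# virsh nodedev-list --cap scsi_host

The new scsi_hostN that shows up is the vHBA; zoning and LUN masking on the fabric are done against its WWPN.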
2007 Dec 14
3
Qlogic HBA scanning issues with CentOS 5.1 ?
Hi,
I have a few servers (IBM HS20 and HS21 blades, IBM xSeries 3650 & others)
connected to a dual-fabric SAN through QLogic HBAs (23xx, 24xx).
Multipathing is done with device-mapper-multipath.
On CentOS 4.x I can scan for new SCSI devices without any problems, bring
them up with multipathing, and use them without any problems.
However, after I started installing CentOS 5.1 (did not
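For what it's worth, on the 2.6 kernels in CentOS 5 the bus behind a QLogic port can usually be rescanned straight from sysfs, without the QLogic tools (host0 here is just an example host number):

# echo "- - -" > /sys/class/scsi_host/host0/scan
# echo 1 > /sys/class/fc_host/host0/issue_lip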
2012 Jun 29
1
Storage Pools & nodedev-create specific scsi_host#?
Hello everyone,
Current host build is RHEL 6.2, soon to be upgrading.
I'm in the process of mapping out a KVM/RHEV topology. I have questions about the landscape of storage pools and the instantiation of specific scsi_host IDs using virsh nodedev-create and some magic XML definitions. I'm grouping these questions together because the answer to one may impact the other.
High-level
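As one data point, a SCSI-type storage pool can be tied to a given adapter like this (the pool name and host5 are hypothetical); whether that scsi_host number stays stable across reboots is exactly the open question:

# cat > fcpool.xml <<'EOF'
<pool type='scsi'>
  <name>fcpool</name>
  <source>
    <adapter name='host5'/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>
EOF
# virsh pool-define fcpool.xml
# virsh pool-start fcpool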
2004 Mar 06
1
OCFS and multipathing
I've got my RAC cluster running pretty smoothly now (thanks again for all
the help so far), but I only have single connections between the servers and
the RAID array. The servers each have two Qlogic HBAs, and I'd like to find
out if there's any reasonable way to implement multipathing.
The platform is RHEL 3, and Red Hat's knowledgebase indicates that they
strongly recommend
2012 Jan 30
1
fc storage examples
Hello,
I will have two servers with FC storage. The storage will be connected with
two links to both servers:

     ___________        ___________
    | server 1  |      | server 2  |
     -----------        -----------
      |       |          |       |
     -------------------------------
    |            storage            |
     -------------------------------

On the servers I will have multipathing enabled. The storage has
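With that topology each server should see every LUN once per link; a minimal sketch of enabling dm-multipath on each server (defaults only, assuming a RHEL/CentOS 6-era host where mpathconf exists):

# yum install device-mapper-multipath
# mpathconf --enable --with_multipathd y
# multipath -ll

multipath -ll should then show each LUN as one mpath device with two paths underneath.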
2009 Mar 18
2
connecting 2 servers using an FC card via iSCSI
Hi there,
I have one server acting as an iSCSI target running Windows Storage Server R2
SP2, and the other server is running CentOS as an initiator. They are
connected to a switch over a 1 Gbit Ethernet connection. The target is a Dell
NF600 and the server running CentOS is a PowerEdge R900.
We want to move this configuration to an FC-based installation using a Dell
QLE2462 HBA (this is the HBA we can
2015 Apr 14
1
HBA enumeration and multipath configuration
# cat /etc/redhat-release
CentOS Linux release 7.1.1503 (Core)
# uname -r
3.10.0-123.20.1.el7.x86_64
Hi,
We use iSCSI over a 10G Ethernet Adapter and SRP over an Infiniband adapter to provide multipathing
to our storage:
# lspci | grep 10-Gigabit
81:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
81:00.1 Ethernet controller: Intel Corporation
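To see which scsi_host entry belongs to which transport (iSCSI vs. SRP), walking sysfs is usually enough and is independent of the adapter vendor:

# for h in /sys/class/scsi_host/host*; do echo "$h: $(cat $h/proc_name)"; done
# ls /sys/class/iscsi_host/ /sys/class/srp_host/ 2>/dev/null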
2008 Nov 14
1
kickstart install on SAN with multipath
Using CentOS 5.1, though with a few hours' work I could update
to 5.2. I can install to SAN with a single path no problem,
but I'd like to be able to use dm-multipath. From the kickstart
docs it seems this is supported, but there is no information
as to what the various options mean:
http://www.centos.org/docs/5/html/5.1/Installation_Guide/s1-kickstart2-options.html
--
multipath (optional)
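In the meantime, one workaround consistent with the interactive install is to put the same mpath keyword on the installer boot line that loads the kickstart (the URL below is hypothetical), e.g. at the boot prompt:

linux mpath ks=http://example.com/ks.cfg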
2008 Nov 19
1
qlogic driver not scanning scsi bus on load CentOS 5.1
Very strange behavior; I've been banging my head against the
wall on this one too. It works fine on CentOS 4.6; the
behavior is that when the driver loads, the bus is not scanned,
or at least the devices that are exported are not detected
(they are detected in the HBA BIOS, no problem). If I issue
the QLogic bus scan command it doesn't get anything back
either. If I manually add the devices with
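One manual method that works on those kernels (not necessarily the one referred to above; the host/channel/id/lun numbers are only examples) is adding a single device through the old /proc interface and then checking what the kernel sees:

# echo "scsi add-single-device 0 0 0 1" > /proc/scsi/scsi
# cat /proc/scsi/scsi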
2009 Sep 26
10
Adding handling for Multipath storage devices
The following patches introduce support for multipath and cciss devices to the ovirt-node and node-image. Comments are appreciated.
These patches assume that the 3 patches (2 node, 1 node-image) from Joey are all incorporated.
Mike
2009 Oct 01
1
Repost of Patch 6/6 for ovirt-node
All other patches from the sequence remain unchanged. Repost of patch 6 based on comments from Joey to follow.
Mike
2011 May 30
13
JBOD recommendation for ZFS usage
Dear all
Sorry if it's kind of off-topic for the list, but after talking
to lots of vendors I'm running out of ideas...
We are looking for JBOD systems which
(1) hold 20+ 3.5" SATA drives
(2) are rack mountable
(3) have all the nice hot-swap stuff
(4) allow 2 hosts to connect via SAS (4+ lines per host) and see
all available drives as disks, no RAID volume.
In a
2010 Mar 16
0
can I configure a fc-san initiator for a storage array?
I have a machine with OpenSolaris snv_111b. I want to use it as an FC-SAN initiator NAS head in my overall system.
Now I have configured the FC HBA port from qlt to qlc (initiator) mode with the command update_drv.
I can use stmfadm list-target -v to see that the FC-SAN target is connected:
Target: wwn.2100001B328A3224
Operational Status: Online
Provider Name : qlt
Alias : qlt2,0
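A quick way to confirm the port really came up in initiator mode, assuming the standard Solaris FC utilities are installed, is fcinfo (substitute your own HBA port WWN):

# fcinfo hba-port
# fcinfo remote-port -p <hba-port-wwn>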
2013 Jul 03
1
KVM virtual machine and SAN storage with FC
Hi Team,
Has anybody got any experience setting up a virtualized environment in which the VMs can access Fibre Channel SAN storage connected to the host? The host accesses the SAN through its own HBA, but the HBA is not recognized inside the virtual machines. Please let me know the steps to go through this.
Regards
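One option, sketched with a hypothetical PCI address and guest name, is to detach the HBA from the host and pass the whole PCI function through to a single guest (on older libvirt the verb is spelled nodedev-dettach); the trade-off is that only that one guest gets the HBA, which is why host-side multipath plus virtio disks, or NPIV vHBAs, are often preferred:

# virsh nodedev-detach pci_0000_05_00_0
# cat > hba-hostdev.xml <<'EOF'
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
  </source>
</hostdev>
EOF
# virsh attach-device guest1 hba-hostdev.xml --persistent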
2009 May 04
1
local disk FC san CLVM + migration of virtual machines
Hello all
We have been looking into oVirt for several weeks now and I have to say that it
looks very promising.
Our current environment is based on a lot of shell scripts and Xen.
We would like to move to oVirt but there are some things we would like to
have working before moving.
1. Using an FC SAN and CLVM for storage
2. The ability to migrate virtual machines when we have to do maintenance
on a
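For point 1, the usual pattern on top of FC multipath devices is a clustered volume group (this requires cman and clvmd running on all hosts); the device and VG names below are hypothetical:

# pvcreate /dev/mapper/mpathb
# vgcreate --clustered y vg_san /dev/mapper/mpathb
# lvcreate -L 20G -n vm01_disk vg_san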