Thanks for the pointer to pciback. Your cleverly simple logic and statement
of the obvious kicked my brain into gear. I needed that :-)
I was possibly misusing the term "SAN" to mean yet-another-kind-of-network,
rather than a-special-kind-of-disk. Viewing an HBA as analogous to a
regular NIC, I extended the analogy by saying "an HBA would have a 'SAN
bridge' like a NIC has an Ethernet bridge." However, having pondered this
for a few hours, I realize that's an idea that would not work well (if at
all) once you get down to things like zoning the SAN switch.
So, it seems there really are only two ways to approach this problem:
1. Pass the (in my case) /dev/mpath/mpath* devices from dom0 to domU and
   pretend like I don't have an HBA in domU.
2. Use pciback to hide the HBA from dom0 and assign it to domU.
Option #1, as you pointed out, does not give you I/O fencing, but would
otherwise work fine. It also keeps the flexibility of attaching additional
LUNs to dom0 for things like SAN-backed VBDs without adding more HBAs,
though at some cost in performance. Option #2 is probably the better
solution from the perspective of pure performance and correctness, at the
expense of extra ... expense.
I will now shame myself publicly by admitting that I was struggling
because I was thinking about the problem in terms of volume groups and
logical volumes rather than physical LUNs. I couldn't figure out how the
heck I was going to present a volume group to domU. Duh.
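(For anyone else who was similarly stuck: as I understand it, if you do
want LVM in the picture, you do the LVM work in dom0 and export individual
logical volumes rather than the volume group itself, e.g.

    disk = [ 'phy:/dev/vg_san/lv_node1,xvda,w' ]   # example VG/LV names

For shared GFS storage, though, you want the raw LUN or the HBA itself in
the domU, per the options above.)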
Thanks for the help!
--
Brandon
On Jan 11, 2008 11:44 AM, John Madden <jmadden@ivytech.edu> wrote:
> > How do I go about giving a domU access to SAN? Can we create SAN
> > bridges like we can Ethernet bridges, and then define a virtual HBA in
> > the domU? Can we hide the real HBA from dom0 and present it to domU,
> > somehow? Is it even possible to do what I am asking? After days of
> > Googling and reading various documentation, plus reading hundreds of
> > possibly relevant emails in this list's archive, I have not found what
> > I am looking for. I have more questions than answers.
>
> If you want all of the I/O fencing capabilities (etc.) of GFS, you'll
> have to use the pciback/etc. Xen stuff to give the HBA to the domU.
> However, simply for GFS to work, I believe all you'd need to do is pass
> the SAN LUN down to the domU via your Xen config (phy:/dev/sdX,...).
> GFS works based on the fact that it's a SCSI disk, not so much on the
> concept that it's "a SAN."
>
> (Disclaimer: I haven't actually done this, it's just what seems
> logical. :))
>
> John
>
>
>
>
> --
> John Madden
> Sr. UNIX Systems Engineer
> Ivy Tech Community College of Indiana
> jmadden@ivytech.edu
>