Realistically speaking, you don't want to use SATA for general-purpose,
random-I/O-heavy storage, which is most likely what your disk access
pattern is going to be with multiple Windows clients and hosted VMs.
Frankly, if you can afford it, you really want to find someone who will
sell you a combined SAS/SATA JBOD enclosure, which you can populate 50%
with SAS drives and 50% with SATA drives to better optimize your
performance. Take a look at our StorageTek 6140 series WITHOUT the
hardware RAID controller. That's the kind of thing you want (I know
other people make similar boxes).
If you go for the external JBOD, you can choose host attach via SAS,
SCSI, or FC, depending on exactly how much $$ you have and the
redundancy/speed tradeoffs of each. As the host system, just get any
2U or so machine with enough PCI-X or PCI-Express slots to handle your
HBAs. Personally, given that you otherwise have to buy at least a KVM
(and maybe an IP-KVM) to handle the console, I wouldn't consider any
machine without true local management, which basically leaves you with
the Sun X4200 series and similar HP machines.
Also, you _really_ want to spend the extra cash for a gigabit switch.
You don't necessarily have to go fully managed; Netgear and others make
a "smart switch" that has virtually all the features (except full SNMP
and a few more exotic ones) that fully managed switches have. A Netgear
24-port GS724T runs well under $600. Get it. That lets you do trunking
(link aggregation) to increase aggregate bandwidth from the server. You
_really_ don't want to bother with direct-to-host attach, for either
NFS or iSCSI.
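Roughly, the Solaris end of the trunking looks something like this
(a sketch only; the interface names, aggregation key, and address are
placeholders, and the switch ports have to be configured for the same
aggregation):

  # aggregate two example e1000g ports into aggr1 (key 1)
  dladm create-aggr -d e1000g0 -d e1000g1 1
  # plumb and address the aggregated link
  ifconfig aggr1 plumb 192.168.10.5 netmask 255.255.255.0 up
  # verify that both ports joined the aggregation
  dladm show-aggr

Check dladm(1M) on whatever build you end up running before you type
any of that in.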
_Strongly_ consider using AFS (not NFS), especially for the VMWare
machines. AFS's large local cache volume makes it ideal for things like
caching VMWare images locally (NFS w/ cachefs also works OK, but I
prefer AFS). Using AFS also lets you use ZFS on the server for volume
management, which is what you really want.
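For the ZFS side, here is a minimal sketch of what the volume
management looks like (pool layout, disk names, filesystem names, and
the backup host are all placeholders; a pair of mirrors is one
reasonable layout given your reliability concerns):

  # build a pool from two mirrored pairs (device names are examples)
  zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0

  # one filesystem per purpose; sizing is a quota, not a partition
  zfs create tank/vmware
  zfs create tank/winshare
  zfs set quota=500G tank/vmware

  # daily snapshot, pushed to the offsite box over ssh
  # (assumes a pool called "backup" already exists on backuphost)
  zfs snapshot tank/vmware@2007-07-26
  zfs send tank/vmware@2007-07-26 | ssh backuphost zfs receive backup/vmware

After the first full send, incremental sends (zfs send -i <previous
snapshot> <new snapshot>) keep the daily WAN traffic down to just the
changed blocks.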
-Erik
On Thu, 2007-07-26 at 14:50, Peter Baumgartner wrote:
> I'm looking to use ZFS to store about 6-10 live virtual machine images
> (served via VMWare Server on Linux) and network file storage for ~50
> Windows clients. I'll probably start at about 1TB of storage and want
> to be able to scale to at least 4TB. Cost and reliability are my two
> greatest concerns. I'd like to send daily snapshots across a WAN to a
> backup box offsite as well.
>
> I'm looking at something like this for hardware:
> http://www.siliconmechanics.com/i6091/dual-xeon-server.php?cat=393
> which the vendor has confirmed will be OpenSolaris friendly.
>
> I'm not sure about whether to connect this to the VMWare Server and a
> Windows server via iSCSI or to use NFS/Samba to get the files to where
> they need to be. I'm leaning towards iSCSI right now because I have
> concerns about configuring and supporting NFS and Samba on
> OpenSolaris. The main drawback is that I would still need to use the
> filesystem/disk management on Linux and Windows that would make
> growing/shrinking filesystems more difficult than a ZFS only solution.
>
> If I do go iSCSI, I am thinking I would do direct connect GigE to each
> server, saving money and complexity over buying a gigabit switch.
>
> I'm new to Solaris and SANs, but have a fair amount of Linux and
> Windows admin experience. Any feedback or wisdom you can pass my way
> is appreciated.
>
> --
> Pete
>
>
> ______________________________________________________________________
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss