First off, thanks for the reply!
On 11/2/05, Andreas Dilger <adilger@clusterfs.com>
wrote:
> Depends entirely upon the expected client load. For testing it is
> possible to run an MDS + 5 OSTs + 2 clients in a 64MB UML image.
> In practise, any modern dual CPU SMP machine is up to the task
> unless you have very high performance requirements.
>
> More RAM and 64-bit CPU can improve performance because the MDS can
> cache a lot more filesystem metadata in memory and avoid disk IO.
>
> Low-latency networking (e.g. Elan or Infiniband) dramatically improve
> the RPC rates to a server, but are much more expensive than Ethernet.
I'll be using gigabit ethernet. I'll also be starting out with 4
webservers, 2 LVS servers (for load balancing - and I'm thinking I can
use the same 2 servers for the primary and failover Lustre MDS), a
database server, and whatever I find out from storage (redundant "head
unit" type appliances/servers - I'm guessing these would be what's
considered the OSSes, which I assume are required, both connected to
the same physical drive array).
I guess my needs are going to be quite small compared to the 1,000
node cluster talked about on the URL you've provided.
I guess I should put this a different way. I've found a 10"-deep
Celeron-based server that would be great (and I know LVS would work
fine on it) - I'm wondering if it would be reasonable to run the MDSs
on this hardware:
http://www.asaservers.com/config_system.asp?config_id=CT071%2D31126%2D000
CPU INTEL CELERON 2.6GHZ 400FSB 128K S478 CPU(BX80532RC2600B)
MEMORY 512MB PC2100 NON ECC UNBUFFERED DDR(5M21NUL)
HARD DISK 1 IDE 40GB 7200 RPM ATA100(I-004-07-0)
PCI Card 1 INTEL PRO/1000 MT GIGABIT DESKTOP COPPER.(PWLA8390MT)
PCI Card 2 INTEL PRO/1000 MT GIGABIT DESKTOP COPPER.(PWLA8390MT)
CHIPSET Intel 82845GV Graphics and Memory Controller Hub (GMCH).
CASE 1U 10in Dual-PCI Slots mini-ITX Rackmount.
Also, just to sanity check myself, using proper terminology:
Each webserver would be a "client"
I'd have two OSS machines (in active/passive, of course)
I'd have two MDS machines (for failover)
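For my own notes, that topology could be sketched roughly as below. This is hypothetical - it uses the newer mkfs.lustre/mount style of setup rather than whatever the current release expects, and the hostnames, device paths, and filesystem name are made up:

```shell
# On the MDS pair (mds1 primary, mds2 failover; shared storage assumed
# between the two nodes - hypothetical hostnames and devices):
mkfs.lustre --fsname=webfs --mgs --mdt --failnode=mds2 /dev/sda1
mount -t lustre /dev/sda1 /mnt/mdt

# On the active OSS (oss2 is its passive failover partner, both attached
# to the same drive array):
mkfs.lustre --fsname=webfs --ost --mgsnode=mds1 --failnode=oss2 /dev/sdb1
mount -t lustre /dev/sdb1 /mnt/ost0

# On each webserver ("client"):
mount -t lustre mds1:/webfs /mnt/webfs
```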
> FC is by far the most common way of doing this today. We have also
> done some limited testing with Firewire, but I don't know the details.
> It might also be possible to use something like iSCSI, but I have not
> heard of anyone doing that yet. You definitely do not need GFS or any
> other "SAN filesystem" (which itself needs FC or some other SAN to
> connect both the servers AND clients to the same storage).
Thanks, I need to look into this more. I don't have a huge budget, but
I have been researching failover for filesystems, and Lustre seems to
be the best. Right now I use NFS, and that can't be failed over properly.