On 2009-10-30, at 06:47, Corey Kovacs wrote:
> My question is basically this. What are people gravitating towards
> these days? For a while, it seemed like InfiniBand was going to be
> the interconnect to use due to cost/port as compared to 10G Ethernet,
> but 10G seems to be coming down in price. What are people using for
> OSS/OSTs? I've been reading about the DDN stuff, which is
> impressive, but I don't have access to that kind of budget yet.
Even if the per-port cost of 10GigE were the same as SDR InfiniBand
(also running at 10Gb/s; DDR or QDR or EDR will be noticeably faster),
the performance of the two is NOT the same, not even close in some
cases. TCP adds a LOT of overhead to the IO processing because of
extra (unavoidable) data copies in the networking layer, due to the
lack of RDMA (iWARP hasn't appeared anywhere that I've heard about).
Also, the RPC rate of TCP is much lower than that of native IB.
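
For reference, the transport is chosen via LNet module options on the
clients and servers. A minimal sketch (the interface names ib0/eth0
are just placeholders for your actual devices):

  # /etc/modprobe.d/lustre.conf -- pick the LNet transport
  # o2ib uses the native IB/RDMA stack; tcp uses the kernel TCP stack
  options lnet networks="o2ib0(ib0)"
  # options lnet networks="tcp0(eth0)"   # plain TCP over Ethernet
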
> My initial plan is to tie some HP DL160s to MSA70s filled with
> 300GB drives. I realize this won't give me any sort of failover
> capability, but this is just a proof of concept for now. If there
> is a low-cost, high-speed shared device, I'd rather use a failover
> config in order to get a real feel for what this is going to
> require.
You should look at the Sun Lustre Storage system, which is configured
with HA-OSS pairs, each exporting 48x 1TB SATA drives. You can get
over 1GB/s from each OSS, and 64TB of usable space (after RAID-6,
journaling, and hot spares are taken into account).
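
If you do end up with shared storage, declaring failover for an OST
is done at format time by telling it about the backup server. A
minimal sketch (the NIDs, fsname, and device below are hypothetical
placeholders, not a real configuration):

  # format an OST that can fail over between two OSS nodes
  mkfs.lustre --ost --fsname=testfs \
      --mgsnode=192.168.0.1@o2ib0 \
      --failnode=192.168.0.11@o2ib0 \
      /dev/sdb
  # mount on the primary OSS; after a failure, mount the same shared
  # device on the failover node instead
  mount -t lustre /dev/sdb /mnt/ost0
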
> Also, the roadmap for Lustre used to include "raid" personalities
> for OSSes. Has that functionality been deferred or dropped
> altogether, or is it still in the works?
It is still on the horizon for now, unfortunately.
Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.