Some answers below,
Mertol Ozyoney
Storage Practice - Sales Manager
Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +902123352222
Email mertol.ozyoney at Sun.COM
-----Original Message-----
From: zfs-discuss-bounces at opensolaris.org
[mailto:zfs-discuss-bounces at opensolaris.org] On Behalf Of Ross
Sent: Tuesday, November 06, 2007 1:50 PM
To: zfs-discuss at opensolaris.org
Subject: [zfs-discuss] ZFS and clustering - wrong place?
I'm just starting to learn about Solaris, ZFS, etc. It amazes me how
much is possible, but it's just shy of what I'd really, really like to
see.
I can see there's a fair amount of interest in ZFS and clustering, and
it seems Sun are actively looking into this, but I'm wondering if
that's the right place to do it?
Now I may be missing something obvious here, but it seems to me that for
really reliable clustering of data you need to be dealing with it at a
higher layer, effectively where iSCSI sits. Instead of making ZFS
cluster-aware, wouldn't it be easier to add support for things like
mirroring and striping (even RAID) to the iSCSI protocol?
You are missing something obvious here. Seen from the application layer,
a file system sits at a higher level than a storage protocol. Besides,
iSCSI is a protocol and ZFS is a file system, so there is virtually no
reason to compare them.
What Sun is doing at the moment is trying to support active-active access
by cluster nodes to the same ZFS file system, and active-active access is
managed at the file system level.
Accessing shared storage is another matter. ZFS defines nothing about how
you access the raw devices (FCP, iSCSI, SATA, etc.); you can access your
storage over iSCSI and use ZFS on top of it.
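For example, here is a minimal sketch of ZFS over iSCSI on a Solaris
initiator (the target address and the c#t#d0 device names below are
hypothetical placeholders; check format(1M) for the real ones):

  # Point the initiator at an iSCSI target portal and enable discovery
  iscsiadm add discovery-address 192.168.10.5:3260
  iscsiadm modify discovery --sendtargets enable

  # Create device nodes for the discovered LUNs
  devfsadm -i iscsi

  # Build an ordinary ZFS pool on top of the iSCSI LUNs
  zpool create tank mirror c2t0d0 c3t0d0
  zpool status tank

From ZFS's point of view those LUNs are just disks, so you keep
checksumming, self-healing and snapshots end to end.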
That way you get to use ZFS locally with all the benefits that entails
(guaranteed data integrity, etc.), and you also have a protocol somewhere
in the network layer that guarantees data integrity to the client
(confirming writes at multiple locations, painless failover, etc.).
Essentially, it would be doing for iSCSI what ZFS did for disks.
You'd need support for this in the iSCSI target, as it would seem to make
sense to store the configuration of the cluster on every target. That way
the client can connect to any target and read the information on how it
is to connect.
But once that's done, your SAN speed is only limited by the internal
speed of your switch. If you need fast performance, add half a dozen
devices and stripe data across them. If you need reliability, mirror
them. If you want both, use a RAID approach. Who needs expensive Fibre
Channel when you can just stripe a pile of cheap iSCSI devices?
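You can already get much of that effect today by letting ZFS, rather than
the iSCSI layer, do the striping and redundancy across LUNs exported by
several cheap boxes. A rough sketch (placeholder device names again, one
LUN per box):

  # Stripe across four iSCSI LUNs with single-parity redundancy
  zpool create fastpool raidz c2t0d0 c3t0d0 c4t0d0 c5t0d0

  # Or stripe across mirrored pairs for performance plus redundancy
  zpool create safepool mirror c2t0d0 c3t0d0 mirror c4t0d0 c5t0d0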
It would make disaster recovery and HA a piece of cake. For any network
like ours with a couple of offices and a fast link between them (any
university campus would fit that model), you just have two completely
independent servers and configure the clients to stream data to them
both. No messy configuration of clustered servers, and support for
multicast on the network means you don't even have to slow your clients
down.
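You can approximate the two-office setup with plain ZFS and iSCSI by
putting one half of each mirror at each site, so every block is written
to a LUN in both offices (hypothetical LUN names below):

  # c2/c3 are LUNs at office A, c5/c6 are LUNs at office B
  zpool create campus mirror c2t0d0 c5t0d0 mirror c3t0d0 c6t0d0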
The iSCSI target would probably need to integrate with the file system to
cope with disasters. You'd need an intelligent way to re-synchronise
machines when they came back online, but that shouldn't be too difficult
with ZFS.
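Within a single pool, ZFS already resilvers a mirror half that comes back
online automatically. Between two independent servers, snapshots plus
incremental send/receive are a reasonable building block for
re-synchronising; a sketch, assuming a dataset tank/data and a peer host
nodeb (both names hypothetical):

  # Snapshot the current state on the surviving node
  zfs snapshot tank/data@resync-1

  # Ship only the changes since the last common snapshot to the peer
  zfs send -i tank/data@resync-0 tank/data@resync-1 | \
      ssh nodeb zfs receive -F tank/data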
I reckon you could turn Solaris & ZFS into the basis for one of the most
flexible SAN solutions out there.
What do you all think? Am I off my rocker or would an approach like this
work?