Hey all,

I currently work for a company that has purchased a number of different SAN solutions (whatever was cheap at the time!) and I want to set up an HA ZFS file store over Fibre Channel.

Basically I've taken slices from each of the SANs and added them to a ZFS pool on this box (which I'm calling a 'ZFS proxy'). I've then carved out LUNs from this pool and assigned them to other servers. I then have snapshots taken on each of the LUNs and replication off-site for DR. This all works perfectly (backups for ESXi!).

However, I'd like to be able to a) expand and b) make it HA. All the documentation I can find on setting up an HA cluster for file stores replicates data between two servers and then serves from those machines (I trust the SANs to take care of the data and don't want to replicate anything -- cost!). Basically all I want is for the node that serves the ZFS pool to be HA (if this were put into production we have around 128 TB and are looking to expand to a PB). We have a couple of IBM SVCs that seem to handle the HA node setup in some obscure proprietary IBM way, so logically it seems possible.

Clients would only be making changes via a single 'ZFS proxy' at a time (multipathing set up for failover only), so I don't believe I'd need to put OCFS into the setup? If I do need to set up OCFS, can I put ZFS on top of that? (I want snapshotting/rollback and replication to an off-site location, as well as all the goodness of thin provisioning and de-duplication.)

However, when I imported the ZFS pool onto the 2nd box I got large warnings about it being mounted elsewhere and I needed to force the import. Then when importing the LUNs I saw that the GUID was different, so multipathing doesn't pick up that the LUNs are the same. Can I change a GUID via stmfadm?

Is any of this even possible over Fibre Channel? Is anyone able to point me at some documentation? Am I simply crazy?

Any input would be most welcome.

Thanks in advance,
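[Editor's note: for concreteness, a rough sketch of the kind of 'ZFS proxy' setup described above, assuming the LUNs are exposed with COMSTAR and using hypothetical names (pool 'proxpool', zvol 'esx-lun01', DR host 'drhost', placeholder device names) -- the post doesn't say which target framework or names are actually in use:]

    # Pool built from slices presented by the various SANs
    zpool create proxpool c4t600A0B8000111111d0 c4t600A0B8000222222d0

    # Carve a thin-provisioned LUN out of the pool and expose it
    zfs create -s -V 2T proxpool/esx-lun01
    stmfadm create-lu /dev/zvol/rdsk/proxpool/esx-lun01
    stmfadm add-view <lu-guid>        # GUID as printed by create-lu / list-lu

    # Snapshot, then replicate incrementally off-site for DR
    zfs snapshot proxpool/esx-lun01@daily-20100826
    zfs send -i proxpool/esx-lun01@daily-20100825 \
        proxpool/esx-lun01@daily-20100826 | ssh drhost zfs receive drpool/esx-lun01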
Be very careful here!!

On 8/26/2010 9:16 PM, Michael Dodwell wrote:

> However, when I imported the ZFS pool onto the 2nd box I got large warnings about it being mounted elsewhere and I needed to force the import. Then when importing the LUNs I saw that the GUID was different, so multipathing doesn't pick up that the LUNs are the same. Can I change a GUID via stmfadm?

If you force the import while the pool is still mounted by the other host, your ZFS pool could become corrupted!!! Recovery is not easy.

> Is any of this even possible over Fibre Channel?

Please at least take a look at the documentation for the Oracle Solaris Cluster software; it details how to use ZFS in a cluster environment:

http://docs.sun.com/app/docs/prod/sun.cluster32?l=en&a=view
ZFS: http://docs.sun.com/app/docs/doc/820-7359/gbspx?l=en&a=view

> Is anyone able to point me at some documentation? Am I simply crazy?
>
> Any input would be most welcome.
>
> Thanks in advance,
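[Editor's note: a hedged sketch of the failover sequence this warning is about, reusing the hypothetical names from above and assuming the LUNs were created with COMSTAR. A clean export/import avoids the forced import, and re-registering the LU with stmfadm import-lu (rather than create-lu, which generates a fresh GUID) should keep the same GUID on both nodes so the clients' multipathing recognises the LUN:]

    # On the node currently serving the pool: take the LU offline,
    # then export the pool cleanly
    stmfadm offline-lu <lu-guid>
    zpool export proxpool

    # On the standby node: import WITHOUT -f.  Needing -f usually means
    # the other node still has the pool imported; forcing it while both
    # nodes have it mounted is what risks corrupting the pool.
    zpool import proxpool

    # Re-register the existing LU instead of creating a new one:
    # import-lu reads the metadata already stored in the zvol and keeps
    # the original GUID, whereas create-lu would assign a new one
    stmfadm import-lu /dev/zvol/rdsk/proxpool/esx-lun01
    stmfadm online-lu <lu-guid>
    stmfadm add-view <lu-guid>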
Lao,

I had a look at HAStoragePlus etc., and from what I understand that's to mirror local storage across 2 nodes so that services can access it 'DRBD style'.

Having read through the documentation on the Oracle site, the cluster software from what I gather is for clustering services together (Oracle/Apache etc.), and again any documentation I've found on storage is about duplicating local storage to multiple hosts for HA failover. I can't really see anything on clustering services to use shared storage/ZFS pools.
Hi Michael,

Have a look at this blog/white paper for an example of how to use an iSCSI target from a NAS device as storage:

http://blogs.sun.com/TF/entry/new_white_paper_practicing_solaris

You can just replace the Tomcat/MySQL HA services with HA NFS and you have what you are looking for.

/peter

On 8/27/10 11:25, Michael Dodwell wrote:
> Lao,
>
> I had a look at HAStoragePlus etc., and from what I understand that's to mirror local storage across 2 nodes so that services can access it 'DRBD style'.
>
> Having read through the documentation on the Oracle site, the cluster software from what I gather is for clustering services together (Oracle/Apache etc.), and again any documentation I've found on storage is about duplicating local storage to multiple hosts for HA failover. I can't really see anything on clustering services to use shared storage/ZFS pools.
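[Editor's note: a minimal sketch of what that could look like with the Solaris Cluster CLI, assuming a two-node cluster, the hypothetical 'proxpool' pool from above, and made-up resource/hostname names ('nfs-rg', 'nfs-vip'); the HA-NFS data service guide covers the dfstab.<resource> share file that SUNW.nfs also requires:]

    clresourcetype register SUNW.HAStoragePlus
    clresourcetype register SUNW.nfs

    # Pathprefix points at a directory (on the failover storage) that
    # will hold the SUNW.nfs/dfstab.<resource> share definitions
    clresourcegroup create -p Pathprefix=/proxpool/nfs-admin nfs-rg

    # Failover IP address that clients mount from
    clreslogicalhostname create -g nfs-rg -h nfs-vip nfs-lh-rs

    # The zpool is imported on whichever node currently hosts the group
    clresource create -g nfs-rg -t SUNW.HAStoragePlus \
        -p Zpools=proxpool proxpool-hastp-rs

    # HA-NFS service, dependent on the storage resource
    clresource create -g nfs-rg -t SUNW.nfs \
        -p Resource_dependencies=proxpool-hastp-rs nfs-rs

    clresourcegroup online -M nfs-rg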
On 8/27/2010 12:25 AM, Michael Dodwell wrote:
> Lao,
>
> I had a look at HAStoragePlus etc., and from what I understand that's to mirror local storage across 2 nodes so that services can access it 'DRBD style'.

Not true -- HAStoragePlus uses shared storage. In this case, since ZFS is not a clustered filesystem, it needs to be configured as a failover filesystem: only one host can access the zpool at a time, and the pool has to be exported and imported to fail over between hosts. Oracle Solaris Cluster provides the cluster framework (e.g. the private interconnect and global DIDs) and lets you set up NFS on top of a failover HAStoragePlus (with ZFS), etc.

> Having read through the documentation on the Oracle site, the cluster software from what I gather is for clustering services together (Oracle/Apache etc.), and again any documentation I've found on storage is about duplicating local storage to multiple hosts for HA failover. I can't really see anything on clustering services to use shared storage/ZFS pools.
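[Editor's note: a sketch, reusing the hypothetical names from above. Once the zpool is under a HAStoragePlus resource, the cluster drives the export/import, so there is no need for 'zpool import -f' on the standby node:]

    # Switch the whole group (IP, zpool, NFS shares) to the other node;
    # HAStoragePlus exports the pool on node1 and imports it on node2
    clresourcegroup switch -n node2 nfs-rg

    # Roughly the manual equivalent of
    #   node1# zpool export proxpool
    #   node2# zpool import proxpool
    # but fenced by the cluster framework, so the pool can never be
    # imported on both nodes at once.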