Does anyone have any particularly creative ZFS replication strategies they could share?

I have 5 high-performance Cyrus mail servers, each with about a terabyte of storage, of which only 200-300 GB is used, even including 14 days of snapshot space.

I am thinking about setting up a single 3511 with 4 terabytes of storage at a remote site as a backup device for the content. I'm struggling with how to organize wedging 5 servers into the one array, though.

The simplest way that occurs to me is one big RAID-5 storage pool across all the disks, then slice out 5 LUNs, each as its own ZFS pool, then use zfs send & receive to replicate the pools.

Ideally I'd love it if ZFS directly supported the idea of rolling snapshots out onto slower secondary storage disks on the SAN, but in the meanwhile it looks like we have to roll our own solutions.

This message posted from opensolaris.org
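For the roll-your-own route, the snapshot-then-send/receive loop described above can be sketched roughly as follows. The pool names (mail1..mail5), backup host name, and snapshot names are illustrative assumptions, not anything from this thread; RUN defaults to echo so the sketch only prints the commands it would run.

```shell
#!/bin/sh
# Hedged sketch: replicate each mail server's pool to a matching pool on
# the remote backup host with an incremental zfs send/receive.
# Host, pool, and snapshot names here are invented for illustration.
BACKUP_HOST=${BACKUP_HOST:-backup-3511}
RUN=${RUN:-echo}    # dry run by default: prints commands; unset RUN to execute

# replicate <pool> <previous-snapshot> <new-snapshot>
replicate() {
    $RUN zfs snapshot "$1@$3"
    # Send only the delta since the last replicated snapshot; -F on the
    # receiving side rolls the target back to the common snapshot first.
    $RUN sh -c "zfs send -i $1@$2 $1@$3 | ssh $BACKUP_HOST zfs receive -F $1"
}

# Snapshot names shown literally; a real cron job would derive them from date(1).
for pool in mail1 mail2 mail3 mail4 mail5; do
    replicate "$pool" 2008-02-01 2008-02-02
done
```

Once a receive succeeds, destroying source snapshots older than 14 days keeps the rolling window, since the incremental only needs the most recent common snapshot on both sides.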
On Feb 1, 2008, at 1:15 PM, Vincent Fox wrote:

> Ideally I'd love it if ZFS directly supported the idea of rolling snapshots out into slower secondary storage disks on the SAN, but in the meanwhile looks like we have to roll our own solutions.

If you're running some recent SXCE build, you could use ZFS with AVS for remote replication over IP.

http://blogs.sun.com/AVS/entry/avs_and_zfs_seamless

/dale
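For anyone trying the AVS route by hand rather than through newer tooling, enabling one SNDR replica set per LUN looks roughly like the sketch below. This is the sndradm enable syntax as I recall it from the AVS documentation; the host names, data volumes, and bitmap volumes are all assumptions, so verify the option letters against sndradm(1M) and the blog entry above before relying on it. RUN defaults to echo so the sketch only prints the command.

```shell
#!/bin/sh
# Hedged sketch: enable an asynchronous SNDR (AVS) replica set for one LUN.
# All device paths and host names are invented for illustration; check the
# exact syntax against sndradm(1M) on your build before using.
RUN=${RUN:-echo}    # dry run by default; unset RUN on a real AVS host

# enable_set <local-host> <local-vol> <local-bitmap> <remote-host> <remote-vol> <remote-bitmap>
enable_set() {
    # -n: no confirmation prompt; -E: enable without an initial full sync
    $RUN sndradm -nE "$1" "$2" "$3" "$4" "$5" "$6" ip async
}

enable_set mailhost1 /dev/rdsk/c2t0d0s0 /dev/rdsk/c2t1d0s0 \
           backuphost /dev/rdsk/c2t0d0s0 /dev/rdsk/c2t1d0s0
```

Each LUN needs its own small bitmap volume on both sides; the bitmap tracks dirty blocks so replication can resynchronize after a link outage without a full copy.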
Take a look at NexentaStor - it's a complete 2nd-tier solution:

http://www.nexenta.com/products

AVS is nicely integrated via a management RPC interface which connects multiple NexentaStor nodes together and greatly simplifies AVS usage with ZFS... See the demo here:

http://www.nexenta.com/demos/auto-cdp.html

On Fri, 2008-02-01 at 10:15 -0800, Vincent Fox wrote:
> Does anyone have any particularly creative ZFS replication strategies they could share?
Erast,

> Take a look on NexentaStor - its a complete 2nd tier solution:
> http://www.nexenta.com/products
> and AVS is nicely integrated via management RPC interface which is
> connecting multiple NexentaStor nodes together and greatly simplifies
> AVS usage with ZFS... See demo here:
> http://www.nexenta.com/demos/auto-cdp.html

Very nice job. It's refreshing to see something I know oh too well, with an updated management interface, and a good portion of the "plumbing" hidden away.

- Jim

Jim Dunham
Storage Platform Software Group
Sun Microsystems, Inc.
wk: 781.442.4042
http://blogs.sun.com/avs
http://www.opensolaris.org/os/project/avs/
http://www.opensolaris.org/os/project/iscsitgt/
http://www.opensolaris.org/os/community/storage/