Hi all,

I'm about to embark on my first voyage into ZFS (and Solaris, frankly) as it seems very appealing for a low-cost SAN/NAS solution. I am in the process of building up an HCL-compliant whitebox server which ultimately will contain 8x 1TB SATA disks.

I would appreciate some advice and recommendations based on my requirements, which are:

- Right now I don't see any reason not to simply dump all 8 disks into a single raidz (or raidz2) pool - is there anything I may have missed here?

- To provide a slice of storage (~1TB) to a VMware host for VMDKs (I'm considering iSCSI).

- To provide a large slice of storage (~4TB) to a Windows 2003/2008 file server guest on the VMware host, to be accessed by Windows clients over CIFS.

Right now, what's in my head is using an iSCSI slice for the VMDKs (so VMware can manage the storage) and NFS for the file server storage. I'm intrigued to hear your thoughts on this.

Also, some other questions:

- My VMware host is only going to be ESXi, so I don't get any fancy backup functionality. Does anyone have any suggestions on how I could back up VMware guests without using guest-client software (e.g. BackupExec)? Can I use ZFS snapshots or clones to do this on a live, running VMware guest?

- I've heard that NFS is faster than iSCSI with regards to presenting storage to VMware - is this true?

And one last off-topic thing: I'm a reasonably experienced BSD/Linux administrator but am totally new to Solaris. For the simple (hopefully) task of building and maintaining a SAN, do you think advanced Solaris administration skills will be beneficial and worthwhile?

Many thanks in advance.
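[Editor's note: a minimal sketch of the layout being asked about. Device and pool names are hypothetical, and the shareiscsi property shown is the pre-COMSTAR iSCSI target support available on 2008-era OpenSolaris:]

    # one raidz2 pool across all eight disks (device names are examples)
    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
                             c1t4d0 c1t5d0 c1t6d0 c1t7d0

    # a ~1TB zvol for the VMware host, exported over iSCSI
    zfs create -V 1T tank/vmfs
    zfs set shareiscsi=on tank/vmfs

    # a ~4TB filesystem for the file server, exported over NFS
    zfs create tank/files
    zfs set quota=4T tank/files
    zfs set sharenfs=on tank/files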
Hi,

I'm by no means a ZFS expert, but I do have one comment:

gm_sjo wrote:
> - To provide a large slice of storage (~4TB) to a Windows 2003/2008 file
> server guest on the VMware host, to be accessed by Windows clients over
> CIFS.

Solaris provides CIFS support natively too - maybe you can save yourself
the hassle of going through the vmware + windows combo.

Michael
--
Michael Schuster        http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
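[Editor's note: a sketch of the native CIFS route Michael mentions, assuming the in-kernel SMB server packages are installed; the dataset and share names are examples:]

    # enable the in-kernel SMB/CIFS service
    svcadm enable -r smb/server

    # share a dataset directly to Windows clients
    zfs create tank/winshare
    zfs set sharesmb=name=winshare tank/winshare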
Currently, you can mirror your boot pool but not raidz2 it. I'd recommend
using 2 of the drives for a mirrored boot and the other 6 drives for
raidz2. I used 2x Addonics AE5RCS35NSA enclosures to hold the drives to
give me hot-swappability.

Out of curiosity, is there any reason you are going with VMware rather
than xVM with HVM guests? I am in the process of setting up xVM on my
machine, and would be interested to hear your thoughts. I had originally
planned on using iSCSI and NAS to provide the space... I haven't decided
yet whether I will do that. But I do plan on using ZFS snapshots/clones
with the xVM guests.

Malachi

On Fri, Sep 12, 2008 at 5:19 AM, gm_sjo <saqmaster at gmail.com> wrote:
> - Right now I don't see any reason not to simply dump all 8 disks into a
> single raidz (or raidz2) pool - is there anything I may have missed here?
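[Editor's note: a sketch of setting up the mirrored boot pool Malachi describes, assuming an existing ZFS root pool "rpool" on the first disk; device names are hypothetical:]

    # attach the second drive to turn the boot pool into a mirror
    zpool attach rpool c0t0d0s0 c0t1d0s0

    # make the second drive bootable as well (x86)
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0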
2008/9/12 Malachi de Ælfweald:
> Currently, you can mirror your boot pool but not raidz2 it. I'd recommend
> using 2 of the drives for a mirrored boot and the other 6 drives for
> raidz2. I used 2x Addonics AE5RCS35NSA enclosures to hold the drives to
> give me hot-swappability.

Sorry, forgot to mention - I have two other 40GB disks that will be
mirrored for the OS. I haven't decided yet whether a hardware mirror
would be best for this.

> Out of curiosity, is there any reason you are going with VMware rather
> than xVM with HVM guests? I am in the process of setting up xVM on my
> machine, and would be interested to hear your thoughts.

Mainly because of familiarity. This isn't a test-bed, so I don't want
to commit to something and then have to change it in the future.
However, as dramatic as it sounds, this is just at home :-)

I'll investigate xVM though.
2008/9/12 Michael Schuster:
> Solaris provides CIFS support natively too - maybe you can save yourself
> the hassle of going through the vmware + windows combo.

There will be approx. 20 VMware guests running on this infrastructure,
so having a Windows guest there for serving files isn't a problem. This
Windows guest will also be doing some other stuff too, with a heavy
dependency on the storage.
Comments inline.

On Fri, Sep 12, 2008 at 8:24 AM, gm_sjo <saqmaster at gmail.com> wrote:
> Sorry, forgot to mention - I have two other 40GB disks that will be
> mirrored for the OS. I haven't decided yet whether a hardware mirror
> would be best for this.

I originally planned on doing hardware RAID, then realized that Asus
lied about the capabilities of the board I bought. I'm pretty happy
doing the ZFS mirror for the boot, since I can hot-swap a drive if one
of them goes down.

> Mainly because of familiarity. This isn't a test-bed, so I don't want
> to commit to something and then have to change it in the future.

I'd say that if you are planning on using Windows to host the VMs, then
either VMware or VirtualBox is your best bet. If you are looking to have
the OpenSolaris box host the VMs, xVM might be a better choice.

> However, as dramatic as it sounds, this is just at home :-)

Mine too ;)

Malachi
2008/9/12 Malachi de Ælfweald:
> I'd say that if you are planning on using Windows to host the VMs, then
> either VMware or VirtualBox is your best bet. If you are looking to have
> the OpenSolaris box host the VMs, xVM might be a better choice.

I'm not - as per my original post, the VMware host will be ESXi -
Windows will only exist as a guest on that box! However, I think I will
definitely at least have a play with VirtualBox/xVM Server before I
commit.
Hi,

My setup is arguably smaller than yours, so YMMV.

Key point: I have found that using the infrastructure provided natively
by Solaris/ZFS is the best choice. I had been using CIFS... it was
unpredictable - some random Windows machines would just stop seeing the
shares. XP/Server 2003/Vista - too many things go wrong. So here is what
I do:

1. I use xVM VirtualBox for Windows.

2. Snapshots/clones are managed by ZFS. Just put the vmdk on its own
filesystem and let ZFS handle all sorts of shiny stuff. In your case the
same thing can be done with ZFS volumes, if you choose to go with iSCSI.

3. There is a reason the whole industry relies on NFS (learned the hard
way). In short - it works! I have installed SFU on all Windows clients
and couldn't be happier. No matter what happens to the server or network
(unplugging the wrong cable, or a router rebooting), the clients *always*
behave predictably. When the server/network comes back up, everything
just starts working again. I do *all* shares through NFS now - Windows,
Linux and Solaris. Easy to set up *and* predictable!

In short, for minimum fuss, let the bottom-most layer where it makes
sense manage things... and in any case, don't let the virtualization
products manage the storage in _any_ way. ZFS does it best, so let it...
and stick with NFS for sharing. The SFU NFS client/server are small and
they work really well, and they come bundled with Windows Server 200x.

For 8 drives, go with raidz2. Losing two drives is easy to hit in an
8-drive setup.
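[Editor's note: a sketch of the per-guest filesystem and snapshot/clone
workflow described in point 2, with hypothetical dataset names:]

    # one filesystem per guest, so each can be snapshotted independently
    zfs create -p tank/vm/win2003
    zfs set sharenfs=on tank/vm/win2003

    # point-in-time snapshot of the guest's vmdk - cheap and instant
    zfs snapshot tank/vm/win2003@nightly

    # writable clone of that snapshot, e.g. to test a patch safely
    zfs clone tank/vm/win2003@nightly tank/vm/win2003-test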