Hey,

I am not entirely sure if this question belongs here or on another list, so
feel free to direct me elsewhere :)

Anyways, I am trying to figure out the best way to configure a NAS system I
will soon get my hands on, a Tranquil BBS2
( http://www.tranquilpc-shop.co.uk/acatalog/BAREBONE_SERVERS.html ), which has
5 SATA ports. Due to budget constraints, I have to start small: either a
single 1.5 TB drive or, at most, a small 500 GB system drive plus a 1.5 TB
drive to get started with ZFS. What I am looking for is a configuration that
offers the maximum possible storage while having at least _some_ redundancy,
and that lets me grow the storage pool without having to reload the entire
setup.

Using a ZFS root right now seems to involve a fair bit of trickery (you need
to make an .ISO snapshot of -STABLE, burn it, boot from it, install from
within a fixit environment, boot into your ZFS root and then make and install
world again to fix the permissions). To top that off, even when/if you do it
right, your entire disk doesn't go to ZFS anyway, because you still need swap
and a /boot that are non-ZFS, so you have to install ZFS onto a slice rather
than the entire disk, and even Sun discourages doing that. Additionally, there
seems to be at least one reported case of a system failing to boot after
running installworld on a ZFS root: the installworld process removes the old
libc, tries to install a new one and, because it fails to apply some flags
that ZFS doesn't support, leaves it uninstalled, leaving the system in an
unusable state. This can be worked around, but gotchas like this and the
amount of work involved in getting the whole thing running make me really
lean towards having a smaller, traditional UFS2 system disk for FreeBSD
itself.

So, this leaves me with 1 SATA port used for a FreeBSD disk and 4 SATA ports
available for tinkering with ZFS. What would make the most sense if I am
starting with 1 disk for ZFS, eventually plan on having 4, and want to
maximise storage yet have SOME redundancy in case of a disk failure? Am I
stuck with 2 x 2-disk mirrors, or is some 3+1 configuration possible?

Sincerely,
- Dan Naumov
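
P.S. To make the question concrete, once all four data disks are in place,
the two layouts I can think of would look roughly like this (device names are
made up, so treat this as a sketch):

  # two 2-disk mirrors striped together: 2 disks worth of usable space,
  # one disk per mirror can fail
  zpool create tank mirror ad4 ad6 mirror ad8 ad10

  # or a 3+1 raidz: 3 disks worth of usable space,
  # any single disk can fail
  zpool create tank raidz ad4 ad6 ad8 ad10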
I built a system recently with 5 drives and ZFS. I'm not booting off a ZFS
root, though it does mount a ZFS file system once the system has booted from
a UFS file system.

Rather than dedicate drives, I simply partitioned each of the drives into a
1G partition and another spanning the remainder of the disk (in my case, all
the drives are the same size). I boot off a gmirror of two of those 1G
partitions on the first two drives, and use the other three 1G partitions on
the remaining three drives as swap partitions. I take the larger partitions
on each of the five drives and organize them into a raidz2 ZFS pool. My needs
are more about data integrity than about surviving a disk failure without
interruption, so I don't bother mirroring the swap partitions to keep running
in the event of a drive failure. But that's a decision for you to make.

It's not too tricky to do the install; I certainly didn't need to burn a
custom CD or anything. There are some fine cookbooks on the net that describe
the techniques. For me, the tricky bit was setting up the gmirror, which you
could probably do from the fixit CD or something. I just did a normal install
on the first drive to get a full FreeBSD running, then "built" the mirrors on
a couple of the other drives, did an install onto the mirror ("make
installworld DESTDIR=/mnt") and then just moved the drives around. And I did
a full installation in the 1G UFS gmirror file system, just to have a full
environment to debug from if necessary, rather than just a /boot.

Just some ideas..

louie
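
P.S. In case it helps, the rough shape of that layout in commands. Device
names and labels are made up and I'm writing from memory, so treat it as a
sketch rather than a recipe:

  # boot/root: a gmirror across the 1G partitions of the first two drives
  gmirror label -v -b round-robin gm0 /dev/ad4s1a /dev/ad6s1a
  newfs /dev/mirror/gm0
  # (plus geom_mirror_load="YES" in loader.conf so the mirror exists at boot)

  # the 1G partitions on the other three drives become swap
  swapon /dev/ad8s1a /dev/ad10s1a /dev/ad12s1a

  # the five large partitions make up the raidz2 pool
  zpool create tank raidz2 ad4s2 ad6s2 ad8s2 ad10s2 ad12s2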
On 31/05/2009, at 4:41 AM, Dan Naumov wrote:

> To top that off, even when/if you do it right, your entire disk doesn't go
> to ZFS anyway, because you still need swap and a /boot that are non-ZFS, so
> you will have to install ZFS onto a slice and not the entire disk, and even
> Sun discourages doing that.

ZFS on root is still pretty new to FreeBSD, and until it gets ironed out and
all the sysinstall tools support it nicely, it isn't hard to use a small UFS
slice to get things going during boot. And there is nothing wrong with
putting ZFS onto a slice rather than the entire disk: that is a very common
approach.

http://www.ish.com.au/solutions/articles/freebsdzfs

Ari Maniatis

-------------------------->
ish
http://www.ish.com.au
Level 1, 30 Wilson Street Newtown 2042 Australia
phone +61 2 9550 5001   fax +61 2 9550 4001
GPG fingerprint CBFB 84B4 738D 4E87 5E5C 5EFA EF6A 7D2E 3E49 102A
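
P.S. The article above goes through a full install, but the slice vs. whole
disk part really is as simple as it sounds (device names made up):

  # pool on a slice:
  zpool create tank ad4s2
  # vs. pool on the whole disk:
  zpool create tank ad4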
On Sat, 30 May 2009 21:41:36 +0300 Dan Naumov <dan.naumov@gmail.com> wrote
about ZFS NAS configuration question:

DN> So, this leaves me with 1 SATA port used for a FreeBSD disk and 4 SATA
DN> ports available for tinkering with ZFS.

Do you have a USB port available to boot from? A conventional USB stick (I
use 4 GB or 8 GB these days, but smaller ones would certainly also do) is
enough to hold the base system on UFS, and you can give the whole of your
disks to ZFS without having to bother with booting from them.

cu
  Gerrit
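
P.S. Since the disks then carry nothing but ZFS, starting small is easy too.
For example (device names made up), you could begin with a single-disk pool
and turn it into a mirror later:

  zpool create tank ad4
  # later, when a second disk arrives:
  zpool attach tank ad4 ad6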
sthaug@nethelp.no wrote:
>> root filesystem is remounted read write only for some configuration
>> changes, then remounted back to read only.
>
> Does this work reliably for you? I tried doing the remounting trick,
> both for root and /usr, back in the 4.x time frame. And could never
> get it to work - would always end up with inconsistent file systems.

There were many fixes in this area lately. The case where a file system with
softdeps would fail to update to read-only is fixed in -CURRENT, and these
changes are merged to -STABLE. It is believed to work correctly.

http://lists.freebsd.org/pipermail/freebsd-stable/2008-October/046001.html

Remounting with soft updates enabled used to be too fragile to be useful.
Now it seems very solid.

Nikos
On Sun, May 31, 2009 at 4:43 AM, Aristedes Maniatis <ari@ish.com.au> wrote:
> On 31/05/2009, at 4:41 AM, Dan Naumov wrote:
>> To top that off, even when/if you do it right, your entire disk doesn't go
>> to ZFS anyway, because you still need swap and a /boot that are non-ZFS,
>> so you will have to install ZFS onto a slice and not the entire disk, and
>> even Sun discourages doing that.
>
> ZFS on root is still pretty new to FreeBSD, and until it gets ironed out
> and all the sysinstall tools support it nicely, it isn't hard to use a
> small UFS slice to get things going during boot. And there is nothing
> wrong with putting ZFS onto a slice rather than the entire disk: that is
> a very common approach.

It's worth noting that there are a few sensible appliance designs (although
as a ZFS server, you might want 4, 8 or 16G in your "appliance"). You could,
for instance, boot from flash. If your true purpose is an appliance, this is
very reasonable: it means that your appliance "boots" even when no disks are
attached, which is useful for instructing the appliance user how to attach
disks and for running diagnostics, for instance.

My own ZFS pool is 5x 1.5 TB disks running on a few-weeks-old 8-CURRENT. I
gave up waiting for v13 in 7.x. Maybe I should have waited, but I've avoided
most of the recent foofaraw by not tracking -CURRENT incessantly. If I were
installing new, I'd probably stick with 7.x for a server... for now. I must
admit, however, that the system seems happy with 8-CURRENT.

The system boots from a pair of drives in a gmirror. Not because you can't
boot from ZFS, but because gmirror is just so darn stable (and the setup
predates the use of ZFS).

Really, there are two camps here. Booting from ZFS means using ZFS as the
machine's own filesystem; this is one goal of ZFS that is somewhat imperfect
on FreeBSD at the moment. The ZFS file server is another goal, where booting
from ZFS is not really required and only marginally beneficial.
sthaug@nethelp.no wrote:
>> root filesystem is remounted read write only for some configuration
>> changes, then remounted back to read only.
>
> Does this work reliably for you? I tried doing the remounting trick,
> both for root and /usr, back in the 4.x time frame. And could never
> get it to work - would always end up with inconsistent file systems.

The system has been in production since October 2008 and has never panicked
while remounting. In that time frame we got only two deadlocks, caused by
earlier versions of ZFS. At this time, files on ZFS are using 28151719
inodes; the storage holds daily rsync backups of a dozen webservers and a
mailserver.

I am using:

  mount -u -o current,rw /
  [do some configuration work]
  sync; sync; sync; mount -u -o current,ro /

The sync command is maybe useless, but I feel safer with it ;o)
(the root filesystem is not using soft updates)

Miroslav Lachman
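
P.S. The only other piece of a setup like this is marking the root filesystem
read-only in /etc/fstab so it comes up that way after a reboot, something
like (device name made up):

  /dev/ad4s1a   /   ufs   ro   1   1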
Anyone else think that this, combined with freebsd-update integration and a
simple menu GUI for choosing the preferred boot environment, would make an
_awesome_ addition to the base system? :)

- Dan Naumov

On Wed, Jun 3, 2009 at 5:42 PM, Philipp Wuensche <cryx-freebsd@h3q.com> wrote:
> I wrote a script implementing the most useful features of the Solaris Live
> Upgrade; the only thing missing is selecting a boot environment from the
> loader, and freebsd-update support, as I wrote the script on a system
> running -CURRENT. I use this on all my freebsd-zfs boxes and it is
> extremely useful!
>
> http://anonsvn.h3q.com/projects/freebsd-patches/wiki/manageBE
>
> greetings,
> philipp
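
P.S. For anyone who hasn't looked at what a script like manageBE automates, a
boot environment done by hand is roughly this (pool and dataset names are
made up, and I'm glossing over details such as updating loader.conf):

  # snapshot the current root and clone it into a new boot environment
  zfs snapshot tank/root@pre-upgrade
  zfs clone tank/root@pre-upgrade tank/root-new

  # upgrade inside the clone, then point the pool at it for the next boot
  zpool set bootfs=tank/root-new tank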