I have a 1U server that supports 2 SATA drives in the chassis. I have 2 750 GB SATA drives. When I install OpenSolaris, I assume it will want to use all or part of one of those drives for the install. That leaves me with the remaining part of disk 1, and all of disk 2.

Question is, how do I best install the OS to maximize my ability to use ZFS snapshots and recover if one drive fails?

Alternatively, I guess I could add a small USB drive to use solely for the OS, and then have both 750 GB drives for ZFS. Is that a bad idea since the OS drive will be "standalone"?

Thanks for your help.
On 30-1-2010 20:53, Mark wrote:
> I have a 1U server that supports 2 SATA drives in the chassis. I have 2 750 GB SATA drives. When I install OpenSolaris, I assume it will want to use all or part of one of those drives for the install. That leaves me with the remaining part of disk 1, and all of disk 2.
>
> Question is, how do I best install the OS to maximize my ability to use ZFS snapshots and recover if one drive fails?

Install on one drive. After that, attach the second and create a mirror. You -NEED- redundancy. (A rough command sketch is at the end of this message.)

> Alternatively, I guess I could add a small USB drive to use solely for the OS, and then have both 750 GB drives for ZFS. Is that a bad idea since the OS drive will be "standalone"?

Very bad idea. Not safe. ZFS on one disk is asking for trouble. Take two smaller disks for the OS (mirrored vdev) and the two larger ones as a second vdev (mirrored too).
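The command sketch mentioned above, assuming the install went onto c7t0d0s0 and the second disk shows up as c7t1d0 (substitute whatever "format" reports on your box):

  # give the second disk the same label as the first
  pfexec fdisk -B /dev/rdsk/c7t1d0p0
  pfexec prtvtoc /dev/rdsk/c7t0d0s2 | pfexec fmthard -s - /dev/rdsk/c7t1d0s2

  # attach its s0 to the root pool and make it bootable as well
  pfexec zpool attach rpool c7t0d0s0 c7t1d0s0
  pfexec installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c7t1d0s0

  # wait for the resilver to finish before trusting the mirror
  zpool status rpool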
On Jan 30, 2010, at 2:53 PM, Mark <whitetr6 at gmail.com> wrote:
> I have a 1U server that supports 2 SATA drives in the chassis. I have 2 750 GB SATA drives. When I install OpenSolaris, I assume it will want to use all or part of one of those drives for the install. That leaves me with the remaining part of disk 1, and all of disk 2.
>
> Question is, how do I best install the OS to maximize my ability to use ZFS snapshots and recover if one drive fails?
>
> Alternatively, I guess I could add a small USB drive to use solely for the OS, and then have both 750 GB drives for ZFS. Is that a bad idea since the OS drive will be "standalone"?

Just install the OS on the first drive and add the second drive to form a mirror. There are wikis and blogs on how to add the second drive to form an rpool mirror. You'll then have a 750GB rpool which you can use for your media and rest safely knowing your data is protected in the event of a disk failure.

-Ross
On 01/30/10 05:33 PM, Ross Walker wrote:
> On Jan 30, 2010, at 2:53 PM, Mark <whitetr6 at gmail.com> wrote:
>
>> I have a 1U server that supports 2 SATA drives in the chassis. I have 2 750 GB SATA drives. When I install OpenSolaris, I assume it will want to use all or part of one of those drives for the install. That leaves me with the remaining part of disk 1, and all of disk 2.
>>
>> Question is, how do I best install the OS to maximize my ability to use ZFS snapshots and recover if one drive fails?

Where were you planning to send the snapshots? There's been a lot of discussion about this on this list, but my solution is to mirror the entire system and zfs send/recv to it periodically to keep a live backup (rough sketch at the end of this message).

>> Alternatively, I guess I could add a small USB drive to use solely for the OS, and then have both 750 GB drives for ZFS. Is that a bad idea since the OS drive will be "standalone"?
>
> Just install the OS on the first drive and add the second drive to form a mirror. There are wikis and blogs on how to add the second drive to form an rpool mirror.

After more than a year or so of experience with ZFS on drive-constrained systems, I am convinced that it is a really good idea to keep the root pool and the data pools separate. AFAIK you could set up two slices on each disk and mirror the results.

But actually I'm not sure why you shouldn't use your USB drive for the root pool idea. If it breaks, you simply reinstall (or restore it from a snapshot on your data pool after booting from a CD). I suppose you could mirror the USB drive, too, but if you can stand the downtime after a failure, that probably isn't necessary.

Of course, SSDs are getting pretty cheap in bootable sizes and will probably last forever if you don't swap to them, and that would be an even better solution. USB SSD thumb drives seem to be quite cheap these days. Then you'd have a full-disk mirrored data pool and a fast bootable OS pool; if you go the SSD route I'd go for at least 32GB.

Of course you could get a 1TB USB drive to boot from, and use it to keep a backup of the data pool, but if it failed, you'd be SOL until you replaced it. IMO that would be the best 3-disk solution.

Should be interesting to hear from the gurus about this...

Cheers -- Frank
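The sketch mentioned above, assuming the backup system's pool is simply called "backup" (pool and snapshot names are only placeholders):

  # first run: seed the backup pool with a full recursive replication stream
  pfexec zfs snapshot -r rpool@backup-1
  pfexec zfs send -R rpool@backup-1 | pfexec zfs recv -Fdu backup

  # later runs: send only what changed since the previous snapshot
  pfexec zfs snapshot -r rpool@backup-2
  pfexec zfs send -R -i @backup-1 rpool@backup-2 | pfexec zfs recv -Fdu backup

If the backup pool lives on another machine, pipe the send stream through ssh instead of a local recv.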
Hello,

I also suggest using your 750G drives as a raid-1 (mirrored) data pool. I usually use one, or better two (raid-1), 2.5" drives in the floppy bay as the system drive.

gea
http://www.napp-it.org/hardware/
Hi whitetr6,

An interesting situation to which there is no "right" answer. In fact, there are many different answers depending on where you put your priorities.

I'm with Frank in keeping data and OS separate. As you've only got two drives, I'd put between 30 and 40 gig as an OS pool on each drive (making each individually bootable - I was helped out with the method on this thread - http://opensolaris.org/jive/thread.jspa?messageID=454491) and then the remainder of the drives as data. So you've sort of got...

Drive 1 - <-30gig OS-> <-720gig data->
Drive 2 - <-30gig OS-> <-720gig data->

...completely mirrored and independently bootable (a zpool-level sketch is at the end of this post). OpenSolaris creates its boot partition as a zpool anyway, so it is relatively straightforward to mirror it. I made a short video of the advice that I was given, here - http://www.youtube.com/watch?v=tpzsSptzmyA - and that technique will also cover you for installing on drives of different sizes, so if you upgrade the hard drives later, this technique will hold solid and also enable you to add another drive should you have to replace one. You might have to adjust that advice if you're not using your entire drive for the system partition.

The only thing I've ever seen take out both internal drives at the same time is a power surge, from either external sources or an overheated PSU blowing in the PC. Surge protection and adequate cooling should minimise the risks. I'm assuming you've got a routine to take snapshots and get them off the box, based on what you've already written.

Having an OS booting from USB is possible; from what I've seen of ZFS so far, I believe it would be possible to attach two USB keys and have them mirrored and bootable also! But personally I don't see any real need to do it this way.

So, if I were in your shoes, I'd partition as per above and run the two hard disks ... but have an external drive available for backup. It would be worth practising the technique of mirroring the root partition, handling zpools and recovering from failures before committing data to it. The practice is well worth it IMHO.

I hope this helps.
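The zpool-level sketch mentioned above: once both disks carry the same slice layout, mirroring the OS slices is the usual rpool attach (as in the earlier post), and the data half is a one-liner. Device, slice and pool names here are only an example:

  # root pool: mirrored across the two ~30 gig s0 slices (zpool attach + installgrub)
  # data pool: mirrored across the two ~720 gig s7 slices
  pfexec zpool create tank mirror c7t0d0s7 c7t1d0s7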
Correct that ... I have seen a bad batch of drives fail in close succession, down to a manufacturing problem.
Perhaps an expert could kindly chime in with an opinion on making the drives one large zpool (rather than separate hard partitions) and using the various options within ZFS to ensure that there is always disk space available to the operating system (a reservation - rough sketch below) ... but the more I sit and think, the more I'm not sure how that would work on the rpool.

There are so many ways of handling this. I still think I'd go with hard partitioning for the OS and data, but that is only because of my lack of overall experience with ZFS.
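The single-pool variant would, I assume, lean on dataset reservations and quotas, something like this (sizes and dataset names are purely illustrative):

  # guarantee ~30G for the boot environments under rpool/ROOT
  pfexec zfs set reservation=30G rpool/ROOT

  # put the data in its own dataset and cap it so it can never squeeze out the OS
  pfexec zfs create rpool/data
  pfexec zfs set quota=650G rpool/data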
On Sat, Jan 30, 2010 at 06:07:48PM -0500, Frank Middleton wrote:
> On 01/30/10 05:33 PM, Ross Walker wrote:
>> Just install the OS on the first drive and add the second drive to form a mirror.
>
> After more than a year or so of experience with ZFS on drive-constrained systems, I am convinced that it is a really good idea to keep the root pool and the data pools separate.

Odd. I have the opposite conclusion. As I became more comfortable using send|recv for rebuilding machines and rearranging disks, most of the reasons for such separation went away. ZFS mechanisms (e.g. reservations, quotas and multiple BEs) really are a better and more flexible way.

The remaining reasons come from the constraints on the root pool: single non-raidz vdevs, no slog. Those can be restrictive, but they share a common factor: they both need more disks/controller ports you may not have anyway. Often, systems must be built to the tightest constraint, and if that is the number of disks then I'm more than happy to put data in rpool in return for other benefits.

USB sticks and PATA-to-CF cards for rpool are useful alternatives, almost as a way to cheat some extra ports and case space. For me at least, only as a way of working around the rpool constraints (e.g. it lets me have raidz data in a typical generic PC with a 4-disk maximum). I wouldn't use them *just* to keep data and BEs in separate pools; if raidz ever becomes bootable, I'd switch to that.

There's something to be said for having a usable BE together with your data pool, if you ever need to move the disks elsewhere quickly because of a fault. Your quick replacement board might not have the PATA ports for your CF cards at all, for example. I've taken to replicating the datasets from rpool into the data pool in such situations, just as a self-contained backup even if they're not bootable.

-- 
Dan.
On Sat, Jan 30 at 18:07, Frank Middleton wrote:
> After more than a year or so of experience with ZFS on drive-constrained systems, I am convinced that it is a really good idea to keep the root pool and the data pools separate.

That's what we do at the office. The data pool is a collection of mirror vdevs and backed up to another "live" system, while we just put the boot disk on a single SSD with no extra redundancy.

It exposes us to a disk failure making the system unable to boot, but I know I can re-install OpenSolaris from scratch on this system in about 45 minutes (we're only about a dozen pfexec commands away from a "default" installation), so the tradeoff was worth it to us given the expected low failure rate of the SSD combined with how little the rpool device gets written to in normal operation.

In a pinch, I can just enable smb on the backup machine in read-only fashion to give people access to their files while the primary is rebuilding. If the backup pool design isn't fast enough, I could just drag the disks over from the primary and import the pool on the backup server.

--eric

-- 
Eric D. Mudama
edmudama at mail.bounceswoosh.org
I thank each of you for all of your insights. I think if this was a production system I'd abandon the idea of 2 drives and get a more capable system, maybe a 2U box with lots of SAS drives so I could use RAIDZ configurations. But in this case, I think all I can do is try some things until I understand it better. Please continue to add to the discussion as I learn something each time someone posts a reply.

Thanks again
On Sat, January 30, 2010 14:21, Dick Hoogendijk wrote:
> On 30-1-2010 20:53, Mark wrote:
>> Alternatively, I guess I could add a small USB drive to use solely for the OS, and then have both 750 GB drives for ZFS. Is that a bad idea since the OS drive will be "standalone"?
>
> Very bad idea. Not safe. ZFS on one disk is asking for trouble. Take two smaller disks for the OS (mirrored vdev) and the two larger ones as a second vdev (mirrored too).

I don't fully agree with this. It depends on exactly what kind of fileserver you need (I know the initial poster specified "home"; that's the same as my primary use, and I Have Opinions :-)). As always, you get to trade off cost vs. availability vs. data safety and no doubt other things in the Big Picture.

One idea I seriously considered is to boot off a USB key. No online redundancy (but I'd keep a second loaded key, plus the files to quickly reimage a new key, handy). So, in the case of the "system disk" (USB key) failing on me, I can just pull it out of the slot, plug in the spare, and reboot. MTTR = very low, and I could talk my wife through it over the phone if necessary. Yes, logging and such will to some extent wear through the write capacity of the USB key, but I expect it'd last several years, which is enough for me to not worry about it. You could do the same with external USB disks (minus the concern about wear-leveling).

My point is, online redundancy for the system disk MAY NOT be nearly as important as for the data, depending on the exact situation.

Another approach, using just the disks and within the constraint of the two 750GB disks, is to partition them both the same, and make a root pool from the two small partitions and a data pool from the two large partitions, thus keeping data and root separate and still providing redundancy to both, within the two-drive constraint.

-- 
David Dyer-Bennet, dd-b at dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info
On February 1, 2010 11:59:14 AM -0600 David Dyer-Bennet <dd-b at dd-b.net> wrote:
> One idea I seriously considered is to boot off a USB key. No online redundancy (but I'd keep a second loaded key, plus the files to quickly reimage a new key, handy).

I've just built my first USB-booting ZFS system. I took the plunge after playing with an x4275 and using the internal CF slot for it. I boot off of a mirrored pair of USB sticks. It works great and doesn't eat 2 disk bays.

> Yes, logging and such will to some extent wear through the write capacity of the USB key, but I expect it'd last several years, which is enough for me to not worry about it.

I wouldn't worry so much about write wear (as I recently learned on this list) as about writes being dog slow. It was easy to redirect most log files to the spinning bits of rust (rough example at the end of this message). Some logs (/var/svc/log, e.g.) can't be redirected, but all of those that I could find are very infrequently updated.

-frank
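The example mentioned above - one way to do that redirection; pool and dataset names are only placeholders:

  # a dataset on the data pool for the chatty logs
  pfexec zfs create tank/logs

  # in /etc/syslog.conf, point the usual targets there (fields are tab-separated), e.g.:
  #   *.err;kern.debug;daemon.notice;mail.crit    /tank/logs/messages
  # syslogd won't create missing files, so create the file first, then refresh the service
  pfexec touch /tank/logs/messages
  pfexec svcadm refresh svc:/system/system-log:default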