Hello, I've been turning this over in my mind and thought I'd post to see what creative ideas come up here. Given 4 internal drives in a server, what kind of ZFS layout would you use? This is SPARC, so I can't boot off ZFS, and ideally the OS should be mirrored.

Right now I feel tied to an SVM mirror for the OS and a zfs mirror or stripe for the remaining two drives. If I don't mirror the OS with SVM, then I have 3 free drives for raid-z, of course. If there are other options or creative ideas for this type of layout, I'd love to hear your feedback. Five internal drives sure would be nice in my situation; then I could mirror the OS and use raid-z on the remaining disks for data. No such luck though, I'm limited to 4. Many thanks for your feedback...
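Just to be concrete, the layout I keep coming back to is a plain SVM root mirror on the first two disks plus a zfs pool on the other two, roughly like this (device names are only placeholders for whatever the box actually has):

    # OS mirrored with SVM on disks 0+1 the usual way (metadb/metainit/metaroot),
    # then the remaining two disks go to zfs:
    zpool create tank mirror c0t2d0 c0t3d0     # mirrored: half the space, survives a disk loss
    # zpool create tank c0t2d0 c0t3d0          # or striped: all the space, no redundancy
    # ...and if I skip the SVM mirror entirely, three disks free up for raid-z:
    # zpool create tank raidz c0t1d0 c0t2d0 c0t3d0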
On 11/1/07, Scott Spyrison <sspyrison at gmail.com> wrote:
> Given 4 internal drives in a server, what kind of ZFS layout would you use?

What's wrong with mirroring? What are you doing with the machine in question? I think that will make a big difference in what's best to do with the disks.

Will
On Thu, 2007-11-01 at 08:08 -0700, Scott Spyrison wrote:
> Given 4 internal drives in a server, what kind of ZFS layout would you use?

Assuming you need more than one disk's worth of ZFS space after mirroring:

disks 0+1: partition them with a "space hog" slice at the start of the disk, followed by three more slices: 1) root #1, 2) root #2, 3) swap. Create SVM mirrors for the two roots and for swap. Use Live Upgrade to ping-pong between roots #1 and #2: SVM mirroring protects against disk failure, while the two alternating roots protect against software and administrative error. Use the space hog slices at the start of the two disks as a zfs mirror.

disks 2+3: mirror them as a 2nd vdev in the same pool.

Future-proofing: when the initial, limited zfs boot comes along (which will boot only from pools with a single top-level non-raidz vdev), convert slice 1 of disks 0+1 to a zfs pool, migrate the system to root on that pool, then expand it into slices 2 and 3. When/if fully general zfs boot comes along (allowing boot from pools with multiple top-level vdevs), move root into the big pool and expand the space hog slices on disks 0 and 1 into the space formerly occupied by the zfs root pool.

With more disks I'd keep root separate from the data pools, especially if the data disks were external; it makes it easier to move the pool to a different server in the event of a catastrophic failure or the unexpected arrival of better hardware with different internal disks. (I've had the second happen...)

    - Bill
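P.S. In case a command-level sketch helps, the initial setup would look roughly like the following. Slice numbering (s0 = space hog, s1/s2 = the two roots, s3 = swap, s7 = metadb replicas) and device names are only illustrative:

    # SVM state databases and a mirror for root #1 (root #2 gets populated later
    # by Live Upgrade); -f is needed because the root slice is in use:
    metadb -a -f -c 2 c0t0d0s7 c0t1d0s7
    metainit -f d11 1 1 c0t0d0s1
    metainit d12 1 1 c0t1d0s1
    metainit d10 -m d11
    metaroot d10
    # after the reboot: metattach d10 d12, and the same pattern for a swap mirror on s3

    # the zfs pool: first vdev from the space hog slices, second vdev from disks 2+3
    zpool create tank mirror c0t0d0s0 c0t1d0s0
    zpool add tank mirror c0t2d0 c0t3d0

    # ping-pong: create the alternate BE on an SVM mirror of the root #2 slices
    # (d30 here, built the same way as d10 above)
    lucreate -c be1 -n be2 -m /:/dev/md/dsk/d30:ufs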