Is it possible to bring one slice of a disk under ZFS control and leave the others as UFS? A customer is trying to mirror one slice using ZFS.

Please respond to me directly and to the alias.

Thanks,
Shawn
Hello Shawn,

Thursday, December 13, 2007, 3:46:09 PM, you wrote:

SJ> Is it possible to bring one slice of a disk under ZFS control and
SJ> leave the others as UFS?

SJ> A customer is trying to mirror one slice using ZFS.

Yes, it is possible - it just works.

-- 
Best regards,
Robert                          mailto:rmilkowski at task.gda.pl
                                       http://milek.blogspot.com
What are the commands? Every example I see uses whole disks, c1t0d0, c1t1d0, and so on - never a slice, just the complete disk.

Robert Milkowski wrote:
> SJ> Is it possible to bring one slice of a disk under ZFS control and
> SJ> leave the others as UFS?
>
> SJ> A customer is trying to mirror one slice using ZFS.
>
> Yes, it is possible - it just works.
Shawn,

Using slices for ZFS pools is generally not recommended, so we kept command examples with slices to a minimum. The syntax is the same as for whole disks; you simply name the slice:

# zpool create tank mirror c1t0d0s0 c1t1d0s0

Keep in mind that using slices from the same disk for both UFS and ZFS makes administration more complex. Please see the Storage Pools section of the ZFS Best Practices Guide:

http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#Storage_Pools

* The recovery process for replacing a failed disk is more complex when disks contain both ZFS and UFS file systems on slices.
* ZFS pools (and underlying disks) that also contain UFS file systems on slices cannot be easily migrated to other systems by using the zpool import and export features.
* In general, maintaining slices increases administration time and cost. Lower your administration costs by simplifying your storage pool configuration model.
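If the pool already exists on a single slice, it can also be converted to a mirror after the fact with zpool attach. A minimal sketch - the tank pool name and the s4 slice names here are only illustrative, not from this case:

# zpool create tank c1t0d0s4
# zpool attach tank c1t0d0s4 c1t1d0s4

zpool attach resilvers the existing data onto the new slice and turns the single device into a two-way mirror; zpool status shows the resilver progress.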
Cindy

On 13-Dec-07, at 1:56 PM, Shawn Joy wrote:
> What are the commands? Every example I see uses whole disks, c1t0d0,
> c1t1d0, and so on - never a slice, just the complete disk.

I have used the following HOWTO. (The markup is TWiki, FWIW.) Device names are for a 2-drive X2100; other machines may differ - for example, X4100 drives may be =c3t2d0= and =c3t3d0=.

---++ Partitioning

This is done before installing Solaris 10, or after installing a new disk to replace a failed mirror disk.

   * Run *format* and choose the correct disk device
   * Enter *fdisk* from the menu
   * Delete any diagnostic partition and any existing Solaris partition
   * Create one Solaris2 partition covering 100% of the disk
   * Exit *fdisk*; quit *format*

---++ Slice layout

| slice 0 | root       | 8192M  | <-- this is not really large enough :-)
| slice 1 | swap       | 2048M  |
| slice 2 | -----      ||
| slice 3 | SVM metadb | 16M    |
| slice 4 | zfs        | 68200M |
| slice 5 | SVM metadb | 16M    |
| slice 6 | -----      ||
| slice 7 | SVM metadb | 16M    |

The final slice layout should be saved with =prtvtoc /dev/rdsk/c1d0s2 > vtoc=. The second (mirror) disk can then be forced into the same layout with =fmthard -s vtoc /dev/rdsk/c2d0s2=. (Replacement drives must be partitioned in exactly the same way, so keep a copy of the vtoc in a file.)

GRUB must also be installed on the second disk:

=/sbin/installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2d0s0=

---++ Solaris Volume Manager setup

The root and swap slices will be mirrored using SVM. See:

   * http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#UFS.2FSVM
   * http://sunsolve.sun.com/search/document.do?assetkey=1-9-83605-1

(As of Sol10U2 (June 06), ZFS is not supported for the root partition.)

At this point the system has been installed on, and booted from, the first disk, with root on c1d0s0 and swap on the same disk. The following steps set up SVM without interfering with the currently mounted partitions. The second disk has already been partitioned identically to the first, and the data will be copied to the mirror when =metattach= is run below. Changing =/etc/vfstab= makes the machine boot from the SVM mirror devices in future.

   * Create SVM metadata (slice 3) with redundant copies on slices 5 and 7: %BR% =metadb -a -f c1d0s3 c2d0s3 c1d0s5 c2d0s5 c1d0s7 c2d0s7=
   * Create submirrors on the first disk (root and swap slices): %BR% =metainit -f d10 1 1 c1d0s0= %BR% =metainit -f d11 1 1 c1d0s1=
   * Create submirrors on the second disk: %BR% =metainit -f d20 1 1 c2d0s0= %BR% =metainit -f d21 1 1 c2d0s1=
   * Create the mirrors: %BR% =metainit d0 -m d10= %BR% =metainit d1 -m d11=
   * Take a backup copy of =/etc/vfstab=
   * Define the root slice: =metaroot d0= (this changes the mount device for / in =/etc/vfstab=; it should now be =/dev/md/dsk/d0=)
   * Edit =/etc/vfstab=, changing the device for swap to =/dev/md/dsk/d1=
   * Reboot to test. If there is a problem, use single-user mode and revert vfstab. Confirm that the root and swap devices are now the mirrored devices with =df= and =swap -l=
   * Attach the second halves to the mirrors: %BR% =metattach d0 d20= %BR% =metattach d1 d21=

The mirrors will now begin to sync; progress can be checked with =metastat -c=. The whole sequence is collected into one transcript below.

---+++ Also see

   * [[http://slacksite.com/solaris/disksuite/disksuite.html recipe]] at slacksite.com
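For convenience, here is the SVM sequence above as a single consolidated transcript. This is only a sketch: the c1d0/c2d0 device names and the d0/d1/d10/d11/d20/d21 metadevice names follow this HOWTO and will differ on other hardware.

<verbatim>
# Clone the slice layout from the first disk to the second, keeping a copy
prtvtoc /dev/rdsk/c1d0s2 > vtoc
fmthard -s vtoc /dev/rdsk/c2d0s2

# Make the second disk bootable
/sbin/installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2d0s0

# State database replicas on slices 3, 5 and 7 of both disks
metadb -a -f c1d0s3 c2d0s3 c1d0s5 c2d0s5 c1d0s7 c2d0s7

# Submirrors for root (s0) and swap (s1) on each disk
metainit -f d10 1 1 c1d0s0
metainit -f d11 1 1 c1d0s1
metainit -f d20 1 1 c2d0s0
metainit -f d21 1 1 c2d0s1

# One-way mirrors built on the first disk's submirrors
metainit d0 -m d10
metainit d1 -m d11

# Point / at d0 in /etc/vfstab; change swap to /dev/md/dsk/d1 by hand, then reboot
metaroot d0

# After the test reboot, attach the second halves and watch the resync
metattach d0 d20
metattach d1 d21
metastat -c
</verbatim>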
---++ ZFS setup

Slice 4 is set aside for the ZFS pool - the system's active data.

   * Create the pool: =zpool create pool mirror c1d0s4 c2d0s4=
   * Create a filesystem for home directories: =zfs create pool/home= %BR% (To make this active, move any existing home directories from =/home= into =/pool/home=; then =zfs set mountpoint=/home pool/home=; log out; and log back in.)
   * Set up a regular scrub by adding a =crontab= line such as: =0 4 1 * * zpool scrub pool=

<verbatim>
bash-3.00# zpool create pool mirror c1d0s4 c2d0s4
bash-3.00# zpool status
  pool: pool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1d0s4  ONLINE       0     0     0
            c2d0s4  ONLINE       0     0     0

errors: No known data errors
bash-3.00# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
pool  75.5K  65.5G  24.5K  /pool
bash-3.00#
</verbatim>
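---++ Replacing a failed mirror disk

This is a sketch of how a failed second disk would be recovered on the layout above - the extra work the Best Practices Guide warns about when one disk carries both SVM/UFS and ZFS slices. It assumes the failed disk is =c2d0= and that the saved =vtoc= file is still available; check your metadevice and pool names before running anything like it.

<verbatim>
# Re-create the slice layout and boot blocks on the replacement disk
fmthard -s vtoc /dev/rdsk/c2d0s2
/sbin/installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2d0s0

# SVM side: replace the dead state database replicas, then re-enable
# the failed submirror components so they resync
metadb -d c2d0s3 c2d0s5 c2d0s7
metadb -a c2d0s3 c2d0s5 c2d0s7
metareplace -e d0 c2d0s0
metareplace -e d1 c2d0s1

# ZFS side: resilver the pool's slice on the same disk
zpool replace pool c2d0s4
zpool status pool
</verbatim>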
---++ References

   * [[http://docs.sun.com/app/docs/doc/819-5461 ZFS Admin Guide]]
   * [[http://docs.sun.com/app/docs/doc/816-4520 SVM Admin Guide]]

Cindy.Swearingen at Sun.COM wrote:
> Shawn,
>
> Using slices for ZFS pools is generally not recommended, so we kept
> command examples with slices to a minimum:
>
> # zpool create tank mirror c1t0d0s0 c1t1d0s0

Cindy,
I think the term "generally not recommended" requires more context. In the case of a small system - a laptop or desktop, say - the disks often serve multiple purposes beyond ZFS. I think the way we have written this in the Best Practices wiki is fine, but perhaps we should ask the group at large. Thoughts, anyone?

I do like keeping the examples minimal, though. If one actually reads any of the manuals, we clearly say that whole disks or slices are fine. Still, on occasion someone propagates the news that ZFS only works with whole disks, and we have to correct the confusion afterwards.
 -- richard
On 13-Dec-07, at 3:54 PM, Richard Elling wrote:
> I think the term "generally not recommended" requires more context.
> In the case of a small system - a laptop or desktop, say - the disks
> often serve multiple purposes beyond ZFS.

In particular in a 2-disk system that boots from UFS (that was my situation).

--Toby