The latest on when the updated zfsboot support will go into Nevada is either build 61 or 62. We are making some final fixes and getting tests run. We are aiming for 61, but we might just miss it. In that case, we should be putting back into 62.

Lori
On Tue, Mar 06, 2007 at 02:49:35PM -0700, Lori Alt wrote:
> The latest on when the updated zfsboot support will go into Nevada is either build 61 or 62. We are making some final fixes and getting tests run. We are aiming for 61, but we might just miss it. In that case, we should be putting back into 62.

That is outstanding news Lori. Just to make sure we are all on the same page, this is x86 only?

-brian
Brian Hechinger wrote on 03/06/07 14:52:
> That is outstanding news Lori. Just to make sure we are all on the same page, this is x86 only?

Yes, x86 only.

Lori
Lori Alt wrote:
> The latest on when the updated zfsboot support will go into Nevada is either build 61 or 62. We are making some final fixes and getting tests run. We are aiming for 61, but we might just miss it. In that case, we should be putting back into 62.

Thanks for the heads up.

I'm building a new file server at the moment and I'd like to make sure I can migrate to ZFS boot when it arrives.

My current plan is to create a pool on 4 500GB drives and throw in a small boot drive.

Will I be able to drop the boot drive and move / over to the pool when ZFS boot ships?

Cheers,

Ian
> I'm building a new file server at the moment and I'd like to make sure I can migrate to ZFS boot when it arrives.
>
> My current plan is to create a pool on 4 500GB drives and throw in a small boot drive.
>
> Will I be able to drop the boot drive and move / over to the pool when ZFS boot ships?

This thread from a year ago suggests that at least the first round of ZFS root pools will have restrictions that are not necessary on other pools (like no concatenation or RAIDZ).

http://www.opensolaris.org/jive/thread.jspa?threadID=7089

I've not noticed any posts since that modify its content.

--
Darren Dunham
Senior Technical Consultant, TAOS
On Thu, 2007-03-08 at 14:22 -0800, Darren Dunham wrote:
> This thread from a year ago suggests that at least the first round of ZFS root pools will have restrictions that are not necessary on other pools (like no concatenation or RAIDZ).
>
> http://www.opensolaris.org/jive/thread.jspa?threadID=7089
>
> I've not noticed any posts since that modify its content.

That would be too bad if raidz is not supported. I have been running a server with "bootable" zfs (3 disk w/raidz) for the past 6 months (1U server).

I've simply been using the trick that tabriz posted on her blog a while back, but I lost only a small amount of space on each drive by using a USB drive for the initial install and putting grub on its own.

Performance is not earth shattering due to (I think) /var/tmp and /var/log. And it's old ZFS code. I've not rebooted or upgraded since then.

Francois
Yes, the initial release of bootable zfs has restrictions on the root pool: i.e. no concatenation or RAIDZ, only a single-device pool or a mirrored configuration. This is mainly due to limitations on how many disks the firmware can access at boot time.

Lin

Francois Dion wrote:
> That would be too bad if raidz is not supported. I have been running a server with "bootable" zfs (3 disk w/raidz) for the past 6 months (1U server).
>
> I've simply been using the trick that tabriz posted on her blog a while back, but I lost only a small amount of space on each drive by using a USB drive for the initial install and putting grub on its own.
>
> Performance is not earth shattering due to (I think) /var/tmp and /var/log. And it's old ZFS code. I've not rebooted or upgraded since then.
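For reference, this is what that restriction means in terms of pool layout. A minimal sketch, with example disk slices and a hypothetical pool name (check zpool(1M) for your build):

    # acceptable root pool configurations: single device or a mirror
    zpool create rootpool c0t0d0s0
    zpool create rootpool mirror c0t0d0s0 c0t1d0s0

    # not acceptable for the root pool in the initial release:
    # zpool create rootpool raidz c0t0d0s0 c0t1d0s0 c0t2d0s0   (RAIDZ)
    # zpool create rootpool c0t0d0s0 c0t1d0s0                  (concatenation/stripe)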
Ian Collins wrote:
> Thanks for the heads up.
>
> I'm building a new file server at the moment and I'd like to make sure I can migrate to ZFS boot when it arrives.
>
> My current plan is to create a pool on 4 500GB drives and throw in a small boot drive.
>
> Will I be able to drop the boot drive and move / over to the pool when ZFS boot ships?

Yes, you should be able to, given that you already have a UFS boot drive running root.

Lin
> Ian Collins wrote:
> > My current plan is to create a pool on 4 500GB drives and throw in a small boot drive.
> >
> > Will I be able to drop the boot drive and move / over to the pool when ZFS boot ships?
>
> Yes, you should be able to, given that you already have a UFS boot drive running root.

Hi,

However, this raises another concern: during recent discussions regarding the disk layout of a zfs system (http://www.opensolaris.org/jive/thread.jspa?threadID=25679&tstart=0) it was said that currently we'd better give zfs the whole device (rather than slices) and keep swap off zfs devices for better performance.

If the above recommendation still holds, we still have to have a swap device out there other than devices managed by zfs. Is this limited by the design or implementation of zfs?

Ivan.
Lin Ling wrote:
> Ian Collins wrote:
> > My current plan is to create a pool on 4 500GB drives and throw in a small boot drive.
> >
> > Will I be able to drop the boot drive and move / over to the pool when ZFS boot ships?
>
> Yes, you should be able to, given that you already have a UFS boot drive running root.

Thanks.

As I intend setting up my pool as a striped mirror, it looks from the other postings like this will not be suitable for the boot device.

So an SVM mirror on a couple of small drives may still be the best bet for a small server.

Ian
Hi Ian,

I might have misunderstood your plan. I assumed you'll throw in a small boot drive as the zfs root pool. A ZFS root pool can be a mirrored pool, so you don't need to use an SVM mirror.

Lin

Ian Collins wrote:
> As I intend setting up my pool as a striped mirror, it looks from the other postings like this will not be suitable for the boot device.
>
> So an SVM mirror on a couple of small drives may still be the best bet for a small server.
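If the root pool starts out on a single small drive, a mirror half can be added later without reinstalling. A rough sketch with hypothetical pool and device names (the new disk also needs boot blocks, e.g. via installgrub on x86):

    # turn a single-disk root pool into a two-way mirror
    zpool attach rootpool c0t0d0s0 c0t1d0s0
    zpool status rootpool      # wait for the resilver to finish
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0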
Ivan Wang wrote:
> However, this raises another concern: during recent discussions regarding the disk layout of a zfs system (http://www.opensolaris.org/jive/thread.jspa?threadID=25679&tstart=0) it was said that currently we'd better give zfs the whole device (rather than slices) and keep swap off zfs devices for better performance.
>
> If the above recommendation still holds, we still have to have a swap device out there other than devices managed by zfs. Is this limited by the design or implementation of zfs?

ZFS supports swap to /dev/zvol, however, I do not have data related to performance.
Also note that ZFS does not support dump yet, see RFE 5008936.

Lin
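For anyone who wants to try it, swapping to a ZFS volume looks roughly like this. The pool, volume name and size are only examples:

    # create a 2 GB volume and add it as a swap device
    zfs create -V 2g tank/swapvol
    swap -a /dev/zvol/dsk/tank/swapvol
    swap -l      # verify it shows up

    # to make it permanent, add a line like this to /etc/vfstab:
    # /dev/zvol/dsk/tank/swapvol  -  -  swap  -  no  -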
> > Ivan Wang wrote:
> > > If the above recommendation still holds, we still have to have a swap device out there other than devices managed by zfs. Is this limited by the design or implementation of zfs?
> >
> > ZFS supports swap to /dev/zvol, however, I do not have data related to performance.
> > Also note that ZFS does not support dump yet, see RFE 5008936.

Got it, thanks, and a more general question: in a single disk root pool scenario, what advantage will zfs provide over ufs w/ logging? And when zfs boot is integrated in Nevada, will live upgrade work with zfs root?

Cheers,

Ivan.
Hello Ivan,

Sunday, March 11, 2007, 12:01:28 PM, you wrote:

IW> Got it, thanks, and a more general question: in a single disk root pool scenario, what advantage will zfs provide over ufs w/ logging? And when zfs boot is integrated in Nevada, will live upgrade work with zfs root?

Snapshots/clones + live upgrade or standard patching.
Additionally no more hassle with separate /opt, /var ...

Potentially also compression turned on on /var.

--
Best regards,
Robert                          mailto:rmilkowski at task.gda.pl
                                http://milek.blogspot.com
On 3/11/07, Robert Milkowski <rmilkowski at task.gda.pl> wrote:
> IW> Got it, thanks, and a more general question: in a single disk root pool scenario, what advantage will zfs provide over ufs w/ logging? And when zfs boot is integrated in Nevada, will live upgrade work with zfs root?
>
> Snapshots/clones + live upgrade or standard patching.
> Additionally no more hassle with separate /opt, /var ...

I am curious how snapshots and clones will be integrated with grub. Will it be possible to boot from a snapshot? I think this would be useful when applying patches, since you could snapshot /, /var and /opt, patch the system, and revert back (by choosing a snapshot from the grub menu) to the snapshot if something went awry. Is this how the zfs boot team envisions this working?

Thanks,

- Ryan
--
UNIX Administrator
http://prefetch.net
Robert Milkowski wrote:
> IW> Got it, thanks, and a more general question: in a single disk root pool scenario, what advantage will zfs provide over ufs w/ logging? And when zfs boot is integrated in Nevada, will live upgrade work with zfs root?
>
> Snapshots/clones + live upgrade or standard patching.
> Additionally no more hassle with separate /opt, /var ...
>
> Potentially also compression turned on on /var.

- just to add to Robert's list, here's other advantages ZFS on root has over UFS, even on a single disk:

* knowing when your data starts getting corrupted (if your disk starts failing, and what data is being lost)
* ditto blocks to take care of filesystem metadata consistency
* performance improvements over UFS
* ability to add disks to mirror the root filesystem at any time, should they become available
* ability to use free space on the root pool, making it available for other uses (by setting a reservation on the root filesystem, you can ensure that / always has sufficient available space)

- am I missing any others?

cheers,
tim
--
Tim Foster, Sun Microsystems Inc, Solaris Engineering Ops
http://blogs.sun.com/timf
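The reservation mentioned in the last point is a one-liner. A small sketch, with purely illustrative dataset names and sizes:

    # guarantee the root filesystem 4 GB even while the rest of the
    # pool is shared with other filesystems
    zfs set reservation=4g rootpool/root
    zfs get reservation,available rootpool/root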
On March 11, 2007 6:05:13 PM +0000 Tim Foster <Tim.Foster at Sun.COM> wrote:
> * ability to add disks to mirror the root filesystem at any time, should they become available

Can't this be done with UFS+SVM as well? A reboot would be required but you have to do regular reboots anyway just for patching.

-frank
On Sun, Mar 11, 2007 at 11:21:13AM -0700, Frank Cusack wrote:
> On March 11, 2007 6:05:13 PM +0000 Tim Foster <Tim.Foster at Sun.COM> wrote:
> > * ability to add disks to mirror the root filesystem at any time, should they become available
>
> Can't this be done with UFS+SVM as well? A reboot would be required but you have to do regular reboots anyway just for patching.

It can, but you have to plan ahead. You need to leave a small partition for the SVM metadata. Something I *never* remember to do (I'm too used to working with Veritas).

If you can remember to plan ahead, then yes. ;)

-brian
> Robert Milkowski wrote:
> > Snapshots/clones + live upgrade or standard patching.
> > Additionally no more hassle with separate /opt, /var ...
> >
> > Potentially also compression turned on on /var.
>
> - just to add to Robert's list, here's other advantages ZFS on root has over UFS, even on a single disk:
>
> * knowing when your data starts getting corrupted (if your disk starts failing, and what data is being lost)
> * ditto blocks to take care of filesystem metadata consistency
> * performance improvements over UFS
> * ability to add disks to mirror the root filesystem at any time, should they become available
> * ability to use free space on the root pool, making it available for other uses (by setting a reservation on the root filesystem, you can ensure that / always has sufficient available space)
>
> - am I missing any others?

* ability to show off to your geeky friends who will all say "neato!"

Dennis
Matty wrote:
> I am curious how snapshots and clones will be integrated with grub. Will it be possible to boot from a snapshot? I think this would be useful when applying patches, since you could snapshot /, /var and /opt, patch the system, and revert back (by choosing a snapshot from the grub menu) to the snapshot if something went awry. Is this how the zfs boot team envisions this working?

You can snapshot/clone, and revert back by choosing the clone from the grub menu to boot.

Since a snapshot is a read-only filesystem, directly booting from it is not supported for the initial release. However, it is on our to-investigate list.

Lin
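At the dataset level, the snapshot/clone half of that workflow is already available; only the grub wiring is new. A sketch, with hypothetical pool and dataset names:

    # preserve the current root before patching
    zfs snapshot rootpool/root@prepatch
    # create a writable clone that could be offered as a boot entry
    zfs clone rootpool/root@prepatch rootpool/root-prepatch
    zfs list -r rootpool

How a clone actually gets added to the grub menu is the part discussed in the following messages.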
On 3/11/07, Lin Ling <lin.ling at sun.com> wrote:
> You can snapshot/clone, and revert back by choosing the clone from the grub menu to boot.
> Since a snapshot is a read-only filesystem, directly booting from it is not supported for the initial release.
> However, it is on our to-investigate list.

How will /boot/grub/menu.lst be updated? Will the admin have to run bootadm after the root clone is created, or will the zfs utility be enhanced to populate / remove entries from the menu.lst?

Thanks,

- Ryan
--
UNIX Administrator
http://prefetch.net
Matty wrote:
> How will /boot/grub/menu.lst be updated? Will the admin have to run bootadm after the root clone is created, or will the zfs utility be enhanced to populate / remove entries from the menu.lst?

The details of how menu.lst will be updated are still being worked out. We don't plan on using the zfs utility to handle it, though.

Lin
While the snapshot isn't RW, the clone is, and would certainly be helpful in this case...

Isn't the whole idea to:

0) boot into single-user/boot-archive if you're paranoid (or just quiesce and clone if you feel lucky)
1) "clone" the primary OS instance + relevant slices & boot into the primary OS
2) apply "alternate root" patches to the cloned file systems (leaving the original OS COMPLETELY UNTOUCHED) (run a DTrace watch-dog to make sure bugs in patch pre/post-install don't act against the primary OS instead of the alternate root... (this would NEVER happen, would it????))
3) add entries for the cloned file system in GRUB, making them a valid boot option (possibly clean up a few /etc/vfstab path issues in the cloned FS while we're at it)

Boot the clone, or back out to the original (by booting that menu option) if not 100% happy...

If you're happy on the clone, say after a week, "zfs promote" the clone to become the primary. (grub gets cleaned up accordingly so it's not confused, and your backout option is GONE/recycled)

(Wait, we haven't used live-upgrade yet... What to do?)

Thoughts on this scenario working for the zfs-boot initial release?

Thanks,

-- MikeE
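The promote step at the end maps directly onto an existing zfs command. A sketch with made-up dataset names, assuming the clone has been running as root for a while:

    # make the clone the long-term root filesystem
    zfs promote rootpool/root-new
    # after the promote, the old root becomes the dependent clone;
    # remove it (and its grub entry) once no backout path is needed
    zfs destroy rootpool/root-old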
> On Sun, Mar 11, 2007 at 11:21:13AM -0700, Frank Cusack wrote:
> > On March 11, 2007 6:05:13 PM +0000 Tim Foster <Tim.Foster at Sun.COM> wrote:
> > > * ability to add disks to mirror the root filesystem at any time, should they become available
> >
> > Can't this be done with UFS+SVM as well? A reboot would be required but you have to do regular reboots anyway just for patching.

*if* you already have the root filesystem under SVM in the first place, then no reboot should be required to add a mirror. And I assume that's all we're talking about for the ZFS mirroring as well.

> It can, but you have to plan ahead. You need to leave a small partition for the SVM metadata. Something I *never* remember to do (I'm too used to working with Veritas).

SVM metadata on the initial root disk would be required for the initial installation, not for the mirroring. And it could be taken from swap just as VxVM does.

-- Darren Dunham
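For comparison with the zpool attach case above, here is roughly what adding the second half of an SVM root mirror looks like once root is already on a metadevice. Device and metadevice names are examples only:

    # root already mounted from d10, built from submirror d11
    metadb -a c1t1d0s7                  # replicas on the new disk
    metainit d12 1 1 c1t1d0s0           # submirror on the new disk
    metattach d10 d12                   # attach; the resync runs online
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0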
Brian Hechinger wrote:
> It can, but you have to plan ahead. You need to leave a small partition for the SVM metadata. Something I *never* remember to do (I'm too used to working with Veritas).
>
> If you can remember to plan ahead, then yes. ;)

Not necessarily, metadb -a -f will force all onto one disk, but it should only be used in emergency cases really, where you are already in the situation of not having a partition to put the meta DB on.

Enda
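In practice that emergency case looks something like this; the slice name is just an example, and at least three replicas are recommended even when they all end up on one disk:

    # force all state-database replicas onto a single free slice
    metadb -a -f -c 3 c1t0d0s7
    metadb          # list the replicas to confirm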
Malachi de AElfweald
2007-Mar-12 15:28 UTC

> ZFS supports swap to /dev/zvol, however, I do not have data related to performance.
> Also note that ZFS does not support dump yet, see RFE 5008936.

I am getting ready to install a new server from scratch. While I had been hoping to do a full-raidz2 system, from what I am understanding here, my best bet is to still do a UFS drive for the root/boot partition, since ZFS does not support dump. Is this correct?

If so, I have two questions:
1) How do I at least mirror the root partition during install (instead of the convoluted after-the-fact instructions all over the net)
2) Will it be possible at some point to switch to full-raidz2 without reinstalling the OS?

Thanks,
Malachi
Malachi de AElfweald wrote:
> 1) How do I at least mirror the root partition during install (instead of the convoluted after-the-fact instructions all over the net)

Use Jumpstart. A profile to install your machine with mirroring should be pretty short, simple, and easy to create. It will be done at install time, and you won't have to do a thing manually.

-Kyle
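As a rough idea of how short that profile can be, here is a sketch of the relevant fragment. Metadevice names, slices and sizes are illustrative; check the custom JumpStart profile documentation for your release before using anything like it:

    install_type    initial_install
    partitioning    explicit
    filesys         mirror:d10 c0t0d0s0 c0t1d0s0 free  /
    filesys         mirror:d20 c0t0d0s1 c0t1d0s1 2048  swap
    metadb          c0t0d0s7 size 8192 count 3
    metadb          c0t1d0s7 size 8192 count 3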
On 12-Mar-07, at 11:28 AM, Malachi de AElfweald wrote:
> I am getting ready to install a new server from scratch. While I had been hoping to do a full-raidz2 system, from what I am understanding here, my best bet is to still do a UFS drive for the root/boot partition, since ZFS does not support dump. Is this correct?
>
> If so, I have two questions:
> 1) How do I at least mirror the root partition during install (instead of the convoluted after-the-fact instructions all over the net)

Actually the after-the-fact instructions are simple to follow, I've done it several times, including painlessly switching remote servers to SVM.

A good recipe is: http://sunsolve.sun.com/search/document.do?assetkey=1-9-83605-1 (for Sun Fire x86)

--Toby

> 2) Will it be possible at some point to switch to full-raidz2 without reinstalling the OS?
> I am getting ready to install a new server from scratch. While I had been hoping to do a full-raidz2 system, from what I am understanding here, my best bet is to still do a UFS drive for the root/boot partition, since ZFS does not support dump. Is this correct?

There is no official support for ZFS root yet. So in most situations you'll want root on UFS, yes.

> If so, I have two questions:
> 1) How do I at least mirror the root partition during install (instead of the convoluted after-the-fact instructions all over the net)

There is no way to do that during an interactive install. You can specify it in a jumpstart profile, but that seems to be quite a bit fiddlier than mirroring after installation.

> 2) Will it be possible at some point to switch to full-raidz2 without reinstalling the OS?

You mean for the root filesystem? I don't think that's known yet. The announcements about the support of root on ZFS have mentioned that they are not targeting raidz or raidz2 as an initial option. I certainly wouldn't plan any current system with that in mind.

-- Darren Dunham
Malachi de Ælfweald
2007-Mar-12 15:59 UTC

I took a look at some Jumpstart instructions... As a n00b to Solaris administration, I think I am likely to screw that up at the moment.

I know that with my FreeBSD system, I specified the RAID at the hardware level, then fdisk detected the volume as something I could install to... Last time I tried that with Solaris, it didn't detect any drives.

Malachi

On 3/12/07, Kyle McDonald <Kyle.McDonald at bigbandnet.com> wrote:
> Malachi de AElfweald wrote:
> > 1) How do I at least mirror the root partition during install (instead of the convoluted after-the-fact instructions all over the net)
>
> Use Jumpstart. A profile to install your machine with mirroring should be pretty short, simple, and easy to create. It will be done at install time, and you won't have to do a thing manually.
>
> -Kyle
> I know that with my FreeBSD system, I specified the RAID at the hardware level, then fdisk detected the volume as something I could install to... Last time I tried that with Solaris, it didn't detect any drives.

HW raid depends on the specific hardware and the existence of Solaris drivers to talk to it. If you have them, then that's a valid path as well.

-- Darren Dunham
This is great news. A question crossed my mind. I'm sure it's a dumb one but I thought I'd ask anyway... How will Live Upgrade work when the boot partition is in the pool?

Gary
[sorry for the late reply, the original got stuck in the mail] clarification below...

> > Ian Collins wrote:
> > > My current plan is to create a pool on 4 500GB drives and throw in a small boot drive.
> > >
> > > Will I be able to drop the boot drive and move / over to the pool when ZFS boot ships?
> >
> > Yes, you should be able to, given that you already have a UFS boot drive running root.
>
> However, this raises another concern: during recent discussions regarding the disk layout of a zfs system (http://www.opensolaris.org/jive/thread.jspa?threadID=25679&tstart=0) it was said that currently we'd better give zfs the whole device (rather than slices) and keep swap off zfs devices for better performance.
>
> If the above recommendation still holds, we still have to have a swap device out there other than devices managed by zfs. Is this limited by the design or implementation of zfs?

We've updated the wiki to help clarify this confusion. The consensus best practice is to have enough RAM that you don't need to swap. If you need to swap, your life will be sad no matter what your disk config is. For those systems with limited numbers of disks, you really don't have much choice about where swap is located, so keep track of your swap *usage* and adjust the system accordingly.

-- richard
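For tracking that usage, the stock tools are enough; a quick sketch:

    swap -s        # summary of reserved, allocated and available swap
    swap -l        # per-device view of the configured swap space
    vmstat 5       # a consistently non-zero sr column means the page
                   # scanner is running, i.e. real memory pressure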
Richard Elling wrote:
> We've updated the wiki to help clarify this confusion. The consensus best practice is to have enough RAM that you don't need to swap. If you need to swap, your life will be sad no matter what your disk config is. For those systems with limited numbers of disks, you really don't have much choice about where swap is located, so keep track of your swap *usage* and adjust the system accordingly.

One thing several of us want to do in Nevada is allocate swap space transparently out of the root pool. Yes, there'd be reservations/allocations, etc. All we need then is a way to have a dedicated dump device in the same pool...

- Bart

--
Bart Smaalders                  Solaris Kernel Performance
barts at cyber.eng.sun.com       http://blogs.sun.com/barts
On 12/03/07, Darren Dunham <ddunham at taos.com> wrote:
> > > Can't this be done with UFS+SVM as well? A reboot would be required but you have to do regular reboots anyway just for patching.
>
> *if* you already have the root filesystem under SVM in the first place, then no reboot should be required to add a mirror. And I assume that's all we're talking about for the ZFS mirroring as well.

Is there any reason you'd have SVM on just the one partition? I can see why you'd do that with ZFS (snapshot, compression, etc).

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
> > *if* you already have the root filesystem under SVM in the first place, then no reboot should be required to add a mirror. And I assume that's all we're talking about for the ZFS mirroring as well.
>
> Is there any reason you'd have SVM on just the one partition? I can see why you'd do that with ZFS (snapshot, compression, etc).

Exactly the reason discussed. It allows later mirroring without requiring an unmount. For filesystems other than root and /usr, you could also expand as long as free space is available.

Most sites tend to put all or none of the OS filesystems under SVM. I doubt that the practice is very common, but I've been to at least one place that did setup of SDS/SVM on all root disks, even if there was only one disk in the machine.

My point was only that the usual issue with mirroring root is that it cannot be unmounted, so the existing mount device is fixed until reboot. That issue is similar for all of SVM, VxVM, and ZFS. As long as the mount device is already a VM object, adding the mirror should be trivial.

-- Darren Dunham
I'm sure it's not blessed, but another process to maximize the zfs space on a system with few disks is:

1) boot from SXCR http://www.opensolaris.org/os/downloads/on/
2) select "min install" with 512M / 512M swap, rest /export/home

use format to copy the partition table from disk0 to disk1

    umount /export/home
    zpool create -f zfs c1t0d0s7 c1t1d0s7
    zfs create zfs/usr
    zfs create zfs/var
    zfs create zfs/opt
    cd /zfs
    ufsdump 0fs - 999999 /usr /var | ufsrestore -rf -
    mkdir var/run
    zfs set mountpoint=legacy zfs/usr
    zfs set mountpoint=legacy zfs/var
    zfs set mountpoint=legacy zfs/opt

vi /etc/vfstab, adding these lines:

    /dev/dsk/c1t0d0s0  /dev/rdsk/c1t0d0s0  /     ufs   1  no   -
    /dev/dsk/c1t0d0s1  -                   -     swap  -  no   -
    /dev/dsk/c1t1d0s1  -                   -     swap  -  no   -
    zfs/usr            -                   /usr  zfs   -  yes  -
    zfs/var            -                   /var  zfs   -  yes  -
    zfs/opt            -                   /opt  zfs   -  yes  -

then:

    cd /
    bootadm update-archive
    mkdir nukeme
    mv var/* nukeme
    mv usr/* nukeme

power cycle as there is no reboot :-)

    rm -rf /nukeme

note there isn't enough space for a crashdump, but there is space for a backup of root on c1t1d0s0 if you want

bfu from here to get the slower debug bits, but an easy way to get /usr/ucb:

    pkgadd SUNWadmc SUNWtoo SUNWpool SUNWzoner SUNWzoneu
    pkgadd SUNWbind SUNWbindr SUNWluu SUNWadmfw SUNWlur SUNWluzone
    echo "set kmem_flags = 0x0" >> /etc/system
    touch /usr/lib/dbus-daemon
    chmod 755 /usr/lib/dbus-daemon

grab build-tools and on-bfu from http://dlc.sun.com/osol/on/downloads/current/

vi /opt/onbld/bin/bfu to remove the fastfs depend, path it out as /opt/onbld/bin/`uname -p`/fastfs, and change the remote acr to /opt/onbld/bin/acr

vi /opt/onbld/bin/acr to path out /usr/bin/gzip

I've been fighting an issue that after an hour I can ping the default router but packets never get forwarded to the default route.. fails with either e1000g0 or bge0, and an ifconfig down ; ifconfig up fixes it for another hour or so. http://bugs.opensolaris.org/view_bug.do?bug_id=6523767 in opensol-20070312 didn't fix it either. sigh..

Rob
Hi Richard,

> The consensus best practice is to have enough RAM that you don't need to swap. If you need to swap, your life will be sad no matter what your disk config is.

From my understanding, Solaris does not overcommit memory allocation, so every allocation must be backed by some form of memory (real RAM or swap). Some programs tend to allocate more memory than they actually use, where unused memory is mapped from swap without any I/O. Without swap this would be drawn from real memory, stealing memory from other applications or the page cache. A big swap is therefore helpful even if there is no swapping activity.

Is this implemented differently in Solaris 10/Nevada?

Best regards

-- Dagobert
Dagobert Michelsen wrote:
> From my understanding, Solaris does not overcommit memory allocation, so every allocation must be backed by some form of memory (real RAM or swap). Some programs tend to allocate more memory than they actually use, where unused memory is mapped from swap without any I/O. Without swap this would be drawn from real memory, stealing memory from other applications or the page cache. A big swap is therefore helpful even if there is no swapping activity.
>
> Is this implemented differently in Solaris 10/Nevada?

warning: noun/verb overload. In my context, swap is a verb.

-- richard
Richard Elling wrote:
> warning: noun/verb overload. In my context, swap is a verb.

It is also a common shorthand for "swap space."

--Ed