Hi Dick,

I am redirecting your question to the zfs-discuss mailing list, where
people are more knowledgeable about this problem and your question can
be better answered.

Best regards,
Jan

dick hoogendijk wrote:
> I have s10u6 installed on my server.
> zfs list (partly):
> NAME                USED  AVAIL  REFER  MOUNTPOINT
> rpool              88.8G   140G  27.5K  /rpool
> rpool/ROOT         20.0G   140G    18K  /rpool/ROOT
> rpool/ROOT/s10BE2  20.0G   140G  7.78G  /
>
> But just now, on a newly installed s10u6 system, I got rpool/ROOT with
> the mountpoint "legacy".
>
> The drives were different. On the latter ("legacy") system the disk
> was not formatted yet (in VirtualBox). On my server I switched from
> UFS to ZFS, so I first created an rpool and then did a luupgrade into
> it. That could explain the mountpoint /rpool/ROOT, but WHY the
> difference? Why can't s10u6 install the same mountpoint on the new
> disk? The server runs very well; is this "legacy" thing really needed?
On 12/02/08 03:21, jan damborsky wrote:
> dick hoogendijk wrote:
>> I have s10u6 installed on my server.
>> zfs list (partly):
>> NAME                USED  AVAIL  REFER  MOUNTPOINT
>> rpool              88.8G   140G  27.5K  /rpool
>> rpool/ROOT         20.0G   140G    18K  /rpool/ROOT
>> rpool/ROOT/s10BE2  20.0G   140G  7.78G  /
>>
>> But just now, on a newly installed s10u6 system, I got rpool/ROOT
>> with the mountpoint "legacy".

The mount point for <rootpoolname>/ROOT is supposed to be "legacy"
because that dataset should never be mounted. It's just a "container"
dataset to group all the BEs.

>> The drives were different. On the latter ("legacy") system the disk
>> was not formatted yet (in VirtualBox). On my server I switched from
>> UFS to ZFS, so I first created an rpool and then did a luupgrade into
>> it. That could explain the mountpoint /rpool/ROOT, but WHY the
>> difference? Why can't s10u6 install the same mountpoint on the new
>> disk? The server runs very well; is this "legacy" thing really
>> needed?

When you created the rpool, did you also explicitly create the
rpool/ROOT dataset? If you did create it and didn't set the mount point
to "legacy", that explains why you ended up with your original
configuration. If you didn't create the rpool/ROOT dataset yourself and
instead let LiveUpgrade create it automatically, and LiveUpgrade set
the mountpoint to /rpool/ROOT, then that's a bug in LiveUpgrade (though
a minor one, I think).

Lori
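For reference, on a system where the installer set things up as Lori
describes, a property check on the container dataset would look roughly
like this; the output shown is illustrative, not captured from a real
machine:

  # "legacy" and unmounted is the expected state for the BE container.
  zfs get mountpoint,mounted rpool/ROOT
  # NAME        PROPERTY    VALUE   SOURCE
  # rpool/ROOT  mountpoint  legacy  local
  # rpool/ROOT  mounted     no      -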
Lori Alt wrote:
> The mount point for <rootpoolname>/ROOT is supposed to be "legacy"
> because that dataset should never be mounted. It's just a "container"
> dataset to group all the BEs.
>
> When you created the rpool, did you also explicitly create the
> rpool/ROOT dataset? If you did create it and didn't set the mount
> point to "legacy", that explains why you ended up with your original
> configuration. If you didn't create the rpool/ROOT dataset yourself
> and instead let LiveUpgrade create it automatically, and LiveUpgrade
> set the mountpoint to /rpool/ROOT, then that's a bug in LiveUpgrade
> (though a minor one, I think).

No, I'm quite positive all I did was "zpool create rpool" and after
that a "lucreate -n zfsBE -p rpool" followed by "luupgrade -u -n zfsBE
-s /iso".

So it must have been LU that "forgot" to set the mountpoint to legacy.
What is the correct syntax to correct this situation?

-- 
Dick Hoogendijk -- PGP/GnuPG key: F86289CE
+ http://nagual.nl/ | SunOS 10u6 10/08 ZFS +
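For anyone following the same UFS-to-ZFS migration, a sketch of the
sequence Dick describes, with the container dataset pre-created so its
mountpoint does not depend on LU setting it: c1t0d0s0 is a placeholder
for whatever slice holds the pool, and the pre-create step is only
needed if you don't trust LU to do it.

  # Create the root pool (placeholder device; root pools need an
  # SMI-labeled slice).
  zpool create rpool c1t0d0s0
  # Pre-create the BE container with the legacy mountpoint so it is
  # never mounted.
  zfs create -o mountpoint=legacy rpool/ROOT
  # Copy the current UFS BE into the pool, then upgrade it from the
  # install image.
  lucreate -n zfsBE -p rpool
  luupgrade -u -n zfsBE -s /iso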
On 12/02/08 11:29, dick hoogendijk wrote:
> No, I'm quite positive all I did was "zpool create rpool" and after
> that a "lucreate -n zfsBE -p rpool" followed by "luupgrade -u -n
> zfsBE -s /iso".
>
> So it must have been LU that "forgot" to set the mountpoint to
> legacy.

Yes, we verified that and filed a bug against LU.

> What is the correct syntax to correct this situation?

I'm not sure you really need to, but you should be able to do this:

  zfs unmount rpool/ROOT
  zfs set mountpoint=legacy rpool/ROOT

Lori
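A quick way to confirm the change took effect and nothing else moved,
assuming the layout shown earlier in the thread:

  # mountpoint should now read "legacy" for rpool/ROOT; the BE itself
  # stays mounted at /.
  zfs get -r mountpoint rpool
  # The boot environments should still list as before.
  lustatus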