wan_jm
2008-Jun-27 10:01 UTC
[zfs-discuss] zfs mount failed at boot stops network services.
The procedure is as follows:
1. mkdir /tank
2. touch /tank/a
3. zpool create tank c0d0p3
   This command gives the following error message:
   cannot mount '/tank': directory is not empty
4. reboot
After the reboot, the OS can only be logged in to from the console. Is this a bug?
Matthew Gardiner
2008-Jun-27 10:42 UTC
[zfs-discuss] zfs mount failed at boot stops network services.
Hi,

Why are you creating a file in the directory tank?

Matthew

2008/6/27 wan_jm <wan_jm at 126.com>:
> The procedure is as follows:
> 1. mkdir /tank
> 2. touch /tank/a
> 3. zpool create tank c0d0p3
>    This command gives the following error message:
>    cannot mount '/tank': directory is not empty
> 4. reboot
> After the reboot, the OS can only be logged in to from the console. Is this a bug?
Mark J Musante
2008-Jun-27 13:55 UTC
[zfs-discuss] zfs mount failed at boot stops network services.
On Fri, 27 Jun 2008, wan_jm wrote:
> The procedure is as follows:
> 1. mkdir /tank
> 2. touch /tank/a
> 3. zpool create tank c0d0p3
>    This command gives the following error message:
>    cannot mount '/tank': directory is not empty
> 4. reboot
> After the reboot, the OS can only be logged in to from the console. Is this a bug?

No, I would not consider that a bug.

Regards,
markm
wan_jm
2008-Jun-27 23:30 UTC
[zfs-discuss] zfs mount failed at boot stops network services.
I just gave it a try. In my opinion, if the directory is not empty, zpool should not create the pool at all.

Let me give a scenario: suppose some day our software is running at a customer site, an engineer at the customer performs the operation above and it fails, and he does nothing more about it. A few days later the OS reboots automatically, and the machine stops providing services.

Don't you think that is a bug?
wan_jm
2008-Jun-27 23:34 UTC
[zfs-discuss] zfs mount failed at boot stops network services.
I don't understand why a failed ZFS mount stops all the other network services. Maybe it is not a bug in ZFS; in my opinion it must be a bug in SMF. Do you think so?
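For anyone diagnosing this state, the standard SMF tools show the chain. A minimal sketch of such a session, assuming the stock Solaris/Nevada service names (output omitted):

    # Explain which services are down and why; this points at the
    # failed dependency rather than at inetd itself.
    svcs -xv

    # filesystem/local goes into maintenance when 'zfs mount -a'
    # fails during boot; inspect its state and log file.
    svcs -l svc:/system/filesystem/local:default

    # After the mountpoint conflict is fixed, clear the maintenance
    # state so the dependent network services can start.
    svcadm clear svc:/system/filesystem/local:default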
Mike Gerdts
2008-Jun-28 00:28 UTC
[zfs-discuss] zfs mount failed at boot stops network services.
On Fri, Jun 27, 2008 at 6:30 PM, wan_jm <wan_jm at 126.com> wrote:
> I just gave it a try. In my opinion, if the directory is not empty,
> zpool should not create the pool at all.
>
> Let me give a scenario: suppose some day our software is running at a
> customer site, an engineer at the customer performs the operation above
> and it fails, and he does nothing more about it. A few days later the
> OS reboots automatically, and the machine stops providing services.
>
> Don't you think that is a bug?

At step 3 there was an error message that was not properly dealt with.
The fact that the system is not resilient to every misstep is not a bug.
If you removed /sbin/init, the system would be hosed even worse, and you
would have gotten no error message before the reboot.

-- 
Mike Gerdts
http://mgerdts.blogspot.com/
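One detail worth spelling out: the failed step 3 still creates the pool; only the mount fails, and it fails again at every boot. A minimal recovery sketch, assuming the stray file in /tank is disposable:

    # The pool exists even though its root dataset never mounted:
    zpool status tank

    # Move the offending contents aside, then retry the mount:
    mv /tank/a /var/tmp/a.saved
    zfs mount tank

    # Confirm it is mounted before the next reboot:
    df -h /tank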
Charles Soto
2008-Jun-29 02:30 UTC
[zfs-discuss] zfs mount failed at boot stops network services.
On 6/27/08 8:55 AM, "Mark J Musante" <mmusante at east.sun.com> wrote:
> On Fri, 27 Jun 2008, wan_jm wrote:
>> [...]
>> 4. reboot
>> After the reboot, the OS can only be logged in to from the console.
>> Is this a bug?
>
> No, I would not consider that a bug.

Why?

Charles
(to paraphrase PBS - "be more helpful"; conversely, "be less pithy")
michael schuster
2008-Jun-29 02:42 UTC
[zfs-discuss] zfs mount failed at boot stops network services.
Charles Soto wrote:
> On 6/27/08 8:55 AM, "Mark J Musante" <mmusante at east.sun.com> wrote:
>> On Fri, 27 Jun 2008, wan_jm wrote:
>>> [...] Is this a bug?
>>
>> No, I would not consider that a bug.
>
> Why?

well ... why would it be a bug?

zfs is just making sure that it's not accidentally "hiding" anything by
mounting something on a non-empty mountpoint; as you probably know,
anything that is in a directory is invisible if that directory is used as
a mountpoint for another filesystem.

zfs cannot know whether the mountpoint contains rubbish or whether the
mountpoint property is incorrect, therefore the only sensible thing to do
is to not mount an FS if the mountpoint is non-empty.

to quote Renaud:

> This is an expected behavior. filesystem/local is supposed to mount all
> ZFS filesystems. If it fails then filesystem/local goes into maintenance
> and network/inetd cannot start.

HTH
Michael
-- 
Michael Schuster http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
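The hiding behaviour Michael describes is easy to demonstrate, since zfs mount accepts -O to force an overlay mount onto a non-empty directory. A rough sketch, assuming a pool named tank whose root dataset is currently unmounted:

    touch /tank/a        # /tank is a plain directory holding one file
    zfs mount -O tank    # -O forces the overlay mount ZFS normally refuses
    ls /tank             # 'a' is now invisible under the mounted dataset
    zfs umount tank
    ls /tank             # 'a' reappears once the dataset is unmounted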
Kyle McDonald
2008-Jun-29 12:39 UTC
[zfs-discuss] zfs mount failed at boot stops network services.
michael schuster wrote:
> zfs is just making sure that it's not accidentally "hiding" anything by
> mounting something on a non-empty mountpoint; as you probably know,
> anything that is in a directory is invisible if that directory is used
> as a mountpoint for another filesystem.

Yes, but that is the opposite of decades of UNIX behavior, so it's not
surprising that it's unexpected for many people.

> zfs cannot know whether the mountpoint contains rubbish or whether the
> mountpoint property is incorrect, therefore the only sensible thing to
> do is to not mount an FS if the mountpoint is non-empty.
>
> to quote Renaud:
>
>> This is an expected behavior. filesystem/local is supposed to mount all
>> ZFS filesystems. If it fails then filesystem/local goes into maintenance
>> and network/inetd cannot start.

Shouldn't the other services really only be dependent on the system
filesystems being mounted? Or possibly on all filesystems in the 'root'
ZFS pool?

I consider it a bug if my machine doesn't boot up because one single,
non-system and non-mandatory FS has an issue and doesn't mount. The rest
of the machine should still boot and function fine.

 -Kyle
dick hoogendijk
2008-Jun-29 13:07 UTC
[zfs-discuss] zfs mount failed at boot stops network services.
On Sun, 29 Jun 2008 08:39:17 -0400
Kyle McDonald <KMcDonald at Egenera.COM> wrote:
> I consider it a bug if my machine doesn't boot up because one single,
> non-system and non-mandatory FS has an issue and doesn't mount. The
> rest of the machine should still boot and function fine.

My system has always stopped booting when a filesystem in /etc/vfstab
could not be mounted, whatever its importance to the system
(e.g. /export/home). I see no difference here.

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
++ http://nagual.nl/ + SunOS sxce snv91 ++
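For comparison, the knob on the UFS side is the "mount at boot" field in /etc/vfstab; a failure on any "yes" entry has always halted the boot the same way. An illustrative entry (device names hypothetical):

    #device to mount   device to fsck     mount point   FS type  fsck  mount    options
    #                                                            pass  at boot
    /dev/dsk/c0d0s7    /dev/rdsk/c0d0s7   /export/home  ufs      2     yes      -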
Richard Elling
2008-Jun-30 01:04 UTC
[zfs-discuss] zfs mount failed at boot stops network services.
Kyle McDonald wrote:
> Shouldn't the other services really only be dependent on the system
> filesystems being mounted? Or possibly on all filesystems in the 'root'
> ZFS pool?
>
> I consider it a bug if my machine doesn't boot up because one single,
> non-system and non-mandatory FS has an issue and doesn't mount. The
> rest of the machine should still boot and function fine.

I think Kyle might be onto something here. With ZFS it is so easy
to create file systems, one could expect many people to do so.
In the past, it was so difficult and required planning, so people
tended to be more careful about mount points.

In this new world, we don't really have a way to show which
(ZFS) file systems are critical during boot (AFAICT). However,
if we already know that a file system create failed in this manner,
we could set the "canmount" property to false. This bothers me,
just a little, because if there is such an error, it would be
propagated as another potential latent fault. OTOH, as currently
implemented, it is a different and, IMHO, more impactful latent
fault. Thoughts?
 -- richard
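The property Richard refers to exists today (canmount=on|off). A sketch of applying it by hand in the failed-create scenario from the start of the thread:

    # Prevent the dataset from being mounted by 'zfs mount -a' at
    # boot, so the non-empty mountpoint cannot drag filesystem/local
    # into maintenance:
    zfs set canmount=off tank

    # Later, after cleaning out /tank, re-enable and mount it:
    zfs set canmount=on tank
    zfs mount tank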
Matthew Gardiner
2008-Jun-30 01:30 UTC
[zfs-discuss] zfs mount failed at boot stops network services.
> I think Kyle might be onto something here. With ZFS it is so easy
> to create file systems, one could expect many people to do so.
> In the past, it was so difficult and required planning, so people
> tended to be more careful about mount points.
>
> In this new world, we don't really have a way to show which
> (ZFS) file systems are critical during boot (AFAICT). However,
> if we already know that a file system create failed in this manner,
> we could set the "canmount" property to false. This bothers me,
> just a little, because if there is such an error, it would be
> propagated as another potential latent fault. OTOH, as currently
> implemented, it is a different and, IMHO, more impactful latent
> fault. Thoughts?
> -- richard

Hi,

I would have thought that the computer would keep loading and, once fully
loaded, display a polite message stating which filesystems couldn't be
mounted at boot time. I mean, I assumed that would be a pretty obvious
way of handling something that couldn't be mounted.

Matthew
Tim
2008-Jun-30
[zfs-discuss] zfs mount failed at boot stops network services.
On Sun, Jun 29, 2008 at 8:30 PM, Matthew Gardiner <kaiwai.gardiner at gmail.com> wrote:
> I would have thought that the computer would keep loading and, once
> fully loaded, display a polite message stating which filesystems
> couldn't be mounted at boot time. I mean, I assumed that would be a
> pretty obvious way of handling something that couldn't be mounted.

And what happens if it's your root volume? Politely keep booting until it
kernel panics? Hope nothing is corrupted in the process?
michael schuster
2008-Jun-30 01:33 UTC
[zfs-discuss] zfs mount failed at boot stops network services.
Richard Elling wrote:
>> I consider it a bug if my machine doesn't boot up because one single,
>> non-system and non-mandatory FS has an issue and doesn't mount. The
>> rest of the machine should still boot and function fine.
>
> I think Kyle might be onto something here.

I tend to agree. Would it be possible to create a zfs property, e.g.
"mandatory", that, when true, causes the behaviour we're discussing, and,
when false, doesn't stop the rest of the boot process?

Michael
-- 
Michael Schuster http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
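Purely to illustrate the proposal (no such property exists; the name "mandatory" is Michael's suggestion), usage might look like:

    # Hypothetical: a mount failure on this dataset would only log a
    # warning instead of halting dependent services.
    zfs set mandatory=off tank/scratch

    # Hypothetical: boot would stop (today's behaviour) only for
    # datasets still marked mandatory=on.
    zfs get mandatory tank/scratch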
Matthew Gardiner
2008-Jun-30 01:34 UTC
[zfs-discuss] zfs mount failed at boot stops network services.
2008/6/30 Tim <tim at tcsac.net>:
> And what happens if it's your root volume? Politely keep booting until
> it kernel panics? Hope nothing is corrupted in the process?

Come on man, use some common sense!

Geeze *shakes head* forget it, you're beyond help.

Matthew
Tim
2008-Jun-30
[zfs-discuss] zfs mount failed at boot stops network services.
On Sun, Jun 29, 2008 at 8:34 PM, Matthew Gardiner <kaiwai.gardiner at gmail.com> wrote:
> Come on man, use some common sense!
>
> Geeze *shakes head* forget it, you're beyond help.

Insightful AND constructive. A++, would read again.