Marc Bevand
2007-Oct-03 08:10 UTC
[zfs-discuss] About bug 6486493 (ZFS boot incompatible with the SATA framework)
I would like to test ZFS boot on my home server, but according to bug 6486493, ZFS boot cannot be used if the disks are attached to a SATA controller handled by a driver using the new SATA framework (which is my case: the si3124 driver). I have never heard of anyone having successfully used ZFS boot with the SATA framework, so I assume this bug is real and that everybody out there playing with ZFS boot is doing so with PATA controllers, SATA controllers operating in compatibility mode, or SCSI controllers, right?

-marc
Eric Schrock
2007-Oct-03 16:14 UTC
[zfs-discuss] About bug 6486493 (ZFS boot incompatible with the SATA framework)
This bug was rendered moot via 6528732 in build snv_68 (and s10_u5). We now store physical device paths with the vnodes, so even though the SATA framework doesn't correctly support open by devid in early boot, we can fall back to the device path just fine. ZFS root works great on Thumper, which uses the marvell SATA driver.

- Eric

--
Eric Schrock, Solaris Kernel Development        http://blogs.sun.com/eschrock
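The fallback Eric describes boils down to something like the sketch below (schematic kernel-style C: the function name and argument list are invented for illustration and this is not the actual vdev_disk.c source; only ldi_open_by_devid(9F) and ldi_open_by_name(9F) are real, documented LDI interfaces):

    /*
     * Schematic sketch of the "try devid, fall back to the stored
     * physical path" logic described above.  Not the actual
     * vdev_disk.c source: the function name and arguments are made
     * up; ldi_open_by_devid(9F) and ldi_open_by_name(9F) are the
     * real, documented LDI interfaces.
     */
    #include <sys/types.h>
    #include <sys/file.h>
    #include <sys/errno.h>
    #include <sys/cred.h>
    #include <sys/sunddi.h>
    #include <sys/sunldi.h>

    static int
    open_boot_vdev(ddi_devid_t devid, char *minor, char *physpath,
        cred_t *cr, ldi_ident_t li, ldi_handle_t *lhp)
    {
        int error = ENODEV;

        /* Preferred: open by devid, which survives recabling. */
        if (devid != NULL)
            error = ldi_open_by_devid(devid, minor,
                FREAD | FWRITE, cr, lhp, li);

        /*
         * Early in boot the SATA framework cannot resolve devids
         * (bug 6486493), so fall back to the physical device path
         * stored with the vnode.  This works as long as the disk
         * is still on the same controller port.
         */
        if (error != 0 && physpath != NULL)
            error = ldi_open_by_name(physpath,
                FREAD | FWRITE, cr, lhp, li);

        return (error);
    }

The ordering matters because a devid identifies the disk itself and survives recabling, while a physical path identifies a controller port, which is why the fallback holds only as long as the boot disks stay on the same ports.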
Ivan Wang
2007-Oct-04 12:22 UTC
[zfs-discuss] About bug 6486493 (ZFS boot incompatible with the SATA framework)
> This bug was rendered moot via 6528732 in build snv_68 (and s10_u5).
> We now store physical device paths with the vnodes, so even though the
> SATA framework doesn't correctly support open by devid in early boot,
> we can fall back to the device path just fine.

But if I read it right, there is still a problem in the SATA framework (ldi_open_by_devid failing), right? If this problem is framework-wide, it might just bite back some time in the future.

Ivan.
Eric Schrock
2007-Oct-04 16:54 UTC
[zfs-discuss] About bug 6486493 (ZFS boot incompatible with the SATA framework)
On Thu, Oct 04, 2007 at 05:22:58AM -0700, Ivan Wang wrote:
> But if I read it right, there is still a problem in the SATA framework
> (ldi_open_by_devid failing), right? If this problem is framework-wide,
> it might just bite back some time in the future.

Yes, there is still a bug in the SATA framework, in that ldi_open_by_devid() doesn't work early in boot. Opening by device path works so long as you don't recable your boot devices. If we had open by devid working in early boot, then this wouldn't be a problem.

- Eric

--
Eric Schrock, Solaris Kernel Development        http://blogs.sun.com/eschrock
Robert Milkowski
2007-Oct-05 07:52 UTC
[zfs-discuss] About bug 6486493 (ZFS boot incompatible with the SATA framework)
Hello Eric,

Thursday, October 4, 2007, 5:54:06 PM, you wrote:

ES> Yes, there is still a bug in the SATA framework, in that
ES> ldi_open_by_devid() doesn't work early in boot. Opening by device path
ES> works so long as you don't recable your boot devices. If we had open by
ES> devid working in early boot, then this wouldn't be a problem.

Even if someone re-cables SATA disks, couldn't we fall back to "read the ZFS label from all available disks, find our pool, and import it"?

--
Best regards,
Robert Milkowski                          mailto:rmilkowski at task.gda.pl
                                          http://milek.blogspot.com
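A sketch of what that last-resort scan might look like follows (user-level C for illustration; the /dev/rdsk walk and the naive byte-search are assumptions for the example, not OpenSolaris code). For context, each ZFS vdev carries four 256 KB labels, two at the front of the device and two at the end, each holding an nvlist-encoded copy of the pool configuration:

    /*
     * Sketch of the "read the ZFS label from every disk and look for
     * our pool" fallback.  User-level C for illustration: the
     * /dev/rdsk scan and the naive byte-search stand in for real
     * nvlist parsing of the label's pool configuration.
     */
    #include <sys/types.h>
    #include <dirent.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define VDEV_LABEL_SIZE (256 * 1024)  /* each of the 4 vdev labels */

    /*
     * Crude stand-in for real label parsing: scan the raw label bytes
     * for the pool name.  A real implementation would unpack the
     * nvlist-encoded config and compare the pool GUID instead.
     */
    static int
    label_names_pool(const char *label, size_t len, const char *pool)
    {
        size_t i, n = strlen(pool);

        for (i = 0; n > 0 && i + n <= len; i++)
            if (memcmp(label + i, pool, n) == 0)
                return (1);
        return (0);
    }

    static int
    disk_belongs_to_pool(const char *dev, const char *pool)
    {
        char *buf;
        int fd, match = 0;

        if ((fd = open(dev, O_RDONLY)) == -1)
            return (0);
        if ((buf = malloc(VDEV_LABEL_SIZE)) != NULL) {
            /* Label 0 sits at the very front of the device. */
            if (read(fd, buf, VDEV_LABEL_SIZE) == VDEV_LABEL_SIZE)
                match = label_names_pool(buf, VDEV_LABEL_SIZE, pool);
            free(buf);
        }
        (void) close(fd);
        return (match);
    }

    int
    main(int argc, char **argv)
    {
        struct dirent *de;
        char path[1024];
        DIR *dir;

        if (argc != 2) {
            (void) fprintf(stderr, "usage: %s <pool>\n", argv[0]);
            return (1);
        }
        if ((dir = opendir("/dev/rdsk")) == NULL)
            return (1);
        while ((de = readdir(dir)) != NULL) {
            (void) snprintf(path, sizeof (path), "/dev/rdsk/%s",
                de->d_name);
            if (disk_belongs_to_pool(path, argv[1]))
                (void) printf("%s looks like part of %s\n",
                    path, argv[1]);
        }
        (void) closedir(dir);
        return (0);
    }

This is roughly what 'zpool import' already does when searching for pools without a cache file, so the suggestion amounts to reusing that scan as the boot-time last resort.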
Pawel Jakub Dawidek
2007-Oct-08 08:45 UTC
[zfs-discuss] About bug 6486493 (ZFS boot incompatible with the SATA framework)
On Fri, Oct 05, 2007 at 08:52:17AM +0100, Robert Milkowski wrote:
> Even if someone re-cables SATA disks, couldn't we fall back to "read
> the ZFS label from all available disks, find our pool, and import it"?

FreeBSD's GEOM storage framework implements a method called 'taste'. When a new disk arrives (or is closed after its last write), GEOM calls the taste methods of all storage subsystems, and each subsystem can try to read its own metadata. This is basically how autoconfiguration happens in FreeBSD for things like software RAID1/RAID3/stripe and others.

It is much simpler than what ZFS does:

1. read /etc/zfs/zpool.cache
2. open components by name
3. if there is no such disk, go to 5
4. verify the diskid (not all disks have an ID)
5. if the diskid doesn't match, try to look the disk up by ID

If there are a few hundred disks, it may slow booting down, but it was never a real problem in FreeBSD.

--
Pawel Jakub Dawidek                       http://www.wheel.pl
pjd at FreeBSD.org                          http://www.FreeBSD.org
FreeBSD committer                         Am I Evil? Yes, I Am!
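For reference, a taste method has roughly this shape (a schematic sketch, not code from any real GEOM class: the class name, magic string, and metadata placement are made up, while g_new_geomf(), g_new_consumer(), g_attach(), g_access(), and g_read_data() are the real FreeBSD GEOM interfaces):

    /*
     * Schematic sketch of a GEOM class with a "taste" method.  The
     * class name, magic string, and metadata placement are invented
     * for the example; the GEOM calls are the real FreeBSD API.
     */
    #include <sys/param.h>
    #include <sys/systm.h>
    #include <geom/geom.h>

    #define G_EXAMPLE_MAGIC "EXAMPLE v1"  /* hypothetical on-disk magic */

    static void
    g_example_taste_orphan(struct g_consumer *cp)
    {
        /* Nothing should orphan us while we are tasting. */
    }

    static struct g_geom *
    g_example_taste(struct g_class *mp, struct g_provider *pp,
        int flags __unused)
    {
        struct g_consumer *cp;
        struct g_geom *gp;
        u_char *buf;
        int error;

        g_topology_assert();

        /* Temporary geom and consumer, used only to read metadata. */
        gp = g_new_geomf(mp, "example:taste");
        gp->orphan = g_example_taste_orphan;
        cp = g_new_consumer(gp);
        if (g_attach(cp, pp) == 0) {
            if (g_access(cp, 1, 0, 0) == 0) {
                /*
                 * Subsystems conventionally keep metadata in the
                 * provider's last sector.  g_read_data() may
                 * sleep, so drop the topology lock around it.
                 */
                g_topology_unlock();
                buf = g_read_data(cp,
                    pp->mediasize - pp->sectorsize,
                    pp->sectorsize, &error);
                g_topology_lock();
                (void) g_access(cp, -1, 0, 0);
                if (buf != NULL) {
                    if (memcmp(buf, G_EXAMPLE_MAGIC,
                        sizeof (G_EXAMPLE_MAGIC) - 1) == 0) {
                        /* Our metadata: real setup goes here. */
                    }
                    g_free(buf);
                }
            }
            g_detach(cp);
        }
        g_destroy_consumer(cp);
        g_destroy_geom(gp);
        return (NULL);  /* NULL: this sketch never claims the provider */
    }

    static struct g_class g_example_class = {
        .name = "EXAMPLE",
        .version = G_VERSION,
        .taste = g_example_taste,
    };

    DECLARE_GEOM_CLASS(g_example_class, g_example);

The appeal of taste is that discovery is driven by the disk showing up, not by a cached list of device paths, so recabling is a non-event.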
Robert Milkowski
2007-Oct-08 21:39 UTC
[zfs-discuss] About bug 6486493 (ZFS boot incompatible with the SATA framework)
Hello Pawel,

Monday, October 8, 2007, 9:45:01 AM, you wrote:

PJD> FreeBSD's GEOM storage framework implements a method called 'taste'.
PJD> When a new disk arrives (or is closed after its last write), GEOM calls
PJD> the taste methods of all storage subsystems, and each subsystem can try
PJD> to read its own metadata.

I haven't done any benchmarks, but I would say zpool.cache could possibly greatly reduce boot times, especially in a SAN environment. Using devids is also a good idea; again, you don't have to scan all disks. Reading all disks and trying to construct the pool from them is just meant as a last-chance mechanism.

--
Best regards,
Robert                                    mailto:rmilkowski at task.gda.pl
                                          http://milek.blogspot.com