Simona Vetter
2025-Mar-07 13:09 UTC
[RFC PATCH 0/3] gpu: nova-core: add basic timer subdevice implementation
On Fri, Mar 07, 2025 at 08:32:55AM -0400, Jason Gunthorpe wrote:
> On Fri, Mar 07, 2025 at 11:28:37AM +0100, Simona Vetter wrote:
> > > I wouldn't say it is wrong. It is still the correct thing to do, and
> > > following down the normal cleanup paths is a good way to ensure the
> > > special case doesn't have bugs. The primary difference is you want to
> > > understand the device is dead and stop waiting on it faster. Drivers
> > > need to consider these things anyhow if they want resiliency against
> > > device crashes, PCI link wobbles and so on that don't involve
> > > remove().
> >
> > Might need to revisit that discussion, but Greg didn't like when we asked
> > for a pci helper to check whether the device is physically gone (at least
> > per the driver model). Hacking that in drivers is doable, but feels
> > icky.
>
> I think Greg is right here, the driver model has less knowledge than
> the driver about whether the device is alive.

Maybe I misremember, but iirc he was fairly fundamentally opposed to
trying to guess whether the hw is gone or not in the ->remove callback.
But maybe that's more from the usb world, where all the hot-remove race
conditions are handled in the subsystem and you only have to deal with
errno from calling into usb functions and unwind. So a much, much
easier situation.

> The resiliency/fast-failure issue is not just isolated to having
> observed a proper hot-unplug, but there are many classes of failure
> that cause the device HW to malfunction that a robust driver can
> detect and recover from. mlx5 attempts to do this for instance.
>
> It turns out when you deploy clusters with 800,000 NICs in them there
> are weird HW fails constantly and you have to be resilient on the SW
> side and try to recover from them when possible.
>
> So I'd say checking for a -1 read return on PCI is a sufficient
> technique for the driver to use to understand if its device is still
> present. mlx5 devices further have an interactive register operation
> "health check" that proves the device and its PCI path are alive.
>
> Failing health checks trigger recovery, which shoots down sleeps,
> cleanly destroys stuff, resets the device, and starts running
> again. IIRC this is actually done with a rdma hot unplug/plug sequence
> autonomously executed inside the driver.
>
> A driver can do a health check immediately in remove() and make a
> decision if the device is alive or not to speed up removal in the
> hostile hot unplug case.

Hm ... I guess when you get an all -1 read you check with a specific
register to make sure it's not a false positive? Since for some
registers that's a valid value. But yeah, maybe this approach is more
solid. The current C approach we have with an srcu revocable section is
definitely a least-worst attempt from a very, very bad starting point.
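As a minimal sketch of that confirmation read (the register names and
offsets below are made up; any ID register that can never legitimately
read back as all-ones would do):

#include <linux/errno.h>
#include <linux/io.h>

#define MYDEV_ID_REG     0x0 /* hypothetical: never all-ones on live hw */
#define MYDEV_STATUS_REG 0x4 /* hypothetical status register */

static bool mydev_is_present(void __iomem *mmio)
{
	/*
	 * A surprise-removed PCI device returns all-ones for every
	 * read. Since 0xffffffff is a valid value for many registers,
	 * confirm against one that can never legitimately be all-ones
	 * before concluding the device is gone.
	 */
	return readl(mmio + MYDEV_ID_REG) != 0xffffffff;
}

static int mydev_check_status(void __iomem *mmio)
{
	u32 status = readl(mmio + MYDEV_STATUS_REG);

	/* All-ones might be real data: double-check before bailing. */
	if (status == 0xffffffff && !mydev_is_present(mmio))
		return -ENODEV; /* fail fast instead of waiting it out */

	return 0;
}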
I think maybe we should also have two levels here:

- Ideal driver design, probably what you've outlined above. This will
  need some hw/driver specific thought to get the optimal design, most
  likely. This part is probably more bus and subsystem specific best
  practices documentation than things we enforce with the rust
  abstractions.

- The "at least we don't blow up with memory safety issues" bare
  minimum that the rust abstractions should guarantee. So revocable and
  friends.

And I think the latter safety fallback does not prevent you from doing
the full fancy design, e.g. with revocable resources the revoke only
happens after your explicitly-coded ->remove() callback has finished.
Which means you still have full access to the hw like anywhere else.

Does this sound like a possible conclusion of this thread, or do we
need to keep digging?

Also, now that I look at this problem as a two-level issue, I think drm
is actually a lot better than what I explained. If you clean up driver
state properly in ->remove (or with stack automatic cleanup functions
that run before all the mmio/irq/whatever stuff disappears), then we
are largely there already with being able to fully quiesce driver state
enough to make sure no new requests can sneak in.

As an example, drm_atomic_helper_shutdown does a full kernel
modesetting commit across all resources, which guarantees that all
preceding in-flight commits have finished (or timed out; we should
probably be a bit smarter here so that the timeouts are shorter when
the hw is gone for good). And if you do that after drm_dev_unplug then
nothing new should have been able to sneak in, I think, at least
conceptually. In practice we might have a bunch of funny races that are
worth plugging, I guess.

Cheers, Sima
--
Simona Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
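To make the ordering Sima describes concrete, a minimal sketch (the
mydrv_* names are hypothetical; drm_dev_unplug(),
drm_atomic_helper_shutdown() and drm_dev_enter()/drm_dev_exit() are the
existing DRM helpers, the latter pair being the srcu revocable section
mentioned above):

#include <drm/drm_atomic_helper.h>
#include <drm/drm_drv.h>
#include <linux/pci.h>

static void mydrv_pci_remove(struct pci_dev *pdev)
{
	struct drm_device *drm = pci_get_drvdata(pdev);

	/*
	 * Mark the device unplugged first: drm_dev_enter() fails from
	 * here on, so no new request can sneak in past this point.
	 */
	drm_dev_unplug(drm);

	/*
	 * Full modesetting commit that disables everything; this also
	 * waits for (or times out on) preceding in-flight commits.
	 */
	drm_atomic_helper_shutdown(drm);
}

/* Hardware access paths are wrapped in the srcu revocable section: */
static void mydrv_poke_hw(struct drm_device *drm)
{
	int idx;

	if (!drm_dev_enter(drm, &idx))
		return; /* device is gone, skip the mmio */
	/* ... mmio access here is safe against unplug ... */
	drm_dev_exit(idx);
}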
Jason Gunthorpe
2025-Mar-07 14:55 UTC
[RFC PATCH 0/3] gpu: nova-core: add basic timer subdevice implementation
On Fri, Mar 07, 2025 at 02:09:12PM +0100, Simona Vetter wrote:
> > A driver can do a health check immediately in remove() and make a
> > decision if the device is alive or not to speed up removal in the
> > hostile hot unplug case.
>
> Hm ... I guess when you get an all -1 read you check with a specific
> register to make sure it's not a false positive? Since for some
> registers that's a valid value.

Yes. mlx5 has HW designed to support this, but I imagine on most
devices you could find an ID register or something that won't be -1.

> - The "at least we don't blow up with memory safety issues" bare
>   minimum that the rust abstractions should guarantee. So revocable
>   and friends.

I still really dislike revocable because it imposes a cost that is
unnecessary.

> And I think the latter safety fallback does not prevent you from doing
> the full fancy design, e.g. with revocable resources the revoke only
> happens after your explicitly-coded ->remove() callback has finished.
> Which means you still have full access to the hw like anywhere else.

Yes, if you use rust bindings with something like RDMA then I would
expect that by the time remove is done everything is cleaned up and all
the revocable stuff was useless and never used.

This is why I dislike revoke so much. It is adding a bunch of garbage
all over the place that is *never used* if the driver is working
correctly. I believe it is much better to runtime-check that the driver
is correct and not burden the API design with this. Giving people these
features will only encourage them to write wrong drivers.

This is not even a new idea: devm introduced automatic lifetime
management into the kernel, and I've sat in presentations about how
devm has all sorts of bug classes because of misuse. :\

> Does this sound like a possible conclusion of this thread, or do we
> need to keep digging?

IDK, I think this should be socialized more. It is important as it
affects all drivers from here on out, and it is radically different
from how the kernel works today.

> Also, now that I look at this problem as a two-level issue, I think
> drm is actually a lot better than what I explained. If you clean up
> driver state properly in ->remove (or with stack automatic cleanup
> functions that run before all the mmio/irq/whatever stuff disappears),
> then we are largely there already with being able to fully quiesce
> driver state enough to make sure no new requests can sneak in.

That is the typical subsystem design!

Thanks,
Jason
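As a rough sketch, the health-check-first remove() flow discussed in
this thread might look like the following (every mydrv_* name is
hypothetical; only the shape of the flow matters):

#include <linux/pci.h>

struct mydrv {
	void __iomem *mmio;
	bool wedged; /* device stopped answering; fail fast */
};

static bool mydrv_health_check(struct mydrv *drv);
static void mydrv_stop_new_requests(struct mydrv *drv);
static void mydrv_drain_inflight(struct mydrv *drv);
static void mydrv_release_resources(struct mydrv *drv);

static void mydrv_remove(struct pci_dev *pdev)
{
	struct mydrv *drv = pci_get_drvdata(pdev);

	/*
	 * One health check up front decides which path teardown takes:
	 * if the device no longer answers, mark it wedged so that
	 * everything below skips or shortens its waits on the hw.
	 */
	if (!mydrv_health_check(drv))
		drv->wedged = true;

	mydrv_stop_new_requests(drv); /* quiesce: nothing new gets in */
	mydrv_drain_inflight(drv);    /* short timeouts if drv->wedged */
	mydrv_release_resources(drv); /* the normal, ordered cleanup */
}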