Toke Høiland-Jørgensen
2021-Nov-26 12:30 UTC
[PATCH v2 net-next 21/26] ice: add XDP and XSK generic per-channel statistics
Alexander Lobakin <alexandr.lobakin at intel.com> writes:

> From: Jakub Kicinski <kuba at kernel.org>
> Date: Thu, 25 Nov 2021 09:44:40 -0800
>
>> On Thu, 25 Nov 2021 18:07:08 +0100 Alexander Lobakin wrote:
>> > > This I agree with, and while I can see the layering argument for putting
>> > > them into 'ip' and rtnetlink instead of ethtool, I also worry that these
>> > > counters will simply be lost in obscurity, so I do wonder if it wouldn't
>> > > be better to accept the "layering violation" and keeping them all in the
>> > > 'ethtool -S' output?
>> >
>> > I don't think we should harm the code and the logics in favor of
>> > 'some of the users can face something'. We don't control anything
>> > related to XDP using Ethtool at all, but there is some XDP-related
>> > stuff inside iproute2 code, so for me it's even more intuitive to
>> > have them there.
>> > Jakub, may be you'd like to add something at this point?
>>
>> TBH I wasn't following this thread too closely since I saw Daniel
>> nacked it already. I do prefer rtnl xstats, I'd just report them
>> in -s if they are non-zero. But doesn't sound like we have an agreement
>> whether they should exist or not.
>
> Right, just -s is fine, if we drop the per-channel approach.

I agree that adding them to -s is fine (and that resolves my "no one
will find them" complaint as well). If it crowds the output we could
also default to only outputting a subset, and have the more detailed
statistics hidden behind a verbose switch (or even just in the JSON
output)?

>> Can we think of an approach which would make cloudflare and cilium
>> happy? Feels like we're trying to make the slightly hypothetical
>> admin happy while ignoring objections of very real users.
>
> The initial idea was to only uniform the drivers. But in general
> you are right, 10 drivers having something doesn't mean it's
> something good.

I don't think it's accurate to call the admin use case "hypothetical".
We're expending a significant effort explaining to people that XDP can
"eat" your packets, and not having any standard statistics makes this
way harder. We should absolutely cater to our "early adopters", but if
we want XDP to see wider adoption, making it "less weird" is critical!

> Maciej, I think you were talking about Cilium asking for those stats
> in Intel drivers? Could you maybe provide their exact usecases/needs
> so I'll orient myself? I certainly remember about XSK Tx packets and
> bytes.
> And speaking of XSK Tx, we have per-socket stats, isn't that enough?

IMO, as long as the packets are accounted for in the regular XDP stats,
having a whole separate set of stats only for XSK is less important.

>> Please leave the per-channel stats out. They make a precedent for
>> channel stats which should be an attribute of a channel. Working for
>> a large XDP user for a couple of years now I can tell you from my own
>> experience I've not once found them useful. In fact per-queue stats are
>> a major PITA as they crowd the output.
>
> Oh okay. My very first iterations were without this, but then I
> found most of the drivers expose their XDP stats per-channel. Since
> I didn't plan to degrade the functionality, they went that way.

I personally find the per-channel stats quite useful. One of the primary
reasons for not achieving full performance with XDP is broken
configuration of packet steering to CPUs, and having per-channel stats
is a nice way of seeing this. I can see the point about them being way
too verbose in the default output, though, and I do generally filter the
output as well when viewing them.

But see my point above about only printing a subset of the stats by
default; per-channel stats could be JSON-only, for instance?

-Toke
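
P.S. To illustrate the kind of filtering I do today, something like the
below works against 'ethtool -S' output. The counter names here are made
up for the example; every driver uses its own naming scheme, which is
exactly the problem this series is trying to address:

```shell
# Hypothetical per-channel counters as printed by 'ethtool -S <iface>';
# real names differ per driver.
sample='     rx_0_packets: 1500
     rx_0_xdp_drop: 12
     rx_0_xdp_redirect: 7
     rx_1_packets: 900
     rx_1_xdp_drop: 3
     tx_0_packets: 2000'

# Keep only the XDP-related counters.
printf '%s\n' "$sample" | grep 'xdp'

# Sum one counter across all channels to get a per-device total.
printf '%s\n' "$sample" \
    | awk -F': ' '/xdp_drop/ { total += $2 } END { print "total xdp_drop:", total }'
# → total xdp_drop: 15
```

Workable, but it only functions if you already know the driver's naming
convention, which is why a uniform, driver-independent location for
these counters would help.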