Dave Chinner
2023-Jun-23 06:12 UTC
[PATCH 02/29] mm: vmscan: introduce some helpers for dynamically allocating shrinker
On Thu, Jun 22, 2023 at 04:53:08PM +0800, Qi Zheng wrote:
> Introduce some helpers for dynamically allocating shrinker instance,
> and their uses are as follows:
>
> 1. shrinker_alloc_and_init()
>
>    Used to allocate and initialize a shrinker instance, the priv_data
>    parameter is used to pass the pointer of the previously embedded
>    structure of the shrinker instance.
>
> 2. shrinker_free()
>
>    Used to free the shrinker instance when the registration of shrinker
>    fails.
>
> 3. unregister_and_free_shrinker()
>
>    Used to unregister and free the shrinker instance, and the kfree()
>    will be changed to kfree_rcu() later.
>
> Signed-off-by: Qi Zheng <zhengqi.arch at bytedance.com>
> ---
>  include/linux/shrinker.h | 12 ++++++++++++
>  mm/vmscan.c              | 35 +++++++++++++++++++++++++++++++++++
>  2 files changed, 47 insertions(+)
>
> diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
> index 43e6fcabbf51..8e9ba6fa3fcc 100644
> --- a/include/linux/shrinker.h
> +++ b/include/linux/shrinker.h
> @@ -107,6 +107,18 @@ extern void unregister_shrinker(struct shrinker *shrinker);
>  extern void free_prealloced_shrinker(struct shrinker *shrinker);
>  extern void synchronize_shrinkers(void);
>
> +typedef unsigned long (*count_objects_cb)(struct shrinker *s,
> +					   struct shrink_control *sc);
> +typedef unsigned long (*scan_objects_cb)(struct shrinker *s,
> +					  struct shrink_control *sc);
> +
> +struct shrinker *shrinker_alloc_and_init(count_objects_cb count,
> +					  scan_objects_cb scan, long batch,
> +					  int seeks, unsigned flags,
> +					  void *priv_data);
> +void shrinker_free(struct shrinker *shrinker);
> +void unregister_and_free_shrinker(struct shrinker *shrinker);

Hmmmm. Not exactly how I envisioned this to be done. Ok, this will
definitely work, but I don't think it is an improvement. It's certainly
not what I was thinking of when I suggested dynamically allocating
shrinkers.

The main issue is that this doesn't simplify the API - it expands it
and creates a minefield of old and new functions that have to be used
in exactly the right order for the right things to happen.

What I was thinking of was moving the entire shrinker setup code over
to the prealloc/register_prepared() algorithm, where the setup is
already separated from the activation of the shrinker.

That is, we start by renaming prealloc_shrinker() to shrinker_alloc(),
adding a flags field to tell it everything that it needs to alloc
(i.e. the NUMA/MEMCG_AWARE flags) and having it return a fully
allocated shrinker ready to register. Initially this also contains an
internal flag to say the shrinker was allocated, so that
unregister_shrinker() knows to free it.

The caller then fills out the shrinker functions, seeks, etc. just
like they do now, and then calls register_shrinker_prepared() to make
the shrinker active when it wants to turn it on.

When it is time to tear down the shrinker, no API needs to change.
unregister_shrinker() does all the shutdown and frees all the internal
memory like it does now. If the shrinker is also marked as allocated,
it frees the shrinker via RCU, too.

Once everything is converted to this API, we then remove
register_shrinker(), rename register_shrinker_prepared() to
shrinker_register(), rename unregister_shrinker() to
shrinker_unregister(), get rid of the internal "allocated" flag and
always free the shrinker.
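To make that concrete, here's a rough, untested sketch of the
transitional step. SHRINKER_ALLOCATED is just a placeholder name for
the internal "allocated" flag, an rcu_head member in struct shrinker
is assumed to be added by the conversion, and the existing prealloc
internals are elided:

	struct shrinker *shrinker_alloc(unsigned int flags, const char *fmt, ...)
	{
		struct shrinker *shrinker;

		shrinker = kzalloc(sizeof(*shrinker), GFP_KERNEL);
		if (!shrinker)
			return NULL;

		/*
		 * The work prealloc_shrinker() does today goes here:
		 * set up the debugfs name from fmt and allocate the
		 * NUMA/memcg-aware deferred counters according to the
		 * flags that were passed in.
		 */

		shrinker->flags = flags | SHRINKER_ALLOCATED;
		return shrinker;
	}

	void unregister_shrinker(struct shrinker *shrinker)
	{
		/*
		 * Existing teardown: unhook from the shrinker list,
		 * free the internal nr_deferred counters, etc.
		 */

		/* New: free dynamically allocated shrinkers via RCU. */
		if (shrinker->flags & SHRINKER_ALLOCATED)
			kfree_rcu(shrinker, rcu);
	}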
At the end of the patchset, every shrinker should be set up in a manner
like this:

	sb->shrinker = shrinker_alloc(SHRINKER_MEMCG_AWARE|SHRINKER_NUMA_AWARE,
				      "sb-%s", type->name);
	if (!sb->shrinker)
		return -ENOMEM;

	sb->shrinker->count_objects = super_cache_count;
	sb->shrinker->scan_objects = super_cache_scan;
	sb->shrinker->batch = 1024;
	sb->shrinker->private = sb;
	.....
	shrinker_register(sb->shrinker);

And teardown is just a call to shrinker_unregister(sb->shrinker) as it
is now.

i.e. the entire shrinker registration API is now just three functions,
down from the current four, and much simpler than the seven functions
this patch set results in...

The other advantage of this is that it will break all the existing out
of tree code and third party modules using the old API, and they will
no longer work with a kernel using lockless slab shrinkers. They need
to break (both at the source and binary levels) to stop bad things from
happening due to using unconverted shrinkers in the new setup.

-Dave.

-- 
Dave Chinner
david at fromorbit.com