[Lsf-pc] [LSF/MM/BPF TOPIC] Removing GFP_NOFS

Jan Kara jack at suse.cz
Fri Jan 5 02:57:36 PST 2024


Hello,

On Thu 04-01-24 21:17:16, Matthew Wilcox wrote:
> This is primarily a _FILESYSTEM_ track topic.  All the work has already
> been done on the MM side; the FS people need to do their part.  It could
> be a joint session, but I'm not sure there's much for the MM people
> to say.
> 
> There are situations where we need to allocate memory, but cannot call
> into the filesystem to free memory.  Generally this is because we're
> holding a lock or we've started a transaction, and attempting to write
> out dirty folios to reclaim memory would result in a deadlock.
> 
> The old way to solve this problem is to specify GFP_NOFS when allocating
> memory.  This conveys little information about what is being protected
> against, and so it is hard to know when it might be safe to remove.
> It's also a reflex -- many filesystem authors use GFP_NOFS by default
> even when they could use GFP_KERNEL because there's no risk of deadlock.
> 
> The new way is to use the scoped APIs -- memalloc_nofs_save() and
> memalloc_nofs_restore().  These should be called when we start a
> transaction or take a lock that would cause a GFP_KERNEL allocation to
> deadlock.  Then just use GFP_KERNEL as normal.  The memory allocators
> can see the nofs situation is in effect and will not call back into
> the filesystem.
> 
> This results in better code within your filesystem as you don't need to
> pass around gfp flags as much, and can lead to better performance from
> the memory allocators as GFP_NOFS will not be used unnecessarily.
> 
> The memalloc_nofs APIs were introduced in May 2017, but we still have
> over 1000 uses of GFP_NOFS in fs/ today (and 200 outside fs/, which is
> really sad).  This session is for filesystem developers to talk about
> what they need to do to fix up their own filesystem, or share stories
> about how they made their filesystem better by adopting the new APIs.
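For readers who have not used the scoped API: memalloc_nofs_save() sets a flag bit on the current task and returns the previous value of that bit, memalloc_nofs_restore() puts it back, so nested scopes compose correctly. The page allocator checks the flag and masks off __GFP_FS itself. A minimal userspace model of that mechanism (the bit value and the global standing in for current->flags are illustrative; the real helpers live in include/linux/sched/mm.h):

```c
#include <assert.h>

/* Illustrative stand-in for current->flags; bit value is made up here. */
static unsigned int task_flags;
#define PF_MEMALLOC_NOFS 0x1u

static unsigned int memalloc_nofs_save(void)
{
	unsigned int old = task_flags & PF_MEMALLOC_NOFS;

	task_flags |= PF_MEMALLOC_NOFS;
	return old;
}

static void memalloc_nofs_restore(unsigned int old)
{
	task_flags = (task_flags & ~PF_MEMALLOC_NOFS) | old;
}

/* What the allocator effectively asks instead of per-call GFP_NOFS. */
static int fs_reclaim_allowed(void)
{
	return !(task_flags & PF_MEMALLOC_NOFS);
}
```

The caller starts a transaction, calls memalloc_nofs_save(), does plain GFP_KERNEL allocations inside, and restores the saved value at commit; because restore writes back the old bit rather than clearing it, an outer nofs scope survives an inner save/restore pair.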

I agree this is a worthy goal and the scoped API helped us a lot in the
ext4/jbd2 land. Still we have some legacy to deal with:

~> git grep "NOFS" fs/jbd2/ | wc -l
15
~> git grep "NOFS" fs/ext4/ | wc -l
71

Since you are asking what would help filesystems with the conversion, I
actually have one wish. The most common case is that you need to annotate
some lock that can be grabbed in the reclaim path and thus you must avoid
__GFP_FS allocations while holding it. For example, to deal with reclaim
deadlocks in the writeback paths we had to introduce wrappers like:

static inline int ext4_writepages_down_read(struct super_block *sb)
{
        percpu_down_read(&EXT4_SB(sb)->s_writepages_rwsem);
        return memalloc_nofs_save();
}

static inline void ext4_writepages_up_read(struct super_block *sb, int ctx)
{
        memalloc_nofs_restore(ctx);
        percpu_up_read(&EXT4_SB(sb)->s_writepages_rwsem);
}

When you have to do this for five locks in your filesystem it gets a bit
ugly, and it would be nice to have some generic way to deal with it. We
already have the spin_lock_irqsave() precedent we might follow (and I don't
necessarily mean its calling convention, which is a bit weird by today's
standards).
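One possible shape for such a generic helper, purely as a sketch (the *_nofs names are invented here, nothing like this exists in the kernel today), with userspace stubs standing in for the real primitives so the pattern is self-contained:

```c
#include <assert.h>

/* Userspace stubs for the kernel primitives, for illustration only. */
static unsigned int task_flags, lock_depth;
#define PF_MEMALLOC_NOFS 0x1u

static unsigned int memalloc_nofs_save(void)
{
	unsigned int old = task_flags & PF_MEMALLOC_NOFS;

	task_flags |= PF_MEMALLOC_NOFS;
	return old;
}

static void memalloc_nofs_restore(unsigned int old)
{
	task_flags = (task_flags & ~PF_MEMALLOC_NOFS) | old;
}

static void percpu_down_read(int *sem) { (void)sem; lock_depth++; }
static void percpu_up_read(int *sem)   { (void)sem; lock_depth--; }

/*
 * Hypothetical generic wrappers: take the lock, enter the nofs scope,
 * and hand back the state to restore on unlock. Returning the state
 * (GCC statement expression) avoids the by-name flags argument of the
 * spin_lock_irqsave() convention.
 */
#define percpu_down_read_nofs(sem) \
	({ percpu_down_read(sem); memalloc_nofs_save(); })

#define percpu_up_read_nofs(sem, ctx) \
	do { memalloc_nofs_restore(ctx); percpu_up_read(sem); } while (0)
```

With helpers like these the ext4_writepages_down_read()/up_read() wrappers above collapse into one-liners, and each filesystem stops open-coding the same pair.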

Even lovelier would be if we could avoid passing around the returned
reclaim state altogether, because sometimes the lock is acquired and
released in different functions, and threading the state through requires
quite a few changes and gets ugly. That would mean keeping an
fs-reclaim-forbidden counter in task_struct instead of just a flag. OTOH
then we could simply mark the lock (mutex / rwsem / whatever) as
fs-reclaim-unsafe during init and the rest would just magically happen.
That would be super-easy to use.
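A rough userspace sketch of that counter idea, under the assumptions in the paragraph above (every name here is invented): the task carries a depth counter instead of a flag, the lock carries a bit set once at init, and acquire/release bump the counter so no state ever needs to be passed between functions:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical counter; would live in task_struct in the real thing. */
static unsigned int nofs_depth;

struct sketch_rwsem {
	bool fs_reclaim_unsafe;	/* set once at init time */
	int  readers;		/* stand-in for the real lock state */
};

/* Mark the lock as fs-reclaim-unsafe during initialization. */
static void rwsem_mark_nofs(struct sketch_rwsem *sem)
{
	sem->fs_reclaim_unsafe = true;
}

static void sketch_down_read(struct sketch_rwsem *sem)
{
	sem->readers++;
	if (sem->fs_reclaim_unsafe)
		nofs_depth++;	/* nothing to return to the caller */
}

static void sketch_up_read(struct sketch_rwsem *sem)
{
	if (sem->fs_reclaim_unsafe)
		nofs_depth--;
	sem->readers--;
}

/* What the allocator would check instead of a single task flag bit. */
static bool fs_reclaim_allowed(void)
{
	return nofs_depth == 0;
}
```

Because a counter nests naturally, two such locks can be released in any order, from any function, with no context cookie: exactly the property a single save/restore flag cannot give you.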

								Honza
-- 
Jan Kara <jack at suse.com>
SUSE Labs, CR
