[PATCH 4/6] xfs: use memalloc_nofs_{save,restore} instead of memalloc_noio*

Michal Hocko mhocko at kernel.org
Mon Feb 6 23:17:25 PST 2017


On Tue 07-02-17 09:51:50, Dave Chinner wrote:
> On Mon, Feb 06, 2017 at 07:47:43PM +0100, Michal Hocko wrote:
> > On Mon 06-02-17 10:32:37, Darrick J. Wong wrote:
[...]
> > > I prefer to keep the "...yet we are likely to be under GFP_NOFS..."
> > > wording of the old comment because it captures the uncertainty of
> > > whether or not we actually are already under NOFS.  If someone actually
> > > has audited this code well enough to know for sure then yes let's change
> > > the comment, but I haven't gone that far.
> > 
> > I believe we can drop the memalloc_nofs_save then as well, because
> > either we are called from a potentially dangerous context and thus are
> > already in a nofs scope, or we do not need the protection at all.
> 
> No, absolutely not. "Belief" is not a sufficient justification for
> removing low level deadlock avoidance infrastructure. This code
> needs to remain in _xfs_buf_map_pages() until a full audit of the
> caller paths is done and we're 100% certain that there are no
> lurking deadlocks.

Exactly. I was actually referring to "If someone actually has audited
this code" above... so I definitely do not want to justify anything
based on belief alone.
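
For context, the scope in question is the vm_map_ram() retry loop in
_xfs_buf_map_pages(). A minimal sketch of the converted code, assuming
the memalloc_nofs_save/restore interface from this series (the noio
variants replaced one for one; not the literal patch hunk):

	unsigned int nofs_flag;

	/*
	 * vm_map_ram() can allocate with GFP_KERNEL internally, yet we
	 * may be called with locks held (and are likely under GFP_NOFS
	 * already), so open a nofs scope to keep reclaim from recursing
	 * into the filesystem.
	 */
	nofs_flag = memalloc_nofs_save();
	do {
		bp->b_addr = vm_map_ram(bp->b_pages, bp->b_page_count,
					-1, PAGE_KERNEL);
		if (bp->b_addr)
			break;
		vm_unmap_aliases();
	} while (retried++ <= 1);
	memalloc_nofs_restore(nofs_flag);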

> For example, I'm pretty sure we can call into _xfs_buf_map_pages()
> outside of a transaction context but with an inode ILOCK held
> exclusively. If we then recurse into memory reclaim and try to run a
> transaction during reclaim, we have an inverted ILOCK vs transaction
> locking order. i.e. we are not allowed to call xfs_trans_reserve()
> with an ILOCK held as that can deadlock the log:  log full, locked
> inode pins tail of log, inode cannot be flushed because ILOCK is
> held by caller waiting for log space to become available....
> 
> i.e. there are certain situations where holding a ILOCK is a
> deadlock vector. See xfs_lock_inodes() for an example of the lengths
> we go to avoid ILOCK based log deadlocks like this...

Thanks for the reference. This is really helpful!
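
To spell the inversion out for myself (illustrative only, these are not
audited call sites):

	xfs_ilock(ip, XFS_ILOCK_EXCL)		<- no transaction context
	  _xfs_buf_map_pages()
	    vm_map_ram()			<- allocates GFP_KERNEL
	      direct reclaim
	        xfs_trans_alloc()
	          xfs_trans_reserve()		<- waits for log space

If the log is full and its tail is pinned by the inode we have locked,
the inode cannot be flushed until the ILOCK is released, and the ILOCK
is not released until log space becomes available. The nofs scope in
_xfs_buf_map_pages() breaks this cycle by preventing the recursion into
filesystem reclaim in the first place.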

-- 
Michal Hocko
SUSE Labs


