[PATCH 01/12] mm/pgtable: add rcu_read_lock() and rcu_read_unlock()s

Jann Horn jannh at google.com
Fri Jun 2 07:21:35 PDT 2023


On Fri, Jun 2, 2023 at 4:50 AM Hugh Dickins <hughd at google.com> wrote:
> On Wed, 31 May 2023, Jann Horn wrote:
> > On Mon, May 29, 2023 at 8:15 AM Hugh Dickins <hughd at google.com> wrote:
> > > Before putting them to use (several commits later), add rcu_read_lock()
> > > to pte_offset_map(), and rcu_read_unlock() to pte_unmap().  Make this a
> > > separate commit, since it risks exposing imbalances: prior commits have
> > > fixed all the known imbalances, but we may find some have been missed.
> > [...]
> > > diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
> > > index c7ab18a5fb77..674671835631 100644
> > > --- a/mm/pgtable-generic.c
> > > +++ b/mm/pgtable-generic.c
> > > @@ -236,7 +236,7 @@ pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp)
> > >  {
> > >         pmd_t pmdval;
> > >
> > > -       /* rcu_read_lock() to be added later */
> > > +       rcu_read_lock();
> > >         pmdval = pmdp_get_lockless(pmd);
> > >         if (pmdvalp)
> > >                 *pmdvalp = pmdval;
> >
> > It might be a good idea to document that this series assumes that the
> > first argument to __pte_offset_map() is a pointer into a second-level
> > page table (and not a local copy of the entry) unless the containing
> > VMA is known to not be THP-eligible or the page table is detached from
> > the page table hierarchy or something like that. Currently a bunch of
> > places pass references to local copies of the entry, and while I think
> > all of these are fine, it would probably be good to at least document
> > why these are allowed to do it while other places aren't.
>
> Thanks Jann: but I have to guess that here you are showing awareness of
> an important issue that I'm simply ignorant of.
>
> I have been haunted by a dim recollection that there is one architecture
> (arm-32?) which is fussy about the placement of the pmdval being examined
> (deduces info missing from the arch-independent interface, by following
> up the address?), but I couldn't track it down when I tried.
>
> Please tell me more; or better, don't spend your time explaining to me,
> but please just send a link to a good reference on the issue.  I'll be
> unable to document what you ask there, without educating myself first.

Sorry, I think I was somewhat confused about what was going on when I
wrote that message.

After this series, __pte_offset_map() looks as follows, with added
comments describing my understanding of the semantics:

// `pmd` points to one of:
// case 1: a pmd_t stored outside a page table,
//         referencing a page table detached by the caller
// case 2: a pmd_t stored outside a page table, which the caller copied
//         from a page table in an RCU-critical section that extends
//         until at least the end of this function
// case 3: a pmd_t stored inside a page table
pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp)
{
        unsigned long __maybe_unused flags;
        pmd_t pmdval;

        // begin an RCU section; this is needed for case 3
        rcu_read_lock();
        config_might_irq_save(flags);
        // read the pmd_t.
        // if the pmd_t references a page table, this page table cannot
        // go away because:
        //  - in case 1, the caller is the main owner of the page table
        //  - in case 2, the caller started an RCU read-side critical
        //    section before it read the original pmd_t. (This
        //    pmdp_get_lockless() is just reading a copied pmd_t off
        //    the stack.)
        //  - in case 3, we started an RCU section above, before
        //    reading the pmd_t out of the page table here
        pmdval = pmdp_get_lockless(pmd);
        config_might_irq_restore(flags);

        if (pmdvalp)
                *pmdvalp = pmdval;
        if (unlikely(pmd_none(pmdval) || is_pmd_migration_entry(pmdval)))
                goto nomap;
        if (unlikely(pmd_trans_huge(pmdval) || pmd_devmap(pmdval)))
                goto nomap;
        if (unlikely(pmd_bad(pmdval))) {
                pmd_clear_bad(pmd);
                goto nomap;
        }
        return __pte_map(&pmdval, addr);
nomap:
        rcu_read_unlock();
        return NULL;
}
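
For reference, the rcu_read_unlock() that pairs with this lands in
pte_unmap(), per the commit message above. I'd guess the !CONFIG_HIGHPTE
variant ends up looking roughly like this (I haven't checked the exact
form in the patch):

static inline void pte_unmap(pte_t *pte)
{
        // ends the RCU section begun in __pte_offset_map();
        // the CONFIG_HIGHPTE variant additionally does kunmap_local(pte)
        rcu_read_unlock();
}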

Case 1 is what happens in __page_table_check_pte_clear_range(),
__split_huge_zero_page_pmd() and __split_huge_pmd_locked().
Case 2 happens in lockless page table traversal (gup_pte_range() and
perf_get_pgtable_size()).
Case 3 is normal page table traversal under mmap lock or mapping lock.
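
To make case 2 a bit more concrete, this is roughly the shape of such a
lockless walker (not the real gup_pte_range() - that one runs with
interrupts disabled rather than under an explicit rcu_read_lock(), and
the locals "pmdp" and "addr" here are made up):

        pmd_t pmdval;
        pte_t *pte;

        rcu_read_lock();
        // the caller's own RCU section covers this read from the real
        // page table; everything below only looks at the local copy
        pmdval = pmdp_get_lockless(pmdp);
        if (pmd_none(pmdval) || pmd_trans_huge(pmdval))
                goto out;
        // __pte_offset_map() gets a pointer to the stack copy (case 2);
        // the RCU section opened above, not the one taken inside
        // __pte_offset_map(), is what keeps the pointed-to page table
        // alive
        pte = pte_offset_map(&pmdval, addr);
        if (pte) {
                /* ... walk the ptes ... */
                pte_unmap(pte);
        }
out:
        rcu_read_unlock();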

I think a function like this, which can run in three different
contexts and is protected in a different way in each, is somewhat
hard to understand without comments. Though maybe I'm thinking about
it the wrong way?

Basically my point is: __pte_offset_map() normally requires that the
pmd argument points into a page table so that the rcu_read_lock() can
provide protection starting from the time the pmd_t is read from a
page table. The exceptions are cases where the caller has taken its own
precautions to ensure that the referenced page table cannot have been
freed.
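
For contrast, a case-3 caller just passes a pointer straight into the
pmd-level page table and relies entirely on the rcu_read_lock() taken
inside __pte_offset_map(), roughly like this (sketch only, locals made
up):

        pmd_t *pmd = pmd_offset(pud, addr);     // points into the page table
        spinlock_t *ptl;
        pte_t *pte;

        // pte_offset_map_lock() goes through __pte_offset_map(); the
        // rcu_read_lock() taken in there is what keeps the page table
        // alive between reading *pmd and mapping the pte page
        pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
        if (!pte)
                return;         // pmd changed under us; caller can retry
        /* ... operate on the ptes under the lock ... */
        pte_unmap_unlock(pte, ptl);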


