[PATCH 1/2] mm: Retry migration earlier upon refcount mismatch

David Hildenbrand david at redhat.com
Mon Aug 12 02:30:17 PDT 2024


On 12.08.24 07:35, Dev Jain wrote:
> 
> On 8/11/24 14:38, David Hildenbrand wrote:
>> On 11.08.24 08:06, Dev Jain wrote:
>>>
>>> On 8/11/24 00:22, David Hildenbrand wrote:
>>>> On 10.08.24 20:42, Dev Jain wrote:
>>>>>
>>>>> On 8/9/24 19:17, David Hildenbrand wrote:
>>>>>> On 09.08.24 12:31, Dev Jain wrote:
>>>>>>> As already being done in __migrate_folio(), wherein we back off if
>>>>>>> the folio refcount is wrong, make this check during the unmapping
>>>>>>> phase, upon the failure of which the original state of the PTEs will
>>>>>>> be restored and the folio lock will be dropped via
>>>>>>> migrate_folio_undo_src(); any racing thread will make progress and
>>>>>>> migration will be retried.
>>>>>>>
>>>>>>> Signed-off-by: Dev Jain <dev.jain at arm.com>
>>>>>>> ---
>>>>>>>      mm/migrate.c | 9 +++++++++
>>>>>>>      1 file changed, 9 insertions(+)
>>>>>>>
>>>>>>> diff --git a/mm/migrate.c b/mm/migrate.c
>>>>>>> index e7296c0fb5d5..477acf996951 100644
>>>>>>> --- a/mm/migrate.c
>>>>>>> +++ b/mm/migrate.c
>>>>>>> @@ -1250,6 +1250,15 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
>>>>>>>          }
>>>>>>>            if (!folio_mapped(src)) {
>>>>>>> +        /*
>>>>>>> +         * Someone may have changed the refcount and may be sleeping
>>>>>>> +         * on the folio lock. In case of refcount mismatch, bail out,
>>>>>>> +         * let the system make progress and retry.
>>>>>>> +         */
>>>>>>> +        struct address_space *mapping = folio_mapping(src);
>>>>>>> +
>>>>>>> +        if (folio_ref_count(src) != folio_expected_refs(mapping, src))
>>>>>>> +            goto out;
>>>>>>
>>>>>> This really seems to be the latest point where we can "easily" back
>>>>>> off and unlock the source folio -- in this function :)
>>>>>>
>>>>>> I wonder if we should be smarter in the migrate_pages_batch() loop
>>>>>> when we start the actual migrations via migrate_folio_move(): if we
>>>>>> detect that a folio has unexpected references *and* it has waiters
>>>>>> (PG_waiters), back off then and retry the folio later. If it only has
>>>>>> unexpected references, just keep retrying: no waiters -> nobody is
>>>>>> waiting for the lock to make progress.
>>>>>
>>>>>
>>>>> The patch currently retries migration irrespective of the reason for
>>>>> the refcount change.
>>>>>
>>>>> If you are suggesting that we break the retrying according to two
>>>>> conditions:
>>>>
>>>> That's not what I am suggesting ...
>>>>
>>>>>
>>>>>
>>>>>> This really seems to be the latest point where we can "easily" back
>>>>>> off and unlock the source folio -- in this function :)
>>>>>> For example, when migrate_folio_move() fails with -EAGAIN, check if
>>>>>> there are waiters (PG_waiter?) and undo+unlock to try again later.
>>>>>
>>>>>
>>>>> Currently, on -EAGAIN, migrate_folio_move() returns without undoing
>>>>> src and dst; even if we were to fall
>>>>
>>>> ...
>>>>
>>>> I am wondering if we should detect here if there are waiters and undo
>>>> src+dst.
>>>
>>> After undoing src+dst, which restores the PTEs, how are you going to
>>> set the PTEs to migration again? That is being done through
>>> migrate_folio_unmap(), and the loops of _unmap() and _move() are
>>> different. Or am I missing something...
>>
>> Again, no expert on the code, but it would mean that if we detect that
>> there are waiters, we would undo src+dst and add them to ret_folios,
>> similar to what we do in "Cleanup remaining folios" at the end of
>> migrate_pages_batch()?
>>
>> So instead of retrying migration of that folio, just give it up
>> immediately and retry again later.
>>
>> Of course, this means that (without further modifications to that
>> function), we would leave retrying these folios to the caller, such as
>> in migrate_pages_sync(), where we move ret_folios to the tail of
>> "folios" and retry migration.
> 
> So IIUC, you are saying to change the return value in
> __folio_migrate_mapping(), so that when move_to_new_folio() fails in
> migrate_folio_move(), we end up in the retrying loop of _sync(), which
> calls _batch() in synchronous mode. Here, we will have to make a change
> to decide how much we want to retry?

So essentially, instead of checking for "unexpected references" and 
backing off once at the beginning (what you do in this patch), we would 
*not* add new checks for "unexpected references" and would not fail 
early in that case.

Instead, we would continuously check whether there are waiters, and if 
there are, we back off completely (-> unlock) instead of retrying 
something that cannot possibly make progress.

For "unexpected references" it can make sense to just retry immediately, 
because these might just be speculative references or short-term 
references that will go away soon.

For "unexpected reference with waiters" (or just "waiters" which should 
be the same because "waiters" should imply "unexpected references"), 
it's different as you discovered.
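
Just to make that concrete, here is a completely untested sketch of what 
the back-off could look like in the migrate_pages_batch() move loop. 
folio_test_waiters() is meant as the PG_waiters check, 
"defer_src_and_dst()" is only a placeholder for the existing 
migrate_folio_undo_src()/migrate_folio_undo_dst() + move-to-ret_folios 
sequence, and the exact migrate_folio_move() call and loop variables are 
just assumed from the current code:

        rc = migrate_folio_move(put_new_folio, private, folio, dst,
                                mode, reason, ret_folios);
        if (rc == -EAGAIN) {
                if (folio_test_waiters(folio)) {
                        /*
                         * Somebody sleeps on the folio lock and cannot drop
                         * its reference until we unlock: retrying with the
                         * lock held cannot make progress, so undo src+dst,
                         * unlock and leave the folio on ret_folios for a
                         * later retry.
                         */
                        defer_src_and_dst(folio, dst, ret_folios);
                } else {
                        /*
                         * Unexpected references without waiters are likely
                         * speculative/short-term; keep retrying this folio.
                         */
                        nr_retry_pages += folio_nr_pages(folio);
                }
        }

Whether that lives in a helper or inline in the loop is a detail, of 
course.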

What we do with these "somebody else is waiting to make progress" pages 
is indeed a good question -- Ying seems to have some ideas on how to 
optimize retrying further.

-- 
Cheers,

David / dhildenb



