[PATCH 1/3] mm,memory_hotplug: export mhp min alignment

Max Gurtovoy mgurtovoy at nvidia.com
Mon Jun 21 09:11:12 PDT 2021


Hi David,

Do we have a conclusion for this series?

Is the suggestion below acceptable to the maintainers?

I would like to send a new series before the 5.14 merge window closes.

On 6/3/2021 1:52 PM, Max Gurtovoy wrote:
>
> On 6/2/2021 3:14 PM, David Hildenbrand wrote:
>> On 02.06.21 13:10, Max Gurtovoy wrote:
>>> Hotplugged memory has alignment restrictions. E.g., all operations
>>> smaller than a sub-section are disallowed, and operations smaller than
>>> a section are only allowed with SPARSEMEM_VMEMMAP. Export the alignment
>>> restrictions for mhp users.
>>>
>>> Signed-off-by: Max Gurtovoy <mgurtovoy at nvidia.com>
>>> ---
>>>   include/linux/memory_hotplug.h |  5 +++++
>>>   mm/memory_hotplug.c            | 33 +++++++++++++++++++--------------
>>>   2 files changed, 24 insertions(+), 14 deletions(-)
>>>
>>> diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
>>> index 28f32fd00fe9..c55a9049b11e 100644
>>> --- a/include/linux/memory_hotplug.h
>>> +++ b/include/linux/memory_hotplug.h
>>> @@ -76,6 +76,7 @@ struct mhp_params {
>>>  
>>>   bool mhp_range_allowed(u64 start, u64 size, bool need_mapping);
>>>   struct range mhp_get_pluggable_range(bool need_mapping);
>>> +unsigned long mhp_get_min_align(void);
>>>  
>>>   /*
>>>    * Zone resizing functions
>>> @@ -248,6 +249,10 @@ void mem_hotplug_done(void);
>>>       ___page;                \
>>>   })
>>>  
>>> +static inline unsigned long mhp_get_min_align(void)
>>> +{
>>> +    return 0;
>>> +}
>>>   static inline unsigned zone_span_seqbegin(struct zone *zone)
>>>   {
>>>       return 0;
>>> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>>> index 9e86e9ee0a10..161bb6704a9b 100644
>>> --- a/mm/memory_hotplug.c
>>> +++ b/mm/memory_hotplug.c
>>> @@ -270,24 +270,29 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat)
>>>   }
>>>   #endif /* CONFIG_HAVE_BOOTMEM_INFO_NODE */
>>>  
>>> +/*
>>> + * Disallow all operations smaller than a sub-section and only
>>> + * allow operations smaller than a section for
>>> + * SPARSEMEM_VMEMMAP. Note that check_hotplug_memory_range()
>>> + * enforces a larger memory_block_size_bytes() granularity for
>>> + * memory that will be marked online, so this check should only
>>> + * fire for direct arch_{add,remove}_memory() users outside of
>>> + * add_memory_resource().
>>> + */
>>> +unsigned long mhp_get_min_align(void)
>>> +{
>>> +    if (IS_ENABLED(CONFIG_SPARSEMEM_VMEMMAP))
>>> +        return PAGES_PER_SUBSECTION;
>>> +    return PAGES_PER_SECTION;
>>> +}
>>> +EXPORT_SYMBOL_GPL(mhp_get_min_align);
>>
>> We have two main interfaces to "hotplug" memory:
>>
>> a) add_memory() and friends for System RAM, which have memory block 
>> alignment requirements.
>>
>> b) memremap_pages(), which has the alignment requirements you mention
>> here.
>>
>> I feel like what you need would be better exposed in mm/memremap.c,
>> for example, via "memremap_min_alignment" so it matches the 
>> "memremap_pages" semantics.
>>
>> And then, memremap_pages() is only available with CONFIG_ZONE_DEVICE, 
>> which depends on SPARSEMEM_VMEMMAP. So you'll always have 
>> PAGES_PER_SUBSECTION.
>>
>> I can already spot "memremap_compat_align", maybe you can reuse that 
>> or handle it accordingly in there?
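>
> For reference, if I'm reading mm/memremap.c correctly, the generic
> fallback for that helper (used when an arch doesn't select
> CONFIG_ARCH_HAS_MEMREMAP_COMPAT_ALIGN) is simply:
>
>         unsigned long memremap_compat_align(void)
>         {
>                 return SUBSECTION_SIZE;
>         }
>
> so for the ZONE_DEVICE case it already resolves to sub-section
> granularity (2MB on x86), and as far as I can tell an arch override
> only makes it larger.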
>
> Yes, I think that since a subsection is aligned to PAGE_SIZE, I can do:
>
> size_t pci_p2pdma_align_size(size_t size)
> {
>         unsigned long min_align;
>
>         min_align = memremap_compat_align();
>         if (!IS_ALIGNED(size, min_align))
>                 return ALIGN_DOWN(size, min_align);
>
>         return size;
> }
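>
> Call-site wise, the idea would be something along these lines (just a
> sketch; the exact hook and the names here are placeholders, not the
> final patch):
>
>         /* clamp the provider size before exporting the BAR for p2pdma */
>         size = pci_p2pdma_align_size(pci_resource_len(pdev, bar));
>         if (!size)
>                 return -EINVAL; /* BAR smaller than memremap_compat_align() */
>         rc = pci_p2pdma_add_resource(pdev, bar, size, 0);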
>
>
> Thoughts?
>
>>
>


