[PATCH RFC v2 15/27] arm64: mte: Check that tag storage blocks are in the same zone

David Hildenbrand david at redhat.com
Fri Nov 24 11:56:59 PST 2023


On 19.11.23 17:57, Alexandru Elisei wrote:
> alloc_contig_range() requires that the requested pages are in the same
> zone. Check that this is indeed the case before initializing the tag
> storage blocks.
> 
> Signed-off-by: Alexandru Elisei <alexandru.elisei at arm.com>
> ---
>   arch/arm64/kernel/mte_tag_storage.c | 33 +++++++++++++++++++++++++++++
>   1 file changed, 33 insertions(+)
> 
> diff --git a/arch/arm64/kernel/mte_tag_storage.c b/arch/arm64/kernel/mte_tag_storage.c
> index 8b9bedf7575d..fd63430d4dc0 100644
> --- a/arch/arm64/kernel/mte_tag_storage.c
> +++ b/arch/arm64/kernel/mte_tag_storage.c
> @@ -265,6 +265,35 @@ void __init mte_tag_storage_init(void)
>   	}
>   }
>   
> +/* alloc_contig_range() requires all pages to be in the same zone. */
> +static int __init mte_tag_storage_check_zone(void)
> +{
> +	struct range *tag_range;
> +	struct zone *zone;
> +	unsigned long pfn;
> +	u32 block_size;
> +	int i, j;
> +
> +	for (i = 0; i < num_tag_regions; i++) {
> +		block_size = tag_regions[i].block_size;
> +		if (block_size == 1)
> +			continue;
> +
> +		tag_range = &tag_regions[i].tag_range;
> +		for (pfn = tag_range->start; pfn <= tag_range->end; pfn += block_size) {
> +			zone = page_zone(pfn_to_page(pfn));
> +			for (j = 1; j < block_size; j++) {
> +				if (page_zone(pfn_to_page(pfn + j)) != zone) {
> +					pr_err("Tag storage block pages in different zones\n");
> +					return -EINVAL;
> +				}
> +			}
> +		}
> +	}
> +
> +	return 0;
> +}
> +

Looks like something that ordinary CMA provides. See cma_activate_area().
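
Roughly, that function already does the same thing (paraphrasing
mm/cma.c, the exact code differs between kernel versions):

	zone = page_zone(pfn_to_page(base_pfn));
	for (pfn = base_pfn + 1; pfn < base_pfn + cma->count; pfn++) {
		/* All pages of a CMA area must be in the same zone. */
		if (page_zone(pfn_to_page(pfn)) != zone)
			goto not_in_zone;
	}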

Can't we find a way to let CMA do CMA thingies and just be a user of 
that? What would it take to make the performance issue you spelled out 
in the cover letter go away, so this doesn't have to be open-coded in 
arch code?
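
Something like the below is what I have in mind -- a rough, untested
sketch (names made up), assuming the number of tag storage regions has
a small fixed upper bound and that the regions are aligned suitably for
CMA. cma_activate_area() would then do the zone check (and the rest of
the area activation) for us:

#include <linux/cma.h>
#include <linux/log2.h>
#include <linux/pfn.h>
#include <linux/range.h>

/* Illustrative upper bound on the number of tag storage regions. */
#define MTE_TAG_STORAGE_MAX_REGIONS	32

static struct cma *tag_storage_cma[MTE_TAG_STORAGE_MAX_REGIONS];

static int __init mte_tag_storage_register_cma(void)
{
	struct range *tag_range;
	phys_addr_t base, size;
	unsigned int order;
	int i, ret;

	for (i = 0; i < num_tag_regions; i++) {
		tag_range = &tag_regions[i].tag_range;
		base = PFN_PHYS(tag_range->start);
		size = PFN_PHYS(range_len(tag_range));
		/* One bitmap bit per tag storage block (block_size is in pages). */
		order = order_base_2(tag_regions[i].block_size);

		/*
		 * cma_init_reserved_mem() only registers the reserved range;
		 * the zone check happens later, when cma_activate_area() runs
		 * from the CMA initcall.
		 */
		ret = cma_init_reserved_mem(base, size, order,
					    "mte-tag-storage",
					    &tag_storage_cma[i]);
		if (ret)
			return ret;
	}

	return 0;
}

Tag storage allocation/freeing would then go through
cma_alloc()/cma_release() instead of calling alloc_contig_range()
directly; whether that can be made fast enough is the performance
question from the cover letter.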

-- 
Cheers,

David / dhildenb



