[PATCH v2 5/6] ARM: MMU64: map memory for barebox proper pagewise

Ahmad Fatoum a.fatoum at pengutronix.de
Wed Jun 18 01:32:28 PDT 2025


Hello Sascha,

On 6/17/25 16:28, Sascha Hauer wrote:
> Map the remainder of the memory explicitly with two-level page tables. This is
> the region where barebox proper ends up. In barebox proper we'll remap the code
> segments read-only/executable and the ro segments read-only/execute-never. For
> this we need the memory to be mapped pagewise. We can't do the split from
> section-wise mapping to pagewise mapping later, because that would require a
> break-before-make sequence, which we can't do while barebox proper is running
> at the location being remapped.
> 
> Reviewed-by: Ahmad Fatoum <a.fatoum at pengutronix.de>
> Signed-off-by: Sascha Hauer <s.hauer at pengutronix.de>
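
(For context, the break-before-make sequence mentioned above would roughly
look like the sketch below when replacing a live block mapping with a
finer-grained one. This is illustrative only; the barrier and
TLB-invalidation helper names are placeholders, not actual barebox API.)

	/* 1. break: clear the old descriptor with a volatile write */
	set_pte(pte, 0);
	dsb();					/* placeholder barrier helper */
	/* 2. invalidate any stale TLB entries covering the range */
	tlb_invalidate_range(virt, block_size);	/* placeholder name */
	/* 3. make: install the new, finer-grained mapping */
	set_pte(pte, phys | attr | type);
	dsb();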

Got a small change request below.

>  	uint64_t *ttb = get_ttb();
>  	uint64_t block_size;
> @@ -149,19 +150,25 @@ static void create_sections(uint64_t virt, uint64_t phys, uint64_t size,
>  	while (size) {
>  		table = ttb;
>  		for (level = 0; level < 4; level++) {
> +			bool finish = false;
>  			block_shift = level2shift(level);
>  			idx = (addr & level2mask(level)) >> block_shift;
>  			block_size = (1ULL << block_shift);
>  
>  			pte = table + idx;
>  
> -			if (size >= block_size && IS_ALIGNED(addr, block_size) &&
> -			    IS_ALIGNED(phys, block_size)) {
> +			if (force_pages) {
> +				if (level == 3)
> +					finish = true;
> +			} else if (size >= block_size && IS_ALIGNED(addr, block_size) &&
> +				   IS_ALIGNED(phys, block_size)) {
> +				finish = true;
> +			}
> +
> +			if (finish) {

Nitpick: I think the code would be clearer with:

bool block_aligned = size >= block_size &&
                     IS_ALIGNED(addr, block_size) &&
                     IS_ALIGNED(phys, block_size);


if ((force_pages && level == 3) || (!force_pages && block_aligned))
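
Spelled out in context (untested, just to show how it reads with the
variables already in scope), the branch then becomes:

			bool block_aligned = size >= block_size &&
					     IS_ALIGNED(addr, block_size) &&
					     IS_ALIGNED(phys, block_size);

			/* pages only exist at the last level (3) */
			if ((force_pages && level == 3) ||
			    (!force_pages && block_aligned)) {

which also lets the local "bool finish" go away entirely.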

>  				type = (level == 3) ?
>  					PTE_TYPE_PAGE : PTE_TYPE_BLOCK;
> -
> -				/* TODO: break-before-make missing */
> -				set_pte(pte, phys | attr | type);
> +				*pte = phys | attr | type;

create_sections() is also used in situations where break-before-make is required.
We should also keep the volatile writes to the page tables.

Can you revert to:

/* TODO: break-before-make missing for non-barebox regions */
set_pte(pte, phys | attr | type);

So we know there is something still left to do?
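
For reference, set_pte() boils down to a single volatile store (from memory;
the exact implementation in mmu_64.c may differ in detail), which is what
keeps the compiler from tearing, merging or reordering the page table
updates:

static void set_pte(uint64_t *pt, uint64_t val)
{
	/* volatile write so the update really reaches the page table */
	WRITE_ONCE(*pt, val);
}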

Thanks,
Ahmad

-- 
Pengutronix e.K.                  |                             |
Steuerwalder Str. 21              | http://www.pengutronix.de/  |
31137 Hildesheim, Germany         | Phone: +49-5121-206917-0    |
Amtsgericht Hildesheim, HRA 2686  | Fax:   +49-5121-206917-5555 |



