[PATCH] arm64: patching: avoid early page_to_phys()

Mark Rutland mark.rutland at arm.com
Mon Dec 2 09:45:11 PST 2024


Looks like I messed up my text editing and left some bonus words...

On Mon, Dec 02, 2024 at 05:03:59PM +0000, Mark Rutland wrote:
> When arm64 is configured with CONFIG_DEBUG_VIRTUAL=y, a warning is
> printed from the patching code because patch_map(), e.g.

That was meant to say:

| When arm64 is configured with CONFIG_DEBUG_VIRTUAL=y, a warning is
| printed from the patching code, e.g.

[...]

> For historical reasons, the structure of patch_map() is more complicated
> than necessary and can be simplified. For kernel image addresses it's
> sufficient to use __pa_symbol() directly without converting this to a
> page address and back. Aside from kernel image addresses, all executable
> code should be allocated from execmem (where all allocations will fall
> within the vmalloc area), and the vmalloc area), and so there's no need
> for the fallback case when case when CONFIG_EXECMEM=n.

That last sentence should have been:

| Aside from kernel image addresses, all executable code should be
| allocated from execmem (where all allocations will fall within the
| vmalloc area), and so there's no need for the fallback case when
| CONFIG_EXECMEM=n.

I can spin a v2 with those fixed if necessary.

Mark.


