[PATCH] arm64: patching: avoid early page_to_phys()
Mike Rapoport
rppt at kernel.org
Tue Dec 3 00:56:52 PST 2024
On Mon, Dec 02, 2024 at 05:45:11PM +0000, Mark Rutland wrote:
> Looks like I messed up my text editing and left some bonus words...
>
> On Mon, Dec 02, 2024 at 05:03:59PM +0000, Mark Rutland wrote:
> > When arm64 is configured with CONFIG_DEBUG_VIRTUAL=y, a warning is
> > printed from the patching code because patch_map(), e.g.
>
> That was meant to say:
>
> | When arm64 is configured with CONFIG_DEBUG_VIRTUAL=y, a warning is
> | printed from the patching code, e.g.
>
> [...]
>
> > For historical reasons, the structure of patch_map() is more complicated
> > than necessary and can be simplified. For kernel image addresses it's
> > sufficient to use __pa_symbol() directly without converting this to a
> > page address and back. Aside from kernel image addresses, all executable
> > code should be allocated from execmem (where all allocations will fall
> > within the vmalloc area), and the vmalloc area), and so there's no need
> > for the fallback case when case when CONFIG_EXECMEM=n.
>
> That last sentence should have been:
>
> | Aside from kernel image addresses, all executable code should be
> | allocated from execmem (where all allocations will fall within the
> | vmalloc area), and so there's no need for the fallback case when case
> | when CONFIG_EXECMEM=n.
Still an extra "case when" ;-)
> I can spin a v2 with those fixed if necessary.
>
> Mark.
--
Sincerely yours,
Mike.