[PATCH v3 3/7] arm64: split off early mapping code from early_fixmap_init()
Ard Biesheuvel
ard.biesheuvel at linaro.org
Mon Dec 7 08:13:31 PST 2015
On 7 December 2015 at 17:08, Catalin Marinas <catalin.marinas at arm.com> wrote:
> On Thu, Dec 03, 2015 at 02:31:19PM +0100, Ard Biesheuvel wrote:
>> On 3 December 2015 at 13:18, Mark Rutland <mark.rutland at arm.com> wrote:
>> > As a heads-up, for avoiding TLB conflicts, I'm currently working on
>> > alternative way of creating the kernel page tables which will definitely
>> > conflict here, and may or may not supersede this approach.
>> >
>> > By adding new FIX_{PGD,PUD,PMD,PTE} indices to the fixmap, we can
>> > allocate page tables from anywhere via memblock, and temporarily map
>> > them as we need to.
> [...]
>> > That would avoid the need for the bootstrap tables. In head.S we'd only
>> > need to create a temporary (coarse-grained, RWX) kernel mapping (with
>> > the fixmap bolted on). Later we would create a whole new set of tables
>> > with a fine-grained kernel mapping and a full linear mapping using the
>> > new fixmap entries to temporarily map tables, then switch over to those
>> > atomically.
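
If I follow, each table level would get its own fixmap slot, and any
modification of a table would go through helpers roughly like the
below (the names are my guess at what Mark's patches look like, not
the real thing):

    #include <linux/types.h>
    #include <asm/fixmap.h>
    #include <asm/pgtable.h>

    static pte_t *pte_set_fixmap(phys_addr_t pte_phys)
    {
            /* map the memblock-allocated PTE page at the FIX_PTE slot */
            return (pte_t *)set_fixmap_offset(FIX_PTE, pte_phys);
    }

    static void pte_clear_fixmap(void)
    {
            /* tear the window down again once we are done modifying it */
            clear_fixmap(FIX_PTE);
    }

That would keep the tables out of the (not yet complete) linear
mapping entirely.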
>
> If we separate the kernel image mapping from the linear one, I think
> things would be slightly simpler to avoid TLB conflicts (but I haven't
> looked at Mark's patches yet).
>
>> If we change back to a full linear mapping, are we back to not putting
>> the Image astride a 1GB/32MB/512MB boundary (depending on page size)?
>>
>> Anyway, to illustrate where I am headed with this: in my next version
>> of this series, I intend to move the kernel mapping to the start of
>> the vmalloc area, which gets moved up 64 MB to make room for the
>> module area (which also moves down). That way, we can still load
>> modules as before, but no longer have a need for a dedicated carveout
>> for the kernel below PAGE_OFFSET.
>
> This makes sense, I guess it can be easily added to the existing series
> just by changing the KIMAGE_OFFSET macro.
>
Indeed. The only difference is that the VM area needs to be reserved
explicitly, to prevent vmalloc() from reusing it.
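Something along these lines should do (a sketch only: kimage_vaddr and
kimage_size are placeholders for however we end up describing the
image):

    #include <linux/init.h>
    #include <linux/vmalloc.h>

    /* placeholders for wherever the image start/size end up being exposed */
    extern unsigned long kimage_vaddr;
    extern unsigned long kimage_size;

    static struct vm_struct kernel_image_vm;

    static void __init reserve_kernel_vm_area(void)
    {
            /*
             * Claim the VA range covered by the kernel image up front, so
             * that vmalloc() can never hand out any part of it. The struct
             * is deliberately not __initdata, since vmalloc keeps a
             * reference to it after boot.
             */
            kernel_image_vm.addr  = (void *)kimage_vaddr;
            kernel_image_vm.size  = kimage_size;
            kernel_image_vm.flags = VM_MAP;

            vm_area_add_early(&kernel_image_vm);
    }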
>> The next step is then to move the kernel Image up inside the vmalloc
>> area based on some randomness we get from the bootloader, and relocate
>> it in place (using the same approach as in the patches I sent out at
>> the beginning of this year). I have implemented module PLTs so that
>> the Image and the modules no longer need to be within 128 MB of each
>> other, which means that we can have full KASLR for both modules and
>> the Image, and also place the kernel anywhere in physical memory. The
>> module PLTs would be a runtime penalty only, i.e., a KASLR-capable
>> kernel running without KASLR would not incur the penalty of branching
>> via PLTs. The only build-time option is -mcmodel=large for modules,
>> so that data symbol references are absolute, but that is unlikely to
>> hurt performance.
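
For reference, each PLT entry is just a 16-byte veneer along these
lines, loading the full 64-bit target into x16 (the linker-reserved
IP0 register) and branching through it, so the +/-128 MB range limit
of an ordinary branch no longer applies:

    #include <linux/types.h>

    struct plt_entry {
            __le32  mov0;   /* movn x16, #0x....            */
            __le32  mov1;   /* movk x16, #0x...., lsl #16   */
            __le32  mov2;   /* movk x16, #0x...., lsl #32   */
            __le32  br;     /* br   x16                     */
    };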
>
> I guess full KASLR would be conditional on a config option.
>
Yes. But it would be nice if the only build-time penalty were the use
of -mcmodel=large for modules, so that distro kernels can enable KASLR
unconditionally (especially since -mcmodel=large is likely to be
enabled for distro kernels anyway, due to the A53 erratum that
requires it).