[PATCH 3/3] arm64: read VA_BITS from kcore for 52-bits VA kernel

Pingfan Liu piliu at redhat.com
Wed Dec 15 17:59:42 PST 2021


On Wed, Dec 15, 2021 at 9:06 PM Simon Horman <horms at verge.net.au> wrote:
>
> On Fri, Dec 10, 2021 at 11:07:35AM +0800, Pingfan Liu wrote:
> > phys_to_virt() calculates the virtual address. As an important factor,
> > page_offset is expected to be accurate.
> >
> > Since the arm64 kernel exposes va_bits through vmcore, use it.
> >
> > Signed-off-by: Pingfan Liu <piliu at redhat.com>
> > ---
> >  kexec/arch/arm64/kexec-arm64.c | 31 +++++++++++++++++++++++++++----
> >  kexec/arch/arm64/kexec-arm64.h |  1 +
> >  util_lib/elf_info.c            |  5 +++++
> >  3 files changed, 33 insertions(+), 4 deletions(-)
> >
> > diff --git a/kexec/arch/arm64/kexec-arm64.c b/kexec/arch/arm64/kexec-arm64.c
> > index bd650e6..ccc92db 100644
> > --- a/kexec/arch/arm64/kexec-arm64.c
> > +++ b/kexec/arch/arm64/kexec-arm64.c
> > @@ -54,7 +54,7 @@
> >  static bool try_read_phys_offset_from_kcore = false;
> >
> >  /* Machine specific details. */
> > -static int va_bits;
> > +static int va_bits = -1;
> >  static unsigned long page_offset;
> >
> >  /* Global varables the core kexec routines expect. */
> > @@ -876,7 +876,15 @@ static inline void set_phys_offset(long v, char *set_method)
> >
> >  static int get_va_bits(void)
> >  {
> > -     unsigned long long stext_sym_addr = get_kernel_sym("_stext");
> > +     unsigned long long stext_sym_addr;
> > +
> > +     /*
> > +      * already obtained from kcore, nothing more to do
> > +      */
> > +     if (va_bits != -1)
> > +             goto out;
>
> If va_bits is exposed by the kernel then it will be used.
> Else we continue here. Are there actually cases (old kernels) where
> we expect to continue. Or could we get rid of the fallback code here?
>

va_bits has been exposed since kernel commit 84c57dbd3c48 ("arm64: kernel:
arch_crash_save_vmcoreinfo() should depend on CONFIG_CRASH_CORE"),
and the first kernel release containing VA_BITS is v4.19.
I am not sure whether old kernels still need the fallback. Maybe just keep
the compatibility code and print a warning to remind users?
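
(Just to illustrate, not part of the patch: a minimal sketch of what keeping
the fallback plus a warning could look like inside get_va_bits(). It reuses
the existing names in kexec-arm64.c; the warning wording is only a suggestion.)

	/* Already obtained from kcore's vmcoreinfo, nothing more to do. */
	if (va_bits != -1)
		goto out;

	/*
	 * VA_BITS was not found in vmcore; fall back to guessing it from
	 * _stext and warn, since this path only matters for old kernels.
	 */
	fprintf(stderr,
		"Warning: cannot read VA_BITS from vmcore, guessing from _stext\n");

	stext_sym_addr = get_kernel_sym("_stext");
	if (stext_sym_addr == 0) {
		fprintf(stderr, "Can't get the symbol of _stext.\n");
		return -1;
	}
	/* ... the existing _stext-based derivation of va_bits stays unchanged ... */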

> > +
> > +     stext_sym_addr = get_kernel_sym("_stext");
> >
> >       if (stext_sym_addr == 0) {
> >               fprintf(stderr, "Can't get the symbol of _stext.\n");
> > @@ -900,6 +908,7 @@ static int get_va_bits(void)
> >               return -1;
> >       }
> >
> > +out:
> >       dbgprintf("va_bits : %d\n", va_bits);
> >
> >       return 0;
> > @@ -917,14 +926,27 @@ int get_page_offset(unsigned long *page_offset)
> >       if (ret < 0)
> >               return ret;
> >
>
> I'm confused about why there is both a (va_bits - 1)
> and va_bits case here.
>

It originates from the changes to the memory layout on arm64,
mostly introduced by kernel commit 14c127c957c1 ("arm64:
mm: Flip kernel VA space"), which contains this change:
-#define PAGE_OFFSET            (UL(0xffffffffffffffff) - \
-       (UL(1) << (VA_BITS - 1)) + 1)
+#define PAGE_OFFSET            (UL(0xffffffffffffffff) - \
+       (UL(1) << VA_BITS) + 1)
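
To make the difference concrete, here is a small stand-alone example (not
from the patch) that evaluates both formulas for a 48-bit VA kernel:

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
	int va_bits = 48;

	/* Old layout: linear map occupies the upper half of the kernel VA range. */
	uint64_t old_off = UINT64_MAX - (UINT64_C(1) << (va_bits - 1)) + 1;
	/* New layout (after the VA space flip): linear map starts at the bottom. */
	uint64_t new_off = UINT64_MAX - (UINT64_C(1) << va_bits) + 1;

	printf("old PAGE_OFFSET: 0x%" PRIx64 "\n", old_off);	/* 0xffff800000000000 */
	printf("new PAGE_OFFSET: 0x%" PRIx64 "\n", new_off);	/* 0xffff000000000000 */
	return 0;
}

So for VA_BITS=48 the linear map base moves from 0xffff800000000000 down to
0xffff000000000000, which is why get_page_offset() has to handle both the
(va_bits - 1) and the va_bits case.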


Thanks,

Pingfan



