[PATCH 3/3] arm64: read VA_BITS from kcore for 52-bits VA kernel
Pingfan Liu
piliu at redhat.com
Wed Dec 15 19:05:53 PST 2021
On Thu, Dec 16, 2021 at 10:46 AM Pingfan Liu <piliu at redhat.com> wrote:
>
> On Wed, Dec 15, 2021 at 9:35 PM Philipp Rudo <prudo at redhat.com> wrote:
> >
> > Hi Pingfan,
> >
> > On Fri, 10 Dec 2021 11:07:35 +0800
> > Pingfan Liu <piliu at redhat.com> wrote:
> >
> > > phys_to_virt() calculates a virtual address. As an important factor,
> > > page_offset is expected to be accurate.
> > >
> > > Since the arm64 kernel exposes va_bits through vmcore, use it.
> > >
> > > Signed-off-by: Pingfan Liu <piliu at redhat.com>
> > > ---
> > > kexec/arch/arm64/kexec-arm64.c | 31 +++++++++++++++++++++++++++----
> > > kexec/arch/arm64/kexec-arm64.h | 1 +
> > > util_lib/elf_info.c | 5 +++++
> > > 3 files changed, 33 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/kexec/arch/arm64/kexec-arm64.c b/kexec/arch/arm64/kexec-arm64.c
> > > index bd650e6..ccc92db 100644
> > > --- a/kexec/arch/arm64/kexec-arm64.c
> > > +++ b/kexec/arch/arm64/kexec-arm64.c
> > > @@ -54,7 +54,7 @@
> > > static bool try_read_phys_offset_from_kcore = false;
> > >
> > > /* Machine specific details. */
> > > -static int va_bits;
> > > +static int va_bits = -1;
> > > static unsigned long page_offset;
> > >
> > > /* Global varables the core kexec routines expect. */
> > > @@ -876,7 +876,15 @@ static inline void set_phys_offset(long v, char *set_method)
> > >
> > > static int get_va_bits(void)
> > > {
> > > - unsigned long long stext_sym_addr = get_kernel_sym("_stext");
> > > + unsigned long long stext_sym_addr;
> > > +
> > > + /*
> > > + * skip if va_bits was already read from kcore
> > > + */
> > > + if (va_bits != -1)
> > > + goto out;
> > > +
> > > + stext_sym_addr = get_kernel_sym("_stext");
> > >
> > > if (stext_sym_addr == 0) {
> > > fprintf(stderr, "Can't get the symbol of _stext.\n");
> > > @@ -900,6 +908,7 @@ static int get_va_bits(void)
> > > return -1;
> > > }
> > >
> > > +out:
> > > dbgprintf("va_bits : %d\n", va_bits);
> > >
> > > return 0;
> > > @@ -917,14 +926,27 @@ int get_page_offset(unsigned long *page_offset)
> > > if (ret < 0)
> > > return ret;
> > >
> > > - page_offset = (0xffffffffffffffffUL) << (va_bits - 1);
> > > + if (va_bits < 52)
> > > + *page_offset = (0xffffffffffffffffUL) << (va_bits - 1);
> > > + else
> > > + *page_offset = (0xffffffffffffffffUL) << va_bits;
> >
> > wouldn't it make sense to use ULONG_MAX here? At least for me it would
> > be much more readable.
> >
>
> Yes, I tend to agree and will update it in V2 (if there is no need to
> compile it on a 32-bit machine, which I consider a rare case
> nowadays).
>
I think UINT64_MAX avoids that issue, since it is 64-bit regardless of
the host.
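
For illustration, a minimal standalone sketch of that computation using
UINT64_MAX (the compute_page_offset() helper and the main() driver below
are only illustrative, not part of the patch; in kexec-tools the logic
would stay inside get_page_offset()):

#include <stdint.h>
#include <stdio.h>

/*
 * Hypothetical helper: derive PAGE_OFFSET from va_bits as in the patch,
 * using UINT64_MAX so the mask stays 64-bit even on a 32-bit host.
 */
static uint64_t compute_page_offset(int va_bits)
{
        if (va_bits < 52)
                return UINT64_MAX << (va_bits - 1);
        return UINT64_MAX << va_bits;
}

int main(void)
{
        /* 48-bit VA kernel: prints 0xffff800000000000 */
        printf("0x%016llx\n", (unsigned long long)compute_page_offset(48));
        /* 52-bit VA kernel: prints 0xfff0000000000000 */
        printf("0x%016llx\n", (unsigned long long)compute_page_offset(52));
        return 0;
}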