[PATCH v5 8/8] arm64: enforce x1|x2|x3 == 0 upon kernel entry as per boot protocol
Mark Rutland
mark.rutland at arm.com
Wed Mar 18 13:24:30 PDT 2015
> >> >>> ENTRY(stext)
> >> >>> + adr_l x8, boot_regs // record the contents of
> >> >>> + stp x0, x1, [x8] // x0 .. x3 at kernel entry
> >> >>> + stp x2, x3, [x8, #16]
> >> >>
> >> >> I think we should have a dc ivac here as we do for
> >> >> set_cpu_boot_mode_flag.
> >> >>
> >> >> That avoids a potential issue with boot_regs sharing a cacheline with
> >> >> data we write with the MMU on -- using __flush_dcache_area will result
> >> >> in a civac, so we could write back dirty data atop of the boot_regs if
> >> >> there were clean entries in the cache when we did the non-cacheable
> >> >> write.
> >> >>
> >> >
> >> > Hmm, I wondered about that.
> >> >
> >> > Could we instead just make it u64 __initconst boot_regs[] in setup.c?
> >> >
> >>
> >> Never mind, it's easier just to do the invalidate right after, and I
> >> can drop the flush before the access.
> >
> > Yup.
> >
> > Annoyingly the minimum cache line size seems to be a word (given the
> > definition of CTR.DminLine), which means you need a few dc ivac
> > instructions to be architecturally correct.
> >
>
> But that applies to cpu_boot_mode as well then?
It writes a single word, so it happens to be safe.
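For comparison, a single-word flag write only needs a single invalidate,
something like (sketch only; register choices are illustrative, not
lifted from the actual set_cpu_boot_mode_flag code):

	str	w20, [x1]		// single-word write with the MMU off
	dmb	sy			// order the write before the dc
	dc	ivac, x1		// one dc ivac covers it, since
					// DminLine can't be smaller than
					// a word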
> I will add a call to __inval_cache_range() right after recording the
> initial values; that should do the right thing regarding the line size
That works, with one caveat: you'll need a dmb sy between the writes and
the call. With the MMU off those stp writes are non-cacheable, and dc
instructions by VA only hazard against normal cacheable accesses, so
nothing orders the invalidate after the writes. __inval_cache_range
assumes the caches are on and therefore doesn't have a dmb prior to its
dc instructions.
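Something like the below should do the trick (untested sketch -- the
end-address calculation, the use of bl, and any registers beyond those
in the patch are my assumptions, not necessarily what the final patch
should look like):

	adr_l	x8, boot_regs			// record the contents of
	stp	x0, x1, [x8]			// x0 .. x3 at kernel entry
	stp	x2, x3, [x8, #16]

	dmb	sy				// needed before dc ivac with
						// the MMU (and caches) off

	mov	x0, x8				// __inval_cache_range(start, end)
	add	x1, x8, #32			// end = start + 4 x 8 bytes
	bl	__inval_cache_range		// assumes it's fine to clobber
						// the live x0..x3 here; stash
						// anything still needed (e.g.
						// the DT pointer) first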
Mark.