[PATCH 0/7] Add support to handle misaligned accesses in S-mode
Atish Kumar Patra
atishp at rivosinc.com
Tue Oct 3 01:40:30 PDT 2023
On Thu, Sep 28, 2023 at 9:48 AM Evan Green <evan at rivosinc.com> wrote:
>
> On Thu, Sep 28, 2023 at 12:49 AM Clément Léger <cleger at rivosinc.com> wrote:
> >
> >
> >
> > On 26/09/2023 23:43, Evan Green wrote:
> > > On Tue, Sep 26, 2023 at 8:03 AM Clément Léger <cleger at rivosinc.com> wrote:
> > >>
> > >> Since commit 61cadb9 ("Provide new description of misaligned load/store
> > >> behavior compatible with privileged architecture.") in the RISC-V ISA
> > >> manual, it is stated that misaligned load/store might not be supported.
> > >> However, the RISC-V kernel uABI states that misaligned accesses are
> > >> supported. In order to support that, this series adds S-mode handling
> > >> of misaligned accesses as well as support for prctl(PR_UNALIGN).
> > >>
> > >> Handling misaligned accesses in the kernel allows finer-grained
> > >> control over misaligned access behavior and, thanks to the prctl
> > >> call, makes it possible to disable misaligned access emulation so
> > >> that a SIGBUS is generated instead. User space can then optimize its
> > >> software by removing such accesses based on the generated SIGBUS.
> > >>
> > >> Currently, this series is useful for people who use an SBI that does
> > >> not handle misaligned traps. In the near future, this series will
> > >> make use of an SBI extension [1] allowing S-mode software to request
> > >> delegation of the misaligned load/store traps. This extension has
> > >> been submitted for review to the riscv tech-prs group. An OpenSBI
> > >> implementation for this spec is available at [2].
> > >
> > > For my own education, how does the new SBI call behave with respect to
> > > multiple harts? Does a call to change a feature perform that change
> > > across all harts, or just the hart the SBI call was made on? If the
> > > answer is "all harts", what if not all harts are exactly the same, and
> > > some can enable the feature switch while others cannot? Also if the
> > > answer is "all harts", does it also apply to hotplugged cpus, which
> > > may not have even existed at boot time?
> >
> > Depending on the feature, they can be either global (all harts) or
> > local (calling hart). The medeleg register is per hart and thus
> > misaligned load/store delegation for S-mode is also per hart.
>
> We should probably state this in the spec update then, both generally
> and for each specific feature added. Otherwise firmware writers are
> left not knowing if they're supposed to spread a feature across to all
> cores or not.
>
If a feature requires updating any CSR, it must be per-hart only. The
supervisor software is aware of the state of each hart and should
invoke the call from all present harts. Doing it in M-mode would
result in M-mode IPIs and seems racy with whatever the kernel might be
doing at that time.
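In other words, the supervisor-side flow would look something like the
pseudocode sketch below. Note that SBI_EXT_FWFT and
SBI_FWFT_MISALIGNED_DELEG are hypothetical identifiers for the
extension still under review, not values from a ratified spec:

	/* Pseudocode sketch: per-hart feature enable from S-mode.
	 * medeleg is per-hart, so each hart makes its own ecall. */
	static void enable_misaligned_deleg(void *unused)
	{
		/* Runs on the local hart via an IPI from on_each_cpu(). */
		sbi_ecall(SBI_EXT_FWFT, SBI_FWFT_MISALIGNED_DELEG,
			  1, 0, 0, 0, 0, 0);
	}

	/* Invoke on every present hart, as suggested above. */
	on_each_cpu(enable_misaligned_deleg, NULL, 1);

This keeps the IPIs in S-mode, where the kernel already serializes
them against its own work, instead of racing with M-mode IPIs.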
> >
> >
> > >
> > > What happens if a hart goes through a context loss event, like
> > > suspend/resume? Is the setting expected to be sticky, or is the kernel
> > > expected to replay these calls?
> >
> > That is a good question that we have not actually clarified yet.
> > Thanks for raising it!
>
IMO, it should be sticky until a reset. I would be interested to hear
other thoughts, though, if there is a non-sticky use case.
> No problem! This may also need to be specified per-feature in the
> spec. I have a vague hunch that it's better to ask the kernel to do it
> on resume, though ideally we'd have the terminology (and I don't think
> we do?) to specify exactly which points constitute a context loss.
> Mostly I'm remembering the x86 and ARM transition from S3, where lots
> of firmware code ran at resume, to S0ix-like power states, where
> things resumed directly into the OS and they had to figure out how to
> do it without firmware. The vague hunch is that keeping the laundry
> list of things firmware must do on resume low might keep us from
> getting in S0ix's way, but it's all so speculative it's hard to know
> if it's really a useful hunch or not.
>
> -Evan