[PATCH v9 2/4] arm64: mte: change ASYNC and SYNC TCF settings into bitfields
Peter Collingbourne
pcc@google.com
Tue Jul 13 15:52:12 PDT 2021
On Tue, Jul 13, 2021 at 10:27 AM Will Deacon <will@kernel.org> wrote:
>
> On Mon, Jul 12, 2021 at 12:04:39PM -0700, Peter Collingbourne wrote:
> > On Wed, Jul 7, 2021 at 4:11 AM Will Deacon <will@kernel.org> wrote:
> > > On Fri, Jul 02, 2021 at 12:41:08PM -0700, Peter Collingbourne wrote:
> > > > long set_mte_ctrl(struct task_struct *task, unsigned long arg)
> > > > {
> > > > - u64 sctlr = task->thread.sctlr_user & ~SCTLR_EL1_TCF0_MASK;
> > > > u64 mte_ctrl = (~((arg & PR_MTE_TAG_MASK) >> PR_MTE_TAG_SHIFT) &
> > > > SYS_GCR_EL1_EXCL_MASK) << MTE_CTRL_GCR_USER_EXCL_SHIFT;
> > > >
> > > > if (!system_supports_mte())
> > > > return 0;
> > > >
> > > > - switch (arg & PR_MTE_TCF_MASK) {
> > > > - case PR_MTE_TCF_NONE:
> > > > - sctlr |= SCTLR_EL1_TCF0_NONE;
> > > > - break;
> > > > - case PR_MTE_TCF_SYNC:
> > > > - sctlr |= SCTLR_EL1_TCF0_SYNC;
> > > > - break;
> > > > - case PR_MTE_TCF_ASYNC:
> > > > - sctlr |= SCTLR_EL1_TCF0_ASYNC;
> > > > - break;
> > > > - default:
> > > > - return -EINVAL;
> > > > - }
> > > > + if (arg & PR_MTE_TCF_ASYNC)
> > > > + mte_ctrl |= MTE_CTRL_TCF_ASYNC;
> > > > + if (arg & PR_MTE_TCF_SYNC)
> > > > + mte_ctrl |= MTE_CTRL_TCF_SYNC;
> > > >
> > > > - if (task != current) {
> > > > - task->thread.sctlr_user = sctlr;
> > > > - task->thread.mte_ctrl = mte_ctrl;
> > > > - } else {
> > > > - set_task_sctlr_el1(sctlr);
> > > > - set_gcr_el1_excl(mte_ctrl);
> > > > + task->thread.mte_ctrl = mte_ctrl;
> > > > + if (task == current) {
> > > > + mte_update_sctlr_user(task);
> > >
> > > In conjunction with the next patch, what happens if we migrate at this
> > > point? I worry that we can install a stale sctlr_user value.
> > >
> > > > + set_task_sctlr_el1(task->thread.sctlr_user);
> >
> > In this case, we will call mte_update_sctlr_user when scheduled onto
> > the new CPU as a result of the change to mte_thread_switch, and both
> > the scheduler and prctl will set SCTLR_EL1 to the new (correct) value
> > for the current CPU.
>
> Doesn't that rely on task->thread.sctlr_user being explicitly read on the
> new CPU? For example, the following rough sequence is what I'm worried
> about:
>
>
> CPU x (prefer ASYNC)
> set_mte_ctrl(ASYNC | SYNC)
> current->thread.mte_ctrl = ASYNC | SYNC;
> mte_update_sctlr_user
> current->thread.sctlr_user = ASYNC;
> Register Xn = current->thread.sctlr_user; // ASYNC
> <migration to CPU y>
>
> CPU y (prefer SYNC)
> mte_thread_switch
> mte_update_sctlr_user
> next->thread.sctlr_user = SYNC;
> update_sctlr_el1
> SCTLR_EL1 = SYNC;
>
> <resume next back in set_mte_ctrl>
> set_task_sctlr_el1(Xn); // ASYNC
> current->thread.sctlr_user = Xn; // ASYNC XXX: also superfluous?
> SCTLR_EL1 = ASYNC;
>
>
> Does that make sense?
>
> I'm thinking set_mte_ctrl() should be using update_sctlr_el1() and disabling
> preemption around the whole thing, which would make it a lot closer to the
> context-switch path.
Okay, I see what you mean. I also noticed that
prctl(PR_PAC_SET_ENABLED_KEYS) would now have the same problem. In v10
I've addressed this issue by inserting a patch after this one that
disables preemption in both prctl implementations.
Peter