[PATCH v3 1/1] KVM: arm64: Allow cacheable stage 2 mapping using VMA flags

Oliver Upton oliver.upton at linux.dev
Wed Mar 26 11:51:57 PDT 2025


On Wed, Mar 26, 2025 at 11:24:32AM -0700, Sean Christopherson wrote:
> > > But I thought the whole problem is that mapping this fancy memory as device is
> > > unsafe on non-FWB hosts?  If it's safe, then why does KVM need to reject anything
> > > in the first place?
> > 
> > I don't know where you got that idea. This is all about what memory
> > type is exposed to a guest:
> > 
> > - with FWB, no need for CMOs, so cacheable memory is allowed if the
> >   device supports it (i.e. it actually exposes memory), and device
> >   otherwise.
> > 
> > - without FWB, CMOs are required, and we don't have a host mapping for
> >   these pages. As a fallback, the mapping is device only, as this
> >   doesn't require any CMO by definition.
> > 
> > There is no notion of "safety" here.
> 
> Ah, the safety I'm talking about is the CMO requirement.  IIUC, not doing CMOs
> if the memory is cacheable could result in data corruption, i.e. would be a safety
> issue for the host.  But I missed that you were proposing that the !FWB behavior
> would be to force device mappings.

To Jason's earlier point, you wind up with a security issue the other
way around.

Suppose the host is using a cacheable mapping to, say, zero the $THING
at the other end of the mapping. Without a way to perform CMOs on the
$THING, we cannot make the zeroing visible to a guest with a stage-2
Device-* mapping.

Marc, I understand that your proposed fallback is aligned to what we
do today, but I'm actually unconvinced that it provides any reliable/correct
behavior. We would instead wind up with stage-2 memory attribute rules
like so:

 1) If struct page memory, use a cacheable mapping. CMO for non-FWB.

 2) If cacheable PFNMAP:
   a) With FWB, use a cacheable mapping
   b) Without FWB, fail.

 3) If VM_ALLOW_ANY_UNCACHED, use Normal Non-Cacheable mapping

 4) Otherwise, Device-nGnRE

I understand 2b breaks ABI, but the 'typical' VFIO usages fall into (3)
and (4).
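The decision rules above can be sketched as a small C helper. This is
purely illustrative: the enum, struct, and function names are invented
for the example and do not correspond to actual KVM internals, and the
sketch only selects the memory attribute (case (1) would still need
CMOs on the !FWB path).

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative names only; not actual KVM code. */
enum s2_memattr {
	S2_NORMAL_WB,		/* cacheable */
	S2_NORMAL_NC,		/* Normal Non-Cacheable */
	S2_DEVICE_NGNRE,	/* Device-nGnRE */
	S2_REJECT,		/* fail the mapping, case (2b) */
};

struct fault_ctx {
	bool is_struct_page;	/* (1) struct-page backed memory */
	bool vma_cacheable;	/* (2) cacheable PFNMAP */
	bool allow_any_uncached;/* (3) VM_ALLOW_ANY_UNCACHED set */
	bool has_fwb;		/* host implements FEAT_S2FWB */
};

static enum s2_memattr stage2_memattr(const struct fault_ctx *c)
{
	/* (1) struct page memory: cacheable; CMOs needed when !FWB */
	if (c->is_struct_page)
		return S2_NORMAL_WB;

	/* (2) cacheable PFNMAP: cacheable with FWB, otherwise fail */
	if (c->vma_cacheable)
		return c->has_fwb ? S2_NORMAL_WB : S2_REJECT;

	/* (3) VM_ALLOW_ANY_UNCACHED: Normal Non-Cacheable */
	if (c->allow_any_uncached)
		return S2_NORMAL_NC;

	/* (4) everything else: Device-nGnRE */
	return S2_DEVICE_NGNRE;
}
```

The 'typical' VFIO usages hit the last two branches, which is why only
(2b) changes observable behavior relative to today.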

> > > > Importantly, it is *userspace* that is in charge of deciding how the
> > > > device is mapped at S2. And the memslot flag is the correct
> > > > abstraction for that.
> > >  
> > > I strongly disagree.  Whatever owns the underlying physical memory is in charge,
> > > not userspace.  For memory that's backed by a VMA, userspace can influence the
> > > behavior through mmap(), mprotect(), etc., but ultimately KVM needs to pull state
> > > from mm/, via the VMA.  Or in the guest_memfd case, from guest_memfd.
> > 
> > I don't buy that. Userspace needs to know the semantics of the memory
> > it gives to the guest. Or at least discover that the same device
> > plugged into to different hosts will have different behaviours. Just
> > letting things rip is not an acceptable outcome.
> 
> Agreed, but that doesn't require a memslot flag.  A capability to enumerate that
> KVM can do cacheable mappings for PFNMAP memory would suffice.  And if we want to
> have KVM reject memslots that are cachaeable in the VMA, but would get device in
> stage-2, then we can provide that functionality through the capability, i.e. let
> userspace decide if it wants "fallback to device" vs. "error on creation" on a
> per-VM basis.
> 
> What I object to is adding a memslot flag.

A capability that says "I can force cacheable things to be cacheable" is
useful beyond even PFNMAP stuff. A pedantic but correct live migration /
snapshotting implementation on non-FWB would need to do CMOs in case the
VM used a non-WB mapping for memory.

Thanks,
Oliver


