[PATCH v8 07/15] iommupt: Add map_pages op
Alexey Kardashevskiy
aik at amd.com
Sun Mar 1 16:02:48 PST 2026
On 28/2/26 00:48, Jason Gunthorpe wrote:
> On Fri, Feb 27, 2026 at 12:39:28PM +1100, Alexey Kardashevskiy wrote:
>>
>>
>> On 27/2/26 02:04, Jason Gunthorpe wrote:
>>> On Thu, Feb 26, 2026 at 10:11:56AM +1100, Alexey Kardashevskiy wrote:
>>>>> The flow would be some thing like..
>>>>> 1) Create an IOAS
>>>>> 2) Create a HWPT. If there is some known upper bound on RMP/etc page
>>>>> size then limit the HWPT page size to the upper bound
>>>>> 3) Map stuff into the ioas
>>>>> 4) Build the RMP/etc and map ranges of page granularity
>>>>> 5) Call iommufd to adjust the page size within ranges
>>>>
>>>> I am about to try this approach now. 5) means splitting bigger pages
>>>> to smaller and I remember you working on that hitless IO PDEs
>>>> smashing, do you have something to play with? I could not spot
>>>> anything on github but do not want to reinvent. Thanks,
>>>
>>> I thought this thread had concluded you needed to use the HW engines
>>
>> The HW engine has to be used for smashing while DMAing to a 2M page
>> being smashed. It is not needed when the insecure->trusted switch
>> happens and the IOMMU now needs to match the already-configured RMP.
>
> Oh? I'm surprised shared->private is different than private->shared..
Well, I rather meant "statistically". Confidential VMs are mostly private with occasional sharing, so a shared->private transition normally means there was a recent private->shared which already did the smashing. There is also an "unsmash_io" operation in that HW engine which I am not touching yet (but I keep it in mind).
> Regardless, I think if you go this path you have to stick to 4k IOPTEs
> and avoid the HW engine. Maybe that is good enough to start.
This is the current plan.
>>> for this and if so then KVM should maintain the IOMMU S2 where it can
>>> synchronize things and access the HW engines?
>>
>> I want to explore the idea of using the gmemfd->iommufd notification
>> mechanism for smashing too (as these smashes are always the result
>> of page state changes and this requires a notification on its own as
>> we figured out) and plumb that HW engine to the IOMMU side,
>> somewhere in the AMD IOMMU driver. Hard to imagine KVM learning
>> about IOMMU.
>
> Equally hard to imagine IOMMU changing the RMP.. Since you explained
> the HW engine changes both I don't know what you will do.
>
> Maybe guestmemfd needs to own the RMP updates and it can somehow
> invoke the HW engine and co-ordinate all the parts. This sounds very
> hard as well, so IDK.
The latter is worth exploring. Thanks,
--
Alexey
More information about the linux-riscv
mailing list