[PATCH v9 10/24] mm/hmm: let users to tag specific PFN with DMA mapped bit
Mika Penttilä
mpenttil at redhat.com
Wed Apr 23 11:37:24 PDT 2025
On 4/23/25 21:17, Jason Gunthorpe wrote:
> On Wed, Apr 23, 2025 at 08:54:05PM +0300, Mika Penttilä wrote:
>>> @@ -36,6 +38,13 @@ enum hmm_pfn_flags {
>>> HMM_PFN_VALID = 1UL << (BITS_PER_LONG - 1),
>>> HMM_PFN_WRITE = 1UL << (BITS_PER_LONG - 2),
>>> HMM_PFN_ERROR = 1UL << (BITS_PER_LONG - 3),
>>> +
>>> + /*
>>> + * Sticky flags, carried from input to output,
>>> + * don't forget to update HMM_PFN_INOUT_FLAGS
>>> + */
>>> + HMM_PFN_DMA_MAPPED = 1UL << (BITS_PER_LONG - 7),
>>> +
>> How does this play together with the mapped-order usage?
> Order shift starts at bit 8, DMA_MAPPED is at bit 7
The hmm flag bits are the high bits, and the order field is 5 bits whose
shift starts at (BITS_PER_LONG - 8), so it extends up through bit
(BITS_PER_LONG - 4) and overlaps bit (BITS_PER_LONG - 7).
> The pfn array is linear and simply indexed. The order is intended for
> page table like HW to be able to build larger entries from the hmm
> data without having to scan for contiguity.
>
> Even if order is present the entry is still replicated across all the
> pfns that are inside the order.
>
> At least this series should replicate the dma_mapped flag as well as
> it doesn't pay attention to order.
>
> I suspect a page table implementation may need to make some small
> changes. Indeed with guaranteed contiguous IOVA there may be a
> significant optimization available to have the HW page table cover all
> the contiguous present pages in the iommu, which would be a higher
> order than the pages themselves. However this would require being able
> to punch non-present holes into contiguous mappings...
>
> Jason
>
--Mika
More information about the Linux-nvme
mailing list