[PATCH 6/8] drm/panthor: Explicit expansion of locked VM region
Boris Brezillon
boris.brezillon at collabora.com
Tue May 5 09:32:44 PDT 2026
On Tue, 5 May 2026 16:05:12 +0200
Ketil Johnsen <ketil.johnsen at arm.com> wrote:
> Currently the panthor_vm_lock_region() function will implicitly expand
> an already locked VM region. This can be problematic because the caller
> does not reliably know whether it needs to call panthor_vm_unlock_region()
> or not.
>
> Worth noting, there are currently no known issues with the code as it
> is written today.
>
> This change introduces panthor_vm_expand_region(), which only works
> when there is already a locked VM region. This means, in turn, that the
> original lock and unlock functions can operate as a strict pair. This
> pairing is needed for the subsequent protected memory changes.
>
> Signed-off-by: Ketil Johnsen <ketil.johnsen at arm.com>
> ---
> drivers/gpu/drm/panthor/panthor_mmu.c | 69 +++++++++++++++++++--------
> 1 file changed, 50 insertions(+), 19 deletions(-)
>
> diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
> index fc930ee158a52..07f54176ec1bf 100644
> --- a/drivers/gpu/drm/panthor/panthor_mmu.c
> +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
> @@ -1701,15 +1701,36 @@ static int panthor_vm_lock_region(struct panthor_vm *vm, u64 start, u64 size)
> struct panthor_device *ptdev = vm->ptdev;
> int ret = 0;
>
> - /* sm_step_remap() can call panthor_vm_lock_region() to account for
> - * the wider unmap needed when doing a partial huge page unamp. We
> - * need to ignore the lock if it's already part of the locked region.
> - */
> - if (start >= vm->locked_region.start &&
> - start + size <= vm->locked_region.start + vm->locked_region.size)
> - return 0;
> + if (drm_WARN_ON(&ptdev->base, vm->locked_region.size))
> + return -EINVAL;
How about we have a helper called panthor_vm_apply_as_lock() that would
only take care of the AS_LOCKADDR() sequence? panthor_vm_lock_region()
would then have this WARN_ON(), the pack_region_range() call and a call
to panthor_vm_apply_as_lock(). Similarly,
panthor_vm_expand_locked_region() would rely on
panthor_vm_apply_as_lock() to apply the expanded lock.
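An untested sketch of that split, based on the hunks in this patch (one
way to read the suggestion is to keep the pack_region_range() packing
inside the helper, since it feeds AS_LOCKADDR() directly; the helper
name follows the proposal above and nothing here is compile-tested):

```c
/* Untested sketch: panthor_vm_apply_as_lock() owns the AS_LOCKADDR()
 * sequence and the locked_region bookkeeping, so both the lock and
 * expand paths share one implementation.
 */
static int panthor_vm_apply_as_lock(struct panthor_vm *vm, u64 start, u64 size)
{
	struct panthor_device *ptdev = vm->ptdev;
	int ret = 0;

	mutex_lock(&ptdev->mmu->as.slots_lock);
	if (vm->as.id >= 0 && size) {
		/* Lock the region that needs to be updated. */
		gpu_write64(ptdev, AS_LOCKADDR(vm->as.id),
			    pack_region_range(ptdev, &start, &size));
		ret = as_send_cmd_and_wait(ptdev, vm->as.id, AS_COMMAND_LOCK);
	}

	/* If the lock succeeded, update the locked_region info. */
	if (!ret) {
		vm->locked_region.start = start;
		vm->locked_region.size = size;
	}
	mutex_unlock(&ptdev->mmu->as.slots_lock);

	return ret;
}

static int panthor_vm_lock_region(struct panthor_vm *vm, u64 start, u64 size)
{
	/* Locking an already locked VM region is a caller bug. */
	if (drm_WARN_ON(&vm->ptdev->base, vm->locked_region.size))
		return -EINVAL;

	return panthor_vm_apply_as_lock(vm, start, size);
}
```

panthor_vm_expand_locked_region() would then keep only the
union-of-regions math from this patch and end with the same
panthor_vm_apply_as_lock() call instead of duplicating the mutex /
gpu_write64() / as_send_cmd_and_wait() sequence.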
> +
> + mutex_lock(&ptdev->mmu->as.slots_lock);
> + if (vm->as.id >= 0 && size) {
> + /* Lock the region that needs to be updated */
> + gpu_write64(ptdev, AS_LOCKADDR(vm->as.id),
> + pack_region_range(ptdev, &start, &size));
> +
> + /* If the lock succeeded, update the locked_region info. */
> + ret = as_send_cmd_and_wait(ptdev, vm->as.id, AS_COMMAND_LOCK);
> + }
>
> - /* sm_step_remap() may need a locked region that isn't a strict superset
> + if (!ret) {
> + vm->locked_region.start = start;
> + vm->locked_region.size = size;
> + }
> + mutex_unlock(&ptdev->mmu->as.slots_lock);
> +
> + return ret;
> +}
> +
> +static int panthor_vm_expand_region(struct panthor_vm *vm, u64 start, u64 size)
s/panthor_vm_expand_region/panthor_vm_expand_locked_region/
> +{
> + struct panthor_device *ptdev = vm->ptdev;
> + u64 end;
> + int ret = 0;
> +
> + /* This function is here to handle the following case:
> + * sm_step_remap() may need a locked region that isn't a strict superset
> * of the original one because of having to extend unmap boundaries beyond
> * it to deal with partial unmaps of transparent huge pages. What we want
> * in those cases is to lock the union of both regions. The new region must
> @@ -1717,16 +1738,24 @@ static int panthor_vm_lock_region(struct panthor_vm *vm, u64 start, u64 size)
> * boundaries in a remap operation can only shift up or down respectively,
> * but never otherwise.
> */
> - if (vm->locked_region.size) {
> - u64 end = max(vm->locked_region.start + vm->locked_region.size,
> - start + size);
>
> - drm_WARN_ON_ONCE(&vm->ptdev->base, (start + size <= vm->locked_region.start) ||
> - (start >= vm->locked_region.start + vm->locked_region.size));
> + /* This function can only expand an already locked region */
> + if (drm_WARN_ON(&ptdev->base, !vm->locked_region.size))
> + return -EINVAL;
>
> - start = min(start, vm->locked_region.start);
> - size = end - start;
> - }
> + /* Early out if requested range is already locked */
> + if (start >= vm->locked_region.start &&
> + start + size <= vm->locked_region.start + vm->locked_region.size)
> + return 0;
> +
> + end = max(vm->locked_region.start + vm->locked_region.size,
> + start + size);
> +
> + drm_WARN_ON_ONCE(&ptdev->base, (start + size <= vm->locked_region.start) ||
> + (start >= vm->locked_region.start + vm->locked_region.size));
> +
> + start = min(start, vm->locked_region.start);
> + size = end - start;
>
> mutex_lock(&ptdev->mmu->as.slots_lock);
> if (vm->as.id >= 0 && size) {
> @@ -2252,11 +2281,13 @@ static int panthor_gpuva_sm_step_remap(struct drm_gpuva_op *op,
> unmap_hugepage_align(&op->remap, &unmap_start, &unmap_range);
>
> /* If the range changed, we might have to lock a wider region to guarantee
> - * atomicity. panthor_vm_lock_region() bails out early if the new region
> - * is already part of the locked region, so no need to do this check here.
> + * atomicity.
> */
> if (!unmap_vma->evicted) {
> - panthor_vm_lock_region(vm, unmap_start, unmap_range);
> + ret = panthor_vm_expand_region(vm, unmap_start, unmap_range);
> + if (ret)
> + return ret;
> +
> panthor_vm_unmap_pages(vm, unmap_start, unmap_range);
> }
>
More information about the linux-arm-kernel mailing list