[PATCH 5.10.y] arm64: Avoid premature usercopy failure

Greg Kroah-Hartman gregkh at linuxfoundation.org
Wed Oct 27 09:17:08 PDT 2021


On Wed, Oct 27, 2021 at 01:40:47AM +0000, Chen Huang wrote:
> From: Robin Murphy <robin.murphy at arm.com>
> 
> commit 295cf156231ca3f9e3a66bde7fab5e09c41835e0 upstream.
> 
> Al reminds us that the usercopy API must only return complete failure
> if absolutely nothing could be copied. Currently, if userspace does
> something silly like giving us an unaligned pointer to Device memory,
> or a size which overruns MTE tag bounds, we may fail to honour that
> requirement when faulting on a multi-byte access even though a smaller
> access could have succeeded.
> 
> Add a mitigation to the fixup routines to fall back to a single-byte
> copy if we faulted on a larger access before anything has been written
> to the destination, to guarantee making *some* forward progress. We
> needn't be too concerned about the overall performance since this should
> only occur when callers are doing something a bit dodgy in the first
> place. Particularly broken userspace might still be able to trick
> generic_perform_write() into an infinite loop by targeting write() at
> an mmap() of some read-only device register where the fault-in load
> succeeds but any store synchronously aborts such that copy_to_user() is
> genuinely unable to make progress, but, well, don't do that...
> 
> CC: stable at vger.kernel.org
> Reported-by: Chen Huang <chenhuang5 at huawei.com>
> Suggested-by: Al Viro <viro at zeniv.linux.org.uk>
> Reviewed-by: Catalin Marinas <catalin.marinas at arm.com>
> Signed-off-by: Robin Murphy <robin.murphy at arm.com>
> Link: https://lore.kernel.org/r/dc03d5c675731a1f24a62417dba5429ad744234e.1626098433.git.robin.murphy@arm.com
> Signed-off-by: Will Deacon <will at kernel.org>
> Signed-off-by: Chen Huang <chenhuang5 at huawei.com>
> ---
>  arch/arm64/lib/copy_from_user.S | 13 ++++++++++---
>  arch/arm64/lib/copy_in_user.S   | 21 ++++++++++++++-------
>  arch/arm64/lib/copy_to_user.S   | 14 +++++++++++---
>  3 files changed, 35 insertions(+), 13 deletions(-)
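
For readers following along, the contract being preserved here is that
copy_from_user()/copy_to_user() return the number of bytes *not* copied,
and callers such as generic_perform_write() only treat the result as fatal
when no progress at all was made. A rough userspace model of the idea (toy
names, and a pretend "fault" on unaligned wide stores rather than the real
arm64 fixup assembly) might look like this:

/*
 * Toy model only: a copy routine whose wide stores "fault" on unaligned
 * destinations falls back to a single byte so it always makes some
 * progress, and the caller loop (in the spirit of generic_perform_write())
 * only gives up when literally nothing was copied.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static size_t model_copy(char *dst, const char *src, size_t n)
{
	size_t copied = 0;

	/* fast path: 8-byte chunks, pretending unaligned wide stores fault */
	while (n - copied >= 8 && (uintptr_t)(dst + copied) % 8 == 0) {
		memcpy(dst + copied, src + copied, 8);
		copied += 8;
	}

	/* fixup fallback: "faulted" before copying anything, so try one byte */
	if (copied == 0 && n > 0)
		dst[copied++] = src[0];

	return n - copied;	/* bytes NOT copied, as copy_to_user() reports */
}

int main(void)
{
	static char buf[64];
	const char *msg = "forward progress, one byte at a time if need be";
	char *dst = buf + 1;	/* deliberately misaligned destination */
	size_t len = strlen(msg) + 1, pos = 0;

	while (pos < len) {
		size_t left = model_copy(dst + pos, msg + pos, len - pos);

		if (left == len - pos) {	/* zero progress: bail out */
			fprintf(stderr, "stuck at byte %zu\n", pos);
			return 1;
		}
		pos += (len - pos) - left;
	}
	printf("%s\n", dst);
	return 0;
}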

Both now queued up, thanks.

greg k-h


