[PATCH v2 3/4] arm64: Add support for user sub-page fault probing

Catalin Marinas catalin.marinas at arm.com
Thu Dec 2 08:09:08 PST 2021


Hi Mark,

On Wed, Dec 01, 2021 at 08:29:06PM +0000, Mark Rutland wrote:
> On Wed, Dec 01, 2021 at 07:37:49PM +0000, Catalin Marinas wrote:
> > +/*
> > + * Return 0 on success, the number of bytes not accessed otherwise.
> > + */
> > +static inline size_t __mte_probe_user_range(const char __user *uaddr,
> > +					    size_t size, bool skip_first)
> > +{
> > +	const char __user *end = uaddr + size;
> > +	int err = 0;
> > +	char val;
> > +
> > +	uaddr = PTR_ALIGN_DOWN(uaddr, MTE_GRANULE_SIZE);
> > +	if (skip_first)
> > +		uaddr += MTE_GRANULE_SIZE;
> 
> Do we need the skipping for a functional reason, or is that an optimization?

An optimisation, and very likely not a noticeable one. The probe's read
follows a put_user() or get_user() done earlier by the caller, so the
first granule's cacheline is already allocated and another load may be
nearly as fast as the uaddr increment alone.
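
To make the calling pattern concrete, a hedged sketch (the function name
and flow are my illustration, loosely modelled on the fault-in helpers
this series hooks into, not actual patch code):

	static int fault_in_then_probe(const char __user *uaddr, size_t size)
	{
		char val;

		/* the caller touches the first granule anyway... */
		if (get_user(val, uaddr))
			return -EFAULT;

		/*
		 * ...so skip_first only saves re-reading a granule whose
		 * cacheline is already hot.
		 */
		return probe_subpage_readable(uaddr, size) ? -EFAULT : 0;
	}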

> From the comments in probe_subpage_writeable() and
> probe_subpage_safe_writeable() I wasn't sure if the skipping was because we
> *don't need to* check the first granule, or because we *must not* check the
> first granule.

The "don't need to" part. But thinking about this, I'll just drop it as
it's confusing.

> > +	while (uaddr < end) {
> > +		/*
> > +		 * A read is sufficient for MTE, the caller should have probed
> > +		 * for the pte write permission if required.
> > +		 */
> > +		__raw_get_user(val, uaddr, err);
> > +		if (err)
> > +			return end - uaddr;
> > +		uaddr += MTE_GRANULE_SIZE;
> > +	}
> 
> I think we may need to account for the residue from PTR_ALIGN_DOWN(), or we can
> report more bytes not copied than was passed in `size` in the first place,
> which I think might confuse some callers.
> 
> Consider MTE_GRANULE_SIZE is 16, uaddr is 31, and size is 1 (so end is 32). We
> align uaddr down to 16, and if we fail the first access we return (32 - 16),
> i.e. 16.

Good point. The arithmetic is fine if we skip the first granule but not
otherwise. Planning to fold in this diff:

diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index bcbd24b97917..213b30841beb 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -451,15 +451,17 @@ static inline int __copy_from_user_flushcache(void *dst, const void __user *src,
  * Return 0 on success, the number of bytes not accessed otherwise.
  */
 static inline size_t __mte_probe_user_range(const char __user *uaddr,
-					    size_t size, bool skip_first)
+					    size_t size)
 {
 	const char __user *end = uaddr + size;
 	int err = 0;
 	char val;
 
-	uaddr = PTR_ALIGN_DOWN(uaddr, MTE_GRANULE_SIZE);
-	if (skip_first)
-		uaddr += MTE_GRANULE_SIZE;
+	__raw_get_user(val, uaddr, err);
+	if (err)
+		return size;
+
+	uaddr = PTR_ALIGN(uaddr, MTE_GRANULE_SIZE);
 	while (uaddr < end) {
 		/*
 		 * A read is sufficient for MTE, the caller should have probed
@@ -480,8 +482,7 @@ static inline size_t probe_subpage_writeable(const void __user *uaddr,
 {
 	if (!system_supports_mte())
 		return 0;
-	/* first put_user() done in the caller */
-	return __mte_probe_user_range(uaddr, size, true);
+	return __mte_probe_user_range(uaddr, size);
 }
 
 static inline size_t probe_subpage_safe_writeable(const void __user *uaddr,
@@ -489,8 +490,7 @@ static inline size_t probe_subpage_safe_writeable(const void __user *uaddr,
 {
 	if (!system_supports_mte())
 		return 0;
-	/* the caller used GUP, don't skip the first granule */
-	return __mte_probe_user_range(uaddr, size, false);
+	return __mte_probe_user_range(uaddr, size);
 }
 
 static inline size_t probe_subpage_readable(const void __user *uaddr,
@@ -498,8 +498,7 @@ static inline size_t probe_subpage_readable(const void __user *uaddr,
 {
 	if (!system_supports_mte())
 		return 0;
-	/* first get_user() done in the caller */
-	return __mte_probe_user_range(uaddr, size, true);
+	return __mte_probe_user_range(uaddr, size);
 }
 
 #endif /* CONFIG_ARCH_HAS_SUBPAGE_FAULTS */
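
For completeness, a quick user-space walk-through of Mark's example
against the fixed logic (the ALIGN macros are reimplemented here since
the kernel ones aren't available outside the tree; assumes an
MTE_GRANULE_SIZE of 16):

	#include <stdio.h>
	#include <stdint.h>

	#define GRANULE			16
	#define ALIGN_DOWN(x, a)	((x) & ~((uintptr_t)(a) - 1))
	#define ALIGN_UP(x, a)		ALIGN_DOWN((x) + (a) - 1, a)

	int main(void)
	{
		uintptr_t uaddr = 31, size = 1, end = uaddr + size;

		/* old code, no skipping: a fault on the first granule
		 * reports end - ALIGN_DOWN(uaddr) = 16, more than size */
		printf("old: %zu\n", (size_t)(end - ALIGN_DOWN(uaddr, GRANULE)));

		/* new code: the unaligned first probe reports exactly
		 * size (1) on fault, then the loop starts at
		 * ALIGN_UP(uaddr) = 32 == end, so it cannot over-report */
		printf("new: %zu, loop from %zu\n",
		       (size_t)size, (size_t)ALIGN_UP(uaddr, GRANULE));
		return 0;
	}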

-- 
Catalin


