[PATCH 6/6] kselftest/arm64: Check mte tagged user address in kernel

Amit Kachhap amit.kachhap at arm.com
Wed Sep 23 03:06:59 EDT 2020



On 9/22/20 4:11 PM, Catalin Marinas wrote:
> On Tue, Sep 01, 2020 at 02:57:19PM +0530, Amit Daniel Kachhap wrote:
>> Add a testcase to check that user addresses with valid/invalid
>> MTE tags work in kernel mode. This test verifies that the kernel
>> APIs __arch_copy_from_user/__arch_copy_to_user behave correctly
>> depending on whether the user pointer carries valid or invalid
>> allocation tags.
>>
>> In MTE sync mode a SIGSEGV fault is generated if user memory
>> with an invalid tag is accessed in the kernel. In async mode no
>> such fault occurs.
> 
> We don't generate a SIGSEGV for faults in the uaccess routines. The
> kernel simply returns fewer copied bytes than requested, or -1 with
> errno set.

Ok, I will update this in the next iteration.
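The corrected check could look something like the sketch below (illustrative only, not the actual selftest code; read_check() is a hypothetical helper name). The point is that errno is meaningful only when read() returns -1, and a tag check fault in the uaccess routines surfaces as a short copy or -1/EFAULT rather than a signal:

```c
#include <assert.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical helper: report failure on -1 or a short copy. */
static int read_check(int fd, void *buf, size_t len)
{
	ssize_t n = read(fd, buf, len);

	if (n == -1)
		return -1;	/* errno (e.g. EFAULT) is valid only here */
	if ((size_t)n < len)
		return -1;	/* short copy: uaccess stopped early */
	return 0;		/* full copy succeeded */
}
```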
> 
> BTW, QEMU has a bug and reports the wrong exception class (lower EL
> data abort) for a tag check fault taken in the uaccess routines,
> leading to a kernel panic ("bad mode in synchronous abort handler").

Yes, I am also seeing this.
> 
>> +static int check_usermem_access_fault(int mem_type, int mode, int mapping)
>> +{
>> +	int fd, ret, i, err;
>> +	char val = 'A';
>> +	size_t len, read_len;
>> +	void *ptr, *ptr_next;
>> +	bool fault;
>> +
>> +	len = 2 * page_sz;
>> +	err = KSFT_FAIL;
>> +	/*
>> +	 * Accessing user memory in kernel with invalid tag should fault in sync
>> +	 * mode but may not fault in async mode as per the implemented MTE
>> +	 * support in Arm64 kernel.
>> +	 */
>> +	if (mode == MTE_ASYNC_ERR)
>> +		fault = false;
>> +	else
>> +		fault = true;
>> +	mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG);
>> +	fd = create_temp_file();
>> +	if (fd == -1)
>> +		return KSFT_FAIL;
>> +	for (i = 0; i < len; i++)
>> +		write(fd, &val, sizeof(val));
>> +	lseek(fd, 0, 0);
>> +	ptr = mte_allocate_memory(len, mem_type, mapping, true);
>> +	if (check_allocated_memory(ptr, len, mem_type, true) != KSFT_PASS) {
>> +		close(fd);
>> +		return KSFT_FAIL;
>> +	}
>> +	mte_initialize_current_context(mode, (uintptr_t)ptr, len);
>> +	/* Copy from file into buffer with valid tag */
>> +	read_len = read(fd, ptr, len);
>> +	ret = errno;
> 
> My reading of the man page is that errno is set only if read() returns
> -1.

Yes, the checks here should be reworked to look at errno only when
read() returns -1.
> 
>> +	mte_wait_after_trig();
>> +	if ((cur_mte_cxt.fault_valid == true) || ret == EFAULT || read_len < len)
>> +		goto usermem_acc_err;
>> +	/* Verify same pattern is read */
>> +	for (i = 0; i < len; i++)
>> +		if (*(char *)(ptr + i) != val)
>> +			break;
>> +	if (i < len)
>> +		goto usermem_acc_err;
>> +
>> +	/* Tag the next half of memory with different value */
>> +	ptr_next = (void *)((unsigned long)ptr + page_sz);
>> +	ptr_next = mte_insert_tags(ptr_next, page_sz);
>> +	if (!ptr_next)
>> +		goto usermem_acc_err;
>> +	lseek(fd, 0, 0);
>> +	/* Copy from file into buffer with invalid tag */
>> +	read_len = read(fd, ptr, len);
>> +	ret = errno;
>> +	mte_wait_after_trig();
>> +	if ((fault == true) &&
> 
> Nitpick: just use "if (fault && ...)", it's a bool already.

ok.
> 
>> +	    (cur_mte_cxt.fault_valid == true || ret == EFAULT || read_len < len)) {
>> +		err = KSFT_PASS;
>> +	} else if ((fault == false) &&
>> +		   (cur_mte_cxt.fault_valid == false && read_len == len)) {
> 
> Same here, !fault, !cur_mte_cxt.fault_valid.

ok.
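With both nitpicks applied, the pass/fail logic could be sketched as below. classify() and its parameters are stand-ins for the testcase's variables (fault, cur_mte_cxt.fault_valid, ret, read_len, len), not the real selftest interface:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

/* Sketch of the condition with the redundant "== true" / "== false"
 * comparisons dropped, as suggested in the review. Returns 0 for the
 * expected outcome (KSFT_PASS), 1 otherwise (KSFT_FAIL). */
static int classify(bool fault, bool fault_valid, int err,
		    size_t read_len, size_t len)
{
	if (fault && (fault_valid || err == EFAULT || read_len < len))
		return 0;	/* sync mode: the expected fault was seen */
	if (!fault && !fault_valid && read_len == len)
		return 0;	/* async mode: full copy, no fault */
	return 1;		/* unexpected outcome */
}
```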
> 


