[Bug report] hash_name() may cross page boundary and trigger sleep in RCU context
Zizhi Wo
wozizhi at huaweicloud.com
Wed Nov 26 18:24:19 PST 2025
On 2025/11/27 2:55, Al Viro wrote:
> On Wed, Nov 26, 2025 at 05:05:05PM +0800, Zizhi Wo wrote:
>
>> under an RCU read-side critical section. In linux-mainline, arm/arm64
>> do_page_fault() still has this problem:
>>
>> lock_mm_and_find_vma->get_mmap_lock_carefully->mmap_read_lock_killable.
>
> arm64 shouldn't hit do_page_fault() in the first place, and
> do_translation_fault() there will see that address is beyond TASK_SIZE
> and go straight to do_bad_area() -> __do_kernel_fault() -> fixup_exception(),
> with no messing with mmap_lock.
>
> Can anybody confirm that problem exists on arm64 (ideally - with
> reproducer)?
>
Thank you all for the replies.
We reproduced the issue on arm, and after looking at the do_page_fault()
code I mistakenly assumed the same problem existed on arm64. However, I
have now confirmed with the test program that, as everyone pointed out,
the access goes through do_translation_fault() and reaches
do_bad_area() -> __do_kernel_fault(), without touching mmap_lock. So
the issue indeed does not exist on arm64; that was my oversight.
That said, I'd like to ask a follow-up question:
Why does x86 have special handling in do_kern_addr_fault(), including
logic for vmalloc faults? For example, on CONFIG_X86_32 it still takes
the vmalloc_fault path, and as the x86 comments note, "We can fault-in
kernel-space virtual memory on-demand". I don't see similar logic on
arm64; is there a specific reason for the difference? Perhaps on x86-32
the kernel entries in each process's page tables are synced lazily,
while arm64 kernel mappings live in a single shared set of page tables
and are therefore visible to every task as soon as they are created?
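If I understand correctly, the lazy scheme on x86-32 roughly amounts to
the following (pseudocode, my paraphrase of the idea, not the actual
fault.c code):

```
vmalloc_fault(addr):
    if addr is not in the vmalloc range:
        return -1                       /* not our case */
    entry = kernel mapping for addr in init_mm's master page tables
    if entry is not present:
        return -1                       /* genuine bad access */
    copy entry into the current task's page tables  /* lazy sync */
    return 0                            /* retry the faulting access */
```

So a vmalloc mapping is created once in the master tables and copied
into each process's tables only on first touch, which is why a fault on
a kernel address can be legitimate there.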
Thanks,
Zizhi Wo