[PATCH 3/5] um: Do a double clone to disable rseq

Tiwei Bie tiwei.btw at antgroup.com
Wed May 29 19:54:45 PDT 2024


On 5/28/24 10:13 PM, Tiwei Bie wrote:
> On 5/28/24 7:57 PM, Johannes Berg wrote:
>> On Tue, 2024-05-28 at 18:16 +0800, Tiwei Bie wrote:
>>> On 5/28/24 4:54 PM, benjamin at sipsolutions.net wrote:
>>>> From: Benjamin Berg <benjamin.berg at intel.com>
>>>>
>>>> Newer glibc versions are enabling rseq support by default. This remains
>>>> enabled in the cloned child process, potentially causing the host kernel
>>>> to write/read memory in the child.
>>>>
>>>> It appears that this has not been an issue purely because the memory
>>>> area used happened to be above TASK_SIZE and remains mapped.
>>>
>>> I also encountered this issue. In my case, with "Force a static link"
>>> (CONFIG_STATIC_LINK) enabled, UML will crash immediately every time
>>> it starts up. I worked around this by setting the glibc.pthread.rseq
>>> tunable via GLIBC_TUNABLES [1] before launching UML.
>>>
>>> So another easy way to work around this issue without introducing runtime
>>> overhead might be to add the GLIBC_TUNABLES=glibc.pthread.rseq=0 environment
>>> variable and exec /proc/self/exe in UML on startup.
>>>
>>
>> It's also a bit of a question what to rely on - this would introduce a
>> dependency on glibc behaviour, whereas doing the double-clone proposed
>> here will work purely because of host kernel behaviour, regardless of
>> what part of the system set up rseq, how the tunables work, etc.
> 
> Makes sense. My previous concern was primarily about the runtime overhead,
> but after taking a closer look at the patch, I realized that the double-clone
> won't happen on the critical path, so there shouldn't be any performance
> issues. I also think the double-clone proposal is better. :)

But when combined with this series [1], things might be different: the
double clone will happen for each new mm context, which is something we
might want to avoid.

[1] https://patchwork.ozlabs.org/project/linux-um/list/?series=408104

Regards,
Tiwei



