[PATCH] remove AND operation in choose_random_kstack_offset()
liuyuntao (F)
liuyuntao12 at huawei.com
Wed Jun 19 21:04:23 PDT 2024
On 2024/6/18 18:45, Mark Rutland wrote:
> Hi Arnd,
>
> On Mon, Jun 17, 2024 at 10:33:08PM +0200, Arnd Bergmann wrote:
>> On Mon, Jun 17, 2024, at 20:22, Kees Cook wrote:
>>> On Mon, Jun 17, 2024 at 04:52:15PM +0100, Mark Rutland wrote:
>>>> On Mon, Jun 17, 2024 at 01:37:21PM +0000, Yuntao Liu wrote:
>>>>> Since the offset would be bitwise ANDed with 0x3FF in
>>>>> add_random_kstack_offset(), just remove the AND operation here.
>>>>>
>>>>> Signed-off-by: Yuntao Liu <liuyuntao12 at huawei.com>
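
For readers following along, here is a simplified sketch of the generic
side being discussed, paraphrased from include/linux/randomize_kstack.h
(the static-branch gating is omitted and details may differ from the
exact tree under review):

/* Use-time cap: at most 10 bits (0x3FF) of offset are ever applied. */
#define KSTACK_OFFSET_MAX(x)	((x) & 0x3FF)

/* Syscall entry: consume the per-CPU offset and move the stack down by
 * up to KSTACK_OFFSET_MAX() bytes with an alloca() the compiler keeps. */
#define add_random_kstack_offset() do {				\
	u32 offset = raw_cpu_read(kstack_offset);		\
	u8 *ptr = __builtin_alloca(KSTACK_OFFSET_MAX(offset));	\
	asm volatile("" :: "r"(ptr) : "memory");		\
} while (0)

/* Syscall exit: fold fresh arch-provided randomness into the per-CPU value. */
#define choose_random_kstack_offset(rand) do {			\
	u32 offset = raw_cpu_read(kstack_offset);		\
	offset = ror32(offset, 5) ^ (rand);			\
	raw_cpu_write(kstack_offset, offset);			\
} while (0)

The commit message's point is that KSTACK_OFFSET_MAX() already bounds
whatever value the architecture feeds in.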
>>>>
>>>> The comments in arm64 and x86 say that they're deliberately capping the
>>>> offset at fewer bits than the result of KSTACK_OFFSET_MAX() masking the
>>>> value with 0x3FF.
>>>>
>>>> Maybe it's ok to expand that, but if that's the case the commit message
>>>> needs to explain why it's safe to add extra bits (2 on arm64, 3 on s390 and
>>>> x86), and those comments need to be updated accordingly.
>>>>
>>>> As-is, I do not think this patch is ok.
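
And a hedged sketch of the architecture side Mark is referring to: each
arch's syscall path passes an already-masked random value into
choose_random_kstack_offset(). The names and the mask constant below are
hypothetical stand-ins (0x1FF matches Mark's 511-byte arm64 figure); see
the actual entry code for the real values and comments:

/* Hypothetical arch call site, modelled on the arm64/x86 syscall paths
 * discussed in this thread. */
#define ARCH_KSTACK_RAND_MASK	0x1FF	/* deliberately < 0x3FF */

static inline void arch_choose_kstack_offset(void)
{
	/*
	 * The per-arch comments Mark mentions explain why the entropy
	 * fed in here is capped below what KSTACK_OFFSET_MAX() allows;
	 * the patch under discussion drops this extra AND.
	 */
	choose_random_kstack_offset(get_random_u16() & ARCH_KSTACK_RAND_MASK);
}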
>>>
>>> Yeah, I agree: the truncation is intentional and tuned to the
>>> architecture.
>>
>> It may be intentional, but it's clearly nonsense: there is nothing
>> inherent to the architecture that means we can go only 256
>> bytes instead of 512 bytes into the 16KB available stack space.
>>
>> As far as I can tell, any code just gets bloated to the point
>> where it fills up the available memory, regardless of how
>> much you give it. I'm sure one can find code paths today that
>> exceed the 16KB, so there is no point pretending that 15.75KB
>> is somehow safe to use while 15.00KB is not.
>>
>> I'm definitely in favor of making this less architecture
>> specific, we just need to pick a good value, and we may well
>> end up deciding to use less than the default 1KB. We can also
>> go the opposite way and make the limit 4KB but then increase
>> the default stack size to 20KB for kernels that enable
>> randomization.
>
> Sorry, to be clear, I'm happy for this to change, so long as:
>
> * The commit message explains why that's safe.
>
> IIUC this goes from 511 to 1023 bytes on arm64, which is ~3% of the
> stack, so maybe that is ok. It'd be nice to see any rationale/analysis
> beyond "the offset would be bitwise ANDed with 0x3FF".
>
> * The comments in architecture code referring to the masking get
> removed/updated along with the masking.
>
> My complaint was that the patch didn't do those things.
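
For perspective, Mark's figures against the 16KB stack Arnd mentions
earlier in the thread (plain arithmetic; either reading of the "~3%"
lands in the same ballpark):

   511 / 16384  ~= 3.1%   (current maximum offset on arm64, per Mark's reading)
  1023 / 16384  ~= 6.2%   (maximum offset with the AND removed)
  1023 - 511 = 512 bytes  ~= 3.1% of the stack (the increase)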
>
Sorry that I didn't adjust the comments in the architecture code
referring to the masking.
I tested the stack entropy with this patch applied on arm64:
before:
Bits of stack entropy: 6
after:
Bits of stack entropy: 7
The difference seemed minimal, so I didn't reflect it in the commit
message. It now appears that I missed some of Kees's intentions.
Kees has resent the patch, and everything should be fine now.
Thanks!
Yuntao
> Mark.