[RFC v2][PATCH 04/11] x86: Implement __arch_rare_write_begin/unmap()

Kees Cook keescook at chromium.org
Wed Apr 5 17:14:09 PDT 2017


On Wed, Apr 5, 2017 at 4:57 PM, Andy Lutomirski <luto at kernel.org> wrote:
> On Wed, Mar 29, 2017 at 6:41 PM, Kees Cook <keescook at chromium.org> wrote:
>> On Wed, Mar 29, 2017 at 3:38 PM, Andy Lutomirski <luto at amacapital.net> wrote:
>>> On Wed, Mar 29, 2017 at 11:15 AM, Kees Cook <keescook at chromium.org> wrote:
>>>> Based on PaX's x86 pax_{open,close}_kernel() implementation, this
>>>> allows HAVE_ARCH_RARE_WRITE to work on x86.
>>>>
>>>
>>>> +
>>>> +static __always_inline unsigned long __arch_rare_write_begin(void)
>>>> +{
>>>> +       unsigned long cr0;
>>>> +
>>>> +       preempt_disable();
>>>
>>> This looks wrong.  DEBUG_LOCKS_WARN_ON(!irqs_disabled()) would work,
>>> as would local_irq_disable().  There's no way that just disabling
>>> preemption is enough.
>>>
>>> (Also, how does this interact with perf nmis?)
>>
>> Do you mean preempt_disable() isn't strong enough here? I'm open to
>> suggestions. The goal would be to make sure nothing between _begin and
>> _end would get executed without interruption...
>>
>
> Sorry for the very slow response.
>
> preempt_disable() isn't strong enough to prevent interrupts, and an
> interrupt here would run with WP off, causing unknown havoc.  I tend
> to think that the caller should be responsible for turning off
> interrupts.

So, something like:

Top-level functions:

static __always_inline void rare_write_begin(void)
{
    preempt_disable();
    local_irq_disable();
    barrier();
    __arch_rare_write_begin();
    barrier();
}

static __always_inline void rare_write_end(void)
{
    barrier();
    __arch_rare_write_end();
    barrier();
    local_irq_enable();
    preempt_enable_no_resched();
}

x86-specific helpers:

static __always_inline unsigned long __arch_rare_write_begin(void)
{
    unsigned long cr0;

    cr0 = read_cr0() ^ X86_CR0_WP;
    BUG_ON(cr0 & X86_CR0_WP);
    write_cr0(cr0);
    return cr0 ^ X86_CR0_WP;
}

static __always_inline unsigned long __arch_rare_write_end(void)
{
    unsigned long cr0;

    cr0 = read_cr0() ^ X86_CR0_WP;
    BUG_ON(!(cr0 & X86_CR0_WP));
    write_cr0(cr0);
    return cr0 ^ X86_CR0_WP;
}

I can give it a spin...

-Kees

-- 
Kees Cook
Pixel Security


