[PATCH v2 16/17] coresight: trbe: Work around write to out of range

Anshuman Khandual anshuman.khandual at arm.com
Thu Sep 30 21:56:50 PDT 2021



On 9/28/21 4:02 PM, Suzuki K Poulose wrote:
> On 23/09/2021 04:15, Anshuman Khandual wrote:
>>
>>
>> On 9/21/21 7:11 PM, Suzuki K Poulose wrote:
>>> TRBE implementations affected by Arm erratum 2253138 or 2224489 could
>>> write to the next address after TRBLIMITR.LIMIT, instead of wrapping
>>> to TRBBASER. This implies that the TRBE could potentially corrupt:
>>>
>>>    - A page used by the rest of the kernel/user (if the LIMIT = end of
>>>      the perf ring buffer)
>>>    - A page within the ring buffer, but outside the driver's range
>>>      [head, head + size]. This may contain trace data yet to be
>>>      consumed by userspace.
>>>
>>> We work around this erratum by:
>>>    - Making sure that there is at least one extra PAGE of space in the
>>>      TRBE's range beyond what we normally assign. This is in addition to
>>>      other restrictions (e.g., the TRBE alignment for working around
>>>      TRBE_WORKAROUND_OVERWRITE_FILL_MODE, which requires a minimum of
>>>      PAGE_SIZE. Thus we would have 2 * PAGE_SIZE)
>>>
>>>    - Adjusting the LIMIT to leave the last PAGE_SIZE out of the TRBE's
>>>      allowed range (i.e., TRBBASER...TRBLIMITR.LIMIT), by:
>>>
>>>          TRBLIMITR.LIMIT -= PAGE_SIZE
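To make the second bullet concrete, here is a minimal standalone sketch of
the LIMIT adjustment. This is illustrative only; PAGE_SIZE and
trbe_may_write_out_of_range() below are stand-ins, not the driver's code.

#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE	4096UL

/* Stand-in for the driver's per-CPU erratum check. */
static bool trbe_may_write_out_of_range(void)
{
	return true;	/* assume an affected TRBE, for illustration */
}

/*
 * Keep the last page of the assigned range out of the programmed LIMIT,
 * so that a stray write past TRBLIMITR.LIMIT still lands in memory that
 * the driver owns.
 */
static uint64_t trbe_adjust_limit(uint64_t limit)
{
	if (trbe_may_write_out_of_range())
		limit -= PAGE_SIZE;	/* TRBLIMITR.LIMIT -= PAGE_SIZE */
	return limit;
}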
>>>
>>> Cc: Anshuman Khandual <anshuman.khandual at arm.com>
>>> Cc: Mathieu Poirier <mathieu.poirier at linaro.org>
>>> Cc: Mike Leach <mike.leach at linaro.org>
>>> Cc: Leo Yan <leo.yan at linaro.org>
>>> Signed-off-by: Suzuki K Poulose <suzuki.poulose at arm.com>
>>> ---
>>>   drivers/hwtracing/coresight/coresight-trbe.c | 59 +++++++++++++++++++-
>>>   1 file changed, 57 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/drivers/hwtracing/coresight/coresight-trbe.c b/drivers/hwtracing/coresight/coresight-trbe.c
>>> index 02f9e00e2091..ea907345354c 100644
>>> --- a/drivers/hwtracing/coresight/coresight-trbe.c
>>> +++ b/drivers/hwtracing/coresight/coresight-trbe.c
>>> @@ -86,7 +86,8 @@ struct trbe_buf {
>>>    * affects the given instance of the TRBE.
>>>    */
>>>   #define TRBE_WORKAROUND_OVERWRITE_FILL_MODE    0
>>> -#define TRBE_ERRATA_MAX                1
>>> +#define TRBE_WORKAROUND_WRITE_OUT_OF_RANGE    1
>>> +#define TRBE_ERRATA_MAX                2
>>>     /*
>>>    * Safe limit for the number of bytes that may be overwritten
>>> @@ -96,6 +97,7 @@ struct trbe_buf {
>>>     static unsigned long trbe_errata_cpucaps[TRBE_ERRATA_MAX] = {
>>>       [TRBE_WORKAROUND_OVERWRITE_FILL_MODE] = ARM64_WORKAROUND_TRBE_OVERWRITE_FILL_MODE,
>>> +    [TRBE_WORKAROUND_WRITE_OUT_OF_RANGE] = ARM64_WORKAROUND_TRBE_WRITE_OUT_OF_RANGE,
>>>   };
>>>     /*
>>> @@ -279,7 +281,20 @@ trbe_handle_to_cpudata(struct perf_output_handle *handle)
>>>     static u64 trbe_min_trace_buf_size(struct perf_output_handle *handle)
>>>   {
>>> -    return TRBE_TRACE_MIN_BUF_SIZE;
>>> +    u64 size = TRBE_TRACE_MIN_BUF_SIZE;
>>> +    struct trbe_cpudata *cpudata = trbe_handle_to_cpudata(handle);
>>> +
>>> +    /*
>>> +     * When the TRBE is affected by an erratum that could make it
>>> +     * write to the next "virtually addressed" page beyond the LIMIT.
>>
>> What if the next "virtually addressed" page is just blocked from future
>> use in the kernel and never actually gets mapped to a physical page?
> 
> That is the case today for vmap(): the end of the vm_area has a guard
> page. But that implies that when the erratum is triggered, the TRBE
> encounters a fault, and we would need to handle that in the driver. This
> works for the "end" of the ring buffer, but not when the LIMIT is in the
> middle of the ring buffer.
> 
>> In that case it would be guaranteed that a next "virtually addressed"
>> page does not even exist after the LIMIT pointer, and hence the erratum
>> would not be triggered. Something like a virtual mapping cliff right
>> after the LIMIT pointer, from the MMU's perspective.
>>
>> Although it might be a bit tricky. Currently the entire ring buffer gets
>> mapped at once with vmap() in arm_trbe_alloc_buffer(). To achieve the
>> above solution, each computation of the LIMIT pointer would need to be
>> followed by a temporary unmapping of the next virtual page from the
>> existing vmap() buffer. It could subsequently be mapped back, as
>> trbe_buf->pages always contains all the physical pages from the perf
>> ring buffer.
> 
> It is much easier to leave a page aside than to do this map/unmap
> dance, which might even change the VA you get and thus would
> complicate the TRBE driver in general. I believe this is much
> simpler and lets us reason about the code better. And all faults
> are still illegal for the driver, which helps us detect any
> other issues in the TRBE.

Agreed, as I mentioned earlier, this would have been a bit complicated
anyway. Keeping the virtual address of the entire buffer unchanged and
treating each fault inside the driver as illegal makes the current
implementation much simpler and easier to reason about. So discarding
those properties would probably not be a good idea after all.
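
Just to spell out how the size requirements could stack up when both
workarounds apply (as the 2 * PAGE_SIZE note in the commit message
describes), a standalone sketch with illustrative names and values, not
the driver's actual trbe_min_trace_buf_size():

#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE		4096UL
#define TRBE_TRACE_MIN_BUF_SIZE	64UL	/* illustrative value only */

static uint64_t min_trace_buf_size(bool overwrite_fill_mode,
				   bool write_out_of_range)
{
	uint64_t size = TRBE_TRACE_MIN_BUF_SIZE;

	/* Spare page so the LIMIT can later be pulled back by PAGE_SIZE. */
	if (write_out_of_range)
		size += PAGE_SIZE;
	/* Extra page for the overwrite-in-FILL-mode alignment workaround. */
	if (overwrite_fill_mode)
		size += PAGE_SIZE;
	return size;
}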
