[PATCH -fixes] riscv: Implement flush_cache_vmap()
Conor Dooley
conor.dooley at microchip.com
Thu Aug 3 02:24:48 PDT 2023
On Thu, Aug 03, 2023 at 05:14:15PM +0800, dylan wrote:
> On Sun, Jul 30, 2023 at 01:08:17AM -0400, Guo Ren wrote:
> > On Tue, Jul 25, 2023 at 9:22 AM Alexandre Ghiti <alexghiti at rivosinc.com> wrote:
> > >
> > > The RISC-V kernel needs a sfence.vma after a page table modification: we
> > > used to rely on the vmalloc fault handling to emit an sfence.vma, but
> > > commit 7d3332be011e ("riscv: mm: Pre-allocate PGD entries for
> > > vmalloc/modules area") got rid of this path for 64-bit kernels, so now we
> > > need to explicitly emit a sfence.vma in flush_cache_vmap().
> > >
> > > Note that we don't need to implement flush_cache_vunmap() as the generic
> > > code should emit a TLB flush after unmapping a vmalloc region.
> > >
> > > Fixes: 7d3332be011e ("riscv: mm: Pre-allocate PGD entries for vmalloc/modules area")
> > > Signed-off-by: Alexandre Ghiti <alexghiti at rivosinc.com>
> > > ---
> > > arch/riscv/include/asm/cacheflush.h | 4 ++++
> > > 1 file changed, 4 insertions(+)
> > >
> > > diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h
> > > index 8091b8bf4883..b93ffddf8a61 100644
> > > --- a/arch/riscv/include/asm/cacheflush.h
> > > +++ b/arch/riscv/include/asm/cacheflush.h
> > > @@ -37,6 +37,10 @@ static inline void flush_dcache_page(struct page *page)
> > > #define flush_icache_user_page(vma, pg, addr, len) \
> > > flush_icache_mm(vma->vm_mm, 0)
> > >
> > > +#ifdef CONFIG_64BIT
> > > +#define flush_cache_vmap(start, end) flush_tlb_kernel_range(start, end)
> > Sorry, I can't agree with the above on a PIPT cache machine. It's
> > not worth it.
> >
> > It would reduce the performance of the vmap_pages_range,
> > ioremap_page_range ... APIs, which may cause performance issues for
> > drivers that map and unmap memory frequently.
> >
>
> Hi All,
>
> I think functional correctness should take priority over system performance
> in this case. The "preventive" SFENCE.VMA is necessary because the RISC-V
> specification allows invalid entries to be cached in the TLB.
We are at -rc4 and this stuff is broken. Taking the bigger hammer, which
can be reverted later when a more targeted fix shows up, to make sure
that v6.5 doesn't end up broken, sounds rather prudent. Otherwise, the
original commit should probably be reverted.
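For reference, the hammer ultimately boils down to sfence.vma anyway: IIRC the
local primitive in asm/tlbflush.h is simply

static inline void local_flush_tlb_all(void)
{
	/* flush every cached translation on the local hart */
	__asm__ __volatile__ ("sfence.vma" : : : "memory");
}

and flush_tlb_kernel_range() currently falls back to a full flush_tlb_all()
(plus the SBI/IPI round-trips to other harts on SMP) rather than a ranged
flush, so the cost being discussed is that full flush on every vmap/ioremap,
not anything cache-related. Take the snippet above as a from-memory sketch,
not a quote of the exact code.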
> The problem[1] we are currently encountering is caused by not flushing the TLB
> after new page-table entries are created, and it can only be solved by flushing
> the TLB immediately after those entries are created.
>
> There are currently two possible approaches to flushing the TLB:
> 1. Flush TLB in flush_cache_vmap()
> 2. Flush TLB in arch_sync_kernel_mappings()
>
> But I'm not quite sure it's a good idea to operate on the TLB inside flush_cache_vmap().
> The name of this function suggests it should be about cache operations; maybe
> it would be more appropriate to do the TLB flush in arch_sync_kernel_mappings()?
>
> [1]: http://lists.infradead.org/pipermail/linux-riscv/2023-August/037503.html
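For the record, a rough, untested sketch of what option 2 could look like
(function placement and the exact mask are my guesses, so treat this as an
illustration rather than a proposal) would be to opt in to the generic
page-table sync hook:

/* e.g. arch/riscv/include/asm/pgtable.h (hypothetical placement) */
#define ARCH_PAGE_TABLE_SYNC_MASK \
	(PGTBL_PTE_MODIFIED | PGTBL_PMD_MODIFIED | PGTBL_PUD_MODIFIED | \
	 PGTBL_P4D_MODIFIED | PGTBL_PGD_MODIFIED)

/* e.g. arch/riscv/mm/tlbflush.c (hypothetical placement) */
void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
{
	/*
	 * The generic vmalloc/ioremap code calls this after it has modified
	 * kernel page tables in [start, end), so emit the sfence.vma here.
	 */
	flush_tlb_kernel_range(start, end);
}

This only fires when the generic code records a modification in the
PGTBL_*_MODIFIED mask, so it would need checking that every affected mapping
path actually reports PTE-level changes; either way the amount of sfence.vma
traffic ends up much the same as with the flush_cache_vmap() approach.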