How to deal with openSBI reserved regions?
Alexandre Ghiti
alex at ghiti.fr
Fri Jul 28 03:36:06 PDT 2023
Hi Petr,
On 28/07/2023 12:02, Petr Tesarik wrote:
> Hi all,
>
> I have recently looked into enabling crash kernel and kdump on riscv64.
> I can start a new kernel after crash, but I ran into an issue when
> reading /proc/vmcore there.
>
> I am testing this inside a QEMU VM and I boot my system in S-Mode using
> U-Boot with embedded OpenSBI firmware. The problem here is that OpenSBI
> occupies the first 128 pages of RAM, but they are shown as "System RAM"
> in /proc/iomem. The kexec_file_load(2) system call uses
> walk_system_ram_res() to build a memory map of the currently running
> kernel. This is then passed to the crash kernel through ELF core headers
> as a LOAD segment. When reading the corresponding part of /proc/iomem,
> the crash kernel tries to map these pages, but they can be accessed only
> from M-Mode. Any attempt to access them from the Linux kernel fails with
> a PMP violation. Consequently, -EFAULT is returned to user space.
>
> Now, the OpenSBI area is represented in the Device Tree with a
> reserved-memory node which overlaps a memory node. Technically, the
> firmware region is indeed RAM, but how should it be excluded from
> /proc/iomem?
>
> Should this be fixed in the kernel?
>
> Or, is the provided DTB incorrect?
> Should the memory node exclude the firmware area?
Which version of OpenSBI are you using? We fixed something similar in
1.3 by reintroducing the "no-map" property on the regions occupied by
OpenSBI. I'd say that should be enough; let me know if it's not!
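For context, "no-map" is a standard property on child nodes of /reserved-memory: it tells the kernel not to create a linear mapping for the region, so it is kept out of normal RAM accounting. A minimal sketch of the kind of node the fixed firmware generates (node name, addresses, and size here are illustrative, not the exact values OpenSBI emits):

```dts
/ {
	reserved-memory {
		#address-cells = <2>;
		#size-cells = <2>;
		ranges;

		/* Illustrative example: region occupied by M-mode firmware.
		 * The real address/size come from the PMP-protected range
		 * that OpenSBI reserves for itself at boot. */
		mmode_resv0@80000000 {
			reg = <0x0 0x80000000 0x0 0x80000>;
			no-map;
		};
	};
};
```

With no-map present, the region should no longer appear as mappable "System RAM", so kexec_file_load() will not place it in the ELF core headers and the crash kernel will not try to read it through /proc/vmcore.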
Thanks,
Alex
>
> Petr T
>
>
> _______________________________________________
> linux-riscv mailing list
> linux-riscv at lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-riscv