[PATCH v3 7/9] RISC-V: KVM: Use the new gpa range validate helper function
Anup Patel
anup at brainfault.org
Thu Jul 17 21:40:36 PDT 2025
On Fri, May 23, 2025 at 12:33 AM Atish Patra <atishp at rivosinc.com> wrote:
>
> Remove the duplicate code and use the new helper function to validate
> the shared memory gpa address.
>
> Signed-off-by: Atish Patra <atishp at rivosinc.com>
LGTM.
Reviewed-by: Anup Patel <anup at brainfault.org>
Regards,
Anup
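
(For reference, the helper itself is added earlier in this series and is not
part of this patch. Based only on how it is called below, i.e.
kvm_vcpu_validate_gpa_range(vcpu, gpa, len, write) returning nonzero on
failure, a minimal sketch of such a range check could look like the
following. The loop body, return code, and the _sketch name are assumptions
for illustration, not the actual implementation.)

	/*
	 * Hypothetical sketch only: the real kvm_vcpu_validate_gpa_range()
	 * lives in an earlier patch of this series.  Assumes the helper
	 * walks the range one guest page at a time and checks that each
	 * page has a valid (and, when requested, writable) HVA mapping.
	 */
	static int kvm_vcpu_validate_gpa_range_sketch(struct kvm_vcpu *vcpu,
						      gpa_t gpa,
						      unsigned long len,
						      bool write_access)
	{
		gfn_t gfn = gpa >> PAGE_SHIFT;
		gfn_t end_gfn = (gpa + len - 1) >> PAGE_SHIFT;
		unsigned long hva;
		bool writable;

		for (; gfn <= end_gfn; gfn++) {
			hva = kvm_vcpu_gfn_to_hva_prot(vcpu, gfn, &writable);
			if (kvm_is_error_hva(hva) ||
			    (write_access && !writable))
				return -EINVAL;
		}

		return 0;
	}
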
> ---
> arch/riscv/kvm/vcpu_pmu.c | 5 +----
> arch/riscv/kvm/vcpu_sbi_sta.c | 6 ++----
> 2 files changed, 3 insertions(+), 8 deletions(-)
>
> diff --git a/arch/riscv/kvm/vcpu_pmu.c b/arch/riscv/kvm/vcpu_pmu.c
> index 15d71a7b75ba..163bd4403fd0 100644
> --- a/arch/riscv/kvm/vcpu_pmu.c
> +++ b/arch/riscv/kvm/vcpu_pmu.c
> @@ -409,8 +409,6 @@ int kvm_riscv_vcpu_pmu_snapshot_set_shmem(struct kvm_vcpu *vcpu, unsigned long s
> int snapshot_area_size = sizeof(struct riscv_pmu_snapshot_data);
> int sbiret = 0;
> gpa_t saddr;
> - unsigned long hva;
> - bool writable;
>
> if (!kvpmu || flags) {
> sbiret = SBI_ERR_INVALID_PARAM;
> @@ -432,8 +430,7 @@ int kvm_riscv_vcpu_pmu_snapshot_set_shmem(struct kvm_vcpu *vcpu, unsigned long s
> goto out;
> }
>
> - hva = kvm_vcpu_gfn_to_hva_prot(vcpu, saddr >> PAGE_SHIFT, &writable);
> - if (kvm_is_error_hva(hva) || !writable) {
> + if (kvm_vcpu_validate_gpa_range(vcpu, saddr, PAGE_SIZE, true)) {
> sbiret = SBI_ERR_INVALID_ADDRESS;
> goto out;
> }
> diff --git a/arch/riscv/kvm/vcpu_sbi_sta.c b/arch/riscv/kvm/vcpu_sbi_sta.c
> index 5f35427114c1..67dfb613df6a 100644
> --- a/arch/riscv/kvm/vcpu_sbi_sta.c
> +++ b/arch/riscv/kvm/vcpu_sbi_sta.c
> @@ -85,8 +85,6 @@ static int kvm_sbi_sta_steal_time_set_shmem(struct kvm_vcpu *vcpu)
> unsigned long shmem_phys_hi = cp->a1;
> u32 flags = cp->a2;
> struct sbi_sta_struct zero_sta = {0};
> - unsigned long hva;
> - bool writable;
> gpa_t shmem;
> int ret;
>
> @@ -111,8 +109,8 @@ static int kvm_sbi_sta_steal_time_set_shmem(struct kvm_vcpu *vcpu)
> return SBI_ERR_INVALID_ADDRESS;
> }
>
> - hva = kvm_vcpu_gfn_to_hva_prot(vcpu, shmem >> PAGE_SHIFT, &writable);
> - if (kvm_is_error_hva(hva) || !writable)
> + /* The spec requires the shmem to be 64-byte aligned. */
> + if (kvm_vcpu_validate_gpa_range(vcpu, shmem, 64, true))
> return SBI_ERR_INVALID_ADDRESS;
>
> ret = kvm_vcpu_write_guest(vcpu, shmem, &zero_sta, sizeof(zero_sta));
>
> --
> 2.43.0
>