[PATCH v3 02/21] KVM: arm64: Add stand-alone page-table walker infrastructure
Alexandru Elisei
alexandru.elisei at arm.com
Fri Aug 28 11:43:20 EDT 2020
Hi,
I've had another good look at the code, and now I can answer some of my own
questions. Sorry for the noise!
On 8/27/20 5:27 PM, Alexandru Elisei wrote:
> [..]
>> +
>> +	if (!table) {
>> +		data->addr += kvm_granule_size(level);
>> +		goto out;
>> +	}
>> +
>> +	childp = kvm_pte_follow(pte);
>> +	ret = __kvm_pgtable_walk(data, childp, level + 1);
>> +	if (ret)
>> +		goto out;
>> +
>> +	if (flags & KVM_PGTABLE_WALK_TABLE_POST) {
> We check that ptep is a valid table when we test the KVM_PGTABLE_WALK_TABLE_PRE
> flag, why aren't we doing that here?
That's because, when the entry is not a table (i.e. the leaf visitor didn't turn
the leaf entry into a table), the function has already jumped to out: the !table
branch above skips the KVM_PGTABLE_WALK_TABLE_POST check entirely, so ptep is
known to be a table at that point.
>
>> +		ret = kvm_pgtable_visitor_cb(data, addr, level, ptep,
>> +					     KVM_PGTABLE_WALK_TABLE_POST);
>> +	}
>> +
>> +out:
>> +	return ret;
>> +}
>> +
>> [..]
>> +}
>> +
>> +static int _kvm_pgtable_walk(struct kvm_pgtable_walk_data *data)
>> +{
>> +	u32 idx;
>> +	int ret = 0;
>> +	struct kvm_pgtable *pgt = data->pgt;
>> +	u64 limit = BIT(pgt->ia_bits);
>> +
>> +	if (data->addr > limit || data->end > limit)
>> +		return -ERANGE;
>> +
>> +	if (!pgt->pgd)
>> +		return -EINVAL;
>> +
>> +	for (idx = kvm_pgd_page_idx(data); data->addr < data->end; ++idx) {
>> +		kvm_pte_t *ptep = &pgt->pgd[idx * PTRS_PER_PTE];
> I'm sorry, but I just don't understand this part:
>
> - Why do we skip over PTRS_PER_PTE instead of visiting each idx?
>
> - Why do we use PTRS_PER_PTE instead of PTRS_PER_PGD?
>
> Would you mind explaining what the loop is doing?
>
> I also don't see anywhere in the page table walking code where we take into
> account that we can have concatenated tables at level 1 or 2, which means we have
> more entries than PTRS_PER_P{U,M}D.
I think I understand the code better now: __kvm_pgtable_walk() visits all the
entries in the range ptep[0..PTRS_PER_PTE-1], i.e. one whole page-sized table
per call, which is why each iteration advances the index into the pgd by
PTRS_PER_PTE entries.
>
>> +
>> +		ret = __kvm_pgtable_walk(data, ptep, pgt->start_level);
>> +		if (ret)
>> +			break;
>> +	}
>> +
>> +	return ret;
>> +}
>> +
>> +int kvm_pgtable_walk(struct kvm_pgtable *pgt, u64 addr, u64 size,
>> +		     struct kvm_pgtable_walker *walker)
>> +{
>> +	struct kvm_pgtable_walk_data walk_data = {
>> +		.pgt	= pgt,
>> +		.addr	= ALIGN_DOWN(addr, PAGE_SIZE),
>> +		.end	= PAGE_ALIGN(walk_data.addr + size),
> [..]
>
> What happens if addr < PAGE_SIZE - 1? It looks to me that according to the
> definition of ALIGN_DOWN, addr will wrap around.
My mistake again: ALIGN_DOWN(x, a) subtracts a - 1, but __ALIGN_KERNEL then adds
a - 1 back before applying the mask, so the subtraction and addition cancel out
and the result is what we expect (no wrapping around).
Thanks,
Alex