[PATCH v4 17/21] KVM: arm64: Convert user_mem_abort() to generic page-table API

Alexandru Elisei <alexandru.elisei@arm.com>
Thu Sep 10 09:10:07 EDT 2020


Hi,

On 9/10/20 11:51 AM, Will Deacon wrote:
> On Wed, Sep 09, 2020 at 06:12:29PM +0100, Marc Zyngier wrote:
>> On 2020-09-09 15:20, Alexandru Elisei wrote:
>>> On 9/7/20 4:23 PM, Will Deacon wrote:
>>>> @@ -1610,62 +1605,31 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>>>  	if (vma_pagesize == PAGE_SIZE && !force_pte)
>>>>  		vma_pagesize = transparent_hugepage_adjust(memslot, hva,
>>>>  							   &pfn, &fault_ipa);
>>>> -	if (writable)
>>>> +	if (writable) {
>>>> +		prot |= KVM_PGTABLE_PROT_W;
>>>>  		kvm_set_pfn_dirty(pfn);
>>>> +		mark_page_dirty(kvm, gfn);
>>> The previous code called mark_page_dirty() only if vma_pagesize ==
>>> PAGE_SIZE (and writable was true, obviously). Is this supposed to
>>> fix a bug?
>> No, this is actually introducing one. mark_page_dirty() checks that
>> there is an associated bitmap, and thus only happens when writing to
>> a single page, but we shouldn't do it for R/O memslots, which the
>> current code avoids. It should be guarded by logging_active.
> gfn_to_pfn_prot() will set "writable" to false for R/O memslots, so I think
> we're good here.

That also looks correct to me: gfn_to_pfn_prot() -> __gfn_to_pfn_memslot()
sets *writable to false if the memslot is read-only.

Marc is also right: mark_page_dirty() first checks whether there's a dirty
bitmap associated with the memslot, and memslot->dirty_bitmap gets allocated
only if KVM_SET_USER_MEMORY_REGION was called with the
KVM_MEM_LOG_DIRTY_PAGES flag set (should've checked for that before asking).

Thanks,
Alex


