[PATCH v2 07/10] PPC, KVM, CMA: use general CMA reserved area management framework
Joonsoo Kim
iamjoonsoo.kim at lge.com
Sun Jun 15 22:34:08 PDT 2014
On Sat, Jun 14, 2014 at 02:23:59PM +0530, Aneesh Kumar K.V wrote:
> Joonsoo Kim <iamjoonsoo.kim at lge.com> writes:
>
> > Now, we have general CMA reserved area management framework,
> > so use it for future maintainability. There is no functional change.
> >
> > Acked-by: Michal Nazarewicz <mina86 at mina86.com>
> > Acked-by: Paolo Bonzini <pbonzini at redhat.com>
> > Signed-off-by: Joonsoo Kim <iamjoonsoo.kim at lge.com>
>
> Need this. We may want to keep the VM_BUG_ON by moving
> KVM_CMA_CHUNK_ORDER around.
>
> diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
> index 8056107..1932e0e 100644
> --- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
> +++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
> @@ -37,8 +37,6 @@
> #include <asm/ppc-opcode.h>
> #include <asm/cputable.h>
>
> -#include "book3s_hv_cma.h"
> -
> /* POWER7 has 10-bit LPIDs, PPC970 has 6-bit LPIDs */
> #define MAX_LPID_970 63
>
> @@ -64,7 +62,6 @@ long kvmppc_alloc_hpt(struct kvm *kvm, u32 *htab_orderp)
> }
>
> kvm->arch.hpt_cma_alloc = 0;
> - VM_BUG_ON(order < KVM_CMA_CHUNK_ORDER);
> page = kvm_alloc_hpt(1 << (order - PAGE_SHIFT));
> if (page) {
> hpt = (unsigned long)pfn_to_kaddr(page_to_pfn(page));
>
>
>
> -aneesh
Okay.
So do you also want this?
@@ -131,16 +135,18 @@ struct page *kvm_alloc_hpt(unsigned long nr_pages)
 {
 	unsigned long align_pages = HPT_ALIGN_PAGES;

+	VM_BUG_ON(get_order(nr_pages) < KVM_CMA_CHUNK_ORDER - PAGE_SHIFT);
+
 	/* Old CPUs require HPT aligned on a multiple of its size */
 	if (!cpu_has_feature(CPU_FTR_ARCH_206))
 		align_pages = nr_pages;

-	return kvm_alloc_cma(nr_pages, align_pages);
+	return cma_alloc(kvm_cma, nr_pages, get_order(align_pages));
 }
Thanks.