[arm-platforms:kvm-arm64/nv-5.16 38/71] arch/arm64/kvm/mmu.c:178: warning: expecting prototype for kvm_unmap_stage2_range(). Prototype was for __unmap_stage2_range() instead

kernel test robot <lkp@intel.com>
Sun Nov 28 09:25:19 PST 2021


tree:   https://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git kvm-arm64/nv-5.16
head:   6162310e2419353608acd8f247bde3fc848d4f64
commit: cd6a3361a2f89c063eb12cead9808136d54cd18f [38/71] KVM: arm64: nv: Support multiple nested Stage-2 mmu structures
config: arm64-allyesconfig (https://download.01.org/0day-ci/archive/20211129/202111290109.52igH1P0-lkp@intel.com/config)
compiler: aarch64-linux-gcc (GCC) 11.2.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git/commit/?id=cd6a3361a2f89c063eb12cead9808136d54cd18f
        git remote add arm-platforms https://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git
        git fetch --no-tags arm-platforms kvm-arm64/nv-5.16
        git checkout cd6a3361a2f89c063eb12cead9808136d54cd18f
        # save the config file to the linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross O=build_dir ARCH=arm64 SHELL=/bin/bash arch/arm64/kvm/

If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

>> arch/arm64/kvm/mmu.c:178: warning: expecting prototype for kvm_unmap_stage2_range(). Prototype was for __unmap_stage2_range() instead
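
As a shortcut, this kind of kernel-doc warning can usually be reproduced without a full W=1 build by running the kernel-doc checker directly on the file (its -none output mode prints warnings only). This is a suggested extra step, not part of the canonical reproduce instructions above:

        ./scripts/kernel-doc -none arch/arm64/kvm/mmu.c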


vim +178 arch/arm64/kvm/mmu.c

378e6a9c78a02b arch/arm64/kvm/mmu.c Yanan Wang       2021-06-17  139  
363ef89f8e9bce arch/arm/kvm/mmu.c   Marc Zyngier     2014-12-19  140  /*
363ef89f8e9bce arch/arm/kvm/mmu.c   Marc Zyngier     2014-12-19  141   * Unmapping vs dcache management:
363ef89f8e9bce arch/arm/kvm/mmu.c   Marc Zyngier     2014-12-19  142   *
363ef89f8e9bce arch/arm/kvm/mmu.c   Marc Zyngier     2014-12-19  143   * If a guest maps certain memory pages as uncached, all writes will
363ef89f8e9bce arch/arm/kvm/mmu.c   Marc Zyngier     2014-12-19  144   * bypass the data cache and go directly to RAM.  However, the CPUs
363ef89f8e9bce arch/arm/kvm/mmu.c   Marc Zyngier     2014-12-19  145   * can still speculate reads (not writes) and fill cache lines with
363ef89f8e9bce arch/arm/kvm/mmu.c   Marc Zyngier     2014-12-19  146   * data.
363ef89f8e9bce arch/arm/kvm/mmu.c   Marc Zyngier     2014-12-19  147   *
363ef89f8e9bce arch/arm/kvm/mmu.c   Marc Zyngier     2014-12-19  148   * Those cache lines will be *clean* cache lines though, so a
363ef89f8e9bce arch/arm/kvm/mmu.c   Marc Zyngier     2014-12-19  149   * clean+invalidate operation is equivalent to an invalidate
363ef89f8e9bce arch/arm/kvm/mmu.c   Marc Zyngier     2014-12-19  150   * operation, because no cache lines are marked dirty.
363ef89f8e9bce arch/arm/kvm/mmu.c   Marc Zyngier     2014-12-19  151   *
363ef89f8e9bce arch/arm/kvm/mmu.c   Marc Zyngier     2014-12-19  152   * Those clean cache lines could be filled prior to an uncached write
363ef89f8e9bce arch/arm/kvm/mmu.c   Marc Zyngier     2014-12-19  153   * by the guest, and the cache coherent IO subsystem would therefore
363ef89f8e9bce arch/arm/kvm/mmu.c   Marc Zyngier     2014-12-19  154   * end up writing old data to disk.
363ef89f8e9bce arch/arm/kvm/mmu.c   Marc Zyngier     2014-12-19  155   *
363ef89f8e9bce arch/arm/kvm/mmu.c   Marc Zyngier     2014-12-19  156   * This is why right after unmapping a page/section and invalidating
52bae936f0e7be arch/arm64/kvm/mmu.c Will Deacon      2020-09-11  157   * the corresponding TLBs, we flush to make sure the IO subsystem will
52bae936f0e7be arch/arm64/kvm/mmu.c Will Deacon      2020-09-11  158   * never hit in the cache.
e48d53a91f6e90 virt/kvm/arm/mmu.c   Marc Zyngier     2018-04-06  159   *
e48d53a91f6e90 virt/kvm/arm/mmu.c   Marc Zyngier     2018-04-06  160   * This is all avoided on systems that have ARM64_HAS_STAGE2_FWB, as
e48d53a91f6e90 virt/kvm/arm/mmu.c   Marc Zyngier     2018-04-06  161   * we then fully enforce cacheability of RAM, no matter what the guest
e48d53a91f6e90 virt/kvm/arm/mmu.c   Marc Zyngier     2018-04-06  162   * does.
363ef89f8e9bce arch/arm/kvm/mmu.c   Marc Zyngier     2014-12-19  163   */
7a1c831ee8553b arch/arm/kvm/mmu.c   Suzuki K Poulose 2016-03-23  164  /**
cd6a3361a2f89c arch/arm64/kvm/mmu.c Marc Zyngier     2021-04-26  165   * kvm_unmap_stage2_range -- Clear stage2 page table entries to unmap a range
c9c0279cc02b4e arch/arm64/kvm/mmu.c Xiaofei Tan      2020-09-17  166   * @mmu:   The KVM stage-2 MMU pointer
7a1c831ee8553b arch/arm/kvm/mmu.c   Suzuki K Poulose 2016-03-23  167   * @start: The intermediate physical base address of the range to unmap
7a1c831ee8553b arch/arm/kvm/mmu.c   Suzuki K Poulose 2016-03-23  168   * @size:  The size of the area to unmap
c9c0279cc02b4e arch/arm64/kvm/mmu.c Xiaofei Tan      2020-09-17  169   * @may_block: Whether or not we are permitted to block
7a1c831ee8553b arch/arm/kvm/mmu.c   Suzuki K Poulose 2016-03-23  170   *
7a1c831ee8553b arch/arm/kvm/mmu.c   Suzuki K Poulose 2016-03-23  171   * Clear a range of stage-2 mappings, lowering the various ref-counts.  Must
7a1c831ee8553b arch/arm/kvm/mmu.c   Suzuki K Poulose 2016-03-23  172   * be called while holding mmu_lock (unless for freeing the stage2 pgd before
7a1c831ee8553b arch/arm/kvm/mmu.c   Suzuki K Poulose 2016-03-23  173   * destroying the VM), otherwise another faulting VCPU may come in and mess
7a1c831ee8553b arch/arm/kvm/mmu.c   Suzuki K Poulose 2016-03-23  174   * with things behind our backs.
7a1c831ee8553b arch/arm/kvm/mmu.c   Suzuki K Poulose 2016-03-23  175   */
b5331379bc6261 arch/arm64/kvm/mmu.c Will Deacon      2020-08-11  176  static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size,
b5331379bc6261 arch/arm64/kvm/mmu.c Will Deacon      2020-08-11  177  				 bool may_block)
4f853a714bf163 arch/arm/kvm/mmu.c   Christoffer Dall 2014-05-09 @178  {
cfb1a98de7a9aa arch/arm64/kvm/mmu.c Quentin Perret   2021-03-19  179  	struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu);
52bae936f0e7be arch/arm64/kvm/mmu.c Will Deacon      2020-09-11  180  	phys_addr_t end = start + size;
4f853a714bf163 arch/arm/kvm/mmu.c   Christoffer Dall 2014-05-09  181  
8b3405e345b5a0 arch/arm/kvm/mmu.c   Suzuki K Poulose 2017-04-03  182  	assert_spin_locked(&kvm->mmu_lock);
47a91b7232fa25 virt/kvm/arm/mmu.c   Jia He           2018-05-21  183  	WARN_ON(size & ~PAGE_MASK);
52bae936f0e7be arch/arm64/kvm/mmu.c Will Deacon      2020-09-11  184  	WARN_ON(stage2_apply_range(kvm, start, end, kvm_pgtable_stage2_unmap,
52bae936f0e7be arch/arm64/kvm/mmu.c Will Deacon      2020-09-11  185  				   may_block));
342cd0ab0e6ca3 arch/arm/kvm/mmu.c   Christoffer Dall 2013-01-20  186  }
000d399625b4b3 arch/arm/kvm/mmu.c   Marc Zyngier     2013-03-05  187  
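
This warning is a kernel-doc mismatch rather than a functional problem: the comment block at mmu.c:164 still documents kvm_unmap_stage2_range(), while after commit cd6a3361a2f89c ("KVM: arm64: nv: Support multiple nested Stage-2 mmu structures") the function directly below it is the static helper __unmap_stage2_range(). A minimal sketch of one possible fix, assuming the comment is meant to document the helper itself, is simply to rename the documented symbol (alternatively, the kernel-doc could be moved onto a non-static wrapper if this series keeps one, or demoted from /** to /* if the helper is not meant to be documented):

        /**
         * __unmap_stage2_range -- Clear stage2 page table entries to unmap a range
         * @mmu:   The KVM stage-2 MMU pointer
         * @start: The intermediate physical base address of the range to unmap
         * @size:  The size of the area to unmap
         * @may_block: Whether or not we are permitted to block
         *
         * Clear a range of stage-2 mappings, lowering the various ref-counts.  Must
         * be called while holding mmu_lock (unless for freeing the stage2 pgd before
         * destroying the VM), otherwise another faulting VCPU may come in and mess
         * with things behind our backs.
         */
        static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size,
                                         bool may_block)
        {
                /* body unchanged from the excerpt above */
        }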

:::::: The code at line 178 was first introduced by commit
:::::: 4f853a714bf16338ff5261128e6c7ae2569e9505 arm/arm64: KVM: Fix and refactor unmap_range

:::::: TO: Christoffer Dall <christoffer.dall@linaro.org>
:::::: CC: Christoffer Dall <christoffer.dall@linaro.org>

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org


