[PATCH] arm64: mm: reduce swiotlb size when dynamic swiotlb enabled

Kefeng Wang wangkefeng.wang at huawei.com
Wed May 8 06:23:00 PDT 2024


After commit a1e50a82256e ("arm64: Increase the swiotlb buffer size
64MB"), the swiotlb buffer size was increased to 64M to cover the case of
32-bit-only devices that require a lot of bounce buffering via swiotlb.
With CONFIG_SWIOTLB_DYNAMIC enabled, however, the initial swiotlb size
can be reduced from 64M back to 4M (MAX_ORDER_NR_PAGES << PAGE_SHIFT),
since additional swiotlb buffers can be allocated dynamically on demand.
This saves 60M on most platforms, which do not need much swiotlb buffer.

Signed-off-by: Kefeng Wang <wangkefeng.wang at huawei.com>
---
 arch/arm64/mm/init.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 9b5ab6818f7f..425222c13d97 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -370,18 +370,23 @@ void __init bootmem_init(void)
 void __init mem_init(void)
 {
 	bool swiotlb = max_pfn > PFN_DOWN(arm64_dma_phys_limit);
+	unsigned long size = 0;
 
 	if (IS_ENABLED(CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC) && !swiotlb) {
 		/*
 		 * If no bouncing needed for ZONE_DMA, reduce the swiotlb
 		 * buffer for kmalloc() bouncing to 1MB per 1GB of RAM.
 		 */
-		unsigned long size =
-			DIV_ROUND_UP(memblock_phys_mem_size(), 1024);
-		swiotlb_adjust_size(min(swiotlb_size_or_default(), size));
+		size = DIV_ROUND_UP(memblock_phys_mem_size(), 1024);
 		swiotlb = true;
 	}
 
+	if (IS_ENABLED(CONFIG_SWIOTLB_DYNAMIC) && !size)
+		size = MAX_ORDER_NR_PAGES << PAGE_SHIFT;
+
+	if (size)
+		swiotlb_adjust_size(min(swiotlb_size_or_default(), size));
+
 	swiotlb_init(swiotlb, SWIOTLB_VERBOSE);
 
 	/* this will put all unused low memory onto the freelists */
-- 
2.27.0



