[PATCH v2 6/8] arm64: Import latest memcpy()/memmove() implementation

dann frazier dann.frazier at canonical.com
Fri May 20 16:30:50 PDT 2022


On Tue, Jun 08, 2021 at 01:42:48PM +0100, Mark Rutland wrote:
> On Tue, Jun 08, 2021 at 02:36:26PM +0200, Neil Armstrong wrote:
> > On 08/06/2021 14:21, Marek Szyprowski wrote:
> > > On 08.06.2021 13:37, Robin Murphy wrote:
> > >> On 2021-06-08 12:15, Marek Szyprowski wrote:
> > >>> This patch landed recently in linux-next as commit 285133040e6c ("arm64:
> > >>> Import latest memcpy()/memmove() implementation"). Sadly it causes
> > >>> serious issues on Khadas VIM3 board. Reverting it on top of linux
> > >>> next-20210607 (together with 6b8f648959e5 and resolving the conflict in
> > >>> the Makefile) fixes the issue. Here is the kernel log:
> > >>>
> > >>> Unable to handle kernel paging request at virtual address ffff8000136bd204
> > >>> Mem abort info:
> > >>>     ESR = 0x96000061
> > >>>     EC = 0x25: DABT (current EL), IL = 32 bits
> > >>>     SET = 0, FnV = 0
> > >>>     EA = 0, S1PTW = 0
> > >>> Data abort info:
> > >>>     ISV = 0, ISS = 0x00000061
> > >>
> > >> That's an alignment fault, which implies we're accessing something 
> > >> which isn't normal memory.
> 
> [...]
> 
> > >>> I hope that the above log helps fixing the issue. IIRC the SDHCI driver
> > >>> on VIM3 board uses internal SRAM for transferring data (instead of DMA),
> > >>> so the issue is somehow related to that.
> > >>
> > >> Drivers shouldn't be using memcpy() on iomem mappings. Even if they 
> > >> happen to have got away with it sometimes ;)
> > >>
> > >> Taking a quick look at that driver,
> > >>
> > >>     host->bounce_buf = host->regs + SD_EMMC_SRAM_DATA_BUF_OFF;
> > >>
> > >> is completely bogus, as Sparse will readily point out.
> > 
> > My bad, what's the correct way to copy data to an iomem mapping?
> 
> We have memcpy_toio() and memcpy_fromio() for this.
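For reference, here is a minimal sketch (not the actual fix that landed) of how
the SRAM bounce buffer quoted above could be handled with the io accessors.
The struct and field names are hypothetical; the point is keeping the __iomem
annotation intact so Sparse can check it, and using memcpy_toio()/memcpy_fromio()
for the copies:

    /* Sketch only -- names are illustrative, not the driver's actual fix. */
    #include <linux/io.h>

    struct sram_host_sketch {
            void __iomem *regs;
            void __iomem *bounce_iomem_buf; /* keeps the __iomem annotation */
    };

    static void sketch_setup(struct sram_host_sketch *host)
    {
            /* No cast to void *: the buffer stays typed as iomem */
            host->bounce_iomem_buf = host->regs + SD_EMMC_SRAM_DATA_BUF_OFF;
    }

    static void sketch_xfer(struct sram_host_sketch *host,
                            void *data, size_t len, bool to_device)
    {
            if (to_device)
                    memcpy_toio(host->bounce_iomem_buf, data, len);
            else
                    memcpy_fromio(data, host->bounce_iomem_buf, len);
    }

The io accessors use access patterns that are safe on Device memory, whereas
the imported memcpy() is free to use unaligned and overlapping loads that
fault there.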

LTP's read_all_sys test triggers something similar, which I bisected
back to this commit - see the oops below. Does this imply we need
something like a memory_read_from_*io*_buffer()?

[ 2583.023514] Unable to handle kernel paging request at virtual address ffff80004a3003bf
[ 2583.031456] Mem abort info:
[ 2583.034259]   ESR = 0x96000021
[ 2583.037317]   EC = 0x25: DABT (current EL), IL = 32 bits
[ 2583.042632]   SET = 0, FnV = 0
[ 2583.045689]   EA = 0, S1PTW = 0
[ 2583.048834] Data abort info:
[ 2583.051704]   ISV = 0, ISS = 0x00000021
[ 2583.055542]   CM = 0, WnR = 0
[ 2583.058512] swapper pgtable: 4k pages, 48-bit VAs, pgdp=0000401182231000
[ 2583.065217] [ffff80004a3003bf] pgd=10000800001a2003, p4d=10000800001a2003, pud=100008000fa35003, pmd=100008001ddbd003, pte=0068000088230f13
[ 2583.077751] Internal error: Oops: 96000021 [#22] SMP
[ 2583.082710] Modules linked in: nls_iso8859_1 joydev input_leds efi_pstore arm_spe_pmu acpi_ipmi ipmi_ssif arm_cmn xgene_hwmon arm_dmc620_pmu arm_dsu_pmu cppc_cpufreq acpi_tad sch_fq_codel dm_multipath scsi_dh_rdac scsi_dh_emc scsi_dh_alua ipmi_devintf ipmi_msghandler ip_tables x_tables autofs4 btrfs blake2b_generic zstd_compress raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor xor_neon raid6_pq libcrc32c raid1 raid0 multipath linear hid_generic usbhid cdc_ether hid usbnet mlx5_ib ib_uverbs ib_core uas usb_storage ast drm_vram_helper drm_ttm_helper ttm i2c_algo_bit drm_kms_helper syscopyarea crct10dif_ce sysfillrect ghash_ce sysimgblt sha2_ce fb_sys_fops sha256_arm64 cec sha1_ce rc_core mlx5_core nvme drm psample xhci_pci nvme_core mlxfw xhci_pci_renesas tls aes_neon_bs aes_neon_blk aes_ce_blk crypto_simd cryptd aes_ce_cipher
[ 2583.158313] CPU: 38 PID: 8392 Comm: read_all Tainted: G      D           5.13.0-rc3+ #15
[ 2583.166394] Hardware name: WIWYNN Mt.Jade Server System B81.030Z1.0007/Mt.Jade Motherboard, BIOS 1.6.20210526 (SCP: 1.06.20210526) 2021/05/26
[ 2583.179075] pstate: 80400009 (Nzcv daif +PAN -UAO -TCO BTYPE=--)
[ 2583.185072] pc : __memcpy+0x168/0x260
[ 2583.188735] lr : memory_read_from_buffer+0x58/0x80
[ 2583.193524] sp : ffff80004321bb40
[ 2583.196826] x29: ffff80004321bb40 x28: ffff07ffdd8bae80 x27: 0000000000000000
[ 2583.203952] x26: 0000000000000000 x25: 0000000000000000 x24: ffff07ffd2bd5820
[ 2583.211077] x23: ffff80004321bc30 x22: 00000000000003ff x21: ffff80004321bba8
[ 2583.218201] x20: 00000000000003ff x19: 00000000000003ff x18: 0000000000000000
[ 2583.225326] x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000000
[ 2583.232449] x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000
[ 2583.239573] x11: 0000000000000000 x10: 0000000000000000 x9 : 0000000000000000
[ 2583.246697] x8 : 0000000000000000 x7 : 0000000000000000 x6 : 0000000000000000
[ 2583.253820] x5 : ffff07ffb7d93bff x4 : ffff80004a3003ff x3 : ffff07ffb7d93b80
[ 2583.260945] x2 : ffffffffffffffef x1 : ffff80004a3003c0 x0 : ffff07ffb7d93800
[ 2583.268069] Call trace:
[ 2583.270504]  __memcpy+0x168/0x260
[ 2583.273807]  acpi_data_show+0x5c/0x8c
[ 2583.277464]  sysfs_kf_bin_read+0x78/0xa0
[ 2583.281378]  kernfs_fop_read_iter+0xac/0x1e0
[ 2583.285637]  new_sync_read+0xf0/0x184
[ 2583.289290]  vfs_read+0x158/0x1e4
[ 2583.292594]  ksys_read+0x74/0x100
[ 2583.295898]  __arm64_sys_read+0x28/0x34
[ 2583.299723]  invoke_syscall+0x78/0x100
[ 2583.303466]  el0_svc_common.constprop.0+0x158/0x160
[ 2583.308332]  do_el0_svc+0x34/0xa0
[ 2583.311637]  el0_svc+0x2c/0x54
[ 2583.314685]  el0_sync_handler+0xa4/0x130
[ 2583.318596]  el0_sync+0x19c/0x1c0
[ 2583.321903] Code: a984346c a9c4342c f1010042 54fffee8 (a97c3c8e) 
[ 2583.327989] ---[ end trace e19a85d1dd8510e5 ]---
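For what it's worth, a helper along the lines suggested above could mirror
memory_read_from_buffer() but do the copy with memcpy_fromio(). The name and
exact shape here are hypothetical - just a sketch of the idea:

    #include <linux/io.h>
    #include <linux/types.h>

    /*
     * Hypothetical helper: like memory_read_from_buffer(), but for __iomem
     * sources. Clamps the read to the available window and copies with
     * memcpy_fromio() so no plain memcpy() ever touches Device memory.
     */
    static ssize_t memory_read_from_io_buffer(void *to, size_t count,
                                              loff_t *ppos,
                                              const void __iomem *from,
                                              size_t available)
    {
            loff_t pos = *ppos;

            if (pos < 0)
                    return -EINVAL;
            if (pos >= available || !count)
                    return 0;
            if (count > available - pos)
                    count = available - pos;

            memcpy_fromio(to, from + pos, count);
            *ppos = pos + count;

            return count;
    }

Callers like acpi_data_show() that may be handed an iomem-backed table could
then use this instead of memory_read_from_buffer().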



More information about the linux-amlogic mailing list