[PATCH V2 1/2] md: propagate BLK_FEAT_PCI_P2PDMA from member devices
Christoph Hellwig
hch at lst.de
Wed Apr 8 23:27:48 PDT 2026
On Wed, Apr 08, 2026 at 12:25:36AM -0700, Chaitanya Kulkarni wrote:
> From: Kiran Kumar Modukuri <kmodukuri at nvidia.com>
>
> MD RAID does not propagate BLK_FEAT_PCI_P2PDMA from member devices to
> the RAID device, preventing peer-to-peer DMA through the RAID layer even
> when all underlying devices support it.
>
> Enable BLK_FEAT_PCI_P2PDMA in the raid0, raid1 and raid10 personalities
> during queue limits setup, and clear it in mddev_stack_rdev_limits()
> during array init and in mddev_stack_new_rdev() during hot-add if any
> member device lacks support. Parity RAID personalities (raid4/5/6) are
> excluded because they need CPU access to the data pages for parity
> computation, which is incompatible with P2P mappings.
>
> Tested with RAID0/1/10 arrays containing multiple NVMe devices with P2PDMA
> support, confirming that peer-to-peer transfers work correctly through
> the RAID layer.
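If I read the description right, this boils down to roughly the
following shape (a sketch paraphrasing the commit message, not the
actual hunks; bdev_get_queue() is just one way to reach the member
device limits here):

	/* raid0/raid1/raid10 *_set_limits(): opt in during setup */
	lim.features |= BLK_FEAT_PCI_P2PDMA;

	/* mddev_stack_rdev_limits() / mddev_stack_new_rdev(): drop the
	 * feature again if any member device lacks it */
	if (!(bdev_get_queue(rdev->bdev)->limits.features &
	      BLK_FEAT_PCI_P2PDMA))
		lim->features &= ~BLK_FEAT_PCI_P2PDMA;
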
The same thing as for nvme-multipath applies here - just set
BLK_FEAT_PCI_P2PDMA unconditionally at setup time for the personalities
that support it, and then rely on an updated blk_stack_limits to clear
it when a member device lacks support.
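
That is, something like the below (untested sketch; it assumes the same
pattern blk_stack_limits already uses for BLK_FEAT_POLL and
BLK_FEAT_NOWAIT, where t is the top and b the bottom limits):

	/* block/blk-settings.c:blk_stack_limits(): clear the feature
	 * whenever a bottom device does not support it */
	if (!(b->features & BLK_FEAT_PCI_P2PDMA))
		t->features &= ~BLK_FEAT_PCI_P2PDMA;

and in the personality setup just:

	/* set unconditionally and let the stacking code sort it out */
	lim.features |= BLK_FEAT_PCI_P2PDMA;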