[PATCH v4 18/18] Documentation: Document the NVMe PCI endpoint target driver
Damien Le Moal
dlemoal at kernel.org
Tue Dec 17 09:40:29 PST 2024
On 2024/12/17 9:30, Manivannan Sadhasivam wrote:
>> +Now, create a subsystem and a port that we will use to create a PCI target
>> +controller when setting up the NVMe PCI endpoint target device. In this
>> +example, the port is created with a maximum of 4 I/O queue pairs::
>> +
>> + # cd /sys/kernel/config/nvmet/subsystems
>> + # mkdir nvmepf.0.nqn
>> + # echo -n "Linux-nvmet-pciep" > nvmepf.0.nqn/attr_model
>> + # echo "0x1b96" > nvmepf.0.nqn/attr_vendor_id
>> + # echo "0x1b96" > nvmepf.0.nqn/attr_subsys_vendor_id
>> + # echo 1 > nvmepf.0.nqn/attr_allow_any_host
>> + # echo 4 > nvmepf.0.nqn/attr_qid_max
>> +
>> +Next, create and enable the subsystem namespace using the null_blk block device::
>> +
>> + # mkdir nvmepf.0.nqn/namespaces/1
>> + # echo -n "/dev/nullb0" > nvmepf.0.nqn/namespaces/1/device_path
>> + # echo 1 > "pci_epf_nvme.0.nqn/namespaces/1/enable"
>
> I have to do, 'echo 1 > nvmepf.0.nqn/namespaces/1/enable'
Good catch. That is the old name from previous version. Will fix this.
>> +
>> +Finally, create the target port and link it to the subsystem::
>> +
>> + # cd /sys/kernel/config/nvmet/ports
>> + # mkdir 1
>> + # echo -n "pci" > 1/addr_trtype
>> + # ln -s /sys/kernel/config/nvmet/subsystems/nvmepf.0.nqn \
>> + /sys/kernel/config/nvmet/ports/1/subsystems/nvmepf.0.nqn
>> +
>> +Creating a NVMe PCI Endpoint Device
>> +-----------------------------------
>> +
>> +With the NVMe target subsystem and port ready for use, the NVMe PCI endpoint
>> +device can now be created and enabled. The NVMe PCI endpoint target driver
>> +should already be loaded (that is done automatically when the port is created)::
>> +
>> + # ls /sys/kernel/config/pci_ep/functions
>> + nvmet_pciep
>> +
>> +Next, create function 0::
>> +
>> + # cd /sys/kernel/config/pci_ep/functions/nvmet_pciep
>> + # mkdir nvmepf.0
>> + # ls nvmepf.0/
>> + baseclass_code msix_interrupts secondary
>> + cache_line_size nvme subclass_code
>> + deviceid primary subsys_id
>> + interrupt_pin progif_code subsys_vendor_id
>> + msi_interrupts revid vendorid
>> +
>> +Configure the function using any vendor ID and device ID::
>> +
>> + # cd /sys/kernel/config/pci_ep/functions/nvmet_pciep
>> + # echo 0x1b96 > nvmepf.0/vendorid
>> + # echo 0xBEEF > nvmepf.0/deviceid
>> + # echo 32 > nvmepf.0/msix_interrupts
>> +
>> +If the PCI endpoint controller used does not support MSIX, MSI can be
>> +configured instead::
>> +
>> + # echo 32 > nvmepf.0/msi_interrupts
>> +
>> +Next, let's bind our endpoint device with the target subsystem and port that we
>> +created::
>> +
>> + # echo 1 > nvmepf.0/portid
>
> 'echo 1 > nvmepf.0/nvme/portid'
>
>> + # echo "nvmepf.0.nqn" > nvmepf.0/subsysnqn
>
> 'echo 1 > nvmepf.0/nvme/subsysnqn'
Yep. Good catch.
>
>> +
>> +The endpoint function can then be bound to the endpoint controller and the
>> +controller started::
>> +
>> + # cd /sys/kernel/config/pci_ep
>> + # ln -s functions/nvmet_pciep/nvmepf.0 controllers/a40000000.pcie-ep/
>> + # echo 1 > controllers/a40000000.pcie-ep/start
>> +
>> +On the endpoint machine, kernel messages will show information as the NVMe
>> +target device and endpoint device are created and connected.
>> +
>
> For some reason, I cannot get the function driver working. Getting this warning
> on the ep:
>
> nvmet: connect request for invalid subsystem 1!
>
> I didn't debug it further. Will do it tomorrow morning and let you know.
Hmmm... Weird. You should not ever see a connect request/command at all.
Can you try this script:
https://github.com/damien-lemoal/buildroot/blob/rock5b_ep_v25/board/radxa/rock5b-ep/overlay/root/pci-ep/nvmet-pciep
Just run "./nvmet-pciep start" after booting the endpoint board.
The command example in the documentation is an extract from what this script
does. I think that:
echo 1 > ${SUBSYSNQN}/attr_allow_any_host
missing may be the reason for this error.
--
Damien Le Moal
Western Digital Research
More information about the Linux-nvme
mailing list