[PATCH 1/1] nvme-tcp: Set correct numa-node id for host controller
Nilay Shroff
nilay at linux.ibm.com
Thu Jun 13 06:46:14 PDT 2024
In the current implementation we always set the NUMA node id of the NVMe
TCP host controller to NUMA_NO_NODE (-1). So on a multi-node NUMA-aware
system with the iopolicy set to NUMA, the NVMe multipath code cannot
calculate an accurate node distance and hence cannot select the optimal
path for I/O.
This patch sets the correct NUMA node id for the TCP host controller,
which helps ensure that in a multipath setup the optimal path is
selected for I/O when the iopolicy is set to NUMA.
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
---
drivers/nvme/host/tcp.c | 17 ++++++++++++++++-
1 file changed, 16 insertions(+), 1 deletion(-)
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 8b5e4327fe83..d96a9b0c7c1a 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -1924,6 +1924,10 @@ static int nvme_tcp_alloc_admin_queue(struct nvme_ctrl *ctrl)
 {
 	int ret;
 	key_serial_t pskid = 0;
+	struct nvme_tcp_queue *queue;
+	struct sock *sk;
+	struct dst_entry *dst;
+	struct nvme_tcp_ctrl *tctrl = to_tcp_ctrl(ctrl);
 
 	if (nvme_tcp_tls(ctrl)) {
 		if (ctrl->opts->tls_key)
@@ -1942,10 +1946,21 @@ static int nvme_tcp_alloc_admin_queue(struct nvme_ctrl *ctrl)
 	if (ret)
 		return ret;
 
-	ret = nvme_tcp_alloc_async_req(to_tcp_ctrl(ctrl));
+	ret = nvme_tcp_alloc_async_req(tctrl);
 	if (ret)
 		goto out_free_queue;
 
+	/* socket is already connected */
+	queue = &tctrl->queues[0];
+	sk = queue->sock->sk;
+	dst = sk_dst_get(sk);
+	if (likely(dst)) {
+		struct net_device *netdev;
+
+		netdev = netdev_sk_get_lowest_dev(dst->dev, sk);
+		ctrl->numa_node = dev_to_node(&netdev->dev);
+		dst_release(dst);
+	}
+
 	return 0;
 
 out_free_queue:
--
2.45.1