[PATCHv2] nvme-tcp: align I/O cpu with blk-mq mapping
Sagi Grimberg
sagi at grimberg.me
Wed Jun 19 08:58:22 PDT 2024
>> I see how you address multiple controllers falling into the same
>> mappings case in your patch.
>> You could have selected a different mq_map entry for each controller
>> (out of the entries that map to the qid).
>>
> Looked at it, but had no idea how to figure out the load.
> The load is actually per-CPU, but we only have per-controller structures.
> So we would need to introduce a per-cpu counter tracking the
> number of queues scheduled on that CPU.
> But that won't help with the CPU oversubscription issue; we still
> might have a substantially higher number of overall queues than we
> have CPUs...
I think that would still be better than what you have right now.
IIUC, right now every controller will get the same mapping (based on your example):
queue 1: using cpu 6
queue 2: using cpu 9
queue 3: using cpu 18
But selecting a different mq_map entry can give:
ctrl1:
queue 1: using cpu 6
queue 2: using cpu 9
queue 3: using cpu 18
ctrl2:
queue 1: using cpu 7
queue 2: using cpu 10
queue 3: using cpu 19
ctrl3:
queue 1: using cpu 8
queue 2: using cpu 11
queue 3: using cpu 20
ctrl4:
queue 1: using cpu 54
queue 2: using cpu 57
queue 3: using cpu 66
and so on...
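The selection above could be sketched like this: a round-robin over the CPUs that blk-mq maps to a given queue, keyed by a controller index. Again a userspace model with hypothetical names (`pick_io_cpu`, `ctrl_idx`), not the actual patch:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Pick the ctrl_idx-th CPU (wrapping around) among all CPUs whose
 * mq_map entry points at the given queue. mq_map[cpu] is the queue id
 * each CPU is mapped to; nr_cpus is the length of mq_map.
 */
static int pick_io_cpu(const unsigned int *mq_map, size_t nr_cpus,
		       unsigned int qid, unsigned int ctrl_idx)
{
	size_t nr_match = 0, i;

	/* count how many CPUs map to this queue */
	for (i = 0; i < nr_cpus; i++)
		if (mq_map[i] == qid)
			nr_match++;
	if (!nr_match)
		return -1;

	/* return the (ctrl_idx % nr_match)-th matching CPU */
	ctrl_idx %= nr_match;
	for (i = 0; i < nr_cpus; i++) {
		if (mq_map[i] == qid && ctrl_idx-- == 0)
			return (int)i;
	}
	return -1;
}
```

With three CPUs mapped to queue 1, controllers 0, 1, 2 each get a distinct CPU and controller 3 wraps back to the first, matching the spreading shown in the example above.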
>
>>>
>>> Not sure how wq_unbound helps in this case; in theory the workqueue
>>> items can be pushed on arbitrary CPUs, but that only leads to even
>>> worse
>>> thread bouncing.
>>>
>>> However, topic for ALPSS. We really should have some sort of
>>> backpressure here.
>>
>> I have a patch that has been sitting for some time now to make the RX
>> path run directly from softirq, which should make RX execute on the
>> CPU core mapped to the RSS hash.
>> Perhaps you or your customer can give it a go.
>>
> No s**t. That is pretty much what I wanted to do.
> I'll be sure to give it a go.
> Thanks for that!
You will need another prep patch for it:
--
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 3649987c0a2d..b6ea7e337eb8 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -955,6 +955,18 @@ static int nvme_tcp_recv_skb(read_descriptor_t *desc, struct sk_buff *skb,
return consumed;
}
+static int nvme_tcp_try_recv_locked(struct nvme_tcp_queue *queue)
+{
+ struct socket *sock = queue->sock;
+ struct sock *sk = sock->sk;
+ read_descriptor_t rd_desc;
+
+ rd_desc.arg.data = queue;
+ rd_desc.count = 1;
+ queue->nr_cqe = 0;
+ return sock->ops->read_sock(sk, &rd_desc, nvme_tcp_recv_skb);
+}
+
static void nvme_tcp_data_ready(struct sock *sk)
{
struct nvme_tcp_queue *queue;
@@ -1251,16 +1263,11 @@ static int nvme_tcp_try_send(struct nvme_tcp_queue *queue)
static int nvme_tcp_try_recv(struct nvme_tcp_queue *queue)
{
- struct socket *sock = queue->sock;
- struct sock *sk = sock->sk;
- read_descriptor_t rd_desc;
+ struct sock *sk = queue->sock->sk;
int consumed;
- rd_desc.arg.data = queue;
- rd_desc.count = 1;
lock_sock(sk);
- queue->nr_cqe = 0;
- consumed = sock->ops->read_sock(sk, &rd_desc, nvme_tcp_recv_skb);
+ consumed = nvme_tcp_try_recv_locked(queue);
release_sock(sk);
return consumed;
}
--
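My reading of why the split is needed (not stated in the patch itself): a caller that already owns the socket, such as an RX path driven from `data_ready` in softirq context, must not call `lock_sock()` again, so it needs a variant that assumes the lock is held. A toy userspace model of that pattern, with all names hypothetical:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of a socket whose consumer may already own the lock. */
struct toy_sock {
	bool locked;
	int pending;	/* bytes waiting to be consumed */
};

/* Core receive path: caller must already hold the lock. */
static int toy_recv_locked(struct toy_sock *sk)
{
	int consumed;

	assert(sk->locked);	/* precondition: lock held by the caller */
	consumed = sk->pending;
	sk->pending = 0;
	return consumed;
}

/* Process-context wrapper: takes the lock around the locked variant. */
static int toy_recv(struct toy_sock *sk)
{
	int consumed;

	sk->locked = true;	/* stands in for lock_sock() */
	consumed = toy_recv_locked(sk);
	sk->locked = false;	/* stands in for release_sock() */
	return consumed;
}
```

A softirq-style caller that already owns the socket would invoke the `_locked` variant directly, mirroring how `nvme_tcp_try_recv_locked()` becomes callable from the `data_ready` path once the follow-up patch lands.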