Seeking help with NVMe arbitration questions
Wang Yicheng
wangyicheng1209 at gmail.com
Thu May 4 17:27:51 PDT 2023
Understood, thanks Keith!
Given that the IO queue distribution is not intended for IO
prioritization, I pivoted my focus to how enabling IO polling can help
performance. I ran a very simple single-process FIO job four times
using the following set-ups (a rough sketch of the job follows the
list):
1. W/o poll queues, "hipri" was set to 0
2. W/ poll queues, "hipri" was set to 0
3. W/o poll queues, "hipri" was set to 1
4. W/ poll queues, "hipri" was set to 1
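For reference, here is roughly the job I ran in each case (the device
path, block size and runtime below are placeholders rather than my
exact job file, and pvsync2 is just one of the engines that honors
hipri):

; Set-ups 2 and 4 assume poll queues were allocated when loading the
; nvme driver, e.g. "modprobe nvme poll_queues=4" (or
; nvme.poll_queues=4 on the kernel command line); set-ups 1 and 3
; leave poll_queues at 0.
[polltest]
filename=/dev/nvme0n1
ioengine=pvsync2
rw=randread
bs=4k
iodepth=1
direct=1
numjobs=1
time_based
runtime=30
; hipri=0 for set-ups 1 and 2, hipri=1 for set-ups 3 and 4
hipri=1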
The throughput ranking was 4 > 3 > 1 = 2. It's expected that case 4
has the best performance. What I don't understand is why case 3
outperformed cases 1 and 2: I thought IO polling would only be enabled
when there are poll queues. Could you please comment on this result as
well?
Best,
Yicheng
On Fri, Apr 28, 2023 at 1:06 PM Keith Busch <kbusch at kernel.org> wrote:
>
> On Fri, Apr 28, 2023 at 11:18:03AM -0700, Wang Yicheng wrote:
> >
> > Then this confuses me about the motivation for introducing different
> > queue types. Isn't the aim to provide some sort of prioritization?
>
> Having a separate read queue ensures that reads won't get starved for
> a command resource by a write-intensive workload. AFAIK, it's not a very
> common option to enable.
>
> The poll queues are intended for latency-sensitive applications. I
> don't think it will be as reliable if you are running concurrently
> with interrupt-driven workloads: the interrupts will just preempt
> the polling threads.