[PATCH] nvmet: make nvmet_wq visible in sysfs

Chaitanya Kulkarni chaitanyak at nvidia.com
Wed Oct 30 19:45:09 PDT 2024


> On Oct 30, 2024, at 7:01 PM, Guixin Liu <kanie at linux.alibaba.com> wrote:
> 
> 
>> On 2024/10/31 02:38, Chaitanya Kulkarni wrote:
>>> On 10/30/24 04:20, Guixin Liu wrote:
>>> Sorry for the unclear explanation.
>>> 
>>> The transport is tcp and the backend is block.
>>> 
>>> This is a deployment-level thing: in some complicated scenarios we
>>> run multiple workloads on one machine (hybrid deployment), such as:
>>> 
>>> 1. Docker containers for function computation.
>>> 
>>> 2. Real-time tasks.
>>> 
>>> 3. Monitoring, event handling, and management.
>>> 
>>> 4. And the NVMe target server itself.
>>> 
>>> All of them are restricted to their own CPU cores to prevent mutual
>>> influence.
>>> 
>>> There is no problem with nvmet_wq running on all CPUs, of course,
>>> but for strict isolation we need this restriction.
>>> 
>>> I don't know if I've given enough detail.
>>> 
>>> Best Regards,
>>> 
>>> Guixin Liu
>> Can you please send a patch with the detailed use case?
>> 
>> Also, it would be nice (not a blocker for merging this patch) if you
>> could provide steps similar to those listed above so we can get this
>> scenario tested; even better if you can submit a blktests test case,
>> but if not I'll send one once I have the steps.
>> 
>> -ck
> 
> I will send a v2 with our use case to explain why we should restrict the cpumask.
> 
> I'm concerned whether blktests can handle such a complex setup, as it
> would rely on deploying many Docker containers and services. Should it
> only test the case of setting the cpumask and running fio?
> 
> Best Regards,
> 
> Guixin Liu
> 

For now, just setting the cpumask via sysfs and running fio is
sufficient, so that when we upstream this patch we have some level of
testing done through the new sysfs interface.
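
Roughly, the change itself should just be a matter of tagging nvmet_wq
with WQ_SYSFS so it shows up under /sys/devices/virtual/workqueue/.
A minimal sketch, assuming the allocation in
drivers/nvme/target/core.c looks something like the below (the exact
existing flags may differ, and note that the cpumask attribute is only
exposed for unbound workqueues):

    /*
     * Sketch only: WQ_SYSFS makes the queue visible as
     * /sys/devices/virtual/workqueue/nvmet-wq/, and for an unbound
     * workqueue that directory carries a writable cpumask attribute.
     */
    nvmet_wq = alloc_workqueue("nvmet-wq",
                               WQ_MEM_RECLAIM | WQ_UNBOUND | WQ_SYSFS, 0);
    if (!nvmet_wq)
            return -ENOMEM;

For the test, a small userspace helper along these lines could set the
mask before kicking off fio (hypothetical helper, not an existing
blktests case; the path assumes the queue is registered as "nvmet-wq",
and the workqueue cpumask attribute takes a hex CPU mask, e.g. "f" for
CPUs 0-3, not a CPU list):

    /*
     * Hypothetical helper: pin nvmet-wq to CPUs 0-3 by writing a hex
     * CPU mask to its sysfs cpumask attribute.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
            const char *path =
                    "/sys/devices/virtual/workqueue/nvmet-wq/cpumask";
            const char *mask = "f\n";       /* hex mask: CPUs 0-3 */
            int fd = open(path, O_WRONLY);

            if (fd < 0) {
                    perror(path);
                    return 1;
            }
            if (write(fd, mask, strlen(mask)) < 0) {
                    perror("write cpumask");
                    close(fd);
                    return 1;
            }
            close(fd);
            return 0;
    }

After setting the mask, run fio against the namespace exported over
tcp and verify (e.g. by watching which CPUs the nvmet kworkers land
on) that the work stays on the selected cores.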


-ck



