[PATCH] nvmet: make nvmet_wq visible in sysfs
Guixin Liu
kanie at linux.alibaba.com
Wed Oct 30 04:20:22 PDT 2024
On 2024/10/30 14:33, Chaitanya Kulkarni wrote:
> On 10/29/24 18:44, Guixin Liu wrote:
>> On 2024/10/30 03:52, Chaitanya Kulkarni wrote:
>>> On 10/28/24 23:46, Guixin Liu wrote:
>>>> On 2024/10/29 13:04, Chaitanya Kulkarni wrote:
>>>>> On 10/28/24 18:49, Guixin Liu wrote:
>>>>>> Make nvmet_wq visible in sysfs, allowing its attributes to be tuned
>>>>>> through sysfs.
>>>>>>
>>>>>> Signed-off-by: Guixin Liu<kanie at linux.alibaba.com>
>>>>>> ---
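The usual mechanism behind making a workqueue visible in sysfs is the
WQ_SYSFS flag at allocation time. A minimal sketch of that kind of change
(not the actual diff; nvmet_wq is allocated in drivers/nvme/target/core.c,
and the workqueue name "nvmet-wq" and the flags other than WQ_SYSFS are
shown as illustrative assumptions):

	/*
	 * WQ_SYSFS exports the workqueue under
	 * /sys/devices/virtual/workqueue/<name>/, where attributes such
	 * as cpumask can then be tuned from userspace.
	 */
	nvmet_wq = alloc_workqueue("nvmet-wq",
				   WQ_MEM_RECLAIM | WQ_SYSFS, 0);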
>>>>> Do you happen to have a use case for this?
>>>>>
>>>>> -ck
>>>> Sometimes, in order to respond promptly to certain events or manage
>>>> commands, we need to reserve resources and partition the CPU cores.
>>>> For example, if there are 4 cores available, we can initially allocate
>>>> them by dedicating one core for management while the remaining 3 cores
>>>> are specifically for handling IO.
>>>>
>>>> Best Regards,
>>>> Guixin Liu
>>>>
>>> I'm aware of exposing tunables through sysfs and its benefits; my
>>> question was whether you have a setup where this setting is currently
>>> needed.
>>>
>>> I've always been asked for the use case on a patch when we expose
>>> something out of the kernel that solves a problem in a deployment ...
>>>
>>> -ck
>> I need to reserve some CPU cores to do other things, such as handling
>> events and management, so that nvmet_wq cannot run on all CPU cores.
>> Currently, I restrict it by setting the cpumask of nvmet_wq (that's why
>> I expose nvmet_wq to sysfs).
>>
>> Best Regards,
>> Guixin Liu
>>
> Can you please explain your setup? E.g. transport tcp/rdma/fc, device
> backend file/block, etc.?
>
> So nvmet_wq's CPU consumption is so high that it doesn't leave bandwidth
> to handle events and management?
>
> Can you please explain the workload, and what kind of event and management
> handling is needed such that you have to restrict nvmet_wq with a CPU mask?
>
> The only reason I'm asking is that I've not seen this scenario so far in
> the many, many deployments since we added nvmet_wq, and I'd really like to
> learn about the scenario.
>
> -ck
Sorry for the unclear explanation.
The transport is TCP and the backend is a block device.
This is just a solution-level thing: in some complicated scenarios we
deploy multiple workloads on one machine (hybrid deployment), such as:
1. Docker containers for function computation.
2. Real-time tasks.
3. Monitoring, and handling events and management.
4. An NVMe target server.
All of them are restricted to their own CPU cores to prevent mutual
interference. There is no problem if nvmet_wq runs on all CPUs, of course,
but for strict isolation we need to apply this restriction, for example as
shown below.
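With nvmet_wq exported through WQ_SYSFS, that restriction can be applied
from userspace. As a sketch (assuming the workqueue is named "nvmet-wq" and
is unbound, since the cpumask attribute is only exposed for unbound
workqueues; the mask value is just an example that keeps it off CPU 0):

	echo e > /sys/devices/virtual/workqueue/nvmet-wq/cpumask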
I'm not sure whether I've given enough detail.
Best Regards,
Guixin Liu