[PATCH v2] nvmet: make nvmet_wq visible in sysfs
Chaitanya Kulkarni
chaitanyak at nvidia.com
Wed Oct 30 23:23:01 PDT 2024
On 10/30/24 19:27, Guixin Liu wrote:
> In some complex scenarios, we deploy multiple tasks on a single machine
> (hybrid deployment), such as:
> 1. Docker containers for function computation (background processing).
> 2. Docker containers for real-time tasks.
> 3. Docker containers for monitoring, event handling, and management.
> 4. An NVMe target server.
> Each of these components is restricted to its own CPU cores to prevent
> mutual interference and ensure strict isolation. Additionally, we make
> the nvmet_wq visible in sysfs, allowing for tuning its attributes
> through sysfs, such as cpumask.
How about the following? No need to send a V3; this can be done at
the time of applying the patch if you are okay with it :-
" In some complex scenarios, we deploy multiple taskson asingle machine
(hybrid deployment), suchas Docker containersfor function computation
(background processing), real-time tasks, monitoring,event handling,
and management, alongwith an NVMe target server.
Each of these componentsis restrictedto its own CPU coresto prevent
mutual interferenceand ensurestrict isolation.To achieve this level
of isolation for nvmet_wq we needto use sysfs tunables such as
cpumask that are currently not accessible.
Add WQ_SYSFS flag to alloc_workqueue() when creating nvmet_wq so
workqueue tunables are exported in the userspace via sysfs.
With this patch :-
nvme (nvme-6.13) # ls /sys/devices/virtual/workqueue/nvmet-wq/
affinity_scope affinity_strict cpumask max_active nice per_cpu
power subsystem uevent
"
With that, this looks good.
Reviewed-by: Chaitanya Kulkarni <kch at nvidia.com>
-ck