[PATCH v7 1/1] nvme-multipath: implement "queue-depth" iopolicy
Christoph Hellwig
hch at lst.de
Mon Jun 24 01:46:27 PDT 2024
On Thu, Jun 20, 2024 at 01:54:29PM -0400, John Meneghini wrote:
>>> +static void nvme_subsys_iopolicy_update(struct nvme_subsystem *subsys,
>>> +		int iopolicy)
>>> +{
>>> +	struct nvme_ctrl *ctrl;
>>> +	int old_iopolicy = READ_ONCE(subsys->iopolicy);
>>> +
>>> +	if (old_iopolicy == iopolicy)
>>> +		return;
>>> +
>>> +	WRITE_ONCE(subsys->iopolicy, iopolicy);
>>
>> What is the atomicity model here? There doesn't seem to be any
>> global lock protecting it. Maybe move it into the
>> nvme_subsystems_lock critical section?
>
> Good question. I didn't write this code. Yes, I agree this looks racy.
> Updates to the subsys->iopolicy variable are not atomic, but they don't
> need to be. The process of changing the iopolicy doesn't need to be
> synchronized, and each CPU's cached view of the value will catch up lazily.
> This was done to avoid the expense of adding (another) atomic read to the
> I/O path.
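
Agreed on the reader side: the hot path just samples the value, something
like the sketch below (the helper and symbol names here are illustrative,
not taken from the patch):

	/*
	 * Sketch only: the submission path reads the policy with a plain
	 * READ_ONCE() and no lock, so the WRITE_ONCE() from the sysfs store
	 * handler becomes visible on each CPU lazily, which is all the
	 * policy switch needs.
	 */
	static struct nvme_ns *nvme_pick_path(struct nvme_ns_head *head, int node)
	{
		switch (READ_ONCE(head->subsys->iopolicy)) {
		case NVME_IOPOLICY_QD:				/* hypothetical name */
			return nvme_queue_depth_path(head);	/* hypothetical helper */
		case NVME_IOPOLICY_RR:
			return nvme_round_robin_path(head, node); /* hypothetical helper */
		default:
			return nvme_numa_path(head, node);	/* hypothetical helper */
		}
	}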
On the writer side, it looks like all sysfs ->store calls for the same
attribute are serialized by of->mutex in kernfs_fop_write_iter(), so we
should actually be fine here. Sorry for the noise.
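
For reference, the store handler is roughly of this shape (reconstructed
from memory as a sketch, not copied from the patch); kernfs takes of->mutex
before calling into ->store, so two writes racing on the iopolicy attribute
are serialized before nvme_subsys_iopolicy_update() ever runs:

	static ssize_t nvme_subsys_iopolicy_store(struct device *dev,
			struct device_attribute *attr, const char *buf,
			size_t count)
	{
		struct nvme_subsystem *subsys =
			container_of(dev, struct nvme_subsystem, dev);
		int i;

		/* kernfs already holds of->mutex here, serializing stores */
		for (i = 0; i < ARRAY_SIZE(nvme_iopolicy_names); i++) {
			if (sysfs_streq(buf, nvme_iopolicy_names[i])) {
				nvme_subsys_iopolicy_update(subsys, i);
				return count;
			}
		}
		return -EINVAL;
	}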
>> 	pr_notice("%s: changing iopolicy from %s to %s\n",
>> 			subsys->subnqn,
>> 			nvme_iopolicy_names[old_iopolicy],
>> 			nvme_iopolicy_names[iopolicy]);
>
> How about:
>
> 	pr_notice("Changed iopolicy from %s to %s for subsysnqn %s\n",
> 			nvme_iopolicy_names[old_iopolicy],
> 			nvme_iopolicy_names[iopolicy],
> 			subsys->subnqn);
Having the identification as the prefix seems easier to parse
and grep for.
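For example, with the NQN up front every message for a given subsystem
lines up under the same grep key (placeholder NQN, for illustration only):

	nqn.2014-08.org.nvmexpress:example-subsys: changing iopolicy from numa to queue-depth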