[PATCH] nvme: enable FDP support
Viacheslav Dubeyko
slava at dubeyko.com
Fri May 17 10:22:11 PDT 2024
> On May 17, 2024, at 7:27 PM, Kanchan Joshi <joshiiitr at gmail.com> wrote:
>
> On Tue, May 14, 2024 at 2:40 PM Viacheslav Dubeyko <slava at dubeyko.com> wrote:
>>
>>
>>
>>> On May 15, 2024, at 6:30 AM, Kanchan Joshi <joshiiitr at gmail.com> wrote:
>>>
>>> On Tue, May 14, 2024 at 1:00 PM Viacheslav Dubeyko <slava at dubeyko.com> wrote:
>>>>> On May 14, 2024, at 9:47 PM, Kanchan Joshi <joshiiitr at gmail.com> wrote:
>>>>>
>>>>> On Mon, May 13, 2024 at 2:04 AM Viacheslav Dubeyko <slava at dubeyko.com> wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>>> On May 10, 2024, at 4:40 PM, Kanchan Joshi <joshi.k at samsung.com> wrote:
>>>>>>>
>>>>>>> Flexible Data Placement (FDP), as ratified in TP 4146a, allows the host
>>>>>>> to control the placement of logical blocks so as to reduce the SSD WAF.
>>>>>>>
>>>>>>> Userspace can send the data lifetime information using the write hints.
>>>>>>> The SCSI driver (sd) can already pass this information to the SCSI
>>>>>>> devices. This patch does the same for NVMe.
>>>>>>>
>>>>>>> It fetches the placement identifiers (plids) if the device supports FDP,
>>>>>>> and maps the incoming write hints to plids.
>>>>>>>
>>>>>>
>>>>>>
>>>>>> Great! Thanks for sharing the patch.
>>>>>>
>>>>>> Do we have documentation that explains how, for example, a kernel-space
>>>>>> file system can work with the block layer to employ FDP?
>>>>>
>>>>> This is primarily for user-driven/exposed hints. For file-system-driven
>>>>> hints, the scheme is really file-system specific and will therefore
>>>>> vary from one to another.
>>>>> F2FS is one (and, at the moment, the only) example. Its 'fs-based' policy
>>>>> can act as a reference for one way to go about it.
>>>>
>>>> Yes, I completely see the point. I would like to employ FDP in my
>>>> kernel-space file system (SSDFS), and I have a vision of how I can do it.
>>>> But I simply would like to see some documentation explaining the API
>>>> and the limitations of FDP for the case of kernel-space file systems.
>>>
>>> Nothing complicated for early experimentation.
>>> Once the FS has determined the hint value, it can put that into
>>> bio->bi_write_hint and send the bio down.
>>>
>>> If SSDFS cares about user-exposed hints too, it can choose hint values
>>> different from what is exposed to user space.
>>> Or it can do what F2FS does: use a mount option as a toggle to
>>> reuse the same values for either user hints or fs-defined hints.
>>
>> How many hint values can a file system use? Are there any limitations here?
>
> As many as are already defined (in rw_hint.h). It is possible to go higher too.
> There is no hard limitation per se. A write is not going to fail even if it
> sends a hint that does not exist.
>
OK. I see. Thanks.
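For reference, as far as I can see in include/linux/rw_hint.h, the currently
defined hint values are these (mapped onto the RWH_* constants from the fcntl
uapi header):

enum rw_hint {
	WRITE_LIFE_NOT_SET	= RWH_WRITE_LIFE_NOT_SET,	/* 0 */
	WRITE_LIFE_NONE		= RWH_WRITE_LIFE_NONE,		/* 1 */
	WRITE_LIFE_SHORT	= RWH_WRITE_LIFE_SHORT,		/* 2 */
	WRITE_LIFE_MEDIUM	= RWH_WRITE_LIFE_MEDIUM,	/* 3 */
	WRITE_LIFE_LONG		= RWH_WRITE_LIFE_LONG,		/* 4 */
	WRITE_LIFE_EXTREME	= RWH_WRITE_LIFE_EXTREME,	/* 5 */
} __packed;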
>> And how can a file system detect that it is an FDP-based device?
>
> It does not need to detect that. The file system sees write hints; FDP is a
> lower-level detail.
I see your point. But SSDFS doesn't need hints from the user-space side.
SSDFS has various types of segments (several types of metadata segments and
a user data segment), and I would like to use hints for these different types
of segments. I mean that SSDFS needs to decide when, and for which type of data
or metadata, to send such hints, without any instructions from user space.
Technically speaking, user space doesn't need to care about providing any hints
to SSDFS, because SSDFS can manage everything without such hints.
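As a rough illustration of what I have in mind (the helper, the segment-type
names, and the hint mapping below are all hypothetical, not real SSDFS code):

#include <linux/bio.h>
#include <linux/rw_hint.h>

/* Hypothetical segment types, only for illustration. */
enum ssdfs_seg_type {
	SSDFS_SB_SEG,		/* superblock segment, rewritten often */
	SSDFS_MAPTBL_SEG,	/* mapping table metadata */
	SSDFS_USER_DATA_SEG,	/* user data */
};

/*
 * Sketch: map the segment type to a data lifetime hint and attach it
 * to the bio before submission; no user-space input is involved.
 */
static void ssdfs_set_segment_write_hint(struct bio *bio,
					 enum ssdfs_seg_type type)
{
	switch (type) {
	case SSDFS_SB_SEG:
		bio->bi_write_hint = WRITE_LIFE_SHORT;	/* hot metadata */
		break;
	case SSDFS_MAPTBL_SEG:
		bio->bi_write_hint = WRITE_LIFE_MEDIUM;	/* warm metadata */
		break;
	case SSDFS_USER_DATA_SEG:
		bio->bi_write_hint = WRITE_LIFE_LONG;	/* colder data */
		break;
	default:
		bio->bi_write_hint = WRITE_LIFE_NOT_SET;
		break;
	}
}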
So, I would like to have the opportunity to change SSDFS behavior for
different types of devices:
if (zns_device)
	execute_zns_related_logic();
else if (fdp_device)
	execute_fdp_related_logic();
else /* conventional SSD */
	execute_conventional_ssd_logic();
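For the ZNS branch I can already do this kind of check (a rough sketch; as far
as I know bdev_is_zoned() is available via include/linux/blkdev.h), but I do not
see an equivalent interface for the FDP branch:

#include <linux/blkdev.h>
#include <linux/fs.h>

/*
 * Sketch: choose the placement strategy based on the device type.
 * The execute_*_logic() calls are the placeholders from the pseudocode above.
 */
static void ssdfs_select_placement_strategy(struct super_block *sb)
{
	struct block_device *bdev = sb->s_bdev;

	if (bdev_is_zoned(bdev)) {
		/* ZNS (or another zoned) device */
		execute_zns_related_logic();
	} else {
		/*
		 * Conventional vs. FDP-capable SSD: I do not see an
		 * in-kernel interface that would let a file system
		 * distinguish these two cases.
		 */
		execute_conventional_ssd_logic();
	}
}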
Does it mean that there is no way for a file system to detect an FDP-based device?
Thanks,
Slava.