[PATCH] nvmet-passthru: Cleanup nvmet_passthru_map_sg()
Logan Gunthorpe
logang at deltatee.com
Thu Oct 15 14:40:52 EDT 2020
On 2020-10-15 12:01 p.m., Christoph Hellwig wrote:
> On Thu, Oct 15, 2020 at 10:01:30AM -0600, Logan Gunthorpe wrote:
>>
>>
>> On 2020-10-15 1:56 a.m., Christoph Hellwig wrote:
>>> On Fri, Oct 09, 2020 at 05:18:16PM -0600, Logan Gunthorpe wrote:
>>>> static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
>>>> {
>>>> - int sg_cnt = req->sg_cnt;
>>>> struct scatterlist *sg;
>>>> int op_flags = 0;
>>>> struct bio *bio;
>>>> int i, ret;
>>>>
>>>> + if (req->sg_cnt > BIO_MAX_PAGES)
>>>> + return -EINVAL;
>>>
>>> Don't you need to handle larger requests as well? Or at least
>>> limit MDTS?
>>
>> No and Yes: there is already code in nvmet_passthru_override_id_ctrl()
>> to limit MDTS based on max_segments and max_hw_sectors.
>
> But those are entirely unrelated to the bio size. BIO_MAX_PAGES is
> 256, so with 4k pages and assuming none can't be merged that is 1MB,
> while max_segments/max_hw_sectors could be something much larger.
Isn't it constrained by max_segments, which is set to NVME_MAX_SEGS=127
(for PCI)? That's less than BIO_MAX_PAGES...
Would the NVMe driver even work if max_segments were greater than
BIO_MAX_PAGES? Correct me if I'm wrong, but it looks like
blk_rq_map_sg() will only map one bio within a request. So there has to
be one bio per request by the time it hits nvme_map_data()...
So I'm not really sure how we could even construct a valid passthrough
request in nvmet_passthru_map_sg() that is larger than BIO_MAX_PAGES.
If you want me to send a patch to future-proof the MDTS limit with
BIO_MAX_PAGES, I can do that, but it doesn't look like it will have any
effect right now unless big things change.
Logan