[PATCH] nvmet-passthru: Cleanup nvmet_passthru_map_sg()

Douglas Gilbert dgilbert at interlog.com
Thu Oct 15 13:24:34 EDT 2020


On 2020-10-15 12:01 p.m., Logan Gunthorpe wrote:
> 
> 
> On 2020-10-15 1:56 a.m., Christoph Hellwig wrote:
>> On Fri, Oct 09, 2020 at 05:18:16PM -0600, Logan Gunthorpe wrote:
>>>   static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
>>>   {
>>> -	int sg_cnt = req->sg_cnt;
>>>   	struct scatterlist *sg;
>>>   	int op_flags = 0;
>>>   	struct bio *bio;
>>>   	int i, ret;
>>>   
>>> +	if (req->sg_cnt > BIO_MAX_PAGES)
>>> +		return -EINVAL;
>>
>> Don't you need to handle larger requests as well?  Or at least
>> limit MDTS?
> 
> No and Yes: there is already code in nvmet_passthru_override_id_ctrl()
> to limit MDTS based on max_segments and max_hw_sectors. So any request
> that doesn't pass this check should be from a misbehaving client and an
> error should be appropriate.

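(For reference, the clamp Logan mentions in
nvmet_passthru_override_id_ctrl() looks roughly like this -- paraphrased
from my reading of the tree, so the exact expressions may differ:

    /* derive mdts from the passthru ctrl's segment and sector limits */
    max_hw_sectors = min_not_zero(pctrl->max_segments << (PAGE_SHIFT - 9),
                                  pctrl->max_hw_sectors);
    page_shift = NVME_CAP_MPSMIN(ctrl->cap) + 12;
    id->mdts = ilog2(max_hw_sectors) + 9 - page_shift;

i.e. MDTS is taken from whichever of max_segments or max_hw_sectors is
more restrictive, so a well-behaved host should never build a request
that trips the new check.)
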
Running the numbers: a PAGE_SIZE of 4096 bytes and BIO_MAX_PAGES at 256
give 1 MiB per bio. From memory, the block layer won't accept a single
request bigger than 4 MiB (or 8 MiB). However, it is possible that the
sgl was built with sgl_alloc_order(order > 0, chainable=false), in which
case the maximum payload of one (unchained) bio goes up to:
    PAGE_SIZE * (2^order) * 256

I'm using order=3 by default in my driver, so that works out to
4096 * 8 * 256 = 8 MiB, and one (unchained) bio will hold as much as
(or more than) the block layer will take.
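
To make the order=3 case concrete, the allocation amounts to something
like the following sketch (not the exact driver code; names and sizes
here are illustrative):

    unsigned int nents;
    struct scatterlist *sgl;

    /* 8 MiB split into 256 elements of PAGE_SIZE << 3 (32 KiB) each */
    sgl = sgl_alloc_order(8ULL << 20, 3, false, GFP_KERNEL, &nents);
    if (!sgl)
        return -ENOMEM;
    /* ... map into a bio, submit ... */
    sgl_free_order(sgl, 3);

Each of those 256 elements becomes one bio_vec when the sgl is mapped,
so a single 256-vector bio carries the full 8 MiB without chaining.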

Doug Gilbert