[PATCH 2/2 v2] UBI: Block: Add blk-mq support

Jens Axboe axboe at fb.com
Tue Jan 13 16:23:00 PST 2015


On 01/13/2015 04:36 PM, Richard Weinberger wrote:
> 
> 
> Am 14.01.2015 um 00:30 schrieb Jens Axboe:
>>> If I understand you correctly, it can happen that blk_rq_bytes() returns
>>> more bytes than blk_rq_map_sg() allocated, right?
>>
>> No, the number of bytes will be the same, no magic is involved :-)
> 
> Good to know. :)
> 
>> But let's say the initial request has 4 bios, each with 2 pages, for a
>> total of 8 segments. Let's further assume that the pages in each bio are
>> contiguous, so that blk_rq_map_sg() will map this to 4 sg elements, each
>> 2 pages long.
>>
>> Now, this may already be handled just fine, and you don't need to
>> update/store the actual sg count. I just looked at the source, and I'm
>> assuming it'll do the right thing (ubi_read_sg() will bump the active sg
>> element once its size has been consumed), but I don't have
>> ubi_read_sg() in my tree to verify.
> 
> Currently the sg count is hard coded to UBI_MAX_SG_COUNT.

The max count doesn't matter, that just gives you a guarantee that
you'll never receive a request that maps to more than that many
segments. The point I'm trying to make is that if you receive 8
segments and blk_rq_map_sg() maps them to 4 sg elements, then you'd
better not look at elements 5..8 after the mapping. Whatever the max
is doesn't matter for this conversation.
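
Roughly what I mean, as a sketch (untested, not the actual ubiblock
code, function name made up):

#include <linux/blkdev.h>
#include <linux/scatterlist.h>

/*
 * Only the first 'nents' entries returned by blk_rq_map_sg() are
 * valid, even if the request originally had more bio segments.
 */
static int sketch_map_request(struct request *req, struct scatterlist *sgl)
{
	struct scatterlist *sg;
	int nents, i;

	nents = blk_rq_map_sg(req->q, req, sgl);

	/* Walk only the mapped entries; sgl[nents..] must not be touched. */
	for_each_sg(sgl, sg, nents, i)
		pr_debug("sg %d: %u bytes\n", i, sg->length);

	return nents;
}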

> I'm sorry, I forgot to CC you and hch on this patch:

Which is as I suspected: you process each segment up to the length
specified, so you don't need to track the count returned from
blk_rq_map_sg().
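
In other words, the consumer only needs the total byte count from
blk_rq_bytes(). Something like this (again just a sketch, not what
ubi_read_sg() actually looks like, names made up):

#include <linux/kernel.h>
#include <linux/scatterlist.h>
#include <linux/string.h>

/*
 * Fill 'to_read' bytes into the sg list, bumping to the next element
 * once the current one is consumed. No sg count needed.
 */
static void sketch_fill_sgl(struct scatterlist *sg, const char *src,
			    size_t to_read)
{
	while (to_read && sg) {
		size_t chunk = min_t(size_t, to_read, sg->length);

		memcpy(sg_virt(sg), src, chunk);
		src += chunk;
		to_read -= chunk;
		sg = sg_next(sg);
	}
}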

-- 
Jens Axboe
