atomic queue limits updates v3

Ming Lei ming.lei at redhat.com
Wed Jan 31 19:24:07 PST 2024


On Wed, Jan 31, 2024 at 02:03:46PM +0100, Christoph Hellwig wrote:
> Hi Jens,
> 
> currently queue limits updates are a mess in that they are updated one
> limit at a time, which makes cross-checking them against other
> limits hard and also makes it hard to provide atomicity.
> 
> This series tries to change this by updating the whole set of queue
> limits atomically.  This is done in two ways:
> 
>  - for the initial setup the queue_limits structure is simply passed to
>    the queue/disk allocation helpers and applied there after validation.
>  - for the (relatively few) cases that update limits at runtime a pair
>    of helpers to take a snapshot of the current limits and to commit it
>    after picking up the caller's changes are provided.
> 
> As the series is big enough it only converts two drivers - virtio_blk as
> a heavily used driver in virtualized setups, and loop as one that actually
> does runtime updates while being fairly simple.  I plan to update most
> drivers for this merge window, although SCSI will probably have to wait
> for the next one given that it will need extensive API changes in the
> LLDD and ULD interfaces.
> 
> Changes since v2:
>  - fix the physical block size default
>  - use PAGE_SECTORS_SHIFT more 
> 
> Changes since v1:
>  - remove a spurious NULL return in blk_alloc_queue
>  - keep the existing max_discard_sectors == 0 behavior
>  - drop the nvme discard limit update hack patch - it will go into
>    the series updating nvme instead
>  - drop a chunk_sector check
>  - use PAGE_SECTORS in a few places
>  - document the checks and defaults in blk_validate_limits
>  - various spelling fixes

For the whole series:

Reviewed-by: Ming Lei <ming.lei at redhat.com>

Thanks,
Ming

More information about the Linux-nvme mailing list