[PATCH 1/8] block: set the disk capacity to 0 in blk_mark_disk_dead
Hannes Reinecke
hare at suse.de
Thu Oct 20 23:49:30 PDT 2022
On 10/20/22 12:56, Christoph Hellwig wrote:
> nvme and xen-blkfront are already doing this to stop buffered writes from
> creating dirty pages that can't be written out later. Move it to the
> common code. Note that this follows the xen-blkfront version that does
> not send an uevent, as the uevent is a bit confusing when the device is
> about to go away a little later, and the size change is just to stop
> buffered writes faster.
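
[ Aside, for anyone following along: as far as I understand it, the zero
  capacity stops buffered writers because the block device write path
  refuses writes beyond the device size. A rough sketch from memory of
  what block/fops.c does -- not a verbatim copy, and with the error
  handling elided:

        static ssize_t blkdev_write_iter(struct kiocb *iocb, struct iov_iter *from)
        {
                struct block_device *bdev = I_BDEV(iocb->ki_filp->f_mapping->host);
                loff_t size = bdev_nr_bytes(bdev);

                /* with the capacity set to 0, every write starts past EOF */
                if (iocb->ki_pos >= size)
                        return -ENOSPC;

                /* writes that do fit are clamped to the remaining size */
                iov_iter_truncate(from, size - iocb->ki_pos);
                ...
        }

  so no new dirty pages can be created once set_capacity(disk, 0) has
  taken effect. ]
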
>
> This also removes the comment about the ordering from nvme: not only is
> bd_mutex gone entirely, it also hadn't been used for locking updates to
> the disk size for a long time before that, so the ordering requirement
> documented there no longer applies.
>
> Signed-off-by: Christoph Hellwig <hch at lst.de>
> ---
>  block/genhd.c                | 3 +++
>  drivers/block/xen-blkfront.c | 1 -
>  drivers/nvme/host/core.c     | 7 +------
>  3 files changed, 4 insertions(+), 7 deletions(-)
>
> diff --git a/block/genhd.c b/block/genhd.c
> index 17b33c62423df..2877b5f905579 100644
> --- a/block/genhd.c
> +++ b/block/genhd.c
> @@ -555,6 +555,9 @@ void blk_mark_disk_dead(struct gendisk *disk)
>  {
>          set_bit(GD_DEAD, &disk->state);
>          blk_queue_start_drain(disk->queue);
> +
> +        /* stop buffered writers from dirtying pages that can't be written out */
> +        set_capacity(disk, 0);
>  }
>  EXPORT_SYMBOL_GPL(blk_mark_disk_dead);
> 
> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> index 35b9bcad9db90..b28489290323f 100644
> --- a/drivers/block/xen-blkfront.c
> +++ b/drivers/block/xen-blkfront.c
> @@ -2129,7 +2129,6 @@ static void blkfront_closing(struct blkfront_info *info)
>          if (info->rq && info->gd) {
>                  blk_mq_stop_hw_queues(info->rq);
>                  blk_mark_disk_dead(info->gd);
> -                set_capacity(info->gd, 0);
>          }
> 
>          for_each_rinfo(info, rinfo, i) {
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 059737c1a2c19..44a5321743128 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -5106,10 +5106,7 @@ static void nvme_stop_ns_queue(struct nvme_ns *ns)
>  /*
>   * Prepare a queue for teardown.
>   *
> - * This must forcibly unquiesce queues to avoid blocking dispatch, and only set
> - * the capacity to 0 after that to avoid blocking dispatchers that may be
> - * holding bd_butex. This will end buffered writers dirtying pages that can't
> - * be synced.
> + * This must forcibly unquiesce queues to avoid blocking dispatch.
>   */
>  static void nvme_set_queue_dying(struct nvme_ns *ns)
>  {
> @@ -5118,8 +5115,6 @@ static void nvme_set_queue_dying(struct nvme_ns *ns)
> 
>          blk_mark_disk_dead(ns->disk);
>          nvme_start_ns_queue(ns);
> -
> -        set_capacity_and_notify(ns->disk, 0);
>  }
> 
>  /**
I'm ever so slightly concerned about not sending the uevent anymore; MD
for one relies on that event to figure out if a device is down.
And I'm also relatively sure that there has been relatively little
testing of MD on Xen.
What do we lose by using the 'notify' version instead?
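
[ For reference, a rough sketch from memory of what the 'notify' variant
  adds on top of set_capacity() in block/genhd.c -- paraphrased, not a
  verbatim copy:

        bool set_capacity_and_notify(struct gendisk *disk, sector_t size)
        {
                sector_t capacity = get_capacity(disk);
                char *envp[] = { "RESIZE=1", NULL };

                set_capacity(disk, size);

                /* no events for hidden or not-yet-live disks */
                if (size == capacity || !disk_live(disk) ||
                    (disk->flags & GENHD_FL_HIDDEN))
                        return false;

                /* changes to or from a zero capacity are not announced */
                if (!capacity || !size)
                        return false;

                kobject_uevent_env(&disk_to_dev(disk)->kobj, KOBJ_CHANGE, envp);
                return true;
        }

  If I remember that zero-size check correctly, the RESIZE uevent is only
  sent for changes between two non-zero sizes anyway, but I may well be
  misreading how the MD/udev path picks this up. ]
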
Cheers,
Hannes
--
Dr. Hannes Reinecke                Kernel Storage Architect
hare at suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Ivo Totev, Andrew
Myers, Andrew McDonald, Martje Boudien Moerman