[PATCH v3 09/15] block: Add checks to merging of atomic writes
Nilay Shroff
nilay at linux.ibm.com
Mon Feb 12 02:54:44 PST 2024
>+static bool rq_straddles_atomic_write_boundary(struct request *rq,
>+					unsigned int front,
>+					unsigned int back)
>+{
>+	unsigned int boundary = queue_atomic_write_boundary_bytes(rq->q);
>+	unsigned int mask, imask;
>+	loff_t start, end;
>+
>+	if (!boundary)
>+		return false;
>+
>+	start = rq->__sector << SECTOR_SHIFT;
>+	end = start + rq->__data_len;
>+
>+	start -= front;
>+	end += back;
>+
>+	/* We're longer than the boundary, so must be crossing it */
>+	if (end - start > boundary)
>+		return true;
>+
>+	mask = boundary - 1;
>+
>+	/* start/end are boundary-aligned, so cannot be crossing */
>+	if (!(start & mask) || !(end & mask))
>+		return false;
>+
>+	imask = ~mask;
>+
>+	/* Top bits are different, so crossed a boundary */
>+	if ((start & imask) != (end & imask))
>+		return true;
>+
>+	return false;
>+}
>+
Shall we also ensure here that we don't exceed the maximum atomic write size
supported by the device? It seems that if the boundary size is not advertised
by the device (in fact, I have one NVMe drive where the boundary size is zero,
i.e. nabo/nabspf/nawupf are all zero but awupf is non-zero), then we
unconditionally allow merging. However, it may be possible that after merging
the total size of the request exceeds the atomic-write-unit-max size supported
by the device, and if that happens we would most probably catch it only very
late, in the driver code (if the device is NVMe).

So would it be a good idea to validate here whether the merged request could
exceed the atomic-write-unit-max size supported by the device before we allow
merging, and to disallow the merge if it would?
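
Something along the following lines is roughly what I have in mind for atomic
write requests (an untested sketch only; queue_atomic_write_max_bytes() is just
an illustrative name for whatever per-queue limit ends up holding the maximum
atomic write size, it is not an existing helper):

	/*
	 * Sketch: refuse a merge when the combined request would exceed
	 * the device's maximum atomic write size.  The limit helper name
	 * is illustrative, not taken from this series.
	 */
	static bool rq_exceeds_atomic_write_max(struct request *rq,
						unsigned int extra_bytes)
	{
		unsigned int max_bytes = queue_atomic_write_max_bytes(rq->q);

		if (!max_bytes)
			return false;

		return blk_rq_bytes(rq) + extra_bytes > max_bytes;
	}

The merge paths could then pass in the size of the bio or request being merged,
e.g.:

	if (rq_exceeds_atomic_write_max(req, bio->bi_iter.bi_size))
		return false;

and reject the merge in that case instead of letting the oversized atomic
write reach the driver.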
Thanks,
--Nilay