[PATCHv6 11/11] iomap: add support for dma aligned direct-io

Eric Farman farman at linux.ibm.com
Tue Jun 28 20:18:34 PDT 2022


On Tue, 2022-06-28 at 11:20 -0400, Eric Farman wrote:
> On Tue, 2022-06-28 at 11:00 +0200, Halil Pasic wrote:
> > On Mon, 27 Jun 2022 09:36:56 -0600
> > Keith Busch <kbusch at kernel.org> wrote:
> > 
> > > On Mon, Jun 27, 2022 at 11:21:20AM -0400, Eric Farman wrote:
> > > > Apologies, it took me an extra day to get back to this, but it is
> > > > indeed this pass through that's causing our boot failures. I note
> > > > that the old code (in iomap_dio_bio_iter) did:
> > > > 
> > > >         if ((pos | length | align) & ((1 << blkbits) - 1))
> > > >                 return -EINVAL;
> > > > 
> > > > With blkbits equal to 12, the resulting mask was 0x0fff, so an
> > > > align value (from iov_iter_alignment) of 0x200 kicks us out.
> > > > 
> > > > The new code (in iov_iter_aligned_iovec), meanwhile, compares this:
> > > > 
> > > >                 if ((unsigned long)(i->iov[k].iov_base + skip) & addr_mask)
> > > >                         return false;
> > > > 
> > > > iov_base (and the output of the old iov_iter_alignment_iovec()
> > > > routine) is 0x200, but since addr_mask is 0x1ff this check
> > > > provides a different response than it used to.
> > > > 
> > > > To check this, I changed the comparator to len_mask (almost
> > > > certainly not the right answer, since addr_mask is then unused,
> > > > but it was good for a quick test), and our PV guests are able to
> > > > boot again with -next running in the host.
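
Spelling out the arithmetic for that 0x200 case, using the 0x0fff and
0x1ff masks quoted above:

        old check:  0x200 & 0x0fff = 0x200  ->  -EINVAL
        new check:  0x200 & 0x01ff = 0x000  ->  passes, the I/O proceeds
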
> > > 
> > > This raises more questions for me. It sounds like your process used
> > > to get an EINVAL error, and it wants to continue getting an EINVAL
> > > error instead of letting the direct-io request proceed. Is that
> > > correct?

Sort of. In the working case, I see a set of iovecs come through with
different bases and counts:

base	count	(hex)
0000	0001
0000	0200
0000	0400
0000	0800
0000	1000
0001	1000
0200	1000 << Change occurs here
0400	1000
0800	1000
1000	1000

EINVAL was being returned for any of these iovecs except the
page-aligned ones. Once the 0x200 request returns 0, the remainder of
the above list is skipped and the requests continue elsewhere in the
file.

Still not sure how our request is getting us into this code path. We're
simply asking to read a single block, but that block sits somewhere
within an image file.
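
If it helps to see that side by side, here is a throwaway userspace
sketch (not kernel code; 0x0fff is the old (1 << blkbits) - 1 with
blkbits = 12, and 0x1ff is the addr_mask observed above):

#include <stdio.h>

int main(void)
{
        unsigned long bases[] = { 0x0000, 0x0001, 0x0200, 0x0400, 0x0800, 0x1000 };
        unsigned long old_mask  = 0x0fff;  /* (1 << blkbits) - 1, blkbits = 12 */
        unsigned long addr_mask = 0x01ff;  /* addr_mask seen in iov_iter_aligned_iovec */

        for (unsigned int i = 0; i < sizeof(bases) / sizeof(bases[0]); i++)
                printf("base 0x%04lx: old check %s, new check %s\n", bases[i],
                       (bases[i] & old_mask)  ? "-EINVAL" : "ok",
                       (bases[i] & addr_mask) ? "reject"  : "ok");
        return 0;
}

The 0x200, 0x400, and 0x800 bases are the ones that flip from -EINVAL
to ok, which matches where the change shows up in the list above.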

> > 
> > That is my understanding as well. But I'm not familiar enough with
> > the code to tell where and how that -EINVAL gets handled.
> > 
> > BTW, let me just point out that the bounce buffering via swiotlb
> > needed for PV is not unlikely to mess up the alignment of things.
> > But I'm not sure if that is relevant here.

It's true that PV guests were the first to trip over this, but I've
since been able to reproduce it with a normal guest. As long as the
image file is connected with cache.direct=true, the guest is
unbootable. That should absolve the swiotlb bits of any fault here.
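
To be concrete about the non-PV configuration, it is along these lines
(illustrative only; the image path and the rest of the command line are
elided here, and cache=none is simply the -drive shorthand that turns
on cache.direct):

        qemu-system-s390x \
            -drive file=/path/to/guest.qcow2,format=qcow2,if=virtio,cache=none \
            [remaining options elided]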

> > 
> > Regards,
> > Halil
> > 
> > > If so, could you provide more details on what issue occurs with
> > > dispatching this request?
> 
> This error occurs while reading the initial boot record for a guest;
> QEMU states it was unable to read block zero from the device. The code
> that complains doesn't appear to have anything that says "oh, got
> EINVAL, try it this other way," but I haven't chased down if/where
> something in between is expecting that and handling it in some unique
> way. I -think- I have an easier reproducer now, so maybe I'll be able
> to get a better answer to this question.
> 
> > > If you really need to restrict the address alignment to the
> > > storage's logical block size, I think your storage driver needs to
> > > set the dma_alignment queue limit to that value.
> 
> It's possible that there's a problem in the virtio stack here, but the
> failing configuration is a qcow image on the host rootfs

(on an ext4 filesystem)

> , so it's not using any distinct driver. The bdev request queue that
> ends up being used is the one allocated out of blk_alloc_queue, so
> changing dma_alignment there wouldn't work.
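
For completeness, my understanding of that suggestion in driver terms
is roughly the sketch below; the function name and lbs parameter are
placeholders, and as noted above there is no driver in this
qcow-on-ext4 path that could do this:

#include <linux/blkdev.h>

/*
 * Sketch only: a block driver that really needs user buffers aligned
 * to its logical block size would publish that requirement as the
 * queue's dma_alignment mask at setup time.
 */
static void my_setup_queue(struct request_queue *q, unsigned int lbs)
{
        /* dma_alignment is a mask, e.g. 0x0fff for a 4096-byte block */
        blk_queue_dma_alignment(q, lbs - 1);
}

As I read the new check, publishing a 4096-byte alignment that way
would make addr_mask 0x0fff again, and the 0x200 base would go back to
being rejected.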



