[PATCH] nvme-pci: 512 byte aligned dma pool segment quirk

Paweł Anikiel panikiel at google.com
Thu Nov 14 06:13:52 PST 2024


On Thu, Nov 14, 2024 at 2:24 PM Robert Beckett
<bob.beckett at collabora.com> wrote:
> This is interesting.
> I had the same idea previously. I initially just changed the hard coded 256 / 8 to use 31 instead, which should have ensured the last entry of each segment never gets used.
> When I tested that, it no longer failed, which was a good sign. So then I modified it to only do that on the last 256 byte segment of a page, but then it started failing again.
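
(For context, I'm assuming the "hard coded 256 / 8" is the small-pool
cutoff in nvme_pci_setup_prps(), which from memory looks roughly like
this - not verbatim, names vary between kernel versions:

	nprps = DIV_ROUND_UP(length, NVME_CTRL_PAGE_SIZE);
	if (nprps <= (256 / 8)) {
		/* at most 32 entries: use the 256-byte "prp list 256" pool */
		pool = dev->prp_small_pool;
	} else {
		/* otherwise use the page-sized "prp list page" pool */
		pool = dev->prp_page_pool;
	}

so with the cutoff lowered to 31, anything routed to the small pool
writes at most 31 of the 32 slots and the last slot of the segment
stays unused, as you describe.)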

Could you elaborate on the "only do that on the last 256 byte segment of
a page" part? How did you check which chunk of the page would be
allocated before choosing the dma pool?
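
For example, did you allocate from the small pool first and then look at
the returned dma address, something along these lines? (Just a sketch of
what I imagine such a check could look like, with a hypothetical fallback
to the page pool; I don't know what your patch actually does, and it
assumes the pool's chunks keep their in-page offset in dma space.)

	prp_list = dma_pool_alloc(dev->prp_small_pool, GFP_ATOMIC, &prp_dma);
	if (prp_list && (prp_dma & (PAGE_SIZE - 1)) == PAGE_SIZE - 256) {
		/* last 256-byte chunk of its page: the controller might
		 * treat the final entry as a chain pointer, so give the
		 * chunk back and take a full page instead */
		dma_pool_free(dev->prp_small_pool, prp_list, prp_dma);
		prp_list = dma_pool_alloc(dev->prp_page_pool, GFP_ATOMIC,
					  &prp_dma);
	}

Or did you determine the chunk's position some other way before deciding
which pool to use?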

> I never saw any bus error during my testing, just wrong data being read, which then fails image verification. I was expecting iommu error logs if it was trying to follow a chain into nowhere, i.e. if it always interpreted the last entry in a page as a link. I never saw any iommu errors.

Maybe I misspoke; the "bus error" part was just my speculation. I
didn't look at the IOMMU logs or anything like that.

> I'd be glad to if you could share your testing method.

I dumped all the NVMe transfers before the crash happened (using
tracefs), and I saw a read of size 264 = 8 + 256, which led me to the
chaining theory. To test this theory, I wrote a simple PCI device
driver which creates one IO queue and submits a read command whose
PRP list is set up in a way that reveals whether the controller treats
it as a chained list or not. I ran it, and the controller did indeed
treat the last PRP entry as a chain pointer.
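
Roughly, the layout idea is the one in the toy model below (simplified
and self-contained on purpose: the dma addresses are made up, and the
real test of course sets up the queue and submits an actual read
command through it):

	/* Toy model of the PRP-chaining probe layout, illustration only. */
	#include <stdint.h>
	#include <stdio.h>

	#define PAGE_SZ   4096ULL
	#define SEG_SLOTS (256 / 8)	/* 32 PRP entries per 256-byte segment */

	int main(void)
	{
		uint64_t prp_seg[SEG_SLOTS];	/* PRP list segment under test */
		uint64_t buf_a = 0xa0000000ULL;	/* made-up dma address of buffer A */
		uint64_t buf_b = 0xb0000000ULL;	/* made-up dma address of buffer B */

		/* All 32 slots are plain data pointers into buffer A, which is
		 * what the spec allows for a list that needs no chaining. */
		for (int i = 0; i < SEG_SLOTS; i++)
			prp_seg[i] = buf_a + PAGE_SZ * i;

		/* Decoy: the page that the *last* slot points at is pre-filled
		 * so its first 8 bytes look like a PRP entry pointing at
		 * buffer B.  A correct controller overwrites that page with
		 * the final 4K of read data; a controller that wrongly chains
		 * on the last slot writes the final 4K into buffer B instead. */
		uint64_t decoy_entry = buf_b;	/* written into A's last page */

		printf("last slot -> %#llx, decoy entry there -> %#llx\n",
		       (unsigned long long)prp_seg[SEG_SLOTS - 1],
		       (unsigned long long)decoy_entry);
		return 0;
	}

After the read completes, checking where the known on-disk pattern ended
up (last page of A vs. buffer B) tells the two behaviours apart.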

It is possible that this is not the only thing that's wrong. Could you
share the patch that implements your "only do that on the last 256 byte
segment of a page" idea?


