[PATCH] nvme-pci: 512 byte aligned dma pool segment quirk

Robert Beckett bob.beckett at collabora.com
Thu Nov 14 08:28:48 PST 2024



 ---- On Thu, 14 Nov 2024 14:13:52 +0000  Paweł Anikiel  wrote --- 
 > On Thu, Nov 14, 2024 at 2:24 PM Robert Beckett
 > <bob.beckett at collabora.com> wrote:
 > > This is interesting.
 > > I had the same idea previously. I initially just changed the hard-coded 256 / 8 to use 31 instead, which should have ensured the last entry of each segment never gets used.
 > > When I tested that, it no longer failed, which was a good sign. So then I modified it to only do that on the last 256 byte segment of a page, but then it started failing again.
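(For reference, that first attempt was essentially a one-liner in nvme_pci_setup_prps(); paraphrasing from memory rather than copying from my tree:

-	if (nprps <= (256 / 8)) {
+	if (nprps <= 31) {	/* never hand out the last 8-byte slot */
 		pool = dev->prp_small_pool;

i.e. cap small-pool lists at 31 entries so the last slot of each 256 byte segment is never used.)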
 > 
 > Could you elaborate on the "only do that on the last 256 byte segment of
 > a page" part? How did you check which chunk of the page would be
 > allocated before choosing the dma pool?
 > 
 > > I never saw any bus errors during my testing, just wrong data being read, which then fails image verification. I was expecting IOMMU error logs if it was chasing a chain into nowhere, i.e. if it always interpreted the last entry in a page as a link. I never saw any IOMMU errors.
 > 
 > Maybe I misspoke; the "bus error" part was just my speculation. I
 > didn't look at the IOMMU logs or anything like that.
 > 
 > > I'd be glad to if you could share your testing method.
 > 
 > I dumped all the nvme transfers before the crash happened (using
 > tracefs), and I saw a read of size 264 = 8 + 256, which led me to the
 > chaining theory. To test this claim, I wrote a simple pci device
 > driver which creates one IO queue and submits a read command where the
 > PRP list is set up in a way that tests if the controller treats it as
 > a chained list or not. I ran it, and it indeed treated the last PRP
 > entry as a chained pointer.
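If I understand the setup, the layout is roughly the following (untested sketch with my own variable names, assuming a 4k NVME_CTRL_PAGE_SIZE and pre-mapped data/canary buffers):

  void *page;
  __le64 *prps;
  dma_addr_t page_dma, prp_dma;
  int i;

  /* place a 32-entry PRP list in the last 256 bytes of a page, the
   * same position the 256 byte small pool can hand out */
  page = dma_alloc_coherent(dev, PAGE_SIZE, &page_dma, GFP_KERNEL);
  prps = page + PAGE_SIZE - 256;
  prp_dma = page_dma + PAGE_SIZE - 256;

  for (i = 0; i < 31; i++)
          prps[i] = cpu_to_le64(data_dma + i * NVME_CTRL_PAGE_SIZE);
  prps[31] = cpu_to_le64(canary_dma);     /* data pointer, or chain? */

with the read sized so that entry 31 must be a plain data pointer (PRP1 plus 32 list entries, which matches the 264 = 8 + 256 transfer), then checking after completion whether the canary page received read data or was instead fetched as a further PRP segment.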
Hmm, I guess a simple debugfs trigger file could be used to construct specially formulated requests. That would work as a general debug tool.
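Something like the below is what I have in mind; nvme_submit_crafted_read() is a hypothetical stand-in for whatever builds the specially formulated request:

  static ssize_t prp_test_write(struct file *file, const char __user *buf,
                                size_t len, loff_t *ppos)
  {
          nvme_submit_crafted_read();     /* hypothetical request builder */
          return len;
  }

  static const struct file_operations prp_test_fops = {
          .owner = THIS_MODULE,
          .write = prp_test_write,
  };

  /* any write to /sys/kernel/debug/nvme_prp_test fires one request */
  debugfs_create_file("nvme_prp_test", 0200, NULL, NULL, &prp_test_fops);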
Though at this point the simple dmapool alignment param usage fixes both of these scenarios, so continuing to put effort into understanding this is somewhat academic. I am trying to get answers out of the vendor to confirm these theories, which I hope will be more conclusive than our combined inference from testing.
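(For anyone finding this in the archive: the alignment param usage is the patch in the subject line, which amounts to something like

  size_t small_align = 256;

  if (dev->ctrl.quirks & NVME_QUIRK_DMAPOOL_ALIGN_512)
          small_align = 512;

  dev->prp_small_pool = dma_pool_create("prp list 256", dev->dev,
                                        256, small_align, 0);

so that no 256 byte segment can ever end exactly on a page boundary.)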
 > 
 > It is possible that this is not the only thing that's wrong. Could you
 > share your patch that checks your "only do that on the last 256 byte
 > segment of a page" idea?
 > 
Unfortunately I have already rebased that change away in favour of the new one.
I can go hunting in my reflog to see if I can find it again, but it is probably easier to just implement it again.
I just hacked a threshold parameter into the dmapool allocator that told it to allocate a different segment if the chosen one was the last in a page and the segment size was over the threshold.
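From memory it was morally equivalent to this caller-side version (the actual hack lived inside the dmapool allocator itself; treat it as a sketch of the idea, not the original diff):

  prp_list = dma_pool_alloc(pool, GFP_KERNEL, &prp_dma);
  if (prp_list && (prp_dma & (PAGE_SIZE - 1)) == PAGE_SIZE - 256) {
          /* last 256 byte segment of a page: grab the next segment
           * before releasing the unwanted one, so the allocator
           * doesn't hand it straight back */
          void *last = prp_list;
          dma_addr_t last_dma = prp_dma;

          prp_list = dma_pool_alloc(pool, GFP_KERNEL, &prp_dma);
          dma_pool_free(pool, last, last_dma);
  }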



