[BUG] I/O timeouts and system freezes on Kingston A2000 NVME with BCACHEFS

Kent Overstreet kent.overstreet at linux.dev
Fri Jan 19 13:34:05 PST 2024


On Fri, Jan 19, 2024 at 02:22:04PM -0700, Jens Axboe wrote:
> On 1/19/24 5:25 AM, Mia Kanashi wrote:
> > This issue was originally reported here: https://github.com/koverstreet/bcachefs/issues/628
> > 
> > Transferring large numbers of files from btrfs to bcachefs causes
> > I/O timeouts and freezes the whole system. This doesn't seem to be
> > related to btrfs itself but rather to heavy I/O on the drive, as it
> > also happens without btrfs being mounted. Transferring the files to
> > an HDD first, and then from it to the bcachefs on the NVMe,
> > sometimes avoids the problem. The problem only happens on bcachefs,
> > not on btrfs or ext4. It doesn't happen on the HDD, and sadly I
> > can't test with other NVMe drives. The behaviour when it is frozen
> > is like this: all drive accesses stall. Anything not cached in RAM
> > blocks, so every app already loaded into RAM continues to function,
> > but the moment it tries to access the drive it freezes, until the
> > drive is reset and those abort status messages appear in dmesg.
> > After that the system is unfrozen for a moment; if you keep copying
> > files, the problem reoccurs.
> > 
> > This drive is known to have had power-management problems in the
> > past:
> > https://wiki.archlinux.org/title/Solid_state_drive/NVMe#Troubleshooting
> > But those problems were since fixed with kernel workarounds /
> > firmware updates. This issue may be related: perhaps bcachefs does
> > something different from the other filesystems, and the workarounds
> > don't apply, which would explain why the bug occurs only on it. It
> > may be a problem in the nvme subsystem, or just some edge case in
> > bcachefs, who knows. I tried disabling ASPM and setting the latency
> > to 0 as suggested, and it didn't fix the problem, so I don't know.
> > If this is indeed specific to this drive, it would be hard to
> > reproduce.
> 
> From a quick look, looks like a broken drive/firmware. It is suspicious
> that all failed IO is 256 blocks. You could try and limit the transfer
> size and see if that helps:
> 
> # echo 64 > /sys/block/nvme0n1/queue/max_sectors_kb
> 
> Or maybe the transfer size is just a red herring, who knows. The error
> code seems wonky:
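
The max_sectors_kb suggestion above can be sketched end to end. This
version runs against a throwaway fake sysfs tree so it works without
root; on a real machine the path is
/sys/block/nvme0n1/queue/max_sectors_kb and root is required:

```shell
# Stand-in sysfs tree so the sketch runs unprivileged (assumption for
# illustration only; use /sys directly on the affected machine).
sysroot=$(mktemp -d)
queue="$sysroot/block/nvme0n1/queue/max_sectors_kb"
mkdir -p "$(dirname "$queue")"
echo 512 > "$queue"          # pretend default per-request cap, in KiB

# The actual workaround: cap each request at 64 KiB (128 sectors),
# well under the 256-block transfers seen in the failed IO.
echo 64 > "$queue"
cat "$queue"
```

If smaller transfers make the timeouts stop, that points at the drive
mishandling large requests rather than at bcachefs.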

Does nvme have a blacklist/quirks mechanism, if that ends up resolving
it?
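
For reference, the nvme PCI driver does carry a per-device quirks table
(nvme_id_table in drivers/nvme/host/pci.c), and the Kingston A2000
already has a power-state entry there. A sketch of that kind of entry
(IDs from the upstream table; treat the exact flag as illustrative):

```
/* Sketch of an nvme_id_table entry, drivers/nvme/host/pci.c.
 * Matches the drive by PCI vendor:device ID and attaches quirk
 * flags that the driver honours at probe time. */
{ PCI_DEVICE(0x2646, 0x2263),	/* KINGSTON A2000 NVMe SSD */
	.driver_data = NVME_QUIRK_NO_DEEPEST_PS, },
```

So if the problem turns out to be a firmware bug triggered by large
transfers, a new quirk flag keyed on the same ID would be the usual
place to hang a workaround.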



More information about the Linux-nvme mailing list