Disk Says It is Full But There's Nothing on It

Eric Robinson eric.robinson at psmnv.com
Thu Aug 8 12:28:11 PDT 2024


> -----Original Message-----
> From: Linux-nvme <linux-nvme-bounces at lists.infradead.org> On Behalf Of Eric
> Robinson
> Sent: Tuesday, August 6, 2024 10:28 PM
> To: Keith Busch <kbusch at kernel.org>
> Cc: Daniel Wagner <dwagner at suse.de>; linux-nvme at lists.infradead.org
> Subject: RE: Disk Says It is Full But There's Nothing on It
>
> > -----Original Message-----
> > From: Keith Busch <kbusch at kernel.org>
> > Sent: Tuesday, August 6, 2024 9:20 PM
> > To: Eric Robinson <eric.robinson at psmnv.com>
> > Cc: Daniel Wagner <dwagner at suse.de>; linux-nvme at lists.infradead.org
> > Subject: Re: Disk Says It is Full But There's Nothing on It
> >
> > On Tue, Aug 06, 2024 at 08:28:05PM +0000, Eric Robinson wrote:
> > > > something bogus (nsze, ncap, nuse).
> > >
> > > [root@store11b zpool0]# nvme id-ns /dev/nvme0n1
> > > NVME Identify Namespace 1:
> > > nsze    : 0x6fc400000
> > > ncap    : 0x6fc400000
> > > nuse    : 0x6fc400000
> > > nsfeat  : 0
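> >
> > (For scale, assuming the 512 B LBA format shown in the nvme list output
> > below: an nuse of 0x6fc400000 blocks is 30,001,856,512 * 512 =
> > 15,360,950,534,144 bytes, roughly 15.36 TB, i.e. the entire namespace
> > reported as in use.)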
> >
> > Is this after a fresh format? If you've no data on here that you wish
> > to save, run something like
> >
> >   # blkdiscard /dev/nvme0n1
> >
>
> That did the trick. First, I issued the blkdiscard command against /dev/nvme0n1.
> I then ran nvme list several times in a row, and each time I could see the Usage
> column decreasing, so the drive was clearly cleaning up in the background. I then
> ran blkdiscard against each of the other drives in sequence; each one took 24
> seconds to return to a prompt. At the end, I ran nvme list again, with the
> following results. It is clear that each of the drives is still working through
> its discard.
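>
> The discard pass amounted to something like the following (a rough sketch
> rather than the literal commands, and destructive to anything still stored
> on those namespaces):
>
>   # for dev in /dev/nvme{0..7}n1; do blkdiscard "$dev"; done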
>
> [root@store11c ~]# nvme list
> Node                  Generic               SN                   Model                                    Namespace  Usage                      Format           FW Rev
> --------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- --------
> /dev/nvme0n1          /dev/ng0n1            6240A0Y0TCG8         Dell Ent NVMe CM6 RI 15.36TB             0x1          1.18  TB /  15.36  TB    512   B +  0 B   2.2.1
> /dev/nvme1n1          /dev/ng1n1            6240A0T2TCG8         Dell Ent NVMe CM6 RI 15.36TB             0x1          4.37  TB /  15.36  TB    512   B +  0 B   2.2.1
> /dev/nvme2n1          /dev/ng2n1            6230A0N9TCG8         Dell Ent NVMe CM6 RI 15.36TB             0x1          5.92  TB /  15.36  TB    512   B +  0 B   2.2.1
> /dev/nvme3n1          /dev/ng3n1            62D0A0FFTCG8         Dell Ent NVMe CM6 RI 15.36TB             0x1          7.74  TB /  15.36  TB    512   B +  0 B   2.2.1
> /dev/nvme4n1          /dev/ng4n1            62D0A0FPTCG8         Dell Ent NVMe CM6 RI 15.36TB             0x1          9.24  TB /  15.36  TB    512   B +  0 B   2.2.1
> /dev/nvme5n1          /dev/ng5n1            6240A0XNTCG8         Dell Ent NVMe CM6 RI 15.36TB             0x1         11.04  TB /  15.36  TB    512   B +  0 B   2.2.1
> /dev/nvme6n1          /dev/ng6n1            6240A1BJTCG8         Dell Ent NVMe CM6 RI 15.36TB             0x1         12.63  TB /  15.36  TB    512   B +  0 B   2.2.1
> /dev/nvme7n1          /dev/ng7n1            6240A0Q4TCG8         Dell Ent NVMe CM6 RI 15.36TB             0x1         14.04  TB /  15.36  TB    512   B +  0 B   2.2.1
>
> And then finally, after a few minutes...
>
> [root@store11c ~]# nvme list
> Node                  Generic               SN                   Model                                    Namespace  Usage                      Format           FW Rev
> --------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- --------
> /dev/nvme0n1          /dev/ng0n1            6240A0Y0TCG8         Dell Ent NVMe CM6 RI 15.36TB             0x1         12.82  MB /  15.36  TB    512   B +  0 B   2.2.1
> /dev/nvme1n1          /dev/ng1n1            6240A0T2TCG8         Dell Ent NVMe CM6 RI 15.36TB             0x1         12.82  MB /  15.36  TB    512   B +  0 B   2.2.1
> /dev/nvme2n1          /dev/ng2n1            6230A0N9TCG8         Dell Ent NVMe CM6 RI 15.36TB             0x1         12.82  MB /  15.36  TB    512   B +  0 B   2.2.1
> /dev/nvme3n1          /dev/ng3n1            62D0A0FFTCG8         Dell Ent NVMe CM6 RI 15.36TB             0x1         12.82  MB /  15.36  TB    512   B +  0 B   2.2.1
> /dev/nvme4n1          /dev/ng4n1            62D0A0FPTCG8         Dell Ent NVMe CM6 RI 15.36TB             0x1         12.82  MB /  15.36  TB    512   B +  0 B   2.2.1
> /dev/nvme5n1          /dev/ng5n1            6240A0XNTCG8         Dell Ent NVMe CM6 RI 15.36TB             0x1         12.82  MB /  15.36  TB    512   B +  0 B   2.2.1
> /dev/nvme6n1          /dev/ng6n1            6240A1BJTCG8         Dell Ent NVMe CM6 RI 15.36TB             0x1         12.82  MB /  15.36  TB    512   B +  0 B   2.2.1
> /dev/nvme7n1          /dev/ng7n1            6240A0Q4TCG8         Dell Ent NVMe CM6 RI 15.36TB             0x1         12.82  MB /  15.36  TB    512   B +  0 B   2.2.1
>
> ...success.
>
> > or
> >
> >   # nvme format /dev/nvme0n1 -f
> >
> > and see if it changes. If it still doesn't change, then you need to
> > check with your vendor on the discrepancy, because nvme-cli is just the
> > messenger and faithfully shows what the device reports.
> >
> > In my experience, about half of nvme devices always report nuse == nsze.
> > LBA allocation tracking is an optional feature that many devices don't
> > implement, but since you apparently have the exact same model and firmware
> > on drives that do implement it, that's just weird.
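> >
> > (If you want to spot-check whether a given drive updates nuse at all, and
> > you have a region you can afford to lose, something like the following
> > works as a rough test; a sketch, not exact syntax for every nvme-cli and
> > util-linux version:
> >
> >   # nvme id-ns /dev/nvme0n1 | grep nuse
> >   # blkdiscard --offset 0 --length 1GiB /dev/nvme0n1   # destroys data in that range
> >   # nvme id-ns /dev/nvme0n1 | grep nuse
> >
> > If nuse never drops after a discard, the drive probably isn't tracking
> > allocation at all.)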

I thought we had resolved it, but one of the three servers refuses to change: it
continues to show the drive capacity as fully used. The NVMe drives in the server
that won't cooperate have newer firmware (FW 2.3.0) than the ones that worked
(FW 2.2.1), if that matters.
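
For what it's worth, this is roughly how I'm comparing the drives across the
servers (a sketch; the device names are just examples). The fr field from
nvme id-ctrl is the firmware revision, and id-ns shows the raw counters behind
the Usage column:

  # nvme id-ctrl /dev/nvme0 | grep '^fr '
  # nvme id-ns /dev/nvme0n1 | grep -E '^(nsze|ncap|nuse)'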

-Eric






