[RFC PATCH 00/17] netfs: [WIP] Keep track of folios in a segmented bio_vec[] chain

David Howells dhowells at redhat.com
Wed Mar 4 06:03:07 PST 2026


Hi Willy, Christoph, et al.,

[!] This is a preview.  Please don't expect this to fully compile or work.
    It's been somewhat tested with AFS and CIFS, but not 9P, Ceph or NFS -
    and will not build with Ceph or NFS at the moment.

These patches get rid of folio_queue, rolling_buffer and ITER_FOLIOQ,
replacing the folio queue construct used to manage buffers in netfslib with
one based around a segmented chain of bio_vec arrays instead.  There are
three main aims here:

 (1) The kernel file I/O subsystem seems to be moving towards consolidating
     on the use of bio_vec arrays, so embrace this by moving netfslib to
     keep track of its buffers for buffered I/O in bio_vec[] form.

 (2) Netfslib already uses a bio_vec[] to handle unbuffered/DIO, so the
     number of different buffering schemes used can be reduced to just a
     single one.

 (3) Always send an entire filesystem RPC request message to a TCP socket
     with a single kernel_sendmsg() call as this is faster, more efficient
     and doesn't require the use of corking, since it puts the entire
     transmission loop inside a single tcp_sendmsg() call.

For the replacement of folio_queue, a segmented chain of bio_vec arrays
rather than a single monolithic array is provided:

	struct bvecq {
		struct bvecq		*next;
		struct bvecq		*prev;
		unsigned long long	fpos;
		refcount_t		ref;
		u32			priv;
		u16			nr_segs;
		u16			max_segs;
		bool			inline_bv:1;
		bool			free:1;
		bool			unpin:1;
		bool			discontig:1;
		struct bio_vec		*bv;
		struct bio_vec		__bv[];
	};

The fields are:

 (1) next, prev - Link segments together in a list.  I want this to be
     NULL-terminated linear rather than circular to make it possible to
     arbitrarily glue bits on the front.

 (2) fpos, discontig - Note the current file position of the first byte of
     the segment; all the bio_vecs in ->bv[] must be contiguous in the file
     space.  The fpos can be used to find the folio by file position rather
     than from the info in the bio_vec.

     If there's a discontiguity, this should break over into a new bvecq
     segment with the discontig flag set (though this is redundant if you
     keep track of the file position).  Note that the beginning and end
     file positions in a segment need not be aligned to any filesystem
     block size.

 (3) ref - Refcount.  Each bvecq keeps a ref on the next.  I'm not sure
     this is entirely necessary, but it makes sharing slices easier.

 (4) priv - Private data for the owner.  Dispensable; currently only used
     for storing a debug ID for tracing in a patch not included here.

 (5) max_segs, nr_segs.  The size of bv[] and the number of elements used.
     I've assumed a maximum of 65535 bio_vecs in the array (which would
     represent a ~1MiB allocation).

 (6) bv, __bv, inline_bv.  bv points to the bio_vec[] array handled by
     this segment.  This may point at __bv, and if it does, inline_bv
     should be set (otherwise it's impossible to distinguish the inline
     array from a separately allocated bio_vec[] that happens to follow
     immediately in memory).

 (7) free, unpin.  free is set if the memory pointed to by the bio_vecs
     needs freeing in some way upon I/O completion.  unpin is set if this
     means using GUP unpinning rather than put_page().

I've also defined an iov_iter iterator type ITER_BVECQ to walk this sort of
construct so that it can be passed directly to sendmsg() or block-based DIO
(as cachefiles does).

This series makes the following changes to netfslib:

 (1) The folio_queue chain used to hold folios for buffered I/O is replaced
     with a bvecq chain.  Each bio_vec then holds (a portion of) one folio.
     Each bvecq holds a contiguous sequence of folios, but adjacent bvecqs
     in a chain may be discontiguous.

 (2) For unbuffered/DIO, the source iov_iter is extracted into a bvecq
     chain.

 (3) An abstract position representation ('bvecq_pos') is created that can
     be used to hold a position in a bvecq chain.  For the moment, this
     takes a ref on the bvecq it points to, but that may be excessive.

 (4) Buffer tracking is managed with three cursors: the load_cursor, at
     which new folios are added as we go; the dispatch_cursor, at which new
     subrequests' buffers start when they're created; and the
     collect_cursor, the point at which folios are being unlocked.

     Not all cursors are necessarily needed in all situations and during
     buffered writeback, we actually need a dispatch cursor per stream (one
     for the network filesystem and one for the cache).

 (5) ->prepare_read(), buffer set-up and ->issue_read() are merged, as
     are the write variants, with the filesystem calling back up to
     netfslib to prepare its buffer.  This simplifies the process of
     setting up a subrequest.  It may even make sense to have the
     filesystem allocate the subrequest.

 (6) For the moment, dispatch tracking is removed from netfs_io_request and
     netfs_io_stream.  The problem is that we have several different ways
     (including in the retry code) in which we need to track things, some
     of which (e.g. retry) might happen simultaneously with the main
     dispatch, so keeping things separate helps.  Netfslib sets up a
     context struct, passes it to ->issue_read/write(), which passes it
     back to netfs_prepare_read/write_buffer().

 (7) Netfslib dispatches I/O by accumulating enough bufferage to dispatch
     at least one subrequest, then looping to generate as many as the
     filesystem wants to (they may be limited by other constraints,
     e.g. max RDMA segment count or negotiated max size).  This loop could
     be moved down into the filesystem.  A new method is provided by which
     netfslib can ask the filesystem to provide an estimate of the data
     that should be accumulated before dispatch begins.

 (8) Reading from the cache is now managed by querying the cache to provide
     a list of the next data extents within the cache.  For the moment this
     uses FIEMAP, but should at some point in the future transition to
     using a block-fs metadata-independent way of tracking this.

 (9) AFS directories are switched to using a bvecq rather than a
     folio_queue to hold their contents.

(10) Make CIFS use a bvecq rather than a folio_queue for holding a
     temporary encryption buffer.

(11) CIFS RDMA is given the ability to extract ITER_BVECQ and support for
     extracting ITER_FOLIOQ, ITER_BVEC and ITER_KVEC is removed.

(12) All the folio_queue and rolling_buffer code is removed.

Two further things that I'm working on (but not in this branch) are:

 (1) Make it so that a filesystem can be given a copy of a subchain onto
     which it can then tack header and trailer protocol elements to form a
     single message (I have this working for cifs) and even join copies
     together with intervening protocol elements to form compounds.

 (2) Make it so that a filesystem can 'splice' out the contents of the TCP
     receive queue into a bvecq chain.  This allows the socket lock to be
     dropped much more quickly and the copying of data read to the
     destination buffers to happen without the lock.  I have this working
     for cifs too.  Kernel recvmsg() doesn't then block kernel sendmsg()
     for anywhere near as long.

There are also some things I want to consider for the future:

 (1) Create one or more batched iteration functions to 'unlock' all the
     folios in a bio_vec[], where 'unlock' is the appropriate action for
     ending a read or a write.  Batching should hopefully also improve the
     efficiency of wrangling the marks on the xarray.  Very often these
     marks are going to be represented by contiguous bits, so there may be
     a way to change them in bulk.

 (2) Rather than walking the bvecq chain to get each individual folio out
     via bv_page, use the file position stored on the bvecq and the sum of
     bv_len to iterate over the appropriate range in i_pages.

 (3) Change iov_iter to store the initial starting point and for
     iov_iter_revert() to reset to that and advance.  This would (a) help
     prevent over-reversion and (b) dispense with the need for a prev
     pointer.

 (4) Use bvecq to replace scatterlist.  One problem with replacing
     scatterlist is that crypto drivers like to glue bits on the front of
     the scatterlists they're given (something trivial with that API) - and
     this is one way to achieve it.

The patches can also be found here:

	https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=netfs-next

Thanks,
David

David Howells (17):
  netfs: Fix unbuffered/DIO writes to dispatch subrequests in strict
    sequence
  vfs: Implement a FIEMAP callback
  iov_iter: Add a segmented queue of bio_vec[]
  Add a function to kmap one page of a multipage bio_vec
  netfs: Add some tools for managing bvecq chains
  afs: Use a bvecq to hold dir content rather than folioq
  netfs: Add a function to extract from an iter into a bvecq
  cifs: Use a bvecq for buffering instead of a folioq
  cifs: Support ITER_BVECQ in smb_extract_iter_to_rdma()
  netfs: Switch to using bvecq rather than folio_queue and
    rolling_buffer
  cifs: Remove support for ITER_KVEC/BVEC/FOLIOQ from
    smb_extract_iter_to_rdma()
  netfs: Remove netfs_alloc/free_folioq_buffer()
  netfs: Remove netfs_extract_user_iter()
  iov_iter: Remove ITER_FOLIOQ
  netfs: Remove folio_queue and rolling_buffer
  netfs: Check for too much data being read
  netfs: Combine prepare and issue ops and grab the buffers on request

 Documentation/core-api/folio_queue.rst | 209 ------
 Documentation/core-api/index.rst       |   1 -
 fs/9p/vfs_addr.c                       |  34 +-
 fs/afs/dir.c                           |  41 +-
 fs/afs/dir_edit.c                      |  42 +-
 fs/afs/dir_search.c                    |  33 +-
 fs/afs/file.c                          |  27 +-
 fs/afs/fsclient.c                      |   8 +-
 fs/afs/inode.c                         |  18 +-
 fs/afs/internal.h                      |  16 +-
 fs/afs/write.c                         |  35 +-
 fs/afs/yfsclient.c                     |   6 +-
 fs/cachefiles/io.c                     | 350 +++++----
 fs/ceph/addr.c                         | 109 +--
 fs/ioctl.c                             |  29 +-
 fs/netfs/Makefile                      |   4 +-
 fs/netfs/buffered_read.c               | 495 ++++++++-----
 fs/netfs/buffered_write.c              |   2 +-
 fs/netfs/bvecq.c                       | 634 +++++++++++++++++
 fs/netfs/direct_read.c                 | 123 ++--
 fs/netfs/direct_write.c                | 313 +++++++-
 fs/netfs/fscache_io.c                  |   6 -
 fs/netfs/internal.h                    | 164 ++++-
 fs/netfs/iterator.c                    | 313 +++-----
 fs/netfs/misc.c                        | 145 +---
 fs/netfs/objects.c                     |  17 +-
 fs/netfs/read_collect.c                | 124 ++--
 fs/netfs/read_pgpriv2.c                |  68 +-
 fs/netfs/read_retry.c                  | 226 +++---
 fs/netfs/read_single.c                 | 177 +++--
 fs/netfs/rolling_buffer.c              | 222 ------
 fs/netfs/stats.c                       |   6 +-
 fs/netfs/write_collect.c               |  96 ++-
 fs/netfs/write_issue.c                 | 950 ++++++++++++++-----------
 fs/netfs/write_retry.c                 | 144 ++--
 fs/nfs/fscache.c                       |  13 +-
 fs/smb/client/cifsglob.h               |   2 +-
 fs/smb/client/cifssmb.c                |  13 +-
 fs/smb/client/file.c                   | 149 ++--
 fs/smb/client/smb2ops.c                |  78 +-
 fs/smb/client/smb2pdu.c                |  28 +-
 fs/smb/client/smbdirect.c              | 152 +---
 fs/smb/client/transport.c              |  15 +-
 include/linux/bvec.h                   |  54 ++
 include/linux/fiemap.h                 |   3 +
 include/linux/folio_queue.h            | 282 --------
 include/linux/fscache.h                |  19 +
 include/linux/iov_iter.h               |  66 +-
 include/linux/netfs.h                  | 177 +++--
 include/linux/rolling_buffer.h         |  61 --
 include/linux/uio.h                    |  17 +-
 include/trace/events/netfs.h           | 118 ++-
 lib/iov_iter.c                         | 395 +++++-----
 lib/scatterlist.c                      |  56 +-
 lib/tests/kunit_iov_iter.c             | 183 ++---
 net/9p/client.c                        |   8 +-
 56 files changed, 3815 insertions(+), 3261 deletions(-)
 delete mode 100644 Documentation/core-api/folio_queue.rst
 create mode 100644 fs/netfs/bvecq.c
 delete mode 100644 fs/netfs/rolling_buffer.c
 delete mode 100644 include/linux/folio_queue.h
 delete mode 100644 include/linux/rolling_buffer.h



