[PATCH v9 4/8] lib/scatterlist: add check when merging zone device pages
John Hubbard
jhubbard at nvidia.com
Mon Sep 5 17:21:03 PDT 2022
On 8/25/22 08:24, Logan Gunthorpe wrote:
> Consecutive zone device pages should not be merged into the same sgl
> or bvec segment with other types of pages or if they belong to different
> pgmaps. Otherwise getting the pgmap of a given segment is not possible
> without scanning the entire segment. The new helper returns true if either
> both pages are not zone device pages or both pages are zone device
> pages with the same pgmap.
>
> Factor out the check for page mergeability into a pages_are_mergeable()
> helper and add a check with zone_device_pages_have_same_pgmap().
>
> Signed-off-by: Logan Gunthorpe <logang at deltatee.com>
> ---
> lib/scatterlist.c | 25 +++++++++++++++----------
> 1 file changed, 15 insertions(+), 10 deletions(-)
>
> diff --git a/lib/scatterlist.c b/lib/scatterlist.c
> index c8c3d675845c..a0ad2a7959b5 100644
> --- a/lib/scatterlist.c
> +++ b/lib/scatterlist.c
> @@ -410,6 +410,15 @@ static struct scatterlist *get_next_sg(struct sg_append_table *table,
> return new_sg;
> }
>
> +static bool pages_are_mergeable(struct page *a, struct page *b)
> +{
> + if (page_to_pfn(a) != page_to_pfn(b) + 1)
Instead of "a" and "b", how about naming these args something like
"page" and "prev_page", in order to avoid giving the impression that
comparing a and b is the same as comparing b and a?
In other words, previously, as an open-coded check, the code made sense:

    page_to_pfn(pages[j]) != page_to_pfn(pages[j - 1]) + 1

But now, the understanding that this *must* be called with a page and
its previous page has gotten lost during the refactoring, and we are
left with a check that is, on its own, not understandable.
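Something like this is what I have in mind -- just a sketch to show the
naming, with no behavior change intended:

static bool pages_are_mergeable(struct page *page, struct page *prev_page)
{
	/* Only a page that immediately follows prev_page can be merged. */
	if (page_to_pfn(page) != page_to_pfn(prev_page) + 1)
		return false;
	if (!zone_device_pages_have_same_pgmap(page, prev_page))
		return false;
	return true;
}

That way, call sites such as pages_are_mergeable(pages[i], pages[i - 1])
make the expected ordering visible right in the prototype.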
Otherwise, the diffs look good. With some sort of naming change to
the args there, please feel free to add:
Reviewed-by: John Hubbard <jhubbard at nvidia.com>
thanks,
--
John Hubbard
NVIDIA
> + return false;
> + if (!zone_device_pages_have_same_pgmap(a, b))
> + return false;
> + return true;
> +}
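Side note for anyone reading along: zone_device_pages_have_same_pgmap()
comes from an earlier patch in this series, not this one. Going by the
commit message above, its behavior is roughly the following sketch
(illustrative only, not a quote of that patch):

static inline bool zone_device_pages_have_same_pgmap(struct page *a,
						     struct page *b)
{
	/* A zone device page never merges with a non-zone-device page. */
	if (is_zone_device_page(a) != is_zone_device_page(b))
		return false;

	/* Two ordinary (non-zone-device) pages are always acceptable. */
	if (!is_zone_device_page(a))
		return true;

	/* Two zone device pages are mergeable only if they share a pgmap. */
	return a->pgmap == b->pgmap;
}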
> +
> /**
> * sg_alloc_append_table_from_pages - Allocate and initialize an append sg
> * table from an array of pages
> @@ -447,6 +456,7 @@ int sg_alloc_append_table_from_pages(struct sg_append_table *sgt_append,
> unsigned int chunks, cur_page, seg_len, i, prv_len = 0;
> unsigned int added_nents = 0;
> struct scatterlist *s = sgt_append->prv;
> + struct page *last_pg;
>
> /*
> * The algorithm below requires max_segment to be aligned to PAGE_SIZE
> @@ -460,21 +470,17 @@ int sg_alloc_append_table_from_pages(struct sg_append_table *sgt_append,
> return -EOPNOTSUPP;
>
> if (sgt_append->prv) {
> - unsigned long paddr =
> - (page_to_pfn(sg_page(sgt_append->prv)) * PAGE_SIZE +
> - sgt_append->prv->offset + sgt_append->prv->length) /
> - PAGE_SIZE;
> -
> if (WARN_ON(offset))
> return -EINVAL;
>
> /* Merge contiguous pages into the last SG */
> prv_len = sgt_append->prv->length;
> - while (n_pages && page_to_pfn(pages[0]) == paddr) {
> + last_pg = sg_page(sgt_append->prv);
> + while (n_pages && pages_are_mergeable(last_pg, pages[0])) {
> if (sgt_append->prv->length + PAGE_SIZE > max_segment)
> break;
> sgt_append->prv->length += PAGE_SIZE;
> - paddr++;
> + last_pg = pages[0];
> pages++;
> n_pages--;
> }
> @@ -488,7 +494,7 @@ int sg_alloc_append_table_from_pages(struct sg_append_table *sgt_append,
> for (i = 1; i < n_pages; i++) {
> seg_len += PAGE_SIZE;
> if (seg_len >= max_segment ||
> - page_to_pfn(pages[i]) != page_to_pfn(pages[i - 1]) + 1) {
> + !pages_are_mergeable(pages[i], pages[i - 1])) {
> chunks++;
> seg_len = 0;
> }
> @@ -504,8 +510,7 @@ int sg_alloc_append_table_from_pages(struct sg_append_table *sgt_append,
> for (j = cur_page + 1; j < n_pages; j++) {
> seg_len += PAGE_SIZE;
> if (seg_len >= max_segment ||
> - page_to_pfn(pages[j]) !=
> - page_to_pfn(pages[j - 1]) + 1)
> + !pages_are_mergeable(pages[j], pages[j - 1]))
> break;
> }
>