[PATCH net-next v1 00/12] First try to replace page_frag with page_frag_cache
Alexander Duyck
alexander.duyck at gmail.com
Mon Apr 8 08:09:22 PDT 2024
On Mon, Apr 8, 2024 at 6:38 AM Yunsheng Lin <linyunsheng at huawei.com> wrote:
>
> On 2024/4/8 1:02, Alexander Duyck wrote:
> > On Sun, Apr 7, 2024 at 6:10 AM Yunsheng Lin <linyunsheng at huawei.com> wrote:
> >>
> >> After [1], there are only two implementations for page frag:
> >>
> >> 1. mm/page_alloc.c: net stack seems to be using it in the
> >> rx part with 'struct page_frag_cache' and the main API
> >> being page_frag_alloc_align().
> >> 2. net/core/sock.c: net stack seems to be using it in the
> >> tx part with 'struct page_frag' and the main API being
> >> skb_page_frag_refill().
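For readers who don't have both call patterns in their head, here is a rough
sketch of how the two APIs are typically used today, based on the current
mainline signatures (the cache/frag names and sizes below are made up for
illustration and are not from this patchset):

#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/skbuff.h>

/* Rx-style user: 'struct page_frag_cache' with page_frag_alloc(). */
static struct page_frag_cache rx_cache;

static void *rx_frag_alloc_example(unsigned int fragsz)
{
	/* Carves fragsz bytes out of a cached page, refilling as needed. */
	return page_frag_alloc(&rx_cache, fragsz, GFP_ATOMIC);
}

/* Tx-style user: 'struct page_frag' with skb_page_frag_refill(). */
static int tx_frag_fill_example(struct page_frag *pfrag, unsigned int sz)
{
	if (!skb_page_frag_refill(sz, pfrag, GFP_KERNEL))
		return -ENOMEM;

	/* The caller copies its payload to page_address(pfrag->page) +
	 * pfrag->offset and then advances pfrag->offset itself.
	 */
	pfrag->offset += sz;
	return 0;
}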
> >>
> >> This patchset tries to unify the page frag implementation
> >> by replacing page_frag with page_frag_cache for sk_page_frag()
> >> first. net_high_order_alloc_disable_key for the implementation
> >> in net/core/sock.c doesn't seem to matter that much now that we
> >> have pcp support for high-order pages in commit 44042b449872
> >> ("mm/page_alloc: allow high-order pages to be stored on the
> >> per-cpu lists").
> >>
> >> As the related changes are mostly networking-related, this
> >> targets net-next. The rest of the page_frag users will be
> >> converted in a follow-up patchset.
> >>
> >> After this patchset, we not only unify the page frag
> >> implementation somewhat, but also see about a 0.5+% performance
> >> boost when testing with the vhost_net_test introduced in [1]
> >> and the page_frag_test.ko introduced in this patchset.
> >
> > One question that jumps out at me for this is "why?". No offense but
> > this is a pretty massive set of changes with over 1400 additions and
> > 500+ deletions and I can't help but ask why, and this cover page
> > doesn't give me any good reason to think about accepting this set.
>
> There are 375 + 256 additions for the testing module and the documentation
> update in the last two patches, and there are 198 additions and 176
> deletions for moving the page fragment allocator from page_alloc into
> its own file in patch 1.
> Excluding those, there are about 600+ additions and 300+ deletions;
> does that seem reasonable considering 140+ additions are needed for
> the new API, and 300+ additions and deletions are needed to update the
> many existing users of the old API to the new one?
Maybe it would make more sense to break this into two sets: the first
one adding your testing, and the second one consolidating the API.
With that we would have a clearly defined test infrastructure in place
for the second set, which is making significant changes to the API. In
addition, it would give others the opportunity to point out any other
tests they might want pulled in, since this is likely to have an impact
beyond just the tests you have proposed.
> > What is meant to be the benefit to the community for adding this? All
> > I am seeing is a ton of extra code to have to review as this
> > unification is adding an additional 1000+ lines without a good
> > explanation as to why they are needed.
>
> Some benefits I see for now:
> 1. Improve the maintainability of the page frag implementation:
> (1) future bug fixes and performance work can be done in one place.
> For example, we may be able to save some space for the
> 'page_frag_cache' API users, and avoid 'get_page()' for
> the old 'page_frag' API users.
The problem as I see it is that this consolidates all the consumers
down to the least common denominator in terms of performance. You have
already demonstrated that with patch 2, which forces all drivers to
work from the bottom of the page up instead of being able to work top
down in the page.
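To make the direction issue concrete, here is a rough sketch of the two
schemes (my own illustration of the current behaviour, not code from this
patchset): the existing page_frag_cache carves fragments from the end of the
page downward, while the tx-side 'struct page_frag' users fill from offset 0
upward, which is the model everything gets moved to.

#include <linux/mm.h>

/* Top down: roughly what the current cache does, carving from the end
 * of the page toward the start.
 */
static void *frag_top_down(struct page_frag_cache *nc, unsigned int fragsz)
{
	int offset = (int)nc->offset - (int)fragsz;

	if (offset < 0)
		return NULL;	/* page exhausted; the real code refills here */

	nc->offset = offset;
	return nc->va + offset;
}

/* Bottom up: the tx-side 'struct page_frag' style, filling from offset 0
 * toward the end of the page.
 */
static void *frag_bottom_up(struct page_frag *pfrag, unsigned int fragsz)
{
	void *va;

	if (pfrag->offset + fragsz > pfrag->size)
		return NULL;	/* page exhausted; the real code refills here */

	va = page_address(pfrag->page) + pfrag->offset;
	pfrag->offset += fragsz;
	return va;
}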
This eventually leads you down the path where, every time somebody has
a use case that may not be optimal for others, it is going to be a
fight over whether the new use case is allowed to degrade the
performance of the existing ones.
> (2) Provide a proper API so that the caller does not need to access
> internal data fields. Exposing the internal data fields may
> enable the caller to do some unexpected implementation of
> its own, like the code below. After this patchset the API user is not
> supposed to access the data fields of 'page_frag_cache'
> directly [currently they are still accessible to an API caller that
> is not following the rule; I am not sure how to
> limit the access without any performance impact yet].
> https://elixir.bootlin.com/linux/v6.9-rc3/source/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c#L1141
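For what it's worth, "direct access" here presumably means something like the
following hypothetical snippet (not the actual chtls code at the link above),
which bypasses page_frag_alloc() and depends on the cache's internal layout:

#include <linux/gfp.h>
#include <linux/mm.h>

static void *bad_direct_access(struct page_frag_cache *nc, unsigned int fragsz)
{
	if (nc->va && nc->offset >= fragsz) {
		/* Carves the fragment by hand, relying on the current
		 * top-down layout and skipping the pagecnt_bias
		 * accounting entirely.
		 */
		nc->offset -= fragsz;
		return nc->va + nc->offset;
	}

	return page_frag_alloc(nc, fragsz, GFP_KERNEL);
}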
This just makes the issue I pointed out in 1 even worse. The problem is
that this code has to be used at the very lowest of levels and is as
tightly optimized as it is because it is called at least once per packet
in the case of networking. Networking, mind you, is still getting faster
and leaving ever fewer cycles per packet to keep up. I just see this
change as taking us in the wrong direction.
> 2. The page_frag API may provide a central point for networking to allocate
> memory instead of calling the page allocator directly in the future, so
> that we can decouple 'struct page' from networking.
I hope not. The fact is the page allocator serves a very specific
purpose, and the page frag API was meant to serve a different one and
not be a replacement for it. One thing that has really irked me is
seeing it abused as much as it has been, with people treating it as
just another page allocator when it was really meant to provide a way
to shard order 0 pages into sizes that are half a page or less. I
really meant for it to be a quick-n-dirty slab allocator for sizes of
2K or less where, ideally, we are working with powers of 2.
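To put rough numbers on that intent (my arithmetic, assuming 4K pages): an
order 0 page shards into two 2K fragments, four 1K fragments, eight 512B
fragments, and so on, with pagecnt_bias amortizing the page reference across
all of them. A minimal sketch of that intended usage pattern (the names here
are made up):

#include <linux/bug.h>
#include <linux/gfp.h>
#include <linux/log2.h>

static struct page_frag_cache small_cache;

/* Intended pattern: sub-page, ideally power-of-2 sized fragments only. */
static void *alloc_small_frag(unsigned int sz)
{
	if (WARN_ON_ONCE(sz > PAGE_SIZE / 2))
		return NULL;

	/* Round up so fragments pack into the page without waste. */
	return page_frag_alloc(&small_cache, roundup_pow_of_two(sz),
			       GFP_ATOMIC);
}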
It concerns me that you are talking about taking this down a path that
will likely lead to further misuse of the code as a backdoor way to
allocate order 0 pages instead of just using the page
allocator.
> >
> > Also I wouldn't bother mentioning the 0.5+% performance gain as a
> > "bonus". Changes of that amount usually mean it is within the margin
> > of error. At best it likely means you haven't introduced a noticeable
> > regression.
>
> For the micro-benchmark ko added in this patchset, the performance gain seems
> quite stable when testing on a system without any other load.
Again, that doesn't mean anything. It could just be that the code
shifted somewhere due to all the code movement, so a loop got better
aligned than it was before. To give you an idea, I have seen performance
gains in the past from turning off Rx checksum for some workloads, and
that was simply because the CPUs were staying awake longer instead of
going into deep sleep states; as such we could handle more packets per
second even though we were using more cycles. Without significantly more
context it is hard to say that the gain is anything real at all, and a
0.5% gain is well within that margin of error.