Deprecating and removing SLOB

Vlastimil Babka vbabka at suse.cz
Mon Nov 21 09:02:39 PST 2022


On 11/21/22 05:30, Damien Le Moal wrote:
> On 11/17/22 02:51, Vlastimil Babka wrote:
>> On 11/15/22 05:24, Damien Le Moal wrote:
>>> On 11/14/22 23:47, Hyeonggon Yoo wrote:
>>>> On Mon, Nov 14, 2022 at 08:35:31PM +0900, Damien Le Moal wrote:
>>>
>>> Test notes: I used Linus' 6.1-rc5 as the base kernel; that is the only
>>> thing I changed in the buildroot default config for the Sipeed MAIX Bit
>>> board, booting from an SD card. The test is: boot, run "cat /proc/vmstat"
>>> and record the nr_free_pages value. I repeated the boot + cat 3 to 4
>>> times for each case.
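>>>
>>> Reading that value amounts to grepping nr_free_pages out of /proc/vmstat;
>>> a minimal C equivalent, assuming the usual "name value" per-line format,
>>> would be something like:
>>>
>>> 	#include <stdio.h>
>>> 	#include <string.h>
>>>
>>> 	int main(void)
>>> 	{
>>> 		char name[64];
>>> 		unsigned long value;
>>> 		FILE *f = fopen("/proc/vmstat", "r");
>>>
>>> 		if (!f)
>>> 			return 1;
>>> 		/* each /proc/vmstat line is "name value" */
>>> 		while (fscanf(f, "%63s %lu", name, &value) == 2) {
>>> 			if (!strcmp(name, "nr_free_pages"))
>>> 				printf("%lu\n", value);
>>> 		}
>>> 		fclose(f);
>>> 		return 0;
>>> 	}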
>>>
>>> Here are the results:
>>>
>>> 6.1-rc5, SLOB:
>>>     - 623 free pages
>>>     - 629 free pages
>>>     - 629 free pages
>>> 6.1-rc5, SLUB:
>>>     - 448 free pages
>>>     - 448 free pages
>>>     - 429 free pages
>>> 6.1-rc5, SLUB + slub_max_order=0:
>>>     - Init error, shell prompt but no shell command working
>>>     - Init error, no shell prompt
>>>     - 508 free pages
>>>     - Init error, shell prompt but no shell command working
>>> 6.1-rc5, SLUB + patch:
>>>     - Init error, shell prompt but no shell command working
>>>     - 433 free pages
>>>     - 448 free pages
>>>     - 423 free pages
>>> 6.1-rc5, SLUB + slub_max_order=0 + patch:
>>>     - Init error, no shell prompt
>>>     - Init error, shell prompt, 499 free pages
>>>     - Init error, shell prompt but no shell command working
>>>     - Init error, no shell prompt
>>>
>>> No change in the SLOB results, as expected.
>>>
>>> For default SLUB, I got clean boots every time and could run the cat
>>> command, but I do see shell fork failures if I keep running commands.
>>>
>>> For SLUB + slub_max_order=0, I only got one clean boot, with 508 free
>>> pages. The remaining runs failed to give a shell prompt or to run the cat
>>> command. For that one clean boot, I do see a higher number of free pages.
>>>
>>> SLUB with the patch was nearly identical to SLUB without the patch.
>>>
>>> And SLUB + patch + slub_max_order=0 again gave a lot of errors/bad boots.
>>> I could run the cat command only once, giving 499 free pages, so better
>>> than regular SLUB. But it seems that memory is more fragmented, as
>>> allocations fail more often.
>>>
>>> Hope this helps. Let me know if you want me to test something else.
>> 
>> Could you please try this branch with CONFIG_SLUB_TINY=y?
>> https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slub-tiny-v1r0
>> 
>> Given your results, I didn't make this new CONFIG change the default
>> slub_max_order (yet?), so after trying the default, it would be useful to
>> also try manual slub_max_order=0 and slub_max_order=1. Otherwise the branch
>> should contain all the changes meant to lower SLUB's memory footprint;
>> hopefully that will be visible in the number of free pages. But if
>> fragmentation is an issue, it might not be enough. BTW, during boot there
>> should be a line "Built X zonelists, mobility grouping ..."; can you grep
>> the boot log for it and report it, please? I wonder whether mobility
>> grouping ends up off or on on that system.
> 
> I ran your branch with CONFIG_SLUB_TINY=y. Here are the results with 3-4
> runs per config:
> 
> * tiny slub with default slub_max_order:
> 	- Clean boot, 579 free pages
> 	- Clean boot, 575 free pages
> 	- Clean boot, 579 free pages
> 
> * tiny slub with slub_max_order=0 as boot argument:
> 	- Init error, shell prompt but no shell command working
> 	- Init error, shell prompt, 592 free pages
> 	- Init error, shell prompt, 591 free pages
> 	- Init error, shell prompt, 591 free pages
> 
> * tiny slub with slub_max_order=1 as boot argument:
> 	- Clean boot, 601 free pages
> 	- Clean boot, 601 free pages
> 	- Clean boot, 591 free pages
> 	- Clean boot, 601 free pages

Oh, that's a great result, better than I'd hoped!
I'll change the default to slub_max_order=1 with CONFIG_SLUB_TINY then.
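
Something along these lines in mm/slub.c should do it (an untested sketch;
the actual patch may end up looking different):

	/*
	 * With CONFIG_SLUB_TINY, prefer a lower default maximum order of
	 * slab pages to reduce memory footprint and fragmentation on small
	 * systems, at some cost in allocation throughput.
	 */
	static unsigned int slub_max_order =
		IS_ENABLED(CONFIG_SLUB_TINY) ? 1 : PAGE_ALLOC_COSTLY_ORDER;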

> For all cases, mobility grouping was reported as off:
> 
> [    0.000000] Built 1 zonelists, mobility grouping off.  Total pages: 2020

Yeah, expected that would be the case, thanks for confirming.
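
For the record, build_all_zonelists() turns mobility grouping off when the
system is too small for it to matter; from memory, the check is roughly:

	/* in mm/page_alloc.c:build_all_zonelists() */
	if (vm_total_pages < (pageblock_nr_pages * MIGRATE_TYPES))
		page_group_by_mobility_disabled = 1;
	else
		page_group_by_mobility_disabled = 0;

With only 2020 total pages, the system falls well below that threshold.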

> So it looks like your tiny slub branch with slub_max_order=1 puts us
> almost on par with slob, and that slub_max_order=0 seems to generate more
> fragmentation, leading to unreliable boots. I also tried slub_max_order=2,
> which gives clean boots and around 582 free pages, almost the same as the
> default.
> 
> With this branch applied, I have no issue with slob being deprecated :)
> Thanks!

Great, thanks for the testing!

>> Thanks!
>>> Cheers.