Deprecating and removing SLOB
Damien Le Moal
damien.lemoal at opensource.wdc.com
Wed Nov 16 00:02:11 PST 2022
On 2022/11/16 16:57, Matthew Wilcox wrote:
> On Tue, Nov 15, 2022 at 01:28:14PM +0900, Damien Le Moal wrote:
>> On 11/15/22 13:24, Damien Le Moal wrote:
>>> 6.1-rc5, SLOB:
>>> - 623 free pages
>>> - 629 free pages
>>> - 629 free pages
>>> 6.1-rc5, SLUB:
>>> - 448 free pages
>>> - 448 free pages
>>> - 429 free pages
>>> 6.1-rc5, SLUB + slub_max_order=0:
>>> - Init error, shell prompt but no shell command working
>>> - Init error, no shell prompt
>>> - 508 free pages
>>> - Init error, shell prompt but no shell command working
>>> 6.1-rc5, SLUB + patch:
>>> - Init error, shell prompt but no shell command working
>>> - 433 free pages
>>> - 448 free pages
>>> - 423 free pages
>>> 6.1-rc5, SLUB + slub_max_order=0 + patch:
>>> - Init error, no shell prompt
>>> - Init error, shell prompt, 499 free pages
>>> - Init error, shell prompt but no shell command working
>>> - Init error, no shell prompt
>>>
>>> No changes for SLOB results, expected.
>>>
>>> For default SLUB, I did get all clean boots this time and could run the
>>> cat command. But I do see shell fork failures if I keep running commands.
>>>
>>> For SLUB + slub_max_order=0, I only got one clean boot, with 508 free
>>> pages. The remaining runs failed to give a shell prompt or to allow
>>> running the cat command. For the clean boot, I do see a higher number of
>>> free pages.
>>>
>>> SLUB with the patch was nearly identical to SLUB without the patch.
>>>
>>> And SLUB+patch+slub_max_order=0 again gave a lot of errors/bad boots. I
>>> could run the cat command only once, giving 499 free pages, so better than
>>> regular SLUB. But it seems that memory is more fragmented, as allocations
>>> fail more often.
>>
>> Note about the last case (SLUB+patch+slub_max_order=0). Here are the
>> messages I got when the init shell process fork failed:
>>
>> [ 1.217998] nommu: Allocation of length 491520 from process 1 (sh) failed
>> [ 1.224098] active_anon:0 inactive_anon:0 isolated_anon:0
>> [ 1.224098] active_file:5 inactive_file:12 isolated_file:0
>> [ 1.224098] unevictable:0 dirty:0 writeback:0
>> [ 1.224098] slab_reclaimable:38 slab_unreclaimable:459
>> [ 1.224098] mapped:0 shmem:0 pagetables:0
>> [ 1.224098] sec_pagetables:0 bounce:0
>> [ 1.224098] kernel_misc_reclaimable:0
>> [ 1.224098] free:859 free_pcp:0 free_cma:0
>> [ 1.260419] Node 0 active_anon:0kB inactive_anon:0kB active_file:20kB
>> inactive_file:48kB unevictable:0kB isolated(anon):0kB isolated(file):0kB
>> mapped:0kB dirty:0kB writeback:0kB shmem:0kB writeback_tmp:0kB
>> kernel_stack:576kB pagetables:0kB sec_pagetables:0kB all_unreclaimable? no
>> [ 1.285147] DMA32 free:3436kB boost:0kB min:312kB low:388kB high:464kB
>> reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB
>> inactive_file:28kB unevictable:0kB writepending:0kB present:8192kB
>> managed:6240kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
>> [ 1.310654] lowmem_reserve[]: 0 0 0
>> [ 1.314089] DMA32: 17*4kB (U) 10*8kB (U) 7*16kB (U) 6*32kB (U) 11*64kB
>> (U) 6*128kB (U) 6*256kB (U) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 3460kB
>> [ 1.326883] 33 total pagecache pages
>> [ 1.330420] binfmt_flat: Unable to allocate RAM for process text/data,
>> errno -12
>
> What you're seeing here is memory fragmentation. There's more than 512kB
> of memory available, but nommu requires it to be contiguous, and it's
> not. This is pretty bad, really. We didn't even finish starting up
> and already we've managed to allocate at least one page from each of
> the 16 512kB chunks which existed. Commit df48a5f7a3bb was supposed
> to improve matters by making exact allocations reassemble once they
> were freed. Maybe the problem is entirely different.
I suspected something like this when seeing the reported "free:859" :)
What I can try next is booting without the SD card and with the bare minimum
set of drivers, to see if the fragmentation is still there or not. Would that
help? These one-page allocations may be for device drivers and so never
freed, no?
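As a sanity check on the numbers, the DMA32 buddy-allocator line from the dump above can be decoded with a short script (the parsing below is just an illustration of the kernel's "N*SkB" per-order format, not kernel code): total free memory sums to 3460kB, yet the largest contiguous block is only 256kB, far short of the 491520-byte (480kB) allocation that failed.

```python
# Decode the buddy-allocator dump line from the failure report above.
# Each "N*SkB" entry means N free blocks of S kilobytes at that order.
line = ("17*4kB 10*8kB 7*16kB 6*32kB 11*64kB "
        "6*128kB 6*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB")

blocks = []
for entry in line.split():
    count, size = entry.split("*")
    blocks.append((int(count), int(size.rstrip("kB"))))

total_free_kb = sum(c * s for c, s in blocks)
largest_contig_kb = max(s for c, s in blocks if c > 0)

print(total_free_kb)      # 3460 kB free in total, matching "= 3460kB"
print(largest_contig_kb)  # largest contiguous free block: 256 kB

# The failed request was 491520 bytes = 480 kB: plenty of memory is
# free overall, but on nommu it must be one contiguous block, and no
# block that large exists.
assert 491520 // 1024 > largest_contig_kb
```

This matches Willy's diagnosis: the failure is fragmentation, not exhaustion.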
--
Damien Le Moal
Western Digital Research
More information about the linux-riscv mailing list