printk_caller_id() crashing with UML/GCOV

Vimal Agrawal avimalin at gmail.com
Fri Sep 15 00:27:19 PDT 2023


I have the following GCOV configs enabled:

CONFIG_GCOV_KERNEL=y
CONFIG_ARCH_HAS_GCOV_PROFILE_ALL=y
CONFIG_GCOV_PROFILE_ALL=y

and was trying to run UML (kernel version 6.1.44), which ended in a
crash in printk_caller_id().

I tried to debug it, and it seems current_thread_info() is returning
NULL because it has not been set up yet. The crash hits well before
mm_init() or any thread_info could be created.

This print actually comes from __gcov_init():

void __gcov_init(struct gcov_info *info)
{
        static unsigned int gcov_version;

        mutex_lock(&gcov_lock);
        if (gcov_version == 0) {
                gcov_version = gcov_info_version(info);
                /*
                 * Printing gcc's version magic may prove useful for debugging
                 * incompatibility reports.
                 */
                // Need to uncomment post fixing NC-124446
                pr_info("version magic: 0x%x\n", gcov_version);  <<<<<<< this pr_info causes the crash



$ gdb ./linux
(gdb) run mem=512M rootfstype=hostfs rw init=/bin/bash

Starting program:
/home/vimal.agrawal/Morane/git-openwrt/basesystem/NEMO/linux-kernel/linux
mem=512M rootfstype=hostfs rw init=/bin/bash
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".

Program received signal SIGSEGV, Segmentation fault.
0x00000000600f2de3 in printk_caller_id () at kernel/printk/printk.c:2033
2033            return in_task() ? task_pid_nr(current) :
(gdb) bt
#0  0x00000000600f2de3 in printk_caller_id () at kernel/printk/printk.c:2033
#1  vprintk_store (facility=0, level=<optimized out>, dev_info=0x0,
    fmt=0x6097ed2a "\001\066version magic: 0x%x\n",
    args=0x7fffffffe188) at kernel/printk/printk.c:2143
#2  0x0000000000000000 in ?? ()

I am not sure whether something in my .config is causing this. I don't
think pr_info can be used this early. I do see that this pr_info in
__gcov_init() was added back in 2019.

Thanks,
Vimal



More information about the linux-um mailing list