Chapter 3: Building the 6.x Linux Kernel from Source (Part 2)
Chapter narrative: In the previous chapter, we configured the kernel like prepping a surgical table. Now the scalpel goes down—we're going to slice into the code, compile it, install it, and watch it awaken under GRUB's guidance. This isn't just typing commands; this is handcrafting the soul of an operating system.
We stopped halfway through in the last chapter. By then, we had learned how to obtain the kernel source (whether by extracting a tar archive or pulling directly with git), understood the maze-like source tree structure, and—perhaps most importantly—nailed down the kernel configuration to generate a .config file of our own. You even tried adding a custom item to the configuration menu.
But those were just the preparations.
Now, we begin the actual manufacturing process. The remaining four steps will transform a pile of text files into a kernel image capable of making a machine fly:
- Compile the kernel image and modules (Step 4): This is the main event, and when your CPU heats up the most.
- Install kernel modules (Step 5): The compiled .ko files need to go where they belong.
- Generate initramfs and configure the bootloader (Step 6): Solve the classic "chicken-and-egg" problem and tell BIOS/UEFI where to find the new kernel.
- Customize GRUB and final verification (Step 7): Ensure our new kernel appears at boot and confirm it actually works as expected.
As an epilogue to this chapter, we'll also break free from architecture constraints and try cross-compiling a kernel for another board—the legendary Raspberry Pi.
Ready? This time, we're not just reading the manual—we're turning the screws.
3.1 Compiling the Kernel Image and Modules
If you approach this purely as an end user, compiling the kernel is shockingly simple.
Just make sure you're in the root directory of the kernel source tree, type make, and go brew a cup of coffee. Really, just that one command. The kbuild system automatically handles everything else: it compiles the kernel image, compiles every component you configured as a module (m), and, on embedded targets, even compiles the Device Tree Blobs (DTBs).
The first build will take some time, which is perfectly normal. The modern Linux kernel codebase is staggeringly large, estimated at 25 to 30 million lines of source code (SLOC). This is an incredibly memory- and CPU-intensive task, so much so that some people even use kernel compilation as a stress-testing tool!
Of course, make can be followed by different targets. If you type make help, you'll see a huge list of options. We used it in the previous chapter to view configuration targets; now we use it to look at build targets.
We already set the environment variable LKP_KSRC to point to our source directory in the previous chapter, so let's use it directly:
$ cd ${LKP_KSRC}
$ make help
[...]
Other generic targets:
all - Build all targets marked with [*]
* vmlinux - Build the bare kernel
* modules - Build all modules
[...]
Architecture specific targets (x86):
* bzImage - Compressed kernel image (arch/x86/boot/bzImage)
[...]
$
Notice something here: running make all (or just typing make, since it's the default target) builds the three targets marked with * above. What do they represent?
- vmlinux: This is the uncompressed kernel image file. It's huge, especially when debug information is enabled. We usually don't boot with it directly, but it's invaluable for kernel debugging—never delete it.
- modules: All configuration items marked as m (module) are compiled into .ko (Kernel Object) files, temporarily stored in the source tree.
- bzImage: This is the x86-architecture-specific compressed kernel image (big zImage). This is the file the bootloader actually loads into memory, decompresses, and jumps into for execution.
Here's a frequently asked question: since we boot with bzImage, what's the point of vmlinux?
Imagine bzImage is a sealed shipping box with compressed contents inside, convenient for transport. vmlinux is the full parts list spread out on your desk. If you need to debug a kernel crash, you need that uncompressed vmlinux packed with symbol information. Without it, you'll just see a dizzying array of addresses instead of function names.
Parallel Compilation: Squeeze Every Drop from Your CPU
Modern make is smart enough to support multi-process parallel builds. If you're still using single-threaded make, you're wasting your machine's performance.
You can control the parallelism via the -jn option, where n is the upper limit for parallel tasks. A general rule of thumb (heuristic) is:
n = number of CPU cores * factor
This factor is usually 2. If your system has an extreme number of cores (hundreds or thousands), it can be lowered to 1.5. Note that the core count here refers to logical cores, including SMT siblings (Simultaneous Multi-Threading, known on Intel CPUs as Hyper-Threading).
How do you know how many cores your machine has? Just use nproc:
$ nproc
4
This is my virtual machine configuration, allocated with 4 cores. So, we can set the parallel count to 8 (4 * 2).
$ make -j8
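Rather than hard-coding the job count, you can compute it at run time. The snippet below is a small sketch of the rule of thumb above; the variable names are ours, and the actual make invocation is commented out since it needs a configured kernel tree:

```shell
# Derive a sensible -j value from the logical core count (rule of thumb: cores * 2).
cores=$(nproc)          # logical cores visible to this system
jobs=$(( cores * 2 ))
echo "detected ${cores} cores, building with make -j${jobs}"
# make -j"${jobs}"      # the actual build step; run it from the kernel source root
```

If you don't care about the factor, the common shorthand `make -j$(nproc)` is also perfectly fine.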
💡 Side effect warning: Compiling the kernel is extremely CPU- and memory-intensive.
If you compile with a graphical desktop running inside a VM, you might experience severe lag or even get abruptly logged out. This usually means the system ran out of memory.
- Recommendation: Switch to multi-user text mode (runlevel 3, i.e. multi-user.target) before compiling. Under systemd, use sudo systemctl isolate multi-user.target.
- Alternatively: Allocate more memory to the VM. Memory is cheap these days, and it's certainly cheaper than watching your build fail halfway through.
- Best practice: Connect to the VM via ssh to compile, redirecting output to a file for easier troubleshooting: make -j8 2>&1 | tee out.txt
Those Annoying Dependency Errors
After you gleefully type make -j8, things don't always go smoothly. You might encounter errors like this:
warning: Cannot use CONFIG_STACK_VALIDATION=y, please install libelf-dev
[...]
make[1]: *** No rule to make target 'debian/canonical-revoked-certs.pem'
The first problem is a missing libelf-dev. On Ubuntu, sudo apt install libelf-dev will fix it.
The second problem is more interesting and deceptive. It suddenly pops up after compilation has been running for a while, causing the build to fail. The root cause lies in a configuration item named CONFIG_SYSTEM_REVOCATION_KEYS.
On recent Ubuntu systems, this configuration item points by default to a certificate file that doesn't exist in the vanilla kernel (the unmodified upstream source). The easiest fix is to simply disable it:
# Use the scripts/config helper to disable the option
scripts/config --disable SYSTEM_REVOCATION_KEYS
# Verify the change
$ grep CONFIG_SYSTEM_REVOCATION_KEYS .config
# CONFIG_SYSTEM_REVOCATION_KEYS is not set
Now, run make -j8 again. This time, it should run straight through to the end.
Build Artifacts: The Three Files You're Looking For
If all goes well, the screen will finally scroll with output similar to this:
LD vmlinux
SYSMAP System.map
[...]
BUILD arch/x86/boot/bzImage
Kernel: arch/x86/boot/bzImage is ready (#3)
At this point, you should find three key files (there are many other artifacts, but these three are the most important). Two of them sit right in the root of the source tree:
$ ls -lh vmlinux System.map
-rw-rw-r-- 1 c2kp c2kp 4.8M May 16 16:12 System.map
-rwxrwxr-x 1 c2kp c2kp 704M May 16 16:12 vmlinux
$ file vmlinux
vmlinux: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, not stripped
See the size of that vmlinux? 704MB. That's the raw kernel image containing all debug symbols and metadata.
And that System.map is the kernel symbol table. It records the memory addresses corresponding to function and variable names inside the kernel. This is crucial when debugging OOPS or crashes.
As for the compressed image we actually boot—bzImage—it's hidden in the architecture-specific directory:
$ ls -lh arch/x86/boot/bzImage
-rw-rw-r-- 1 c2kp c2kp 12M May 16 16:12 arch/x86/boot/bzImage
$ file arch/x86/boot/bzImage
arch/x86/boot/bzImage: Linux kernel x86 boot executable bzImage, RO-rootFS, swap_dev 0x6, Normal VGA
12MB. This is what ultimately gets stuffed into memory.
Here's a handy trick: the kernel's Makefile has some built-in verification targets that can help you confirm the current kernel version or image name, saving you from typos in paths:
$ make kernelrelease kernelversion image_name
6.1.25-lkp-kernel
6.1.25
arch/x86/boot/bzImage
Alright, we have the image. Now, we need to support it—by installing the components compiled as modules.
3.2 Installing Kernel Modules
During the compilation phase, all kernel modules marked as m were compiled into .ko files, scattered throughout the source tree. But compiling them isn't enough. The system needs to look for these modules in a "place everyone knows about" at boot time.
That "place everyone knows about" is: /lib/modules/$(uname -r)/.
Where Did the Modules Go?
Before installing, let's first see what modules were actually generated in the source. Use the find command to find out:
$ find . -name "*.ko"
./crypto/crypto_simd.ko
./crypto/cryptd.ko
[...]
./fs/binfmt_misc.ko
./fs/vboxsf/vboxsf.ko
Right now, they're just regular files in the source tree. For the system to load them during startup and runtime, we need to perform the installation step.
Running the Installation
Installation is simple—just one command—but it requires root privileges because we're writing to /lib/modules:
$ sudo make modules_install
[...]
INSTALL /lib/modules/6.1.25-lkp-kernel/kernel/arch/x86/crypto/aesni-intel.ko
SIGN /lib/modules/6.1.25-lkp-kernel/kernel/arch/x86/crypto/aesni-intel.ko
[...]
DEPMOD /lib/modules/6.1.25-lkp-kernel
$
Watch what happens in the output:
- INSTALL: Modules are copied to their corresponding paths under the /lib/modules/6.1.25-lkp-kernel/kernel/ directory.
- SIGN: If your system has kernel module signing enabled (CONFIG_MODULE_SIG, a strong security feature), the installation process signs each module. If signature enforcement is on (CONFIG_MODULE_SIG_FORCE), unsigned or incorrectly signed modules are refused at load time.
- DEPMOD: Finally, the system runs the depmod tool. Its job is to analyze dependencies between modules (module A might depend on module B) and generate metadata files like modules.dep to ensure the correct loading order.
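To make depmod's role concrete, here's what the modules.dep format looks like: one line per module, "module path: its dependencies". The entries below are invented for the sketch; the real index lives at /lib/modules/$(uname -r)/modules.dep:

```shell
# Invented sample of the modules.dep format depmod generates.
cat > /tmp/modules.dep.sample <<'EOF'
kernel/fs/vboxsf/vboxsf.ko:
kernel/crypto/crypto_simd.ko: kernel/crypto/cryptd.ko
EOF

# modprobe consults this index so that, in this sample, cryptd.ko would be
# loaded before crypto_simd.ko. Print only entries that have dependencies:
awk -F': ' 'NF > 1 && $2 != "" { print $1 " needs " $2 }' /tmp/modules.dep.sample
```

This is why you load modules with modprobe rather than raw insmod: modprobe walks this index and pulls in dependencies in the right order for you.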
Now, go check out that directory:
$ ls /lib/modules
5.19.0-40-generic/ 5.19.0-41-generic/ 6.1.25-lkp-kernel/
Each installed kernel has its own dedicated folder. Let's see what's under our new kernel:
$ ls /lib/modules/6.1.25-lkp-kernel/kernel/
arch/ crypto/ drivers/ fs/ lib/ net/ sound/
These are all the modules we just compiled, now in position and ready for action.
⚠️ Warning: Don't Break Your Host Machine
The Cross-Compilation Trap
If you are cross-compiling (e.g., compiling for ARM on x86), absolutely do not run sudo make modules_install directly unless you have set INSTALL_MOD_PATH. Otherwise, you'll overwrite your host machine's modules, or mix ARM modules into the x86 directories, which can cause extreme instability or even prevent the system from booting.
The correct approach is to set an installation root directory:
export STG_MYKMODS=../staging/rootfs/my_kernel_modules
make INSTALL_MOD_PATH=${STG_MYKMODS} modules_install
This way, all modules will be installed under ${STG_MYKMODS}/lib/modules/, completely isolated from your host machine. This is standard practice in embedded development.
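To convince yourself the staging really is isolated from the host, the sketch below fakes the directory layout that modules_install would create under INSTALL_MOD_PATH. The version string and module name are taken from this chapter's examples, but the files here are empty placeholders, not real builds:

```shell
# Simulate the staged tree "make INSTALL_MOD_PATH=$STG modules_install" populates;
# everything stays inside a throwaway temp directory, never touching /lib/modules.
STG=$(mktemp -d)
mkdir -p "${STG}/lib/modules/6.1.25-lkp-kernel/kernel/fs/vboxsf"
touch "${STG}/lib/modules/6.1.25-lkp-kernel/kernel/fs/vboxsf/vboxsf.ko"

# All artifacts live under ${STG}/lib/modules/, ready to be copied into a
# target root filesystem image:
find "${STG}" -name '*.ko'
rm -rf "${STG}"   # clean up the sketch
```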
3.3 Generating the initramfs Image and Boot Configuration
Now we have the kernel and the modules. Just one final kick left.
On the x86 architecture, this step usually involves two parts: generating the initramfs (Initial RAM Filesystem) image, and updating the bootloader (GRUB) configuration.
Why bother with an initramfs? That's a good question, and we'll break it down in detail shortly. For now, let's just get this step working. On x86_64 Ubuntu, all of this usually takes just one command:
$ sudo make install
INSTALL /boot
run-parts: executing /etc/kernel/postinst.d/dkms 6.1.25-lkp-kernel /boot/vmlinuz-6.1.25-lkp-kernel
run-parts: executing /etc/kernel/postinst.d/initramfs-tools 6.1.25-lkp-kernel /boot/vmlinuz-6.1.25-lkp-kernel
update-initramfs: Generating /boot/initrd.img-6.1.25-lkp-kernel
[...]
run-parts: executing /etc/kernel/postinst.d/zz-update-grub 6.1.25-lkp-kernel
Sourcing file `/etc/default/grub'
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-6.1.25-lkp-kernel
Found initrd image: /boot/initrd.img-6.1.25-lkp-kernel
done
As you can see, make install does a lot of work behind the scenes. It calls a bunch of scripts under /etc/kernel/postinst.d/, including update-initramfs (to generate that .img file) and update-grub (to modify GRUB's menu).
What Was Generated?
Now there are a few key new files in the /boot directory:
- /boot/vmlinuz-6.1.25-lkp-kernel: A copy of bzImage, the compressed kernel image.
- /boot/initrd.img-6.1.25-lkp-kernel: The freshly generated initramfs image.
- /boot/System.map-6.1.25-lkp-kernel: A copy of the symbol table.
You might ask: What exactly is initramfs? Why is it mandatory?
Understanding the initramfs Framework: Untying the "Chicken-and-Egg" Knot
This is a deeper question than it appears.
Imagine you're maintaining a Linux distribution. Your users might format their root filesystems into all sorts of exotic types—ext4, btrfs, f2fs, or even encrypted LUKS partitions.
The kernel itself is lean. To maintain flexibility, these specific filesystem drivers (like f2fs.ko) are usually compiled as kernel modules rather than baked directly into the kernel image.
This leads to the classic "chicken-and-egg" problem:
- The kernel boots and is running in memory.
- It wants to mount the root filesystem (e.g., in f2fs format).
- To mount f2fs, it needs to load the f2fs.ko driver module.
- But the f2fs.ko file itself lies on that not-yet-mounted root filesystem (usually under /lib/modules/.../fs/f2fs/f2fs.ko).
Deadlock.
initramfs is the knot-breaker.
It's a minimal filesystem image that lives entirely in RAM, packed with the bare essentials needed to mount the real root filesystem: driver modules (like f2fs.ko), crypto libraries (for unlocking encrypted volumes), helper scripts, and an /sbin/init program.
The flow looks like this:
- The bootloader (GRUB) loads the kernel image (vmlinuz) and the initramfs image (initrd.img) into memory.
- The kernel starts up and unpacks the initramfs into a temporary RAM-based filesystem.
- The kernel mounts this RAM disk as a temporary root filesystem.
- Scripts inside the initramfs run, loading the necessary hardware and filesystem drivers (like f2fs.ko).
- Once ready, a script performs a pivot_root operation, switching the root from the in-RAM filesystem to the real disk partition.
- The system continues booting and executes the real /sbin/init (systemd or SysVinit).
This is why, even if your hard drive is encrypted, you see that little password prompt at boot—that's the userspace environment provided by initramfs running.
Sneaking a Peek Inside initramfs
Don't be intimidated by it; it's really just an archive. On Ubuntu, you can use the lsinitramfs command to see what's inside:
$ lsinitramfs /boot/initrd.img-6.1.25-lkp-kernel | head -n 20
.
kernel
bin
conf/initramfs.conf
etc
lib64
libx32
run
sbin
scripts
usr
usr/bin/cpio
usr/bin/dd
[...]
usr/lib/modules/6.1.25-lkp-kernel/kernel/fs/f2fs/f2fs.ko
[...]
Look: there's a usr/lib/modules directory inside, which means the f2fs.ko that lets us mount the real root filesystem is right in there. Once pivot_root completes, this temporary filesystem's mission is over and its memory is reclaimed.
3.4 Customizing the GRUB Bootloader
The kernel and initramfs are standing by in /boot, and the GRUB configuration has been updated. Now, we need to handle the final piece of the interaction layer: making GRUB show its menu at boot.
By default, modern GRUB often boots directly to the newest kernel in the pursuit of faster boot times and a so-called "clean experience," giving you no chance to choose. This is a disaster for development and debugging—if the new kernel fails to boot, you won't even have a chance to fall back.
Forcing the Menu to Display
We need to modify GRUB's configuration file. Note that these operations are performed on your target system (the VM or physical machine running Ubuntu).
- Back up the configuration file (a good habit):
sudo cp /etc/default/grub /etc/default/grub.orig
- Edit the file:
sudo vi /etc/default/grub
- Modify the key lines: to make the menu display every time, find GRUB_TIMEOUT_STYLE and change it to menu; or, if there's a line GRUB_HIDDEN_TIMEOUT_QUIET=true, change it to false. At the same time, set the timeout (how long GRUB waits before auto-booting the default entry if you don't press a key):
GRUB_TIMEOUT=3
GRUB_TIMEOUT_STYLE=menu
GRUB_HIDDEN_TIMEOUT_QUIET=false
- Update GRUB. Don't forget this step; changes silently not taking effect is a common beginner mistake:
sudo update-grub
Specifying the Default Boot Kernel
GRUB defaults to booting the 0th kernel in the list (usually the most recently installed one). If you want to play it safe and have it default to the distro's original older kernel, you can modify it like this:
GRUB_DEFAULT="Advanced options for Ubuntu>Ubuntu, with Linux 5.19.0-42-generic"
This syntax looks a bit like a path, specifying the submenu and the exact menu entry. Remember to run sudo update-grub again after making changes.
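If you're unsure of the exact titles, you can pull them out of the generated config. The grub.cfg fragment below is invented for illustration; on a real system you'd point the same pipeline at /boot/grub/grub.cfg:

```shell
# Invented grub.cfg fragment (real file: /boot/grub/grub.cfg).
cat > /tmp/grub.cfg.sample <<'EOF'
submenu 'Advanced options for Ubuntu' {
	menuentry 'Ubuntu, with Linux 6.1.25-lkp-kernel' {
	menuentry 'Ubuntu, with Linux 5.19.0-42-generic' {
EOF

# Entry titles live between the first pair of single quotes on each menu line;
# these are exactly the strings GRUB_DEFAULT expects, joined with ">".
grep -E "^[[:space:]]*(menuentry|submenu) " /tmp/grub.cfg.sample | cut -d"'" -f2
```

With the sample above, the submenu title plus an entry title compose the GRUB_DEFAULT value shown earlier.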
3.5 Ignition: Witnessing Your New Kernel
Everything is ready. Now comes the heart-pounding moment—reboot the system.
$ sudo reboot
When the VM (or physical machine) restarts, hold down the Shift key (or if you're using UEFI, you might need to press Esc, depending on the firmware). You should see the GRUB menu interface.
Select "Advanced options for Ubuntu", then choose the kernel with your compilation marker (e.g., Ubuntu, with Linux 6.1.25-lkp-kernel), and press Enter.
If you remove quiet splash from the kernel parameters (press the e key to edit the boot entry and delete it), you'll also get to see a cascade of kernel boot logs flying by. It's a very primal kind of romance.
Verification: Is It Really Our Kernel?
Once you're in the system, don't just celebrate. First, verify that we're actually running the kernel we just compiled.
$ uname -r
6.1.25-lkp-kernel
That's not enough. Remember how we changed CONFIG_HZ in our configuration in the last chapter? We changed it to 300. Let's confirm this configuration actually took effect. There's a script in the kernel source called extract-ikconfig that can extract configuration information from the kernel image:
$ ${LKP_KSRC}/scripts/extract-ikconfig /boot/vmlinuz-6.1.25-lkp-kernel | grep -E "LOCALVERSION|CONFIG_HZ"
CONFIG_LOCALVERSION="-lkp-kernel"
[...]
CONFIG_HZ_300=y
CONFIG_HZ=300
Perfect. Or, since we enabled CONFIG_IKCONFIG_PROC in our configuration, we can also check directly from the /proc filesystem:
$ gunzip -c /proc/config.gz | grep -E "LOCALVERSION|CONFIG_HZ"
CONFIG_LOCALVERSION="-lkp-kernel"
CONFIG_HZ=300
At this moment, you can be certain: you control this machine's kernel.
3.6 Crossing Architectures: Cross-Compiling a Kernel for the Raspberry Pi
If your work is strictly on x86 servers, you can consider yourself graduated. But as someone who tinkers with kernels, you'll eventually encounter embedded devices. Here, we'll use the Raspberry Pi 4 (Raspberry Pi 4 Model B, ARM64 architecture) for practice.
Why not compile directly on the Raspberry Pi? Because the Raspberry Pi's performance is relatively weak, and compiling a kernel could take several hours. Cross-compiling, on the other hand, generates ARM code on your high-performance x86 host machine and gets it done in minutes. This is the standard approach in embedded development.
Step 1: Preparing the Source
We need the kernel source officially maintained by the Raspberry Pi Foundation. Pick a working directory:
export RPI_STG=~/rpi_work
mkdir -p ${RPI_STG}/kernel_rpi
cd ${RPI_STG}/kernel_rpi
git clone --depth=1 --branch=rpi-6.1.y https://github.com/raspberrypi/linux.git
Here we're cloning the rpi-6.1.y branch, which conveniently matches the mainline version we compiled on x86 (both are LTS).
Step 2: Installing the Cross-Compilation Toolchain
In modern Debian/Ubuntu systems, cross-compilers are already packaged up, so you don't need to struggle with crosstool-ng yourself.
We need the aarch64 (ARM 64-bit) toolchain:
$ sudo apt install gcc-aarch64-linux-gnu binutils-aarch64-linux-gnu
Once installed, you'll find a bunch of prefixed tools under /usr/bin/: aarch64-linux-gnu-gcc, aarch64-linux-gnu-ld, and so on.
This prefix aarch64-linux-gnu- is the so-called Toolchain Prefix. We need to tell the kernel's Makefile about it.
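Mechanically, the prefix works by simple string concatenation: kbuild glues CROSS_COMPILE in front of every tool name it invokes. The loop below is our own illustration of that composition, not kbuild's actual code:

```shell
# Sketch: how a CROSS_COMPILE prefix turns generic tool names into
# cross-toolchain binaries (mimicking what kbuild does internally).
CROSS_COMPILE=aarch64-linux-gnu-
for tool in gcc ld objcopy strip; do
    echo "kbuild would invoke: ${CROSS_COMPILE}${tool}"
done
```

Leave CROSS_COMPILE empty and the very same rules resolve to the native gcc, ld, and friends, which is why one Makefile serves both native and cross builds.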
Step 3: Configuration and Compilation
The key here is to tell the Makefile two environment variables:
- ARCH=arm64: We're compiling for the ARM64 architecture.
- CROSS_COMPILE=aarch64-linux-gnu-: Which toolchain to use for compilation.
Let's clean up first, then load the default Raspberry Pi 4 configuration (bcm2711_defconfig is the config for the Broadcom 2711 chip, used in the Raspberry Pi 4 and 400):
cd ${RPI_STG}/kernel_rpi/linux
make mrproper
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- bcm2711_defconfig
If you need to fine-tune the configuration, you can still use menuconfig, remembering to pass the architecture parameter:
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- menuconfig
Finally, fire away:
make -j8 ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- all
The generated file names are different this time. On ARM64, the compressed kernel image isn't called bzImage; it's called Image.gz, and it lives under arch/arm64/boot/.
$ ls -lh arch/arm64/boot/Image.gz
-rw-rw-r-- 1 c2kp c2kp 7.9M Jun 21 13:24 arch/arm64/boot/Image.gz
This is what you need to copy to the Raspberry Pi's SD card. And don't forget to install the compiled modules into a staging directory (via INSTALL_MOD_PATH, as described earlier) and copy them over as well.
Packaging as deb: A More Elegant Delivery Method
If you want to install the compiled kernel on another machine (or another board), the easiest way isn't to manually copy files, but to package them as deb packages.
The kernel Makefile thoughtfully provides this target:
make -j8 ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- bindeb-pkg
This will generate several .deb files in the parent directory:
- linux-image-*.deb: Kernel image and modules.
- linux-headers-*.deb: Kernel header files.
- linux-libc-dev_*.deb: Kernel headers for userspace (libc) development.
You just need to copy these files to the target machine and run sudo dpkg -i *.deb to complete the installation. Pretty cool, right?
3.7 Chapter Echoes
We did a lot in this chapter.
On the surface, we were learning how to type make, how to modify GRUB configuration, and how to use a cross-compiler. But in reality, we were building an intuition about "system building."
You should now understand that an operating system isn't a monolithic slab. It's layered:
- At the very bottom is the Bootloader (GRUB), the first gatekeeper to be awakened, responsible for pulling the kernel into memory.
- Then comes the kernel image, the heart, but it's often too bulky and needs to strip out some functionality (like filesystem drivers).
- To solve the dependency problems caused by this stripping, we have initramfs, the temp worker that paves the ground before the real world is mounted.
- Finally, there are the kernel modules, on-demand plugins that keep the kernel lean yet extensible.
Remember the intuition we mentioned at the beginning of this chapter, that building a kernel from source is manufacturing a soul? Now you can add to it: this soul needs not just a torso (the kernel image), but also limbs (modules), and a midwife (initramfs and the bootloader) to ensure it's safely delivered.
In the next chapter, we'll enter a more microscopic world. Instead of just building the kernel, we'll start injecting code into it. We'll learn how to write Linux Kernel Modules (LKMs). If this chapter was about learning how to build a car, the next chapter is about learning how to modify the engine.
Exercises
Exercise 1: Understanding
Question: During the kernel build process, what are the fundamental differences in purpose and state among the vmlinux, bzImage, and vmlinuz files? If you only see vmlinuz but not bzImage in the /boot directory of an x86 system, does this mean the build failed?
Answer and Analysis
Answer: vmlinux is the uncompressed kernel ELF executable, containing debug symbols, huge in size, and primarily used for debugging—it is not used directly for booting. bzImage is the x86-architecture-specific "big zImage," a compressed boot image that the bootloader actually loads, decompresses into memory, and executes. vmlinuz is the compressed version name corresponding to vmlinux (the 'z' stands for compressed), and is usually just a copy or symlink of bzImage. Seeing vmlinuz in the /boot directory is normal; it is typically a copy or symbolic link of bzImage, so the difference in file names alone does not indicate a build failure.
Analysis: This tests your understanding of core artifact names. vmlinux is the raw product after compilation and linking; although it's in ELF format, it contains too much metadata and symbols to be suitable for direct booting. bzImage (boot zImage) is a compressed boot image designed to solve early memory limitations. vmlinuz is simply a naming convention (vmlinux + z). In most modern distributions, /boot/vmlinuz-xxx is actually copied from the compiled bzImage artifact. Therefore, as long as vmlinuz is generated, it usually means bzImage was successfully built.
Exercise 2: Application
Question: Suppose you are cross-compiling kernel modules for an embedded ARM device. To avoid overwriting the host development machine's modules, you need to install the modules to a temporary directory /tmp/rootfs/lib/modules. Write the make command to achieve this goal, and explain why the system cannot directly load these modules not installed under /lib/modules.
Answer and Analysis
Answer: Command: make INSTALL_MOD_PATH=/tmp/rootfs modules_install. Analysis: Using the INSTALL_MOD_PATH environment variable specifies the root path for module installation, so modules will be installed under the /tmp/rootfs/lib/modules/<kernel-version>/ directory. The system cannot load them directly because module loading tools (like modprobe) rely on dependency files like modules.dep under /lib/modules/$(uname -r) to locate modules, and the kernel's default security mechanisms (module signature verification paths) also point to the standard system paths.
Analysis: This tests the application of INSTALL_MOD_PATH. In cross-compilation or system building, we usually don't want to pollute the host environment. By modifying the installation prefix, we can package the artifacts into a root filesystem image. Additionally, this involves how depmod works—the index files it generates must match the actual module loading paths; otherwise, even if you manually specify the .ko file path, loading may fail due to missing dependencies.
Exercise 3: Thinking
Question: Why does a Linux system need initramfs (Initial RAM Filesystem) at boot? If you compile the disk driver into the kernel (y) instead of as a module (m), can you completely discard initramfs? Analyze this from the perspective of the root filesystem mounting "chicken-and-egg" problem.
Answer and Analysis
Answer: initramfs exists to solve the "chicken-and-egg" driver dependency problem: the kernel must load a disk driver to read and mount the root filesystem, but if the driver file itself is stored in the /lib/modules of the not-yet-mounted root filesystem, the kernel cannot read it. initramfs is a minimal filesystem in memory; after the bootloader loads it into memory, the kernel can directly access the driver modules inside it, thereby mounting the real root filesystem. Even if the disk driver is compiled into the kernel, which can simplify the boot process, in certain complex scenarios (like LVM logical volumes, encrypted root filesystems, or network-mounted NFS), you still need userspace tools in initramfs (like cryptsetup, lvm) to assist in preparing and switching (pivot_root) the root filesystem, so you usually cannot completely discard it.
Analysis: This is a critical thinking question. The core is understanding the collaboration between kernel space and userspace tools. Although compiling the driver into the kernel (y) does allow the kernel to recognize hardware early in the boot process, when dealing with modern storage stacks (like RAID, LUKS encryption, complex device mappings), static code inside the kernel alone is often insufficient. You also need userspace scripts to configure the environment. initramfs provides a minimal Linux environment to run these necessary preparations before switching the root filesystem. Therefore, unless it's an extremely simple single-partition ext4 boot disk, initramfs is usually mandatory.
Key Takeaways
The core of compiling the Linux kernel lies in executing the make command, which generates vmlinux (an uncompressed image with debug symbols), bzImage (the compressed image actually used for booting on x86), and .ko kernel modules based on the .config configuration. Because the kernel codebase is massive (~30 million lines), the build process is extremely CPU- and memory-intensive, so make -j$(nproc) is typically used for parallel compilation to boost efficiency. Additionally, the build process may fail due to missing dependency libraries (like libelf-dev) or configuration conflicts (like CONFIG_SYSTEM_REVOCATION_KEYS pointing to a non-existent file), requiring environment or configuration adjustments based on the error messages.
Compiled kernel modules must be installed to the system's /lib/modules/$(uname -r)/ directory via sudo make modules_install so the system can find and load them at runtime. This step doesn't just copy files; it also uses depmod to analyze dependencies between modules and generate mapping files. Notably, when cross-compiling (e.g., compiling for ARM on an x86 host), you must set the INSTALL_MOD_PATH variable to specify the installation root directory; otherwise, you'll mistakenly overwrite the host's modules, causing the system to crash.
To enable the system to mount the real root filesystem, you must generate an initramfs (early userspace), which is the key to solving the "chicken-and-egg" dependency problem at kernel boot. Because the kernel itself usually doesn't include filesystem drivers (like f2fs or LUKS encryption drivers), and these driver files happen to reside on the not-yet-mounted hard drive, initramfs acts as a temporary in-memory filesystem responsible for loading necessary drivers and executing the pivot_root switch during early boot, ultimately handing control over to the real disk system.
Booting a new kernel requires copying the compiled image and initramfs to the /boot directory and updating the GRUB configuration (usually done automatically via sudo make install). To prevent a "bricked" system if the new kernel fails to boot, it's recommended to modify /etc/default/grub to set GRUB_TIMEOUT_STYLE to menu to force the boot menu to display, and to make good use of GRUB_DEFAULT to specify the default boot entry. After rebooting, you can not only check the version via uname -r, but also use /proc/config.gz to verify that specific kernel parameters (like CONFIG_HZ) took effect as expected.
When the target device has weak performance (like a Raspberry Pi), you should cross-compile on a high-performance host. This requires installing the corresponding cross-toolchain (like gcc-aarch64-linux-gnu) and explicitly specifying the ARCH (like arm64) and CROSS_COMPILE variables in the make command. After compilation, using the bindeb-pkg target packages the kernel and modules into .deb files, which is a more elegant delivery method than manually copying files, making it easier to deploy and manage on the target device.