CTF exploitable kernel infrastructure
This year, I got the opportunity to write challenges for CSAW CTF again. One of the challenges I wrote, “krackme”, was an exploitable loadable kernel module disguised as a simple crackme.
While the challenge itself may not need a blog post, I hope the infrastructure I used to provide VMs to the teams is helpful for creating similar challenges in the future.
The problem with having kernel exploits in a CTF is that crashing a kernel can have serious impact on the system, so allowing more than one team access to the same instance is not really a possibility. With userspace programs, it is cheap to just serve up a new instance or a fork of a program on each connection, and userspace exploits have always been a big part of CTF. With kernel exploits, each team needs its own VM, plus the ability to reboot that VM in the case of a crash. This has mostly restricted kernel challenges to on-site CTFs, and mncoppola has made a great framework for on-site kernel exploit challenges. I wanted to design my challenge to be more similar to the userspace exploit challenges that we all know and love, so my goals were:
- Spawn a new VM on connect.
- Give each connection a ‘fresh’ VM (e.g. a new copy of the hard drive).
Building the VM
The first step to hosting a kernel exploit challenge is to build your vulnerable VM. I used buildroot to build my kernel with a minimal busybox userland and a tiny 4.5MB ext2 disk. You can start by downloading the buildroot tools.
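Something along these lines (the release version here is just an example; substitute the latest from buildroot.org):

```shell
# Download and unpack a buildroot release (version is an example).
wget https://buildroot.org/downloads/buildroot-2014.08.tar.gz
tar xzf buildroot-2014.08.tar.gz
cd buildroot-2014.08

# Open the configurator.
make menuconfig
```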
Running make menuconfig (or one of the gui configurators, like make xconfig) drops you into the buildroot configurator. If you've ever built a kernel before, this type of configuration screen will look familiar. The important options are:
- Target: select the architecture you wish to build.
- Toolchain: specify your kernel header version.
- Kernel: build a kernel (if you want a custom configuration, you can check out the kernel source and make your own custom config, or just use the defconfig from buildroot).
- Filesystem Images: specify what to use as a root filesystem. I picked ext2.
Hit save, and then run make.
This may take a while as buildroot downloads the toolchain and the kernel source and builds everything. Check output/images for your kernel and the rootfs.ext2 filesystem. At this point you have a minimal kernel with busybox that you can boot with qemu. This is great, but we still need some more things, like the vulnerable module and some way to launch it. By default, the root user has no password and you can log in with that. Please remember to change the password!
Building the module
If you're like me and don't build kernel modules all the time, you probably just steal the kernel module makefile from the first hit on Google for "kernel module makefile". This won't work to build against your custom kernel.
I'm going to assume you already have working module source, and will just provide the makefile. If you need help writing the module, there are better guides than I can give here. With this makefile, replace the Linux version (3.2.64) that I used with whatever version you use, krackme.ko with whatever your kernel module is called, and /path/to/buildroot/ with your actual path to buildroot.
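Something like this should work, assuming buildroot built your kernel under output/build/linux-3.2.64 (adjust the names and paths to match your setup):

```makefile
obj-m += krackme.o

# The kernel tree that buildroot configured and built.
KDIR := /path/to/buildroot/output/build/linux-3.2.64

all:
	$(MAKE) -C $(KDIR) M=$(PWD) modules

clean:
	$(MAKE) -C $(KDIR) M=$(PWD) clean
```

If your target architecture differs from your host, you may also need to pass ARCH= and CROSS_COMPILE= pointing at the buildroot toolchain in output/host/usr/bin/.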
So now you should have a .ko ready to be inserted into your new kernel.
Setting up the VM
This magic qemu incantation will give you access to your VM with some networking.
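Something like the following, assuming an x86-64 build (the image names come from the buildroot output; adjust the root device and console to your configuration):

```shell
qemu-system-x86_64 \
    -kernel output/images/bzImage \
    -hda output/images/rootfs.ext2 \
    -append "root=/dev/sda console=ttyS0" \
    -net nic -net user \
    -nographic
```

The user-mode networking gives the guest outbound access through qemu without any host-side network configuration.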
The first thing to do is to log in as root and do some setup. Change the password to something random, and get the VM into the state you want.
Busybox's init scripts run /etc/init.d/rcS, so you can add additional instructions there. Mine looks like this:
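A minimal sketch of such an rcS (the addresses are the defaults qemu's user-mode networking hands out; your actual script will likely have more in it):

```shell
#!/bin/sh
# Mount the virtual filesystems.
mount -t proc proc /proc
mount -t sysfs sysfs /sys

# Bring up networking (qemu user-mode defaults).
ifconfig lo 127.0.0.1 up
ifconfig eth0 10.0.2.15 netmask 255.255.255.0 up
route add default gw 10.0.2.2

# Challenge-specific setup.
/root/setup.sh
```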
with /root/setup.sh looking like this:
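A sketch, assuming you copied the module to /root (10.0.2.3 is the DNS server qemu's user-mode networking provides):

```shell
#!/bin/sh
# Point DNS at qemu's user-mode resolver.
echo "nameserver 10.0.2.3" > /etc/resolv.conf

# Load the vulnerable module.
insmod /root/krackme.ko
```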
The nameserver setup is important for networking, and make sure to actually insmod your kernel module. You can transfer the .ko by any means: wget, mounting the disk image and copying, etc.
Next, add a new user with a password that you will provide to the attackers, and exit.
Now you should have a VM frozen in a good state. This launcher script can be used to give every launch a new copy of the VM from that snapshot. Thanks to acez for telling me about redirecting the monitor to /dev/null to prevent players from dropping into the qemu monitor.
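A sketch of such a launcher, assuming the pristine images live in /vms (a qcow2 overlay backed by the snapshot means every boot writes to a throwaway copy):

```shell
#!/bin/sh
# Fresh copy-on-write disk backed by the pristine image.
DISK=$(mktemp /tmp/vm-XXXXXX)
qemu-img create -f qcow2 -b /vms/rootfs.ext2 "$DISK" >/dev/null

qemu-system-x86_64 \
    -kernel /vms/bzImage \
    -hda "$DISK" \
    -append "root=/dev/sda console=ttyS0" \
    -m 32M \
    -net nic -net user \
    -monitor /dev/null \
    -nographic

# Throw the dirty disk away when the player disconnects.
rm -f "$DISK"
```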
So every launch from this script will now create a new copy of the hard disk and boot from that. Users will not be able to interfere with each other, and with the small amount of memory given to each guest, the host should be able to run quite a few of them at once.
Launch on connect
The last step is to make the qemu VM launch when users connect. The simplest way is to add a new user to the host and make the launch script that user's login shell. Give the players login access to the host with the provided username/password, along with the credentials for the user account on the guest VM.
So, essentially, what needs to be done is:
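On the host, that boils down to something like this (the username and script path are examples):

```shell
# Install the launcher and mark it as a valid login shell.
install -m 755 launch-vm.sh /usr/local/bin/launch-vm.sh
echo /usr/local/bin/launch-vm.sh >> /etc/shells

# Create the player account with the launcher as its shell.
useradd -m -s /usr/local/bin/launch-vm.sh myuser
passwd myuser
```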
and then logging in as the myuser user will spawn the qemu VM!
See the PPP suggestions for running a CTF here for other tips on local kernel challenges. While what I've posted will help you set up the infrastructure, it won't guarantee a good challenge, so following that advice is an important step! Most importantly, be creative with your challenge; generic challenges are boring to solve!
Let me know if you have any questions!