Why monolithic kernels are fail
Yes, there have been endless flamewars about this, and we know where Minix and Linux stand today in terms of installed device base, … however, modern Windows NT and Mac OS X are at least a bit microkernel'ish, …
In the wake of the last twelve months (and counting) of NSA, GCHQ & Co. revelations, let's look at the processes running on a typical network appliance:
  PID Uid    VmSize Stat Command
    1 root      364 S    init
    2 root          SW   [keventd]
    3 root          RWN  [ksoftirqd_CPU0]
    4 root          SW   [kswapd]
    5 root          SW   [bdflush]
    6 root          SW   [kupdated]
    8 root          SW   [mtdblockd]
   35 root          SWN  [jffs2_gcd_mtd2]
   87 root      364 S    logger -s -p 6 -t
   89 root      364 S    init
   96 root      376 S    syslogd -C 16
   99 root      348 S    klogd
  230 root      388 S    udhcpc -b -p /var/run/udhcpc.eth0.1.pid -i eth0.1
  314 root      388 S    /usr/sbin/dropbear -g
Hm, ok, so aside from the minimal logging, the DHCP client and the dropbear SSH server for administration tasks, we have nothing separated out into the user-mode context. All the networking, packet filtering, firewalling, load balancing, WiFi stack and what not runs in the kernel context.
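To make that contrast concrete, here is a minimal sketch of my own (not code from any actual appliance) of what handling packets in user space could look like: a tiny C program that attaches to a Linux TUN device and reads raw IP packets, the way a hypothetical user-space network daemon might. The interface name tun0 is just an assumption; it needs root and the tun module.

/* Sketch only: read raw IP packets from a Linux TUN device in user space. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/if.h>
#include <linux/if_tun.h>

int main(void)
{
    int fd = open("/dev/net/tun", O_RDWR);
    if (fd < 0) { perror("open /dev/net/tun"); return 1; }

    struct ifreq ifr;
    memset(&ifr, 0, sizeof(ifr));
    ifr.ifr_flags = IFF_TUN | IFF_NO_PI;          /* raw IP, no packet-info header */
    strncpy(ifr.ifr_name, "tun0", IFNAMSIZ - 1);  /* "tun0" is an assumption */
    if (ioctl(fd, TUNSETIFF, &ifr) < 0) { perror("TUNSETIFF"); return 1; }

    unsigned char buf[2048];
    for (;;) {
        ssize_t n = read(fd, buf, sizeof(buf));   /* one IP packet per read() */
        if (n <= 0) break;
        /* a real user-space stack would parse, filter and forward here;
         * a bug segfaults this one process, not the whole kernel */
        printf("packet: %zd bytes, IP version %u\n", n, buf[0] >> 4);
    }
    close(fd);
    return 0;
}

Whether that is fast enough for a gigabit firewall is a separate question; the point here is only where the isolation boundary sits.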
Yeah, right, exactly that kernel context where a typo, an off-by-one and the like will sooner rather than later crash (oops) the whole system, or hand out a root login.
Would it not be nice if such a typo, bug, … in the NIC driver, the IPv4 or IPv6 stack, the firewall, or almost anywhere else would just segfault and restart the associated user-space ipv4d, iptabled, hosted?
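That kind of restart is cheap to sketch. Below is a minimal, hand-wavy supervisor of my own; the daemon path /usr/sbin/ipv4d is purely hypothetical. It forks such a user-space network daemon and simply restarts it whenever it segfaults or exits:

/* Sketch only: supervise a hypothetical user-space network daemon and
 * restart it whenever it crashes, instead of oops'ing the whole box. */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    const char *daemon_path = "/usr/sbin/ipv4d"; /* hypothetical, does not exist today */

    for (;;) {
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return 1; }

        if (pid == 0) {                       /* child: become the daemon */
            execl(daemon_path, "ipv4d", (char *)NULL);
            perror("execl");                  /* only reached if exec fails */
            _exit(127);
        }

        int status;                           /* parent: wait for it to die */
        if (waitpid(pid, &status, 0) < 0) { perror("waitpid"); return 1; }

        if (WIFSIGNALED(status))
            fprintf(stderr, "ipv4d died on signal %d, restarting\n", WTERMSIG(status));
        else
            fprintf(stderr, "ipv4d exited with %d, restarting\n", WEXITSTATUS(status));

        sleep(1);                             /* do not spin on a persistent failure */
    }
}

On a real system this babysitting would of course be done by init or a microkernel's service manager, but the point stands: a crash becomes a log line and a restart instead of a kernel oops.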
With more isolated drivers and subsystems we should certainly have fewer security issues, and given Linus' famous performance quotes, I would rather trade a few percent of performance for more security. Besides, nowadays we run most systems virtualized anyway, with even more performance overhead, … for security, management and scalability.