“Comprehensive formal verification of an OS microkernel” is an interesting paper worth quibbling a bit with.
The customary definition of a kernel is the software that executes in the privileged mode of the hardware, meaning that there can be no protection from faults occurring in the kernel, and every single bug can potentially cause arbitrary damage.
Practically and conservatively, I agree; but if there is a bug in the kernel code defining a kernel object that is never invoked, you are not at risk.
Perhaps there exist no capabilities to some of those objects.
I was going to quibble about “it is possible to guarantee the absence of bugs” until I encountered “the degree of assurance” and saw that they recognize at least a tad of relativity in assurance.
Proofs start from axioms, which here involve the semantics of CPUs and perhaps of programming languages.
The theorems you prove are often difficult to understand.
I think formal proofs help find bugs, and even find a class of bugs that is difficult to find otherwise; they force you to think about issues you had assumed were well settled.
Proofs will get better.
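To make the relativity concrete, here is a minimal Lean sketch of the situation; all the names (`Domain`, `maps`, `mmu_maps_disjoint`) are hypothetical stand-ins, not anything from the seL4 proofs. The point is that the isolation theorem at the bottom is exactly as trustworthy as the axiom it rests on, which is a model of the MMU rather than the MMU itself.

```lean
-- Hypothetical sketch: a proof is only as strong as its axioms.
axiom Domain : Type
axiom Addr : Type
axiom maps : Domain → Addr → Prop

-- An assumed fact about the hardware: the MMU never maps one address
-- into two distinct domains. The proof simply trusts this model.
axiom mmu_maps_disjoint :
    ∀ (d1 d2 : Domain) (a : Addr), d1 ≠ d2 → maps d1 a → ¬ maps d2 a

-- The "isolation" theorem follows immediately, but its degree of
-- assurance is precisely the degree of assurance of the axiom above.
theorem no_shared_address (d1 d2 : Domain) (a : Addr)
    (h : d1 ≠ d2) (h1 : maps d1 a) (h2 : maps d2 a) : False :=
  mmu_maps_disjoint d1 d2 a h h1 h2
```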
I agree that you can build a kernel the size of Keykos, Coyotos or seL4 and debug it.
seL4 has six system calls, of which five require possession of a capability (the sixth is a yield to the scheduler).
The five system calls are IPC primitives used either to invoke services implemented by other processes (using capabilities to port-like endpoints that facilitate message passing) or to invoke kernel services (using capabilities to kernel objects, such as a thread control block (TCB)).
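As a concrete illustration, here is a hedged C sketch of the two flavors of invocation, written against the standard libsel4 bindings; the capability slot constants `SERVICE_EP` and `THREAD_TCB` are hypothetical and would be set up elsewhere by the parent process.

```c
#include <sel4/sel4.h>

/* Hypothetical capability slots installed by our parent. */
#define SERVICE_EP ((seL4_CPtr)0x10) /* cap to a port-like endpoint */
#define THREAD_TCB ((seL4_CPtr)0x11) /* cap to a TCB kernel object   */

void example(void)
{
    /* 1. Invoke a service implemented by another process: a Call on
     *    an endpoint capability sends a message and awaits the reply. */
    seL4_MessageInfo_t info = seL4_MessageInfo_new(0, 0, 0, 1);
    seL4_SetMR(0, 42);                 /* one word of payload */
    (void)seL4_Call(SERVICE_EP, info);

    /* 2. Invoke a kernel service: the same IPC mechanism aimed at a
     *    capability to a kernel object, here suspending a thread. */
    seL4_TCB_Suspend(THREAD_TCB);

    /* 3. The sixth system call needs no capability at all. */
    seL4_Yield();
}
```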
I think this is a different way of speaking from that of earlier papers, where a program spoke to the kernel about an object instead of speaking to that object: a distinction with a difference.
I prefer speaking to the object without even wondering whether it is in the kernel.
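The distinction fits in a few lines of C; every name here is hypothetical, belonging to neither seL4 nor the earlier systems, and the stubs merely stand in for the two styles of interface.

```c
#include <stdio.h>

typedef unsigned long thread_id_t;  /* kernel-interpreted identifier */
typedef struct object *cap_t;       /* opaque capability to an object */
struct object { const char *name; };

/* Style 1: speak to the kernel ABOUT an object named by an id. */
enum syscall_no { SYS_SUSPEND };
static int kernel_call(enum syscall_no op, thread_id_t id) {
    printf("kernel: op %d on thread id %lu\n", op, id);
    return 0;
}

/* Style 2: speak to the object itself through a capability; the
 * caller cannot tell whether the object is implemented in the kernel. */
enum method_no { METHOD_SUSPEND };
static int invoke(cap_t cap, enum method_no m) {
    printf("object %s: method %d\n", cap->name, m);
    return 0;
}

int main(void) {
    struct object worker = { "worker-thread" };
    kernel_call(SYS_SUSPEND, 7);     /* kernel-mediated naming    */
    invoke(&worker, METHOD_SUSPEND); /* direct object invocation  */
    return 0;
}
```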