I am aware of these applications of capability disciplines:
The attempt to impose capability discipline even on malicious programs was fairly common in kernels but uncommon in languages.
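To make the language case concrete, here is a minimal OCaml sketch of capability discipline. All of the names (append_cap, make_log, untrusted_step) are invented for this example, not taken from any particular system; the point is only that a piece of code can affect just those resources it has explicitly been handed.

  (* A capability to append to one particular log, and nothing more. *)
  type append_cap = string -> unit

  (* Whoever creates the log decides who may write to it, simply by
     choosing whom to hand the append capability. *)
  let make_log () : append_cap * (unit -> string list) =
    let entries = ref [] in
    let append line = entries := line :: !entries in
    let read () = List.rev !entries in
    (append, read)

  (* Untrusted code receives only the append capability; it cannot read
     the log, discard it, or reach any resource it was not given. *)
  let untrusted_step (log : append_cap) = log "step completed"

  let () =
    let append, read = make_log () in
    untrusted_step append;
    List.iter print_endline (read ())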
I think it is no exaggeration to say that the large majority of security exploits in systems can be traced to some design element where capability discipline would have prevented the exploit. An exploit generally requires several steps, and capability design would block more than one of them.

The language analog of capability discipline is called ‘type safety’ or ‘memory safety’. Languages with these properties pay some runtime cost; C, and consequently C++, lack such safety. Every safe language that I know of comes with automatic garbage collection. Of the safe languages I am familiar with, OCaml seems to have the best performance. Languages often ‘add features’, such as class-wide mutable variables, which diminish the set of problems that the language can solve securely.
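The OCaml analog of a class-wide mutable variable is a module-level mutable value. The sketch below, with invented module and function names, shows why such a variable works against capability discipline: it is reachable from anywhere without being granted, which is exactly the ambient authority that capability reasoning must rule out.

  (* With a module-wide mutable table, any code in the program can reach
     the secrets without having been handed anything: ambient authority. *)
  module Ambient = struct
    let secrets : (string, string) Hashtbl.t = Hashtbl.create 16
  end

  (* No argument names the table, yet this code can still mutate it. *)
  let any_code_anywhere () =
    Hashtbl.replace Ambient.secrets "password" "overwritten"

  (* Under capability discipline the table is passed in explicitly, so the
     flow of authority can be read off the call graph. *)
  let disciplined_code (secrets : (string, string) Hashtbl.t) =
    Hashtbl.replace secrets "password" "rotated"

  let () =
    any_code_anywhere ();
    disciplined_code (Hashtbl.create 16)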
A large fraction of exploits fit a common pattern: a packet of bits crosses a trust boundary, and the recipient must delve into the bits to ascertain their meaning. Too commonly the recipient implicitly assumes that the packet conforms to some standard format. The attack is to send a packet that does not conform, whereupon the packet's builder corrupts the logic of the recipient, sometimes even to the extent of making the recipient obey code carried in the packet. This is aptly called a virus. Recent hardware features, such as marking data pages non-executable, somewhat limit this route for introducing new code.
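The remedy that safe languages make natural is to treat the packet as untrusted data and verify each claim it makes about itself before acting on it. The OCaml sketch below assumes an invented format, a 2-byte big-endian length followed by a payload, purely for illustration.

  let parse_payload (packet : bytes) : bytes option =
    let total = Bytes.length packet in
    if total < 2 then None   (* too short to hold even the length field *)
    else
      let claimed =
        (Char.code (Bytes.get packet 0) lsl 8) lor Char.code (Bytes.get packet 1)
      in
      (* A malformed packet may lie about its length; the check turns the
         lie into a rejected packet rather than corrupted logic. *)
      if claimed > total - 2 then None
      else Some (Bytes.sub packet 2 claimed)

  let () =
    match parse_payload (Bytes.of_string "\000\003abc") with
    | Some payload -> print_endline (Bytes.to_string payload)
    | None -> print_endline "rejected"

Even in an unsafe language the same checks can be written by hand, but a safe language makes the failure to write them a rejected packet or a raised exception rather than corrupted memory.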