How do capability ideas support security naturally? I think that most security discussions and most security reasoning are about preventing bad actions. Sometimes accentuating the positive provides another vantage point from which to gather insights. I attempt here to support the claim that the security of an application and its data comes about naturally when the application is designed and implemented for a capability environment. Furthermore, the required design patterns are not alien, but familiar to anyone who has used the scope rules of conventional programming languages to divide the responsibilities and authorities of the application components, and thus to achieve modularity. What is new is the application of these familiar design patterns at higher levels: in the integration of the parts of the application with one another, and of the application with other system components.
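A minimal sketch of that familiar discipline, in Python (the names here are illustrative, not drawn from any particular system): the lexical scope of make_counter confines the variable, and callers receive only the ability the closure chooses to hand out.

```python
def make_counter():
    count = 0                 # visible only within this scope
    def increment():
        nonlocal count
        count += 1
        return count
    return increment          # callers receive the ability to count,
                              # not the variable itself

tick = make_counter()
print(tick(), tick())         # 1 2 -- no other code can reach or reset `count`
```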
A running program that is divided into subroutines, or language-supported objects, performs calls or method invocations and thus begins to invoke code with different authority. Arguments passed at this time represent a bit of authority transferred from the caller to the callee. When such calls are within the same address space, the mechanisms are familiar and efficient. When they cross address spaces, any of a large number of mechanisms may be used, few of which conventionally adhere to the capability discipline inherent in the call and invocation rules. Many of our security problems arise at this point.
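Within one address space the discipline is nearly automatic. In the Python sketch below (Logger and process_orders are hypothetical names invented for illustration), the only authority process_orders has over the log file is the object it was handed as an argument; it names no files and holds no other handles. Python's encapsulation is merely advisory, unlike a true capability system, but the pattern is the same.

```python
class Logger:
    """An object that holds write authority over exactly one file."""
    def __init__(self, path):
        self._file = open(path, "a")

    def append(self, line):
        self._file.write(line + "\n")
        self._file.flush()

def process_orders(orders, log):
    # This routine's authority over the log is exactly the `log`
    # argument passed at the call; nothing more was transferred.
    for order in orders:
        log.append("processed " + str(order))

# The caller decides which authority crosses the call boundary.
process_orders(["a-1", "a-2"], Logger("orders.log"))
```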
These “cross space calls” include Unix pipes, “Remote Procedure Calls”, and returns to a scripting language (Perl, Python, shell script) that starts another Unix process with preestablished authority over shared files. ASCII path names may be passed back and forth, and categorical permission must already have been granted under the coarse permission rules of Unix.
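The contrast can be seen even within Unix. The sketch below, assuming a POSIX system and a hypothetical file data.txt, passes an already-open file descriptor to a child process, so the child needs no permission on the path at all; the conventional alternative passes the ASCII name and relies on the child's preestablished, coarse-grained permission to open it.

```python
import os
import subprocess
import sys

# The parent exercises its own authority to open the file, then hands
# the child only the open descriptor: closer to passing a capability
# than to passing a name.  "data.txt" is a hypothetical file.
with open("data.txt", "rb") as f:
    fd = f.fileno()
    child = subprocess.run(
        [sys.executable, "-c",
         "import os, sys; print(os.read(int(sys.argv[1]), 100))",
         str(fd)],
        pass_fds=(fd,),          # keep this one descriptor open in the child
        capture_output=True,
    )
print(child.stdout.decode())
```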