The ideas that we have described for distributing parts of applications use crypto to (1) assure their own secrecy and (2) authenticate messages, thus providing their own defense against spoofing. It would seem that firewalls are unneeded to protect such applications except, perhaps, to protect the systems upon which the capability technology is built. If the foundation systems do not need firewalls, then neither do the applications. Of course we advocate the use of such systems.
There seems to be no need for firewalls in distributed capability design. The distributed modules of an application are invulnerable to bogus signals if we have done our crypto right. There remains the question of the platform that a particular application module runs on. If it runs on Unix, then there are the usual fears about the integrity of that Unix platform. Firewalls seem mainly designed to keep hostile signals from beyond the firewall from reaching gullible Unix programs that were not designed to reject commands that would breach abstractions or damage their community.
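As a minimal sketch of what "doing our crypto right" looks like at the message level, the following Python fragment authenticates each incoming command with an HMAC tag and silently discards anything spoofed or altered. This is an illustrative assumption, not the protocol described in the text: the shared key, the message layout, and the seal/unseal names are all hypothetical.

    import hmac, hashlib, os

    # Hypothetical shared secret, established out of band when the
    # capability is introduced (an assumption for this sketch).
    SECRET_KEY = os.urandom(32)

    def seal(command, key=SECRET_KEY):
        # Append a SHA-256 HMAC tag so the receiver can detect
        # spoofed or modified messages.
        tag = hmac.new(key, command, hashlib.sha256).digest()
        return command + tag

    def unseal(message, key=SECRET_KEY):
        # Verify the tag in constant time; reject bogus signals.
        command, tag = message[:-32], message[-32:]
        expected = hmac.new(key, command, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return None          # spoofed or corrupted: ignore it
        return command

    # A module with no firewall in front of it can still safely
    # ignore hostile traffic:
    assert unseal(seal(b"withdraw 10")) == b"withdraw 10"
    assert unseal(b"withdraw 10" + b"\x00" * 32) is None

The point of the sketch is only that the module itself, rather than a perimeter device, decides which signals to obey.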
Macintosh or Windows platforms may well be less vulnerable, since they do not have as many of the traditional daemons designed to obey external commands.
The resulting hardware deployment might thus consist of firewalls to protect the legacy systems that require them, and capability spaces connected transparently thru firewalls or, indeed, directly to the Internet.
If capability systems are built directly on hardware or on secure operating systems, then firewalls are not required for the security of the applications running on them.
I do not know general principles for interfaces between capability applications and legacy systems, for I do not know the general security principles of the legacy systems.