Explaining Abstractions

When we have a black box sitting on a table and we honestly try to describe how it works to someone watching, we will probably open the covers and point to the gears inside, which are normally abstracted or hidden, perhaps to fool, or perhaps merely to avoid distracting the user from the intended use of the box. During this description we will often open and close the covers, and most of those watching will know why. We are actually describing the abstract box and the concrete box at once, and we expect the listener to understand why we are opening and closing the covers.

As we describe capability systems and other complex digital artifacts, we provide no visible clue as to whether we have the covers open or closed. I think I have traced some of the difficulty in such descriptions to the listener not knowing, in particular instances, whether we intend the covers to be metaphorically open or closed at that point in the description.

Experts talk among themselves without difficulty, making reliable but subconscious distinctions between abstraction levels. Worse, in the case of capabilities there are often several abstraction layers in play at once in a single description. I have in mind descriptions of communicating capabilities over wires, where even the experts talk past each other.
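
A small sketch may help locate the two layers that get conflated. The code below is hypothetical Python, not any real capability protocol; the names FileCap and WireExport are invented for illustration. With the covers closed, a capability is just a reference you invoke. With the covers open, crossing a wire means the reference has been replaced by an unguessable token in a per-connection table, and the listener must guess which of those two things "the capability" means at each moment.

    # Hypothetical sketch: the same capability at two abstraction levels.
    import secrets

    # Covers closed: a capability is simply an object reference you may invoke.
    class FileCap:
        def __init__(self, contents: str):
            self._contents = contents
        def read(self) -> str:
            return self._contents

    # Covers open: to cross a wire, the reference becomes bits -- here an
    # unguessable token recorded in a table on the exporting side.
    class WireExport:
        def __init__(self):
            self._table: dict[str, object] = {}
        def export(self, cap: object) -> str:
            token = secrets.token_hex(16)   # the on-the-wire face of the capability
            self._table[token] = cap
            return token
        def lookup(self, token: str) -> object:
            return self._table[token]       # back to an invocable reference

    exporter = WireExport()
    token = exporter.export(FileCap("hello"))   # concrete layer: a string on the wire
    print(exporter.lookup(token).read())        # abstract layer: just invoke it
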

It is said that once you understand category theory you become unable to describe it to others. I think this is a real phenomenon related to this abstraction dilemma.

The infamous Algol 68 manual used several different fonts to make such distinctions. Its authors sort of succeeded, but the result was a book that took a year to comprehend. Perhaps natural language precludes a fix for these problems.