Whitehead and Russell popularized relations in their massive “Principia Mathematica”. Relation theory is essentially the same theory as today’s graph theory. The relationists’ notation is somewhat more developed, whereas computer scientists have introduced concepts for representing graphs on computers. Here is the map that I see between the two disciplines: where the graph theorist would display a number of vertices together with some arrows, each emanating from one vertex and terminating in another, the relationist would say “consider the relation R”:
∀x∀y(xRy → yRx) is how the relationist defines a symmetric relation.
∀x∀y∀z(xRy ⋀ yRz → xRz) defines a transitive relation.
∀x(xRx) defines a reflexive relation.
A partial order is a reflexive, transitive, antisymmetric relation, where antisymmetry means ∀x∀y(xRy ⋀ yRx → x=y).
A total (or simple) order is a partial order where ∀x∀y(xRy ⋁ yRx).
An equivalence relation is a reflexive, symmetric, transitive relation.
A function is a relation where ∀x∀y∀z(xRy ⋀ xRz → y=z).
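These definitions translate directly into tests on a finite relation. Here is a minimal sketch (my own illustration, not from the text), representing a relation as a Python set of ordered pairs:

```python
# A finite relation is a set of ordered pairs (x, y) meaning xRy.

def is_symmetric(r):
    # ∀x∀y(xRy → yRx)
    return all((y, x) in r for (x, y) in r)

def is_transitive(r):
    # ∀x∀y∀z(xRy ⋀ yRz → xRz)
    return all((x, z) in r for (x, y) in r for (y2, z) in r if y == y2)

def is_reflexive(r, domain):
    # ∀x(xRx), relative to an explicitly given domain
    return all((x, x) in r for x in domain)

def is_function(r):
    # ∀x∀y∀z(xRy ⋀ xRz → y=z): no x related to two distinct y's
    firsts = [x for (x, _) in r]
    return len(firsts) == len(set(firsts))

sibling = {(1, 2), (2, 1), (1, 1), (2, 2)}
print(is_symmetric(sibling))   # True
print(is_transitive(sibling))  # True
print(is_function(sibling))    # False
```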
Both disciplines speak of transitive closures of a given graph or relation. The transitive closure of a relation R is the smallest transitive relation that contains R. (R is contained in S iff ∀x∀y(xRy → xSy).)
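For a finite relation the transitive closure can be computed by repeatedly composing the relation with itself until no new pairs appear. A sketch (my own, under the set-of-pairs representation used above):

```python
def transitive_closure(r):
    """Smallest transitive relation containing r."""
    closure = set(r)
    while True:
        # Compose: (x, y) and (y, z) in the closure imply (x, z).
        new = {(x, z) for (x, y) in closure for (y2, z) in closure if y == y2}
        if new <= closure:
            return closure
        closure |= new

chain = {(1, 2), (2, 3), (3, 4)}
print(sorted(transitive_closure(chain)))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```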
I have not run across graph-theory notation that affords such concise definitions as these.
When it comes to computers, we leave classical relation theory behind and limit ourselves to finite graphs. A central problem is to define data structures that represent a transitive relation without using storage proportional to the number of pairs (x, y) such that xRy. For a finite transitive relation R there is a minimal relation B such that R is the transitive closure of B. Storing B is often feasible when storing R is not. It remains to organize the storage of B so that it is quick to determine whether xRy for given x and y.
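The two halves of that problem can be sketched in code (names are my own; this assumes R is acyclic, where the minimal B, often called the transitive reduction, is unique): first compute B by dropping every pair of R implied by two others, then answer xRy queries against B by search rather than lookup.

```python
def transitive_reduction(r):
    """Minimal b with transitive_closure(b) == r, assuming r is
    transitive and acyclic: drop (x, z) whenever some intermediate
    y gives (x, y) and (y, z) in r."""
    nodes = {b for (_, b) in r}
    return {(x, z) for (x, z) in r
            if not any((x, y) in r and (y, z) in r and y != x and y != z
                       for y in nodes)}

def reaches(b, x, y):
    """Decide xRy from the reduced relation b by depth-first search."""
    stack, seen = [x], set()
    while stack:
        v = stack.pop()
        if v == y:
            return True
        if v not in seen:
            seen.add(v)
            stack.extend(w for (u, w) in b if u == v)
    return False

r = {(1, 2), (2, 3), (1, 3), (3, 4), (2, 4), (1, 4)}  # transitive
b = transitive_reduction(r)
print(sorted(b))          # [(1, 2), (2, 3), (3, 4)]
print(reaches(b, 1, 4))   # True
print(reaches(b, 4, 1))   # False
```

Here B has 3 pairs where R has 6; for a chain of n elements the gap is n−1 pairs against roughly n²/2, which is the storage saving the paragraph above describes, paid for by the search in the query.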
In graph theory there is the ‘strongly connected component’: a subset of the vertices between any two of which one can move by following the arrows. If we make a new graph by