
Universal Data Analytics as Semantic Spacetime (Part 7)

Part 7. Coincidence and the Four Horsemen of Semantics

After taming the tools in the foregoing episodes, we now have a robust set of techniques for comprehending and organizing data, i.e. for turning input streams into analytical structures. These dynamical “knowledge representations” describe information flows, or spacetime processes, on all levels. Let’s now dive deeper, with some examples, and return to some subtle questions of modelling. To tell a story about causation — whether to uncover the internal workings of nature, or to invent new services for ourselves — we need to map out the scenarios in space and in time. It doesn’t matter whether the space is physical, simulated, or completely imaginary: the principles for representing the characteristics of locations and times along a system path are central to making this map.

In this installment, we examine the four previously mentioned core relationships of Semantic Spacetime so that we can identify them quickly for use in model scenarios. We’ll see how to recognize them in the context of key-values, documents, and graphs. We start, of course, by returning to the central topic of semantic spacetime: processes and how they interact.

Eventful coincidence (x-x’)!

Coincidence is what happens when several things or processes meet at the same location: their timelines or trajectories join or cross — they are “incidentally” together. A related term, co-activation, is used in biology for coincident proteins that activate processes in a kind of “logical AND” handshake, with proposal AND confirmation both required to switch on a process. It’s the same idea used in forensic investigations: if A and B are observed together, then there is some kind of connection between them — perhaps to be discovered or elaborated upon later. The role of coincidence is not always easy to discern, but in spacetime it’s simply a matter of expressing how events are composed from their coincident parts.

Co-activation is an important “valve” mechanism for regulating processes, in everything from control systems to immunology. In linguistics, coincident words convey compound semantics by forming phrases and sentences. This leads to a hierarchy of meaning from the bottom up — a kind of semantic chemistry. The particular combination of components present in the same place at the same time is the basis for encoding and elaborating specific meaning by composition of generic parts. But we need to understand what “the same place” means for each process: when are processes independent, and when should they be seen as parts of a larger whole? The answer depends essentially on the characteristic scales of the processes. This is a subtle and difficult topic that I’ve addressed in my book Smart Spacetime. Combinatorics is a huge topic that spans subjects from chemistry to category theory.
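In code, the co-activation valve is just a logical AND over independently arriving signals. Here is a minimal sketch in Go (the names are mine, invented for illustration, not part of the series code):

```go
package main

import "fmt"

// Signal represents one of the coincident inputs needed to
// activate a process, e.g. a proposal or a confirmation.
type Signal struct {
	Name    string
	Present bool
}

// CoActivate fires only when every required signal is present
// at the same place and time -- a logical AND handshake.
// With no signals at all, nothing can activate.
func CoActivate(signals ...Signal) bool {
	for _, s := range signals {
		if !s.Present {
			return false
		}
	}
	return len(signals) > 0
}

func main() {
	proposal := Signal{"proposal", true}
	confirmed := Signal{"confirmation", true}
	denied := Signal{"confirmation", false}

	fmt.Println(CoActivate(proposal, confirmed)) // both coincident: activates
	fmt.Println(CoActivate(proposal, denied))    // handshake incomplete: blocked
}
```

The point of the AND is that neither signal alone carries the meaning; the activation semantics exist only in the coincidence of the two.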

As a taster of things to come, take a look at figures 1 and 2. These are process graphs translated directly into semantic spacetime by the Linux version of the traceroute program. It might come as a surprise to some that an Internet traversal isn’t just a linear chain — of course each individual observation does follow a unique path, but over coarse time, the map of possible paths splits into a multi-path integral view — quite analogous to a quantum multi-slit experiment. Along the path, the intensity of traffic at each point may be a sum over several coincident paths.



Figure 1: An Internet traceroute graphed end-to-end, illustrating the complexity of process paths due to parallelism. Pink nodes are indistinguishable, like a QM multi-slit experiment, and each arrow defines a proper time clock — so the map forms a kind of wavefunction for Internet service.




The pink nodes in the figure represent transverse coarse grains — indistinguishable multi-slit alternative path directions (spacelike hypersections). The nodes marked “wormhole” are unidentifiable locations (a kind of dark matter!) where we know the path went, but was unobservable. These are longitudinal coarse grains.





Figure 2: Adding a second destination and merging the graphs to build up a map of the whole network adds a fork in the path, but some common path — as seen from the same starting node.






The code to generate these figures is provided in traceroute.go. Video on SST: https://www.arangodb.com/dev-days-2021/day-4/

The Semantic Spacetime answer to understanding coincidence is pragmatic: if two agents are bonded by some observed promises to one another, then they form an effective coarse grain or superagent, which can promise new features on a larger composite scale. Each grain is an event. Events have different semantics from spatial locations: they are spacetime process steps, directly tied to other nodes that represent invariant characteristics.

In Euclidean space, which lacks clear boundaries, scale is a subtle issue, but in graphs it’s straightforward. Atoms come together to form molecules. If nodes are joined by the right kind of edge or link, then they act as a supernode. Supernodes are interaction regions (like ballistic scattering regions in physics), and their composition represents a larger scale. Figure 3 shows two independent processes meeting at a location, which forms an event, and then continuing on independently.
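The graph version of this scaling is easy to compute. As a sketch (again with invented names, under the assumption that only certain edge kinds count as bonds), a simple union-find collapses nodes joined by bonding links into supernodes, while mere proximity leaves them separate:

```go
package main

import "fmt"

// Edge joins two nodes. Only some edge kinds bond nodes into a
// coarse grain; others are mere coincidence without attachment.
type Edge struct {
	From, To string
	Bonding  bool
}

// Supernodes maps each node to the representative of its coarse
// grain, grouping nodes transitively connected by bonding edges
// with a small union-find.
func Supernodes(nodes []string, edges []Edge) map[string]string {
	parent := map[string]string{}
	for _, n := range nodes {
		parent[n] = n
	}
	var find func(string) string
	find = func(n string) string {
		if parent[n] != n {
			parent[n] = find(parent[n]) // path compression
		}
		return parent[n]
	}
	for _, e := range edges {
		if e.Bonding {
			parent[find(e.From)] = find(e.To)
		}
	}
	grain := map[string]string{}
	for _, n := range nodes {
		grain[n] = find(n)
	}
	return grain
}

func main() {
	nodes := []string{"H1", "H2", "O", "Ar"}
	edges := []Edge{
		{"H1", "O", true},  // bonded: part of the molecule
		{"H2", "O", true},  // bonded: part of the molecule
		{"O", "Ar", false}, // passing coincidence: no bond
	}
	g := Supernodes(nodes, edges)
	fmt.Println(g["H1"] == g["H2"]) // same supernode: true
	fmt.Println(g["H1"] == g["Ar"]) // separate grains: false
}
```

The resulting grain identifiers can themselves become nodes at the next scale up, which is exactly the "atoms to molecules" composition in the text.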



Figure 3: Temporary coincidence is like scattering of processes, or ships passing in the night. As a causal diagram (Feynman diagram) it looks like this.



In the previous post, I showed how to register such events easily by using coincidence functions. These form event hubs that bind invariants into process steps, yielding a timeline of context-specific events (e.g. PersonLocation(person,location) — see the passport examples from part 6). Some processes come together and remain together, forming a new invariant (see figure 4).
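The essence of such a coincidence function is that it coins a unique hub key from the coincident invariants plus a sequence marker, so that repeated meetings of the same invariants remain distinct events on the timeline. A hedged sketch in Go (the key format and names here are illustrative stand-ins, not the actual part 6 code, which writes the hubs into ArangoDB):

```go
package main

import "fmt"

// Event is a hub node binding invariants (a person and a location)
// into one process step on a timeline.
type Event struct {
	Key string
	Seq int
}

// PersonLocation mimics a coincidence function: it builds an
// event-hub key from the coincident invariants and a sequence
// counter, so the same pair can meet more than once and still
// yield distinct events.
func PersonLocation(person, location string, seq int) Event {
	return Event{
		Key: fmt.Sprintf("event_%s_%s_%d", person, location, seq),
		Seq: seq,
	}
}

func main() {
	// A fragment of a passport-style timeline.
	timeline := []Event{
		PersonLocation("Alice", "Oslo", 1),
		PersonLocation("Alice", "London", 2),
		PersonLocation("Alice", "Oslo", 3), // revisit: a new event
	}
	for _, e := range timeline {
		fmt.Println(e.Key)
	}
}
```

In the database version, each such hub would then be linked by edges to the invariant nodes for the person and the location, which is what ties process steps to invariant characteristics.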


Figure 4: Sometimes two separate processes join and stay joined, forming a new “molecule” with different semantics. This is an interaction that transforms spacetime properties.




The scale of events and observers plays a role here, because some phenomena can only be seen by an agent of a certain size — too big and it won’t be able to resolve small details, too small and it won’t see large features. A hydrogen atom can always interact with much larger DNA, but it can’t unlock DNA’s functional code, because the two parts promise functionality on different scales. Unlocking the code would require something on the same scale as the DNA code, say a protein. This scaling principle obviously applies to processes, but it’s also reflected in the passive data derived from them — when we’re “imaging” processes.

From a source of data, every exterior downstream process, sampling the data stream, is entitled to its own opinion about what is separate and what is whole, but there are usually some scale markers created by processes themselves. When the configuration of parts changes (as in figure 2), this becomes clear. This, indeed, is what leads to the great diversity of genes, particles, data, and other invariant characteristics. It’s the root of the great spectacle we see all around us.

Property or coincidence? Document or graph?

