
Universal Data Analytics as Semantic Spacetime (Part 8)


Part 8: The Cone of Semantic Future: Causal Prediction, Entanglement, and Mixed States in Graph Spacetime

Having developed some of the basic code patterns, and introduced the four spacetime relations, let’s explain how these four semantic interpretations can be used in practice — to understand and predict dynamical processes across multiple scales. I’ll use a couple of more realistic examples that show how real-world processes form graphs: i) some Natural Language Processing based on fragmentation of a stream of input, analogous to bioinformatics, and ii) multi-path flow processes with alternative routes, such as we find in routing, supply chains, and quantum experiments. We’ll see how the coding of a data representation in applications can now be extremely simple, and how the four spacetime relations allow us to see and explore beyond constrained typological relationships, to reveal emergent discoveries about the data.

Graphs are active not passive structures

In part 7, I showed how to capture and generate spacetime process graphs, merging parallel timelines where necessary. At the data capture stage, we may be entirely unaware of the existence of a graph representation — only later, during analysis, would the graph be revealed as a network. Identifying a working representation early will be an advantage, but we mustn’t over-constrain a representation and kill off the benefits of a network in the process. Graphs naturally want to remain active processes, not become passive or dead archival data structures. In a sense, the construction and the interaction by search are continuations of the same larger cognitive process. The semantic spacetime co-activation method generates this for us without prejudice.

In this post, we’ll look at how to create and search active graphs and explain the meaning of their large scale structures as they grow. It turns out that searching graphs is the more interesting problem, because it exposes an apparent conflict between the desire for a single-threaded storytelling about data, and the multiplicity of causal chains in the larger spacetime that might ultimately contribute to an outcome. This is the “pipeline” problem. Unlike typological approaches (e.g. OWL, RDF, etc) that aim for uniqueness and rigidity of reasoning, a spacetime approach works with general superpositions of concerns that emerge unpredictably from complexity at scale.

There are two main themes to address in this post:

  • Causal path relationships as processes (timelike trajectories), which may include loops. These are naturally represented by graphs.

  • Locally expressed attributes and similarities (snapshots in space), which describe intrinsic nature (scalar promises). These may be represented either as graphs or as document internals.

Superposition, multi-slit experiments, path analysis — these sound exotic but are surprisingly common phenomena. For some, they have come to be associated exclusively with subjects like Quantum Mechanics; for others, with the Internet. In the last post I showed how such structurally complex graphs can emerge from “simple-minded” traceroute measurements of the Internet. One might have assumed such graphs to be purely linear, yet parallel processes — juxtaposed by projecting along a common observer timeline — reveal a kind of “many worlds” view of a network packet trajectory. The existence of parallel process clocks, integrated over a complete map, will always lead to a non-local picture!

Non-local Internet: The major Internet routing algorithms, e.g. BGP, which bind together locally autonomous regions of the Internet, promise each other information about non-local “link state”, by a slow process of diffusion (see part 7). The alternatives are then “integrated” to form a kind of probabilistic map, with weights governed by underlying dynamics. The map behaves much like a wavefunction in quantum mechanics in predicting general features but not explicit deterministic outcomes. The Principle of Separation of Scales suggests that this formation of a guide map (as an independent process over longer timescales, and in advance of the trajectories it predicts) is naturally separable from the actual probabilistic enactment of each trajectory. This is what happens in transportation, in physical, chemical, and biological gradient fields, and it seems likely that it’s at the root of the Schrödinger wavefunction non-locality in quantum mechanics too.

The four characteristics (again)

It sounds like something out of Sun Tzu’s Art of War, but then we could indeed claim that data representation and mapping are about strategic planning. The implication of the four spacetime semantics is simple, and is depicted iconically in figures 1 and 3 below.




Figure 1: Transitive relations can be used for extended reasoning. From a focal point in a search, they reach forwards or backwards in process time, forming past and future cones. Many roads lead to Rome, and Rome has roads to many places — making a conical structure, like an hourglass. The non-transitive relations have only a point-to-point effect, so they look like clouds surrounding an anchor point.


Of these relations, we recall that:

FOLLOWS relations need prerequisites — i.e. from and to nodes, implying causation — so paths can be traced in a consistent direction. If A follows B and B follows C then we have a story from A to C, though not necessarily with the same semantics as the prior links. The class of relations that falls under FOLLOWS tells us about causal order, e.g. is caused by, depends on, follows from, may be derived from, etc.

When we trace processes by traversing FOLLOWS links, we are not allowed to suddenly change to another type of link, as that is meaningless. The rules for paths are very simple in semantic spacetime — far simpler than in logical approaches like RDF or OWL.
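To make the rule concrete, here is a minimal standalone Go sketch (toy data, not the SST library API): FOLLOWS edges live in a simple adjacency map, and a story is produced by chaining them transitively, never switching to another link type along the way.

package main

import "fmt"

// FOLLOWS links: the key node follows the value nodes (its causal prerequisites).
// Toy data for illustration only.
var follows = map[string][]string{
	"A": {"B"}, // A follows B
	"B": {"C"}, // B follows C
}

// story chains FOLLOWS links transitively from a starting node.
func story(node string) []string {
	path := []string{node}
	for {
		next, ok := follows[node]
		if !ok || len(next) == 0 {
			return path
		}
		node = next[0] // take the first prerequisite; alternatives would fork the story
		path = append(path, node)
	}
}

func main() {
	fmt.Println(story("A")) // [A B C]: a story from A to C
}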

FOLLOWS relations may also entangle nodes (making nodes co-dependent), e.g. linking in opposite directions with respect to the proper time direction, to form deadlocks or equilibria (see figure 2). In a proper time basis, these look like diagrams with acausal behaviour (loop diagrams). They oppose or describe countervailing processes, which appear to move backwards in proper time; this is because they form bound interior states of a new strongly connected composite agent, and we now have to view proper time on the composite scale — i.e. as the time which is exterior to the bonded superagent.




Figure 2: Countervailing causally ordered processes form entanglements of agents, in which time goes in no particular direction on a large scale. The arrows are FOLLOWS links, i.e. directed, because they represent conditional transitions. For example, a graph of trade with promises “I’ll pay if I get the goods” and “I’ll send the goods if you pay”. Some symmetry breaking is required to progress in exterior time. The agents are in “deadlock”, or dynamical equilibrium, until the causal symmetry is broken.
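To see the deadlock concretely, here is a small standalone Go sketch (again toy data, not the SST library) that checks whether two conditional FOLLOWS promises, like the trade example in the caption, wait on each other and so form a bound composite with no interior arrow of time.

package main

import "fmt"

// Conditional promises: the key condition waits for the value condition first.
// Hypothetical trade example, for illustration only.
var waitsFor = map[string]string{
	"payment":  "delivery", // "I'll pay if I get the goods"
	"delivery": "payment",  // "I'll send the goods if you pay"
}

// entangled reports whether two conditions each wait on the other,
// i.e. form a FOLLOWS loop with no progress until the symmetry is broken.
func entangled(a, b string) bool {
	return waitsFor[a] == b && waitsFor[b] == a
}

func main() {
	if entangled("payment", "delivery") {
		fmt.Println("deadlock: a bound composite state in exterior time")
	}
}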


Sometimes physicists claim that entanglement (co-dependence) and mixed states are purely quantum mechanical features. This is not correct. They are characteristics of multi-scale maps, and we need to appreciate their significance in order to reason about graphs at scale. Several parallel interpretations could be in play in a graph transition. These interpretations have to be compatible, since they are spacetime compatible — but can they all be reconciled semantically in context? These mixed semantic states are illustrated below in the multi-path example.

CONTAINS relations explain what things are inside or outside other things, e.g. is a kind of, is part of, etc. This is important in coarse-graining or scaled representations of data. It has a transitive directionality too, but over a limited range — a largest scale (limited by finite speed) and smallest scale (a ground state of the invariant chemistry).

CONTAINS can be combined with FOLLOWS to tell multilinear stories with semantic generalization, e.g. iron CONTAINS(belongs to) metals, and metals FOLLOWS(are required for/implies) strong structures CONTAINS(which contain/include) bridges. Now we have a story connecting iron with bridges. There could be many more stories, and they will have the same structure, i.e. Iron (CONTAINS.Bwd) Node (FOLLOWS.Fwd) Node (CONTAINS) Node. The CONTAINS relation can be used to jump up and down the levels of description. Iron is a kind of metal; what other metals could tell the same story?
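As a standalone sketch (made-up data, not the SST library), the story pattern Iron (CONTAINS.Bwd) Node (FOLLOWS.Fwd) Node (CONTAINS) Node can be generated by composing one hop of each relation type in turn:

package main

import "fmt"

// Toy graph, one map per spacetime relation, for illustration only.
var (
	containedIn = map[string][]string{"iron": {"metals"}}                // CONTAINS.Bwd: member to class
	leadsTo     = map[string][]string{"metals": {"strong structures"}}   // FOLLOWS.Fwd: is required for
	contains    = map[string][]string{"strong structures": {"bridges"}}  // CONTAINS.Fwd: class to member
)

// stories composes CONTAINS.Bwd, FOLLOWS.Fwd and CONTAINS.Fwd, one hop each,
// generating every narrative with the same structure as iron-to-bridges.
func stories(start string) (result [][]string) {
	for _, class := range containedIn[start] {
		for _, effect := range leadsTo[class] {
			for _, instance := range contains[effect] {
				result = append(result, []string{start, class, effect, instance})
			}
		}
	}
	return
}

func main() {
	fmt.Println(stories("iron")) // [[iron metals strong structures bridges]]
}

Swapping other metals into the toy containedIn map would tell the same story for them, which is the generalization step described above.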

EXPRESSES: At each location along a path formed by the above, we can stop and look parenthetically at a node to see what is EXPRESSed by it, e.g. Iron is a grey metal which rusts, melts at around 1500 degrees Celsius, and is used to form alloys like steel. Expressing interior detail is the equivalent of opening a document in ArangoDB and looking inside, but we can also imagine referring to properties that are not inside a document, but are represented as exterior graph nodes. That’s a modelling choice. Not everything should be graphical in representation. We have to decide what scale we model at: what we consider to be interior and what we consider to be exterior to a location.
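Whether a property lives inside the node document or as an exterior graph node is exactly this modelling choice. A minimal Go sketch of the document-interior option (the field names are hypothetical, not the SST schema) might look like this:

package main

import "fmt"

// A node whose EXPRESSed attributes are kept inside the document,
// rather than as exterior graph neighbours. Illustrative fields only.
type Node struct {
	Key       string
	Expresses map[string]string // scalar promises, visible on inspection
}

func main() {
	iron := Node{
		Key: "iron",
		Expresses: map[string]string{
			"colour":        "grey",
			"melting_point": "around 1500 C",
			"used_for":      "alloys such as steel",
		},
	}
	fmt.Println(iron.Expresses["melting_point"]) // look inside the document at this location
}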

In the search above, we could add an inference based on exploration: Iron is used to make steel. Steel (belongs to) stable alloys, and stable alloys (are required for) strong structures; notice that strong structures (are similar to) skeletons (which include) hip joints. Now we find an application for Iron as a possible hip replacement.

NEAR: Finally, a model sometimes has a notion of things that are similar to other things. NEARness can be physical or logical, e.g. perspex NEAR(is like) glass (in the context of kitchenware or display technology). Iron NEAR(may be confused with) steel. This connection between iron and steel doesn’t negate or override the causal relation that steel derives from iron. They are both active as a “mixed interpretational state” to be resolved by context.

The future cone

No single relationship (say CONTAINment) is enough by itself to model a process, nor does it help to treat every relationship as different. This is why logical data types fail to capture properties fully.

When looking for predictive outcomes, we are usually trying to trace paths forwards in some generalized event timeline. From a given event, the possible subsequent events will typically spread out by some process into a “cone”-like subgraph structure. Searching for predictive outcomes means generating this cone (watching out for possibly acausal loops).
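Here is a standalone sketch of that cone-generating search in Go, assuming a plain FOLLOWS adjacency map rather than the ArangoDB back end: a breadth-first expansion from the focal event, with a visited set to absorb any acausal loops.

package main

import "fmt"

// Hypothetical FOLLOWS.Fwd adjacency: event to possible subsequent events.
var next = map[string][]string{
	"start": {"a", "b"},
	"a":     {"c"},
	"b":     {"c", "start"}, // a loop back to the start, absorbed by the visited set
	"c":     {},
}

// futureCone returns every event reachable forwards from the focal event,
// layer by layer: the cone of semantic future.
func futureCone(focus string) []string {
	visited := map[string]bool{focus: true}
	cone := []string{}
	frontier := []string{focus}
	for len(frontier) > 0 {
		var layer []string
		for _, node := range frontier {
			for _, succ := range next[node] {
				if !visited[succ] {
					visited[succ] = true
					cone = append(cone, succ)
					layer = append(layer, succ)
				}
			}
		}
		frontier = layer
	}
	return cone
}

func main() {
	fmt.Println(futureCone("start")) // [a b c]
}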

Figure 3 illustrates the geometry of these processes in semantic spacetime. You might recognize pictures like this from light-cone sketches in Einstein’s relativity theory. The similarity is obvious and uncomplicated. Einsteinian spacetime is also a kind of semantic spacetime, and past and future are about causal order.



Figure 3: When we start a search from some node in a graph, the forward or backward cones for FOLLOWS follow the progression of the possible stories, while the EXPRESS links describe the node in situ. CONTAINS provides an orthogonal cone of generalization or restriction, while NEAR tells us about projected similarity.


Example 1: Natural Language narrative

Let’s use these observations about the four types to see how we would go about building a narrative from a data stream, by scanning documents (Natural Language Processing) and expressing it as a multi-scale graph. We can then generate the future cone for a search term to see what happened around the relevant finds. This is very like the co-activation picture used in the passport example. A narrative is obviously a chain of words, but if we pay attention to its inner structure, it’s actually a more complicated graph. Not all stories can be told in a simple linear fashion. No one knows this better than those who rely on delivery systems and supply chains, but we can also apply this to the kind of narrative we read in books (where we can more easily get data). See figure 4. We can use the boilerplate code in the SST library to accomplish this easily.

The idea behind text analysis is to do what we do to oil or DNA: fractionate it, i.e. smash it into small pieces and see how the pieces recombine to provide common and repeated signatures. It can be argued that this is how concepts are formed in our minds as proto-cognitive representations over time — basically the same process by which complex organisms evolve out of patterns of protein, over very long times! To apply this, we need to identify some natural scales of the process of natural language — of writing. The bulk of the logic is about this fractionation process. We use the semantic spacetime functions to register events and their fractions.

Note that this is different from machine learning approaches like Word2Vec, as used by Amazon and others. The technique here is a simple form of unsupervised learning that can be run on any laptop in real time.

Figure 4: A narrative is built from the four spacetime association types.



If we scan documents, we have choices about how to classify what we see. Data “events” could be considered single characters, words, sentences, chapters, etc. I’ll take sentences as the smallest semantic unit that specifically expresses intent. Sentences CONTAIN words and word fragments of length N, called N-grams (some use the term for strings of N characters).


Fractionation

We can split up sentences into N-grams, like splitting DNA into fragments. Some of these will act as “genes” that can be reused in different contexts; others are just junk padding. When an N-gram appears in more than one sentence, it indicates a semantic relationship between them, not just a probabilistic occurrence, so the sentences form a cluster around the fragment, which becomes an EXPRESS or CONTAINS hub (see figure 4). We can begin parsing text from books and articles in this way, measuring the importance of words and N-grams as key-value histograms, and building a graph data structure of the important parts of the narrative (ranked by those key-value histograms).
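As a standalone sketch of the splitting step (not taken from scan.go), a word N-gram fractionator in Go might look like this:

package main

import (
	"fmt"
	"strings"
)

// ngrams returns the word N-grams of a sentence: every run of n consecutive words.
func ngrams(sentence string, n int) []string {
	words := strings.Fields(strings.ToLower(sentence))
	var result []string
	for i := 0; i+n <= len(words); i++ {
		result = append(result, strings.Join(words[i:i+n], " "))
	}
	return result
}

func main() {
	fmt.Println(ngrams("Many roads lead to Rome", 2))
	// prints: [many roads roads lead lead to to rome]
}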

Using the SST functions, we register a single-threaded timeline for each sufficiently important sentence, then find its fragments and add these to the event container, by the principle of co-activation.

// For each sufficiently important sentence, register the next event on the timeline
event := S.NextDataEvent(&G, key, sentence)

// Then, for each fragment of the sentence, form a unique key, register the
// fragment, and link it to its containing event by co-activation
key := fmt.Sprintf("F:L%d,N%d,E%d", i, f, index)
frag := S.CreateFragment(G, key, fragment)
S.CreateLink(G, event, "CONTAINS", frag, 1.0)

Sample code for this process is provided in scan.go. Key-value stores are used, along the way, to build frequency histograms of different N-grams. Important sentences are ranked by the importance of the N-grams they CONTAIN. Importance is a “Goldilocks” property — not too frequent and not too rare is what makes a phrase “just right”. This is another occurrence of scale separation. We can combine these histograms with a graphical annotation of the text. Notice how little coding is needed to use the database: only the functions CreateFragment(), NextDataEvent(), and CreateLink(). We never even see that we are using ArangoDB (though our choice makes the back end library much simpler).
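To illustrate the Goldilocks filter, here is a standalone Go fragment (the thresholds are placeholders, not the values used in scan.go) that keeps only the N-grams whose counts sit in the informative middle range of a frequency histogram:

package main

import "fmt"

// important applies the Goldilocks rule to an N-gram frequency histogram:
// phrases that are neither too rare nor too frequent are the informative ones.
// The thresholds are arbitrary placeholders for illustration.
func important(histogram map[string]int, minCount, maxCount int) []string {
	var keep []string
	for ngram, count := range histogram {
		if count >= minCount && count <= maxCount {
			keep = append(keep, ngram)
		}
	}
	return keep
}

func main() {
	histogram := map[string]int{"of the": 412, "semantic spacetime": 18, "hip replacement": 1}
	fmt.Println(important(histogram, 2, 100)) // [semantic spacetime]
}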


Comprehending the graph structure computationally

