# Session 2 — Interpreter, Knowledge Base & Markov Chains
## What Was Built
The interpreter — the component that takes the AST from the parser and actually executes it. Plus Markov chain sequence learning, one of Terse's most distinctive AI-native features.
## The Interpreter
File: `src/interpreter/interpreter.py`
The interpreter walks the AST node by node and executes each statement. It maintains the program's state in several data structures:
| Structure | What it stores |
|---|---|
| `facts{}` | Node properties — `dog: {is: animal, has: fur}` |
| `relationships{}` | Edges between nodes — `dog: {chases: cat}` |
| `rules[]` | Inference rules — `when has fur then is mammal` |
| `sequences{}` | Markov chain transitions — `chases → {cat: 2}` |
| `functions{}` | Defined functions |
| `scope{}` | Current parameter bindings during function calls |
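The state containers in the table could be laid out roughly as follows. This is a hypothetical sketch, not the actual `interpreter.py` source; the class and attribute layout are assumptions.

```python
class Interpreter:
    """Sketch of the interpreter's state containers (assumed layout)."""

    def __init__(self):
        self.facts = {}          # node -> {property: value}, e.g. {"dog": {"is": "animal"}}
        self.relationships = {}  # node -> {verb: target}, e.g. {"dog": {"chases": "cat"}}
        self.rules = []          # (condition, conclusion) pairs from `when ... then ...`
        self.sequences = {}      # token -> {next_token: count}, the Markov transitions
        self.functions = {}      # name -> (params, body)
        self.scope = {}          # parameter bindings for the current function call

interp = Interpreter()
```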
### Executing Facts
Fact statements such as `dog is animal` and `dog has fur` are stored in the `facts` dictionary as node properties.
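A minimal sketch of how a fact statement could update the `facts` dictionary. The function name and the single-value-per-property layout are assumptions, matching the `dog: {is: animal, has: fur}` shape shown above.

```python
def execute_fact(facts, node, prop, value):
    """Record a node property, e.g. 'dog has fur' -> facts['dog']['has'] = 'fur'."""
    facts.setdefault(node, {})[prop] = value

facts = {}
execute_fact(facts, "dog", "is", "animal")
execute_fact(facts, "dog", "has", "fur")
# facts is now {"dog": {"is": "animal", "has": "fur"}}
```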
### Executing Inference Rules
When `infer dog` runs, the interpreter checks every rule against `dog`'s facts. `dog has fur` matches the condition `has fur`, so the interpreter automatically adds `dog is mammal` to the facts. No explicit code needed.
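The rule-matching loop might look something like this sketch, assuming rules are stored as (condition, conclusion) pairs; the exact representation in `interpreter.py` may differ.

```python
def infer(facts, rules, node):
    """Check every rule against the node's facts; add each matching conclusion."""
    node_facts = facts.setdefault(node, {})
    for (cond_prop, cond_value), (concl_prop, concl_value) in rules:
        if node_facts.get(cond_prop) == cond_value:
            node_facts[concl_prop] = concl_value

facts = {"dog": {"has": "fur"}}
rules = [(("has", "fur"), ("is", "mammal"))]  # when has fur then is mammal
infer(facts, rules, "dog")
# facts["dog"] now also contains {"is": "mammal"}
```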
Design Principle
Inference in Terse is a language construct, not a library call. The compiler understands inference semantics at parse time.
### Functions and Scope
When `classify dog` is called:

- The interpreter finds the `classify` function definition
- Creates a new scope: `{thing: "dog"}`
- Executes the body — `infer thing` becomes `infer dog`
- `return thing` resolves `thing` from scope → returns `"dog"`
- Scope is discarded after the call
Parameters are resolved at call time via `resolve_scope()`. Each function call gets its own isolated scope.
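The call sequence above could be sketched as follows. Representing a function body as a token list, and the `call_function` name, are assumptions for illustration; only the scope-binding behavior is taken from the text.

```python
def call_function(functions, name, args):
    """Bind arguments to parameters in a fresh scope, run the body, discard the scope."""
    params, body = functions[name]
    scope = dict(zip(params, args))          # e.g. {"thing": "dog"}
    # resolve_scope: substitute parameter names with their bound values
    return [scope.get(token, token) for token in body]

functions = {"classify": (["thing"], ["infer", "thing", "return", "thing"])}
result = call_function(functions, "classify", ["dog"])
# result: ["infer", "dog", "return", "dog"]
```

Because the scope is a fresh local `dict` per call, bindings from one call can never leak into another.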
## Markov Chain Sequence Learning
One of Terse's most distinctive features — the ability to learn probabilistic sequences from examples.
### How it works
The `learn` statement extracts consecutive pairs from the sequence and records how often each transition occurs.
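Pair extraction and transition counting can be sketched like this (the function name is an assumption; the count shape `chases → {cat: 2}` matches the table above).

```python
def learn(sequences, tokens):
    """Record each consecutive pair in the sequence as a transition count."""
    for cur, nxt in zip(tokens, tokens[1:]):
        counts = sequences.setdefault(cur, {})
        counts[nxt] = counts.get(nxt, 0) + 1

sequences = {}
learn(sequences, ["dog", "chases", "cat"])
learn(sequences, ["dog", "chases", "cat", "runs"])
# sequences["chases"] == {"cat": 2}
```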
`predict after chases` returns `cat` with confidence 1.0 (it always follows `chases`).

`generate from dog steps 3` follows the most likely path: `dog → chases → cat → runs`.
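Prediction and generation over the transition counts could work along these lines: confidence as count over total outgoing transitions, and generation as a greedy walk over the most likely next token. Both function names are assumptions.

```python
def predict_after(sequences, token):
    """Most likely next token and its confidence (count / total transitions)."""
    transitions = sequences.get(token, {})
    if not transitions:
        return None, 0.0
    best = max(transitions, key=transitions.get)
    return best, transitions[best] / sum(transitions.values())

def generate(sequences, start, steps):
    """Follow the most likely transition at each step, starting from `start`."""
    path = [start]
    for _ in range(steps):
        nxt, _conf = predict_after(sequences, path[-1])
        if nxt is None:
            break
        path.append(nxt)
    return path

sequences = {"dog": {"chases": 1}, "chases": {"cat": 2}, "cat": {"runs": 1}}
# predict_after(sequences, "chases") -> ("cat", 1.0)
# generate(sequences, "dog", 3)      -> ["dog", "chases", "cat", "runs"]
```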
Analogy
Autocomplete — but driven by meaning and learned relationships, not statistics over raw text.
## The Knowledge Base
After executing a Terse program, the interpreter holds a live knowledge graph in memory. Every `know` statement adds a node property. Every relationship statement adds an edge. Every `when`/`infer` pair can derive new facts automatically.
This knowledge base is the foundation that the NCI architecture is built on — the same graph-based knowledge representation, the same inference model, scaled up with semantic signatures and Hebbian learning.