Session 2 — Interpreter, Knowledge Base & Markov Chains

What Was Built

The interpreter — the component that takes the AST from the parser and actually executes it. Plus Markov chain sequence learning, one of Terse's most distinctive AI-native features.


The Interpreter

File: src/interpreter/interpreter.py

The interpreter walks the AST node by node and executes each statement. It maintains the program's state in several data structures:

Structure          What it stores
facts{}            Node properties — dog: {is: animal, has: fur}
relationships{}    Edges between nodes — dog: {chases: cat}
rules[]            Inference rules — when has fur then is mammal
sequences{}        Markov chain transitions — chases → {cat: 2}
functions{}        Defined functions
scope{}            Current parameter bindings during function calls
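In Python, these containers might be laid out roughly as follows. This is an illustrative sketch, not the actual contents of src/interpreter/interpreter.py; the class and attribute names are assumptions.

```python
class Interpreter:
    """Hypothetical sketch of the interpreter's state containers."""

    def __init__(self):
        self.facts = {}          # node -> {predicate: value}, e.g. {"dog": {"is": "animal", "has": "fur"}}
        self.relationships = {}  # node -> {verb: target},     e.g. {"dog": {"chases": "cat"}}
        self.rules = []          # [(condition, conclusion)],  e.g. [(("has", "fur"), ("is", "mammal"))]
        self.sequences = {}      # word -> {next_word: count}, e.g. {"chases": {"cat": 2}}
        self.functions = {}      # name -> (params, body)
        self.scope = {}          # parameter bindings during the current function call
```

Keeping all state in plain dicts and lists makes the interpreter easy to inspect: the whole knowledge base is just data that can be printed or serialized at any point.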

Executing Facts

know dog is animal
know dog has fur

The interpreter stores these in facts:

facts = {
    "dog": {
        "is": "animal",
        "has": "fur"
    }
}
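The execution step for a know statement can be sketched in a few lines. The helper name execute_know is hypothetical; it only shows the shape of the update, not the real implementation.

```python
def execute_know(facts, subject, predicate, value):
    # Record a fact such as `know dog is animal`:
    # create the subject's property dict on first use, then set the predicate.
    facts.setdefault(subject, {})[predicate] = value

facts = {}
execute_know(facts, "dog", "is", "animal")
execute_know(facts, "dog", "has", "fur")
# facts == {"dog": {"is": "animal", "has": "fur"}}
```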

Executing Inference Rules

when has fur then is mammal
infer dog

When infer dog runs, the interpreter checks every rule against dog's facts. dog has fur matches the condition has fur, so the interpreter automatically adds dog is mammal to the facts. No explicit code needed.
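The matching loop behind infer might look like the sketch below, assuming rules are stored as (condition, conclusion) predicate/value pairs. It loops until a fixed point so that newly derived facts can trigger further rules; the real implementation may differ.

```python
def infer(facts, rules, subject):
    # Apply every rule whose condition matches the subject's facts,
    # repeating until no new fact is derived (a simple fixed point).
    props = facts.setdefault(subject, {})
    changed = True
    while changed:
        changed = False
        for (cond_pred, cond_val), (concl_pred, concl_val) in rules:
            if props.get(cond_pred) == cond_val and props.get(concl_pred) != concl_val:
                props[concl_pred] = concl_val  # derive the new fact
                changed = True

facts = {"dog": {"has": "fur"}}
rules = [(("has", "fur"), ("is", "mammal"))]
infer(facts, rules, "dog")
# facts["dog"]["is"] == "mammal"
```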

Design Principle

Inference in Terse is a language construct, not a library call. The compiler understands inference semantics at parse time.


Functions and Scope

to classify thing
  infer thing
  return thing

classify dog

When classify dog is called:

  1. The interpreter finds the classify function definition
  2. Creates a new scope: {thing: "dog"}
  3. Executes the body — infer thing becomes infer dog
  4. return thing resolves thing from scope → returns "dog"
  5. Scope is discarded after the call

Parameters are resolved at call time via resolve_scope(). Each function call gets its own isolated scope.
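The call sequence above can be sketched as follows. The names call_function and the tuple encoding of the body are assumptions made for illustration; only the scoping behavior mirrors the description.

```python
def call_function(functions, name, args):
    # Look up the definition and bind parameters in a fresh, isolated scope,
    # e.g. calling `classify dog` yields scope == {"thing": "dog"}.
    params, body = functions[name]
    scope = dict(zip(params, args))
    result = None
    for op, operand in body:
        # resolve_scope(): replace a parameter name with its bound argument.
        target = scope.get(operand, operand)
        if op == "return":
            result = target
        # other ops ("infer", ...) would act on `target` here
    return result  # the scope dies with the call

functions = {"classify": (["thing"], [("infer", "thing"), ("return", "thing")])}
call_function(functions, "classify", ["dog"])  # returns "dog"
```

Because each call builds its own scope dict, recursive or nested calls cannot clobber each other's bindings.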


Markov Chain Sequence Learning

One of Terse's most distinctive features — the ability to learn probabilistic sequences from examples.

learn dog chases cat runs away
learn dog chases cat hides
predict after chases

How it works

The learn statement extracts consecutive pairs from the sequence and records how often each transition occurs:

dog → chases: 2
chases → cat: 2
cat → runs: 1
cat → hides: 1
runs → away: 1

predict after chases returns cat with confidence 1.0 (it always follows chases).

generate from dog steps 3 follows the most likely path: dog → chases → cat → runs
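All three statements reduce to operations on a table of pairwise transition counts. A minimal sketch, with illustrative method names (learn, predict_after, generate) that are not necessarily the real API:

```python
from collections import defaultdict

class Sequences:
    def __init__(self):
        # word -> {next_word: count}, e.g. {"chases": {"cat": 2}}
        self.transitions = defaultdict(lambda: defaultdict(int))

    def learn(self, words):
        # Count every consecutive pair in the example sequence.
        for a, b in zip(words, words[1:]):
            self.transitions[a][b] += 1

    def predict_after(self, word):
        # Most frequent successor plus its share of all observed successors.
        nexts = self.transitions.get(word)
        if not nexts:
            return None, 0.0
        best = max(nexts, key=nexts.get)  # ties: first-learned transition wins
        return best, nexts[best] / sum(nexts.values())

    def generate(self, start, steps):
        # Greedily follow the most likely transition for `steps` hops.
        path = [start]
        for _ in range(steps):
            nxt, _ = self.predict_after(path[-1])
            if nxt is None:
                break
            path.append(nxt)
        return path

s = Sequences()
s.learn("dog chases cat runs away".split())
s.learn("dog chases cat hides".split())
s.predict_after("chases")  # -> ("cat", 1.0)
s.generate("dog", 3)       # -> ["dog", "chases", "cat", "runs"]
```

Note that runs and hides are tied after cat (one observation each); in this sketch the first-learned transition wins, which reproduces the path shown above.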

Analogy

Autocomplete — but driven by meaning and learned relationships, not statistics over raw text.


The Knowledge Base

After executing a Terse program, the interpreter holds a live knowledge graph in memory. Every know statement adds a node property. Every relationship statement adds an edge. Every when/infer pair can derive new facts automatically.

This knowledge base is the foundation that the NCI architecture is built on — the same graph-based knowledge representation, the same inference model, scaled up with semantic signatures and Hebbian learning.