Core Concepts

All Terse concepts explained in plain English.


The Three Laws

Law I

"Knowledge is not stored, it is organized. Retrieval is not lookup, it is reconstruction."

Knowledge in Terse isn't a database you query. It's a graph of relationships you traverse. When you ask Terse what a dog is, it doesn't look up a record — it reconstructs the answer by following associations.

Law II

"Capability is not authorization."

Just because a function can do something doesn't mean it should. Terse separates what code is capable of from what it is permitted to do. The can and needs permission keywords encode this distinction at the language level. The NCI Ethics Core chip enforces it in silicon.

Law III

"The compiler works harder so you don't have to."

Memory allocation, type layout, tensor representation, hardware targeting — these are compiler problems, not programmer problems. Terse code reads simply. The compiler handles the complexity underneath.


Knowledge Graph

The fundamental data structure in Terse. Not an array. Not a hash map. A graph of nodes connected by typed, weighted edges.

Analogy

A mind map. When you think of "dog", you don't retrieve a database record — you activate a cluster of associations: animal, fur, loyal, chases cats. Terse stores knowledge the same way.

Node

A concept in the knowledge graph. Created with know.

know dog is animal

Edge

A relationship between two nodes. Created with relationship syntax.

dog chases cat

Weight

A numeric strength on a fact or edge. Higher weight = stronger association.

know dog weight 0.87
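The node, edge, and weight entries above can be sketched in plain Python. This is a minimal illustration of the data model, not Terse's implementation; names like add_fact and associations are invented for this example.

```python
# Nodes are concepts; each edge is a (relation, target, weight) triple.
graph = {}

def add_fact(source, relation, target, weight=1.0):
    """Record a typed, weighted edge between two concept nodes."""
    graph.setdefault(source, []).append((relation, target, weight))

add_fact("dog", "is", "animal")
add_fact("dog", "chases", "cat")
add_fact("dog", "has", "fur", weight=0.87)

def associations(concept):
    """'Retrieve' a concept by walking its outgoing edges."""
    return [(rel, tgt, w) for rel, tgt, w in graph.get(concept, [])]

print(associations("dog"))
# [('is', 'animal', 1.0), ('chases', 'cat', 1.0), ('has', 'fur', 0.87)]
```

Retrieval here is traversal, not lookup: asking about dog means following its edges, which is the reconstruction idea from Law I in miniature.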

Inference

The process of deriving new facts from existing ones using rules.

when has fur then is mammal
infer dog

After infer dog, if dog has fur is true, Terse derives dog is mammal. No explicit derivation code is needed; the rule fires on its own during inference.

Analogy

A detective. Sherlock doesn't just recall facts — he chains them. Tan line → outdoors a lot → army doctor. Terse inference works the same way.


Markov Chain Sequence Learning

Terse can learn probabilistic sequences — "given this concept, what comes next?"

learn dog chases cat runs away
learn dog chases cat hides
predict after chases

After two training sequences, predict after chases returns cat with a confidence score, because cat always follows chases in the training data.

Analogy

Autocomplete — but driven by learned relationships, not statistics over text.
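The learn/predict behavior described above is a first-order Markov chain over concept sequences, which can be sketched with transition counts. The function names mirror the Terse keywords but are illustrative Python, not the runtime API.

```python
from collections import Counter, defaultdict

# transitions[current][next] = how often `next` followed `current`.
transitions = defaultdict(Counter)

def learn(sequence):
    """Count which token follows which across a training sequence."""
    for current, nxt in zip(sequence, sequence[1:]):
        transitions[current][nxt] += 1

learn(["dog", "chases", "cat", "runs", "away"])
learn(["dog", "chases", "cat", "hides"])

def predict_after(token):
    """Return the most likely next token and its confidence."""
    counts = transitions[token]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

print(predict_after("chases"))  # ('cat', 1.0)
```

With both training sequences containing "chases cat", the prediction is cat with confidence 1.0; after "cat", the counts split between runs and hides.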


Compression as a Type

In most languages, compression is something you do to data after the fact — zip a file, quantize a model. In Terse, compressed is a native type. Values have a compressed and expanded form that the compiler knows about. The runtime manages transitions automatically.

This feature is planned for Phase 1 and is not yet implemented.
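Since the compressed type is not yet implemented, the following is only a speculative Python sketch of the idea using zlib. The Compressed class and expand method are invented here for illustration and do not reflect Terse's eventual design.

```python
import zlib

class Compressed:
    """A value stored in compressed form, expanded on access."""
    def __init__(self, data: bytes):
        self._blob = zlib.compress(data)  # stored (compressed) form

    def expand(self) -> bytes:
        return zlib.decompress(self._blob)  # expanded form

value = Compressed(b"fur " * 1000)
assert value.expand() == b"fur " * 1000  # round-trips losslessly
print(len(value._blob) < 4000)  # True: stored form is smaller
```

In Terse the compiler would track both forms and insert the transitions itself; here they are explicit method calls.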


Graph Semantics, Tensor Performance

Terse presents a graph-shaped programming model to the developer. Under the hood, the compiler represents knowledge structures as tensors for performance. The programmer writes intuitive graph code. The compiler generates fast tensor operations.
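The lowering described above can be sketched by re-expressing weighted edges as a dense adjacency matrix, the shape tensor kernels operate on. The exact layout Terse's compiler uses is not documented here; this matrix encoding is an assumption for illustration.

```python
# The same weighted edges, as a graph (dict) and as a tensor (matrix).
edges = {("dog", "cat"): 0.9, ("cat", "mouse"): 0.7}
nodes = sorted({n for pair in edges for n in pair})  # ['cat', 'dog', 'mouse']
index = {name: i for i, name in enumerate(nodes)}

# Dense |V| x |V| weight matrix: matrix[src][dst] = edge weight.
matrix = [[0.0] * len(nodes) for _ in nodes]
for (src, dst), weight in edges.items():
    matrix[index[src]][index[dst]] = weight

print(matrix[index["dog"]][index["cat"]])  # 0.9
```

The programmer-facing view is the edges dict; the matrix is what a batched, vectorized backend would actually consume.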


Ethics as a Language Construct

Terse provides can and needs permission as language-level keywords — not library calls.

to delete_node target
  can remove, modify
  needs permission: admin
  if authorized
    remove target

This separates capability (what the function can do) from authorization (what it's permitted to do). This is Law II encoded in syntax.
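The capability/authorization split can be sketched in Python with a permission-checking decorator. The needs_permission decorator and GRANTED set are invented for this example; Terse enforces the same distinction with keywords rather than library code.

```python
GRANTED = {"admin"}  # permissions the current caller actually holds

def needs_permission(required):
    """Refuse to run the wrapped function without the named permission."""
    def wrap(fn):
        def guarded(*args, **kwargs):
            if required not in GRANTED:
                raise PermissionError(f"missing permission: {required}")
            return fn(*args, **kwargs)
        return guarded
    return wrap

@needs_permission("admin")
def delete_node(target, store):
    """Capable of removing a node, but only runs when authorized."""
    store.discard(target)

nodes = {"dog", "cat"}
delete_node("dog", nodes)
print(nodes)  # {'cat'}
```

Here the check lives in runtime code, so it can be bypassed by calling the unwrapped function; the point of Terse's keyword-and-silicon approach is that the check cannot be routed around.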

The NCI Ethics Core chip takes this further — ethics rules written in Terse are compiled to silicon and executed in hardware. A capability that is not authorized never reaches the AI system. You cannot jailbreak hardware.


Self-Hosting

The long-term goal: Terse is written in Python until it is capable of compiling itself. Once the LLVM compiler (Phase 3) is complete, Terse will be rewritten in Terse. The compiler bootstraps itself.

This is a long-term milestone, not a current capability. It's included here because self-hosting is a meaningful test of language completeness.