The Technology

The Logos is an extensible formal language of thought with a modular operator system, a dual verification architecture, and the capacity to generate unlimited clean training data.

Layer Architecture

The Logos is built as a modular, extensible system. Each layer adds capabilities on top of the foundations below it.

Constitutive Foundation

Required base layer

Fundamental building blocks: predicates, functions, quantifiers, and logical connectives

The fundamental building blocks: predicates, functions, variables, boolean operators, lambda abstraction, quantifiers, identity, propositional identity, grounding, essence, and reduction. Uses hyperintensional semantics with bilateral propositions (verifier/falsifier pairs).

Operators

  • Boolean operators: AND, OR, NOT, IF...THEN
  • Lambda abstraction: λ
  • Quantifiers: ∀, ∃
  • Identity: =
  • Propositional identity
  • Grounding
  • Essence
  • Reduction
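
The bilateral semantics above can be sketched in miniature. The following Python toy assumes states are frozensets of atomic facts and fusion is set union; the names and definitions are simplifying assumptions for illustration, not the Logos implementation.

```python
from itertools import product

# Toy bilateral propositions (verifier/falsifier pairs) in the style of
# truthmaker semantics. A state is a frozenset of atomic facts and
# fusion is set union -- simplifying assumptions for illustration only.

def fuse(s, t):
    """Fusion of two states: the smallest state containing both."""
    return s | t

def conj(p, q):
    """A AND B: verified by fusions of verifiers of the conjuncts,
    falsified by any falsifier of either conjunct."""
    (pv, pf), (qv, qf) = p, q
    return ({fuse(s, t) for s, t in product(pv, qv)}, pf | qf)

def disj(p, q):
    """A OR B: the dual of conjunction."""
    (pv, pf), (qv, qf) = p, q
    return (pv | qv, {fuse(s, t) for s, t in product(pf, qf)})

def neg(p):
    """NOT A: swap verifiers and falsifiers."""
    pv, pf = p
    return (pf, pv)

# Atomic propositions verified/falsified by single atomic states.
A = ({frozenset({"a"})}, {frozenset({"not-a"})})
B = ({frozenset({"b"})}, {frozenset({"not-b"})})

# A AND B is verified only by the single fused state {a, b}.
assert conj(A, B)[0] == {frozenset({"a", "b"})}
```

In this toy model the De Morgan dual holds exactly: NOT (A AND B) comes out identical to (NOT A) OR (NOT B), since negation simply swaps the two sides of the pair.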

Dynamical Foundation

Required, builds on Constitutive

Temporal evolution operators for modeling change, processes, and state transitions

Extends the base with temporal and modal operators: necessity, possibility, past, future, since, until, counterfactual conditionals, causal operators, and the actuality predicate.

Operators

  • Necessity / Possibility: □, ◇
  • Temporal operators: Past, Future, Since, Until
  • Counterfactuals: Would, Might
  • Causal operators
  • Actuality predicate
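
To make the temporal operators concrete, here is a minimal sketch of their truth conditions over a finite, discrete trace of states. The finite-trace setting is a simplifying assumption for illustration; the Logos uses a richer frame semantics.

```python
# Toy temporal operator semantics over a finite trace, where each
# state is a set of atoms true at that time (illustration only).

def past(trace, i, phi):
    """Past phi: phi held at some earlier time."""
    return any(phi(trace, j) for j in range(i))

def future(trace, i, phi):
    """Future phi: phi will hold at some later time."""
    return any(phi(trace, j) for j in range(i + 1, len(trace)))

def since(trace, i, phi, psi):
    """phi Since psi: psi held at some earlier time j, and phi has
    held at every time strictly between j and now."""
    return any(
        psi(trace, j) and all(phi(trace, k) for k in range(j + 1, i))
        for j in range(i)
    )

def until(trace, i, phi, psi):
    """phi Until psi: psi will hold at some later time j, and phi
    holds from now up to (but excluding) j."""
    return any(
        psi(trace, j) and all(phi(trace, k) for k in range(i, j))
        for j in range(i + 1, len(trace))
    )

def atom(name):
    """Lift an atomic fact to a formula evaluated at a time."""
    return lambda trace, i: name in trace[i]

trace = [{"raining"}, {"raining"}, {"sunny"}]
# "Raining Until sunny" holds at time 0: it rains until time 2.
assert until(trace, 0, atom("raining"), atom("sunny"))
```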

Middle Extensions

Modular plugins, any combination

Specialized domains: belief, knowledge, obligation, permission, and value

Optional domain-specific extensions that can be loaded independently: epistemic logic (belief, probability), normative logic (obligation, permission), abilities, choice theory, and spatial reasoning.

Operators

  • Epistemic: Belief, Probability, Might, Must
  • Normative: Obligation, Permission, Preference
  • Abilities: Specific, Generic, STIT
  • Choice: Free Permission, Best Choice
  • Spatial: Location, Distance
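
The epistemic operators can be illustrated with a toy Kripke-style model. The world names, accessibility relation, and facts below are invented for this example and are not drawn from the Logos extensions.

```python
# Toy Kripke semantics for epistemic Belief and Might (illustration
# only). An agent believes phi iff phi holds at every world the agent
# considers possible; Might phi holds iff phi holds at some such world.

worlds = {"w1", "w2", "w3"}
# Worlds the agent cannot rule out, from each actual world.
accessible = {"w1": {"w1", "w2"}, "w2": {"w2"}, "w3": {"w3"}}
# Atomic facts true at each world.
facts = {"w1": {"p"}, "w2": {"p", "q"}, "w3": set()}

def believes(world, phi):
    """Belief phi: phi holds in every accessible world."""
    return all(phi(v) for v in accessible[world])

def might(world, phi):
    """Epistemic Might phi: phi holds in some accessible world."""
    return any(phi(v) for v in accessible[world])

p = lambda w: "p" in facts[w]
q = lambda w: "q" in facts[w]

assert believes("w1", p)      # p holds at both w1 and w2
assert not believes("w1", q)  # q fails at w1
assert might("w1", q)         # but q holds at w2
```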

Agent-Dependent Extensions

Multi-agent indexed operators

Intentionality, action, and agent-relative operators for modeling minds

Operators indexed to specific agents for multi-agent reasoning: agential operators, social operators (common ground, trust), and reflection operators (metacognition, perspectival).

Operators

  • Agential: Multi-agent indexed operators
  • Social: Common ground, Trust
  • Reflection: Metacognition, Perspectival
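
Indexing operators to agents can be sketched by giving each agent its own accessibility relation. Everything below (agent names, relations, and the simplified common-ground definition) is an assumption for illustration; in particular, full common ground is a fixed point of iterated mutual belief, which this sketch approximates with a single round.

```python
# Toy agent-indexed belief and a one-round approximation of common
# ground (illustration only; not the Logos definitions).

facts = {"w1": {"p"}, "w2": {"p"}, "w3": set()}
# Each agent has its own accessibility relation.
access = {
    "alice": {"w1": {"w1", "w2"}},
    "bob":   {"w1": {"w1", "w3"}},
}

def believes(agent, world, phi):
    """Belief indexed to an agent: phi at all worlds that agent allows."""
    return all(phi(v) for v in access[agent][world])

def shared_belief(world, phi, agents=("alice", "bob")):
    """One-round approximation of common ground: every agent believes phi."""
    return all(believes(a, world, phi) for a in agents)

p = lambda w: "p" in facts[w]

assert believes("alice", "w1", p)   # p holds at w1 and w2
assert not believes("bob", "w1", p) # p fails at w3, which bob allows
assert not shared_belief("w1", p)   # so p is not shared
```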

Three Integrated Packages

The Logos verification system is implemented through three complementary packages, each providing a distinct role in the dual verification architecture.

Proof-Checker

Lean 4
In Progress

Derives valid inferences with mathematical certainty. Generates proof receipts that serve as positive reinforcement learning signals for AI training.

  • Core Foundation layer implemented with full bilateral semantics
  • Dynamical Foundation complete (syntax, truth conditions, frame)
  • Metalogic partially complete (~60% soundness proofs)
  • Generates proof receipt certificates for verified inferences
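
As a rough illustration of what the Proof-Checker certifies, here is a trivially machine-checked inference in Lean 4. The theorem name and statement are illustrative only and do not come from the Proof-Checker codebase; the compiled proof term is the kind of artifact a proof receipt attests to.

```lean
-- Modus ponens as a machine-checked theorem: given P → Q and P,
-- the proof term `hpq hp` is itself the certificate that Q follows.
theorem modus_ponens (P Q : Prop) (hpq : P → Q) (hp : P) : Q :=
  hpq hp
```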

Model-Checker

Python / Z3
v1.2.12

Searches for counterexamples using constraint solving. Validates or refutes inferences, providing corrective RL signals when reasoning is invalid.

  • Under active development
  • Published on PyPI as model-checker
  • Uses Z3 SMT solver for semantic verification
  • Generates countermodel certificates for invalid inferences

Model-Builder

Rust
Planned

Transforms natural language into formal semantic models that interpret the Logos, so that inferences can be drawn from ordinary text. Provides a foundation for abductive and inductive reasoning.

  • Architecture fully specified in design documents
  • Will translate natural language to formal Logos representations
  • Enables non-experts to leverage formal verification
  • Key to scaling Logos to practical applications
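
Since the Model-Builder is still planned, here is a purely hypothetical sketch of the kind of translation it would perform. Every pattern and output form below is invented for illustration; the actual architecture is specified in the design documents and will be implemented in Rust.

```python
import re

# Hypothetical pattern-based translation from English into a formal
# representation (invented for illustration; not the Model-Builder
# design). Unrecognized fragments fall back to atomic predicates.

PATTERNS = [
    (re.compile(r"^if (.+) then (.+)$"),
     lambda m: f"({translate(m[1])} -> {translate(m[2])})"),
    (re.compile(r"^it is necessary that (.+)$"),
     lambda m: f"□{translate(m[1])}"),
    (re.compile(r"^(.+) and (.+)$"),
     lambda m: f"({translate(m[1])} ∧ {translate(m[2])})"),
]

def translate(sentence):
    """Translate a sentence by recursive pattern matching."""
    s = sentence.strip().lower().rstrip(".")
    for pattern, build in PATTERNS:
        m = pattern.match(s)
        if m:
            return build(m)
    return s.replace(" ", "_")  # fall back to an atomic predicate

assert translate("If it rains then the ground is wet.") == \
    "(it_rains -> the_ground_is_wet)"
```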

Dual Verification

Every inference is checked from two independent directions -- syntactic proof and semantic countermodel search.

Diagram: the dual verification pipeline.

  • Inference: the premises and conclusion to verify, passed to both checkers
  • Proof-Checker (Lean 4): syntactic verification via formal proof; derives valid inferences with mathematical certainty, yielding a Proof Receipt (positive RL signal that reinforces valid reasoning patterns)
  • Model-Checker (Z3): semantic verification via model search; searches for counterexamples using constraint solving, yielding a Countermodel (corrective RL signal that corrects invalid reasoning patterns)
  • Both outputs feed AI Training Data: unlimited verified training signals

The Proof-Checker generates proof receipts -- mathematical certificates that an inference is valid. These serve as positive reinforcement learning signals, teaching AI systems which reasoning patterns are correct.

The Model-Checker searches for countermodels -- concrete scenarios where a claimed inference fails. These serve as corrective reinforcement learning signals, teaching AI systems to avoid invalid reasoning patterns.

Together, this dual verification architecture generates unlimited clean training data without any dependence on human annotation. Every inference is checked from two independent directions, providing the foundation for scalable AI oversight.
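
The training-signal loop can be sketched end to end in a self-contained toy. Here a brute-force truth-table search plays both roles for propositional inferences, a simplifying assumption: in the real system a Lean proof checker produces the receipts and a Z3 model checker produces the countermodels.

```python
from itertools import product

# Toy dual-verification loop (illustration only): every checked
# inference yields either a proof receipt (positive RL signal) or a
# countermodel (corrective RL signal), with no human annotation.

def check(premises, conclusion, atoms):
    """Return ('proof-receipt', None) if valid, else a countermodel."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return ("countermodel", v)  # corrective RL signal
    return ("proof-receipt", None)      # positive RL signal

A = lambda v: v["A"]
B = lambda v: v["B"]
implies = lambda p, q: lambda v: (not p(v)) or q(v)

# Modus ponens is valid; affirming the consequent is not.
assert check([implies(A, B), A], B, ["A", "B"]) == ("proof-receipt", None)
kind, model = check([implies(A, B), B], A, ["A", "B"])
assert kind == "countermodel" and model == {"A": False, "B": True}
```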