Logos Laboratories

Interpreted and Verified Reasoning for Trustworthy AI

The Logos enables AI systems to construct models of the world in order to draw verified inferences for advanced planning and discovery.

Problems

Current AI systems are trained on human reasoning data that is limited, unverified, and uninterpreted.

Limited

Reasoning data collected from expert humans is expensive to annotate and limited in quantity, quality, and complexity. Training large language models on what humans take to be valid reasoning inherits all-too-human inaccuracies, limitations, and vulnerabilities to deception.

Unverified

Training on annotated human reasoning data rewards the appearance of validity rather than validity itself, making it impossible to tell valid and invalid reasoning apart. Without any mechanism to verify inferences, even the most sophisticated AI systems cannot be trusted for high-stakes applications.

Uninterpreted

Pattern-matching on natural language does not train AI systems to construct and revise explicit representations of the context in question. This not only limits oversight: it also leaves AI systems unable to engage in the abductive, deductive, and inductive reasoning employed throughout the sciences.

Solutions

Generative AI works best under a system of constraints.

The Logos is a formal logic system that generates mathematically verified synthetic reasoning data of arbitrary complexity for pretraining language models, far exceeding the quantity and complexity of human-annotated reasoning data.

Fine-tuning AI systems to interpret the Logos by constructing and testing consistent semantic models of real-world situations creates a capacity to reason abductively, deductively, and inductively from incomplete information.

The packages below provide the foundations for general intelligence, enabling AI systems to conduct interpreted and verified reasoning within the Logos to drive discovery in industry and science.

Core Tools

Advantages

A methodology for superhuman reasoning in industry and science.

Infinite Clean Data

Generate an unlimited supply of verified training data without human annotation or limits on complexity. Supports both synthetic data tailored to specific applications and pretraining of foundation models.

Transparency and Validity

AI systems interpret and reason within the Logos via structured semantic models. This enables trustworthy oversight with proof receipts that witness the validity of every inference.

Dual Verification

Valid inferences are verified by the Proof-Checker (Lean 4) to provide positive RL signals, while invalid inferences are refuted by countermodels from the Model-Checker (Z3) to provide corrective RL signals.
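The production tools named above are Lean 4 and Z3; as a self-contained illustration of the same split, the sketch below checks a propositional inference by exhaustive search, returning a positive signal when no countermodel exists and a corrective signal carrying the countermodel when one does. All names here are illustrative, not the Logos API.

```python
from itertools import product

def countermodel(premises, conclusion, atoms):
    """Search all valuations for one where the premises hold but the conclusion fails.

    premises/conclusion are functions from a valuation dict to bool.
    Returns a falsifying valuation (a countermodel) or None if the inference is valid.
    """
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return v
    return None

def rl_signal(premises, conclusion, atoms):
    """+1 (positive signal) when valid; -1 plus a witness otherwise."""
    cm = countermodel(premises, conclusion, atoms)
    return (+1, None) if cm is None else (-1, cm)

# Modus ponens: p, p -> q |- q is valid, so the signal is positive.
signal, witness = rl_signal(
    [lambda v: v["p"], lambda v: (not v["p"]) or v["q"]],
    lambda v: v["q"],
    ["p", "q"],
)

# Affirming the consequent: q, p -> q |- p is invalid; the countermodel
# itself is the corrective signal.
signal2, witness2 = rl_signal(
    [lambda v: v["q"], lambda v: (not v["p"]) or v["q"]],
    lambda v: v["p"],
    ["p", "q"],
)
```

In the full system the positive case would instead be certified by a machine-checked proof, but the division of labor is the same: proofs witness validity, countermodels witness invalidity.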

Extensible Architecture

A modular plugin system extends the Logos to include logical operators that are relevant for a specific application, e.g., legal reasoning requires epistemic and normative operators in addition to the foundations.
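As a hedged sketch of how such a plugin system might look, the registry below lets a domain package register evaluation rules for new operators alongside the core ones; the operator names and context fields are hypothetical, chosen only to echo the legal-reasoning example.

```python
# Minimal operator registry (all names are illustrative, not the Logos API).
OPERATORS = {}

def operator(symbol):
    """Register an evaluation rule for the logical operator `symbol`."""
    def register(fn):
        OPERATORS[symbol] = fn
        return fn
    return register

# Core extensional operators ship with the foundations.
@operator("not")
def _not(ctx, p):
    return not evaluate(ctx, p)

@operator("and")
def _and(ctx, p, q):
    return evaluate(ctx, p) and evaluate(ctx, q)

# A domain plugin (e.g. legal reasoning) layers normative operators on top.
@operator("permitted")
def _permitted(ctx, p):
    return p in ctx["permissible"]

def evaluate(ctx, sentence):
    """Evaluate atoms against the context; dispatch compounds to operators."""
    if isinstance(sentence, str):
        return sentence in ctx["facts"]
    head, *args = sentence
    return OPERATORS[head](ctx, *args)

ctx = {"facts": {"signed"}, "permissible": {"disclose"}}
verdict = evaluate(ctx, ("and", "signed", ("permitted", "disclose")))
```

The design point is that core and domain operators share one evaluation interface, so extending the logic never requires touching the foundations.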

Model-Based Discovery

Build consistent interpretations of situations and weigh evidence between competing models. This capacity for abductive and inductive reasoning from partial information enables AI systems to drive systematic discovery as well as advanced applications in industry.

Coordination Under Uncertainty

Epistemic reasoning about beliefs and credences in multiagent systems supports coordination on shared objectives in both cooperative and adversarial contexts. AI systems can model what others believe to predict their actions and align on goals despite partial information.
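A minimal sketch of that idea, with hypothetical worlds, credences, and payoffs: model the other agent's beliefs as a credence distribution over possible worlds, then predict the action that maximizes its expected payoff under those beliefs.

```python
# Illustrative only: the worlds, credences, and payoffs are placeholders.

def predict_action(belief_worlds, credence, payoffs, actions):
    """Pick the action with highest expected payoff under the modeled credences.

    belief_worlds: worlds the other agent considers possible
    credence: dict world -> probability over belief_worlds
    payoffs: dict (action, world) -> utility
    """
    def expected(action):
        return sum(credence[w] * payoffs[(action, w)] for w in belief_worlds)
    return max(actions, key=expected)

# The modeled agent believes high demand is more likely than low demand.
worlds = ["high_demand", "low_demand"]
credence = {"high_demand": 0.7, "low_demand": 0.3}
payoffs = {
    ("expand", "high_demand"): 10, ("expand", "low_demand"): -5,
    ("hold", "high_demand"): 2,    ("hold", "low_demand"): 2,
}
prediction = predict_action(worlds, credence, payoffs, ["expand", "hold"])
```

Swapping in pessimistic credences flips the prediction, which is exactly the lever coordination exploits: shared goals become reachable once each agent can anticipate how the others will act under their own partial information.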

Contextual Parameters

Sentences are evaluated at contextual parameters encoding preferences, norms, beliefs, and time. AI systems reason forward from context to conclusions, or backward from findings to context updates, enabling bidirectional contextual reasoning.
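The bidirectional pattern can be sketched as follows, with a deliberately toy context whose fields and sentences are hypothetical: forward evaluation checks a sentence at the current parameters, and backward update folds a new finding into them.

```python
from dataclasses import dataclass, replace

# Hypothetical bundle of contextual parameters (fields are illustrative).
@dataclass(frozen=True)
class Context:
    time: int
    beliefs: frozenset = frozenset()
    norms: frozenset = frozenset()

def holds(ctx, sentence):
    """Forward: evaluate a (toy, atomic) sentence at the context."""
    return sentence in ctx.beliefs

def update(ctx, finding):
    """Backward: fold a new finding into the contextual parameters."""
    return replace(ctx, beliefs=ctx.beliefs | {finding}, time=ctx.time + 1)

ctx = Context(time=0, beliefs=frozenset({"door_locked"}))
ctx2 = update(ctx, "alarm_armed")
```

Freezing the dataclass keeps each context immutable, so an update yields a new parameter point rather than silently mutating the one earlier inferences were evaluated at.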

Normative Control

Normative parameters define what is permissible and obligatory at each context, as well as the preference ordering over the range of possible outcomes. This enables oversight and control of AI systems operating autonomously under conditions of uncertainty.
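A minimal sketch of such a normative filter, with placeholder norm sets and preference scores: impermissible actions are discarded outright, obligatory ones take priority, and the preference ordering breaks the remaining ties.

```python
# Illustrative only: the action names, norm sets, and scores are hypothetical.

def choose(actions, permissible, obligatory, preference):
    """Drop impermissible actions, prioritize obligatory ones, rank the rest.

    permissible/obligatory: sets of action names at the current context
    preference: dict action -> score encoding the outcome ordering
    """
    allowed = [a for a in actions if a in permissible]
    required = [a for a in allowed if a in obligatory]
    candidates = required or allowed
    return max(candidates, key=lambda a: preference.get(a, 0)) if candidates else None

actions = ["disclose", "delay", "delete"]
permissible = {"disclose", "delay"}
obligatory = {"disclose"}
preference = {"delay": 3, "disclose": 2, "delete": 5}
decision = choose(actions, permissible, obligatory, preference)
```

Note that "delete" scores highest on raw preference yet is never chosen because it is impermissible, and the obligation to disclose overrides the mildly preferred delay: the normative parameters, not the utilities alone, keep the autonomous choice within bounds.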