Frontier AI Research
Exploring resilient, interpretable intelligence that augments human capability. This space gathers experiments, models, and field notes from the Computer Wizard lab.
Latest Work
Read the current working paper outlining our loss-informed training regime and evaluation harness.
AI Alignment
Our current research formalises alignment at the objective level via the LP–PM–ERT architecture: Logical Pragmatism for physical feasibility, Pragmatic Morality for cooperative stability, and Epistemic Responsibility Theory for calibrated truth-seeking. The framework targets self-modification safety by tying utility weights to Lyapunov-stable dynamics.
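As a rough intuition for the Lyapunov-stability claim (an illustrative sketch only — the equilibrium `w_star`, the quadratic Lyapunov function, and the gradient-flow dynamics below are our assumptions, not the paper's actual construction): if the utility-weight vector evolves so that a positive-definite function V of the weights never increases, the weights cannot drift away from the intended equilibrium under self-modification.

```python
import numpy as np

# Illustrative sketch only: the paper's actual dynamics are not reproduced here.
# We model the utility-weight vector w as a discrete-time gradient flow on a
# candidate Lyapunov function V(w) = ||w - w_star||^2, so V is non-increasing
# along trajectories -- the standard Lyapunov-stability certificate.

w_star = np.array([0.5, 0.3, 0.2])  # hypothetical equilibrium weights (LP, PM, ERT)

def V(w):
    """Candidate Lyapunov function: squared distance to the equilibrium."""
    return float(np.sum((w - w_star) ** 2))

def step(w, eta=0.1):
    """One self-modification step: gradient descent on V (assumed dynamics)."""
    return w - eta * 2.0 * (w - w_star)

w = np.array([0.9, 0.05, 0.05])  # arbitrary initial weights
values = [V(w)]
for _ in range(50):
    w = step(w)
    values.append(V(w))

# V decreases monotonically toward 0, certifying stability of w_star
# under these assumed dynamics.
assert all(b <= a for a, b in zip(values, values[1:]))
```

Under these assumed dynamics V shrinks geometrically each step, so the weights converge to `w_star`; the paper's framework presumably establishes an analogous non-increase condition for its own weight dynamics.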
Read the full working paper for definitions, implementation paths, and failure analyses: LP–PM–ERT Alignment Architecture.
Direct PDF: LP–PM–ERT Alignment Architecture (Full Paper).pdf.