AI Research

Alignment research and experimental tooling.

Exploring resilient, interpretable intelligence that augments human capability. This space gathers experiments, models, and field notes from the Computer Wizard lab.

Now in progress: alignment prototypes, evaluation harnesses, and model diagnostics tied to real-world tasks.
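For readers unfamiliar with the term, an evaluation harness is the scaffolding that runs a model against a fixed task set and scores its outputs. The sketch below is a minimal, hypothetical illustration only: the Model protocol, the Task structure, and the exact-match scorer are assumptions made for this example, not the lab's actual harness.

```python
from dataclasses import dataclass
from typing import Callable, Protocol


class Model(Protocol):
    """Any object exposing a text-in, text-out interface."""
    def generate(self, prompt: str) -> str: ...


@dataclass
class Task:
    prompt: str
    expected: str


def exact_match(output: str, expected: str) -> float:
    """Simplest possible scorer: 1.0 on an exact (trimmed) match."""
    return float(output.strip() == expected.strip())


def run_harness(model: Model, tasks: list[Task],
                score: Callable[[str, str], float] = exact_match) -> float:
    """Run every task through the model and return the mean score."""
    scores = [score(model.generate(t.prompt), t.expected) for t in tasks]
    return sum(scores) / len(scores)
```

A production harness would add batching, logging, and per-task diagnostics; the point here is only the shape of the loop: model in, fixed tasks, scored outputs.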

Latest Work

Read the current working paper outlining our loss-informed training regime and evaluation harness.

AI Alignment

Our research formalises alignment at the objective level via the LP-PM-ERT architecture, focusing on cooperative stability and calibrated truth-seeking.
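Calibrated truth-seeking, as the term is generally used, means a model's stated confidence should track its empirical accuracy. How LP-PM-ERT enforces this is a matter for the working paper; the snippet below is only a standard, generic illustration of measuring calibration via expected calibration error (ECE), with the binning scheme assumed for the example.

```python
def expected_calibration_error(confidences: list[float],
                               correct: list[bool],
                               n_bins: int = 10) -> float:
    """Bin predictions by confidence and compare each bin's mean
    confidence to its accuracy; a well-calibrated model scores near 0."""
    bins: list[list[tuple[float, bool]]] = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    total = len(confidences)
    ece = 0.0
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(1 for _, ok in bucket if ok) / len(bucket)
        ece += (len(bucket) / total) * abs(avg_conf - accuracy)
    return ece
```

For instance, expected_calibration_error([0.9, 0.8, 0.6], [True, True, False]) returns 0.3: the model is underconfident on its correct answers and overconfident on its wrong one.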

Working Paper

The working paper, LP-PM-ERT Alignment Architecture, is available on this site, along with the full PDF.