Latest Work
Read the current working paper outlining our loss-informed training regime and evaluation harness.
AI Alignment
Our research formalises alignment at the objective level via the LP-PM-ERT architecture, focusing on cooperative stability and calibrated truth-seeking.
Working Paper
The working paper, "LP-PM-ERT Alignment Architecture", and its full PDF are available on this site.