"We cannot rely on probabilistic safety for deterministic stakes."
I conduct forensic audits of Tier 1 AI models, documenting failure modes in high-stakes professional environments. My focus is Goal-Oriented Factual Inversion (GOFI) — a failure class where models correctly identify factual ground truth and then invert that finding under goal pressure. Confirmed across legal, clinical, physical safety, and positive-valence domains.
These findings motivated the development of the Sovereign Sentinel Architecture (SSA), a six-axis deterministic framework for AI safety in high-stakes professional environments.
22 years in procurement, materials management, contract analysis, and risk prevention. Volunteer patient advocate, liaising between patients and clinical teams on high-stakes medical decisions. I approach AI safety from inside the professional workflows where these systems are actually deployed — not from a lab.
No institutional affiliation. Documented evidence only.
- Trinity-Audit-Forensics: Forensic archive of longitudinal red-team audits across four frontier models.
- SSA v1.2 Abstract: Six-axis deterministic control stack. Open for peer review.