AlphaJudge evaluates AlphaFold-predicted protein complexes by merging AI-derived confidences (ipTM, pTM, iptm+ptm/confidence_score, pLDDT, PAE) with fast, self-contained interface biophysics (contacts, H-bonds, salt bridges, buried area, solvation proxy, shape complementarity) into a tidy CSV for downstream analysis.
⚠️ Disclaimer
Interface biophysical scores in AlphaJudge have not yet been validated against CCP4/PISA and are intended for relative ranking, not quantitative biophysical interpretation.
AlphaJudge parses AF2 and AF3 outputs and summarizes per-model / per-interface metrics:
| category | metrics (examples) | notes |
|---|---|---|
| AlphaFold internal | ipTM, pTM, iptm+ptm/confidence_score, avg interface PAE, avg interface pLDDT | unified for AF2/AF3 |
| physical & geometric | buried area, contact pairs, H-bonds, salt bridges, interface composition | self-contained |
| derived scores | pDockQ, pDockQ2, mpDockQ, ipSAE, LIS, interface score | implemented here |
Use cases: rank poses, sanity-check AF confidences, or export features for ML.
AlphaFold models (AF2 or AF3) → AlphaJudge → interfaces.csv
- Detects AF2 vs AF3 automatically from the run directory
- Loads structure and confidences, computes interface descriptors
- Writes `interfaces.csv` into the same directory
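Once a run has been processed, the resulting `interfaces.csv` is ordinary CSV and can be consumed with the standard library alone. A minimal sketch (the sample rows below are invented for illustration; the `interface` and `iptm` column names follow the output schema described later in this README):

```python
# Sketch: load an interfaces.csv and rank chain pairs by ipTM.
import csv
from io import StringIO

# Stand-in for the contents of a real interfaces.csv.
SAMPLE = """interface,iptm,average_interface_pae
A_B,0.86,4.2
A_C,0.31,18.7
B_C,0.55,9.9
"""

def rank_interfaces(csv_text):
    """Return (interface, ipTM) pairs sorted by descending ipTM."""
    rows = list(csv.DictReader(StringIO(csv_text)))
    rows.sort(key=lambda r: float(r["iptm"]), reverse=True)
    return [(r["interface"], float(r["iptm"])) for r in rows]

print(rank_interfaces(SAMPLE)[0])  # → ('A_B', 0.86)
```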
Create a conda/mamba environment:

```bash
git clone https://github.com/KosinskiLab/AlphaJudge.git
cd AlphaJudge
mamba env create -f environment.yaml
mamba activate alphajudge
```

Then install with pip into the existing environment:

```bash
pip install .
```

or as an editable install:

```bash
pip install -e .
```

Requirements: Python ≥3.10; runtime dependencies are biopython, numpy, and matplotlib (installed automatically with `pip install .`).
The package exposes an `alphajudge` entry point.
```bash
# Basic synopsis
alphajudge PATH [PATH ...] \
  --models_to_analyse {best,all} \
  --contact_thresh 8.0 \
  --pae_filter 100.0 \
  [-r|--recursive] \
  [-o|--summary SUMMARY.csv]
```

- `PATH`: one or more run directories or roots to search
- `--contact_thresh`: contact cutoff in Å (default: 8.0)
- `--pae_filter`: skip interfaces whose average interface PAE is above this value (default: 100.0)
- `--models_to_analyse`: `best` or `all` (default: `best`)
- `-r` / `--recursive`: recursively discover runs under each `PATH`
- `-o` / `--summary`: write an aggregated CSV across all processed runs
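The `--pae_filter` option compares the threshold against the mean PAE over inter-chain residue pairs. A pure-Python sketch of that idea (whether both PAE directions, `PAE[i][j]` and `PAE[j][i]`, are averaged is an assumption here, not a statement about AlphaJudge's exact implementation):

```python
# Sketch of the --pae_filter logic: average PAE over residue pairs that
# span two chains, then skip the interface if the mean exceeds the cutoff.

def average_interface_pae(pae, chain_a_idx, chain_b_idx):
    """Mean PAE over inter-chain residue pairs (both directions, by assumption)."""
    values = []
    for i in chain_a_idx:
        for j in chain_b_idx:
            values.append(pae[i][j])
            values.append(pae[j][i])
    return sum(values) / len(values)

# Toy 4-residue PAE matrix: residues 0-1 form chain A, residues 2-3 chain B.
pae = [
    [0.5, 1.0, 9.0, 8.0],
    [1.0, 0.5, 7.0, 6.0],
    [9.0, 7.0, 0.5, 1.0],
    [8.0, 6.0, 1.0, 0.5],
]
avg = average_interface_pae(pae, [0, 1], [2, 3])
keep = avg <= 100.0  # the default --pae_filter keeps nearly everything
print(avg, keep)  # → 7.5 True
```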
Outputs:

- Always writes `interfaces.csv` inside each processed run directory.
- For each processed model, also writes a PAE heatmap PNG `pae_<model>.png` next to `interfaces.csv`.
- If `--summary` is provided, also writes a union-header CSV at the given path containing rows from all runs.
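Because different runs can emit different columns, the summary takes the union of all headers and leaves missing cells empty. A standard-library sketch of that pattern (the `write_summary` helper and the sample rows are illustrative, not AlphaJudge API):

```python
# Sketch: build a union-header CSV across runs with csv.DictWriter,
# which fills columns absent from a given row with restval.
import csv
from io import StringIO

def write_summary(runs, out):
    """runs: list of row-dict lists, one per run; out: writable text stream."""
    header = []
    for rows in runs:
        for row in rows:
            for key in row:
                if key not in header:
                    header.append(key)  # preserve first-seen column order
    writer = csv.DictWriter(out, fieldnames=header, restval="")
    writer.writeheader()
    for rows in runs:
        writer.writerows(rows)

run_a = [{"jobs": "dimerA", "iptm": "0.81"}]
run_b = [{"jobs": "dimerB", "iptm": "0.42", "interface_LIS": "0.12"}]
buf = StringIO()
write_summary([run_a, run_b], buf)
print(buf.getvalue())
```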
Examples
```bash
# Single AF2 run (directory contains ranking_debug.json, pae_*.json, and model files)
alphajudge test_data/af2/pos_dimers/Q13148+Q92900

# Single AF3 run (directory contains ranking_scores.csv, per-model summary/confidence files, and model files)
alphajudge test_data/af3/pos_dimers/Q13148+Q92900 --models_to_analyse all

# Aggregate multiple runs into one summary
alphajudge test_data/af2/pos_dimers/Q13148+Q92900 \
  test_data/af3/pos_dimers/Q13148+Q92900 \
  -o interfaces_summary.csv

# Recursively discover runs under roots and write a combined summary
alphajudge test_data/af2/pos_dimers test_data/af3/pos_dimers -r -o interfaces_summary.csv
```

Minimal example:
```python
from pathlib import Path
from alphajudge.parsers import pick_parser
from alphajudge.runner import process, process_many

run_dir = Path("test_data/af2/pos_dimers/Q13148+Q92900")
parser = pick_parser(run_dir)
print("Detected parser:", parser.name)  # "af2" or "af3"

process(str(run_dir), contact_thresh=8.0, pae_filter=100.0, models_to_analyse="best")
print("Wrote:", run_dir / "interfaces.csv")

# Multiple runs + optional recursion and summary
process_many(
    [str(run_dir), "test_data/af3/pos_dimers/Q13148+Q92900"],
    contact_thresh=8.0,
    pae_filter=100.0,
    models_to_analyse="best",
    recursive=False,
    summary_csv="interfaces_summary.csv",
)
```

Key outputs per interface include: average_interface_pae, interface_average_plddt, interface_contact_pairs, interface_area, interface_hb, interface_sb, interface_sc, interface_solv_en, interface_ipSAE, interface_LIS, interface_pDockQ2, and per-run pDockQ/mpDockQ.
AlphaJudge expects standard AlphaFold run outputs.
- AF2: directory with `ranking_debug.json`, `pae_<model>.json`, and model structure files (`model.cif` or `*.pdb`/`*.cif`)
- AF3: directory with `ranking_scores.csv`, per-model `summary_confidences.json` and `confidences.json` (or top-level `ranked_0_summary_confidences.json`), and structure files
The tool searches for `model.cif` inside each model subdirectory first; otherwise it tries to match `*<model>*.cif` or `*<model>*.pdb` at the run root.
AlphaJudge writes interfaces.csv with one row per interface (and includes the selected model). Core fields include:
- jobs: run directory name
- model_used: selected model identifier
- interface: chain-pair label (e.g., `A_B`)
- iptm_ptm, iptm, ptm, confidence_score: unified AF confidences
- pDockQ/mpDockQ: global dockQ-like score (mpDockQ if multimer; pDockQ if dimer)
- average_interface_pae, interface_average_plddt, interface_num_intf_residues
- interface_contact_pairs, interface_score, interface_pDockQ2, interface_ipSAE, interface_LIS
- interface_hb, interface_sb, interface_sc, interface_area, interface_solv_en
The exact CSV header is asserted in tests to be consistent across AF2 and AF3 runs.
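For orientation, the pDockQ column follows a published sigmoid fit of interface pLDDT and contact count. A sketch using the parameters from Bryant et al. (2022, FoldDock); AlphaJudge's actual implementation may differ in detail (e.g., how contacts are counted):

```python
# Sketch of the published pDockQ sigmoid:
#   pDockQ = L / (1 + exp(-k * (x - x0))) + b
#   with x = <mean interface pLDDT> * log(number of interface contacts)
# Parameters L=0.724, x0=152.611, k=0.052, b=0.018 are from Bryant et al. 2022.
import math

def pdockq(mean_interface_plddt, n_contacts,
           L=0.724, x0=152.611, k=0.052, b=0.018):
    if n_contacts == 0:
        return 0.0  # no interface contacts, no score
    x = mean_interface_plddt * math.log(n_contacts)
    return L / (1.0 + math.exp(-k * (x - x0))) + b

print(round(pdockq(85.0, 120), 3))  # confident, well-packed interface → 0.742
```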
```bash
pytest -q
```

Tests exercise both the AF2 and AF3 parsers and validate the CSV fields against bundled fixtures in `test_data/`.
A minimal multi-stage Dockerfile is provided under docker/:
```bash
# Build image (runs tests in the build stage)
docker build -t alphajudge -f docker/Dockerfile .

# Inspect CLI inside the runtime image
docker run --rm alphajudge alphajudge --help
```

Please cite:
AlphaJudge: we will come up with a better name. (xxxx).
https://github.com/KosinskiLab/AlphaJudge
License: MIT for this repository. AlphaFold2, AlphaFold3, and other tools remain under their own licenses.