
LigandMPNN: Ligand-Aware Protein Design

Updated 10 February 2026
  • LigandMPNN is a ligand-aware extension of ProteinMPNN that uses a graph neural network to jointly model protein backbones and small-molecule ligands within a unified graph structure.
  • It employs fine-tuning strategies such as Direct Preference Optimization and ResiDPO, leveraging AlphaFold pLDDT scores to steer sequence design toward higher predicted folding confidence.
  • EnhancedMPNN, derived from LigandMPNN, demonstrates significant improvements in enzyme and binder design benchmarks by aligning generated sequences with high-confidence folding metrics.

LigandMPNN is a ligand-aware extension of the ProteinMPNN framework, designed for protein sequence design tasks in the presence of small-molecule ligands. It integrates a graph neural network (GNN) approach that models both protein backbones and associated ligands within a unified graph structure, enabling sequence generation that jointly considers geometric, chemical, and ligand–protein interaction features. LigandMPNN forms the basis for advanced preference-based fine-tuning strategies, notably Direct Preference Optimization (DPO) and Residue-level Designability Preference Optimization (ResiDPO), which shift the optimization target from native sequence recovery to explicit structural designability, using AlphaFold pLDDT scores as reward signals. These developments culminate in EnhancedMPNN, which demonstrates substantial improvements in in silico protein design benchmarks by better aligning generated sequences with high-confidence structural folding, as quantified by pLDDT-based metrics (Xue et al., 30 May 2025).

1. LigandMPNN Architecture and Training Objective

LigandMPNN implements a GNN that constructs an undirected graph $G = (V, E)$, where the node set $V$ consists of both the amino-acid residues of a target protein (using backbone Cα coordinates and side-chain scaffolding) and all atoms of associated small-molecule ligands. Residue nodes are encoded with geometric and chemical features such as relative local-frame coordinates, pairwise distances, and torsion angles, while ligand-atom nodes carry attributes such as element type, formal charge, hybridization, and aromaticity. Edges represent both intra-protein and protein–ligand distances and, for ligand–ligand edges, include bond-type information.
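A minimal sketch of such a hybrid graph container, with hypothetical field names (the exact LigandMPNN feature schema is not specified here):

```python
from dataclasses import dataclass, field

@dataclass
class HybridGraph:
    """Illustrative container mirroring the hybrid graph G = (V, E):
    residue nodes, ligand-atom nodes, and typed edges between them."""
    residue_nodes: list = field(default_factory=list)
    ligand_nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)  # (node_a, node_b, features)

    def add_residue(self, ca_xyz, torsions):
        # Residue nodes carry geometric features (Cα coordinates, torsions).
        self.residue_nodes.append({"ca": ca_xyz, "torsions": torsions})
        return ("res", len(self.residue_nodes) - 1)

    def add_ligand_atom(self, element, charge, hybridization, aromatic):
        # Ligand-atom nodes carry chemical attributes.
        self.ligand_nodes.append({"element": element, "charge": charge,
                                  "hybridization": hybridization,
                                  "aromatic": aromatic})
        return ("lig", len(self.ligand_nodes) - 1)

    def add_edge(self, a, b, distance, bond_type=None):
        # bond_type is only meaningful for ligand-ligand edges.
        self.edges.append((a, b, {"distance": distance,
                                  "bond_type": bond_type}))
```

A real implementation would tensorize these features for message passing; this sketch only shows the node/edge partition the text describes.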

A message-passing encoder aggregates local context over this hybrid graph, yielding per-node embeddings $h_i$. Decoding proceeds in a random order: all side-chain identities are masked, then residues are sequentially "unmasked," with the model updating node embeddings at each step to produce a probability distribution over the 20 canonical amino acids at each position:

$$p_\theta(y_{i_t} \mid x, y_{<t}) = \mathrm{softmax}\big(W\,h_{i_t}\big)$$

where $x$ denotes the fixed backbone and ligand context and $y_{<t}$ are the previously sampled residues.
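The random-order decoding loop can be sketched as follows; `logits_fn` is a hypothetical stand-in for the model's per-step prediction head (it receives a position and the partial sequence, with `None` marking still-masked positions):

```python
import numpy as np

def random_order_decode(logits_fn, length, rng):
    """Sketch of random-order autoregressive decoding: sample a random
    permutation, then unmask one position at a time, sampling from a
    softmax over the 20 canonical amino acids."""
    order = rng.permutation(length)          # random decoding order
    seq = [None] * length                    # all positions start masked
    for pos in order:
        logits = logits_fn(int(pos), seq)    # conditioned on partial seq
        probs = np.exp(logits - np.max(logits))
        probs /= probs.sum()                 # numerically stable softmax
        seq[int(pos)] = int(rng.choice(20, p=probs))
    return seq
```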

Original pre-training maximizes the likelihood of native sequences, or equivalently minimizes the cross-entropy

$$\mathcal{L}_{\mathrm{CE}}(\theta) = -\sum_{(x,\,y^{\mathrm{native}})\in \mathcal{D}_{\mathrm{train}}} \sum_{i=1}^{L} \log p_\theta\!\left(y^{\mathrm{native}}_i \mid x,\, y^{\mathrm{native}}_{<i}\right).$$

This yields high sequence recovery rates (60–65%) but does not explicitly guarantee that generated sequences fold as desired (Xue et al., 30 May 2025).
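Given per-position log-probabilities, the cross-entropy objective above reduces to a sum over native residues; a minimal sketch (hypothetical `L x 20` list-of-lists layout):

```python
import math

def sequence_cross_entropy(log_probs, native_seq):
    """Negative log-likelihood of the native sequence.
    log_probs: L x 20 per-position log-probabilities (already conditioned
    on the backbone/ligand context and preceding residues).
    native_seq: list of amino-acid indices in [0, 20)."""
    return -sum(log_probs[i][aa] for i, aa in enumerate(native_seq))
```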

2. Direct Preference Optimization (DPO) Using AlphaFold pLDDT

To optimize for designability, i.e., the likelihood that a sequence folds to a target structure, LigandMPNN is fine-tuned via Direct Preference Optimization. DPO requires a dataset $\mathcal{D}$ of preference triples $(x, y_w, y_l)$ for a structure–ligand context $x$, where $y_w$ is a more designable sequence than $y_l$, as determined by per-sequence pLDDT scores from AlphaFold2.

Given a reference policy $\pi_{\mathrm{ref}}$ (the pre-trained LigandMPNN) and a policy $\pi_\theta$ to be fine-tuned, the DPO loss is

$$\mathcal{L}_{\mathrm{DPO}}(\theta;\,\pi_{\mathrm{ref}}) = -\mathbb{E}_{(x,\,y_w,\,y_l)\sim \mathcal{D}} \left[\log \sigma\!\left(\beta \log\frac{\pi_\theta(y_w\mid x)}{\pi_{\mathrm{ref}}(y_w\mid x)} - \beta \log\frac{\pi_\theta(y_l\mid x)}{\pi_{\mathrm{ref}}(y_l\mid x)}\right)\right]$$

with $\sigma$ the logistic sigmoid and $\beta$ a scaling parameter. Preference pairs are drawn from candidate sequences generated by $\pi_{\mathrm{ref}}$ for each $x$; a pair is kept only if the two pLDDT scores differ by at least a threshold $\delta = 10$.

This procedure explicitly drives πθ\pi_\theta to favor sequences with higher AlphaFold folding confidence, penalizing large deviations from the reference policy, and empirically increases design success over the cross-entropy baseline (Xue et al., 30 May 2025).
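The per-pair loss and the pLDDT-gap pair selection can be sketched as follows (the value of `beta` here is illustrative, not the paper's setting):

```python
import math

def dpo_pair_loss(logp_w_theta, logp_w_ref, logp_l_theta, logp_l_ref,
                  beta=0.1):
    """DPO loss for a single preference pair, from sequence-level
    log-probabilities under pi_theta and pi_ref."""
    margin = beta * ((logp_w_theta - logp_w_ref)
                     - (logp_l_theta - logp_l_ref))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigmoid

def select_preference_pairs(candidates, delta=10.0):
    """Form (winner, loser) pairs from (sequence, pLDDT) candidates whose
    pLDDT gap exceeds delta, as in the thresholded pair selection above."""
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    return [(sw, sl)
            for i, (sw, pw) in enumerate(ranked)
            for (sl, pl) in ranked[i + 1:]
            if pw - pl > delta]
```

At a margin of zero the loss equals $\log 2$, and it decreases as $\pi_\theta$ shifts probability mass toward the winner relative to $\pi_{\mathrm{ref}}$.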

3. Residue-level Designability Preference Optimization (ResiDPO)

Sequence-level objectives can induce conflicting gradients and excessive regularization. ResiDPO extends DPO to the residue level by utilizing AlphaFold's per-residue pLDDT scores. For each preference pair $(x, y_w, y_l)$, define:

  • Rewarded positions $\mathcal{I} = \{\,i \mid \mathrm{pLDDT}(y_w, i) - \mathrm{pLDDT}(y_l, i) > \alpha\,\}$ (with $\alpha = 10$), targeting residues where $y_w$ is locally more designable.
  • Preservation positions $\mathcal{J} = \{\,j \mid \mathrm{pLDDT}(y_w, j) > \beta,\; \pi_{\mathrm{ref}}(y_w^j \mid x) > \gamma\,\}$ (with $\beta = 80$, $\gamma = 0.5$), capturing already-reliable residues.
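Both index sets are simple per-position threshold tests; a minimal sketch with a hypothetical list-based interface (pLDDT on the 0–100 scale):

```python
def rewarded_positions(plddt_w, plddt_l, alpha=10.0):
    """Set I: positions where the winner is locally more designable
    than the loser by more than alpha pLDDT points."""
    return [i for i, (pw, pl) in enumerate(zip(plddt_w, plddt_l))
            if pw - pl > alpha]

def preservation_positions(plddt_w, ref_probs_w, beta=80.0, gamma=0.5):
    """Set J: positions already reliable, i.e. high winner pLDDT and
    high reference-policy probability for the winner residue."""
    return [j for j, (p, q) in enumerate(zip(plddt_w, ref_probs_w))
            if p > beta and q > gamma]
```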

The total ResiDPO loss is

$$\mathcal{L}_{\mathrm{ResiDPO}} = \mathcal{L}_{\mathrm{RPL}} + \lambda\,\mathcal{L}_{\mathrm{RCL}}$$

where Residue-level Preference Learning (RPL) and Residue-level Constraint Learning (RCL) are defined as

$$\mathcal{L}_{\mathrm{RPL}} = -\mathbb{E}_{(x, y_w, y_l)\sim\mathcal{D}} \left[ \log \sigma\!\left( \frac{1}{|\mathcal{I}|} \sum_{i\in \mathcal{I}} \left[\log\pi_\theta(y_w^i\mid x) - \log\pi_\theta(y_l^i\mid x)\right] \right)\right]$$

$$\mathcal{L}_{\mathrm{RCL}} = \mathbb{E}_{(x, y_w, y_l)\sim\mathcal{D}} \left[ \frac{1}{|\mathcal{J}|} \sum_{j\in\mathcal{J}} \pi_{\mathrm{ref}}(y_w^j\mid x)\,\log \frac{\pi_{\mathrm{ref}}(y_w^j\mid x)}{\pi_\theta(y_w^j\mid x)} \right]$$

with $\lambda = 0.01$ balancing the two terms. If $\mathcal{I}$ is empty, the objective reverts to the standard sequence-level preference loss, ensuring coverage. This approach achieves stable, fine-grained optimization that reinforces improvements while protecting well-predicted regions (Xue et al., 30 May 2025).
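A single-pair sketch of the combined objective, operating on per-residue log-probabilities (empirical expectations over $\mathcal{D}$ are omitted, and the empty-$\mathcal{I}$ fallback is simplified to averaging over all positions):

```python
import math

def resi_dpo_loss(logp_theta_w, logp_theta_l, logp_ref_w, I, J, lam=0.01):
    """One-pair ResiDPO objective: RPL over rewarded positions I plus a
    lambda-weighted pointwise-KL constraint (RCL) over positions J.
    All inputs are lists of per-residue log-probabilities."""
    if not I:
        I = list(range(len(logp_theta_w)))   # sequence-level fallback
    # RPL: -log sigmoid of the mean per-residue winner/loser log-ratio
    margin = sum(logp_theta_w[i] - logp_theta_l[i] for i in I) / len(I)
    rpl = -math.log(1.0 / (1.0 + math.exp(-margin)))
    # RCL: pi_ref-weighted log-ratio keeping pi_theta close to pi_ref
    rcl = 0.0
    if J:
        rcl = sum(math.exp(logp_ref_w[j])
                  * (logp_ref_w[j] - logp_theta_w[j])
                  for j in J) / len(J)
    return rpl + lam * rcl
```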

4. Fine-tuning Protocol and EnhancedMPNN

The authors constructed PDB-D, a dataset of approximately 19,000 monomeric protein backbones (X-ray structures at ≤3.5 Å resolution, <1,000 amino acids), each paired with eight LigandMPNN-generated sequences and per-residue pLDDT annotations from AlphaFold. Structures deposited after 30 September 2021 were held out for validation. Relative Sampling keeps pairs whose pLDDT difference exceeds $\delta = 10$, yielding 9,557 training pairs.

Fine-tuning uses Adam (initial learning rate $5\times10^{-7}$, 3% warmup, cosine decay), batch size 8 with gradient accumulation to an effective batch of 128, on two NVIDIA L40 GPUs for 100,000 iterations. EnhancedMPNN refers to LigandMPNN fine-tuned with ResiDPO under these settings, with hyperparameters $\alpha = 10$, $\beta = 80$, $\gamma = 0.5$, $\lambda = 0.01$ (Xue et al., 30 May 2025).
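The learning-rate schedule (linear warmup over the first 3% of steps, then cosine decay) can be sketched as follows; decaying to zero is an assumption, since the paper as summarized here states only "warmup" and "cosine decay":

```python
import math

def lr_at(step, total=100_000, base_lr=5e-7, warmup_frac=0.03):
    """Learning rate at a given step: linear warmup for the first
    warmup_frac of training, then cosine decay toward zero."""
    warmup = int(total * warmup_frac)
    if step < warmup:
        return base_lr * (step + 1) / warmup      # linear ramp-up
    progress = (step - warmup) / (total - warmup)  # 0 -> 1 after warmup
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))
```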

5. Benchmarking and Empirical Outcomes

EnhancedMPNN was evaluated on enzyme active-site scaffolding (five EC classes, RFDiffusion2-generated backbones) and protein binder design (five targets, RFDiffusion backbones). Results are summarized:

| Task | LigandMPNN Baseline | DPO Fine-tuned | EnhancedMPNN (ResiDPO) |
|---|---|---|---|
| Enzyme sequence success | 6.56% | ≈10% | 17.57% |
| Enzyme backbone success | 19.74% | – | 40.34% |
| Binder design success | 7.07% | 10.40% | 16.07% |

EnhancedMPNN demonstrated a nearly 3-fold improvement in enzyme sequence-level success and a 2.3-fold gain in binder design success. On the PDB-D validation set, EnhancedMPNN achieved a pLDDT-accuracy (the correlation between the $\pi_\theta$ ranking and true pLDDT) of 66.08%, up from 57.71% for LigandMPNN and 62.11% for DPO. Sequence recovery remained high at approximately 55% (Xue et al., 30 May 2025).

6. Mechanistic Insights and Designability Alignment

Preference-based fine-tuning with AlphaFold pLDDT aligns the sequence design process with folding confidence, emphasizing functionally relevant criteria rather than mere sequence recovery. Residue-level decoupling in ResiDPO allows gradient updates to focus on positions requiring improvement while preserving high-confidence regions, circumventing issues such as catastrophic forgetting. Analysis of amino acid substitution in EnhancedMPNN reveals systematic replacement of ambiguous residues (A, S, T, Q) with charged or polar residues (E, K, R), reducing sequence–structure ambiguity and increasing pLDDT. Ablations confirm that ResiDPO can be effective even with training sets as small as 500–1,000 backbones, indicating data efficiency (Xue et al., 30 May 2025).

7. Limitations and Further Directions

Although EnhancedMPNN achieves substantial in silico gains, a subset of designs exhibits high AlphaFold PAE, suggesting that direct optimization against PAE remains an open problem. All results are currently computational; experimental validation is required to confirm improvements in folding yield. The ResiDPO alignment framework is model-agnostic and can, in principle, be adapted to optimize other protein properties for which reliable per-residue or per-sequence metrics exist, such as stability or expressibility. Potential extensions include combining ResiDPO with advanced backbone generative models to enlarge the diversity and viability of designable protein structures (Xue et al., 30 May 2025).
