Research Publications

Formal research publications and technical reports on biological AI interpretability

Towards Interpretable Protein Structure Prediction with Sparse Autoencoders
February 2025 · ICLR GEM Bio Workshop 2025 · arXiv

Protein language models have revolutionized structure prediction, but their nonlinear nature obscures how sequence representations inform structure prediction. While sparse autoencoders (SAEs) offer a path to interpretability here by learning linear representations in high-dimensional space, their application has been limited to smaller protein language models unable to perform structure prediction. In this work, we make two key advances: (1) we scale SAEs to ESM2-3B, the base model for ESMFold, enabling mechanistic interpretability of protein structure prediction for the first time, and (2) we adapt Matryoshka SAEs for protein language models, which learn hierarchically organized features by forcing nested groups of latents to reconstruct inputs independently. We demonstrate that our Matryoshka SAEs achieve comparable or better performance than standard architectures. Through comprehensive evaluations, we show that SAEs trained on ESM2-3B significantly outperform those trained on smaller models for both biological concept discovery and contact map prediction. Finally, we present an initial case study demonstrating how our approach enables targeted steering of ESMFold predictions, increasing structure solvent accessibility while fixing the input sequence. Upon publication, we plan to release our code, trained models, and visualization tools to facilitate further investigation by the research community.
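
The Matryoshka objective described in the abstract, in which nested groups of latents each reconstruct the input independently, can be made concrete with a short sketch. The following is a minimal illustration, assuming PyTorch; the class name MatryoshkaSAE, the group sizes, and the L1 sparsity penalty are illustrative assumptions rather than the paper's exact architecture or hyperparameters (d_model=2560 matches ESM2-3B's hidden dimension).

import torch
import torch.nn as nn
import torch.nn.functional as F

class MatryoshkaSAE(nn.Module):
    """Sketch of a Matryoshka SAE: nested latent prefixes reconstruct independently."""

    def __init__(self, d_model=2560, n_latents=16384,
                 group_sizes=(1024, 4096, 16384)):
        super().__init__()
        self.group_sizes = group_sizes  # nested prefixes of the latent space
        self.encoder = nn.Linear(d_model, n_latents)
        self.decoder = nn.Linear(n_latents, d_model)

    def forward(self, x):
        z = F.relu(self.encoder(x))  # sparse, nonnegative latent activations
        # Each nested prefix of latents must reconstruct x on its own,
        # so coarse, broadly useful features concentrate in the earliest groups.
        recons = []
        for k in self.group_sizes:
            z_k = torch.zeros_like(z)
            z_k[..., :k] = z[..., :k]  # keep only the first k latents
            recons.append(self.decoder(z_k))
        return z, recons

def matryoshka_loss(x, z, recons, l1_coeff=1e-3):
    # Average reconstruction error over nested groups, plus an L1 sparsity term.
    recon_loss = sum(F.mse_loss(r, x) for r in recons) / len(recons)
    return recon_loss + l1_coeff * z.abs().mean()

Requiring every prefix to reconstruct on its own is what induces the hierarchical organization the abstract describes: early latents must capture general structure, while later groups refine the residual.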

Interpretability · Protein Structure · Sparse Autoencoders · ESMFold
Authors: Nithin Parsan, David J. Yang, John J. Yang (corresponding author; contact@reticular.ai)