Neurons in the brain are spatially organized such that neighbors on tissue often exhibit similar response profiles. In the human language system, experimental studies have observed clusters for syntactic and semantic categories, but the mechanisms underlying this functional organization remain unclear. Here, building on work from the vision literature, we develop TopoLM, a transformer language model with an explicit two-dimensional spatial representation of model units. By combining a next-token prediction objective with a spatial smoothness loss, representations in this model assemble into clusters that correspond to semantically interpretable groupings of text and closely match the functional organization in the brain's language system. TopoLM successfully predicts the emergence of the spatio-functional organization of a cortical language system as well as the organization of functional clusters selective for fine-grained linguistic features empirically observed in the human cortex. Our results suggest that the functional organization of the human language system is driven by a unified spatial objective, and provide a functionally and spatially aligned model of language processing in the brain.
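As a rough illustration of this combined objective, here is a minimal sketch assuming a TDANN-style spatial correlation loss, in which pairwise response correlations between units are pushed toward inverse distance on the 2D sheet. The function spatial_smoothness_loss, the coordinate tensor unit_coords, and the weight alpha are illustrative names, not the paper's exact implementation.

import torch
import torch.nn.functional as F

def spatial_smoothness_loss(acts, coords, eps=1.0):
    # acts:   (batch, units) activations of one transformer layer
    # coords: (units, 2) fixed 2D positions assigned to the units
    # Encourages pairwise response correlations to fall off with
    # distance on the simulated cortical sheet.
    z = (acts - acts.mean(0)) / (acts.std(0) + 1e-8)
    r = (z.T @ z) / acts.shape[0]            # (units, units) response correlations
    d = torch.cdist(coords, coords)          # (units, units) pairwise distances
    inv_d = 1.0 / (d + eps)                  # inverse-distance target profile
    iu = torch.triu_indices(r.shape[0], r.shape[0], offset=1)
    x, y = r[iu[0], iu[1]], inv_d[iu[0], iu[1]]
    xm, ym = x - x.mean(), y - y.mean()
    pearson = (xm * ym).sum() / (xm.norm() * ym.norm() + 1e-8)
    return 1.0 - pearson                     # low when correlation tracks proximity

# Combined training objective: next-token prediction plus weighted spatial term.
# loss = F.cross_entropy(logits.view(-1, vocab_size), targets.view(-1)) \
#        + alpha * spatial_smoothness_loss(layer_acts, unit_coords)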
Using three independent neuroimaging datasets (Fedorenko 2024, Hauptman 2024, Moseley 2014), we show that TopoLM predicts key linguistic clusters in the human cortex.
Finding I: TopoLM exhibits a core language system similar to the human brain's core language system (Fedorenko 2010, 2024).
Finding II: TopoLM predicts clusters selective for linguistic categories such as verbs and nouns (Hauptman 2024) or concrete verbs and concrete nouns (Moseley 2014).
Finding III: The spatial loss comes at virtually no cost to downstream performance or brain alignment.
@inproceedings{rathi_topolm_2025,
  title     = {{TopoLM}: brain-like spatio-functional organization in a topographic language model},
  author    = {Rathi, Neil and Mehrer, Johannes and AlKhamissi, Badr and Binhuraib, Taha and Blauch, Nicholas M. and Schrimpf, Martin},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {http://topolm.epfl.ch/},
  doi       = {10.48550/arXiv.2410.11516},
  language  = {en}
}
This website is adapted from The LLM Language Network, LLaVA-VL, Nerfies, and VL-RewardBench, licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.