Scientists have found that the human brain understands spoken language in a way that is strikingly similar to advanced artificial intelligence systems.
The new study shows that the human brain comprehends spoken language through a sequence of processing steps that closely resembles how modern AI language models operate. By recording brain activity while people listened to a spoken story, researchers found that later brain signals corresponded to deeper layers of the AI models, particularly in key language regions such as Broca’s area.
The results challenge older theories based on rigid language rules and are supported by a new public dataset that provides an important tool for studying how the brain constructs meaning.
The research, published in Nature Communications, was conducted by scientists from the Hebrew University of Jerusalem, Princeton University in the United States, and Google Research. The team revealed an unexpected connection between how humans interpret spoken language and how modern artificial intelligence models process text.
The study
Using electrocorticography recordings from individuals listening to a 30-minute podcast, the researchers tracked brain responses with high precision. Their analysis showed that language processing in the brain unfolds through a structured sequence that closely matches the multilayered architecture of large language models such as GPT-2 and Llama 2.
When someone listens to speech, the brain does not grasp meaning instantaneously. Instead, each word passes through a series of neural stages. The researchers found that these stages develop over time in a way that strongly resembles the operation of AI models. Early AI layers focus on basic word features, while deeper layers integrate context, tone, and overall meaning.
The same pattern was observed in the brain. Early brain responses aligned with the initial processing stages of AI, while later responses matched deeper model layers. This temporal correspondence was especially pronounced in advanced language areas such as Broca’s area, where peak brain activity occurred later and aligned with deeper layers of the models.
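A common way to test this kind of layer-to-time correspondence is a lagged encoding analysis: for each model layer, fit a linear map from that layer's word embeddings to the neural signal at a range of time lags after word onset, and note the lag at which held-out prediction peaks. The sketch below illustrates the idea on synthetic data only; the layer embeddings, lag grid, and ridge regression are stand-ins for this explanation, not the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, n_dims, n_layers = 400, 16, 6
lags_ms = np.arange(0, 601, 100)  # candidate lags after word onset (ms)

# Synthetic stand-ins: per-layer embeddings for each heard word, plus a
# "neural" signal built so that deeper layers drive later lags.
layer_emb = [rng.standard_normal((n_words, n_dims)) for _ in range(n_layers)]
neural = np.zeros((n_words, len(lags_ms)))
for layer, emb in enumerate(layer_emb):
    neural[:, layer] += emb @ rng.standard_normal(n_dims)  # layer -> lag index
neural += 0.5 * rng.standard_normal(neural.shape)

def peak_lag(emb, neural, lags_ms):
    """Return the lag (ms) at which a ridge fit of emb -> signal predicts best."""
    scores = []
    for j in range(len(lags_ms)):
        y = neural[:, j]
        half = len(y) // 2  # train on first half, test on second half
        Xtr, Xte, ytr, yte = emb[:half], emb[half:], y[:half], y[half:]
        # closed-form ridge regression
        w = np.linalg.solve(Xtr.T @ Xtr + np.eye(emb.shape[1]), Xtr.T @ ytr)
        scores.append(np.corrcoef(Xte @ w, yte)[0, 1])
    return lags_ms[int(np.argmax(scores))]

peaks = [peak_lag(emb, neural, lags_ms) for emb in layer_emb]
print(peaks)  # deeper layers should peak at later lags
```

In this toy setup the peak lag increases with layer depth, which is the signature the researchers report: early layers align with early brain responses, deeper layers with later ones.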
According to Dr. Goldstein, “What surprised us most was how closely the temporal evolution of meaning construction in the brain matches the sequence of transformations within large language models. Although these systems are built very differently, both seem to arrive at a similar step-by-step process of understanding.”
Why the findings matter
The results suggest that artificial intelligence is not merely a tool for generating text; it can also help scientists better understand how the human brain processes meaning. For many years, language comprehension was thought to rely on fixed symbols and strict grammatical rules. This study challenges that view, supporting a more flexible, data-driven approach in which meaning emerges gradually through context.
The researchers also examined traditional linguistic units such as phonemes and morphemes. These features did not explain real-time brain activity as well as the contextual representations produced by AI models, reinforcing the idea that the brain relies more on broader context than on rigidly defined language units.
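A minimal way to run that kind of comparison is to fit the same cross-validated linear model with two different predictor matrices, e.g. one-hot symbolic categories versus context-sensitive embeddings, and compare held-out correlation with the neural signal. The sketch below uses synthetic data; the feature sets and regression choices are illustrative assumptions, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(1)
n_words = 600

# Synthetic setup: the "neural" signal follows a context-dependent embedding,
# while symbolic features (category one-hots) only capture coarse word classes.
classes = rng.integers(0, 8, size=n_words)
symbolic = np.eye(8)[classes]                             # one-hot features
contextual = symbolic @ rng.standard_normal((8, 12))      # class means...
contextual += rng.standard_normal((n_words, 12))          # ...plus per-token context
neural = contextual @ rng.standard_normal(12) + 0.3 * rng.standard_normal(n_words)

def heldout_r(X, y, alpha=1.0):
    """Train ridge regression on the first half, return Pearson r on the second."""
    half = len(y) // 2
    Xtr, Xte, ytr, yte = X[:half], X[half:], y[:half], y[half:]
    w = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(X.shape[1]), Xtr.T @ ytr)
    return np.corrcoef(Xte @ w, yte)[0, 1]

r_symbolic = heldout_r(symbolic, neural)
r_contextual = heldout_r(contextual, neural)
print(f"symbolic r={r_symbolic:.2f}, contextual r={r_contextual:.2f}")
```

When the signal genuinely depends on context, as in this toy example, the contextual features predict held-out activity better than the symbolic ones, which mirrors the pattern the researchers describe for phonemes and morphemes versus model embeddings.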
To further advance research in this area, the team released the entire set of neural recordings along with the corresponding linguistic features to the public. By making these data openly available, scientists worldwide can compare different theories of language understanding and develop computational models that more faithfully reflect how the human brain works.