The current fervor surrounding large language models often feels like a conversation without a history. As philosophers and technologists debate the "intelligence" of systems like ChatGPT and its successors, they frequently overlook a body of work that anticipated these very questions decades ago. Paul Churchland, a figure synonymous with the neurophilosophical turn of the late 20th century, remains a vital yet under-cited resource for understanding the architecture of modern artificial intelligence — and the conceptual puzzles it generates.
Beginning in the mid-1980s, Churchland pivoted his focus toward connectionism, also known as parallel distributed processing. While many of his peers remained tethered to symbolic logic or traditional linguistic analysis, Churchland recognized that artificial neural networks were not just engineering curiosities but profound philosophical tools. He sought to use these models to reframe the nature of human knowledge, suggesting that cognition might function more like a weighted network than a library of rigid rules. That claim, once marginal, now reads less like speculation and more like a description of the systems running on data center hardware worldwide.
From Eliminativism to Engineering
Churchland's broader philosophical project — eliminative materialism — argued that the folk-psychological categories people use to describe mental life (beliefs, desires, intentions) would eventually be replaced by the vocabulary of neuroscience. The thesis was controversial in philosophy of mind circles and remains so. But the rise of deep learning has given it an unexpected second act. Modern neural networks do not operate on anything resembling propositional beliefs. They learn statistical regularities across high-dimensional vector spaces, adjusting millions or billions of parameters without recourse to the symbolic structures that classical cognitive science once treated as foundational.
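To make that contrast concrete, here is a minimal sketch in Python with NumPy (an illustration, not drawn from Churchland or from any production system): a tiny network learns XOR purely by nudging numeric weights. Nothing in the program states the rule "output 1 when the inputs differ"; whatever the network comes to "know" is distributed across its parameter values.

```python
import numpy as np

rng = np.random.default_rng(0)

# The four XOR cases: the target is 1 exactly when the inputs differ.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights start as noise; everything the network ends up "knowing"
# will live in how these numbers get adjusted.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)    # hidden activation pattern
    out = sigmoid(h @ W2 + b2)  # network's current guess
    # Backpropagate squared error and nudge every parameter slightly.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # converges toward [[0.], [1.], [1.], [0.]]
```

No line of this program could be pointed to as the network's "belief" about XOR; inspect the trained weights and you find only numbers whose joint configuration happens to produce the right behavior.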
This is precisely the territory Churchland mapped. In works such as A Neurocomputational Perspective (1989) and The Engine of Reason, the Seat of the Soul (1995), he explored how trained networks develop internal representations — activation patterns across layers of artificial neurons — that capture the structure of their input domains without being reducible to discrete rules. The parallel to contemporary transformer architectures is not exact, but it is instructive. Today's large language models generate coherent text not by consulting a grammar book but by navigating learned representational geometries. Churchland's framework offers a philosophical vocabulary for describing what that process is and what it is not.
This intellectual project was part of a broader naturalistic approach to philosophy, often shared with his colleague and wife, Patricia Churchland. While Patricia's work gravitated toward the granular realities of neurobiology, Paul focused on the representational power of the networks themselves. His work offers a bridge between the biological brain and the silicon-based models that now dominate the technological landscape, providing a framework for the ontological and epistemological questions that current-generation AI has forced back into the spotlight.
The Vocabulary Gap
Much of the public discourse around AI oscillates between two poles: anthropomorphism (the model "understands," "hallucinates," "reasons") and dismissal (it is "just" autocomplete). Neither pole is analytically satisfying. Churchland's neurophilosophy suggests a third path — one that takes the representational capacities of trained networks seriously without collapsing them into human cognitive categories. His concept of state-space semantics, in which meaning is located in the geometric relationships among activation vectors rather than in correspondence to external symbols, provides a more precise way to discuss what happens inside these systems.
The gap matters beyond academic philosophy. Regulatory frameworks, safety research, and public trust all depend on how societies conceptualize what AI systems do. If the available vocabulary is limited to "it thinks like us" or "it's a stochastic parrot," policy will be built on caricature rather than analysis. Churchland's work does not resolve the question of machine intelligence, but it sharpens the terms in which the question can be posed.
By revisiting Churchland, the discourse gains access to a sophisticated set of tools for discussing how machines represent, generalize, and fail. In an era where AI is often treated as either a black box or a digital ghost, his neurophilosophical lens offers a grounded, materialist alternative. Whether that alternative ultimately proves sufficient — whether connectionist philosophy of mind can scale to meet the complexity of systems its author never envisioned — is itself a question worth holding open. The tension between Churchland's framework and the sheer scale of modern AI may be exactly where the next productive philosophical work lies.
With reporting from Blog of the APA.