The Emergence of Physicalism (eop)
How a Metaphysics Became Invisible
Project: Return to Consciousness
Author: Bruno Tonetto
Authorship Note: Co-authored with AI as a disciplined thinking instrument—not a replacement for judgment. Prioritizes epistemic integrity and truth-seeking as a moral responsibility.
Finalized: February 2026
11 pages · ~20 min read · PDF
Abstract
Physicalism—the view that reality is fundamentally physical and that consciousness is derivative or reducible—now functions less as a defended thesis than as the background assumption of serious intellectual discourse. This essay asks: how did this particular metaphysical framework come to feel like common sense rather than doctrine? The answer is not primarily philosophical. Physicalism’s dominance emerged from a convergence of methodological success, political and religious pressures, industrial transformation, institutional incentives, and cultural shifts—until its status as a metaphysical position faded from view. Understanding this history does not refute physicalism, but it does reveal its contingency. What feels inevitable is shown to be historically situated. This clarity is a precondition for responsible inquiry.
Keywords: history of physicalism · scientific revolution · methodological restriction · Galileo · Descartes · Newton · philosophical genealogy · institutional consolidation
Introduction: The Question Behind the Question
Contemporary intellectual culture operates within a pervasive but largely invisible framework: the assumption that reality is fundamentally physical, that consciousness emerges from or reduces to material processes, and that explanations invoking mind as fundamental are pre-scientific residue awaiting elimination.
This framework—physicalism—rarely presents itself as a framework. It appears instead as the absence of metaphysics, as what remains when speculation is cleared away, as the sober recognition of what science has shown. To question it is to appear confused about the difference between evidence and wishful thinking.
The companion essay Myth of Metaphysical Neutrality argues that this appearance is mistaken—that physicalism is not the absence of metaphysics but a specific metaphysical commitment operating without examination. The present essay asks a different but complementary question:
How did physicalism acquire this status? How did one metaphysical position among many come to feel like the natural default—indeed, like no position at all?
This is a historical question, not a philosophical one. The essay traces how physicalism became dominant—not through philosophical proof, but through a convergence of factors that made it increasingly difficult to see as a position requiring defense.
The goal is genealogical clarity. Understanding how assumptions arise is the first step toward examining them responsibly.
I. Early Modern Science: The Strategic Restriction
Galileo’s Pragmatic Move
The scientific revolution is often narrated as the triumph of reason over superstition, of evidence over dogma. This narrative contains truth, but it obscures a crucial subtlety: the founders of modern science were not physicalists. They adopted quantitative methods as a strategic restriction, not an ontological commitment.
Galileo’s distinction between primary and secondary qualities illustrates this clearly. Primary qualities—size, shape, position, motion—were amenable to mathematical treatment. Secondary qualities—color, taste, smell, warmth—were not. Galileo proposed studying the former while bracketing the latter.
This was a methodological move, not a metaphysical claim. Galileo did not argue that secondary qualities were unreal or that primary qualities exhausted reality. He argued that primary qualities were more tractable for the kind of inquiry he was developing. The restriction was pragmatic, not ontological.
Descartes and the Divided World
Descartes formalized this division into a metaphysical dualism: res extensa (extended substance, the physical) and res cogitans (thinking substance, the mental). Whatever the problems with Cartesian dualism, it explicitly preserved mind as a fundamental category. Descartes was not a physicalist. He carved out a domain for mathematical physics by separating it from mind, not by reducing mind to physics.
The irony is that Descartes’ division, intended to protect both domains, eventually enabled the elimination of one. Once the physical world was defined as that which could be treated mathematically, and once mathematical treatment proved spectacularly successful, the other half of the division came to seem superfluous—a remainder awaiting absorption.
Newton’s Silence
Newton, too, was not a physicalist. His private writings reveal extensive engagement with alchemy, theology, and questions about the relationship between God and nature. But Newton’s public scientific work was carefully restricted to mathematical description of observable regularities. He famously declared “hypotheses non fingo”—I frame no hypotheses—regarding the ultimate nature of gravity.
This restraint was partly temperamental, but it was also strategic. The condemnation of Galileo in 1633 had demonstrated the dangers of making claims that trespassed on theological territory. Scientists learned to say: We study only measurable patterns. We make no claims about souls, divine action, or ultimate reality.
This defensive posture was enormously productive. By limiting its scope, science avoided ecclesiastical conflict and achieved unprecedented predictive success. But the restriction that enabled this success was methodological, not ontological. The founders of modern science were not claiming that only the measurable exists. They were claiming only that the measurable was what their methods could address.
II. From Method to Metaphysics: The Gradual Conflation
The Success That Obscured Its Own Limits
The methodological restriction worked brilliantly. Mathematical physics achieved predictive accuracy and technological power that no previous framework had approached. The movements of planets, the behavior of projectiles, the properties of gases—all yielded to quantitative treatment.
This success created a cognitive pull. If mathematical description worked so well for so much, perhaps it worked for everything. If the measurable yielded such reliable knowledge, perhaps the measurable was all there was. The very success of the method began to suggest an ontology.
This is the pivotal conflation: methodological success was mistaken for ontological completeness. The statement “we study only measurable aspects of reality” gradually transformed into “only measurable aspects are real.”
The Mechanisms of Drift
Several factors enabled this transformation:
Definitional creep. Terms began to shift meaning. “Natural” became synonymous with “physical.” “Scientific” became synonymous with “quantitative.” “Real” became synonymous with “measurable.” These equivalences were not argued for; they accumulated through usage until they felt obvious.
Success misattribution. The achievements of physics were attributed to physicalist assumptions rather than to the methods themselves. But a scientist operating under idealist metaphysics—where consciousness is fundamental and physical phenomena are patterns within experience—could employ exactly the same methods and reach identical quantitative conclusions. The empirical content transfers; the metaphysics was never doing the work.
The eliminativist slide. Phenomena that resisted easy quantification—consciousness, meaning, purpose, value—began to be treated not merely as outside current methods, but as somehow less real. What started as “we don’t yet know how to study this quantitatively” became “this is probably not a genuine feature of reality.”
Institutional selection. Academic positions, funding, and prestige flowed toward work that fit the quantitative paradigm. Researchers learned what kinds of questions were rewarded and what kinds were career-limiting. Over generations, this selection pressure shaped not just research programs but intuitions about what serious inquiry looked like.
The Genuine Attractions
A fair genealogy must acknowledge that physicalism’s persistence was not merely inertial. Beyond sociological mechanisms, there were genuine intellectual attractions—though each was more contingent than it appeared at the time.
Causal closure. If physical effects have sufficient physical causes, there seems no work left for non-physical entities. The physical world appeared self-contained. (This was compelling in the classical era; quantum mechanics later preserved formal closure while leaving outcome selection unexplained.)
Conservation laws. Energy and momentum are conserved. If minds could influence brains, where would the energy come from? (This assumes minds must push matter—an assumption alternative frameworks do not share.)
Reductive successes. Chemistry yielded to atomic structure; biology to molecular interactions. Mind would presumably follow. (What this established was decomposability—how parts interact—not explanatory sufficiency—how systems achieve outcomes. That distinction was invisible when physicalism consolidated.)
Ontological unification. One kind of stuff, one set of laws. Alternatives threatened irreducible pluralism. (This assumed experience could eventually be subsumed. Before the hard problem was articulated, that seemed reasonable.)
These attractions were real. They explain why physicalism won adherents. But being attractive is not the same as being neutral. A framework can be elegant, successful within its domain, and intellectually satisfying while remaining one option among others. The attractions made physicalism powerful; the mechanisms of drift made it invisible.
The Forgetting
By the 19th century, the transformation was largely complete. Laplace could imagine a demon that, knowing the position and momentum of every particle, could predict the entire future of the universe. This was not presented as a metaphysical speculation but as the logical implication of what physics had revealed.
The crucial point is that somewhere in this trajectory, the memory of the original restriction was lost. What had begun as a strategic methodological move—“let us study the quantifiable”—had become an ontological claim—“the quantifiable is all there is.” And because the transformation was gradual, because no single argument marked the transition, the metaphysical commitment became invisible. It no longer appeared as a position. It appeared as what remained when positions were abandoned.
III. Political and Religious Pressures
The Galileo Effect
The condemnation of Galileo in 1633 was a formative trauma for the emerging scientific community. The message was clear: claims about the nature of reality that conflicted with Church teaching were dangerous. Scientists who wished to pursue their work without persecution learned to restrict their claims to descriptions of appearances and regularities, avoiding assertions about ultimate reality.
This was not cowardice but prudence. The strategic value of methodological restraint was demonstrated by its results: science flourished precisely by declining to compete with theology on theological ground.
But defensive postures can harden into worldviews. What began as “we make no claims about souls or spirits” gradually became “souls and spirits are not the kind of thing serious inquiry considers.” The absence of a topic became evidence of its illegitimacy.
Secularization and Authority Transfer
As political power shifted from religious to secular institutions over the following centuries, the tactical reasons for methodological restriction weakened. But by then, the habits had become institutionalized. More significantly, a new dynamic emerged: as religious authority declined, something needed to fill the explanatory vacuum.
Scientific naturalism—the view that nature as described by science is all there is—inherited epistemic authority by default. This was not the result of an argument demonstrating that naturalism was true. It was the result of a cultural transition in which the previous authority (religion) lost credibility, and the most successful knowledge-producing institution (science) absorbed its cultural role.
In this transfer, a subtle but consequential shift occurred. Science’s legitimate authority—its ability to produce reliable predictions and effective technologies—was extended into domains where it had not been earned. Science could tell you how bodies fall and how diseases spread. It was less clear that it could tell you whether consciousness is fundamental or whether life has meaning. But the cultural authority of science, once established, did not respect such distinctions.
The Conflation of “Natural” and “Physical”
One linguistic marker of this transition is the shifting meaning of “natural.” In earlier usage, “natural” meant something like “pertaining to the nature of things”—a broad category that could include souls, purposes, and meanings as features of how reality was constituted. Aristotle’s physics was a science of natural things, including their inherent purposes and forms.
Gradually, “natural” became synonymous with “physical” in the modern sense—pertaining to matter, energy, and their mathematically describable interactions. To call something “supernatural” was no longer to indicate that it transcended ordinary nature; it was to suggest that it didn’t exist at all, or existed only as a projection of human confusion.
This redefinition was not argued for. It accumulated through usage, reinforced by institutional practice, until it became the unmarked default. To question it was to sound confused about words everyone else understood.
IV. Industrialization and the Authority of Control
The Pragmatic Proof
The 18th and 19th centuries transformed the relationship between knowledge and power. The steam engine, the railroad, the telegraph, electric light, industrial chemistry—these were not merely useful devices. They were demonstrations that a certain way of understanding reality worked.
This pragmatic success had epistemological weight. If treating the world as a system of matter in motion enabled you to build engines that actually ran, your treatment seemed to be tracking something real. The ability to predict and control became evidence of ontological insight.
This inference is not unreasonable, but it is also not airtight. Predictive success within a domain does not establish that the framework yielding those predictions captures the fundamental nature of reality. Ptolemaic astronomy made accurate predictions for centuries. Newtonian mechanics remains predictively excellent despite being superseded by relativity. A framework can be enormously useful without being ontologically final.
But in the cultural reception of industrial success, such subtleties faded. What could be built, measured, and controlled came to define what was “real.” Domains that resisted engineering—consciousness, meaning, value—came to seem less substantial, perhaps merely subjective, perhaps eventually eliminable.
The Mechanical Imagination
Industrialization also shaped imagination—the intuitive sense of how things work. Before the industrial age, the paradigmatic examples of causation were organic: seeds growing, animals moving, people deciding. After industrialization, the paradigmatic examples were mechanical: gears turning, pistons firing, currents flowing.
This shift in background imagery made mechanism feel like the natural model for explanation as such. When people asked “how does X work?”, they increasingly expected answers in mechanical terms. The question “how does the mind work?” invited answers involving gears, circuits, or (later) computers. The very form of acceptable explanation had been shaped by industrial experience.
This was not a philosophical argument. It was a transformation in what felt intuitively satisfying—a shift in the cognitive background against which explicit arguments would be evaluated.
V. Logical Positivism and the Illusion of Elimination
The Vienna Circle’s Project
The most explicit attempt to eliminate metaphysics came from the logical positivists of the Vienna Circle in the early 20th century. Their project was ambitious: to demarcate meaningful from meaningless statements using the verification principle. A statement was meaningful only if it was either analytically true (true by definition) or empirically verifiable (testable by observation).
Metaphysical claims—about the fundamental nature of reality, the existence of God, the status of consciousness—were declared not false but meaningless. They were pseudo-statements, grammatically well-formed but semantically empty. The goal was to purify philosophy of its metaphysical confusions and reconstitute it as the logical analysis of scientific language.
The Self-Refutation
The project failed on its own terms. The verification principle itself is neither analytically true nor empirically verifiable. It is a metaphysical claim about what counts as meaningful—precisely the kind of claim it sought to exclude. This self-refutation was recognized relatively quickly, and logical positivism as an explicit program collapsed by mid-century.
The Cultural Success
But here is the crucial irony: the philosophical failure was a cultural success. The explicit doctrine was abandoned, but its effects persisted. The attitude of logical positivism—that metaphysics is meaningless speculation, that serious inquiry concerns only what can be empirically tested, that philosophy’s job is to clean up after science rather than to ask foundational questions—this attitude survived the refutation of its official justification.
Physicalism ceased to require defense because it ceased to appear as a thesis. It became the background against which theses were evaluated. To propose an alternative was not to enter a debate; it was to reveal that one had not understood what serious inquiry looked like.
The positivists had failed to eliminate metaphysics. They had succeeded in making one metaphysics invisible.
VI. Psychology and the Suppression of Subjectivity
The Rise and Fall of Introspection
Early scientific psychology took consciousness seriously as a subject of study. Wilhelm Wundt’s laboratory in Leipzig, founded in 1879, used trained introspection to investigate the structure of conscious experience. The assumption was that consciousness, being the most directly accessible phenomenon, should be amenable to systematic investigation.
This project ran into difficulties. Introspective reports proved unreliable across observers, laboratories disagreed about basic findings, and the method seemed to lack the intersubjective verifiability that defined successful science. By the early 20th century, introspectionism was in crisis.
The Behaviorist Overcorrection
Behaviorism emerged as a reaction—and an overcorrection. John B. Watson’s 1913 manifesto declared that psychology should abandon consciousness entirely and study only observable behavior. The mind was a “black box” whose internal states were either inaccessible or irrelevant; what mattered was the input-output relationship between stimuli and responses.
This was not merely a methodological restriction (though it was presented as such). It carried implicit ontological weight. If scientific psychology could proceed without reference to consciousness, perhaps consciousness was not a genuine phenomenon requiring explanation. Perhaps it was an epiphenomenon, a folk concept, a confusion to be dissolved rather than a reality to be understood.
Behaviorism’s influence waxed and waned over the following decades, but its attitude toward consciousness persisted. Even as cognitive psychology replaced behaviorism, consciousness remained peripheral—an embarrassment, a “hard problem” to be deferred, or an illusion to be explained away. The discipline that should have been centrally concerned with experience had learned to treat experience as suspect.
The Neuroscientific Displacement
Contemporary neuroscience has reintroduced consciousness as a topic of investigation, but often within a framework that presupposes the answer. Consciousness is assumed to be “produced by” or “identical with” neural processes; the research program is to discover the neural correlates of consciousness and, eventually, to explain how brain activity generates experience.
This framing builds physicalism into the research question. It forecloses the possibility that consciousness might be fundamental and brain activity might be correlated with rather than productive of experience. The correlation between brain states and mental states is an empirical finding; the production claim is a metaphysical addition that goes beyond the evidence.
That this addition goes unnoticed is a measure of how thoroughly physicalism has become the default. The production model feels like mere common sense—not a hypothesis requiring evidence, but the obvious interpretation of evidence whose meaning is self-evident.
VII. Computation, Intelligence, and the Cold War Mind
The Birth of the Computational Metaphor
The mid-20th century introduced a new framework for understanding mind: computation. Alan Turing’s theoretical work on computability, combined with the development of actual computers during and after World War II, suggested a powerful analogy. Perhaps the mind was a kind of computer—a system that manipulated symbols according to rules, transforming inputs into outputs through formal operations.
This metaphor proved enormously productive. Cognitive science emerged as the interdisciplinary study of mind-as-computation, and artificial intelligence emerged as the project of building minds in machines. The successes were real: computer models illuminated aspects of perception, memory, language, and reasoning that had resisted previous approaches.
The Institutional Context
But the computational framework did not emerge in a vacuum. It arose within a specific institutional context: Cold War military funding, RAND Corporation strategic analysis, and government interest in automation, code-breaking, and decision systems. The questions driving early AI research were not purely scientific; they were shaped by military and intelligence priorities.
This context influenced what “intelligence” came to mean. Intelligence was reconceived as optimization—finding optimal solutions to well-defined problems under constraints. Rationality was identified with expected utility maximization. Values were reframed as preferences or utility functions amenable to formal treatment.
These reconceptions were not neutral descriptions of intelligence as it naturally occurs. They were engineering idealizations shaped by the problems researchers were funded to solve. When your task is to build systems that play chess, break codes, or allocate resources, you develop frameworks suited to those tasks. When those frameworks are then projected back onto human intelligence and human values, something is lost—but the loss is hard to see from within the framework.
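The reconception just described can be made concrete with the textbook formalism it alludes to. The following is the standard decision-theoretic sketch (a generic statement, not drawn from this essay's sources): rational choice as expected utility maximization over a fixed set of actions.

```latex
\[
a^{*} \;=\; \operatorname*{arg\,max}_{a \in A} \, \mathrm{EU}(a),
\qquad
\mathrm{EU}(a) \;=\; \sum_{o \in O} P(o \mid a)\, u(o)
\]
```

Here \(A\) is a given set of available actions, \(O\) a set of outcomes, \(P(o \mid a)\) the probability of outcome \(o\) if action \(a\) is taken, and \(u\) a utility function. The formalism makes the essay's point visible: values enter only as the function \(u\), and intelligence only as the optimization step—everything outside those two slots has already been excluded by the framing.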
Mind as Machine
The computational metaphor carried metaphysical implications, whether or not they were explicitly acknowledged. If the mind is a computer, then mental states are computational states. Consciousness becomes either identical with certain computations, or epiphenomenal to them, or an illusion generated by them. In any case, it is not fundamental.
This framework made certain questions hard to ask. If intelligence is computation, and computation is substrate-independent (can run on any physical system that implements the right formal relations), then what matters is the functional organization, not the intrinsic nature of what is organized. Consciousness as something over and above functional organization becomes theoretically superfluous—a wheel that turns without engaging the machinery.
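Substrate-independence can be illustrated with a toy example (my illustration, not the essay's): two structurally different implementations realize the same formal relation, so nothing in their input-output behavior distinguishes them. On the computational view, this functional equivalence is all that matters.

```python
# Two different "substrates" implementing the same formal relation (XOR).

def xor_arithmetic(a: int, b: int) -> int:
    """Realize XOR via modular arithmetic."""
    return (a + b) % 2

# The same relation realized as a stored lookup table.
XOR_TABLE = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def xor_lookup(a: int, b: int) -> int:
    """Realize XOR via table lookup."""
    return XOR_TABLE[(a, b)]

# Functionally identical: no input-output test can tell them apart.
assert all(xor_arithmetic(a, b) == xor_lookup(a, b)
           for a in (0, 1) for b in (0, 1))
```

The point of the sketch is precisely what the paragraph above states: once identity is defined by functional organization, questions about the intrinsic nature of the realizing substrate have no purchase within the framework.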
The computational framework did not argue that consciousness was epiphenomenal or illusory. It simply made those positions natural defaults within its conceptual space. To insist that consciousness was something more than computation was to sound like someone who didn’t understand how computers worked—or who was clinging to pre-scientific intuitions that rigorous analysis had superseded.
VIII. The Contemporary Situation
Physicalism as Air
Today, physicalism functions as the ambient atmosphere of serious intellectual discourse. It is not typically defended because it does not typically appear to require defense. It is what remains when metaphysical speculation has been cleared away, when we have learned to ask questions properly, when we attend to evidence rather than intuition.
This appearance is the culmination of the history traced above. Methodological success, political pressure, industrial transformation, institutional selection, the failure of introspectionism, the rise of computation—each contributed to a trajectory in which one metaphysical position gradually became invisible as a position.
The result is an asymmetry that now shapes inquiry across domains. Physicalist assumptions are built into research frameworks, funding priorities, and criteria for what counts as serious work. Alternatives must justify themselves against a baseline that does not experience itself as a baseline—that appears simply as the way things are.
The Costs
This essay has traced how physicalism became dominant, not whether it is true. But the invisibility of physicalism’s metaphysical status carries costs regardless of its truth value.
Premature closure. If physicalism is simply assumed, alternatives are never seriously examined. Questions that might be productive are never asked. Possibilities that might illuminate are never explored.
Distorted research programs. In consciousness studies, the physicalist assumption shapes what counts as progress. Finding neural correlates is progress; questioning whether correlation implies production is not. In AI, the assumption shapes what “alignment” means. Aligning systems with human values presupposes we know what values are; if values are not simply preferences or utility functions, the framing may be inadequate.
Self-misunderstanding. If humans are fundamentally conscious beings inhabiting a framework that treats consciousness as derivative or suspect, we are alienated from our own nature. The “hard problem” of consciousness—explaining why there is something it is like to be—appears hard precisely because the framework within which we try to solve it has already excluded the resources that might dissolve it.
The Possibility of Examination
None of this demonstrates that physicalism is false. The point is that its truth or falsity should be a question, not an assumption. And it can only become a question if its status as a metaphysical commitment—rather than as the absence of metaphysical commitment—is recognized.
Genealogy enables this recognition. By tracing how physicalism became dominant through historical contingencies rather than philosophical proof, we create space for examination. What was invisible becomes visible. What felt inevitable is revealed as one trajectory among possible others.
Conclusion: Contingency and Responsibility
The history traced here is not a conspiracy. No one decided to impose physicalism on an unwitting culture. The trajectory emerged from countless local decisions, each reasonable in its context: scientists restricting their claims to avoid persecution, engineers developing frameworks suited to their problems, psychologists abandoning methods that weren’t working, institutions funding research that produced results.
The outcome—a metaphysical framework so pervasive it no longer appears as a framework—was not intended. It was the cumulative effect of choices made for other reasons, reinforced by success, until the contingent came to feel necessary.
Recognizing this contingency does not tell us what to believe. It tells us that the question of what to believe remains open. Physicalism might be a historically conditioned framework that, having served certain purposes well, now obscures as much as it reveals.
The responsible stance is neither to defend physicalism reflexively nor to reject it reactively, but to examine it honestly—to ask whether the framework within which we think might itself require revision.
That examination cannot happen as long as physicalism remains invisible. The first step is seeing it. This essay has tried to help with that first step.
References
Burtt, E. A. (1924). The metaphysical foundations of modern physical science. Kegan Paul, Trench, Trubner.
Daston, L., & Galison, P. (2007). Objectivity. Zone Books.
Dupré, J. (1993). The disorder of things: Metaphysical foundations of the disunity of science. Harvard University Press.
Edwards, P. N. (1996). The closed world: Computers and the politics of discourse in Cold War America. MIT Press.
Goff, P. (2019). Galileo’s error: Foundations for a new science of consciousness. Pantheon.
Hacking, I. (1983). Representing and intervening: Introductory topics in the philosophy of natural science. Cambridge University Press.
Koyré, A. (1957). From the closed world to the infinite universe. Johns Hopkins University Press.
Kuhn, T. S. (1962). The structure of scientific revolutions. University of Chicago Press.
Mirowski, P. (2002). Machine dreams: Economics becomes a cyborg science. Cambridge University Press.
Nagel, T. (2012). Mind and cosmos: Why the materialist neo-Darwinian conception of nature is almost certainly false. Oxford University Press.
Richardson, A. (1998). Carnap’s construction of the world: The Aufbau and the emergence of logical empiricism. Cambridge University Press.
Shapin, S. (1996). The scientific revolution. University of Chicago Press.
Smith, R. (1997). The Norton history of the human sciences. W. W. Norton.
Watson, J. B. (1913). Psychology as the behaviorist views it. Psychological Review, 20(2), 158–177.
Whitehead, A. N. (1925). Science and the modern world. Macmillan.
Related Essays in This Project
Available at: https://returntoconsciousness.org/
Myth of Metaphysical Neutrality (mmn) — The conceptual companion to this historical essay
Return to Consciousness (rtc) — The core framework this essay supports
License
This work is made freely available under the Creative Commons Attribution 4.0 International License (CC BY 4.0). You are free to share and adapt the material for any purpose, even commercially, provided you give appropriate credit, provide a link to the license, and indicate if changes were made. To view a copy of this license, visit creativecommons.org/licenses/by/4.0.