For most of human history, the world was alive.
Not metaphorically. Not as a comforting fiction told by people who didn’t know any better. The cosmos was experienced as intelligent, relational, saturated with meaning — and this was the considered view of the most rigorous thinkers in every major civilization. We replaced it with something else: a universe of dead matter in mindless motion, with consciousness as a strange and possibly illusory afterthought. That replacement felt like discovery. It was actually a choice — a specific historical choice, made for specific reasons, that could have gone differently and that we can still move beyond.
The choice matters now because it shapes what we can recognize. When Tahlequah pushed her dead calf through the Salish Sea for seventeen days, millions of people saw grief. The dominant scientific framework told them they were projecting — that recognizing interiority in a non-human animal is anthropomorphic, sentimental, methodologically suspect. That framework’s authority doesn’t come from having settled the question. It comes from four centuries of institutional momentum behind a metaphor that treats the universe as a machine.
This chapter traces how that metaphor became dominant. Not to romanticize what came before, but to see the current framework clearly enough to move beyond it.
The Living World
Plato saw the cosmos as a living creature endowed with soul and intellect — nous — that brought order and reason to material existence. His student Aristotle was more grounded in empirical observation, but likewise saw the world as saturated with purpose. All natural things, he argued, possess a telos — an innate end toward which they strive. The telos of an acorn is to become a mighty oak, and this striving is an intrinsic part of its nature.
These were not marginal ideas. They were the foundation of Western thought for two millennia. Qualities like warmth, color, purpose, and soul were understood as real and irreducible features of the world. Reality was not divided into “primary” and “secondary” qualities but experienced as a seamless whole in which mathematical relationships coexisted with meaning, beauty, and intention. The universe was intelligible because it was alive and intelligent. Owen Barfield called this mode of consciousness original participation — one in which humans experienced themselves as participating in the life of nature rather than standing apart from it (Owen Barfield, Saving the Appearances, 1957).
And this understanding was not unique to the West. The Taoist tradition understood reality not as static matter but as a dynamic flow of complementary forces (yin and yang), neither reducible to the other. In India, Advaita Vedanta recognized a fundamental identity between individual consciousness (Atman) and the cosmic ground (Brahman). Indigenous cultures worldwide experienced the world through participatory knowing — a direct relationship with a living environment where interiority was not a private human accident but a pervasive feature of the landscape.
Despite their differences, these traditions shared a structural intuition: that mind and matter, interior and exterior, are complementary aspects of a single reality. This “Perennial Philosophy” — the view that consciousness is intrinsic to the cosmos — was the historical norm. The fragmented, mechanistic view we now treat as common sense is the radical outlier.
The Galilean Reorientation
How did the integrated vision fade? Gradually, contentiously, and over centuries.
The shift began before Galileo. The integrated vision of a meaningful cosmos began to fade in Western thought with the rise of Christianity, particularly during the medieval period, which shifted the locus of agency from an inherent aspect of the world toward a transcendent God who guaranteed cosmic order. The universe came to be seen as fundamentally intelligible because it was the creation of a rational, divine mind. Nature was a “second book,” a divine text that, like Scripture, revealed the mind of its author. This worldview found its most famous expression in the Great Chain of Being — a model that organized all of existence, from God and the angelic hosts down to the most humble stone, in a single, unbroken, value-laden hierarchy. Everything had its proper place and purpose in the divine order.
The actual history was messier than this narrative suggests. Newton was more deeply engaged with alchemy, then a much broader discipline, than with what we would now call physics, a pursuit he believed would lead to a more complete conception of natural philosophy (William R. Newman, “The Problem of Alchemy,” The New Atlantis, no. 44, Winter 2015, pp. 65–75). His deepest ideas appear to have stemmed from religious conviction; he wrote in the Principia that God “lasts forever and he is present everywhere, and by existing forever and everywhere he has established duration and space, eternity and infinity” (quoted in The Pauli-Jung Conjecture, p. 62). The “mechanical philosophy” of the 17th century was itself deeply contested — Descartes, Leibniz, and Newton disagreed fundamentally about what mechanism meant, whether gravity was mechanical or occult, whether God intervened constantly or had established autonomous laws. As Amos Funkenstein has argued, mechanistic philosophy arose not in opposition to theology but was deeply rooted in the theological notion of a rational, law-governed divine order (Amos Funkenstein, Theology and the Scientific Imagination from the Middle Ages to the Seventeenth Century).
The transition from living cosmos to dead mechanism wasn’t a clean break but a gradual, contested, inconsistent process. What matters for our purposes is recognizing the pattern: a methodological choice made for practical reasons — let’s study what we can measure — gradually hardened into a metaphysical commitment about what’s real. This was not inevitable. It was contingent. And it is now being reversed.
For the founders of modern science, there was no conflict between physics and faith; they were integrated paths to understanding a single, divinely ordered reality. The mathematical elegance they discovered in planetary motion and terrestrial mechanics was not mere utility — it was a glimpse into the aesthetic and rational perfection of divine creativity. Yet the very power of the system they created contained the seeds of a transformation that would ultimately remove the creator from the picture — and, more significantly, strip the universe of meaning, purpose, and the possibility of genuine participation.
Understanding this history matters because we still live inside the transformation it created. The mechanistic worldview feels like objective truth rather than historical choice — which is why examining how it arose can help us see beyond it.
Galileo’s move was particularly consequential. He argued that the world is fundamentally accessible only through mathematics — that the “book of nature is written in geometrical characters.” This was not merely a practical choice about which tools to employ. It was a reorientation that would reshape Western thought for four centuries — and that is now, as we’ll see, beginning to be reversed. But at the time, it was a brilliant and promising worldview that sought to bring the entirety of the cosmos into a common framework of understanding.
By focusing exclusively on what could be quantified, Galileo created an implicit hierarchy. Quantities — mass, velocity, position — became “primary qualities” because they could be captured mathematically. Qualities like color, warmth, taste, purpose, and meaning became “secondary” because they resisted mathematical description. As the mechanistic worldview proved its predictive value, its overall explanatory strength steadily expanded. Qualitative phenomena that resisted mathematization were gradually deemed less real, less worthy of systematic attention, and eventually came to be seen not as part of nature itself but as mere projections of the perceiving mind. What began as let’s study what we can measure carried within it the seeds of only what we can measure is real. A pragmatic methodology had become a metaphysical commitment.
In roughly three centuries — a blink in human history — Western culture dismantled a worldview that had guided humanity for millennia. We traded a dual-aspect view, where inner meaning and outer form were co-fundamental, for a reductive view where only the outer, physical form was real. Physicalism — the philosophical perspective that reality consists entirely and exclusively of physical objects and forces — replaced the living cosmos. Eventually, many if not most of our best minds came to believe that the Logos of the Stoics, the Tao of the sages, and the animate world of our ancestors were merely creations of the human mind, derivatives of electrochemical processes in our brains.
This 300-year experiment in physicalism is the exception, not the rule. When we struggle to fit consciousness into the physicalist framework, we are not struggling with nature. We are struggling with the limitations of a specific, historically contingent metaphor.
The drift was so gradual that each generation could reasonably see themselves as simply building on their predecessors’ work, not fundamentally altering humanity’s relationship to reality. Newton saw universal gravitation as proof of God’s rational design. Yet the elegance of his mathematical system made the metaphor of a soul-less machine increasingly plausible. The power of mathematical physics to predict planetary motions with such precision suggested it revealed something essential about reality itself. What could be measured and predicted came to be understood as “objective” and real, while phenomena that resisted mathematization were marginalized as “not scientific,” and ultimately as not quite real.
By the time Laplace presented his “clockwork universe” to Napoleon, famously replying, when asked where God fit into his system, that he “had no need of that hypothesis,” the transformation was nearly complete. He had articulated a vision of a perfect, deterministic machine that required no ongoing divine agency, no purpose, no meaning beyond its own mechanical operation.
Alfred North Whitehead called it the “bifurcation of nature” — the decisive cleaving of what had been experienced as a seamless, meaningful whole. The experiential way of being — participation, connection, relationship — gave way to detachment, manipulation, and symbolic representation. We became observers rather than participants, knowing the world through measurements and models rather than through direct experience. By the mid-20th century, the animate cosmos of the ancients, alive with meaning and purpose, had been replaced by particles and forces mindlessly operating according to mathematical laws, with consciousness relegated to the status of an emergent accident.
This matters for Tahlequah. If consciousness, meaning, and purpose are “secondary qualities” — mere projections onto dead matter — then recognizing grief in an orca becomes methodologically suspect. The framework itself makes such recognition seem naive. We’ve been trained to distrust exactly the kind of direct knowing that millions experienced watching her vigil. The disciplined imagination outlined in the Prologue — imagination constrained by evidence about cetacean neuroscience and behavior — suggests her interiority is rich and deep. But the mechanistic metaphor trained us to distrust phenomenological interpretation in favor of third-person measurement. For questions about interiority, that preference isn’t neutral rigor. It’s a metaphysically loaded choice.
Metaphors as Cognitive Infrastructure
How did a methodological choice become a metaphysical claim about what’s real? Through something deeper than explicit argument: the metaphors through which we think.
This is a crucial insight: metaphors are not merely ways of talking about reality — they are ways of thinking about reality. Cognitive linguist George Lakoff and philosopher Mark Johnson established that abstract thought itself operates through metaphorical mappings from concrete, embodied experience. We don’t just happen to speak of arguments in terms of war (“defending a position,” “attacking weak points”); we actually think about arguments through the structure of combat. We don’t just describe time as a valuable resource (“spending time,” “investing hours”); we conceptualize and experience time itself through the logic of economic transaction. These aren’t conscious choices. They’re part of our cognitive infrastructure — largely invisible frameworks that shape what we can perceive, what questions we can ask, and what answers seem reasonable.
The metaphor of the universe as a machine doesn’t just change our vocabulary. It determines what we can notice. Machines have parts but not purposes. They can be disassembled and optimized but not participated in. They operate according to deterministic laws but lack interiority, meaning, or value. The metaphor makes certain inquiries natural — How does it work? What are its mechanisms? What are the component parts? — while rendering others nonsensical or naive: What does it mean? What is its purpose? What is it like to be this?
Jeremy Lent has extended this insight to civilizational scale. In The Patterning Instinct and The Web of Meaning, Lent argues that the root metaphors animating a culture — its deepest assumptions about what reality is — shape everything from social organization to ecological relationship to conceptions of flourishing. These are not just abstract philosophical positions debated in seminar rooms. They are lived frameworks that determine how societies relate to nature, how they justify hierarchies, how they define progress, and ultimately whether they survive or collapse.
A metaphor of nature as dead matter to be exploited generates a profoundly different civilizational trajectory than a metaphor of nature as a living web in which humans are embedded participants. A culture’s choice of root metaphor is never merely intellectual — it cascades into institutional structures, ethical frameworks, technological development, and the felt quality of human life. Importantly, it shapes our intuitive sense of the creatures we share the Earth with: are they merely a consequence of random genetic change and the basic imperatives of survival and reproduction, or are they integral parts of an immense, living framework whose significance we are only beginning to glimpse? The metaphor we adopt determines whether Tahlequah’s pod are participants in a living reality or merely complex biological machines. That shapes whether we act to prevent their extinction.
This history matters not because we should attempt to return to premodern worldviews — we cannot unknow what we have learned — but because recognizing the metaphorical foundations of the modern scientific worldview opens space for examination. The Galilean conception was never simply a neutral reading of nature. It was a choice about which aspects of reality to privilege, which questions to pursue, which phenomena to study. That choice generated spectacular success within certain domains while systematically excluding others.
A Particular Template
The spectacular success of physics in predicting planetary motions created an implicit standard: real science produces mathematical laws that enable precise prediction. This became the model against which all other inquiry would be judged. But it fits some domains far better than others.
Physics succeeds brilliantly with closed, reversible systems with few variables — pendulums, planets, particles in controlled conditions. Biology operates differently. Darwin didn’t develop evolutionary theory by forming hypotheses and making falsifiable predictions. He observed patterns in nature, collected specimens, and constructed a narrative that made sense of the evidence. Most major biological breakthroughs follow similar paths: Fleming’s accidental discovery of penicillin, the stumbled-upon structure of DNA, the unexpected finding of jumping genes.
Evolutionary biology’s explanatory power comes from both narrative coherence and prediction — but predictions of a different kind than physics offers. Biology predicts where we’ll find transitional fossils, what genetic relationships we’ll discover, which morphological features organisms in certain environments will develop, how drug resistance will emerge. These aren’t the precise mathematical predictions of planetary orbits, but they’re testable, falsifiable, and enormously productive. Evolutionary theory is rigorous science because its predictions, while probabilistic rather than deterministic, prove consistently accurate and practically useful.
Why, then, do we demand that consciousness studies meet standards that biology itself rarely meets? Partly because of a historical accident. As Jessica Riskin has documented, the codified “scientific method” — the rigid five-step template we teach schoolchildren — was formalized relatively late and became more prescriptive than descriptive (Jessica Riskin, “Just Use Your Thinking Pump,” New York Review of Books, July 2, 2020). When Charles Sanders Peirce introduced the term in 1878, he meant something far more general: forming beliefs through evidence rather than authority. But popularizers transformed this flexible approach into a rigid template modeled on physics, then promoted it as the criterion for “real science.”
This wasn’t inventing empirical inquiry as marketing — systematic observation and experimentation long predated the formalized method. What was new was packaging this diverse set of practices into a single, prescriptive procedure and claiming universal applicability. Biology, which advances primarily through observation, experiment, and narrative synthesis, found itself judged by physics-derived standards it rarely meets. The “scientific method” taught as universal was actually particular — appropriate for inanimate physical systems but mismatched to the complexity of living things. Yet this physics-styled template became the public-facing gatekeeper for what counts as legitimate scientific inquiry.
Consciousness studies face this mismatch even more acutely, as the variables associated with consciousness vastly exceed those of basic biology. If evolutionary biology succeeds through narrative coherence despite life’s resistance to mechanistic reduction, consciousness studies — dealing with even greater complexity and immeasurable interiority — cannot reasonably be held to physics-style falsification. The question should not be whether consciousness-inclusive frameworks can meet physics standards, but whether they offer a more coherent account of the full range of evidence than the emergence stories that dominate current thinking.
The difference matters because physics and biology study fundamentally different kinds of systems. Physics achieves its extraordinary predictive power by focusing on closed, reversible, exterior phenomena. A physicist can isolate variables, repeat experiments under identical conditions, and generate predictions accurate to twelve decimal places. But life is not like that. Living systems are open — constantly exchanging energy and matter with their environments. They are irreversible — each moment represents genuinely novel emergence that cannot be rewound and replayed. And they exhibit goal-directedness — even the simplest organisms respond to their environment in ways that suggest purpose rather than mere mechanical reaction.
Consciousness takes these features to their logical extreme. If biology already exceeds what physics-style methods can fully capture, consciousness — the ultimate interiority, the paradigm case of irreversible, context-dependent emergence — cannot reasonably be held to standards designed for reversible, isolated, purely exterior systems. The attempt to force consciousness studies into a physics template is not methodological rigor but category error.
The lesson is not that consciousness studies should abandon empirical methods. It’s that the standards of rigor must match the phenomenon. For consciousness, as for biology, rigor means coherence across multiple lines of evidence, explanatory power for diverse observations, and productive research programs. These are exactly the standards by which consciousness-inclusive frameworks should be evaluated — not whether they produce physics-style mathematical predictions, but whether they make better sense of everything we know about consciousness, evolution, and experience.
We cannot prove which metaphysical framework is ultimately “true” — such proof may be impossible given that all frameworks rest on unexplained primitives. But we can examine their consequences. Which framework better accounts for the full range of evidence? Which supports more sustainable relationships with the natural world? Which addresses the meaning crisis that many recognize as a defining feature of modern life? These are pragmatic questions about how competing frameworks perform in practice. This approach — evaluating frameworks by consequences rather than seeking metaphysical proof — will guide the inquiry throughout this book.
Institutional Entrenchment
Even as the physics-derived template revealed its limitations — biology succeeding through different methods, consciousness resisting mechanical explanation entirely — Western culture chose not to recognize these boundaries but to double down through institutional restructuring.
The 18th century saw the beginning of a division between sciences and humanities that would accelerate over the next two centuries. What had been integrated domains of inquiry — natural philosophy encompassing both physical study and questions of meaning, purpose, and value — were systematically cleaved apart. The 19th-century campaign to establish “hard” sciences as superior to humanities was not a natural evolution but a deliberate project, driven by advocacy journalism and institutional restructuring rather than scientific consensus. When C.P. Snow delivered his “Two Cultures” lecture in 1959, he wasn’t lamenting a regrettable division. He was arguing that technological knowledge should dominate education and policy, scolding those who questioned whether such knowledge alone was sufficient for human flourishing (Stefan Collini, Snow’s Two Cultures, 2nd ed., Cambridge: Cambridge University Press, 2009).
By the mid-20th century, the split was complete. Universities organized into separate colleges of sciences and humanities, funding agencies prioritized quantifiable research over qualitative inquiry, and “scientific” became synonymous with “legitimate” across vast domains of culture. The mechanistic worldview became self-perpetuating — not because it had proven adequate to explain consciousness or meaning, but because institutions had been restructured to favor only the kinds of questions it could answer.
Which brings us back to Tahlequah. When we watched her push her calf through the waves, we weren’t just seeing interesting behavior. We were encountering what this institutional framework has trained us not to recognize: consciousness that matters, grief that’s real, interiority that demands ethical response. The question isn’t whether the mechanistic framework served its purpose — clearly it did. The question is whether it’s adequate for the world we actually find ourselves in.
Beyond the Machine
The costs of this transformation are profound and personal. We inherited a worldview that predicts planetary motion with breathtaking precision while treating consciousness as anomaly, that enables technological marvels while dismissing meaning as illusion, that measures everything quantifiable while marginalizing the subjective aspects that matter most. The spectacular success of physics in its proper domain became a lens we’re required to use for everything — rendering certain features visible with extraordinary clarity while making others systematically invisible.
But we can change the metaphor. The mechanistic model was historically contingent, built on choices about which aspects of reality to privilege. Those choices generated immense technological power — but also systematic blindness to consciousness, meaning, and participation. We cannot return to premodern frameworks, but we can forge something new: a modern approach that preserves scientific rigor while recovering what a mechanical metaphor systematically excludes.
This is not regression to pre-scientific mythology. It is progression toward a more complete science. The most advanced frontiers of modern physics are beginning to echo the oldest intuitions of the perennial philosophy. Physicist David Bohm proposed an “Implicate Order” — a deeper, unfolded reality from which both mind and matter emerge — that bears striking structural resemblance to the Neoplatonic “One” or the Buddhist concept of “Dependent Origination.” The dual-aspect monism developed by physicist Wolfgang Pauli and psychologist Carl Jung suggests that the mental and the physical are two ways of perceiving a single, underlying reality. The ancient historical norm may align better with 21st-century science than the 19th-century mechanical model ever did.
The evidence has been accumulating for decades. Consciousness resists mechanical reduction — not because we lack detailed knowledge but because the explanatory strategy itself is inadequate. We find sophisticated interiority across evolutionarily distant species. Our crisis of meaning deepens despite material abundance. These aren’t separate problems requiring different solutions. They’re symptoms of living within an inadequate framework.
We need alternatives that make better sense of what we actually find. We need not just new theories but what Thomas Metzinger calls a Bewusstseinskultur — a culture of consciousness that takes interiority as seriously as we take mechanism. As Metzinger emphasizes, this requires a new kind of intellectual honesty — a willingness to treat the data of experience with the same rigor we treat the data of physics (Thomas Metzinger, The Elephant and the Blind: The Experience of Pure Consciousness: Philosophy, Science, and 500+ Experiential Reports, Cambridge, MA: The MIT Press, 2024). It means looking for the simplest, most fundamental forms of awareness that persist beneath our complex cognition.
Whether consciousness is truly an inherent aspect of reality’s fundamental nature or merely appears so from our epistemic vantage point — that’s the question we’ll examine in the next chapter. But we can already see that frameworks treating it as fundamental make better sense of patterns across evolution, handle cetacean neuroscience more naturally, and avoid the promissory notes physicalism requires.
Cetaceans swimming through acoustic worlds we cannot imagine, elephants maintaining relationships across generations, corvids solving problems with insight and tool-use — these beings participate in experience that might be as rich as our own, organized by completely different principles. The framework that makes this seem doubtful is the same one that makes recognizing Tahlequah’s grief seem methodologically suspect. The next chapter examines three ways of understanding consciousness and reality — showing why treating consciousness as an inherent aspect of an irreducible ground has stronger explanatory value than the reduction we’ve inherited.