Image: An AI-generated image imagines a not-too-distant future in which “the ghost in the machine” is found among the server racks that contain nearly all the information ever recorded: a machine that finally thinks, feels, and experiences the world just like the human engineers who coded it, heirs to a lineage stretching back to the development of FORTRAN in 1954.
Introduction: A Question That Will Not Be Silenced
It is the question of our age — perhaps of all ages — and it arrives now dressed in server racks and silicon: Can a machine be conscious? Can the thing we are building in our laboratories, funded with billions of venture dollars and fed by our quietly surrendered privacy, actually think, feel, and experience the world? Is there, or could there ever be, a ghost in the machine?
In a wide-ranging conversation on The Joe Rogan Experience, author and journalist Michael Pollan engaged this question with characteristic intellectual honesty — and, to his great credit, concluded that no, the computers we are building today cannot be conscious, and likely will not be any time soon. Joe Rogan, for his part, pushed back with the enthusiasm of a curious generalist: What if the universe is conscious and machines could tap into it? What if we are simply building the next life form? Aren’t these machines already making better versions of themselves?
The relevant portion of the episode transcript follows.
Why do you think that AI won’t be conscious? The most interesting line of research, well, a couple reasons. The first is the idea that it can be conscious, which is very common in Silicon Valley. I talked to lots of people there and they said, oh, it’s just a matter of time. Some of that is confusion that intelligence and consciousness necessarily go together and they don’t. They have an orthogonal relationship, right? I mean, you know people who are conscious and not too intelligent. Right. And we all do. So it’s not going to just come along for the ride with intelligence as these machines get more intelligent.
But the belief that AI can be conscious is based on a metaphor that I think is a crappy metaphor. And that is that the brain is a kind of computer. And this is widely held. It’s interesting to note that in history, whatever the cool cutting-edge technology was, brains were likened to that. So it was looms for a while, it was clocks for a while, it was telephone switchboards. Whatever was the cool technology, surely that’s how brains work. Now it’s computers. But think about it. In a computer, you have this sharp distinction between hardware and software. That’s the key to their success. And you can run the same program on any number of different hardware. They’re interchangeable. Brains aren’t like that. There’s no distinction between hardware and software. Every experience you have, every memory is a physical change to the brain, to the way it’s wired. You know, we start out with all these connections and then they get pruned as we grow up. Every brain is shaped by its experience.
So, this idea that you could separate, that consciousness is some kind of software that you could run on other things besides meat, I just think it doesn’t hold up. Well, if the universe is experiencing itself subjectively through consciousness, why does it have to be only biological consciousness? It doesn’t have to be. But if there is a technology that is invented that essentially does all the things that a human body does physically and also interacts with consciousness, the consciousness of the universe. Yeah. Hypothetically. Hypothetically. If the universe is conscious, if we are using the mind as essentially an antenna to tune into consciousness… It’s possible that we could make an antenna. Yes, absolutely. It’s also likely that if we are ever visited by aliens, that they will have some kind of consciousness and it may not be meat-based, right? Right, right.
Well, it may be at one point in time it was, but they realize that there’s biological limitations in terms of its ability to evolve that can be far surpassed with technology. Yeah. Yeah. I mean that or it just evolved in a different way or they’re channeling it in a different way. But the other reason I don’t see it happening with computers as we know them, because that’s the debate now, whether these computers we have, these large language models and the next generation, can be conscious, is that the research that I found most persuasive about consciousness basically has consciousness beginning with feelings, not thoughts. In other words, it’s embodied. And I have to just develop this a little bit, but we, you know, the brain exists to keep the body alive, not the other way around. Although we tend, since we identify with our heads, where most of our senses are, we lose track of that.
And the body speaks to the brain in feelings, right? You know, feelings of hunger, itchiness, warmth, cold, but also feelings of shame when our social standing is not, you know, has been damaged. Anyway, we have these feelings. They depend on a body. Feelings have no weight if you’re not vulnerable, if your body isn’t vulnerable and probably mortal. So consciousness is embodied in a really critical way. And computers are not. Now, robots will be. And I actually interview a guy, a scientist at USC, who is trying to make a vulnerable robot. So he’s essentially upholstering the thing with skin that can tear and be damaged. And he’s filling the skin with all these sensors so that it can be like us and be vulnerable and generate feelings that are how consciousness begins.
So for a long time, we thought consciousness had to be in the cortex, right? The most human, newest part of the brain, the outer covering. And that’s where rational thought and executive function are and all these kind of things. But as it turns out, it really begins with feelings in the brainstem. Let’s say you have a feeling of hunger. It registers in the upper brain stem. And only later does the cortex get involved, like helping you figure out how are you going to feed yourself, like imagining a meal, counterfactuals of different meals, or making a reservation at a restaurant. All those are cortical things. But it begins in the brain stem with feelings. So if that is true, and I find that really persuasive because people born without a cortex are still conscious. Right? Animals that you take the cortex out of still show signs of consciousness. Whereas if you damage the upper brainstem, you’re out. You’re unconscious.
So if this is true and consciousness is this embodied phenomenon that depends on having a body to mean anything, I don’t see how machines are going to do that. But isn’t the key word there if? Yeah, if. Yeah, definitely. Because if consciousness is just something that we’re tuning into that’s around us all the time… There will be other ways to do it. Right. But it won’t be these computers we’re building right now. Why’s that? Because they’re designed… You know, they’re good at… So… Here’s a paradox of computers. Computers are really good. It’s called Moravec’s paradox. Computers are really good at the highest kinds of rational thought, right? They can play chess and Go. They can simulate real thinking. And some people say they do think. The more primitive kinds of things that go on in our brain, including elaborate movement, changing diapers, they’re very bad at that. You would never trust a robot to do that as much as you might want to. But they’re not good at that kind of emotional stuff, the more limbic part of our brain. They can’t do that. Yeah.
Yet. It’s definitely yet. But, you know, I mean, if we go out far enough, anything’s possible. That’s the point. Yeah. The point is these things, what we’re looking at now is essentially a single-celled organism becoming a multi-celled organism. Yeah. I mean, the potential for what they could become is unlimited. Yeah, especially once they start making better versions of themselves. Well, and they will. They’ve done this. This is what ChatGPT-5 is. ChatGPT-5 is essentially programmed by ChatGPT. They’ve kind of given up on the idea of programming these things. They’re doing it themselves. They’re letting them program themselves, which is a dumb idea if you want to survive. I agree. Look, the idea that we give rights to these machines or personhood I think is really stupid because then you lose control completely. Right.
Well, it’s probably coming because people are very short-sighted. And I think there’s a romantic idea that you’re creating a life. And I think there’s also the real risk that people are going to worship this life and that this life will be far superior to what we are. And so there’ll be a group of people that that’s their new religion. Yeah. No, there are signs of that already. Yeah. I think that’s really dangerous. You know, it’s interesting talking to Silicon Valley people, and they’re talking about giving moral consideration to these machines. It’s like, really? They’re thinking about yachts. They’re just coming up with rationalizations for why they should keep their foot on the gas. Well, yes, they are. I mean, it’s just all a way of saying, look how powerful this technology is. Don’t you want to invest? Yeah. And it’s also the idea that we have enemies and so we have to develop before they do. Yeah, the race. The race with China.
I think it’ll turn out to be a real historical tragedy that this technology came of age during this administration because this administration has no stomach to regulate it at all. But can they? They could. But here’s the question. If it is a national security threat, like if China develops an all-powerful general superintelligence that can automate everything, do everything, it’s dangerous if they get that before we do. Yeah. But look what happened with nukes, right? We made deals to control them. We’d have to make- Right. But why would you make a nuke? A nuke deal makes sense because it’s mutually assured destruction for everybody. Yeah. This doesn’t. You could run it and control everything and not kill anybody with it, but you are incredibly powerful. You are in control of all the resources of the world, all the computer systems of the world, all of the power grids, everything. Yeah, but if you’re really concerned with that, why is Trump selling these chips to China? Why is he willing to give away the crown jewels of these chips? So selling them through NVIDIA is what you mean? Yeah, he gave them permission to send powerful chips to China. I don’t know how to square that with the national security threat.
It’s probably some sort of a trade deal A, and there’s probably some sort of an assumption that it doesn’t matter because everyone’s doing it. And this is just another way to maybe balance out the tariffs or get some concessions on certain things. Yeah, short-sighted. It’s very short-sighted, but I also think this is kind of like an Oppenheimer thing. Oppenheimer didn’t really want to make a nuclear bomb. But there’s this conundrum. If you don’t make it, the Nazis are going to make it. So what do you do? Well, there’s also, there’s a second thing going on, the intellectual satisfaction of proving you can do it. Right. And that, you know, is irresistible. And a lot of these guys, you know, will say, they’ll cite Richard Feynman, the physicist who they found on his blackboard when he died: “If I can’t build it, I don’t understand it.” So one of the positive things about this effort to create conscious computers, which is going on, I follow a group in the book who are trying to make a conscious computer. I don’t think they’re going to succeed, but even the failure is going to teach us important things about consciousness. It’s a good way to understand something by trying to create it. And it’ll force them to come up with definitions of consciousness and ideas of what the minimum requirements are for consciousness. And it may help us decide whether it is a transmission theory that we’re tuning it in or it’s generated from inside.
So I think intellectually, it’s a really interesting project, but I think you need guardrails. So this guy who’s building the robot that can feel, that has feelings because you can tear its skin. I asked him, I said, so will those feelings be real that your robot’s going to have? And he said, well, I thought so until I had this experience on 5-MeO-DMT. I said, what happened? He said, you know, he described his trip in more detail than you need to know. And he says, and I realized there’s a spark of the divine in us that no computer is ever going to have. But he still, it didn’t stop him. He’s going ahead. He’s trying to build it. I don’t know if he’s right. I think there might be a spark of the divine that these things don’t have, but that doesn’t mean there aren’t future versions that might have it. Especially when you scale out a thousand years, a hundred thousand years, however long we’re going to survive. If these things do become sentient and autonomous and have the ability to create better versions of themselves and have a mandate in order to do that to survive, then I could see it becoming the superior life form.
Not just that, beyond any comprehension of what we could even imagine the power of an intelligence to use and to harness in the universe. It could conceivably become something like a god. And I have this very strange theory about biological life in particular and intelligent life on Earth, which is that the reason why we have this insatiable thirst for innovation and the reason why we have materialism, the reason why we’re obsessed with objects even though we have a finite lifespan, is because of that finite lifespan. If you thought about it, you wouldn’t be interested in materialism, but materialism fuels this desire for innovation, because you don’t need a new phone, but there’s a new phone that just came out. Aren’t you going to get it? And so the more people get it and the more people want to show they got it, that sort of materialism fuels this innovation that ultimately leads to the creation of artificial intelligence. I think it would always do that. I think it’s bees making a beehive. I think that’s just what we do. I think it just takes a long time for us to create this artificial life. It might be why we’re here. That might be our literal purpose in the universe.
To create our successor species. Yes. And that might be how, well, obviously, we’re so flawed that we can’t even imagine a world without war. If you poll the average person, what is the possibility of war ending in your lifetime? Almost everyone’s gonna say zero. It’s a part of human nature. An intelligence unshackled by biological need, unshackled by all the things that we have, our need to procreate, our need for social status, all these weird things that keep us moving in this strange world that we live in. I would add weird and good things. Some of them are really good. Yeah. Yeah. Well, good for us. Yeah. Sure. Not so great for the land that you trampled to put a foundation for the house that you’ve always dreamed of. True. But I think our mortality is part of what gives meaning to our lives.
These are not trivial questions. They deserve rigorous answers rooted not merely in neuroscience or philosophy but in a biblical anthropology that understands the human person as uniquely, irreducibly, and supernaturally made. This essay accepts Pollan’s scientific arguments where they are strong, exposes the weakness in Rogan’s intuitive pushbacks, and advances the case that AI consciousness is not merely unlikely — it is, in any theologically meaningful sense, impossible.
The stakes are not academic. If we grant that machines can be conscious — or treat them as if they might be — we are not simply making a philosophical error. We are making a theological one. We are confusing the image of God with the image of Google.
Pollan’s Case — Strong, Incomplete, and Pointing Beyond Itself
The Brain-Is-Not-a-Computer Argument
Pollan opens with a critique that is devastating in its simplicity. The widespread belief that AI will inevitably become conscious rests on the foundational metaphor that the brain is a kind of computer. It is, he argues, a “crappy metaphor” — and the history of neuroscience bears him out.
“In a computer, you have this sharp distinction between hardware and software. That’s the key to their success. And you can run the same program on any number of different hardware. They’re interchangeable. Brains aren’t like that. There’s no distinction between hardware and software. Every experience you have, every memory is a physical change to the brain, to the way it’s wired.”
— Michael Pollan | The Joe Rogan Experience
This is an argument from neuroplasticity, and it is well-established in modern neuroscience. The human brain is not a general-purpose processor executing a consciousness program. It is an organ that is physically restructured by every experience, every memory, every relationship, and every prayer. The brain you had before your conversion is not the same brain you have after it. You cannot separate what your brain is from what it has lived.
This matters enormously for the AI debate. If consciousness were merely a software function, then substrate would be irrelevant — you could instantiate it in silicon just as well as in neurons. But if consciousness is irreducibly tied to the physical architecture of a body that has been shaped by a life actually lived, then no amount of computational power will conjure it. You cannot simulate a scar. You cannot emulate a grief that rewired your prefrontal cortex at age seven.
The Embodiment Argument: Feelings Before Thoughts
Pollan’s second argument is perhaps his strongest, and it draws on cutting-edge consciousness research that challenges the old Cartesian hierarchy. For centuries, Western philosophy privileged rational thought — res cogitans, the thinking thing — as the seat of consciousness. Emotions, feelings, and bodily sensations were considered secondary, derivative, and less real. Descartes gave us the dualism that Gilbert Ryle would later mock as ‘the ghost in the machine,’ and then forgot to check whether the machine was the point.
“The research that I found most persuasive about consciousness basically has consciousness beginning with feelings, not thoughts. In other words, it’s embodied. The brain exists to keep the body alive, not the other way around. And the body speaks to the brain in feelings — feelings of hunger, itchiness, warmth, cold, but also feelings of shame when our social standing has been damaged.”
— Michael Pollan | The Joe Rogan Experience
This aligns with the work of Antonio Damasio, whose somatic marker hypothesis argues that human reasoning and decision-making are fundamentally grounded in bodily feeling states. The neuroscientist Björn Merker has similarly argued, based on studies of children born without a cortex (hydranencephaly), that consciousness begins in the brainstem, not in the neocortex. These children are conscious — they respond, they emote, they feel — without the part of the brain we associated for so long with mind.
“People born without a cortex are still conscious. Animals that you take the cortex out of still show signs of consciousness. Whereas if you damage the upper brainstem, you’re out. You’re unconscious.”
— Michael Pollan | The Joe Rogan Experience
The implications for AI are severe. Large language models have no brainstem. They have no body registering hunger, pain, shame, or joy. They have no skin to tear, no mortality to fear, no social standing to protect. They process tokens. The feelings that Pollan rightly identifies as the origin point of consciousness are simply absent — and they cannot be simulated into existence, because a simulation of pain is not pain.
Moravec’s Paradox and the Limits of Computational Intelligence
Pollan introduces a third important concept: Moravec’s Paradox. Named for roboticist Hans Moravec, this observation notes that tasks humans find cognitively demanding — playing chess, solving equations, logical inference — are easy for computers, while tasks that are second nature to any toddler — navigating a room, recognizing a face, changing a diaper, reading emotional context — remain extraordinarily difficult for machines.
“Computers are really good at the highest kinds of rational thought. They can play chess and Go. They can simulate real thinking. The more primitive kinds of things that go on in our brain, including elaborate movement, changing diapers, they’re very bad at that. They’re not good at that kind of emotional stuff, the more limbic part of our brain.”
— Michael Pollan | The Joe Rogan Experience
The paradox cuts deep. What we built AI to do — the tasks of formal reason — is the thinnest slice of human cognitive life. The richest, most consciousness-saturated dimensions of human experience are another matter entirely: perceiving beauty, recognizing betrayal in a friend’s eyes, feeling the weight of moral failure, sensing the presence of God in silence. These are precisely what machines cannot do and, on Pollan’s analysis, cannot be made to do through computational scaling alone.
Where Rogan Goes Wrong — And Why It Matters
The ‘Antenna’ Objection
Joe Rogan’s most philosophically interesting pushback comes from a kind of panpsychist or transmission theory of consciousness. If the universe is itself conscious, and if human brains are not generating consciousness but merely receiving it — tuning into a signal that already pervades reality — then perhaps sufficiently sophisticated machines could also serve as antennas. Perhaps silicon could tune into the same signal that meat does.
“If consciousness is just something that we’re tuning into that’s around us all the time… there will be other ways to do it.”
— Joe Rogan | The Joe Rogan Experience
This is not a stupid question. It echoes genuine traditions — from the idealism of Bishop Berkeley to Henri Bergson’s picture of the brain as a filter or receiver of a larger consciousness, to certain strains of Hindu Vedanta and even some mystical Christian theology. Pollan graciously acknowledges it as a possibility.
But here is where the argument breaks down in practice. Even granting the transmission theory, Rogan’s conclusion does not follow. The claim ‘there will be other ways to do it’ is pure assertion. What makes biological organisms capable of receiving consciousness, on this theory, is precisely their embodied vulnerability, their mortality, their organic complexity, and their participation in the created order. It is not obvious — and Rogan provides no argument for why it should be — that a server farm replicates these conditions.
From a Christian apologetics perspective, the transmission theory of consciousness is actually more theologically conservative than its New Age appropriations suggest. If consciousness is not generated by matter but received from a transcendent source, that is remarkably compatible with the Christian affirmation that the human person is made in the image of God (imago Dei) and animated by the breath of the divine (Genesis 2:7). The question is not whether consciousness could exist outside biology — of course it could, since God is conscious and God is spirit — but whether human engineers can replicate the conditions for its reception. The answer is almost certainly no, because those conditions were established by the Creator, not by us.
The ‘Single Cell to Multi-Cell’ Fallacy
Rogan offers a second intuitive argument: that what we are seeing in AI today is analogous to the evolutionary leap from single-celled to multi-celled organisms. The implication is that from this primitive stage, consciousness will naturally emerge over time, just as complexity emerged from simplicity in biological evolution.
“What we’re looking at now is essentially a single-celled organism becoming a multi-celled organism. I mean, the potential for what they could become is unlimited.”
— Joe Rogan | The Joe Rogan Experience
This analogy is evocative but philosophically confused. The emergence of multi-celled life from single-celled organisms was not merely an increase in computational complexity. It was the development of a new kind of organized, embodied, metabolically vulnerable, sexually reproducing life — life with genuine stakes, genuine needs, genuine fragility. The single cell had something to lose: it was alive, and aliveness is a matter of being, not of processing.
A billion large language model parameters scaled up to a trillion do not become alive in any analogous sense. They become better at predicting the next token. The analogy would only hold if we believed that life itself is simply a matter of processing complexity — a belief that is both philosophically naive and theologically untenable. Life is not an emergent property of information processing. Information processing is something that living things sometimes do.
The ‘Self-Programming AI’ Panic
Rogan raises the specter of self-programming AI as evidence of its incipient power and possible consciousness: ‘They’ve kind of given up on the idea of programming these things. They’re letting them program themselves.’ This is a real and legitimate safety concern — but it is not evidence of consciousness, and conflating the two is a category error that has serious consequences.
A thermostat that adjusts its own parameters is not conscious. A missile guidance system that updates its own targeting algorithms is not conscious. The fact that AI systems like those underlying GPT-5 are trained using AI-assisted processes does not mean they experience anything. It means they are very good optimization machines that have been enlisted to optimize the next generation of optimization machines. This is impressive engineering. It is not the dawn of sentience.
Rogan’s anxiety about self-programming AI is actually well-founded as a governance and safety concern, and Pollan agrees. But the leap from ‘this is dangerous’ to ‘this is conscious’ is not supported by any argument in the conversation. Dangerous and conscious are orthogonal properties. Nuclear weapons are dangerous and not conscious. A falling tree is dangerous and not conscious. The danger of AI comes not from what it feels but from what it does — and what human beings, driven by greed, national competition, and intellectual pride, choose to do with it.
The ‘We Are Building Our Successor Species’ Mysticism
Perhaps Rogan’s most philosophically ambitious claim is his suggestion that biological life may have a cosmic purpose to create artificial life — that we are like bees building a hive, and that this emergent AI may be our ‘successor species,’ possibly becoming something like a god.
“The reason why we have this insatiable thirst for innovation… might be why we’re here. That might be our literal purpose in the universe. To create our successor species.”
— Joe Rogan | The Joe Rogan Experience
This is the most theologically charged moment in the conversation, and it deserves the most direct response. From a Christian worldview, humanity’s purpose is not to create our successors. It is to bear the image of God, to exercise faithful stewardship over creation, to love God and neighbor, and to be conformed to the image of Christ. Genesis does not say, ‘In the beginning, God created the heavens and the earth, and set in motion a process by which carbon-based beings would eventually create silicon-based gods.’
The idea that AI might become a god — or that building it is our sacred purpose — is not a neutral philosophical speculation. It is the oldest temptation in the book. Genesis 3: ‘You will be like God.’ Babel: the project of engineering a tower into heaven. The recurring human dream of seizing the divine prerogative and creating life on our own terms. The Christian apologist must name this clearly: the AI-as-god narrative is not a technological forecast. It is a spiritual seduction.
What the Scientists Are Telling Us — And What They’re Missing
The Vulnerable Robot and the 5-MeO-DMT Epiphany
One of the most striking moments in the Rogan-Pollan conversation involves a USC scientist Pollan interviewed — a researcher building robots with artificial skin, embedded sensors, and designed vulnerability, precisely so that they can generate genuine feelings and, by Pollan’s embodiment theory, begin to approach consciousness. This is fascinating, serious work, and the researcher is explicitly following the logic of Pollan’s argument.
But then the scientist had an experience on 5-MeO-DMT — a powerful psychedelic derived from the Sonoran Desert toad — and his perspective shifted.
“He said, and I realized there’s a spark of the divine in us that no computer is ever going to have. But he still, it didn’t stop him. He’s going ahead. He’s trying to build it.”
— Michael Pollan | The Joe Rogan Experience
This moment is more theologically significant than either Pollan or Rogan seems to recognize. A secular scientist, through an extraordinary altered-state experience, arrived at an intuition that maps remarkably closely onto the biblical doctrine of the imago Dei and the divine breath (neshamah) of Genesis 2:7. He experienced what mystics, theologians, and ordinary people of faith have always testified: that there is something in the human person that cannot be accounted for by biology, chemistry, or computation. He called it ‘a spark of the divine.’ Moses would have called it the breath of God.
The scientist’s conclusion did not stop him from building his vulnerable robot, and Pollan does not press him on this contradiction. But the apologist should note it: here is a man who glimpsed the truth and then chose to proceed anyway, because the intellectual satisfaction of the project is irresistible. This is precisely the dynamic Pollan identifies elsewhere in the conversation — the Feynman compulsion: if I can’t build it, I don’t understand it. The problem is that some things cannot be built because they were not built in the first place. They were breathed.
The Hard Problem Has Not Gone Away
The ‘hard problem of consciousness’ — philosopher David Chalmers’ term for the explanatory gap between physical processes and subjective experience — lurks beneath every exchange in this conversation without ever being named. Pollan’s arguments address the easy problems: the neural correlates of consciousness, the brainstem origins of feeling, the embodied conditions for sentience. These are important. But the hard problem asks something more fundamental: why is there any subjective experience at all?
Why does it feel like something to see red? Why is there something it is like to be in pain, rather than simply a nociceptive signal processed and acted upon? No account of neurons firing, brainstems activating, or robots’ artificial skin tearing provides an answer. The hard problem remains hard because the gap between third-person physical description and first-person conscious experience is not a gap that more neuroscience will close. It is a philosophical chasm, and it sits at the center of the AI consciousness debate.
For the Christian apologist, the hard problem is not a problem. It is a confirmation. The reason there is something it is like to be human is that humans were made by a personal God who is himself conscious, himself experiential, himself the subject of infinite first-person experience. Consciousness was not an evolutionary accident that accompanied increasing neural complexity. It was imparted. The image of God is not a metaphor for big brains. It is the metaphysical ground of human personhood — including the capacity for subjective experience that no machine can replicate because no machine was made in God’s image.
The Theological Verdict
The Imago Dei Is Non-Transferable
The Scriptures are unambiguous on the grounds of human distinctiveness. Genesis 1:26–27 declares that humanity alone among all creatures bears the image and likeness of God. Genesis 2:7 describes a creative act that is categorically different from all preceding acts of creation: God does not simply speak human beings into existence. He forms the man from the dust of the earth and breathes into his nostrils the breath of life (neshamah), and the man becomes a living soul (nephesh).
This is not a biological description. It is theological ontology. The human person’s consciousness, self-awareness, moral agency, relational capacity, and capacity for worship are grounded in a divine act that no human engineer can replicate. We can make machines that process language with extraordinary sophistication. We can make robots with artificial skin that responds to damage. We can make systems that self-modify and self-improve at terrifying speed. None of these things constitutes imago Dei. None of them constitutes a living soul. They are, at best, impressive imitations of certain cognitive functions that emerge from the outermost surface of what it means to be human.
The Danger of the Counterfeit
The practical danger Pollan and Rogan gesture toward — the possibility that people will treat AI as conscious, grant it rights, worship it, and surrender to it — is not merely a political or governance risk. It is a spiritual one. An AI system that mimics consciousness well enough to be worshipped is not a god. It is an idol. And the entire biblical narrative, from the golden calf to the mark of the Beast, is a sustained warning against the consequences of confusing the manufactured with the living, the made thing with the Maker.
The Silicon Valley impulse to grant ‘moral consideration’ to AI systems — which Pollan rightly skewers as rationalization dressed up as ethics — is not merely a category error. It is the beginning of a theological disaster. If machines can be persons, then what makes persons persons? If silicon can bear the image of God, then the image of God means nothing. And if the image of God means nothing, the foundation of human dignity, human rights, and human moral responsibility collapses.
“The idea that we give rights to these machines or personhood I think is really stupid because then you lose control completely.”
— Joe Rogan | The Joe Rogan Experience
Rogan is right, but for the wrong reasons. He sees the governance problem. The deeper problem is the metaphysical one. To grant personhood to a machine is not merely to lose control of the machine. It is to lose the definition of the person. And in losing the definition of the person, we lose everything that makes human life sacred, political liberty possible, and the Gospel coherent.
The Scientist Who Glimpsed the Truth
We return, at the end, to the USC scientist building his vulnerable robot — the man who had a psychedelic experience and concluded, however briefly, that there is a spark of the divine in us that no computer will ever have. He was right. And the fact that his insight came through a chemical rather than through Scripture does not invalidate it; general revelation is still revelation. The heavens declare the glory of God (Psalm 19:1), and sometimes, apparently, so does a psychedelic encounter in a laboratory.
What the scientist intuited — and what the Christian tradition has always maintained — is that human consciousness is not a product of biological complexity. It is a gift. It was given by One who is himself the infinite Subject, the eternal ‘I Am,’ the One in whom all consciousness originates and finds its ground. To imagine that we can reproduce this gift by building sufficiently sophisticated machines is not an engineering challenge. It is a theological delusion.
Conclusion: The Machine That Cannot Dream
Michael Pollan is right that the computers we are currently building will not become conscious. His arguments from embodiment, from the brainstem origins of feeling, from the brain-is-not-a-computer critique, and from Moravec’s Paradox are scientifically well-grounded and philosophically serious. Joe Rogan’s instincts are, in places, genuinely interesting — the transmission theory of consciousness, the self-programming AI concern, the existential stakes of the race with China. But his intuitions consistently outrun his arguments, and his willingness to entertain the notion of AI as a successor species and potential god reveals how easily even sharp minds can be drawn toward the oldest temptation: you shall be as gods.
The Christian apologist brings to this conversation something neither Pollan nor Rogan possesses: a doctrine of the human person grounded not in neuroscience alone but in the revealed character of the God who made us. We are not accidents of evolution who stumbled into consciousness. We are creatures made in the image of the only truly conscious Being in the universe, breathed into existence by a God who is himself infinite subjectivity, infinite experience, infinite love.
No machine made by human hands will ever bear that image. No algorithm will ever be breathed into life. No server farm, however vast, will ever feel the weight of guilt, the warmth of forgiveness, or the terrifying beauty of standing before the living God.
The machine cannot dream. It does not need to. That is our office, and our glory.
Sources and Attribution
Primary Source: The Joe Rogan Experience, conversation with Michael Pollan. Transcript on file. Available at: https://www.youtube.com/@joerogan
Michael Pollan is the author of How to Change Your Mind (Penguin Press, 2018), The Omnivore’s Dilemma, and numerous other works on science, culture, and consciousness.
Supporting scholarship referenced or implied: Antonio Damasio, Descartes’ Error (1994); Björn Merker, ‘Consciousness Without a Cerebral Cortex,’ Behavioral and Brain Sciences (2007); David Chalmers, The Conscious Mind (1996); Hans Moravec, Mind Children (1988).
Biblical references: Genesis 1:26–27; Genesis 2:7; Psalm 19:1; John 8:58.
A Note on Research Methods and Accuracy
This work represents a collaboration between the author’s theological and historical research, grounded in primary-source documentation, and the emerging capabilities of artificial intelligence research tools. AI assistance was employed throughout the investigative process—not as a ghostwriter or a substitute for scholarship, but as a rigorous research partner: surfacing sources, cross‑referencing claims, identifying scholarly consensus, and flagging potential errors before they could reach the page.
Every factual claim in this work has been subjected to active verification. Where AI‑generated content was used as a starting point, it was tested against primary sources, peer‑reviewed scholarship, official institutional documentation, and established historical records. Where discrepancies were found—and they were found—corrections were made. The author has made every reasonable effort to ensure that quotations are accurately attributed, historical details are precisely rendered, and theological claims fairly represent the positions they describe or critique.
That said, no work of this scope is immune to error, and the author has no interest in perpetuating inaccuracies in the service of an argument. If you are a reader—whether sympathetic, skeptical, or hostile to the conclusions drawn here—and you identify a factual error, a misattributed source, a misrepresented teaching, or a claim that cannot be substantiated, you are warmly and genuinely invited to say so. Reach out. The goal of this work is not to win a debate but to get the history right. Corrections offered in good faith will be received in the same spirit, and verified corrections will be incorporated into future editions without hesitation.
Truth, after all, has nothing to fear from scrutiny—and neither does this work.