What does it mean to be human? For centuries, philosophers and theologians answered this question by pointing to a familiar set of capacities—reason, speech, and intelligence—abilities once understood as reflecting the likeness of God. Yet the rise of artificial intelligence has unsettled that confidence.
Machines now reason, speak with eloquence, create works we recognize as imaginative or artistic, and even serve as one’s personal therapist … for better or worse. But if intelligent expression can be programmed, what remains distinct about the human being? And what, if anything, remains divine about our personhood if it can be reproduced in machines of our own making?
For decades, we have accepted that machines surpass us in speed and calculation. Computer technology was regarded as impressive yet ultimately mechanical. Useful? Yes, but not truly intelligent. Now, however, AI has crossed that conceptual boundary. Has the logos, Western civilization’s emblem of divine reason and speech, been deepfaked? Or were we mistaken to see something of the transcendent in our own capacity for self-expression?
After decades of exploring these questions through science fiction, we are now compelled to ask them in earnest: How close could a machine come to being human? Could it feel awe before the cosmos, suffer from loneliness and yearn for companionship as we do?
The answer to the question of personhood, sharpened by the challenges of artificial intelligence, lies in the mystery of the “likeness of God” first spoken in Genesis.
Image and Likeness
In Genesis 1:27 and 5:1-2, humanity is described as created in the “image” and “likeness” of God. This dual description links the human to the divine, yet its meaning is never made explicit. What do these terms mean?
Jewish commentators offer a rich range of interpretations of the divine image and likeness. Onkelos renders the divine spirit bestowed upon man by God as ruach memalela, a “speaking spirit,” situating the essence of personhood in the gift of language. Rashi explains the word tzelem (image) as the divine design imprinted on the human form, while Seforno emphasizes that this image does not denote physical form. This distinction anticipates later interpretations that link both the divine image and likeness, demut, to the capacities of mind and language rather than to bodily form. In the Western philosophical tradition, the imago Dei has often been associated with the logos—the divine and human capacity for speech and reason. In the Guide of the Perplexed, Maimonides extends this idea further, identifying the divine image and likeness with the powers of abstract thought and intellectual apprehension.
Yet in an age when machines outperform humans in countless analytical tasks and display strategic creativity and abstraction beyond our own, can we still claim that intellect or speech defines the divine likeness? If intelligence or language alone determines what it means to be human, then either AI must now be counted among us, or we must concede that neither intelligence nor speech can capture the essence of personhood.
This dilemma invites a deeper question: If machines can imitate every outward sign of thought, what distinguishes genuine personhood from its simulation? If personhood is defined by measurable outward functions such as speech and intelligence, then, as futurist Ray Kurzweil argues in How to Create a Mind, machines will eventually learn to replicate them.
A Lesson from Star Trek: The Next Generation
But is there anything human that cannot be simulated?
A vivid dramatization of this question appears in Star Trek: The Next Generation, when Data, the emotionless android who desires to be human, believes he has experienced a feeling for the first time. Naturally, he approaches this mystery as a scientific hypothesis and seeks verification from his human colleague, Geordi La Forge:
Lt. Cmdr. Data: Geordi, I believe I have experienced my first emotion.
Lt. Cmdr. Geordi La Forge: No offense, Data, but how would you know a flash of anger from some odd kind of power surge?
Data: You are correct that I have no frame of reference to confirm my hypothesis. In fact, I am unable to provide a verbal description of the experience. Perhaps you could describe how it feels to be angry.
Geordi: Well, when I feel angry, first I feel … hostile.
Data: Could you describe feeling hostile?
Geordi: It’s like feeling … belligerent, combative.
Data: Could you describe feeling angry without referring to other feelings?
Geordi: No, I guess I can’t. I just … feel angry.
Data: That was my experience as well. I simply … felt angry.
The humor of the exchange lies in its irony. Data, the android, cannot understand anger because he has never experienced it, but Geordi, a human being, cannot define anger for an android either. Emotion resists translation into technical terms. Though impossible to describe precisely, the feeling itself remains real and immediate. The scene captures a fundamental truth about subjectivity: Even the most ordinary human feelings defy exact articulation, yet they are what make our experience vividly real.
But why is it so hard to pin down the experience of anger? Geordi could have provided its physiological markers—increased heart rate, elevated blood pressure, a surge of stress hormones—but Data, advanced as he is, would have known that already. Still, none of it would tell him what anger feels like. Even if neuroscientists could map every physical correlate of emotion, their analysis would fall short of the core phenomenon: the felt experience of being a conscious subject.
Here Star Trek ventures into what the philosopher David Chalmers called “the hard problem of consciousness”: the idea that even though we can describe the physical stimuli and correlates of our sensations and emotions, those descriptions are not identical with the experiences themselves. Each conscious state contains a unique quality of subjectivity that lies outside the measurable facts of experience. As Chalmers writes:
We experience visual sensations—the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations; mental images that are conjured up internally; the felt quality of emotion; and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them.
This ineffable quality of experience is precisely the feature of human existence that cannot be reduced to physical facts.
A Lesson from Thomas Nagel: Consciousness as Likeness
The observation that led to the articulation of the “hard problem of consciousness” was anticipated two decades earlier by a playful reflection in Thomas Nagel’s 1974 essay “What Is It Like to Be a Bat?” In this essay, the philosopher imagines trying to inhabit the mind of a bat by hanging upside down in his attic, equipped with sonar instruments and webbing on his hands and feet.
Nagel argues that even with complete scientific knowledge of a bat’s physiology and behavior, we would still not know what it is like to be that bat. The experience of being—a first-person point of view—cannot be captured from an external, quantitative perspective. And yet there is such a thing as the experience of a bat, just as there is the experience of a human being.
For Thomas Nagel, the inability to measure subjectivity does not make it unreal. Consciousness, he argues, belongs to a different order of being—one defined not by function but by the experience of being. His famous question about “what it is like to be a bat” reminds us that our own subjectivity is self-evident and irreducible. This understanding of consciousness as the inner likeness of being resonates, perhaps unexpectedly, with the biblical idea of the likeness of God: both view likeness as the ground of living experience rather than a composite of facts, traits, or abilities.
At the heart of the Genesis account thus lies a profound insight: The human being’s inner life, its capacity for consciousness, has a divine origin and a divine counterpart. What sets humanity apart, then, is not the superior ability for intelligence or speech but the very condition of being a subject, of experiencing the world from within. The Torah’s vision suggests that to be made in the likeness of God is to share, in finite form, the mystery of subjectivity. Both God and the human being have a “what it is like to be” them—a center of experience of which they are agents. In that likeness lies both the source of our uniqueness and our capacity for transcendence.
Beyond the Emotion Chip: Emergence and the Gift of Consciousness
But how does the experience of “what it is like to be someone” arise? Could consciousness simply emerge from complexity, from enough layers of information, circuits, or code? Or does it require something more fundamental?
Returning to Data: the show suggests that his inability to feel is the result of his initial design. Later in the episode, Data’s fleeting experience of anger is attributed to the introduction of a special “emotion chip” through which he might finally cross the threshold into full humanity. The fantasy of an “emotion chip” reveals a deeper assumption: that a technological upgrade might generate an inner world of sensation ex nihilo, producing subjectivity where none had previously existed.
What Star Trek imagines in science fiction is one of philosophy’s greatest riddles: How, if at all, can consciousness arise from matter? There is no doubt that mind and matter are connected, yet the nature of their relationship remains deeply contested. Iain McGilchrist outlines several possibilities: The brain may generate, transmit, permit, or participate in consciousness as part of a deeper unity we do not yet understand. But which is it? Does the brain create consciousness, or does it help process something that exists beyond itself? No model has conclusive empirical support.
Despite this uncertainty, one view has become dominant: the belief that the physical state of the brain produces and determines consciousness. For centuries, physicalism—the conviction that everything real is ultimately physical, and therefore reducible to quantitative measurement—has guided modern science. From the 16th to the 19th centuries, thinkers such as Francis Bacon and Thomas Hobbes sought to exclude the personal and subjective from their methods in pursuit of “objective” knowledge. Over time, measurement came to replace felt experience: The thermometer’s reading came to seem more real than the sensation of warmth; the inner experience of anger was translated into biological data, while the feeling of anger was dismissed as philosophically irrelevant. As physicist Adam Frank observes in The Blind Spot: Why Science Cannot Ignore Human Experience, “experience landed in science’s blind spot by design.”
Today, two major theories of consciousness dominate neuroscience: Integrated Information Theory (IIT) and Global Neuronal Workspace Theory (GNWT). Both propose that consciousness arises from the brain’s capacity to integrate and distribute information. In essence, they suggest that once a system becomes sufficiently complex, subjective awareness “emerges” as a byproduct of that organization.
These models illuminate how the brain processes information but fail to explain how such processing gives rise to experience—the feeling of anger, the pleasure of music, the color red. Both rest on the idea of emergence: the notion that consciousness simply appears once matter reaches a certain threshold of complexity. Yet by claiming that physical complexity and interconnectedness alone produce the spark of awareness in insentient matter, physicalist theories of consciousness demand a metaphysical leap of faith no less extravagant than religious accounts that view consciousness as a transcendent gift bestowed by the Creator. By defining our inner world as the illusory byproduct of brain complexity, physicalism not only sidesteps the mystery of consciousness but risks reducing the human being to a machine. Diminishing our sense of purpose, it offers little—and risks much.
Genesis offers a more coherent and optimistic vision. It suggests that consciousness is not an accidental outcome of complexity and brain chemistry but a deliberate act of divine creation, a gift bestowed upon humanity as part of its likeness to God. In this view, the inner life of the human being is not a random emergence from matter but an expression of divine intent and purpose.
In The Great Partnership, Rabbi Jonathan Sacks captures the philosophical and spiritual depth of the religious view of humanity as it stands in contrast to the nihilism of a purely materialist account:
That man, despite being the product of seemingly blind causes, is not blind; that being in the image [and likeness] of God he is more than an accidental collection of atoms; that being free, he can rise above his fears … that though his life is short, he can achieve immortality by his fire and heroism, his intensity of thought and feeling … and that in the mind of God none of our achievements are forgotten.
Returning to Data: even without a well-developed emotional experience, he displays something resembling human subjectivity—curiosity, desire for knowledge, personal preferences, moral awareness, and above all, the yearning to be human. All of these belong to the inner life of experience. If Data is capable of longing, of reaching toward what he ought to be, then he already carries within himself the essence of what it means to be human.