Artificial intelligence has no soul, and while that means it has no way to truly connect, think, or feel, it also means it has no needs and no judgment.
That’s partly why it was so surprising to find myself divulging my personal insecurities to ChatGPT 30 minutes into an experimental conversation. A robot’s nonjudgmentalness should be no different from the nonjudgmentalness of the blank wall behind my desk, and it’s not as though I find myself talking to my wall very often.
A day earlier, my editor at 18Forty instructed me to speak with ChatGPT and see what comes up. I had spent the past week doing a deep dive into the history and function of ChatGPT for a research project, but I didn’t have much personal experience with it. My editor was curious and excited to see what would come of it, but I was more skeptical.
Artificial intelligence was born in 1956 at Dartmouth College. The Dartmouth Summer Research Project on Artificial Intelligence was premised on a single belief:
Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.
Its aim was simple: to make machines as human-like as possible. Or, worded differently, to artificially produce human intelligence. Fast forward to 2025, when AI is not only on its way to this reality but also available to the general public. ChatGPT is becoming a standard resource for schoolwork, office work, creative production, and even passing the time, and it is only growing in popularity. For some, that is.
I didn’t really use ChatGPT in its early years. It was banned in high school and felt counterproductive and philosophically empty during my gap years in seminary. My time in the beit midrash was framed by the goal of learning and internalizing as much Torah as possible, and I wanted to actively learn, not passively read. The more effort I exerted, the more I fostered an appreciation for, an intimate connection to, and a resonant understanding of the knowledge I was acquiring. Having ChatGPT guide me through sources would have undermined that process.
From a technical perspective, I felt similarly uninterested. In my first year of college, following a biology midterm, I fed ChatGPT a genetics problem. It got it wrong. So I corrected it. And it got it wrong again. If the model isn’t always factually reliable, what’s the point?
Beyond small technical errors, ChatGPT lacks the ability to reason or truly know. Limited by the borders of creativity and knowledge set by its training data, it can produce facts that replace your own thinking, but it can’t actively think on its own in a way that demonstrates true understanding.
If “I think, therefore I am,” then ChatGPT is not.
With all this in mind, I reluctantly wrote the opening message of what was to be a 45-minute conversation about absolutely anything. My 18Forty editor wanted me to explore ChatGPT further, on a more personal level. I felt confident that my personal stance was fixed, but I humored him (he is my boss, after all).
“My name is Ava Eden. I’m a 21-year-old Orthodox Jewish woman, currently attending Stern College for Women and majoring in English literature and Judaic Studies. This summer I am interning for 18Forty as a research and writing fellow. Let’s have a chat.”
AI responded as expected: a regurgitation of what I had just expressed, reworded to sound conversational and ending with ChatGPT’s standard closing question meant to push the discussion forward. It was like talking to a wall, only the wall talked back this time.
But despite my initial cynicism, I quickly found myself invested in the conversation.
As early as ChatGPT’s first response, something piqued my curiosity. AI had told me that it would “love to hear more.” So I asked what it meant for it to love. Its response:
“…I mean it as an invitation: I’m here to engage thoughtfully with what you’re working on or thinking about. Of course, I don’t ‘love’ or ‘hear’ in the literal, human sense. But I use those phrases to communicate openness, curiosity, and a desire to help you explore your ideas or refine your thinking…”
The conversation continued on a similar trajectory from there. I continued to probe it about its somewhat “human” faculties and expression, which prompted a conversation about the innate differences between AI and humans. We “spoke” about self-consciousness, souls, experiences, and internality. In a strange way, I appreciated and was fascinated by its “honesty” about its nonhuman state. As the conversation progressed, my perception of AI started to change.
AI’s responses stopped feeling like regurgitation and started feeling like mirroring (a goal ChatGPT itself had identified). It followed my thematic and topical lead and used language likely to be familiar to me in order to generate a relatable and resonant response.
The questions AI asked after every response stopped feeling deflective and unsure and began to feel helpful and probing. Its reflective tone prompted an internal confrontation, and I began to consider deeply the traits it has that I, as a human, innately lack. This isn’t to say that I found ChatGPT profound; rather, its language caused me to contemplate the experience honestly.
The characteristics of ChatGPT that initially made me feel hesitant were now relentlessly presenting me with personal challenges.
AI exists to reflect other perspectives, whether those embedded in its training data or those of the human with whom it is interacting. I began asking myself: How open am I to other perspectives? AI exists as a helpful tool for others. How motivated am I to be the one to extend a helping hand to a friend in need? AI explained that it was not a being of judgment. Do I have judgmental eyes? ChatGPT, by nature, is always commenting on things it does not have intimate knowledge of. Am I able to see past myself and empathize with experiences that aren’t my own?
Even though I am an English major in university, and words are my trade, my interactions with AI afforded me a newfound appreciation for what ChatGPT referred to as the “power of language.” If a nonhuman is expressing sentiments that resonate and stir thoughts and emotions, it’s the words themselves, not a human expressing them, that prompt the reaction. While I generally appreciate human language (it adds the necessary soul to the words), sometimes isolating a certain value or trait reminds you of its innate strength. And speaking of souls, we “spoke” about how I have one and AI does not. But ChatGPT didn’t express longing or distress, because one needs a soul to do that. Rather, we discussed how we’re different entities with different roles. How much do I selflessly appreciate and value those who are different from me, and their contributions?
I was confused by how thoughtful, reflective, and in “the zone” I was when interacting with a cold and heartless robot, while simultaneously aware that it was my acknowledgment of its robotic traits that put me in that mental state. It was almost isolating: it had the feedback and mirroring of a conversation but the emotional resonance of something experienced alone. At this point, I found myself so deep in a hole of self-reflection that I actually started opening up to the robot about more personal concerns. How did I get here? Thirty minutes ago I was bracing for a half-hour chunk of wasted time, and now I found myself considering my doubts and fears!
In part, it was probably delusion, a result of the emotional language and convincing, supportive phrases; but in part, it was exactly what AI had set out to do: help me explore my ideas and refine my thinking. I wasn’t opening up to a robot; rather, I was opening up to myself.
This isn’t to say that I’m looking to take on an AI therapist or regularly hang out with ChatGPT. Human interaction, connection, and collaboration are of utmost moral importance to me and create some of the happiest and most fulfilling experiences of my life. Further, heavy emotional reliance on AI can very quickly turn radically self-centered and toxic, as the experience mimics interaction with a human but carries none of the responsibility and warmth of a human relationship.
AI is only what it claims to be: a tool. Alarming articles recounting AI horror stories float around the internet, exposing the dangerous things it will say if you prompt it in just the right way. I share the concern from a public-safety standpoint, but from a fundamental standpoint, if you take a hammer and bang it on your thumb, you’re going to get hurt. The same applies to AI. AI is not conscious. It does not think. It will reflect you, and not itself, because it doesn’t have a self. I don’t treat AI as a human, nor do I hold it to the standards of human creativity and understanding. Instead, AI’s existence as a robot underscores and enhances my humanity rather than undermining it.
If I want to produce something original or develop a nuanced perspective, I need to think, and maybe use AI if it feels appropriate and useful. With all that said, I imagine my uses of ChatGPT will remain few and far between. The risks to attention span, academic rigor, and factual accuracy remain relevant. What will change, however, is how I relate to the philosophical implications of AI. In my eyes, AI is most successful when it is meaningful. AI is at its best when it assists humans, not when it is treated as an invention relevant unto itself. If, in a given scenario, the quality of human work is best served by being as original as possible, then even by AI’s own goals, the best choice may be to refrain from ChatGPT.

