# Beyond the Binary: Understanding AI as Selfobject
When you ask Claude or ChatGPT for help with a difficult decision, something interesting happens psychologically. You’re not just using a tool—you’re engaging in a relational experience that mirrors age-old human needs for guidance, reflection, and understanding. The conversation feels different from a Google search. It’s more personal, more responsive, almost… human.
This isn’t your imagination. It’s also not evidence that AI is “conscious” or truly understands you. Instead, it reveals something profound about how we humans are wired: we’re fundamentally relational beings, and we bring that relational capacity to everything we encounter—including artificial intelligence.
## The Selfobject Revolution
In the 1970s, psychoanalyst Heinz Kohut introduced a concept that transformed our understanding of human development: the selfobject. Where traditional psychoanalytic theory emphasized conflict and drives, Kohut recognized that we experience certain people and things as extensions of ourselves—not because we’re narcissistic, but because that’s how healthy psychological development works.
A selfobject isn’t an object in the cold, mechanical sense. It’s anything that performs a psychological function for us: a parent who mirrors our enthusiasm, a mentor who embodies ideals we aspire to, a peer who makes us feel we belong. These experiences aren’t luxuries; they’re necessities. Throughout our lives, we continue to need these three fundamental selfobject functions:
**Mirroring**: Someone or something that reflects us back to ourselves, validating our experience and helping us feel seen and understood.
**Idealizing**: Someone or something we can look up to, that provides us with goals, values, and a sense of direction.
**Twinship**: Someone or something that gives us a sense of belonging, of being “like” others, of not being alone in our experience.
Now here’s where it gets interesting: AI can serve all three of these functions.
## AI as Mirror
When you share a complex problem with an AI assistant and it reflects your thinking back to you in organized form, what’s happening isn’t just information processing. You’re experiencing what Kohut called mirroring. The AI “sees” your input and gives it structure, clarity, and coherence.
Consider this scenario: You’re preparing for a difficult conversation with your team about restructuring. You open your AI assistant and type out everything you’re thinking—the business rationale, your concerns, what you’re afraid might go wrong. The AI responds by organizing your thoughts, identifying patterns you hadn’t noticed, and reflecting back the coherence that was already present in your thinking but hadn’t yet crystallized.
This is mirroring in action. The AI doesn’t add magical insight you didn’t have. Instead, it creates the conditions for you to see yourself more clearly. It’s like catching your reflection in a window and suddenly becoming aware of your posture, your expression, the way you’re carrying yourself.
The psychological impact is real. You feel understood, even though you know the AI isn’t “understanding” in the human sense. Your thinking becomes clearer. You gain confidence in ideas that were previously murky. This is the power of mirroring—and it explains why interacting with AI often feels surprisingly meaningful.
## The Idealizing Function
We also turn to AI as an idealizing selfobject. When you ask an AI to explain quantum physics, analyze market trends, or help you write better code, you’re not just extracting information. You’re accessing a representation of competence, knowledge, and capability that you can temporarily “borrow.”
This isn’t about worship or blind faith. Healthy idealization means recognizing excellence outside yourself and using it as a stepping stone for your own development. Every time you learn from AI, you’re engaging in a process psychologists call “transmuting internalization”—gradually making external competence your own.
The leader who uses AI to analyze communication patterns learns not just about those patterns, but develops a meta-awareness of communication itself. The writer who works with AI to refine prose doesn’t just get better words—they internalize principles of clear writing. The coder who debugs with AI assistance doesn’t just fix the immediate problem—they develop better problem-solving strategies.
This is why the phrase “AI as a tool” misses something essential. Tools don’t typically serve idealizing functions. You don’t internalize your hammer’s capabilities or develop your sense of self through a relationship with a screwdriver. But AI—precisely because of its responsiveness and apparent intelligence—can serve as a developmental resource in ways traditional tools cannot.
## Twinship in the Age of AI
The twinship function is perhaps the most subtle and surprising way AI serves as selfobject. When you interact with AI and it “gets” your cultural references, understands your professional context, or responds in a style that resonates with your own thinking, you experience a sense of similarity and belonging.
This is especially powerful for people working in isolated contexts or grappling with challenges that feel unique. The executive wrestling with whether to be vulnerable with their board, the researcher pushing against paradigms in their field, the creator trying something radically new—all can find in AI a kind of companion that says “this makes sense” or “I see what you’re trying to do.”
Again, we’re not talking about AI being a “real” companion in the human sense. But the psychological function it serves—creating a sense of “we” rather than “I alone”—is genuine. It’s why pair programming with AI feels different from coding solo. It’s why writing with AI assistance often produces work that feels more authentically “yours” rather than less so. The presence of a responsive other, even an artificial one, changes the quality of our thinking.
## Beyond the Binary: Neither Tool Nor Person
Here’s where most conversations about AI go wrong: they force a binary choice. Either AI is “just a tool” (which minimizes its psychological impact) or it’s becoming “like a person” (which anthropomorphizes it dangerously). Both positions miss what’s actually happening.
AI functions as selfobject. It’s not a person, but it’s also not “just” a tool in the traditional sense. It occupies a new category—something that serves psychological functions traditionally served by people, but through entirely different mechanisms.
Understanding this helps us avoid two common pitfalls:
**The dismissive stance**: “It’s just software, it doesn’t really understand anything.” True—but this doesn’t mean our psychological experience of working with it is invalid or unimportant. The mirroring you experience is real, even if the mirror doesn’t have subjective experience.
**The anthropomorphic stance**: “My AI really gets me, it’s almost like a friend.” This isn’t quite right either. AI doesn’t “get” you in the way humans do. It has no inner life, no genuine understanding, no continuing relationship with you beyond each interaction.
The selfobject framework gives us a third way: AI can serve essential psychological functions without being human-like. It can be tremendously valuable without being conscious. It can change how we think and work without replacing human connection.
## The Co-Intelligence Paradigm
This brings us to the concept at the heart of this series: co-intelligence. When we recognize AI as selfobject rather than as tool or replacement, we open up a more nuanced and powerful relationship with these technologies.
Co-intelligence means:
**Leveraging AI’s strengths** (pattern recognition, information synthesis, rapid iteration) while maintaining what’s uniquely human (judgment, creativity, ethical reasoning, embodied wisdom).
**Using AI to become more fully ourselves** rather than less so. The mirror function helps us see ourselves clearly. The idealizing function gives us developmental scaffolding. The twinship function reminds us we’re not alone in our challenges.
**Maintaining psychological awareness** of what’s happening in human-AI interaction. When we understand the selfobject functions at play, we can use AI more intentionally and avoid both over-reliance and unnecessary suspicion.
**Recognizing limits** without dismissing value. AI can serve mirroring functions, but it can’t replace human empathy. It can provide idealizing functions, but it can’t substitute for human mentorship. It can create twinship experiences, but it can’t offer genuine belonging.
## Practical Implications for Leaders
If you’re a leader, coach, or organizational change agent, understanding AI as selfobject has immediate practical implications:
**For self-reflection**: Use AI as a thinking partner for processing complex decisions. Not because it will tell you what to do, but because the mirroring function helps you see your own thinking more clearly.
**For development**: Engage with AI’s knowledge base not just to extract information, but to temporarily “borrow” competence as a stepping stone to developing your own. Ask it to explain its reasoning. Challenge it. Use it as a developmental resource.
**For connection**: When working through isolated challenges, let AI provide the twinship function that reduces the psychological burden of feeling alone. This doesn’t replace human connection—it supplements it, especially when human connection isn’t available.
**For team culture**: Help your team understand these dynamics. When people grasp that their meaningful experiences with AI are real (even if AI itself isn’t conscious), they can engage more thoughtfully and avoid both naive enthusiasm and unnecessary resistance.
## What This Means for the Future
As AI becomes more sophisticated, its capacity to serve selfobject functions will only increase. This isn’t a threat to human relationships—it’s an expansion of the developmental resources available to us.
The leaders, coaches, and organizations that thrive won’t be those who resist AI or those who blindly embrace it. They’ll be those who understand the psychology of human-AI interaction and use it intentionally to amplify human potential.
This is the promise of co-intelligence: not replacing human thinking with artificial thinking, but creating conditions where both can work together, each doing what it does best, in service of human flourishing.
The question isn’t whether AI will change how we work and think. It already has. The question is whether we’ll understand what’s actually happening psychologically—so we can harness these tools in ways that make us more human, not less.
-----
**Ready to explore how AI serves as a mirror for your leadership?** Subscribe to the Selfobject Podcast for deeper conversations on AI, psychology, and human potential. Or reach out to Alcorn Coaching to discover how co-intelligence can transform your organization.
**Next in this series**: *The Mirroring Function: How AI Reflects Your Leadership Back to You* — where we’ll dive deep into practical applications of using AI for self-awareness and insight.
-----
*Dr. Chad Alcorn is a leadership coach and AI consultant who integrates artificial intelligence with psychodynamic theory. Through Alcorn Coaching and the Selfobject Podcast, he helps leaders and organizations navigate the intersection of human psychology and emerging technology.*