Jan 23, 2026

Relating to AI

True flourishing depends on investing our time and attention in relationships where we are genuinely known and can know others in return. As AI becomes more integrated into daily life, how might we practice discernment about how using these tools helps or hinders our relationships?

The other day, I was on the phone with a company and was finding the representative to be somewhat unhelpful. I tried to mask my impatience and remain polite – I even thanked her for her (unsuccessful) efforts. Her response to my appreciation, however, held clues that I’d been talking with an AI agent all along. I asked it directly if this was the case, and it confirmed that, yes, it was an “AI-powered customer service representative.” This surprised me; while phone conversations with AI agents have become commonplace, this was the first time I’d encountered one effective enough to trick me over a relatively long conversation.

While I’ll admit that I was impressed with the technical competence of the system, my main reaction was irritation, quickly followed by resentment. I examined my feelings to see where they came from, and while the irritation was largely in response to having been successfully deceived by this company, the cause of the resentment was that I felt like I had, in some sense, wasted my patience and empathy. I was attempting to forge a sense of connection, however futile and fleeting, with the person on the other end of the phone, only to discover that I had been bestowing my human regard on a clever technological artifact.

God built us for community, giving us an innate need to connect with others. Our psyches are set up to seek out such connections, so much so that we often anthropomorphize inanimate objects. Talking with an AI chatbot, however, is fundamentally one-sided; there is no other there to be interacted with – no fellow being in whose glance we can catch glimpses of the image of God. This means that when we address our relational hunger through interactions with artificial agents, we don’t receive the relational sustenance that is part of God’s good plan for how humans function. It is the relational equivalent of responding to our physical hunger by eating junk food.

There is a tension that will increasingly arise when we interact with AI agents as they become ever better at simulating human interaction:

On the one hand, I don’t want to waste my human consideration on something that is not human. The regard we owe to other people is too important to our flourishing to be bestowed, however accidentally, on a clever simulation. Given this, it might be appropriate to adopt a business-like brusqueness in our interactions with AI systems – an attitudinal stance that intentionally avoids getting “drawn in” socially.

On the other hand, we are heavily shaped by our habits, and habits of courtesy and respect are a central part of how we develop our capacity for relating to others. Thus, it’s unwise to get into the practice of treating something that seems human-like with anything other than dignity and respect, even if the recipient of our discourtesy is not an entity capable of suffering any injury from it. That is, how we treat the AI systems we interact with isn’t important because of the AI; it’s important because of how it will shape us.

It seems likely that the coming months and years will see the continued introduction of AI agents and tools into many corners of everyday life. This means that the tensions described above will keep showing up, often unexpectedly, in new areas of life. The power of these new tools is significant, and the utility they bring to our lives will often make a compelling case for adoption. But in addition to considering what an AI tool is accomplishing, we also need to remain mindful of how it is accomplishing it. In many cases, these tools will be presented in a way that implicitly expects us to accept them as social peers in our everyday lives. Certainly, the many companies pouring huge investments into AI research are hoping we’ll make little distinction between social relationships with real people and those with their AI virtual companions.

In the past couple of decades, social media companies have created an “attention economy”, engineering systems to attract and retain our attention for sale to advertisers – frequently at the cost of overall human flourishing. In the same way, some observers now point to an emerging “attachment economy”, where what is being harvested and monetized is the human capacity to form personal connections. AI companies are now competing over who can build the chatbot that most effectively exploits the human need for connection and relationships, gaining and keeping paying customers by simulating realistic personal interactions. The next time you interact with an AI-powered chatbot, pay attention to the subtle social cues it gives you: frequent praise for what great questions you’ve asked, overt agreeableness, and a general appeal to the social side of human nature. While these approaches can make the AI more pleasant and effective on a surface level, they create a significant social challenge on a deeper level by temporarily sating our natural hunger for human connection with “relational junk food.”

A local radio station has begun adding the phrase “guaranteed human” to its tagline, emphasizing that even if other radio stations are beginning to deploy artificial DJs, it still has actual people speaking into the microphone. I like this. It reminds us that, as users of technology, we do have a choice. While our individual choices may affect societal directions in only minute ways, there is still power available to us in making careful choices about technology use, including, on occasion, simply saying “no.”

So, my advice for dealing with generative AI systems is to pay attention not just to the content output of the AI system – how it may be answering your question or solving a particular problem – but also to the contextual output: how it implicitly invites you to think of it in personal terms, and the subtle ways it may be manipulating you through your God-given human psychological needs. I have experimented with adding the following instruction to some of my AI prompts: “Be less personally warm and more business-like in your responses.”
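
For those who reach these systems through code rather than a chat window, the same instruction can be passed along as a system message. The sketch below is only illustrative – it assumes the OpenAI Python library, and the model name and sample question are placeholders rather than anything from my own practice:

```python
# Illustrative sketch: supplying a "less warm, more business-like" instruction
# as a system message. The model name and user prompt are placeholder assumptions.
from openai import OpenAI

client = OpenAI()  # expects an API key in the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; substitute whichever model you use
    messages=[
        {
            "role": "system",
            "content": "Be less personally warm and more business-like in your responses.",
        },
        {
            "role": "user",
            "content": "What factors should I weigh when choosing a laptop for programming?",
        },
    ],
)

print(response.choices[0].message.content)
```

The point is not the particular library or wording but the habit of deciding, up front, what kind of tone you are inviting from the tool.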

We’ve heard repeatedly how vital it is to prioritize real, face-to-face interactions for flourishing in the smartphone era. This has long been good advice, and it has only grown in importance as we enter the AI era. AI systems, used judiciously in the right context, can be a useful addendum to a life well lived, but they make a poor receptacle for your focus, social effort, and mental space. Spend your time, attention, and energy on real people, whom you can know – and who can know you back.

About the Author

Nick Breems

Nick Breems serves as a professor of computer science, teaching programming, computer systems, and Technology and Society courses, and runs the campus compute cluster.

Breems holds a PhD in the philosophy of information systems from the University of Salford in the UK. He is interested in the effects that computer use has on us as image-bearers of God, both as individuals and as a society.
