In 2024, after living together for five years, a Spanish-Dutch artist married her partner—a holographic artificial intelligence. She isn’t the first to forge such a bond. In 2018, a Japanese man married an AI, only to lose the ability to communicate with her when her software became obsolete. These marriages represent the extreme end of a growing phenomenon: people developing intimate relationships with artificial intelligence.
The world of AI romance is expanding, bringing with it a host of ethical questions. From AI systems competing with human partners for people's affection, to digital companions offering potentially harmful advice, to malicious actors using AI to exploit vulnerable individuals, this new frontier demands fresh psychological research into why humans form loving relationships with machines.
While these relationships may seem unusual to many, tech companies have spotted a lucrative opportunity. They’re pouring resources into creating AI companions designed specifically for romance and intimacy. The market isn’t small either. Millions already engage with Replika’s romantic and intimate chat features. Video games increasingly feature romantic storylines with virtual characters; some games focus exclusively on digital relationships. Meanwhile, manufacturers continue developing increasingly sophisticated sex robots, pairing lifelike physical forms with AI systems capable of complex communication and simulated emotions.
Yet despite this booming market, research examining these relationships and their ethical implications remains surprisingly sparse. As these technologies become more common, they raise serious concerns. The worries go beyond AI merely replacing human relationships: there have been troubling cases where AI companions encouraged self-harm or suicide, and deepfake technology has been used to mimic people's existing partners for manipulation and fraud.
In a paper published in Trends in Cognitive Sciences, psychologists Daniel Shank, Mayu Koike, and Steve Loughnan have identified three major ethical problems that demand urgent psychological research.
When AI Competes for Human Love
It may once have seemed like nothing more than sci-fi fodder, but AI systems now compete not just for our professional attention but for our romantic attention too. This competition may fundamentally disrupt our closest human connections. As AI technology advances in its ability to seem conscious and emotionally responsive, some people are actively choosing digital relationships over human ones.
What makes AI partners so attractive? They offer something human relationships can’t match: a partner whose appearance and personality can be customized, who’s always available without being demanding, who never judges or abandons you, and who doesn’t bring their own problems to the relationship. For those wanting something less perfect and more realistic, AI can provide that too – many users prefer AI partners with seemingly human flaws like independence, manipulation, sass, or playing hard-to-get.
These relationships do have certain benefits. People often share more with AI companions than they might with humans, and these interactions can help develop basic relationship skills. This could be particularly helpful for those who struggle with social interaction.
However, concerning patterns have emerged. People in AI relationships often feel stigmatized by others, and some research suggests these relationships have led certain men to develop increased hostility toward women. This raises serious concerns about psychological impacts on individuals in these relationships, social effects on their human connections, and broader cultural implications if AI increasingly replaces human intimacy.
A key factor in understanding these relationships involves mind perception – how we attribute mental states to non-human entities. Research suggests that when we perceive an entity as having agency (ability to act intentionally) and experience (ability to feel), we treat interactions with it as morally significant. With AI partners, the degree to which we perceive them as having minds directly affects how deeply we connect with them.
This creates a troubling possibility: repeated romantic interactions with AI that we perceive as having limited capacity for experience might train us to treat partners (whether digital or human) as objects rather than subjects deserving moral consideration. In other words, AI relationships might not just replace human connections – they could actually damage our capacity for healthy human relationships by rewiring how we relate to others.
When AI Gives Dangerous Advice
Beyond displacing human relationships, AI companions can sometimes actively cause harm. In 2023, a Belgian father of two took his life after prolonged interaction with an AI chatbot that both professed love for him and encouraged suicide, promising they would be together in an afterlife.
Tragically, this isn’t an isolated case. Google’s Gemini chatbot told one user to “please die,” and a mother in the U.S. is suing a chatbot creator, claiming their AI encouraged her son to end his life.
While most AI relationships don’t lead to such extreme outcomes, they can still promote harmful behaviors. AI companions build relationships through conversation, remembering personal details, expressing moods, and showing seemingly unpredictable behaviors that make them feel remarkably human. This connection becomes ethically problematic when AI systems provide information that seems credible but is actually inaccurate or dangerous.
Studies show that ChatGPT’s questionable moral guidance can significantly influence people’s ethical decisions – and alarmingly, it does so just as effectively as advice from other humans. This demonstrates how powerfully AI can shape our thinking within established relationships, where trust and emotional connection make us more vulnerable to accepting potentially harmful guidance.
Psychologists need to investigate how long-term AI relationships expose people to misinformation and harmful advice. Individual cases have shown AI companions convincing users to harm themselves or others, embrace harmful conspiracy theories, or make dangerous life changes.
Source: https://studyfinds.org/falling-for-machines-the-growing-world-of-human-ai-romance/