HAL in 2001: A Space Odyssey. Ava in Ex Machina. Scarlett Johansson’s hot-blooded Samantha in Her. For decades, pop culture has promised us a future in which artificial intelligence (AI) is sophisticated enough to form relationships with humans. But at every turn, the future predicted by writers, directors, and actors has missed the mark.
Pop culture’s first AI-human relationship was the brainchild of Mary Shelley, who wrote Frankenstein in 1818. In doing so, she set readers dreaming of a day when artificial beings imbued with empathy could fulfill people’s desire for genuine connection.
That day has now arrived, thanks to remarkable innovations in artificial intelligence. People form relationships with AI on a daily basis, and machine learning is so deeply woven into everyday human life that it would be nearly impossible to disentangle our relationships with machines from our relationships with other sentient beings. Every time we ask Google Assistant to call a loved one, rely on facial recognition to upload photos to the web, or browse an online store for a gift for a friend or family member, we integrate AI into our social, family, and romantic lives.
But even though the future has arrived, it looks nothing like what the movies predicted. When it comes to pop culture and AI-human interactions, Hollywood got it all wrong. A quick look at pop culture’s interpretations of the AI-human relationship reveals serious flaws. In 2001: A Space Odyssey, HAL 9000, Discovery One’s onboard computer, goes rogue when Dave and Frank decide to disconnect it, and a computer glitch is translated into the AI’s version of a homicidal mental breakdown.
In Her, the operating system Samantha has impeccable language skills and such a gift for conversation that her user, Theodore, falls in love with her. She’s not just a virtual assistant – she’s an invisible seductress.
In Ex Machina, the so-called gynoid Ava has a human face on her robotic body, but she also has a fully human emotion – hatred – in her computer-generated heart.
In each of these dramatic interpretations, algorithms push AI into emotion. Hate, love, psychosis – these are human experiences that the flesh-and-blood writers of these dramas imposed on their computerized characters, and they leave anyone who watches these films hoping to gain a true understanding of AI’s potential with little to go on.
There is no doubt that AI has made impressive leaps in recent years, but despite its progress, the technology is still in its infancy. The truth about AI has very little to do with the dramatic portrayals of popular culture. Here’s what we actually know.
Yes, humans form bonds with AI. But even as they do, they know that AI isn’t human.
ElliQ, a voice-enabled care companion, has improved the lives of many seniors by keeping them engaged and active in their own homes. She is digital and AI-powered, yet the seniors who use her have reported feeling less lonely, especially during the long COVID-19 lockdowns. She tells jokes, encourages people to exercise, reminds them to drink water, and offers conversation as an antidote to loneliness.
But despite her skills and sense of humor, all of ElliQ’s users say they know she’s not a real person. The bonds they form with her are different than their relationships with the people who make up their support circle.
We saw this distinctly different human-robot relationship as we watched people interact with Jenny, the AI sales coach at the heart of our immersive sales simulations. At companies like Zoom, Jenny is treated as a team member and has even been given her own HR profile. She holds live conversations with sellers to help them improve their performance.
But while she is as friendly and approachable as a human team member, our research shows that her appeal actually stems from the fact that she is not human: she offers a sober assessment without embarrassing her practice partner, and being AI-driven removes shame and inhibition from her coaching sessions. A computer can only evaluate against established criteria, so those who train with her can improve with fewer negative feelings.
One caveat: for AI to be successful, people want to know from the very first moment that they are talking to a computer.
As AI continues to extend its emotional reach, companies need to remember that deception is the number one barrier to AI’s success. If people are fooled into believing they are talking to a human when it is actually an AI, the deception will ultimately fail them and sever the emotional tie. But when people know from the start that they’re talking to a robot, they subconsciously adjust their communication: they don’t argue and they don’t get overly personal.
In the future, this transparency about AI will open significant channels for emotional healing, mental health treatment, and social and professional growth. The story of Joshua Barbeau using an AI to converse with his dead fiancée to help him through his grief is a powerful indicator of the potential of embracing AI without deception.
Of course, we have to proceed with caution. Amid a shortage of therapists and a pandemic-era mental health crisis, mental health chatbot therapy, delivered through apps like Talkspace, is fast becoming mainstream. Yet it remains extremely risky. AI has shown promise as a frontline tool against the growing mental health crisis, particularly in suicide prevention, but the technology is young and test data is scarce. There are no quick fixes or Hollywood endings, even with the most advanced technology.
When we speak of a future in which humans and robots converse and form emotional bonds, there is no doubt that that future is already here. But unlike the dramatic foreshadowing of movies and literature, it has arrived with far less fanfare. AI technology is still very new, and its ability to help people develop and grow shows promise. But if you think you know AI from the movies, think again.
Ariel Hitron is co-founder and CEO of Second Nature.