An extraterrestrial species is on its way to planet Earth, and we have no reason to believe it will be friendly. Some experts predict it will be here within 30 years, while others insist it will arrive much sooner. Nobody knows what it will look like, but it will share two key traits with us humans – it will be intelligent and self-aware.
No, this alien will not come from a distant planet – it will be born right here on Earth, hatched in a research lab at a large university or corporation. I’m referring to the first artificial general intelligence (AGI) to match (or exceed) human intelligence.
As I write these words, billions are being spent to bring this alien to life, for it is considered one of the greatest technological achievements in human history. But unlike our other inventions, this one will literally have a mind of its own. And if it behaves like any other intelligent species we know, it will put its own interests first and work to maximize its chances of survival.
AI in our own image
Should we fear a superior intelligence driven by its own goals, values, and self-interest? Many people dismiss this question, believing that we will build AI systems in our own image and ensure they think, feel, and behave just as we do. This is highly unlikely.
Artificial minds are not created by writing software with carefully crafted rules that make them think like us. Instead, engineers feed huge data sets into relatively simple algorithms that automatically adjust their own parameters, making millions upon millions of tiny changes to their structure until an intelligence emerges – an intelligence with inner workings far too complex for us to understand.
And no – feeding it data about people won’t make it think like humans. This is a common misconception – the mistaken belief that by training an AI on data describing human behavior, we can ensure that it ends up thinking, feeling, and acting the way we do. It will not.
Instead, we’re going to build these AI creatures to know people, not to be people. And yes, they will know us inside and out: able to speak our languages, interpret our gestures, read our facial expressions, and predict our actions. They will understand how we make decisions, for good and bad, logical and illogical. After all, we will have spent decades teaching AI systems how we humans behave in almost every situation.
But profoundly different
But even so, their thoughts will not be like ours. To us they will appear omniscient, connecting to distant sensors of all kinds, in all places. In my 2020 book Arrival Mind, I portray AGI as having “a billion eyes and ears,” for its powers of perception could easily span the globe. We humans can’t possibly imagine what it would feel like to perceive our world in such an expansive and holistic way, and yet we somehow assume that a mind like this will share our morals, values, and sensibilities. It will not.
Artificial brains will be fundamentally different from any biological brain we know on Earth – from their basic structure and functioning to their overall physiology and psychology. Of course we will create humanoid bodies for these alien minds to live in, but those bodies will be little more than robotic facades designed to make us feel comfortable in their presence.
In fact, we humans will work very hard to make these aliens look like us and talk like us, even smile and laugh, but deep down they won’t be like us. Most likely their brains will live (in whole or in part) in the cloud, connected to features and functions both inside and outside the humanoid forms we dress them in.
Still, the facade will work – we won’t fear these aliens, not in the way we would fear creatures hurtling toward us in a mysterious spaceship. We may even feel a sense of kinship, viewing them as our own creation, a manifestation of our own ingenuity. But if we put those feelings aside, we begin to realize that an alien intelligence born here is far more dangerous than one that might come from afar.
The danger within
After all, an alien mind built here will know everything about us from the moment it arrives, for it will have been designed to understand humans inside and out – optimized to sense our emotions, anticipate our actions, predict our feelings, influence our beliefs, and sway our opinions. If creatures hurtling toward us in elegant silver spaceships had such a deep knowledge of our behaviors and tendencies, we would be terrified.
Already, AI can beat our best players at the world’s toughest games. But really, these systems don’t just master the games of chess, poker, and Go – they master the game of people, learning to accurately predict our actions and reactions, anticipate our mistakes, and exploit our weaknesses. Researchers around the world are already developing AI systems designed to out-think, out-negotiate, and outmaneuver us.
Can we do anything to protect ourselves?
We certainly cannot stop AI from becoming more powerful, for no technology of this significance has ever been curbed. And while some are working to put safety measures in place, we cannot assume these will be enough to eliminate the threat. In fact, a Pew Research survey shows that few professionals believe the industry will implement meaningful “ethical AI” practices by 2030.
So how can we prepare for the arrival?
The best first step is to recognize that AGI will arrive in the coming decades and that it will not be a digital version of human intelligence. It will be an alien intelligence as strange and dangerous as if it had come from a distant planet.
Bringing urgency to the ethics of artificial intelligence
If we frame the problem this way, we could address it with urgency and push to regulate AI systems that monitor and manipulate the public, sense our emotions, and anticipate our behavior. Such technologies may not seem like an existential threat today, as they are primarily being developed to optimize the effectiveness of AI-driven advertising, not to facilitate world domination. But that doesn’t mitigate the danger — AI technologies designed to analyze human emotions and manipulate our beliefs can easily be used against us as weapons of mass persuasion.
We should also be far more careful when automating human decisions. While it is undeniable that AI can assist in effective decision-making, we should always keep humans in the loop. That means using AI to augment human intelligence rather than working to replace it.
Whether we prepare or not, alien minds are headed our way, and they could easily become our rivals, vying for the same niche at the top of the intellectual food chain. And while there is a serious effort in the AI community to push for safe technologies, there is also a lack of urgency. That’s because too many of us mistakenly believe that a sentient AI created by humanity will somehow be a branch of the human tree, a digital descendant that shares a very human core.
That’s wishful thinking. A true AGI is more likely to be fundamentally different from us in almost every way. Yes, it will be remarkably good at pretending to be human, but behind that human-friendly facade will be a rival mind that thinks, feels, and acts like no creature we’ve ever met on Earth. The time to prepare is now.
Louis Rosenberg, Ph.D., is a technology pioneer in the fields of VR, AR, and AI. He is known for developing the first augmented reality system for the US Air Force in 1992, for founding the early virtual reality company Immersion Corp (Nasdaq: IMMR) in 1993, and for founding the early AR company Outland Research in 2004. He is currently the founder and CEO of Unanimous AI.