
Artificial General Intelligence (AGI) is back in the news thanks to DeepMind’s recent launch of Gato. For most people, AGI conjures images of Skynet (from Terminator lore), which was originally developed as threat-analysis software for the military but quickly came to view humanity as the enemy. While fictional, this should give us pause, especially since militaries around the world are pursuing AI-based weapons.
Gato, however, doesn’t raise any of those concerns. DeepMind describes the deep learning transformer model as a “generalist agent” that performs 604 distinct and mostly mundane tasks with varying modalities, observations, and action specifications. It has been dubbed the Swiss Army knife of AI models. It is clearly much more general than other AI systems developed so far and, in that respect, appears to be a step toward AGI.
Multimodal Neural Networks
Multimodal systems are not new, as GPT-3 and others have shown. What is arguably new is the intent. GPT-3 was designed as a large language model for text generation; that it could also generate images from captions, produce programming code, and perform other functions were benefits that surfaced after the fact, often to the surprise of AI experts.
In comparison, Gato is intentionally designed to address many discrete functions. DeepMind explains, “The same network with the same weights can play Atari, annotate images, chat, stack blocks with a real robotic arm, and more, and based on its context decide whether to output text, joint torques, button presses, or other tokens.”
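The key idea behind that quote is that every modality, whether text, images, or robot actions, is serialized into one flat stream of tokens that a single sequence model can consume and emit. The toy sketch below illustrates the concept only; it is not DeepMind’s actual tokenization scheme, and the bin count and function names are invented for illustration.

```python
# Toy illustration of one shared token stream across modalities
# (not DeepMind's code; bin count and names are invented).

DISCRETE_BINS = 1024  # assumed number of bins for continuous values


def tokenize_text(subword_ids):
    """Text subword IDs map directly into the shared vocabulary."""
    return [("text", i) for i in subword_ids]


def tokenize_continuous(values, low=-1.0, high=1.0):
    """Continuous signals (e.g. joint torques) are clipped to [low, high]
    and uniformly binned into discrete tokens."""
    tokens = []
    for v in values:
        v = min(max(v, low), high)
        bin_id = int((v - low) / (high - low) * (DISCRETE_BINS - 1))
        tokens.append(("continuous", bin_id))
    return tokens


def build_sequence(observation_text, joint_torques, button_press):
    """Interleave modalities into one flat token sequence, roughly the
    way a generalist agent would see a single timestep."""
    seq = []
    seq += tokenize_text(observation_text)
    seq += tokenize_continuous(joint_torques)
    seq.append(("discrete", button_press))  # button press as a discrete action token
    return seq


seq = build_sequence([17, 912], [0.25, -0.9], button_press=3)
```

Because everything ends up in one vocabulary, the model’s output head is the same regardless of task; context alone determines whether the next token is interpreted as text, a torque, or a button press.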
Although DeepMind claims that Gato outperforms humans on many of these tasks, its first iteration yields less-than-impressive results on several activities. Observers have noted that it doesn’t perform especially well on many of the 604 tasks, with one summing it up as: “An AI program that does a mediocre job at many things.”
But this dismissal misses the point. Until now there has only been “narrow AI” or “weak AI”: systems fit for a single purpose, where “single purpose” means two things:
- An algorithm designed for one thing (e.g. developing beer recipes) cannot be used for anything else (e.g. playing a video game).
- Anything that one algorithm “learns” cannot be effectively transferred to another algorithm designed to serve a different specific purpose.
For example, AlphaGo, the DeepMind neural network that surpassed the human world champion at the game of Go, cannot play other games, even far simpler ones, nor take on any other task.
Strong AI
The other end of the AI spectrum is referred to as “strong AI” or alternatively AGI. This would be a single AI system – or possibly a group of interconnected systems – that could be applied to any task or problem. Unlike narrow AI algorithms, the knowledge gained through general AI can be shared and preserved between system components.
In a general AI model, the algorithm that can beat the world’s best at Go would be able to learn chess or any other game, as well as take on additional tasks. AGI is conceived as a generally intelligent system that can act and think much like humans. Murray Shanahan, professor of cognitive robotics at Imperial College London, said on the Exponential View podcast that AGI is “in some ways as smart as humans and capable of the same level of generalization that humans are capable of, as well as possessing the common sense that humans have.”
However, unlike humans, it works at the speed of the fastest computer systems.
A matter of scale
Nando de Freitas, a researcher at DeepMind, believes Gato is effectively an AGI demonstration that merely lacks the sophistication and scale achievable through further model refinement and additional computing power. At 1.18 billion parameters, Gato is relatively small, essentially a proof of concept that leaves considerable performance headroom for additional scaling.
Scaling AI models requires more data and more computing power for algorithm training. Data we have in abundance: last year, industry analyst firm IDC said, “The amount of digital data created over the next five years will be more than double the amount of data created since the advent of digital storage.” Computing power, meanwhile, has grown exponentially for decades, although there are indications this pace is slowing due to limits on the physical size of semiconductor features.
Nevertheless, the Wall Street Journal notes that chipmakers have pushed the technological frontier and found new ways to deliver more computing power, mostly through heterogeneous design, in which chips are built from many specialized modules. This approach is proving effective, at least in the short term, and will continue to drive model scaling.
Geoffrey Hinton, a professor at the University of Toronto and pioneer of deep learning, told Scale: “There are a trillion synapses in one cubic centimeter of the brain. If there is such a thing as general AI, [the system] would probably require a trillion synapses.”
AI models with more than a trillion parameters (the neural-network equivalent of synapses) are emerging; Google, for one, has developed a 1.6-trillion-parameter model. Yet this is not an example of AGI, and the consensus across several surveys of AI experts is that AGI remains decades away. Either Hinton’s synapse count captures only part of what AGI requires, or the expert opinions are conservative.
Perhaps the impact of scale is best illustrated by the progression from GPT-2 to GPT-3, where the main differences were more data, more parameters (1.5 billion for GPT-2 versus 175 billion for GPT-3), and more processing power (e.g., more and faster processors, some designed specifically for AI workloads). When GPT-3 came out, Arram Sabeti, a San Francisco-based developer and artist, tweeted: “Playing with GPT-3 feels like seeing the future. I’ve gotten it to write songs, stories, press releases, guitar tab, interviews, essays, technical manuals. It’s shockingly good.”
However, deep learning skeptic Gary Marcus believes there are “serious gaps in the scaling argument.” He claims that the scaling measures others have explored, like predicting the next word in a sentence, “do not equate to the kind of deep comprehension true AI [AGI] would require.”
Yann LeCun, chief AI scientist at Meta, Facebook’s parent company, and a past Turing Award winner, argued in a recent post following Gato’s release that there is no such thing as AGI. Moreover, he does not believe that scaling up models will reach that level; new concepts are still required. He acknowledges, though, that some of those concepts, such as generalized self-supervised learning, “may be around the corner.”
MIT assistant professor Jacob Andreas argues that Gato can do many things at once, but that is not the same as being able to meaningfully adapt to new tasks that differ from what it was trained on.
While Gato isn’t an example of AGI, there’s no denying that it offers a significant step beyond narrow AI. It’s further evidence that we’re entering a twilight zone, an ill-defined area between narrow and general AI. AGI, as discussed by Shanahan and others, could still be decades in the future, although Gato may have sped up the timeline.
Gary Grossman is Senior VP of Technology Practice at Edelman and Global Head of the Edelman AI Center of Excellence.
DataDecisionMakers
Welcome to the VentureBeat community!
DataDecisionMakers is the place where experts, including technical staff, working with data can share data-related insights and innovations.
If you want to read about innovative ideas and up-to-date information, best practices and the future of data and data technology, visit us at DataDecisionMakers.
You might even consider contributing an article of your own!
Read more from DataDecisionMakers