Algorithms have always been at home in the digital world, where they are trained and further developed in perfectly simulated environments. The current wave of deep learning is facilitating AI’s leap from the digital to the physical world. The applications are endless, from manufacturing to agriculture, but there are still hurdles to overcome.
For traditional AI specialists, deep learning (DL) is old hat. The breakthrough came in 2012, when Alex Krizhevsky's AlexNet showed what convolutional neural networks, the hallmark of deep learning, could do at scale in image recognition. It is neural networks that have enabled computers to see, hear and speak. DL is why we can talk to our phones and dictate email to our computers. Still, DL algorithms have always played their role in the safe, simulated confines of the digital world. Pioneering AI researchers are now working hard to bring deep learning into our physical, three-dimensional world. Yes, the real world.
Deep learning could do a lot to improve your business, whether you are an automaker, a chipmaker or a farmer. But although the technology has matured, the leap from the digital to the physical world has proven harder than many anticipated. That is why we have been talking for years about smart fridges that order our groceries, yet nobody has one. When algorithms leave their cozy digital nests and have to fend for themselves in three very real and raw dimensions, there is more than one challenge to overcome.
Automating annotation
The first problem is accuracy. In the digital world, algorithms can get away with accuracies of around 80%. That’s not quite enough in the real world. “If a tomato harvesting robot only sees 80% of all tomatoes, the grower is missing out on 20% of their sales,” says Albert van Breemen, a Dutch AI researcher who developed DL algorithms for agriculture and horticulture in the Netherlands. His AI solutions include a robot that cuts leaves from cucumber plants, an asparagus harvesting robot, and a model that predicts strawberry harvests. His company is also active in the world of medical manufacturing, where his team has created a model that optimizes the production of medical isotopes. “My customers are used to 99.9% accuracy and expect the same from AI,” says Van Breemen. “Every percent loss of accuracy will cost them money.”
To reach that level, AI models need to be retrained constantly, which requires a steady flow of fresh data. Data collection is both expensive and time-consuming, because all of that data needs human annotation. To meet this challenge, Van Breemen has equipped each of his robots with features that let them know whether they are doing well or badly. When they make mistakes, the robots upload only the specific data they need to improve on. This data is collected automatically across the entire robot fleet, so instead of thousands of images, Van Breemen’s team receives only about a hundred, which are labeled, tagged and sent back to the robots for retraining. “A few years ago everyone said data is gold,” he says. “Now we see that data is actually a giant haystack hiding a gold nugget. So the challenge is not just collecting a lot of data, but the right kind of data.”
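Van Breemen does not describe his implementation in detail, but a common way to build that kind of filter is to upload only the frames where the on-robot model is uncertain. Here is a minimal sketch in Python; the `Detection` class, the `needs_annotation` helper and the confidence thresholds are illustrative assumptions, not his actual code:

```python
# Hypothetical sketch: keep only the frames where the detector is unsure,
# so annotators see ~100 hard examples instead of thousands of routine ones.
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str
    confidence: float  # 0.0 - 1.0, reported by the on-robot model

def needs_annotation(detections: List[Detection],
                     low: float = 0.5, high: float = 0.9) -> bool:
    """Flag a frame for upload if any detection falls in the 'uncertain' band."""
    return any(low <= d.confidence < high for d in detections)

def select_for_upload(frames):
    """frames: iterable of (image_id, detections) coming from the whole fleet."""
    return [image_id for image_id, dets in frames if needs_annotation(dets)]

# Example: only the second frame would be uploaded for human labeling.
frames = [
    ("greenhouse_a/001.jpg", [Detection("tomato", 0.97)]),
    ("greenhouse_a/002.jpg", [Detection("tomato", 0.62)]),
]
print(select_for_upload(frames))  # ['greenhouse_a/002.jpg']
```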
His team has developed software that automates retraining on new experiences. His AI models can now retrain themselves for new environments, effectively taking humans out of the loop. The team also found a way to automate the annotation process itself, by training an AI model to do much of the annotation work for them. Van Breemen: “It’s a bit paradoxical, because you could argue that a model that can annotate photos is the same model I need for my application. But we train our annotation model on a much smaller dataset than our target model. The annotation model is less accurate and still makes mistakes, but it’s good enough to create new data points that let us automate the annotation process.”
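This is essentially pseudo-labeling. A minimal sketch of the idea, where the `predict` and `fit` interfaces and the 0.8 confidence threshold are assumptions made for illustration:

```python
# Hypothetical pseudo-labeling sketch: a small "annotation model" labels new
# images, and only its confident predictions become training data for the
# larger target model.

def pseudo_label(annotation_model, unlabeled_images, threshold=0.8):
    """Return (image, label) pairs the annotation model is confident about."""
    labeled = []
    for image in unlabeled_images:
        label, confidence = annotation_model.predict(image)  # assumed API
        if confidence >= threshold:
            labeled.append((image, label))
    return labeled

def retrain_target(target_model, human_labeled, machine_labeled):
    """Mix the small human-annotated set with the machine-annotated set."""
    training_set = list(human_labeled) + list(machine_labeled)
    target_model.fit(training_set)  # assumed API
    return target_model
```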
The Dutch AI specialist sees great potential for deep learning in the manufacturing industry, where AI could be used for applications such as error detection and machine optimization. The global smart manufacturing industry is currently valued at $198 billion and has a projected growth rate of 11% by 2025. The Brainport region around the city of Eindhoven, where Van Breemen’s company is headquartered, is teeming with world-class manufacturing companies such as Philips and ASML. (Van Breemen has worked for both companies in the past.)
The sim-to-real gap
A second challenge in applying AI to the real world is that physical environments are far more varied and complex than digital ones. A self-driving car trained in the US will not automatically work in Europe, with its different traffic rules and signage. Van Breemen ran into this when he had to deploy his leaf-cutting DL model in another grower’s cucumber greenhouse. “If this had happened in the digital world, I would just have taken the same model and trained it on the new grower’s data,” he says. “But this particular grower ran their greenhouse with LED lighting, which gave all of the cucumber images a bluish-purple cast that our model didn’t recognize. So we had to adjust the model to correct for this real-world variation. All these unexpected things happen when you take your models from the digital world and apply them to the real world.”
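The article does not say exactly how the model was adjusted, but one common fix for this kind of lighting shift is to augment the training images with random color changes so the network stops relying on a specific hue. A sketch using torchvision; the transform values are illustrative, not the settings Van Breemen’s team used:

```python
# Color-jitter augmentation: randomly shift brightness, contrast, saturation
# and hue during training so a bluish-purple LED cast no longer breaks the model.
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3, hue=0.1),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# Applied to each image as it is loaded, e.g. inside a torchvision
# ImageFolder / DataLoader pipeline:
# dataset = torchvision.datasets.ImageFolder("cucumber_images", transform=train_transform)
```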
Van Breemen calls this the “sim-to-real gap”: the discrepancy between a predictable, unchanging simulated environment and unpredictable, ever-changing physical reality. Andrew Ng, the renowned Stanford AI researcher and co-founder of Google Brain, who is also working to apply deep learning to manufacturing, speaks of “the proof-of-concept to production gap.” It is one of the reasons why 75% of all AI projects in manufacturing never get off the ground. According to Ng, paying more attention to cleaning your dataset is one way to close that gap. The traditional view in AI has been to focus on building a good model and letting the model deal with noise in the data. In manufacturing, however, a data-centric view can be more useful, because datasets are often small; improving the data then translates directly into improving the overall accuracy of the model.
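In practice, data-centric work often comes down to finding and fixing inconsistent labels rather than tuning the architecture. A toy sketch of one such check, flagging images that two annotators labeled differently; the records below are invented for illustration:

```python
# Toy data-centric check: in a small defect-detection set, the same image
# labeled differently by two annotators is a data problem worth fixing
# before touching the model.
from collections import defaultdict

annotations = [
    ("weld_017.png", "annotator_a", "defect"),
    ("weld_017.png", "annotator_b", "no_defect"),   # disagreement -> review
    ("weld_018.png", "annotator_a", "no_defect"),
    ("weld_018.png", "annotator_b", "no_defect"),
]

labels_per_image = defaultdict(set)
for image, _, label in annotations:
    labels_per_image[image].add(label)

to_review = [img for img, labels in labels_per_image.items() if len(labels) > 1]
print(to_review)  # ['weld_017.png']
```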
Aside from cleaner data, another way to bridge the sim-to-real gap is CycleGAN, an image-to-image translation technique that maps between two different domains and was popularized by face-aging apps like FaceApp. Van Breemen’s team investigated CycleGAN for manufacturing environments. They first trained a model that optimized the movements of a robotic arm in a simulated environment, in which three simulated cameras observed a simulated arm picking up a simulated object. They then trained a CycleGAN-based algorithm that translated real-world images (three real cameras watching a real robotic arm picking up a real object) into simulated-looking images, which the model trained in simulation could work with. Van Breemen: “A robotic arm has many moving parts. Normally you would have to program all of these movements beforehand. But if you give it a clearly defined goal, such as picking up an object, it first optimizes the movements in the simulated world. Through CycleGAN you can then use that optimization in the real world, which saves many man-hours.” Each factory using the same AI model to operate a robotic arm would need to train its own CycleGAN to adapt the generic model to its own specific real-world conditions.
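The article does not include the team’s code, but the core of the CycleGAN idea can be sketched in a few lines of PyTorch. The networks and image tensors below are placeholders; a full CycleGAN also trains two adversarial discriminators and uses identity losses, which are omitted here:

```python
# Minimal sketch of the CycleGAN idea for sim-to-real: two generators map
# real camera images to "simulated-looking" images and back, and a cycle
# loss forces the round trip to return the original image.
import torch
import torch.nn as nn

def small_generator():
    # Placeholder generator; real CycleGANs use ResNet- or U-Net-style nets.
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
    )

G_real2sim = small_generator()   # real camera frame -> simulated style
G_sim2real = small_generator()   # simulated frame   -> real style
l1 = nn.L1Loss()
opt = torch.optim.Adam(list(G_real2sim.parameters()) +
                       list(G_sim2real.parameters()), lr=2e-4)

real_batch = torch.rand(4, 3, 64, 64)   # stand-in for real camera images
sim_batch = torch.rand(4, 3, 64, 64)    # stand-in for simulator renders

fake_sim = G_real2sim(real_batch)
fake_real = G_sim2real(sim_batch)
cycle_loss = (l1(G_sim2real(fake_sim), real_batch) +
              l1(G_real2sim(fake_real), sim_batch))

opt.zero_grad()
cycle_loss.backward()
opt.step()

# At inference time, a policy trained in simulation can consume
# G_real2sim(camera_frame) instead of the raw camera frame.
```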
Reinforcement learning
The field of deep learning keeps growing and evolving. Its new frontier is reinforcement learning. Here, algorithms are transformed from mere observers into decision-makers that give robots instructions on how to work more efficiently. Standard DL algorithms are programmed by software developers to perform a specific task, such as moving a robotic arm to fold a box. A reinforcement learning algorithm could figure out that there are more efficient ways to fold boxes outside of its pre-programmed range.
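To make that difference concrete: a pre-programmed controller always executes the same folding sequence, while even the simplest reinforcement setup, an epsilon-greedy bandit sketched below with invented reward numbers, will discover a better sequence on its own:

```python
# Toy illustration of the reinforcement idea: instead of always executing one
# pre-programmed folding sequence, an agent tries alternatives, observes a
# reward (e.g. seconds saved per box), and gradually prefers the best one.
# The reward numbers are invented for illustration.
import random

sequences = ["preprogrammed", "variant_a", "variant_b"]
true_reward = {"preprogrammed": 1.0, "variant_a": 1.3, "variant_b": 0.8}

estimates = {s: 0.0 for s in sequences}
counts = {s: 0 for s in sequences}
epsilon = 0.1  # fraction of the time the agent explores a random sequence

for step in range(2000):
    if random.random() < epsilon:
        choice = random.choice(sequences)              # explore
    else:
        choice = max(estimates, key=estimates.get)     # exploit best so far
    reward = true_reward[choice] + random.gauss(0, 0.1)  # noisy observation
    counts[choice] += 1
    estimates[choice] += (reward - estimates[choice]) / counts[choice]

print(max(estimates, key=estimates.get))  # typically 'variant_a'
```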
It was reinforcement learning (RL) that enabled an AI system to beat one of the world’s top Go players in 2016. Now RL is slowly finding its way into production. The technology is not yet mature enough for widespread use, but according to experts it is only a matter of time.
Albert van Breemen uses RL to optimize the operation of an entire greenhouse. He does this by letting the AI system decide how the plants can grow most efficiently, so that the grower maximizes profit. The optimization takes place in a simulated environment, where thousands of possible growth scenarios are tried out. The simulation plays with growth variables such as temperature, humidity, lighting and fertilizer, then selects the scenario in which the plants grow best. The winning scenario is translated back into the three-dimensional world of a real greenhouse. “The bottleneck is the sim-to-real gap,” explains Van Breemen. “But I really expect those problems to be solved in the next five to ten years.”
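Stripped of the deep learning, the scenario search itself is easy to picture. Below is a toy sketch that uses plain random search over an invented growth model rather than Van Breemen’s simulator or a full RL agent; the parameter ranges and yield formula are made up for illustration:

```python
# Toy version of the greenhouse search described above: sample thousands of
# growth scenarios in a simulated model and keep the most profitable one.
import random

def simulated_yield(temp_c, humidity, light_hours, fertilizer):
    """Invented growth model: peaks near 22 C, 75% humidity, 16 h of light."""
    return (100
            - 2.0 * (temp_c - 22) ** 2
            - 0.05 * (humidity - 75) ** 2
            - 1.5 * (light_hours - 16) ** 2
            + 4.0 * fertilizer - 3.0 * fertilizer ** 2)

best_scenario, best_yield = None, float("-inf")
for _ in range(10_000):
    scenario = {
        "temp_c": random.uniform(15, 30),
        "humidity": random.uniform(50, 95),
        "light_hours": random.uniform(10, 20),
        "fertilizer": random.uniform(0.0, 2.0),
    }
    y = simulated_yield(**scenario)
    if y > best_yield:
        best_scenario, best_yield = scenario, y

print(best_scenario)  # the settings to try in the real greenhouse
```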
As a trained psychologist, I am fascinated by the transition of AI from the digital to the physical world. It shows how complex our three-dimensional world really is and how much neurological and mechanical dexterity is required for simple actions like cutting leaves or folding boxes. This transition makes us more aware of our own internal, brain-driven “algorithms” that help us navigate the world, which have taken millennia to develop. It will be interesting to see how AI will compete with this. And when the AI finally catches up, I’m sure my smart fridge will order champagne to celebrate.
Bert-Jan Woertman is the director of the Mikrocentrum.