Artificial intelligence (AI) comes in many forms, from pattern-recognition systems to generative AI. However, there is another type of AI that can respond to real-world data almost instantly: embodied AI.
But what exactly is this technology and how does it work?
Embodied AI typically combines sensors with machine learning to respond to real-world data. Examples include autonomous drones, self-driving cars, and factory automation. Robot vacuums and lawn mowers use a simplified form of embodied AI.
These autonomous systems use AI to learn to navigate obstacles in the physical world. Most embodied AI relies on an algorithmically encoded map that is, in many ways, akin to the mental map of London’s labyrinthine network of roads and landmarks carried by the city’s taxi drivers. In fact, research into how London taxi drivers plan a route has been used to inform the development of such embodied systems.
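As a rough illustration, such a map can be represented as a weighted graph of landmarks and road segments, with a shortest-path search standing in for route planning. The sketch below uses Dijkstra's algorithm over a hypothetical mini-map (all node names and distances are invented for this example):

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm over a weighted road graph.

    graph: dict mapping node -> list of (neighbour, distance) pairs.
    Returns (total_distance, route as a list of nodes).
    """
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        dist, node, route = heapq.heappop(queue)
        if node == goal:
            return dist, route
        if node in visited:
            continue
        visited.add(node)
        for neighbour, step in graph.get(node, []):
            if neighbour not in visited:
                heapq.heappush(queue, (dist + step, neighbour, route + [neighbour]))
    return float("inf"), []

# Hypothetical mini-map: landmarks as nodes, road lengths as edge weights.
roads = {
    "Depot": [("A", 2), ("B", 5)],
    "A": [("B", 1), ("Goal", 7)],
    "B": [("Goal", 2)],
}
print(shortest_route(roads, "Depot", "Goal"))  # (5, ['Depot', 'A', 'B', 'Goal'])
```

Real systems layer localization and obstacle avoidance on top of this, but the core idea of searching a stored map is the same.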
Some of these systems also draw on the kind of embodied group intelligence found in swarms of insects, flocks of birds, or herds of animals. These groups unconsciously synchronize their movements. Imitating this behavior is a useful strategy for coordinating a fleet of drones or warehouse vehicles controlled by an embodied AI.
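This kind of synchronization is often modeled with boids-style alignment rules, where each agent repeatedly nudges its heading toward the group average. A minimal sketch (the update rate and headings are illustrative, not drawn from any specific system):

```python
def align_headings(headings, rate=0.5):
    """One synchronization step: every agent nudges its heading
    (in degrees) toward the group's mean heading."""
    mean = sum(headings) / len(headings)
    return [h + rate * (mean - h) for h in headings]

# Three agents starting with very different headings.
headings = [0.0, 90.0, 180.0]
for _ in range(5):
    headings = align_headings(headings)
# After a few steps, all headings have converged close to the mean (90°).
```

Full flocking models add separation and cohesion rules alongside alignment, but even this single rule produces the emergent "everyone turns together" effect.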
History of Embodied AI
The development of embodied AI began in the 1950s with the cybernetic turtles created by William Grey Walter of the Burden Neurological Institute in the United Kingdom. But it would take decades for embodied AI to come into its own. While cognitive and generative AI learn from large language models, embodied AI learns from its experiences in the physical world, just as humans respond to what they see and hear.
However, the sensory inputs of embodied AI are very different from human senses. Embodied AI can detect X-rays, ultraviolet and infrared light, magnetic fields, or GPS data. Computer vision algorithms can then use this sensory data to identify and respond to objects.
Building a world model
The central element of an embodied AI is its world model, a representation of its operating environment. This world model is analogous to our own mental picture of our surroundings.
The world model can be built with different learning approaches. One example is reinforcement learning, which uses a policy-based approach to determining a route – for example, with rules such as “always do X when you encounter Y”.
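As a toy illustration of how such “when Y, do X” rules can be learned rather than hand-coded, the sketch below runs a bare-bones Q-learning loop over two hypothetical states and then reads the resulting policy off the value table (the states, actions, and rewards are all invented for this example):

```python
import random

random.seed(0)  # make the toy run reproducible

states = ["obstacle_ahead", "clear_path"]
actions = ["turn_left", "move_forward"]
# Q maps (state, action) -> estimated value, initially zero.
Q = {(s, a): 0.0 for s in states for a in actions}

def reward(state, action):
    # Hypothetical reward signal: moving forward on a clear path is good,
    # turning away from an obstacle is good, everything else is penalized.
    if state == "clear_path" and action == "move_forward":
        return 1.0
    if state == "obstacle_ahead" and action == "turn_left":
        return 1.0
    return -1.0

alpha = 0.5  # learning rate
for _ in range(200):
    s = random.choice(states)
    a = random.choice(actions)
    # One-step Q-learning update toward the observed reward.
    Q[(s, a)] += alpha * (reward(s, a) - Q[(s, a)])

# The learned policy: in each state, pick the highest-valued action.
policy = {s: max(actions, key=lambda a: Q[(s, a)]) for s in states}
```

After training, `policy` encodes exactly the kind of rule the text describes: “when you encounter an obstacle, turn left”.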
Another approach is active inference, which is inspired by how the human brain works. These models continuously take in data from the environment and update the world model in real time based on that stream – much as we react to what we see and hear. By contrast, many other AI models do not update themselves in real time.
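At its core, this continuous updating can be sketched as Bayesian belief revision: the agent holds a probability over some state of the world and revises it with every noisy sensor reading. A minimal illustration, using a hypothetical door sensor with an assumed 80% accuracy:

```python
def update_belief(p_open, reading, accuracy=0.8):
    """Bayes' rule: update P(door is open) given a noisy
    'open'/'closed' sensor reading of the given accuracy."""
    likelihood_open = accuracy if reading == "open" else 1 - accuracy
    likelihood_closed = 1 - accuracy if reading == "open" else accuracy
    numerator = likelihood_open * p_open
    return numerator / (numerator + likelihood_closed * (1 - p_open))

belief = 0.5  # start with no prior knowledge either way
for reading in ["open", "open", "closed", "open"]:
    belief = update_belief(belief, reading)
# Mostly-"open" readings leave the belief high, despite one contradictory reading.
```

Active inference goes further – the agent also chooses actions that it expects to reduce its uncertainty – but this update loop is the part that runs continuously as sensor data arrives.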
Active inference starts with only a basic understanding of its environment, but it can learn quickly. Even so, any autonomous vehicle relying on active inference requires extensive training before it can be safely deployed on the roads.
Embodied AI could also help chatbots deliver a better customer experience by reading a customer’s emotional state and tailoring their responses accordingly.
Although embodied AI systems are still in their infancy, research is advancing rapidly. Improvements in generative AI will naturally inform the development of embodied AI. Embodied AI will also benefit from improvements in the accuracy and availability of the sensors it uses to perceive its surroundings.