Massachusetts Institute of Technology (MIT) last week unveiled a new method for training robots that uses generative artificial intelligence (AI) models. The technique combines data from different domains and modalities and unifies them into a shared language that large language models (LLMs) can process. MIT researchers claim this method could give rise to general-purpose robots capable of handling a wide range of tasks without each skill needing to be trained from scratch individually.
MIT Researchers Develop AI-Inspired Technique to Train Robots
In a newsroom post, MIT detailed the novel methodology for training robots. Currently, teaching a robot a new task is a difficult proposition because it requires large amounts of simulation and real-world data; without data covering how the task unfolds in a given environment, the robot struggles to adapt to it.
This means that for every new task, a fresh dataset covering every simulation and real-world scenario is needed. The robot then undergoes a training period in which its actions are optimised and errors and glitches are removed. As a result, robots are generally trained for a single specific task, and the multi-purpose robots of science fiction movies have yet to appear in reality.
However, a new technique developed by researchers at MIT claims to bypass this challenge. In a paper published on the preprint server arXiv (note: it is not peer-reviewed), the scientists highlighted that generative AI can help with this problem.
For this, data from different domains, such as simulations and real robots, and different modalities, such as vision sensors and robotic arm position encoders, were unified into a shared language that an AI model can process. A new architecture dubbed Heterogeneous Pretrained Transformers (HPT) was developed to perform this unification.
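The newsroom post does not include implementation details, but the unification step can be illustrated with a rough sketch: each modality gets its own small encoder, sometimes called a stem, that projects its raw inputs into a common token space, so that a single model can read them all as one sequence. Everything below, including the module names, dimensions, and shapes, is an illustrative assumption rather than the authors' code.

```python
import torch
import torch.nn as nn

# Hypothetical modality "stems": each projects inputs from one modality
# into the same d_model-dimensional token space. All names and sizes
# here are illustrative, not taken from the HPT paper.
D_MODEL = 256

class VisionStem(nn.Module):
    """Maps a batch of image feature vectors to shared tokens."""
    def __init__(self, feat_dim=512, d_model=D_MODEL):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)

    def forward(self, img_feats):                  # (batch, feat_dim)
        return self.proj(img_feats).unsqueeze(1)   # (batch, 1, d_model)

class ProprioStem(nn.Module):
    """Maps joint angles / position-encoder readings to shared tokens."""
    def __init__(self, num_joints=7, d_model=D_MODEL):
        super().__init__()
        self.proj = nn.Linear(num_joints, d_model)

    def forward(self, joints):                     # (batch, num_joints)
        return self.proj(joints).unsqueeze(1)      # (batch, 1, d_model)

# Tokens from both modalities are concatenated into one shared sequence:
vision_tokens = VisionStem()(torch.randn(4, 512))
proprio_tokens = ProprioStem()(torch.randn(4, 7))
shared_sequence = torch.cat([vision_tokens, proprio_tokens], dim=1)
print(shared_sequence.shape)  # torch.Size([4, 2, 256])
```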
Interestingly, the lead author of the study, Lirui Wang, an electrical engineering and computer science (EECS) graduate student, said that the inspiration for this technique was drawn from AI models such as OpenAI’s GPT-4.
The researchers placed a transformer, the same architecture that underpins LLMs such as GPT-4, in the middle of their system, where it processes both vision and proprioception (the sense of self-movement, force, and position) inputs.
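To make that concrete, here is a minimal sketch of such a shared transformer "trunk": it consumes the combined vision-and-proprioception token sequence from the previous sketch and emits an action prediction. The trunk/head split mirrors the description above, but the layer sizes and the action head are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SharedTrunkPolicy(nn.Module):
    """Shared transformer trunk plus an illustrative action head."""
    def __init__(self, d_model=256, n_heads=4, n_layers=4, action_dim=7):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.trunk = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Task-specific head: predicts an action vector (e.g. target
        # joint velocities) from the first output token.
        self.action_head = nn.Linear(d_model, action_dim)

    def forward(self, tokens):                   # (batch, seq_len, d_model)
        encoded = self.trunk(tokens)
        return self.action_head(encoded[:, 0])   # (batch, action_dim)

policy = SharedTrunkPolicy()
actions = policy(torch.randn(4, 2, 256))  # tokens from both modalities
print(actions.shape)  # torch.Size([4, 7])
```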
The MIT researchers state that this new method could make training robots faster and less expensive than traditional methods, largely because a smaller amount of task-specific data is needed to train the robot on various tasks. Further, the study found that this method outperformed training from scratch by more than 20 percent in both simulation and real-world experiments.
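One plausible reason less task-specific data suffices is the standard transfer-learning pattern: keep the pretrained trunk frozen and train only a small task-specific head on the new demonstrations. The sketch below, reusing the hypothetical SharedTrunkPolicy above, illustrates that pattern; the paper's actual fine-tuning recipe may differ.

```python
# Transfer-learning sketch: reuse a pretrained trunk, train only a new
# task-specific head on a small batch of demonstrations. (A common
# pattern, assumed here; not the paper's exact fine-tuning recipe.)
import torch

policy = SharedTrunkPolicy()          # would hold pretrained weights
for param in policy.trunk.parameters():
    param.requires_grad = False       # freeze the shared trunk

optimizer = torch.optim.Adam(policy.action_head.parameters(), lr=1e-4)
loss_fn = torch.nn.MSELoss()

# One gradient step on a (tiny) batch of new-task demonstrations:
tokens = torch.randn(4, 2, 256)       # tokenized observations
expert_actions = torch.randn(4, 7)    # demonstrated actions
loss = loss_fn(policy(tokens), expert_actions)
loss.backward()
optimizer.step()
```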