Researchers' algorithm designs soft robots that sense


IMAGE: MIT researchers have developed a deep learning neural network to aid in the design of soft-bodied robots, such as these representations of a robotic elephant.

Credit: Courtesy of Alexander Amini, Andrew Spielberg, Daniela Rus, Wojciech Matusik, Lillian Chin, et al.

There are some jobs that traditional robots – the rigid, metallic kind – simply are not cut out for. Soft robots, on the other hand, may be able to interact with humans more safely or slip into tight spaces with ease. But for robots to perform their programmed duties reliably, they need to know the location of all their body parts. That is a tall order for a soft robot that can deform in a virtually infinite number of ways.

MIT researchers have developed an algorithm to help engineers design soft robots that collect more useful information about their surroundings. The deep learning algorithm suggests an optimized placement of sensors within the robot's body, allowing it to better interact with its environment and complete assigned tasks. The advance is a step toward the automation of robot design. "The system not only learns a given task, but also how to best design the robot to solve that task," says Alexander Amini. "Sensor placement is a very difficult problem to solve. So, having this solution is extremely exciting."

The research will be presented in April at the IEEE International Conference on Soft Robotics and will be published in the journal IEEE Robotics and Automation Letters. The co-lead authors are Amini and Andrew Spielberg, both PhD students in MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). Other co-authors include MIT PhD student Lillian Chin and professors Wojciech Matusik and Daniela Rus.

Creating soft robots that complete tasks in the real world has been a long-standing challenge in robotics. Their rigid counterparts have a built-in advantage: a limited range of motion. A rigid robot's finite array of joints and limbs usually makes for manageable calculations by the algorithms that control mapping and motion planning. Soft robots are not so tractable.

Soft robots are flexible and pliant – they generally feel more like a bouncy ball than a bowling ball. "The main problem with soft robots is that they are infinitely dimensional," says Spielberg. "Any point on a soft-bodied robot can, in theory, deform in any way possible." That makes it difficult to design a soft robot that can map the location of its body parts. Past attempts have used an external camera to chart the robot's position and feed that information back into the robot's control program. But the researchers wanted to create a soft robot untethered from external aid.

"You can't put an infinite number of sensors on the robot itself," says Spielberg. "So the question is: How many sensors do you have, and where do you put those sensors in order to get the most bang for your buck?" The team turned to deep learning for an answer.

The researchers developed a novel neural network architecture that both optimizes sensor placement and learns to complete tasks efficiently. First, the researchers divided the robot's body into regions called "particles." Each particle's rate of strain was provided as an input to the neural network. Through a process of trial and error, the network "learns" the most efficient sequence of movements to complete tasks, such as gripping objects of different sizes. At the same time, the network keeps track of which particles are used most often, and it culls the lesser-used particles from the set of inputs for the network's subsequent trials.
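The particle-culling idea can be sketched in a toy form: fit a readout from simulated strain signals to a task target, then keep only the most heavily weighted particles as candidate sensor sites. The particle count, the synthetic strain data, and the linear readout below are illustrative assumptions for this sketch, not the authors' actual architecture or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a soft robot body discretized into 20 "particles".
# Each particle reports a strain value; in this synthetic example,
# only three particles actually carry information about the task.
n_particles, n_samples = 20, 500
informative = [3, 7, 12]  # ground-truth useful particles (our assumption)
strains = rng.normal(size=(n_samples, n_particles))
target = strains[:, informative].sum(axis=1) + 0.1 * rng.normal(size=n_samples)

# "Learn the task": fit a linear readout from strain inputs to the target.
w, *_ = np.linalg.lstsq(strains, target, rcond=None)

# Rank particles by learned importance and keep a sensor budget of 3,
# mimicking the idea of culling rarely used inputs between trials.
budget = 3
sensor_sites = np.argsort(np.abs(w))[::-1][:budget]
print(sorted(sensor_sites.tolist()))  # the heavily weighted particles
```

In this simplified setting, the learned weights concentrate on the informative particles, so the "sensor budget" recovers exactly the regions that mattered for the task – a linear stand-in for what the network's iterative culling accomplishes.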

By homing in on the most important particles, the network also suggests where sensors should be placed on the robot to ensure efficient performance. For example, in a simulated robot with a grasping hand, the algorithm might suggest that sensors be concentrated in and around the fingers, where precisely controlled interactions with the environment are vital to the robot's ability to manipulate objects. While that may seem obvious, it turns out the algorithm vastly outperformed humans' intuition on where to site the sensors.

The researchers pitted their algorithm against a series of expert predictions. For three different soft robot layouts, the team asked roboticists to manually select where sensors should be placed to enable the efficient completion of tasks like grasping various objects. Then they ran simulations comparing the human-sensorized robots with the algorithm-sensorized robots. And the results were not close. "Our model vastly outperformed humans for each task, even though I looked at some of the robot bodies and felt very confident about where the sensors should go," says Amini. "It turns out there are a lot more subtleties in this problem than we initially expected."

Spielberg says their work could help to automate the process of robot design. In addition to developing algorithms to control a robot's movements, "we also need to think about how we're going to sensorize these robots, and how that will interplay with other components of the system," he says. Better sensor placement could also have industrial applications, especially where robots are used for delicate tasks like gripping: "That's something where you need a very robust, well-optimized sense of touch," says Spielberg. "So, there's potential for immediate impact."

"Automating the design of sensorized soft robots is an important step toward rapidly creating intelligent tools that help people with physical tasks," says Rus. "The sensors are an important aspect of the process, enabling the soft robot to 'see' and understand the world and its relationship with the world."

###

This research was partly funded by the National Science Foundation and the Fannie and John Hertz Foundation.

Written by Daniel Ackerman, MIT Press Office

Paper: "Co-Learning of Task and Sensor Placement for Soft Robotics"

https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9345345

Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.
