Robots sense human touch using camera and shadows

ITHACA, NY – Soft robots may not be in touch with human feelings, but they are getting better at feeling human touch.

Cornell University researchers have created a low-cost method for soft, deformable robots to detect a range of physical interactions, from pats to punches to hugs, without relying on touch at all. Instead, a USB camera located inside the robot captures the shadow movements of hand gestures on the robot’s skin and classifies them with machine-learning software.

The group’s paper, “ShadowSense: Detecting Human Touch in a Social Robot Using Shadow Image Classification,” was published in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. The paper’s lead author is doctoral student Yuhan Hu.

The new ShadowSense technology is the latest project from the Human-Robot Collaboration and Companionship Lab, led by the paper’s senior author, Guy Hoffman, associate professor in the Sibley School of Mechanical and Aerospace Engineering.

The technology originated as part of an effort to develop inflatable robots that could guide people to safety during emergency evacuations. Such a robot would need a way to communicate with humans in extreme conditions and environments. Imagine a robot physically leading someone down a noisy, smoke-filled corridor by detecting the pressure of the person’s hand.

Rather than installing a large number of contact sensors – which would add weight and complex wiring to the robot, and would be difficult to embed in a deforming skin – the team took a counterintuitive approach. To gauge touch, they looked to sight.

“By placing a camera inside the robot, we can infer how the person is touching it and what the person’s intent is just by looking at the shadow images,” Hu said. “We think there is interesting potential there, because there are lots of social robots that are not able to detect touch gestures.”

The prototype robot consists of a soft inflatable bladder of nylon skin stretched around a cylindrical skeleton, roughly four feet in height, mounted on a mobile base. Under the robot’s skin is a USB camera, which connects to a laptop. The researchers developed a neural-network-based algorithm that uses previously recorded training data to distinguish between six touch gestures – touching with a palm, punching, touching with two hands, hugging, pointing and not touching at all – with an accuracy of 87.5% to 96%, depending on the lighting.
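To make the classification step concrete: each camera frame shows the shadow a hand casts on the translucent skin, and the task is to map that shadow pattern to one of the six gesture labels. The sketch below is a deliberately simplified, hypothetical stand-in – the researchers use a trained neural network, whereas this toy version reduces each frame to a handful of shadow-shape statistics and labels it by nearest neighbor. The gesture names, image size, and feature choices are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Hypothetical labels for the six gesture classes described in the article.
GESTURES = ["palm touch", "punch", "two-hand touch",
            "hug", "point", "no touch"]

def shadow_features(frame, threshold=0.5):
    """Reduce a 64x64 grayscale frame (values in [0, 1]) to a small
    feature vector: overall shadow area plus coarse 4-bin row and
    column darkness profiles. Dark pixels are treated as shadow."""
    shadow = (frame < threshold).astype(float)
    rows = shadow.mean(axis=1).reshape(4, -1).mean(axis=1)
    cols = shadow.mean(axis=0).reshape(4, -1).mean(axis=1)
    return np.concatenate([[shadow.mean()], rows, cols])

def classify(frame, train_features, train_labels):
    """Label a frame with the gesture of its nearest training example."""
    dists = np.linalg.norm(train_features - shadow_features(frame), axis=1)
    return GESTURES[train_labels[int(dists.argmin())]]

# Toy "training set": one random stand-in frame per gesture class.
rng = np.random.default_rng(0)
train_frames = rng.random((6, 64, 64))
train_features = np.stack([shadow_features(f) for f in train_frames])
train_labels = np.arange(6)

prediction = classify(rng.random((64, 64)), train_features, train_labels)
```

A real system would replace the hand-crafted features and nearest-neighbor lookup with a convolutional network trained on many labeled shadow recordings, which is what lets the reported accuracy hold up across lighting conditions.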

The robot can be programmed to react to certain touches and gestures, such as rolling away or issuing a message through a loudspeaker. And the robot’s skin has the potential to be turned into an interactive screen.

By collecting enough data, a robot could be trained to recognize an even wider vocabulary of interactions, custom-tailored to fit the robot’s task, Hu said.

The robot doesn’t even need to be a robot. ShadowSense technology can be incorporated into other materials, such as balloons, turning them into touch-sensitive devices.

In addition to providing a simple solution to a complicated technical challenge, and making robots more user-friendly to boot, ShadowSense offers a comfort that is increasingly rare in these high-tech times: privacy.

“If the robot can only see you in the form of your shadow, it can detect what you’re doing without taking high-fidelity images of your appearance,” Hu said. “That gives you a physical filter and protection, and it provides psychological comfort.”

###

The research was supported by the National Science Foundation’s National Robotics Initiative.

