Robots sent in to clean up disaster zones like the Fukushima nuclear plant in Japan need a strong, secure grip. A robotic hand that can “see” in three dimensions could help.
London-based Shadow Robot is testing its Dexterous Hand with a Kinect depth-sensing camera that would allow it to analyse the 3D shape of any object a mobile robot is focusing on – or which is being held out to it by a human. Software then builds a 3D computer model of the approaching object and works out the arrangement of the four fingers and thumb that will most securely grip the object.
Developed by Shadow Robot and King’s College London with funding from the UK Technology Strategy Board, the technology is being demonstrated at the Automatica 2014 robotics exhibition in Munich, Germany, this week.
In an exclusive demonstration for New Scientist, it was fascinating to see the visual “thought” processes underway on a screen beside the hand. As an object approached the hand – whether a delicate light bulb, a tough metal drinks flask or a copy of New Scientist (see video) – the software not only scanned its shape but also estimated its position, with a large 3D arrow representing the way up it thought the object was.
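How the software judges “the way up” an object is isn’t detailed, but a common approach to estimating an object’s dominant axis from a depth camera’s point cloud is principal component analysis. The sketch below is purely illustrative and hypothetical, not Shadow Robot’s actual method: it finds the direction of greatest spread in a cloud of 3D points.

```python
import numpy as np

def principal_axis(points: np.ndarray) -> np.ndarray:
    """Estimate the dominant axis of an N x 3 point cloud via PCA.

    Illustrative sketch only: the real grasp-planning software's
    orientation estimate is not described in the article.
    """
    centred = points - points.mean(axis=0)
    # Covariance of the cloud; the eigenvector with the largest
    # eigenvalue points along the object's longest extent.
    cov = np.cov(centred.T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    axis = eigvecs[:, np.argmax(eigvals)]
    return axis / np.linalg.norm(axis)
```

For an elongated object such as a drinks flask, the returned unit vector would roughly track the flask’s long axis (up to sign), which a planner could then compare against gravity to decide which way up the object is.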
“Once it has seen an object, and worked out its orientation with respect to itself, it works out the best way to grasp it,” says Shadow Robot’s head of operations, Gavin Cassidy. “Even when it is holding the object it continually monitors the stability of the grasp using its pressure and touch sensors.”
This means that if a small piece of fruit is offered to it, the system will just use two fingers and a thumb to hold it in a light, almost genteel fashion. But a larger object will get the full four fingers and thumb in a wraparound grasp. Once an object has been identified by the system it can be placed in an archive to speed recognition the next time around, says Mark Addison, a Shadow Robot software developer.
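The behaviour described above amounts to a size-based grip policy plus a contact check. As a minimal sketch, with all names and thresholds invented for illustration (the real software is far more sophisticated and uses full 3D shape, not just width):

```python
# Hypothetical grip-selection heuristic inspired by the behaviour
# described in the article: small objects get a light pinch, larger
# ones a full wraparound grasp.

def choose_grasp(object_width_mm: float) -> dict:
    """Pick a grip style from a rough width estimate (threshold is made up)."""
    if object_width_mm < 40:  # e.g. a small piece of fruit
        return {"style": "pinch", "digits": ["thumb", "index", "middle"]}
    return {"style": "wrap",
            "digits": ["thumb", "index", "middle", "ring", "little"]}

def grasp_is_stable(pressures: list, min_pressure: float = 0.2) -> bool:
    """Crude stand-in for continual monitoring: every engaged fingertip
    must still register contact above a threshold."""
    return all(p >= min_pressure for p in pressures)
```

In a real controller this check would run continuously in the control loop, triggering a re-grasp if any fingertip loses contact.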
For test purposes, the system uses an external depth-sensing camera close to the hand. But the aim is to build a microchip-sized, high-resolution depth camera into the hand itself, says Cassidy.
That makes sense, says Tony Belpaeme, a robotics researcher at Plymouth University, UK. He says that similar systems using a depth camera at a distance sometimes can’t get the full picture of the target object. “So your grasp might be right for the part of the object you can see, but wrong for the ‘dark side’ of the object,” he says.
“Having a depth sensor on the hand offers a lot of promise, as the hand could scan the object from all sides and then compute an optimal grasp.”