Building Robots with a Better Grasp of the Real World

ROBOTICS VISION

An apple is an apple, right? Well, not quite: it can be a red or a green apple, a good or a bruised one. A human can differentiate between apples and other fruit and sort them accordingly. While the process is simple for a person, it’s not as easy for a robot.

The robot must be able to identify and pick up the apple, then sort it according to its colour and quality. This process of grasping and manipulating objects is the focus of the Australian Centre for Robotic Vision, which is headquartered at Queensland University of Technology and involves four Australian universities. When the work comes to fruition, farmers may no longer have to contend with a shortage of manual labour – they can employ robots to help with the labour-intensive tasks.

“Robust pick-and-place is one of the most common manual tasks that is performed millions of times every day in every industry, from warehousing to agriculture. We are trying to solve some of the issues that arise once we actually have a robotic system that tries to pick and place objects in the real world,” said Dr. Jurgen ‘Juxi’ Leitner, a research fellow who is leading the centre’s manipulation efforts.

There are also certain jobs where robots are needed because humans probably shouldn’t be doing them in the first place – they are simply too unsafe or hazardous to workers’ health. Cleaning nuclear reactors is one example of a task that makes more sense for robots than for humans.

Leitner said that the Centre aims to automate the most dangerous, dirty and repetitive tasks out there.

Seeing and Doing Things in the Real World

The centre is employing machine learning algorithms along with computer vision techniques, and integrating them with novel hardware, including NVIDIA GPUs, to create some of the world’s smartest pick-and-place robots that can see and do things in the real world.
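At a high level, such a system chains perception, grasp planning and arm control into a single loop. The sketch below illustrates one plausible way to structure that loop in Python; the camera, arm, detector and planner interfaces are hypothetical placeholders, not the Centre’s actual software stack.

```python
# Minimal sketch of a perception-driven pick-and-place loop.
# All interfaces (camera, arm, detector, planner) are hypothetical
# placeholders, not the Centre's actual software.

from dataclasses import dataclass

@dataclass
class Grasp:
    position: tuple      # (x, y, z) in the robot's workspace
    orientation: float   # gripper rotation about the vertical axis, radians
    width: float         # gripper opening in metres

def pick_and_place(camera, arm, detector, planner, drop_zones):
    """One cycle: see an object, grasp it, sort it into the right bin."""
    image, depth = camera.capture()               # 1. sense the scene
    detections = detector(image)                  # 2. recognise objects (e.g. apple, colour, bruising)
    if not detections:
        return False                              # nothing to pick this cycle

    target = max(detections, key=lambda d: d.confidence)
    grasp: Grasp = planner(image, depth, target)  # 3. decide where and how to grip
    arm.move_to(grasp.position, grasp.orientation)
    arm.close_gripper(grasp.width)                # 4. execute the grasp
    arm.move_to(drop_zones[target.label])         # 5. place by category (e.g. "red", "bruised")
    arm.open_gripper()
    return True
```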

From fruit sorting to recycling to warehousing, a lot of objects are processed by people. Many of these tasks are repetitive and can be handled by smart robots.

Australia has a vast expanse of farmland, home to more than 85,000 farms that employ more than 300,000 workers. Yet there are simply not enough of them.

“In Australia, there is a labour shortage for farming. It is painful, labour-intensive manual work. Robotic systems will benefit not just the farmers but the people working on the farms,” said Leitner.

Though robots have been deployed in industrial applications, these tend to be rigid and programmed to do the same tasks repeatedly, such as manufacturing a car or a smartphone.

However, that’s not the case in agriculture, because each piece of produce – an apple, a banana or a capsicum – is slightly different from the one that was picked up a second ago.

“Robotic systems need to become smarter and be able to deal with uncertainties. It’s about more than just grasping – it’s also about teaching the robot what to do after it picks up an object,” said Leitner.

“For instance, if you want it to pick up a knife and put it down, it does not matter whether the robot grasps the handle or the blade. However, if the intention is to hand the knife to a human being, then it’s better to pick the knife up at the blade, so the person can take the handle. This sort of reasoning about grasping is something the centre is looking at,” he added.
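One way to picture this kind of reasoning is as a mapping from the downstream task to an appropriate grasp region. The toy sketch below follows the knife example; the task labels and grasp regions are invented purely for illustration and are not the Centre’s method.

```python
# Illustrative sketch of task-aware grasp selection, following the knife
# example above. Task names and grasp regions are made up for clarity.

KNIFE_GRASP_REGIONS = {
    "handle": {"leaves_handle_free": False},
    "blade":  {"leaves_handle_free": True},
}

def choose_grasp(task: str) -> str:
    """Pick a grasp region depending on what happens after the pick."""
    if task == "handover_to_human":
        # Grip the blade so the person can safely take the handle.
        return next(region for region, props in KNIFE_GRASP_REGIONS.items()
                    if props["leaves_handle_free"])
    # For simple pick-and-place, any stable grasp will do.
    return "handle"

print(choose_grasp("place_on_table"))     # -> handle
print(choose_grasp("handover_to_human"))  # -> blade
```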

He admitted that there are hardware limitations as robots do not have very complex human-like hands and cannot see or perceive exactly what a human eye can.

As such, the centre is trying to combine visual feedback and understanding with robotic systems, a process that involves more than “just sticking a camera on a robotic system.”

It is aiming to build robotic vision systems that close the loop between perception and action. If you take a picture of a cat, the computer or phone recognises it as a cat, but for robotic systems that is only the first step: the system must then figure out what to do next. A self-driving car that detects a cat must decide on its next course of action, such as braking or turning to avoid it.
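A rough sketch of that closed loop might look like the following, where recognition feeds directly into an action decision. The detector and vehicle interfaces, thresholds and labels are assumed stand-ins, not an actual self-driving stack.

```python
# Toy sketch of "closing the loop": recognition is only step one; the
# system must then choose an action. The detector and vehicle interfaces
# are hypothetical placeholders.

def control_step(frame, detector, vehicle):
    detections = detector(frame)                  # step 1: perceive (e.g. "cat ahead")
    obstacles = [d for d in detections if d.label in {"cat", "pedestrian"}]

    if not obstacles:
        vehicle.maintain_speed()                  # nothing in the way
        return

    nearest = min(obstacles, key=lambda d: d.distance_m)
    if nearest.distance_m < 5.0:
        vehicle.brake()                           # too close: stop
    elif abs(nearest.lateral_offset_m) < 1.0:
        vehicle.steer_around(nearest)             # directly in our path: swerve
    else:
        vehicle.slow_down()                       # nearby but clear: be cautious
```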

Dealing with Uncertainties

The Amazon Robotics Challenge, which the Centre won last year, is an example of how robotic systems can be taught to take the next step and handle uncertainties.

For the competition, the Centre built Cartman, its own robotic system that tightly integrated hardware, vision and learning subsystems. Powered by four NVIDIA GPUs, it was the only finalist able to successfully pick all of the items that had earlier been placed in a storage box.

NVIDIA GPUs are used to train the robots to recognise objects and reason about them. With only seven to eight images, the Centre was able to train the perception system on specific problems, thanks to the combined power of deep neural networks and GPUs.
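In practice, training from so few images usually means fine-tuning a network that has already been pretrained on a large dataset. The sketch below shows how that might look; the framework (PyTorch), the model and the data layout are assumptions, since the article does not describe the Centre’s actual training code.

```python
# Minimal sketch of fine-tuning a pretrained network on a handful of
# labelled images. PyTorch, ResNet-18 and the ImageFolder layout are
# assumptions made for illustration.

import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# A few labelled photos per class, e.g. data/red_apple/*.jpg, data/bruised_apple/*.jpg
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("data", transform=tfm)
loader = torch.utils.data.DataLoader(dataset, batch_size=4, shuffle=True)

# Start from an ImageNet-pretrained backbone and retrain only the final layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))
model = model.to(device)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20):                      # few images, so many quick epochs
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```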

Since that win, the centre has added two NVIDIA Tesla V100 accelerators to its computational armoury, which are used for learning tasks on the robotic system.

“I think that embedded systems and GPUs are going to be a large step forward for autonomous systems,” said Leitner.

Towards Commercialisation

More start-ups are trying to turn ongoing robotics research into prototype platforms, and the Australian Centre for Robotic Vision is no different.

“After the Amazon Robotics Challenge, we had in-depth discussions with industry partners. We are working on commercialising some of our technologies to see how far we can push them into applications that can deal with industry problems,” said Leitner.

From a research point of view, he believes that only when we have robotic systems that can reason, interact and pick things up will we be able to create real AI that can interact with the everyday world.

The possibilities for this type of research are endless. Currently, the Centre’s researchers based at the Australian National University in Canberra are in the early stages of developing an asparagus-picking robot.

“Eventually, as a robotic system evolves, it would be very interesting to create a home cleaning robot too. Personally, I think I am just a bit lazy and want a robot to clean up after me.”

Leitner will be at NVIDIA’s AI Conference in Sydney, Australia from September 3-4 where attendees can hear him speak on the topic of “Learning to Grasp the World.” Register for the event today.