Scientists have created an algorithm that lets robots assess cluttered environments several orders of magnitude faster than previous approaches.
The new algorithm brings us a step closer to home robots that can quickly make sense of disordered, unpredictable spaces.
Perception: A Major Bottleneck
“Robot perception is one of the biggest bottlenecks in providing capable assistive robots that can be deployed into people’s homes,” said Karthik Desingh, a graduate student in computer science and engineering at the University of Michigan and lead author of the paper published in Science Robotics.
“In industrial settings, where there is structure, robots can complete tasks like building cars very quickly. But we live in unstructured environments, and we want robots to be able to deal with our clutter.”
In the past, robots have performed most effectively in structured environments, often behind cages or rails that keep people safe and keep the robots’ workspace orderly. By contrast, a typical person’s environment, whether at home or at work, is a jumble of objects in various states of disarray: papers strewn across a desk, a purse hiding the car keys, half-open cabinet doors.
Incredible Precision
The researchers call the new algorithm “Pull Message Passing for Nonparametric Belief Propagation.” Within 10 minutes, it can compute an accurate estimate of an object’s pose, meaning both its position and its orientation, at accuracy levels that took previous methods more than 90 minutes.
The research team demonstrated the approach with a Fetch robot. They showed that their algorithm can accurately perceive and manipulate a set of drawers, even when the drawers are partially covered by a blanket, when a drawer is half-open, or when the robot’s own arm blocks its sensor view. The algorithm can also handle more complex objects with multiple articulated joints, and the robot can perceive its own body and gripper arm with high precision.
“The concepts behind our algorithm, such as Nonparametric Belief Propagation, are already used in computer vision and perform very well in capturing the uncertainties of our world. But these models have had limited impact in robotics, as they are very expensive computationally, requiring more time than practical for an interactive robot to help in everyday tasks,” said Chad Jenkins, a professor of computer science and engineering at the University of Michigan Robotics Institute.
Push Messaging
Scientists first reported the nonparametric belief propagation method in 2003. It has proven quite effective in computer vision, which seeks to build a thorough understanding of a scene from images and video, because 2D images and video demand far less computing power and time than the 3D scenes involved in robot perception.
Earlier approaches to understanding a scene transform it into a graphical model of nodes and edges, in which the nodes represent the individual components of an object and the edges their relationships to one another. The algorithms can then hypothesize, or form beliefs about, each component’s location and orientation when presented with a set of constraints. These beliefs, which the researchers refer to as particles, vary across a range of probabilities.
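To make the idea concrete, here is a minimal Python sketch, purely illustrative and not the authors’ code, of how an articulated object such as a dresser might be represented as a graph whose nodes carry particle-based beliefs (all names are hypothetical):

```python
import random

class Node:
    """One rigid component of an articulated object (e.g., a drawer)."""
    def __init__(self, name, num_particles=100):
        self.name = name
        self.neighbors = []  # components connected to this one (graph edges)
        # Each particle is a pose hypothesis, (x, y, angle), with a weight;
        # together they form a nonparametric belief over the component's pose.
        self.particles = [
            ((random.uniform(0, 1), random.uniform(0, 1),
              random.uniform(-3.14, 3.14)), 1.0 / num_particles)
            for _ in range(num_particles)
        ]

def connect(a, b):
    """Add an edge: the two components physically constrain each other."""
    a.neighbors.append(b)
    b.neighbors.append(a)

# A dresser frame connected to three drawers.
dresser = Node("dresser")
drawers = [Node(f"drawer_{i}") for i in range(3)]
for d in drawers:
    connect(dresser, d)
```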
To narrow the beliefs down to the most probable locations and orientations, the nodes use “push messaging”: each node computes messages about likely poses and transmits them to its neighbors, and the algorithm then weighs those messages against data from the sensors. Arriving at an accurate belief about a scene requires many iterations of this process.
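Continuing the toy example above, one push iteration might look like the following sketch. It is illustrative rather than the published algorithm; in the full method each sender would also fold in the messages received from its other neighbors, and that product over particle mixtures is what makes push messaging so costly.

```python
import math

def compatibility(pose_a, pose_b):
    """Toy pairwise constraint: poses that sit close together score higher."""
    dx, dy = pose_a[0] - pose_b[0], pose_a[1] - pose_b[1]
    return math.exp(-(dx * dx + dy * dy))

def push_iteration(nodes, sensor_score):
    # 1. Senders do the work: each builds an outgoing message scoring every
    #    particle of the receiver against all of the sender's particles.
    messages = {}
    for sender in nodes:
        for receiver in sender.neighbors:
            messages[(sender.name, receiver.name)] = [
                sum(w_s * compatibility(pose_s, pose_r)
                    for pose_s, w_s in sender.particles)
                for pose_r, _ in receiver.particles
            ]
    # 2. Receivers multiply the incoming messages into their beliefs,
    #    along with how well each pose explains the sensor data.
    for node in nodes:
        new = []
        for i, (pose, w) in enumerate(node.particles):
            for nb in node.neighbors:
                w *= messages[(nb.name, node.name)][i]
            new.append((pose, w * sensor_score(pose)))
        total = sum(w for _, w in new) or 1.0
        node.particles = [(p, w / total) for p, w in new]
```

Calling push_iteration([dresser] + drawers, lambda pose: 1.0) repeatedly would sharpen the beliefs; a real system would replace the constant sensor score with a likelihood computed from depth-camera data.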
Pull Messaging
To reduce the computational demands, the team used what it calls “pull messaging.” This approach distills the stream of back-and-forth, information-laden messages into a focused exchange between the components of an object.
For instance, rather than having the dresser compute information from all of the other drawers and then send location data to a drawer, the dresser first checks with the drawers. It asks every drawer for its belief about where it is located, then weighs that belief against information gathered from the other drawers. Through several iterations it converges on an accurate assessment of the scene, just as the push method does.
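A matching sketch of one pull iteration, continuing the example above (hypothetical code, reusing compatibility from the push sketch), shows the inversion: the receiver proposes poses from its own belief and pulls a weight for each proposal from every neighbor.

```python
def pull_iteration(nodes, sensor_score):
    for node in nodes:
        new = []
        for pose, w in node.particles:
            # Ask each neighbor directly: how well does this proposed pose
            # fit your current belief? No sender-side message construction.
            for nb in node.neighbors:
                support = sum(w_n * compatibility(pose, pose_n)
                              for pose_n, w_n in nb.particles)
                w *= support
            new.append((pose, w * sensor_score(pose)))
        total = sum(w for _, w in new) or 1.0
        node.particles = [(p, w / total) for p, w in new]
```

In this toy version the two directions look similar; the savings in the real algorithm come from each receiver weighting only its own samples, instead of every sender assembling full message products for every neighbor.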