Semantic Navigation System: An Overview


Today, Carnegie Mellon University showed off new research in robotic navigation. Working with the team at FAIR (Facebook AI Research), the university has developed a semantic navigation system that helps robots find their way around by recognizing familiar objects.

The SemExp system beat a team from Samsung to take first place in the recent Habitat ObjectNav Challenge. The system uses machine learning to train the robot to recognize objects, and that recognition goes beyond superficial traits. In the example CMU gives, the robot can distinguish an end table from a kitchen table and thus infer which room it is in. Other situations are even more straightforward: a refrigerator is large and almost always confined to a single room, so seeing one tells the robot immediately where it is.
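To make that kind of object-to-room reasoning concrete, here is a minimal Python sketch: a hand-written prior maps each object category to the rooms where it tends to appear, and the robot simply picks the most probable room. The categories and probabilities are invented for illustration and are not taken from SemExp itself.

```python
# Hypothetical object-to-room priors; the categories and probabilities
# below are illustrative only, not values from SemExp.
ROOM_PRIORS = {
    "refrigerator": {"kitchen": 0.95, "garage": 0.05},
    "kitchen table": {"kitchen": 0.70, "dining room": 0.30},
    "end table": {"living room": 0.60, "bedroom": 0.40},
}

def likely_room(observed_object: str) -> str:
    """Return the room where the observed object is most likely to be found."""
    prior = ROOM_PRIORS.get(observed_object)
    if not prior:
        return "unknown"
    return max(prior, key=prior.get)

print(likely_room("refrigerator"))  # kitchen
print(likely_room("end table"))     # living room
```

Even in this toy version, a refrigerator pins the room down almost completely, while an end table leaves more ambiguity, which mirrors the distinction in CMU's example.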

Devendra S. Chaplot, a Ph.D. student in machine learning, said that common sense tells you that if you are looking for a refrigerator, you had better go to the kitchen. Classical robotic navigation systems, by contrast, explore a space by building a map of obstacles. The robot eventually gets where it needs to go, but the route can be circuitous.

CMU notes that this is not the first attempt to apply semantic navigation to robotics. Previous efforts relied too heavily on memorizing where objects sat in specific environments, instead of tying an object to the kinds of places where it is likely to be found.

Chaplot addressed that problem by designing SemExp as a modular system. He is working with FAIR's Dhiraj Gandhi, along with Ruslan Salakhutdinov, a professor in the Machine Learning Department, and Abhinav Gupta, an associate professor in the Robotics Institute.

Semantic Navigation System

Chaplot said that the system uses semantic insights to figure out the best places to look for a specific object. Once it decides where to go, it can rely on classical-style planning to get there.

The modular approach turns out to be efficient in several ways. The learning process concentrates on the relationships between objects and room layouts rather than on route planning. Semantic reasoning determines the most efficient search strategy. Finally, classical navigation planning gets the robot to its goal as quickly as possible.
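A rough Python sketch of that modular split is shown below. The class and function names (SemanticPolicy, ClassicalPlanner, find_object) are placeholders invented for this illustration, and the straight-line "planner" stands in for a real obstacle-aware one; none of it is the SemExp code.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """A 2-D position on the robot's map."""
    x: float
    y: float

class SemanticPolicy:
    """Stand-in for the learned module: given a target object category,
    propose a promising region of the map to search next."""
    def __init__(self, room_guesses: dict):
        # In a real system this would be a trained network, not a lookup table.
        self.room_guesses = room_guesses

    def propose_goal(self, target: str) -> Pose:
        return self.room_guesses.get(target, Pose(0.0, 0.0))

class ClassicalPlanner:
    """Stand-in for a geometric planner such as A* on an occupancy grid."""
    def plan(self, start: Pose, goal: Pose, steps: int = 5) -> list:
        # Interpolate a straight-line path; a real planner would route
        # around the obstacles recorded in the map.
        return [
            Pose(start.x + (goal.x - start.x) * t / steps,
                 start.y + (goal.y - start.y) * t / steps)
            for t in range(1, steps + 1)
        ]

def find_object(target: str, start: Pose) -> list:
    """Modular loop: semantic reasoning picks the goal,
    classical planning produces the route to it."""
    policy = SemanticPolicy({"refrigerator": Pose(8.0, 2.0)})
    planner = ClassicalPlanner()
    goal = policy.propose_goal(target)
    return planner.plan(start, goal)

path = find_object("refrigerator", Pose(0.0, 0.0))
print([(round(p.x, 1), round(p.y, 1)) for p in path])
```

The point of the split is visible even here: the learned part only has to answer "where should I look?", while the geometric part handles "how do I get there?".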

Ultimately, semantic navigation will make it easier for people to interact with robots, enabling them to give directions such as "go to the second door on the left" or to ask the robot to fetch an item from a particular room.

A robot traveling from point X to point Y will be more efficient if it understands that point X is the living room couch and point Y is a refrigerator, even in an unfamiliar place. That is the practical idea behind the "semantic" navigation system that FAIR (Facebook AI Research) and Carnegie Mellon University developed together.

Last month, SemExp won the Habitat ObjectNav Challenge at the virtual Computer Vision and Pattern Recognition conference, edging out a team from Samsung Research China. It was the second consecutive first-place finish for the CMU team in the conference's annual challenge.