1. Introduction
Interacting with mobile robots has been one of the main concerns of the robotics research community. The most natural way to interact is arguably with natural language, just as humans do in everyday life. Wang, Ren, and Li (2016) proposed a scenario of directing remote mobile robots with spoken Chinese. The intention was to train the robots properly rather than to train human operators for the communication and interaction between them, which is more practical in emergency rescue situations where robots are deployed and domain experts, rather than skilled operators, give the robots instructions for urgent operations. The same applies to elder-care robots and other home service robots, where the skill demanded to interact with the robot should not become a barrier to the user. To remotely navigate a mobile robot inside a building, navigation instructions in spoken Chinese were handled with a three-layer CRF model, and the navigation elements in each instruction, including Action, Start Place, End Place, Direction, Speed and Distance, were extracted.
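As a purely illustrative sketch (the field names and sample values below are assumptions, not taken from Wang et al.), the navigation elements extracted by such a model can be held in a simple slot structure, with unfilled slots left empty:

```python
# Hypothetical slot structure for the extracted navigation elements;
# the field names mirror the element types listed above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class NavInstruction:
    action: Optional[str] = None       # e.g. 'go', 'turn', 'stop'
    start_place: Optional[str] = None
    end_place: Optional[str] = None
    direction: Optional[str] = None    # e.g. 'left', 'ahead'
    speed: Optional[str] = None
    distance: Optional[str] = None

# An instruction like 'go to room 601' would roughly yield:
inst = NavInstruction(action='go', end_place='room 601')
print(inst)
```

Only the Action and End Place slots are filled here; the remaining slots stay empty for the downstream semantic processing to handle.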
The centre of this paper is the semantic map, in which the semantic meanings of the navigation elements in the instruction, especially the place names, are investigated. By systematically defining the place names inside the building where the robot is navigated, and representing the corresponding facts and rules with expressions based on predicate logic, the robot acquires a conceptual system semantically in accord with humans. In this way, the robot understands the place names with the meanings the instruction really intends, in some sense, which makes it possible to build an effective human-robot interface. Many navigation instructions, like ‘go to room 601’, ‘turn left at the intersection ahead’ and ‘stop in front of the elevator ahead’, contain place names whose coordinates can be obtained directly from the map (like ‘room 601’) or inferred indirectly from the current position and the map (like ‘the intersection ahead’).
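The two resolution modes can be sketched minimally as follows (all coordinates, place names and helper functions are illustrative assumptions): a direct lookup for named places, and a pose-relative query for deictic phrases like ‘the … ahead’.

```python
import math

# Assumed map database: place name -> (x, y) position in metres.
PLACE_COORDS = {'room 601': (12.0, 3.5), 'intersection A': (6.0, 3.5)}

def resolve_direct(name):
    """'room 601': coordinates read straight from the map."""
    return PLACE_COORDS[name]

def resolve_ahead(kind, pose, places):
    """'the intersection ahead': the nearest place of the given kind
    lying in front of the robot's current pose (x, y, heading)."""
    x, y, theta = pose
    best, best_d = None, math.inf
    for name, (px, py) in places.items():
        if not name.startswith(kind):
            continue
        dx, dy = px - x, py - y
        # 'ahead' means a positive projection onto the heading direction.
        if dx * math.cos(theta) + dy * math.sin(theta) > 0:
            d = math.hypot(dx, dy)
            if d < best_d:
                best, best_d = name, d
    return best

print(resolve_direct('room 601'))                               # (12.0, 3.5)
# Robot at (0, 3.5) facing along +x: 'the intersection ahead' resolves to
print(resolve_ahead('intersection', (0.0, 3.5, 0.0), PLACE_COORDS))
```

The direct case is a plain database lookup, while the deictic case combines the map with the robot's current pose, which is exactly the indirect inference described above.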
Basically, the place names in the instructions are considered prior knowledge about the building where the robot is located. This knowledge is like the map to a human who wants to find a place inside the building. Robots should have this knowledge in advance so that they can navigate according to the instructions instead of randomly searching for the goal. In this paper, the knowledge is represented with predicate clauses, which function as a database of the map for robots to search for a target place or to infer a shortest path to the target. Specifically, the elements of the semantic map are organized based on ontology (Van Harmelen, Lifschitz, & Porter, 2008; Lim, Suh, & Suh, 2010; Catania, Zanni-Merk, de Beuvron, & Collet, 2016) and expressed as predicate clauses in SWI-Prolog format (O'Keefe, 1990; Wielemaker, Schrijvers, Triska, & Lager, 2012; Bramer, 2013).
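The paper itself encodes this knowledge as SWI-Prolog clauses; the same idea can be sketched in Python as a fact table queried by pattern matching, where `None` plays the role of a Prolog variable (all room and corridor names below are illustrative assumptions):

```python
# Sketch of a predicate-clause fact base; each tuple stands for a clause
# such as room('room 601') or connected('room 601', 'corridor 6').
FACTS = {
    ('room', 'room 601'),
    ('room', 'room 602'),
    ('corridor', 'corridor 6'),
    ('on_floor', 'room 601', 6),
    ('connected', 'room 601', 'corridor 6'),
    ('connected', 'room 602', 'corridor 6'),
}

def query(*pattern):
    """Match a pattern against the fact base; None acts as a variable,
    roughly like the Prolog goal connected(X, 'corridor 6')."""
    for fact in FACTS:
        if len(fact) == len(pattern) and all(
            p is None or p == f for p, f in zip(pattern, fact)
        ):
            yield fact

# Which places connect to corridor 6?
rooms = sorted(f[1] for f in query('connected', None, 'corridor 6'))
print(rooms)  # ['room 601', 'room 602']
```

In Prolog the same query would simply be `connected(X, 'corridor 6').`; the point is that the semantic map is a database of clauses that can be searched, not a grid of coordinates alone.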
With the semantic map as a database of the building structure in the form of predicate clauses, rooms, neighbourhood relations among rooms, corridors, staircases, etc. can be defined, upon which logic operations for navigation functions such as robot localization and path planning are based. Note that the intelligence here involves two levels: logic inference at the higher level and reflective inference at the lower. Of the two, the higher level of artificial intelligence is the focus of this paper, which assumes the robot can autonomously move around locally.
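A minimal path-planning sketch over such neighbourhood relations (the adjacency pairs and place names are assumptions for illustration) can use breadth-first search to infer a shortest path by hop count, corresponding to the higher, logic-inference level described above:

```python
from collections import deque

# Assumed neighbourhood facts, as undirected place-to-place pairs,
# standing in for Prolog clauses like neighbour('room 601', 'corridor 6').
NEIGHBOURS = [
    ('room 601', 'corridor 6'),
    ('room 602', 'corridor 6'),
    ('corridor 6', 'staircase 6'),
    ('staircase 6', 'corridor 5'),
]

def shortest_path(start, goal):
    """Breadth-first search over the neighbourhood facts; returns the
    shortest place-to-place path by hop count, or None if unreachable."""
    adj = {}
    for a, b in NEIGHBOURS:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in sorted(adj.get(path[-1], ())):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

print(shortest_path('room 601', 'corridor 5'))
# ['room 601', 'corridor 6', 'staircase 6', 'corridor 5']
```

The lower, reflective level, i.e. locally moving from one place to the next along this path while avoiding obstacles, is assumed available and is outside the scope of this sketch.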