By Francisco J. Chiyah Garcia, José Lopes and Helen Hastie
School of Mathematical and Computer Sciences,
Heriot-Watt University, Edinburgh,
United Kingdom
Paper Link
ABSTRACT
Increasingly complex and autonomous robots are being deployed in real-world environments with far-reaching consequences. High-stakes scenarios, such as emergency response or the inspection of offshore energy platforms and nuclear facilities, require robot operators to have clear mental models of what the robots can and cannot do. However, operators are often not the original designers of the robots and thus do not necessarily have such clear mental models, especially if they are novice users. This lack of clarity can slow adoption and can negatively impact human-machine teaming. We propose that interaction with a conversational assistant, acting as a mediator, can help users understand the functionality of remote robots and increase transparency through natural language explanations, as well as facilitate the evaluation of operators’ mental models.
Robots and autonomous systems are being deployed in remote and dangerous environments, such as nuclear plants or offshore energy platforms, for inspection and maintenance. These robots are important as they keep operators out of harm’s way. However, to date no single robot has all the functionality needed to perform the variety of tasks required in these domains.
For example, in an offshore emergency response scenario, robots need to firstly inspect the emergency area (e.g. with a ground robot carrying a camera and other sensors); secondly, resolve the emergency (e.g. with a heavy ground robot that can put out a fire); and finally, inspect the damaged area (e.g. with a drone collecting aerial images).
Figure 1: Example robots used for remote operation
See Figure 1 for examples of such robots (images a, b and d show robots from the ORCA Hub). Until a single robot can do all these tasks, the operator will be required to manage multiple robots, all functioning differently and performing tasks in different ways. This problem is compounded by the advent of robots that can adapt, with their functionality and behaviour changing continuously. Furthermore, remotely controlled robots often instil less trust than those that are co-located, so it is essential that operators maintain an appropriate mental model of each robot. Gaining and maintaining clear mental models of each of these robots, and tasking and managing them effectively, places a huge burden on the operator. In practice, this means that only highly skilled operators, using a variety of interfaces, would be able to control the robots, which could hinder general adoption.
REMOTE ROBOTS AND MENTAL MODELS
A clear understanding of the actions and reasoning of a robot is crucial for the operator and increases the robot’s transparency, an important factor in explainability, preventing issues such as wrong assumptions, misuse or over-trust. It also helps the operator build a more faithful mental model of the robot, which comes with increased confidence and performance.
THE MIRIAM SYSTEM
The MIRIAM intelligent assistant is a typed chat or spoken dialogue system that uses natural language to interact with several remote autonomous vehicles, including drones, ground and underwater vehicles. It provides operators with status updates, alerts and explanations of events in mixed-initiative conversation, and it can process operators’ queries and act on them.
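The paper does not publish MIRIAM’s implementation, but the mixed-initiative behaviour described above can be sketched as a simple loop in which the assistant both surfaces robot events proactively and answers operator queries on request. The Python sketch below is a hypothetical illustration; names such as `RobotEvent`, `answer_query` and the battery query are our assumptions, not part of MIRIAM.

```python
# Hypothetical sketch of a mixed-initiative assistant loop: the assistant
# both pushes robot alerts (system initiative) and answers operator queries
# (user initiative). Not MIRIAM's actual code.
from dataclasses import dataclass
from queue import Queue, Empty


@dataclass
class RobotEvent:
    robot_id: str
    kind: str        # e.g. "status" or "alert"
    message: str


def answer_query(query: str, world_view: dict) -> str:
    """Very small stand-in for natural-language query handling."""
    if "battery" in query.lower():
        levels = ", ".join(f"{r}: {s['battery']}%" for r, s in world_view.items())
        return f"Current battery levels: {levels}"
    return "Sorry, I did not understand that query."


def assistant_turn(operator_queries: Queue, robot_events: Queue, world_view: dict) -> list:
    """One pass of the loop: report pending robot events, then answer pending queries."""
    utterances = []
    # System-initiated turns: surface robot events as they arrive.
    try:
        while True:
            event = robot_events.get_nowait()
            utterances.append(f"[{event.robot_id}] {event.kind}: {event.message}")
    except Empty:
        pass
    # User-initiated turns: respond to operator questions.
    try:
        while True:
            utterances.append(answer_query(operator_queries.get_nowait(), world_view))
    except Empty:
        pass
    return utterances
```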
The system has been used to help with operator training and to investigate operators’ trust in, and situation awareness of, autonomous vehicles in the offshore domain. Explanations provided by the assistant marginally improved operators’ mental models in terms of what the autonomous vehicles were doing (functionally) and how they worked (structurally).
The system directly interacts with the autonomous robots shown in Figure 1a, b and d: it controls them and obtains updates from them. Further processing of these updates enables the MIRIAM system to produce explanations that are then communicated to the operator (e.g. a robot with a low battery or without a camera cannot be sent to inspect an area), thus increasing transparency between the robots and the operator. The system maintains a dynamic world view and is thus able to constrain the interaction and advise the operator on which robots to use, based on their capabilities, standard operating procedures and the current world view. This process in itself helps to develop and maintain the operator’s mental model.
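As a concrete illustration of this kind of capability-based vetting, the sketch below compares a robot’s state against the requirements of an inspection task and turns any failed check into a natural-language explanation. This is a sketch of the idea, not MIRIAM’s code; the `Robot` class, the thresholds and the sensor names are assumptions made for illustration.

```python
# Illustrative capability check: vet a robot against (made-up) inspection
# requirements and explain any rejection in natural language.
from dataclasses import dataclass, field


@dataclass
class Robot:
    name: str
    battery: float                      # percent
    sensors: set = field(default_factory=set)


# Hypothetical task requirements; the threshold and sensor list are assumptions.
INSPECTION_REQUIREMENTS = {
    "min_battery": 30.0,
    "required_sensors": {"camera"},
}


def vet_for_inspection(robot: Robot):
    """Return (ok, explanation) for sending this robot on an inspection task."""
    reasons = []
    if robot.battery < INSPECTION_REQUIREMENTS["min_battery"]:
        reasons.append(f"its battery is at {robot.battery:.0f}%")
    missing = INSPECTION_REQUIREMENTS["required_sensors"] - robot.sensors
    if missing:
        reasons.append(f"it has no {', '.join(sorted(missing))}")
    if reasons:
        return False, f"{robot.name} cannot inspect the area because " + " and ".join(reasons) + "."
    return True, f"{robot.name} is able to inspect the area."


# Example: a robot with a flat battery and no camera is rejected with an explanation.
print(vet_for_inspection(Robot("Husky", battery=12.0, sensors={"lidar"}))[1])
```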
Estimating a person’s mental model and evaluating any increase in its clarity is a very challenging task, even between humans. We can estimate participants’ mental models by asking them to rate statements that measure several dimensions of their understanding of what was happening and why. These measurements were taken whilst the autonomous vehicles performed tasks, both before and after the intelligent assistant had provided information and an explanation.
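A minimal sketch of that pre/post comparison is shown below, assuming Likert-style ratings on a 1-7 scale; the dimension names and the numbers are illustrative placeholders, not the study’s actual instrument or results.

```python
# Hypothetical pre/post comparison of mental-model ratings: the mean rating
# per dimension is compared before and after the assistant's explanation.
from statistics import mean

# ratings[dimension] -> list of participant ratings on a 1-7 scale (made-up data)
before = {"what_is_happening": [3, 4, 2, 3], "why_it_is_happening": [2, 3, 2, 2]}
after = {"what_is_happening": [5, 5, 4, 4], "why_it_is_happening": [4, 4, 3, 5]}

for dimension in before:
    delta = mean(after[dimension]) - mean(before[dimension])
    print(f"{dimension}: mean change after explanation = {delta:+.2f}")
```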
CONCLUSION
Explicitly communicating what the robot or system can do is not always effective, as it may unnecessarily repeat what the user already knows. One way of conveying such information more subtly is through social cues. Initial work has explored this by embodying the conversational assistant in a Furhat social robot (Figure 2). This enables the use of social cues that extend the existing dialogue and pragmatic cues of the spoken dialogue system to include visual social cues, such as shared gaze and facial expressions, and auditory cues such as prosody.