
Natural Language Interaction to Facilitate Mental Models of Remote Robots

-By Francisco J. Chiyah Garcia, José Lopes, Helen Hastie
School of Mathematical and Computer Sciences,
Heriot-Watt University, Edinburgh,
United Kingdom

Paper Link


ABSTRACT

Increasingly complex and autonomous robots are being deployed in real-world environments with far-reaching consequences. High-stakes scenarios, such as emergency response or offshore energy platform and nuclear inspections, require robot operators to have clear mental models of what the robots can and can’t do. However, operators are often not the original designers of the robots and thus do not necessarily have such clear mental models, especially if they are novice users. This lack of mental model clarity can slow adoption and can negatively impact human-machine teaming. We propose that interaction with a conversational assistant, which acts as a mediator, can help the user understand the functionality of remote robots, increase transparency through natural language explanations, and facilitate the evaluation of operators’ mental models.

Robots and autonomous systems are being deployed in remote and dangerous environments, such as nuclear plants or offshore energy platforms, for inspection and maintenance. These robots are important as they keep operators out of harm’s way. However, to date no single robot has all the functionality required to perform the variety of tasks in these domains.

For example, in an offshore emergency response scenario, robots need to firstly inspect the emergency area (e.g. with a ground robot carrying a camera and other sensors); secondly, resolve the emergency (e.g. with a heavy ground robot that can put out a fire); and finally, inspect the damaged area (e.g. with a drone collecting aerial images).




Figure 1: Examples of robots used for remote operation

See Figure 1 for examples of such robots (images a, b and d show robots from the ORCA Hub). Until a single robot can do all these tasks, the operator will be required to manage multiple robots, all functioning differently and performing tasks in different ways. This problem is compounded by the advent of robots that can adapt, with their functionality and behaviour changing continuously. Furthermore, remotely controlled robots often instil less trust than co-located ones, so it is essential that operators maintain an appropriate mental model of each robot. Gaining and maintaining clear mental models of each of these robots, and tasking and managing them effectively, places a huge burden on the operator. As a result, only highly skilled operators, using a variety of interfaces, would be able to control the robots, which could hinder general adoption.

REMOTE ROBOTS AND MENTAL MODELS

A clear understanding of a robot’s actions and reasoning is crucial for the operator and increases the robot’s transparency, an important factor in explainability, preventing issues such as wrong assumptions, misuse or over-trust. It also helps the operator build a more faithful mental model of the robot, which in turn increases confidence and performance.



THE MIRIAM SYSTEM

The MIRIAM intelligent assistant is a typed chat or spoken dialogue system that uses natural language to interact with several remote autonomous vehicles, including drones, ground and underwater vehicles. It provides operators with status updates, alerts and explanations of events in mixed-initiative conversation, and it can process operators’ queries and act on them.
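As a rough illustration of this kind of query handling, the sketch below routes an operator utterance to either a status report or an explanation. The keyword-based intent matcher, the RobotStatus fields and the respond function are hypothetical stand-ins for illustration only, not MIRIAM’s actual components.

from dataclasses import dataclass


@dataclass
class RobotStatus:
    name: str
    battery: float        # fraction remaining, 0.0 - 1.0
    current_task: str


def classify_intent(utterance: str) -> str:
    """Toy keyword matcher standing in for a trained NLU/intent model."""
    text = utterance.lower()
    if "why" in text:
        return "request_explanation"
    if "status" in text or "doing" in text:
        return "request_status"
    return "unknown"


def respond(utterance: str, status: RobotStatus) -> str:
    """Answer a status query or explain a decision; otherwise ask to rephrase."""
    intent = classify_intent(utterance)
    if intent == "request_status":
        return (f"{status.name} is currently {status.current_task} "
                f"with {status.battery:.0%} battery.")
    if intent == "request_explanation":
        return (f"{status.name} was selected because it has the sensors "
                f"required for its current task.")
    return "Sorry, I did not understand that. Could you rephrase?"


print(respond("What is the drone doing?",
              RobotStatus("Drone-1", 0.72, "inspecting the deck")))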

The system has been used to help with operator training and to investigate operators’ trust in, and situation awareness of, autonomous vehicles in the offshore domain. Explanations provided by the assistant marginally improved operators’ mental models in terms of what the autonomous vehicles were doing (functionally) and how they worked (structurally).

The system directly interacts with the autonomous robots illustrated in Figures 1a, 1b and 1d, controlling them and obtaining updates from them. Further processing of these updates enables MIRIAM to produce explanations that are then communicated to the operator (e.g. a robot with low battery or without a camera cannot be sent to inspect an area), thus increasing transparency between the robots and the operator. It maintains a dynamic world view and is thus able to constrain the interaction and advise the operator on which robots to use, based on their capabilities, standard operating procedures and the current world view. This process in itself helps to develop and maintain the operator’s mental model.
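A minimal sketch of this kind of capability-based advice is shown below, loosely following the low-battery / missing-camera example above. The Robot fields, the battery threshold and the advise function are assumptions made for illustration, not MIRIAM’s actual reasoning or API.

from dataclasses import dataclass, field


@dataclass
class Robot:
    name: str
    battery: float                    # fraction remaining, 0.0 - 1.0
    sensors: set = field(default_factory=set)


def can_inspect(robot: Robot, min_battery: float = 0.3):
    """Return (eligible, reasons) so a refusal can be explained in words."""
    reasons = []
    if robot.battery < min_battery:
        reasons.append(f"{robot.name} has low battery ({robot.battery:.0%})")
    if "camera" not in robot.sensors:
        reasons.append(f"{robot.name} has no camera")
    return (not reasons), reasons


def advise(robots, task="inspect the area"):
    """Suggest a capable robot, or explain why none can do the task."""
    for robot in robots:
        eligible, _ = can_inspect(robot)
        if eligible:
            return f"I suggest sending {robot.name} to {task}."
    reasons = [r for robot in robots for r in can_inspect(robot)[1]]
    return f"No robot can currently {task}: " + "; ".join(reasons)


fleet = [Robot("Husky", 0.15, {"camera", "gas sensor"}),
         Robot("Quadcopter", 0.80, {"camera"})]
print(advise(fleet))   # -> "I suggest sending Quadcopter to inspect the area."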

Estimating a person’s mental model and evaluating any increase in its clarity is very challenging, even between humans. We can estimate participants’ mental models by asking them to rate statements that measure several dimensions of their understanding of what was happening and why. These ratings were collected while the autonomous vehicles performed tasks, both before and after the intelligent assistant had provided information and an explanation.
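A simple sketch of how such before/after ratings might be compared is given below; the statement wording, the Likert scale and the paired-difference summary are illustrative assumptions rather than the study’s exact protocol.

from statistics import mean

# Hypothetical ratings (1-5) from four participants, before and after
# the assistant provided an explanation.
pre_ratings = {"I understand what the vehicle is doing": [3, 2, 4, 3],
               "I understand why the vehicle did that":  [2, 2, 3, 2]}
post_ratings = {"I understand what the vehicle is doing": [4, 4, 5, 4],
                "I understand why the vehicle did that":  [4, 3, 4, 3]}

for statement in pre_ratings:
    diffs = [post - pre for pre, post
             in zip(pre_ratings[statement], post_ratings[statement])]
    print(f"{statement}: mean change = {mean(diffs):+.2f}")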

CONCLUSION



Explicitly communicating what the robot or system can do is not always effective, as it may unnecessarily repeat what the user already knows. A subtler alternative is to convey this information through social cues. Initial work has explored this by embodying the conversational assistant in a Furhat social robot (Figure 2). This enables social cues that extend the existing dialogue and pragmatic cues of the spoken dialogue system to include visual cues, such as shared gaze and facial expressions, and auditory cues such as prosody.
