
Explaining Autonomous Drones: An XAI Journey
  • Mark Stefik,
  • Michael Youngblood,
  • Peter Pirolli,
  • Christian Lebiere,
  • Robert Thomson,
  • Robert Price,
  • Lester Nelson,
  • Robert Krivacic,
  • Jacob Le,
  • Konstantinos Mitsopoulos,
  • Sterling Somers,
  • Joel Schooler
Mark Stefik
Palo Alto Research Center Incorporated

Corresponding Author: [email protected]

Michael Youngblood
Palo Alto Research Center Incorporated
Peter Pirolli
Florida Institute for Human and Machine Cognition
Christian Lebiere
Carnegie Mellon University
Robert Thomson
West Point
Robert Price
Palo Alto Research Center Incorporated
Lester Nelson
Palo Alto Research Center Incorporated
Robert Krivacic
Palo Alto Research Center Incorporated
Jacob Le
Palo Alto Research Center Incorporated
Konstantinos Mitsopoulos
Carnegie Mellon University
Sterling Somers
Carnegie Mellon University
Joel Schooler
Florida Institute for Human and Machine Cognition

Abstract

COGLE (COmmon Ground Learning and Explanation) is an explainable artificial intelligence (XAI) system for autonomous drones that deliver supplies to field units in mountainous areas. The drone missions have risks that vary with topography, flight decisions, and mission goals in a simulated environment. Users must determine which AI-controlled drone is better for a mission. Narrative explanations identify the advantages of a drone’s plan (“What?”) and the reasons the better drone is able to achieve them (“Why?”). Visual explanations highlight risks from obstacles that users may have overlooked (“Where?”). A model induction user study showed that post-decision explanations had a small effect on participants’ ability to identify the better of two imperfect drones and their plans for a mission, but the explanations did not teach participants to judge the multiple success factors in complex missions as well as the AI pilots do. In a decision-support variation of the task, users would receive pre-decision explanations to help them decide when to trust the XAI’s decision. In a fielded XAI application, every drone available for a mission may lack some competencies. We created a proof-of-concept demonstration of automatic ways to combine knowledge from multiple imperfect AIs to produce better solutions than the individual AIs find on their own. This paper reports on the research challenges, technical approach, and findings of the project, and reflects on the multidisciplinary journey that we took.
Publication History

05 Jun 2021  Submitted to Applied AI Letters
18 Jun 2021  Submission Checks Completed
18 Jun 2021  Assigned to Editor
25 Jun 2021  Reviewer(s) Assigned
31 Jul 2021  Review(s) Completed, Editorial Evaluation Pending
12 Aug 2021  Editorial Decision: Revise Minor
11 Sep 2021  1st Revision Received
11 Sep 2021  Assigned to Editor
11 Sep 2021  Submission Checks Completed
13 Sep 2021  Reviewer(s) Assigned
21 Oct 2021  Review(s) Completed, Editorial Evaluation Pending
25 Oct 2021  Editorial Decision: Revise Major
03 Nov 2021  2nd Revision Received
08 Nov 2021  Submission Checks Completed
08 Nov 2021  Assigned to Editor
16 Nov 2021  Review(s) Completed, Editorial Evaluation Pending
16 Nov 2021  Editorial Decision: Revise Minor
16 Nov 2021  3rd Revision Received
17 Nov 2021  Submission Checks Completed
17 Nov 2021  Assigned to Editor
17 Nov 2021  Review(s) Completed, Editorial Evaluation Pending
17 Nov 2021  Editorial Decision: Accept
23 Dec 2021  Published in Applied AI Letters. DOI: 10.1002/ail2.54