UDT 2020 Extended Abstract – Experiences and insights of integrating multiple AIs for submarine command teams
Dr Darrell Jaya-Ratnam1, Paul Bass2
1 Managing Director, DIEM Analytics Ltd, London, UK; 2 Principal Engineering Manager, BAE Systems Submarines, Frimley, UK
Abstract — Last year, BAE and DIEM presented their work on ‘BLACCADA’ (the BAE Lateral-AI Counter-detection, Collision-avoidance & mission Activity Decision Aide): a proof-of-concept to test how AI can provide useful insight and challenge by thinking about things differently, presented in a way that allows the command team to maintain accountability and responsibility by being able to ‘look behind the curtain’ (as observed by Rear Adm. David Hahn, Chief of Naval Research, US Navy). This initial work looked at lateral AI for forward action plans
(FAP) and simple courses of action (COA). This work has now been extended to include target motion analysis (TMA) and the integration of BLACCADA, an anomaly detection and explanation AI application (MaLFIE), and a Red threat agent AI application (DR SO) into BAE’s ‘Concept Laboratory’ (ConLab). This suite allows us to test the benefit to command teams of having multiple decision aides working together, the challenges of integrating different types of AI onto a single network, and the challenges of providing a single user interface.
1 Introduction
In recent years, many organisations have invested in the development of proofs of concept to explore the benefits of AI decision aides to command teams and operators for
specific decisions. Examples include: BLACCADA, developed with BAE Systems funding, which provides recommendations on FAP and COA for submarine command teams [1]; MaLFIE (Machine Learning and Fuzzy-logic Integration for Explainability) [2], developed with Defence and Security Accelerator (DASA) funding, which prioritises and explains surface vessel anomaly detection AI using doctrinal language, and which is currently being implemented for use by the National Maritime Information Centre (NMIC) and the programme NELSON platform; Red Mirror [3] [4], funded by the Dstl Future of AI in Defence (FAID) programme, which generates rapid predictions of a Red AI’s next action based purely on recent tactical observations; and DR SO (Deep Reinforcement Swarming Optimisation), developed by DIEM with internal funding, which trains Red agents to surround a Blue agent and trains the Blue agent to avoid being surrounded, all in the presence of obstacles and with different levels of ‘experience’.
These AI decision aides, or ‘applications’, each relate to specific decisions. Naturally, there is now increasing interest in how these AI applications could work together, and several ‘frameworks’ already allow multiple decision aides and AIs to be networked. Dstl, for instance, have invested in SYCOIEA (SYstem for Coordination and Integration of Effects Allocation), the Intelligent Ship AI Network (ISAIN), and the ‘Command Lab’, each of which has a different scope, purpose and functionality, whilst the Royal Navy (RN) has the programme NELSON architecture. The ‘Concept Lab’ (ConLab) is BAE’s framework for testing and maturing combinations of decision aides, initially for submarine command teams.
In the previous work [1] we proposed a high-level architecture, focussed
on the presentational and application-service layers (the
light blue boxes in Fig. 1) in order to demonstrate ‘lateral AI’, i.e. AI that seeks to gain trust by paralleling the human’s processing and providing explanation, rather than by relying on statistical proof of being correct.
Fig. 1. Areas of focus against the initial high-level architecture
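The ‘lateral AI’ pattern described above can be illustrated with a minimal sketch. This is not BLACCADA’s implementation — the class, rule names, weights, and candidate COAs below are all hypothetical — but it shows the essential structure: each candidate course of action is scored by simple, named rules, and the per-rule contributions are retained so the command team can ‘look behind the curtain’ rather than accept an opaque score.

```python
from dataclasses import dataclass

@dataclass
class CourseOfAction:
    # Illustrative features only; a real aide would use far richer inputs.
    name: str
    counter_detection_risk: float  # 0 (safe) .. 1 (exposed)
    collision_risk: float          # 0 (clear) .. 1 (likely)
    mission_progress: float        # 0 (none)  .. 1 (complete)

# (rule name, weight, feature extractor) -- hypothetical doctrinal rules.
RULES = [
    ("avoid counter-detection", 0.4, lambda c: 1.0 - c.counter_detection_risk),
    ("avoid collision",         0.4, lambda c: 1.0 - c.collision_risk),
    ("advance mission",         0.2, lambda c: c.mission_progress),
]

def evaluate(coa):
    """Return (total score, per-rule contributions) for one COA."""
    contributions = [(name, weight * feature(coa)) for name, weight, feature in RULES]
    return sum(value for _, value in contributions), contributions

def recommend(candidates):
    """Rank candidate COAs best-first, each with its explanation trace."""
    scored = [(coa, *evaluate(coa)) for coa in candidates]
    return sorted(scored, key=lambda entry: entry[1], reverse=True)

if __name__ == "__main__":
    candidates = [
        CourseOfAction("sprint shallow", 0.8, 0.2, 0.9),
        CourseOfAction("creep deep",     0.1, 0.1, 0.3),
    ]
    for coa, score, why in recommend(candidates):
        print(f"{coa.name}: {score:.2f}")
        for rule, value in why:
            print(f"  {rule}: {value:.2f}")
```

The point of the structure is that the explanation trace is a by-product of the scoring itself, not a post-hoc rationalisation: the same rule contributions that produce the ranking are what the operator inspects.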