UDT 2020 UDT Extended Abstract Template Presentation/Panel

UDT 2020 – Consciousness in Autonomous Systems

Hannah Thomas1, Emma Parkin2

1 MMathCompSci, L3Harris, Portsmouth, United Kingdom; 2 MEng, L3Harris, Portsmouth, United Kingdom

Abstract — Standards of artificial intelligence have evolved rapidly with technology, and today machines are expected to collaborate in increasingly complex environments. Operating safely in dynamically changing surroundings and interacting with humans requires a certain level of consciousness. For the purpose of this paper, consciousness is defined as 'the state of being aware of and responsive to one's surroundings'. We propose a knowledge pyramid to formally describe the steps required for an autonomous system to acquire consciousness. The pyramid consists of four steps:

• Ingest (potentially huge amounts of) data from diverse sources, which may contain different types of information in different formats
• Convert each set of data into useful information about the surrounding environment, filtering out noise
• Combine this information to generate situational awareness
• Convert this situational awareness into wisdom to evaluate the best course of action

With over 20 years' experience of developing Autonomous Surface Vehicles, ASV, a subsidiary of L3Harris, understands the human-machine relationship intimately. With nearly 2000 days of on-water testing, the team has experienced first-hand how readily operators trust machines, as can be seen all around in everyday life. It is imperative to match this trust with trustworthiness, and in autonomous systems this can only be achieved with consciousness. Their autonomous control system, ASView, has been tested in a range of environments: missions have been conducted in day and night, calm and rough seas, open water, quiet locations and busy ports with dense traffic. Each of these settings poses its own challenges, and L3Harris' systems are required to be consistently reliable. Recent advancements in technology at L3Harris have increased the level of consciousness achievable in the system.
This paper describes the process of achieving consciousness in autonomous systems today, as well as discussing practical challenges, drawing on examples from L3Harris' implementation.

1 Introduction

Maritime autonomy, in particular Unmanned and Autonomous Surface Vehicles (USVs/ASVs), is not new, having been around since the 1990s. What is new is the advancement in technology changing how autonomy can be achieved. We now have the ability to store and process vast amounts of data cheaply [1], making it possible for an autonomous system to process and interpret more data about its surrounding environment in real-time than previously. In short, it is now possible to bring more consciousness into autonomous systems than before.

Over time, as new vessels have been introduced to the water, navigation practice has adapted to avoid collisions at sea. [2] notes how, in the mid-1800s, the advent of steam-powered ships induced the need for updated conventions to handle their increased manoeuvrability over sailing vessels. Today's code of practice is set out in the COLREGs (Convention on the International Regulations for Preventing Collisions at Sea), which currently have no specific rules for unmanned vessels. While this may well change in future, at present ASVs must abide by the same rules as manned vessels.

ASVs must, therefore, exhibit the behaviour of a human operator. This means signalling clear navigational intent to others and exercising human-like judgement in situations where there is no obvious right course of action. Where no obvious 'right' answer exists, a compromise may have to be made, and people make decisions based on personal preference, for example a preferred type of manoeuvre. In [3], Raymond et al. propose an argumentation framework to aid conflict resolution in maritime navigation, in which they describe such a set of preferences as a "culture". In order to act according to such a "culture", a system requires a level of consciousness beyond the minimum information necessary for following COLREGs: it must assess the situation according to the wider context of its own culture.

If we look to the aviation and automotive industries, it must be noted how much more developed their infrastructure is to accommodate autonomous systems. [4] describes how aviation routes are pre-defined ahead of a flight and governed through air traffic control. Road satellite navigation systems have access to crowdsourced data to provide regular updates [5]. Nonetheless, even in mature environments such as these, consciousness is still required for true autonomy in order to react to unexpected situations, such as obstacles appearing along the intended navigation path.


For the foreseeable future, there will likely always be a human operator somewhere in the communication loop of managing an ASV. For as long as this is the case, the operator must be able to understand the system's decisions, particularly in complex scenarios. The field of Explainable AI (XAI) [6] has recently grown out of the Artificial Intelligence community, with many researchers turning their focus to developing systems with enough consciousness to explain how and why their decisions are made [3]. This development could see the human-machine relationship transform.

Furthermore, the reaction speed of a machine is much faster than that of a human. In vehicle autonomy, this could be critical for safety in some situations. For this reason, autonomous systems capable of making their own decisions, and trusted by the operator to carry them out, may become necessary as ASVs become more widely used. L3Harris works to educate customers in the current state of play and the responsibility of the user when interacting with autonomy today. Whilst it remains to be seen how human-machine interaction, maritime infrastructure, regulations and resources will evolve, and the level of consciousness ultimately required in autonomous vessels is unknown, it is clear that some "basic" level of consciousness certainly is required. The rest of this paper discusses the steps and challenges to developing consciousness in autonomous systems today.

2 Achieving Consciousness in Autonomous Systems

For the purpose of this paper, consciousness is defined as: 'the state of being aware of and responsive to one's surroundings'. In order to achieve any level of consciousness, an autonomous system must obtain data containing information about its surrounding environment and use it to evaluate the best course of action in response to a situation. The field of data science has produced many variations of knowledge pyramids [7] describing how data can be turned into informed actions. We propose the following pyramid to describe the process of an autonomous system acquiring consciousness:

[Figure: knowledge pyramid, from base to apex — Data, Information, Situational Awareness, Decision Making]

Each step in the pyramid is described as follows:

1) Data are facts collected from on-board sensors and other sources, which are unorganised and unprocessed
2) Information is data that has been processed in a way that provides useful descriptions for understanding the surrounding environment
3) Situational awareness (SA) is the result of combining and connecting information and using it to understand the surrounding environment
4) Decision making is the process of applying situational awareness to take action

The rest of this section outlines each step in detail with examples.

2.1 Receiving Data

An autonomous system relies on its sensory inputs, any external data feeds and any reference data to provide facts about the surrounding environment. Different sources will provide different facts. The key is to make sure enough inputs are available such that their (combined) data will provide sufficient information for the level of consciousness required. The data available is dependent on the technology available, which may be dictated by the mission type or practical factors; for example, weight restrictions may limit how many sensors can be placed on a mast, or lack of internet access may prevent access to external sources. Such restrictions may call for standardised, unified data services to be accessible for autonomous vessels in future.

The L3Harris ASV system makes use of a range of sensors, each with their own strengths and weaknesses, among these: Radar (long- and short-range), LiDAR, EO/IR cameras, AIS receivers, IMUs and GPS antennas. Maps and chart models are referred to for additional data. Having multiple inputs with different utilities provides assurance the system will likely have enough information coverage as situations, such as environmental conditions, change. L3Harris' autonomy also retrieves regular health data from various internal ship components.
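The four steps of the pyramid can be read as a processing pipeline. As an illustration only — the paper does not specify an implementation, and every name, threshold and field below is hypothetical — a minimal sketch might look like:

```python
from dataclasses import dataclass

# Hypothetical pipeline mirroring the four pyramid steps:
# data -> information -> situational awareness -> decision.

@dataclass
class Track:
    """Information: a processed description of a nearby object."""
    bearing_deg: float
    range_m: float
    closing: bool  # True if the object is getting closer

def ingest(raw_sources):
    """Step 1: gather unprocessed facts from all available sources."""
    return [reading for source in raw_sources for reading in source]

def extract_information(raw_readings):
    """Step 2: filter noise and derive useful descriptions (tracks)."""
    tracks = []
    for r in raw_readings:
        if r.get("snr", 0) < 3:  # discard low-signal noise
            continue
        tracks.append(Track(r["bearing"], r["range"], r["range_rate"] < 0))
    return tracks

def situational_awareness(tracks):
    """Step 3: combine information into an assessment of the scene."""
    return {"threats": [t for t in tracks if t.closing and t.range_m < 500.0]}

def decide(sa):
    """Step 4: apply situational awareness to take action."""
    return "evade" if sa["threats"] else "hold_course"

# Example: one radar-like source with a noisy return and a closing contact.
radar = [{"bearing": 10.0, "range": 300.0, "range_rate": -2.0, "snr": 9},
         {"bearing": 45.0, "range": 900.0, "range_rate": 0.5, "snr": 1}]
action = decide(situational_awareness(extract_information(ingest([radar]))))
print(action)  # -> evade
```

The point of the sketch is the strict layering: each stage consumes only the previous stage's output, so noise filtered out at step 2 can never overload steps 3 and 4.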

slide-3
SLIDE 3

Much of this data does not differ from that which would be recorded for a manned vessel. However, the combination of all of these data types, and how they are used to provide consciousness to autonomy systems, is unique. This will be discussed in sections 2.2 – 2.4.
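Because each of these sources reports in its own native format and at its own rate, one practical pattern is to adapt every source to a common reading type before any further processing. The sketch below is illustrative only — it is not the L3Harris design, and all field names are invented:

```python
import time

# Illustrative adapters normalising heterogeneous sensor outputs
# (radar contacts, AIS messages, ...) into one common reading format.
# All field names are hypothetical.

def from_radar(contact):
    """Radar contact: bearing/range relative to own ship."""
    return {"source": "radar", "t": contact["timestamp"],
            "bearing": contact["brg"], "range": contact["rng"]}

def from_ais(msg):
    """AIS message: identity plus reported position."""
    return {"source": "ais", "t": msg["timestamp"],
            "mmsi": msg["mmsi"], "lat": msg["lat"], "lon": msg["lon"]}

def collect(radar_contacts, ais_msgs):
    """Merge all sources into one time-ordered stream of readings."""
    readings = [from_radar(c) for c in radar_contacts] + \
               [from_ais(m) for m in ais_msgs]
    return sorted(readings, key=lambda r: r["t"])

now = time.time()
stream = collect([{"timestamp": now + 1, "brg": 12.0, "rng": 420.0}],
                 [{"timestamp": now, "mmsi": 235000001, "lat": 50.8, "lon": -1.1}])
print([r["source"] for r in stream])  # -> ['ais', 'radar']
```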

2.2 Data to Information

Regardless of whether the role of the autonomy system is to provide situational awareness to a person or to itself, it must present the input data in a way relevant to this purpose. The process of deriving these relevant descriptions is information extraction. Examples include deriving vessel tracks from consecutive radar returns, detecting the presence of obstacles from image data or determining the sea state from IMU returns. This is vital for filtering out irrelevant details and noise, ensuring only information necessary for situational awareness is passed through. Given the high-frequency processing capability available today and the amount of information required to derive situational awareness, it would be very easy for an autonomy system to suffer information overload, blurred by noise and superfluous data. For this reason this step is critical in any autonomous system.

The process of extracting such information often involves identifying complex relationships or patterns in dynamically changing data. Often these patterns are far too complex to be evident to a person, so it is crucial for this process to be data-driven, independent of human bias. However, this also makes it harder for operators to understand the system's alerts, and researchers in XAI are still occupied by the problem of justifying such a system's results in a human-explainable way in real-time. Furthermore, such systems only learn from data, so in order to be taken up on a global scale they must be tested on a global scale, in much the same way as face and voice recognition has been, and continues to be, today. This means operators must be prepared to work with the AI community to help improve these systems. Both points are discussed in more detail in section 3.

2.3 Information into Situational Awareness

Data scientists often say the whole of combined information is greater than the sum of its parts, referring to how knowledge can be gained from combining several sources of information. In autonomous systems, information from multiple sensors and other data sources can be combined to organise alerts and highlight events that require a response. This generates knowledge of the surrounding environment from information.

L3Harris' system fuses tracks from different sensors to merge duplicates and produce one picture of the "truth". This is then matched to chart data to further identify stationary objects such as navigational marks. Understanding of regulations is applied to predict future vessel paths and reason about the intention of third-party vessels. All of this derived knowledge is used to form a "risk landscape" of the surrounding environment, identifying areas the vessel should avoid when navigating. The risk landscape is a dynamic map, and the autonomy system must decide on immediate and future actions. Information from different sources can also be used to validate alerts from individual sensors, reducing false-positive alarms, providing further analysis of each other's alerts (e.g. if radar picks up something the vision system misses, it may be a wave or something the vision system has not been trained to recognise) and prioritising important alerts.

Whilst information coming from different sources is an advantage in terms of coverage, there are a number of practical considerations to be taken into account when combining them to generate reliable situational awareness:

• As different sensors have different utilities, the reliability of the information they provide is fluid, dependent on the operation and present environment. For example, in a foggy environment radar returns are likely to be more reliable than visual alerts; in a high sea state, this is reversed. Applying some understanding of these reliability weightings when interpreting the situation will make a difference to the usefulness of the resulting SA picture.

• Different data sources will likely provide information at different rates. This can cause complications when trying to form a complete picture at a given instant. Understanding this is important for working out how best to use the combined information for SA.

• Different sensors will likely provide data in different formats, because they are producing different types of information. Having a standardised format enabling the system to compare alerts from different sensors is invaluable. L3Harris have standard formats for track data, reducing the burden on the fusion engine.

2.4 Situational Awareness into Decision Making

Having achieved a state of being aware of surroundings, the final step in acquiring consciousness is being responsive to those surroundings. When the situational awareness sub-system highlights events that require a response, the decision-making component must choose an action. The complexity of the maritime environment means there are often situations where there is no obvious "right"

course of action. An example of such a situation may be if a sailing vessel and a cargo ship are on a collision course. According to COLREGs, the cargo ship should give way to the sailing vessel. However, in practice a cargo ship will often be constrained by draught and have much less manoeuvrability than a sailing vessel. As a result, the sailing vessel in this scenario may decide to give way. In such cases, an autonomous system needs a culture [3] to choose a preferred action. Such a culture is likely to depend on a complex rule set that will change as new situations are encountered. For this reason it would be preferable not to encode this culture in simple, hand-crafted rules, but rather to have it (continually) learnt by the system, adapting with experience and feedback. Having such a culture defined would increase transparency in autonomous decision-making, allowing users to understand the autonomous decisions.
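To make the give-way example concrete, the interplay between a COLREGs default and a culture-based override can be sketched as follows. This is purely illustrative — it is not the ASView implementation, and the rules, names and action labels are hypothetical:

```python
# Hypothetical sketch of a "culture" [3]: a preference layer consulted
# when the COLREGs default is impractical. Not the L3Harris system;
# all rules and names are invented for illustration.

def colregs_default(own, other):
    """Baseline rule: a power-driven vessel gives way to sail."""
    if own["type"] == "power" and other["type"] == "sail":
        return "give_way"
    return "stand_on"

def apply_culture(own, other, default_action):
    """Override the default when context makes it impractical,
    e.g. a draught-constrained cargo ship meeting an agile sailing boat."""
    if default_action == "give_way" and own["constrained_by_draught"]:
        # Prefer signalling intent early and holding course; the more
        # manoeuvrable vessel is likely to give way in practice.
        return "stand_on_and_signal"
    return default_action

cargo = {"type": "power", "constrained_by_draught": True}
yacht = {"type": "sail", "constrained_by_draught": False}
print(apply_culture(cargo, yacht, colregs_default(cargo, yacht)))
# -> stand_on_and_signal
```

In a learnt culture, the hand-written override above would be replaced by preferences inferred from experience and feedback, but the layering — default rule first, culture second — is the same.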

3 Challenges to developing Conscious Autonomous Systems

3.1 Data Availability

The level of consciousness achievable is directly related to the information, and therefore the data, available to the system. At present, there are no standard data services or sensors designed specifically for autonomous systems, meaning there is no guarantee of what data may be available at a given time. There is a regulatory requirement that any vessel on the water has "basic lookout functions", but how these are achieved is often bespoke to the specific vessel design and intended missions. Perhaps in future there will be data standards to mitigate the issues outlined in section 2.3: maybe there will be unified sources of data to aid autonomous decision making, and maybe sensors will become standardised to be useful for autonomy, with standard frequencies and standard formats. On-going discussions around the future of nautical charts are considering how maps can be made more "autonomy-friendly". Whilst people can work with some ambiguity, machines tend to require precision, e.g. precise GPS locations of obstacles. Maybe one day charts will be updated to speak the language of machines.

3.2 The Need for Learning

Of course, having a system that is only capable of responding to "known" situations is not enough in a dynamic environment with several moving parts. We have seen in section 2 how systems capable of learning are critical for sophisticated consciousness at each step moving up the pyramid. The AI community is developing approaches to make it easier for people to understand AI. But, in turn, people will need to be prepared to do their bit to help AI understand, and better simulate, our behaviour. In order to accomplish this safely, patience and trust are required as increasing levels of consciousness are gradually introduced in stages.
L3Harris have noticed that, whilst expectations of autonomous systems are generally higher than is currently practical (due to artificially intelligent software being readily available in everyday life today), customers' views are often tempered after exposure to their systems in real missions. It can be seen from the technology and communications industry that, with plenty of usage on a global scale, complex algorithms can be trained to high accuracy for specific problems, for example face recognition and voice recognition. However, in these scenarios the initial acceptable error rate is higher than that in ASV decision-making. For this reason, the industry has had to work to achieve a basic level of autonomy before developing more sophisticated approaches. Now that ASVs capable of basic decision-making with guaranteed emergency response have been developed, such as those of L3Harris, it is possible to get the level of testing required for teaching more advanced autonomous systems. However, this can only be practically implemented if users have enough trust to work with autonomous systems making decisions.

4 Conclusion

Our description formalises the process of achieving consciousness in autonomous systems as a data science problem. We have discussed some of the main challenges to achieving this today. There is no doubt about the practicality of achieving some consciousness in autonomous systems. It remains to be seen what level of consciousness will ultimately be "enough" as USVs and ASVs grow in popularity and practices change, but even in the most basic usage, some level of consciousness is certainly required. For these challenges to be overcome, changes in current practice and expectations are required.

Acknowledgements

TBC

References

[1] https://developer.nvidia.com/hpc


[2] J. Armstrong, D. M. Williams, The Steamboat, Safety and the State: Government Reaction to New Technology in a Period of Laissez-Faire, The Mariner's Mirror, 89:2, 167-184, DOI: 10.1080/00253359.2003.10659284 (2003)

[3] A. Raymond, H. Gunes, A. Prorok, arXiv:1911.10098 (2019)

[4] https://www.caa.co.uk/Consumers/Guide-to-aviation/How-air-traffic-control-works/

[5] D. Barth, The bright side of sitting in traffic: Crowdsourcing road congestion data, Official Google Blog (2009)

[6] D. Gunning, Explainable artificial intelligence (XAI), Defense Advanced Research Projects Agency (DARPA) (2017)

[7] https://en.wikipedia.org/wiki/DIKW_pyramid

Author/Speaker Biographies

Hannah Thomas – Hannah Thomas is the data science lead at L3Harris, Portchester (formerly ASV Global), with expertise in machine learning, data analytics and big data technologies. She has developed a variety of machine learning and analytical solutions to tackle new problems across industries, including investment banking and autonomy. Hannah has a Master's degree in Mathematics and Computer Science from the University of Oxford and enjoys presenting on different aspects of her work in industry.

Emma Parkin – Emma Parkin has been a technical sales engineer at L3Harris for almost 2 years. Emma has previously been seconded to the Royal Navy, DE&S and CMRE, working on a range of maritime autonomy projects. She has a Master's degree in Chemical Engineering from the University of Sheffield.