Intelligence, Surveillance, Reconnaissance, and Analysis Systems are Mostly Failing Users (Fortunately, it’s a Fixable Problem)


  1. Intelligence, Surveillance, Reconnaissance, and Analysis Systems are Mostly Failing Users (Fortunately, it’s a Fixable Problem) A Talk for the Laboratory for Analytic Sciences, North Carolina State University 17 August 2016 Laura A. McNamara, PhD Sandia National Laboratories Albuquerque, NM 87015 Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-AC04-94AL85000. SAND NO. 2011-XXXXP

  2. Topical Outline
     • Human Analytics at Sandia
     • What’s a Failed System?
     • Design Practices for Usable, Useful, and Adoptable Systems
     • Visual Cognition and Human-Information Interaction

  3. Human Analytics at Sandia National Laboratories

  4. Human Analytics (“Analytics for Humans”)
     • Mika Armenta, Cognitive Psychology (intern)
     • Leif Berg, Human-Computer Interaction
     • Karin Butler, Cognitive Psychology
     • Kerstan Cole, Human Factors
     • Kristin Divis, Cognitive Psychology
     • Michael Haass, Signal Processing/Physics
     • John Ganter, Geography and Human Factors
     • Laura Klein, Physics and Radar Engineering
     • Laura Matzen, Cognitive Neuroscience
     • Laura McNamara, Organizational Anthropology
     • J. Daniel Morrow, Robotics/Computer Science
     • Susan Stevens-Adams, Cognitive Psychology
     • David Stracuzzi, Computer Science/Mathematics

  5. Human-System Interaction R&D…
     • Mixed-method design studies for human-information interaction system design
     • Operational interface design and evaluation for remote sensing systems in near-real-time environments
     • Visual search performance in analytic workflows, from imagery analysis to cybersecurity
     • Modeling domain-specific top-down visual attention to understand expertise
     • Field research to develop interaction design models that support human perceptual and cognitive work in high-density, high-throughput work environments
     • Novel methods for capturing and studying visual workflows:
       – Trajectory analysis applied to gaze data (see the sketch after slide 6)
       – Gaze-informed information foraging models in dynamic, user-driven workflows

  6. …in an electronics/RF engineering organization. Image from Sandia National Laboratories, www.sandia.gov/radar/
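Slide 5 mentions trajectory analysis applied to gaze data. Below is a minimal sketch of what extracting trajectory descriptors from raw gaze samples might look like; the (timestamp, x, y) sample format, the specific descriptors, and the function names are illustrative assumptions, not Sandia’s actual pipeline.

```python
# A minimal sketch of trajectory-style feature extraction from raw gaze data.
# The (t, x, y) sample layout and these particular descriptors are assumptions.
import numpy as np

def saccade_amplitudes(samples: np.ndarray) -> np.ndarray:
    """Euclidean distance between consecutive gaze samples (pixels)."""
    deltas = np.diff(samples[:, 1:3], axis=0)   # per-sample (x, y) differences
    return np.hypot(deltas[:, 0], deltas[:, 1])

def trajectory_features(samples: np.ndarray) -> dict:
    """Summarize one gaze trajectory with a few scalar descriptors."""
    amps = saccade_amplitudes(samples)
    return {
        "path_length": float(amps.sum()),     # total scan-path length
        "mean_saccade": float(amps.mean()),   # average jump size
        "duration_s": float(samples[-1, 0] - samples[0, 0]),
        "coverage": float(np.ptp(samples[:, 1]) * np.ptp(samples[:, 2])),  # bounding-box area
    }

# Usage: samples is an (N, 3) array of (timestamp_s, x_px, y_px) rows.
samples = np.array([[0.00, 512, 384], [0.02, 515, 380], [0.04, 700, 420]])
print(trajectory_features(samples))
```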


  8. Some Observations on Systems that Work and Systems that Don’t

  9. We’ve spent an enormous amount of money on technology to capture, relay, store, disseminate, and analyze data. We have failed to invest equally in making these systems usable, useful, and adoptable for their intended operators, analysts, users, teams, groups, organizations, institutions…

  10. What makes a system a failed system?

  11. A Failed System is Present-at-Hand. Take for example, a hammer: it is ready-to-hand; we use it without theorizing. In fact, if we were to look at it as present-at-hand, we might easily make a mistake. Only when it breaks or something goes wrong might we see the hammer as present-at-hand, just lying there. – Martin Heidegger, Being and Time (emphasis added)

  12. A failed system is one that pulls us away from the problems we want to engage, requiring us to focus for the moment on the system itself. Failed systems are selfish technologies. They require that we attend to their needs before we can pursue meaningful work. They frequently interrupt us with their demands.

  13. For example, ground control stations. Image from General Atomics, http://www.ga-asi.com/ground-control-stations-library

  14. Ready-to-hand means…

  15. Good Technology is Ready-to-Hand
     • Usable. Good technology leverages existing principles and standards for human-system interaction, so that systems are learnable, discoverable, and memorable.
     • Useful. Good technology enables people to perform work that is individually meaningful and organizationally impactful.
     • Adoptable. Good technology can be integrated into existing infrastructure, workflows, communication practices, and organizational standards.

  16. Excel is Ready-to-Hand

  17. Citrus: Ready-to-Hand in the right contexts

  18. How do we get to “Ready-to-Hand”?

  19. Actually, it’s pretty much a solved problem
     • Usable. Look to the Nielsen heuristics and platform-specific design guidance (Microsoft, Apple).
     • Useful. Cognitive Work Analysis, Cognitive Task Analysis.
     • Adoptable. Cognitive Work Analysis, enterprise analysis.
     [Cycle diagram: Document Context of Work → Ideate for Design → Implement as Prototype → Evaluate and Learn → Re-Design]
     1. Ditch waterfall.
     2. Engage the user community as designers.
     3. Prototype and evaluate frequently.
     4. Wash and repeat until there’s nothing left to wring out.

  20. Studying Work Environments for Better Design
     • Hmm. What are they actually doing? What are they using – and why?
     • What are the key tasks I should focus on?
     • How do I represent what I’ve learned about the work environment?

  21. “Talking to users” isn’t enough
     • Cognitive Work Analysis to decompose analytic domains
       – Systematic inventory of the work environment (one way to capture it as data is sketched below)
       – Grouping work inventory items in terms of the tasks they’re used for
       – Documenting the functional purpose of all those little tasks people do every day
       – Documenting why those functions are so important – what do they produce, and why does it matter?
       – How do we know the domain is meeting its intended purpose?
     • Cognitive Task Analysis to elicit individual strategies for key tasks within that environment
       – Focus on ‘keystone tasks’ – the ones that really define what the work environment is all about
       – Instrument these to get lots of subjective and objective data
       – CTA studies are great for informing design and can provide a nice baseline for evaluation activities later
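To make the inventory idea concrete, here is a minimal sketch of how a work-domain inventory and its keystone tasks might be captured as data. The record fields and the example entry are hypothetical, not drawn from the talk.

```python
# A minimal sketch of capturing a Cognitive Work Analysis inventory as data.
# Field names and the example entry are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class TaskRecord:
    name: str                 # an everyday task in the work environment
    tools: list[str]          # what people actually use to do it
    functional_purpose: str   # why the task exists
    products: list[str]       # what it yields, and for whom
    keystone: bool = False    # does this task define the environment?

inventory = [
    TaskRecord(
        name="review overnight collects",
        tools=["imagery viewer", "Excel", "chat"],
        functional_purpose="establish normal vs. abnormal activity baselines",
        products=["annotated image set", "shift-change summary"],
        keystone=True,
    ),
]

# Keystone tasks are the candidates to instrument for Cognitive Task Analysis.
keystone_tasks = [t for t in inventory if t.keystone]
print([t.name for t in keystone_tasks])
```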

  22. Cognitive Work Analysis. A great framework for systematically decomposing a work environment.

  23. Task Extraction for CTA

  24. If it’s a solved problem, why are there so many problems? It depends on the context of technology research, development, and transition.

  25. So there’s Acquisitions…

  26. But there’s also a lot of in-house research and development. The issues are different; I think there’s more opportunity for success, but it’s not easy. Let’s begin with a story.

  27. The Bring-Me-A-Pattern Game
     • CUSTOMER: “Tell me about patterns that I can use in my analysis.”
     • DEVELOPER (excited): “We developed a novel approach for flagging and tracking irregular elements in data, and it worked!”
     • CUSTOMER: “Umm, those patterns aren’t useful.”
     “We’re just glad someone likes our results. The customer wasn’t as thrilled as we would’ve liked.”

  28. An example from our work: forward-deployed imagery analysts have ~90 seconds to classify and report threat signatures
     • System renders radar returns as a set of complementary images
     • Analysts perform “visual inspection – plus”:
       – Present imagery
       – History in imagery (normal vs. abnormal activity patterns)
       – Decide whether to act or move on

  29. What is really happening here?
     • Sensor data analytics brings together Visual Inspection and Information Foraging
       – Perceive and recognize
       – Iterative search, retrieve, review
       – Characterize, decide, act
     • Multi-source context is key in figuring out Where & When
       – Foundational semantics are spatial and temporal
       – Gestalt grouping of abstract imagery features into recognizable objects – across multiple images, within/across different image products
     • Data-to-Information Value Stream
       – From sensor to analyst
       – Sensor operator decisions can enable and constrain longer-term analysis processes and products
       – Understanding the context of collection is absolutely key in designing algorithms for pattern detection in large datasets


  31. If we can get systems people will use, we can do a lot of really neat human-information interaction research.

  32. Department 1465 Review

  33. Distinguishing Strategies in Visual Foraging: trajectory analysis applied to gaze data (a clustering sketch follows)
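One plausible reading of “distinguishing strategies” is clustering per-trial trajectory descriptors like those sketched after slide 6. The feature values and the choice of k = 2 below are made-up assumptions; the slide does not specify the actual model.

```python
# A minimal sketch of grouping trials into candidate foraging strategies by
# clustering trajectory descriptors. Features and k=2 are assumptions.
import numpy as np
from sklearn.cluster import KMeans

# Rows: one trial each; columns: (path_length_px, mean_fixation_s, revisit_rate)
X = np.array([
    [5200.0, 0.18, 0.05],   # long path, short dwells: broad sweep?
    [4900.0, 0.20, 0.07],
    [1600.0, 0.45, 0.30],   # short path, long dwells: focal inspection?
    [1500.0, 0.42, 0.28],
])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # trials grouped into candidate strategy clusters
```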

  34. Integrating gaze (eye tracking) events for an even richer dataset
     [Pipeline diagram:]
     • Stimulus Creation & Data Collection: imagery/ground truth, eye tracking, software events
     • Data Wrangling: nuisance, quality control, preprocessing
     • Feature Extraction: image content, visual salience, trajectory descriptors
     • Regression Analysis: training impacts, experience impacts, relative difficulty
     • Time Series Data; Feature Relevance
     • Latent Dirichlet Allocation: image content (ground truth) as topics
     • Graphical Models: cluster data; structure model by observation sequence or by image properties
     • N-Gram Modeling: on viewed content, on scan patterns; generate scan pattern (see sketch below)
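The N-Gram Modeling box lends itself to a small illustration: treat the sequence of areas of interest (AOIs) an analyst fixates as tokens and count transitions between them. The AOI labels below are hypothetical.

```python
# A minimal sketch of the n-gram idea for scan patterns: count bigrams over
# a hypothetical sequence of fixated areas of interest (AOIs).
from collections import Counter

scanpath = ["target", "shadow", "target", "road", "target", "shadow"]

bigrams = Counter(zip(scanpath, scanpath[1:]))
print(bigrams.most_common(3))
# e.g. [(('target', 'shadow'), 2), (('shadow', 'target'), 1), ...]
```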
