By Joeanna C. Arthur, Ph.D., Research Directorate, National Geospatial-Intelligence Agency (NGA); Kevin W. Ayers, Analysis Directorate, NGA; Jack W. Brandy, Office of Ventures & Innovation, NGA; and Carsten K. Oertel, D.Sc., The MITRE Corporation

“…By taking away the easy parts of human tasks, automation can make the difficult parts … more difficult.”
—Bainbridge, 1983[1]

What do a London taxi driver and geospatial intelligence (GEOINT) automation have in common? At first glance, practically nothing, but upon closer examination a more compelling relationship emerges. Empirical evidence has revealed structural differences between the brains of London taxi drivers and bus drivers: the former have far greater hippocampal gray matter volume (GMV) than their counterparts.[2] These research findings suggest that the greater spatial knowledge a taxi driver has of the city’s circuitous streets and routes, the stronger the neural networks that underlie human spatial memory and orientation. This superior navigational skill and increased GMV come at a cost, however. While the drivers can tap into a dizzying array of potential routes with uncanny efficiency, these expert navigators show a decreased ability to form new visuo-spatial memories.[3]

Drivers unfamiliar with London typically rely on navigation applications (e.g., Google Maps, Waze) to complement their spatial abilities. While these apps augment our spatial abilities and lessen our cognitive load by relieving us of dividing attentional resources between navigating and driving, they also decrease our retention of routes. Arguably, the more we offload our cognitive processes onto these tools, the less opportunity we have to develop, deepen, and maintain human expertise. We can all attest that the more we use these apps, the less we rely on our internal computational abilities for self-localization and navigation. There is even speculation that these apps not only stunt the development of our spatial knowledge but may contribute to early Alzheimer’s disease, as the hippocampal formation serves as the center of both spatial and nonspatial memory.[4]

Similar to London’s cabbies, GEOINT analysts are highly specialized experts who employ superior spatial reasoning and sensemaking skills.[5] If we rely on machines to provide sensemaking in GEOINT analysis, we will be asking them to apply analytic processes that require context and subjective judgment about human activities and geography. Not only is this difficult, but even if it were possible, there is a hidden cost of automation.

An image from space is generated from photons that are sensed by a detector and whose energy is converted to pixels. So why can my phone recognize my face, yet we are barely able to automate GEOINT? The truth is simple: we can automate the “GEO,” but we cannot automate the “INT.”

There are several reasons why GEOINT is much harder than a facial recognition task: satellite images are not only vastly more complex than a photo from a phone, but they are also only the beginning of GEOINT. The GEOINT analyst frequently starts with imagery to assess what is going on and what has changed. Over time, these assessments, much like our taxi driver navigating London, engage higher cognitive functions (e.g., spatial memory, working memory, mental imagery, and sensemaking) to build a mental representation of a particular area of interest. This mental representation helps humans understand the larger intelligence issues based on previous experience and contextual information. That context comes not only from other intelligence disciplines, or INTs (e.g., SIGINT, HUMINT), but also from knowledge of geography, history, engineering, philosophy, psychology, economics, and the entirety of human experience.

Automating GEOINT, then, should be an exercise in finding where humans and machines each provide the most value. The mastery of our human analysts has always been rooted in the deep and rich context they bring to our GEOINT reporting. That mastery, however, has also been hard-won through constant and systematic examination of our mission areas through the visual cortex of our analysts’ brains. Reducing the amount of imagery analysts internally process will reduce their sensitivity to the difficult-to-detect “signatures” they use to make context-enriched assessments. It therefore makes more sense to approach artificial intelligence as an enabling technology[6] that helps humans answer requests for information about where objects are in space and time without inhibiting their ability to conduct deep analysis of imagery that requires their full range of contextual experience.

Therefore, any future imagery “processing” architecture must successfully integrate machine efficiency with human-derived context and experience. The key is understanding how automation and human analysis complement each other and achieve a higher level of interdependence than just deploying automation to identify and extract key entities and objects from imagery as data.

Human-Machine Complementarity

There are numerous efforts within the Intelligence Community (IC) that aim to off-load routinizable GEOINT tasks to automated computing systems. These efforts view humans and machines as redundant systems and essentially seek to put “humans out of the loop,” where humans simply validate the results (see Figure 1).

Figure 1. Schematic depiction of “human-in-the-loop” and “human-out-of-the-loop” paradigms.

GEOINT analysis is an inherently human-centric domain that demands intense coordination and cognitive work at both the team and individual levels. Therefore, we do not propose that a machine appropriate inherently human tasks or eliminate human inputs. Instead, we propose a hybrid “human-on-the-loop,” or supervisory, team model that combines the comparative strengths of humans and machines in GEOINT analysis. From a GEOINT perspective, humans and machines have different strengths in what they can accomplish (see Table 1).

Human abilities include, to name a few, high spatial-resolution vision, reasoning about causal relationships within a scene over time, and the ability to communicate these perceptions and the logic underlying assessments. While human cognition is remarkably dynamic and flexible, there are several limitations of human perception and pitfalls of decision-making that automated systems can offset, including inattentional blindness, limited attentional resources, fatigue, and cognitive biases. Machine abilities include rapidly performing repetitive tasks without fatigue, recognizing patterns in large datasets, and precision. Conversely, there are common aspects of machine-based processing that benefit from human expertise, such as task prioritization, dealing with outliers or unexpected situations, and understanding contextual information (e.g., sociocultural and environmental considerations).

Table 1. Comparative strengths and limitations of humans and machines in GEOINT analysis.

Limits and Challenges for Future Hybrid Human-Machine Teaming within the GEOINT Enterprise

A successful human-on-the-loop architecture is contingent upon: (i) appropriate functional allocation, (ii) an evaluation infrastructure, (iii) trust by the workforce, and (iv) transparency of human-system accountability.
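As a minimal, purely illustrative sketch of the human-on-the-loop allocation described above (the class names and confidence threshold below are our own inventions, not an existing NGA system), routine high-confidence machine detections could flow straight into a product queue, while ambiguous detections are escalated to a human analyst for contextual judgment:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Detection:
    """A single machine-extracted object from imagery (hypothetical schema)."""
    object_type: str
    location: Tuple[float, float]  # (lat, lon)
    confidence: float              # model confidence in [0, 1]

@dataclass
class TriageResult:
    auto_accepted: List[Detection] = field(default_factory=list)
    human_review: List[Detection] = field(default_factory=list)

def triage(detections: List[Detection], threshold: float = 0.9) -> TriageResult:
    """Human-on-the-loop allocation: the machine handles routine,
    high-confidence detections; the human supervises the loop and
    adjudicates the uncertain remainder."""
    result = TriageResult()
    for det in detections:
        if det.confidence >= threshold:
            result.auto_accepted.append(det)   # routine: machine strength
        else:
            result.human_review.append(det)    # ambiguous: needs human context
    return result

dets = [Detection("aircraft", (51.5, -0.1), 0.97),
        Detection("vehicle", (51.6, -0.2), 0.55)]
out = triage(dets)
print(len(out.auto_accepted), len(out.human_review))  # 1 1
```

Even in a sketch this simple, the threshold itself is a functional-allocation decision: set it too low and humans lose their supervisory role; set it too high and the machine adds no value.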

Identifying the Unique “Robotic Process Automation” Criteria for GEOINT

Robotic process automation (RPA) is used to automate monotonous tasks, freeing human resources for higher-level, complex cognitive work. A successful functional decomposition will require the GEOINT enterprise to examine its mission space, decompose it, allocate human- and machine-derived value to portions of that enterprise, and automate appropriately. An appropriate level of interdependence between humans and machines should then emerge and assist with adoption by the workforce. If the GEOINT enterprise cannot coherently decompose these roles, or assigns functional roles incorrectly, a significant degradation in our ability to provide meaningful information will result. It is therefore imperative that comprehensive workflow, domain, cognitive workload, and task analyses are performed to fully grasp the interdependence between collective roles and functions under realistic operational settings.[7]

Building a Continuous Evaluation Infrastructure

If the GEOINT enterprise succeeds in implementing such a human-on-the-loop system, it will still need to address key limits and challenges before it can reach appropriate levels of mission value. The workforce is looking for a system that makes their day-to-day job easier, adds value to their outputs, and saves them time overall. Correctly understanding what they value, and how to save them time, is an important first step, as discussed above. Equally important, however, is understanding why: the underlying processes and elements of what they value. As noted by a National Research Council Committee on Integrating Humans, Machines, and Networks, an evaluation infrastructure needs to be developed for “assigning metrics for each individual piece of the collaboration” and for systematically assessing “the quality of the decisions made by the overall human-machine collaborative system.”
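One hedged sketch of such an evaluation infrastructure, scoring each piece of the collaboration separately and then the end-to-end decision quality, might look like the following (the metric names and numbers are illustrative assumptions, not an established IC standard):

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple:
    """Per-component metric for the machine detector stage."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def override_rate(human_overrides: int, reviewed: int) -> float:
    """Per-component metric for the human review stage: how often analysts
    reverse the machine's call (a rising rate can signal model drift)."""
    return human_overrides / reviewed if reviewed else 0.0

def system_accuracy(correct_final: int, total_final: int) -> float:
    """End-to-end metric: the quality of decisions made by the combined
    human-machine system, per the NRC recommendation quoted above."""
    return correct_final / total_final if total_final else 0.0

# Hypothetical counts from one evaluation period.
p, r = precision_recall(tp=90, fp=10, fn=30)
print(round(p, 2), round(r, 2))          # 0.9 0.75
print(round(override_rate(5, 100), 2))   # 0.05
print(system_accuracy(190, 200))         # 0.95
```

Tracking the component metrics alongside the end-to-end metric is the point: a drop in overall decision quality can then be traced to the machine stage, the human stage, or the hand-off between them.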

Building Trust in Automation

If the organization places more value on such a hybrid system than on purely manual or purely automated approaches, and promotes activities that support it, workforce adoption may be quicker. One of the key tenets of human-machine partnership is creating interdependence within the system. If the hybrid system saves time and adds value, then over time the human workforce will find they would not want to work any other way. If it is implemented incorrectly, the workforce will find they would prefer to work somewhere else. Finally, understanding how uncertainty and types of error are identified, articulated, and propagated within the system will also be critical to its success. If the system cannot articulate why an entity was identified at a certain level of uncertainty, trust in the system will degrade over time.

Responsibility-Authority Conundrum

Failures in high-reliability organizations can have catastrophic consequences, and machines do not hold the same sense of moral obligation, social responsibility, or code of ethics as intelligence professionals. In a hybrid system, who then is accountable for failures? One operational demonstration would place automation at the forefront of updating our safety-of-navigation products, helping humans provide the most current data to our air crews and mariners. Ultimately, though, whom would we blame if a ship were lost because a perfectly up-to-date digital nautical chart, delivered via automated means, happened to have an unknown bias in that particular area, or was actively deceived? How do humans retain both the responsibility and the authority to make real-time overrides? And who determines the distribution of responsibility and maintains the threshold for human intervention? These remain open questions.

Future of GEOINT

As the GEOINT community moves toward “the age of automation,” an automated “human-on-the-loop” hybrid system is the preferred future of the GEOINT enterprise. Humans have always been the key to our success in providing actionable GEOINT to the warfighter and our national policy-makers, and there are significant opportunities for the enterprise to help them process imagery and geospatial data at scale. Those opportunities, however, must be seized for the correct reasons, with an eye toward providing the workforce actual value and time savings. Understanding how the workforce does its job is the first step; understanding how the current infrastructure impedes or enables the mission is the next step toward designing the hybrid system of the future.

The opportunity costs of not systematically considering the value of humans and machines in a future architecture are vast and immediate. If the human workforce loses confidence that the organization values their contribution, those with the most experience and value will be more likely to leave for other opportunities. And if the architecture allocates functions incorrectly, our future GEOINT outputs will be less valuable to the Intelligence Community. It is therefore imperative that these functional allocations are conducted carefully and not through arbitrary assessments about the workforce. If the future includes automation, it should be implemented in a way that creates value-added impact.


This article is approved for public release by the National Geospatial-Intelligence Agency #19-1006.

  1. Bainbridge, L. 1983. “Ironies of Automation.” Automatica 19 (6):775-779.
  2. Woollett, K. & Maguire, E. 2011. “Acquiring ‘the Knowledge’ of London’s Layout Drives Structural Brain Changes.” Current Biology 21: 2109–2114. Accessed August 26, 2019. https://doi.org/10.1016/j.cub.2011.11.018; see also Maguire, E. & Woollett, K. 2006. “London Taxi Drivers and Bus Drivers: A Structural MRI and Neuropsychological Analysis.” Hippocampus 16, no. 12: 1091–1101. Accessed August 26, 2019. https://onlinelibrary.wiley.com/doi/abs/10.1002/hipo.20233
  3. Woollett, K. & Maguire, E. 2011. “Exploring Anterograde Associative Memory in Taxi Drivers.” Neuroreport 23 (15): 885-8. Accessed August 26, 2019.
  4. Knapton, S. 2019. “Google Maps Increases Risk of Developing Alzheimer’s, Expert Warns.” The Telegraph, May 29, 2019. Accessed August 26, 2019. https://www.telegraph.co.uk/science/2019/05/29/google-maps-increases-risk-developingalzheimers-expert-warns/
  5. “Sensemaking” describes the cognitive process by which humans form mental models or “frames,” such as maps, stories, or scripts, and iteratively fit disparate and/or complex bits of data to the frame in order to construct a meaningful representation. [For more details, see Klein et al. “Making Sense of Sense-Making: Alternative Perspectives.” IEEE Intelligent Systems 21, no. 4 (2006): 70–73.]
  6. Horowitz, Michael C., “Artificial Intelligence, International Competition, and the Balance of Power,” Texas National Security Review 1, no. 3 (May, 2018), Accessed August 18, 2019. url: https://tnsr.org/2018/05/artificial-intelligence-international-competition-and-the-balance-of-power/#_ftn30
  7. Feigh, K. M., & Pritchett, A. R. (2014). Requirements for Effective Function Allocation: A Critical Review. Journal of Cognitive Engineering and Decision Making, 8(1), 23–32. https://doi.org/10.1177/1555343413490945. Accessed August 18, 2019.
