Assessing the mission-relevance of machine learning for the GEOINT Community
Machine learning (ML) has existed in various forms for many decades, but only in recent years, with the advent of new deep learning techniques and hardware with far greater compute power, have algorithms achieved instances of “human-level” performance. The ImageNet Challenge, with its large visual database, has driven significant improvements in visual object recognition. In 2017, ImageNet yielded algorithms with error rates below three percent for identifying objects in everyday photos, a result considered better than even expert human performance. This does not mean such algorithms will replace humans, however. Although the results are impressive, ImageNet consists of photos of everyday objects. In the geospatial domain, by contrast, satellite imagery brings the added complexities of an overhead perspective and limited labeled training data. Even so, deep learning-based approaches offer tremendous potential to support geospatial analysts and decision-makers in leveraging the vast amounts of data generated by an ever-increasing number of sensors and data acquisition techniques.
Internet search, image recognition, speech understanding, and social media applications of deep learning have seen considerable success in recent years, though a clear integration road map for the defense and intelligence communities remains elusive given the complexity, scale, and sensitivity of their diverse mission portfolio. This article seeks to characterize the state of ML for the geospatial intelligence (GEOINT) Community and explore its current mission relevance. The promise of deep learning is the ability to harness machine processing at speed and scale to help humans achieve better outcomes than traditional, often laborious manual approaches allow.
Opportunities and Challenges
ML offers promising assistive technologies for humans to harness automation, or semi-automation, of traditionally manual tasks where speed and scale are often needed to meet today’s challenges. This trend is playing out across many industries, from media to medicine, and, of course, defense and intelligence. A key enabler across all industries is the availability of massive amounts of data within the domain. These data—coupled with high-performance, relatively low-cost computing power and the ability to harness distributed workforces to create labeled training data through crowdsourcing—have created a perfect storm for the acceleration of ML applications. The use of ML is becoming a necessity given the vast data volumes from a proliferation of sensors, and growing mission requirements in our complex, interconnected world. When trained with the intelligence of humans, algorithms offer scale, speed, and, increasingly, enhanced accuracy, which allow analysts to accomplish more and focus on tasks to which they add the most value.
Increasingly, analysts and data scientists need to manipulate vast incoming data in more intuitive ways. Integrating analytic tools, ML techniques, natural language interfaces, and better user interfaces has yielded more efficient means of querying and searching data stores for insightful nuggets of information. Because ML is inherently an iterative, albeit speedy, approach to arriving at the “right” answer, deep learning frameworks offer the opportunity to test numerous hypotheses, reduce false positives, and achieve a more robust interpretation of the data.
Additionally, the proliferation of new sensors and phenomenologies brings an increased need to automate metadata tagging, integrate a variety of data formats, and curate raw information before it is ingested and exploited. Fusing a variety of datasets can yield alternative means of tipping the detection of obscure objects and of corroborating results (data veracity).
Among the most significant challenges in achieving mission relevance with ML for GEOINT Community applications are the prerequisites, including the availability of large labeled training datasets, and the fragility of algorithms that work well in research and development environments but may have limitations when operationalized. Training data are ideally generated by organizations with: 1) access to the necessary compute resources; 2) a labor force with requisite ML knowledge; 3) an understanding of operational timelines and performance requirements; and 4) input datasets large enough to be of significant value. These four building blocks are needed to train algorithms that can be run in a timely manner (or in real time) to meet mission timelines. Once algorithms are created, experts must then measure their performance and validate their utility in real-world situations.
- This article is part of USGIF’s 2018 State & Future of GEOINT Report. Download the PDF to view the report in its entirety and to read this article with citations.
The Current State of Machine Learning
The current state of geospatial uses of ML in the GEOINT Community is primarily focused on developing new algorithms and improving accuracy, often measured by precision and recall. Much of this research focuses on applying advancements in computer vision to the geospatial domain given the abundance of imagery from various sensors. Within the last two years, six such computer vision datasets and competitions were launched related to geospatial applications:
- IARPA’s Multi-View Stereo 3D Mapping Challenge
- The SpaceNet Challenge by CosmiQ Works, DigitalGlobe, and NVIDIA
- The Defence Science and Technology Lab’s Semantic Segmentation Challenge
- IARPA’s Functional Map of the World Challenge
- Planet’s Forest Recognition Competition
- USSOCOM’s Urban 3D Challenge
These open competitions developed their own training data and metrics to benchmark algorithm performance. The training data creation alone is significant: the SpaceNet Round 2 building footprint dataset took approximately 24 days to produce, yielding roughly 300,000 building footprints over 424 square kilometers across four cities. In addition, the winning algorithm took a full week to train on one graphics processing unit (GPU), a timeline that could be shortened with more GPUs. Inference time, or the speed at which the trained neural network can operate against new data, was approximately 1,800 square kilometers a day, though the time and data required to reach that speed narrow the situations in which the algorithm can usefully be applied.
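The trade-off between up-front labeling cost and downstream inference speed can be made concrete with the figures above. A back-of-the-envelope sketch in Python, using only numbers quoted in this article (the assumption of continuous, full-day processing is ours, not stated by the challenge):

```python
# Throughput comparison using the SpaceNet Round 2 figures quoted above.
# All constants come from the article; the continuous-processing-day
# assumption is ours.

LABEL_DAYS = 24             # days to produce the training set
LABEL_AREA_KM2 = 424        # area covered by the labeled footprints
LABEL_FOOTPRINTS = 300_000  # approximate building footprints produced

INFER_KM2_PER_DAY = 1_800   # inference speed of the trained network

label_rate = LABEL_AREA_KM2 / LABEL_DAYS  # km^2 of labels produced per day
speedup = INFER_KM2_PER_DAY / label_rate  # inference rate vs. labeling rate

print(f"Labeling rate:        {label_rate:.1f} km^2/day")
print(f"Inference rate:       {INFER_KM2_PER_DAY} km^2/day")
print(f"Speedup once trained: ~{speedup:.0f}x")
```

The roughly hundredfold gap between labeling and inference rates is the core of the operational argument: the expensive human effort is front-loaded, so it pays off only if the trained model can be reused across enough new imagery.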
If an algorithm is fragile, requiring retraining for every new situation, it will be difficult to apply to emerging GEOINT problems. It is therefore important to understand an algorithm's training time and data requirements, in addition to performance metrics such as precision and recall, when assessing its potential accuracy for future mission applications.
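Precision and recall, the metrics cited above, are simple ratios over an algorithm's detections. A minimal illustration for a hypothetical building detector; all counts here are invented for the sake of the example:

```python
# Precision and recall for a hypothetical building-detection run.
# The detection counts below are invented for illustration only.

true_positives = 90   # detections that match real buildings
false_positives = 30  # detections where no building exists
false_negatives = 10  # real buildings the algorithm missed

precision = true_positives / (true_positives + false_positives)  # 0.75
recall = true_positives / (true_positives + false_negatives)     # 0.90

# F1 combines both into a single score, a common benchmark choice in
# geospatial computer vision competitions.
f1 = 2 * precision * recall / (precision + recall)

print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
```

High precision with low recall means few false alarms but many missed objects, and vice versa; which matters more depends on the mission, which is exactly why these metrics alone do not capture operational utility.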
OpenStreetMap is maintained largely through manual means: contributors add to the map by heads-up digitizing from overhead imagery or by uploading GPS tracks for areas they see a need to update. To prioritize areas where urgent updates are needed, the Humanitarian OpenStreetMap Team's tasking manager lists buildings, roads, and land use as the major features requested for mapping. In addition, the two most common types of mapping requests are disaster response and missing maps. Current algorithms have the potential to accelerate missing map tasks, and the potential to apply algorithms to amplify and extrapolate the efforts of human contributors is significant.
However, in disaster response situations, timelines are critical. A key question is: “How can you provide good enough solutions quickly enough to be useful in a real-world disaster response and recovery situation?” If it takes three weeks to produce the training data required to employ ML, the benefit to first responders will be limited, since the results will arrive after the response period has ended and well into the recovery period for most events.
In addition to short-timeline problems with base-level mapping requirements, such as buildings and roads, significant effort now goes into mapping the population of the world. Every year, Oak Ridge National Laboratory produces the LandScan product, which provides global population distribution data at 1-km resolution. This product is created by fusing geospatial information with census data, and it is used for a wide range of activities such as epidemic modeling and vaccination campaign planning. As ML algorithms solve the underlying geospatial problems, they will improve the accuracy of these population maps. While population mapping does not face the short timeline a disaster response dictates, it does need to maintain currency. The scale of the effort poses significant challenges both in keeping the data product consistent and in sustaining a refresh cycle fast enough to deliver the required yearly updates. As operationally relevant algorithms emerge, understanding the scale and quality of their data will be important.
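The idea of fusing geospatial layers with census data can be sketched with a toy dasymetric disaggregation: a district-level census count is spread across grid cells in proportion to a weight layer such as building density. This is a simplified illustration of the general technique, not LandScan's actual methodology, and the population and weights below are invented:

```python
import numpy as np

# Toy dasymetric disaggregation: spread one census district's population
# across 1-km grid cells in proportion to a weight layer (e.g., building
# density derived from imagery). LandScan's real methodology is far more
# sophisticated; all numbers here are invented for illustration.

district_population = 50_000
weights = np.array([
    [0.0, 2.0, 5.0],
    [1.0, 8.0, 3.0],
    [0.0, 1.0, 0.0],
])  # one value per 1-km cell; zeros mark cells judged uninhabited

cell_population = district_population * weights / weights.sum()

# The disaggregated grid preserves the census total exactly.
assert np.isclose(cell_population.sum(), district_population)
print(cell_population.round())
```

Better weight layers (e.g., ML-extracted building footprints instead of coarse land-cover classes) directly sharpen the resulting population surface, which is the sense in which algorithmic progress on mapping problems feeds population mapping accuracy.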
ML approaches can help both in situations where human data labelers need to work faster and in situations where the dataset is so large that human-only analysis is not feasible. As the community continues to explore ML approaches to GEOINT problems, we must also continue to explore how to make these algorithms and models effective across the wide range of conditions that occur in real-world scenarios.
A Case Study: Machine Learning within NGA
Based upon a 2017 Major Issue Study conducted by the Office of the Director of National Intelligence (ODNI) Systems & Resource Analyses organization, the National Geospatial-Intelligence Agency (NGA) is working with its Department of Defense (DoD) and Intelligence Community (IC) partners on a strategy and integration road map for implementing ML capabilities. Because of its historic mission and massive stake in automating imagery exploitation going forward, NGA is taking the lead on managing the research, development, and governance of computer vision (CV) capabilities for satellite and airborne imagery. “Computer vision is concerned with the automatic extraction, analysis, and understanding of useful information from a single image or a sequence of images. It involves the development of a theoretical and algorithmic basis to achieve automatic visual understanding.”
In addition to coordinating which organizations will develop which capabilities, NGA recognizes the critical nature of producing standards across the areas of technical specifications, data interoperability, algorithm lineage, and validation criteria for CV solutions. It aims to support these standards with a governance model that encourages open innovation and transparency—something not often seen in government agencies.
As a first step, NGA recently announced the creation of an Office of Automation, Augmentation, and Artificial Intelligence (AAA) that will begin to formalize its implementation plan. The high-level strategy for adopting operational AI is built into the name of this new office: it is to automate routine tasks to give critical time back to employees while at the same time augmenting complex decision-making tasks with machine support. For the first time, cutting-edge AI technologies are delivering promising results in both of these directions for applications relevant to GEOINT production operations.
Examples of automating routine tasks for GEOINT operators with ML include data preparation, data conditioning, and image search (using CV). Applying recent advances in deep learning to such tasks is expected to create a surge in human productivity, and order-of-magnitude increases in productivity are necessary if NGA and its partners are to exploit the ever-growing tsunami of available imagery. The ability to find and extract relevant information from a deluge of data makes ML critical to mission success. Examples of augmenting complex decision-making include resource optimization, hypothesis testing, and pattern-discovery functions. Applying ML solutions to such tasks is expected to yield better decision support through the use of more source data and the ability to handle greater complexity by understanding multivariate interactions. In simple terms, machines can search across more datasets using more variables to discover correlations that humans cannot; humans can then put those findings into a larger mission context to judge whether they are relevant. Automation and augmentation are initial steps in implementing a spectrum of AI solutions.
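The division of labor described above, machines scanning many variables for correlations and humans judging relevance, can be illustrated with a brute-force pairwise correlation scan. The dataset here is synthetic, with one relationship deliberately planted, purely to show the shape of the technique:

```python
import numpy as np

# Brute-force scan for correlated variable pairs across a synthetic
# multivariate dataset: the kind of exhaustive search a machine performs
# easily but a human cannot, leaving the analyst to judge whether the
# surfaced relationship is mission relevant. All data are randomly
# generated, with one correlated pair planted for illustration.

rng = np.random.default_rng(0)
n_obs, n_vars = 500, 40
data = rng.normal(size=(n_obs, n_vars))
# Plant a strong relationship between variables 3 and 7.
data[:, 7] = data[:, 3] * 0.9 + rng.normal(scale=0.3, size=n_obs)

corr = np.corrcoef(data, rowvar=False)  # all pairwise correlations
i, j = np.triu_indices(n_vars, k=1)     # each unordered pair once
best = np.argmax(np.abs(corr[i, j]))

print(f"strongest pair: variables {i[best]} and {j[best]}, "
      f"r = {corr[i[best], j[best]]:.2f}")
```

With 40 variables there are 780 pairs to test; at hundreds or thousands of variables across fused datasets, exhaustive search of this kind is feasible only for a machine, while deciding whether a surfaced correlation means anything remains a human judgment.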
While NGA recognizes the inherent power of ML for increasing the productivity of its mission outputs and the complexity of its decision-making, ML solutions also introduce a significant challenge to their own utility: a perceived lack of trust and transparency. One of the first requirements for ML solutions within the GEOINT Community will be the ability to unmask their methods, something even Google has acknowledged is difficult for its deep learning networks. NGA requires a framework for operationally testing and evaluating ML solutions to provide a level of confidence, through validation and verification, that those solutions will work properly for its defense and intelligence customers. Any mission partner working on ML solutions today needs to consider this fundamental aspect of their utility.
Additionally, there are other factors that can be built into AI solutions to raise their credibility. For one, the design of the user interface can offer continuous feedback as to the inner workings of the system. The simplest representation of this today is an on-screen visual cue that identifies when the machine is processing a request with a status bar. With AI systems, the range of feedback to and communications with the user will be far more complicated and therefore require an emphasis on elegant design. NGA ultimately desires AI solutions that offer a gentle user orientation with high degrees of trust building, two-way feedback, and transparent verifiability as to the solution’s mission effectiveness.
Finally, NGA has outlined a strategy to build a labor force ready for AI capabilities, what it refers to as a “data-enabled workforce.” Through a combination of wide-scale training and education opportunities and a targeted recruiting strategy, NGA has identified its technical workforce goals and objectives in detail for the next five years. Harnessing these new capabilities is a challenge many government agencies and commercial entities will also face. To benefit from automation, an organization must educate its workforce on the fundamentals of ML so the capability can be applied to manual tasks across a variety of roles.
So, the question remains: “Is ML mission relevant today for the GEOINT Community?” The short answer is: “Yes, and its relevance is growing.” While challenges remain around creating labeled training data, training algorithms, and applying them in a mission context with transparency to the user, significant progress has been made in the last year, and ML is beginning to have mission impact as needs grow. Public prize challenges have helped make geospatial data more accessible for ML research, allowing algorithms to be developed that may have eventual mission uses.
This relevance will continue to grow each year as more data becomes available for training, the availability and power of compute increases, algorithms improve, and the number of missions supported by ML expands. The barriers to harnessing the advantages of ML will continue to fall, and more end users will benefit from this technology in performing their core job functions. ML will continue to create rich opportunities for GEOINT missions across many sectors.
Headline Image: Staff Sgt. Ashlie Robledo and Senior Airman Thao Bui, 11th Special Operations Intelligence Squadron analysts, participate in a data tagging training event, August 2017, at Hurlburt Field, Fla. Data tagging is designed to assist them with analyzing imagery. Photo Credit: Senior Airman Lynette M. Rolen