The United States faces increasing global competition in artificial intelligence and needs cutting-edge innovation to maintain its advantage. On Thursday afternoon, Aimee McGranahan, Chief Operating Officer of the Spookstock Foundation, moderated a panel focused on turning the innovations discussed throughout the symposium into reality to support the warfighter and the intelligence community. Panelists included Jim McCool, Director of the new Data and Digital Innovation department at NGA; Rachel Martin, also of the Data and Digital Innovation office; and Austin Davis, Associate Technical Director for Geospatial Research and Engineering at the U.S. Army Engineer Research and Development Center.
Martin described artificial intelligence as a tool she hopes will eventually become so commonplace, like electricity, that we won't even discuss it. She added that we should do everything in our power to integrate it into our capabilities. To be as ethical as possible, she said, NGA is committed to transparency about how AI is used within the agency.
McCool added that the IC has a tradecraft requirement to be able to explain to the customers who receive its intelligence what its judgments and assessments are based on, even when a machine is making the decisions. That requirement is not going to be lifted. Davis built on this point, saying that explainability in AI is paramount to the ethics of its use.
“It’s going to be critical for us to understand how the machine learning outputs are produced, understand how effective that is and how to communicate that to customers,” McCool explained.
As AI capabilities and usage continue to expand, McCool said he does not foresee a repurposing or reduction of personnel; rather, roles will be modified. He offered the example of an analyst who in the past would have been involved with every single detection. In the future, and even now, the analyst sits above that process, observing and sampling in different ways how the machine produces its detections. If anything, there may be a need for more analysts to ensure that machine learning operates efficiently.
Davis said that automation is a big topic for the Engineer Research and Development Center. "Developing semiautomated tools, with the goal of eventually making them more automated and increasingly autonomous tools, for extracting features from larger data sets; fusing information together, which could be multimodal sensing or some type of remotely sensed data with other kinds of data sets that could enrich that information with better attribution or more actionable attribution, to solve some particular task that a soldier might need to do," he said.
An audience member asked how AI can help address climate change and the resulting global instability. Martin said this is a major focus area within NGA and that there are many ways AI will be essential in helping us understand the security risks posed by climate change. It will allow analysts to track climate change as it happens, rather than seeing the effects years later. McCool said that the intelligence community has a long track record of trying to anticipate instability, but that this depends on data collection. AI can contribute to the GEOINT portion of forecasting the instability driven by climate change.
When it comes to outpacing competitors in building the infrastructure required to stay ahead, McCool said that we do a lot to set humans up for success and should do the same for AI. AI must respond to emergencies the way analysts are expected to respond in a crisis. Getting ahead is not just about investment but also about how the technology being implemented is approached and treated.