What will happen to a person’s artificial intelligence (AI) when they retire? When a prospective employee interviews for a job, will their AI be questioned alongside them? Will companies hire AI straight from a factory, or will the system undergo a sort of apprenticeship before being put to work? More pressing, and more realistic in the near term: Where is the line at which machines are not reliable enough, or not morally appropriate, to use, and humans must take over?
These, along with many more immediate questions, are among the topics USGIF’s Machine Learning & Artificial Intelligence Working Group seeks to generate discussion around. The working group gathered dozens of experts at USGIF’s Machine Learning & Artificial Intelligence Workshop Nov. 13 to explore the opportunities and challenges intelligent technologies present. The daylong session at National Geospatial-Intelligence Agency (NGA) Campus East in Springfield, Va., attracted around 275 people, and was one of many events hosted as part of USGIF’s GEOINT Community Week.
Doug McGovern, chief technology officer of intelligence programs for IBM Global Business Services, led the first panel in a discussion about the implications of machine learning (ML) for GEOINT professionals.
“We expect this technology to transform our entire careers,” McGovern said. He added there are currently two schools of thought regarding AI—those who embrace the technology as one that will revolutionize intelligence much like the shift from film to digital photography, and those who fear AI is a threat to humankind. The truth, McGovern said, is likely somewhere in the middle, and the community cannot stand idle.
Panelist Melissa Planert, director of NGA’s Office of Analytic Tradecraft, agreed, saying it’s not hard to imagine a time when analysts won’t be able to work with all of the data available to them. Planert predicted computer vision (CV)—the ability of a computer to view, canvass, and catalog images—would be most important for the GEOINT Community.
Planert said AI presents the possibility of around-the-clock analysis that identifies patterns of life and trends, and can also enable anticipatory intelligence. She also outlined what analysts will expect from AI tools, including an intuitive user experience, a simple feedback mechanism to train algorithms, increased accuracy over time, alert mechanisms, confidence metrics, and metadata tagging.
Dr. Curt Davis, director of the Center for Geospatial Intelligence at the University of Missouri, said AI is still in its early stages, but warned the United States is trailing the rest of the global community. Davis personally catalogued articles on deep learning (DL)—the branch of ML that attempts to mimic the architecture of the human brain—in three IEEE remote sensing journals from 2015 to 2017. He discovered that of approximately 100 articles, 71 percent were authored in China, 11 percent in European nations, and 13 percent in the rest of the world, while only 5 percent originated in the U.S.
“This is pretty sobering and I hope it provokes thought,” Davis said. “I don’t believe we can flip this. The Chinese are going to be the leaders in this field for the foreseeable future, and it happened very quickly.”
Davis offered three suggestions to develop a stronger U.S. pipeline of ML/AI students:
- Industry partnerships with colleges and universities beyond internships, such as funding graduate students working on problems of mutual interest.
- Industry/government-sponsored competitions open only to U.S. institutions with a focus on topics more relevant to the U.S. defense and intelligence communities.
- Requiring government-funded ML/AI research to include participation from academia, in the same way many contracts must meet small business requirements.
Dr. Todd S. Bacastow, a professor of practice with the Dutton e-Education Institute at Penn State, said the technology is moving faster than educational progress, noting it takes him two to three years to stand up a new course. Dr. Darryl Murdock, vice president of professional development at USGIF, encouraged the audience to examine the competitive landscape and consider what might attract outside talent from higher-paying sectors such as Silicon Valley.
Planert added it will be important to prepare analysts with new skills to interact with AI, such as creating models that drive output, conveying the output in a compelling way, and understanding AI’s limitations and when its use is appropriate.
Trusting the Tools
A second panel dedicated to the challenges of relying on ML and AI further examined how to trust the tools and explain their results.
“We need to understand how to quantify the accuracy of the results,” said Todd Johanesen, director of NGA’s Office of Sciences and Methodologies.
David Aha, lead of the Naval Research Lab’s Adaptive Systems Section, said common concerns today are that ML/AI technologies are opaque, unreliable, corruptible, spoofable, and overhyped. He pointed to the Defense Advanced Research Projects Agency’s Explainable AI program as an example of an initiative to create more explainable models that enable human users to understand and appropriately trust ML methods.
William “Buzz” Roberts, director of automation, artificial intelligence, and augmentation at NGA, is responsible for AI across the agency. He emphasized how important understandable, accurate, and reliable results are in missions such as safety of navigation and intelligence that protect human life.
Roberts said conversations about how, where, and when to appropriately create training data, as well as how to test and implement AI, aren’t occurring often enough. He suggested that rather than broad implementation, each mission should take its own approach to achieving information assurance.
Regarding overhype, Roberts said, “There’s a huge set of misperceptions that ‘We can automate that.’ We need a deep level of understanding by all parties to get to 98/99 percent reliability. … An analyst’s ability to stand behind what’s happening in that black box is critical.”
Roberts pointed to the need for common definitions of ML, AI, and related technologies to move discussions “from subjectivity to objectivity.”
Dawn Meyerriecks, the CIA’s deputy director of science and technology, gave an afternoon keynote in which she also warned about hype, describing AI as “emergent.”
“One in 30 companies that claim they’re doing machine learning and AI today is actually doing it,” she said. “… Big data doesn’t count.”
Meyerriecks added she sees much potential for AI, but warned against ruining the technology’s reputation early on by over-promising and under-delivering.
She made clear that for AI-derived intelligence to make it to the level of the President’s Daily Brief, the Intelligence Community (IC) would need to have demonstrable high confidence in the results and be able to explain how they arrived at the answers.
Dr. Steve Hall, advanced analytics/activity-based intelligence portfolio initiative lead for NGA’s Analytic Capabilities Portfolio, also gave a keynote address. He echoed some of Planert’s sentiments from earlier in the day, pondering how much longer the IC will be able to provide insightful, integrated intelligence without the assistance of ML and AI.
“Some global estimates suggest we only analyze 0.5 percent of the data available today,” Hall said. “What’s that going to be two years from now?”
He added employing AI “isn’t as simple as writing a check,” and said there are many technical and cultural challenges to overcome.
“There’s a big difference between a machine making a query and a machine launching a missile,” he concluded. “Where’s the line where we say, ‘This is where it stops’ [and humans take over]? … The answers to those questions are driven by the decisions we make today.”
The workshop also featured a classified session on hard problems of interest, a panel discussion on machine learning data sets and prize challenges, exhibits, and a networking reception.
To learn more about USGIF’s Machine Learning & Artificial Intelligence Working Group, visit the group’s webpage or email MLAIWG@usgif.org.
Featured Image: From left to right, panelists Dr. Darryl Murdock, Dr. Todd S. Bacastow, Dr. Curt H. Davis, and Melissa Planert. Photo Credit: NGA