At a time when artificial intelligence is advancing rapidly, the GEOINT community is scrambling to suss out the whys and hows of layering human intelligence into the equation.
The coexistence of technology and humanity has been a favorite theme of academia, literature, and entertainment for a century or more. At Tuesday’s third Symposium event, it was the focus of a panel titled “Metaphors and Algorithms: Why Geospatial Professionals Need Both.”
Distilling the answer, GEOINT veteran and vice chair of the Landsat Advisory Committee Roberta (Bobbi) Lenczowski said, “Machines today are being taught to better understand the use of metaphors. One of the things that’s very difficult to teach computers is values and ethics and morals. And never forget that that’s a key piece to how we make decisions as human beings.”
“We have a tremendous amount of data, but what is lacking, really, are what I call the arts,” said Qassim A. Abdullah, Ph.D., vice president and senior chief scientist at Woolpert. “Not the technology and sensors as much as the art of using this data and mining it.”
Fellow panelist Matthew Berrett put it like this: “What meanings and insights might you miss if you’re just looking at the stuff you can put your hands on with our sophisticated tools? A lot.”
With the “why” firmly established, the conversation transitioned to pondering the “how.” The four panelists were in unanimous agreement as to the first and most crucial strategy: establishing networks of professionals whose backgrounds extend far beyond the obvious science and tech community. “At the Center for Anticipatory Intelligence, we systematically and aggressively approach multi-domain orientation throughout all problem sets,” said Berrett, who co-founded the center at Utah State University. “So in any given classroom, we’ve got 35 different majors, from AI/ML experts to cultural anthropologists to folks that are going to become bankers.”
Mining the perspectives of wide-ranging backgrounds, cultures, and credentials is key to avoiding the pitfalls of cognitive bias, assumptions, and tunnel vision. Berrett pointed to the infamous Iraq weapons of mass destruction miscalculation of the early 2000s as a classic example. “How can the same premises that we saw before [in 1989 and 1990] lead to a very different conclusion?” he asked. “One of the reasons is we didn’t have a sufficiently diverse set of minds and backgrounds and disciplines in the room.”
Panelist Renny Babiarz offered the year he spent in China as another example of cultural nuance. “I was struck by how many people of all ages wanted to learn English,” said Babiarz, Ph.D., vice president of analysis and operations for All-Source Analysis at Johns Hopkins University. “Everybody wanted to learn it; it was extremely competitive.”
That competitive mindset, Babiarz said, was a metaphor for China’s approach to competition with the world.
Maintaining an open mind with a healthy dose of skepticism is also crucial to the advancement of human intelligence. “Always be a little skeptical about machine learning and AI,” Lenczowski said. “Because it’s that skepticism that will in fact encourage us to continue to think about ‘what isn’t known?’”
It’s an old question with new and increasingly complicated applications, but all panelists agreed the potential is positive. “How do we advance our state of human intelligence in order to fully understand the wealth of data that we’re living with?” said Lenczowski. “Not just that which is sensed from the invisible sensors but that which comes from a variety of sensors, including the human sensor.”
“That’s not just security, that’s not just warfare,” she continued. “It’s about agriculture, it’s about the health of the world, it’s about the environment.”