The AI of Tomorrow

How human-machine teaming, cybersecurity, and battlefield singularity may shape the next decade of defense.

“An unmanned systems future is inevitable for just about every facet of our lives. We need to deal with it head on,” Brig. Gen. Frank Kelly told the audience March 7 at Defense One and Nextgov’s Genius Machines: The Next Decade of Artificial Intelligence event.

AI as it exists today is considered “narrow” in the scope of its applications. Programs such as Google Translate and personalized advertisement selectors are intelligent and effective, but ultra-specific and incapable of fulfilling more than one purpose. To truly harness the power of intelligent systems, industry and government are looking over the horizon to “general” AI capable of tackling wider, more diverse problem sets. General AI could manage a baseball team or help fix a damaged marriage without highly labeled or predefined data.

Defense One technology editor and moderator Patrick Tucker brought up the question of trust: “How do we trust this opaque, complex process to deliver the correct insights?”

The idea isn’t to build artificial systems and let them run amok unsupervised. Instead, companies and agencies are focusing on collaboration between AI and individual analysts, a relationship analogous to that of a police officer and a K-9 unit puppy. The officer raises the puppy based on his own experiences in the field, teaching and building trust with the dog as it becomes more effective. The pup learns how to complete tasks for its human operator and, eventually, the two form a successful team.

But unlike a German shepherd’s, an AI’s decisions are traceable to the human-made decisions that taught the machine how to react in the first place. Seeing the reasoning that led an AI to its conclusion reassures users that the machine was trained on their own work and that its analysis is trustworthy.
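
To make traceability concrete, consider a minimal sketch (a hypothetical illustration, not anything presented at the event): a small decision tree trained on a handful of invented, analyst-labeled examples. Its learned rules can be printed and inspected, so any recommendation can be walked back to the human judgments that shaped it. Python and scikit-learn are assumed here.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical analyst-labeled examples: [signal_strength, movement_detected]
X = [[0.9, 1], [0.8, 1], [0.2, 0], [0.1, 1], [0.3, 0]]
y = ["flag", "flag", "ignore", "ignore", "ignore"]  # the human's judgments

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The learned rules are fully inspectable, so an analyst can trace any
# recommendation back to the human-taught thresholds that produced it.
print(export_text(model, feature_names=["signal_strength", "movement_detected"]))
print(model.predict([[0.85, 1]]))  # -> ['flag'], via the printed rule path
```

A deep neural network would not expose its reasoning this directly, which is why explainability tooling for more complex models remains an active area of research.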

“We’ve found that the human responders over time begin to build more trust with the AI. Sometimes [people] don’t trust things until we try them out,” said Dr. Edward Chow, manager of the Civil Program Office at NASA’s Jet Propulsion Laboratory.

Some decisions, though, are still off limits for AI programs, such as the decision to take lethal (or potentially lethal) action. Since 2012, the U.S. military has had doctrine in place requiring a “human in the loop” on all potentially life-threatening decisions. The pace of AI development and the increasing likelihood of reaching “singularity on the battlefield,” a point at which combat becomes too fast-paced for the human mind to keep up and AI must make lethal decisions, put pressure on this doctrine. That pressure underscores the importance of maintaining security and control of these systems: if highly capable algorithms able to exercise lethal force fell into malicious hands, they could wreak havoc on civilians.
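
What “human in the loop” implies architecturally can be sketched in a few lines of code. The following is a hypothetical illustration, not actual doctrine or any fielded system: software may act on benign recommendations, but any potentially lethal action is blocked until a human explicitly authorizes it.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    potentially_lethal: bool

def execute(rec: Recommendation, human_approves) -> str:
    """Carry out a recommendation, gating lethal ones on human approval."""
    if rec.potentially_lethal and not human_approves(rec):
        # The lethal branch is never delegated to the algorithm alone.
        return f"BLOCKED: {rec.action} (awaiting human authorization)"
    return f"EXECUTED: {rec.action}"

# A stand-in operator callback; in practice this would be a real
# authorization chain, not a function that always denies.
def deny(rec: Recommendation) -> bool:
    return False

print(execute(Recommendation("reroute patrol", False), deny))  # EXECUTED
print(execute(Recommendation("engage target", True), deny))    # BLOCKED
```

The battlefield-singularity concern is precisely that the check in the middle becomes the bottleneck once engagements outpace human reaction time.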

The current cybersecurity landscape is asymmetric in favor of offense, with attacking easier than defending, noted Intelligence Advanced Research Projects Activity (IARPA) Director Dr. Jason Matheny. System insecurity worries the agency even more than the popular specter of a sentient, self-serving AI.

“We’re much less worried about ‘Terminator’ and Skynet scenarios than we are about digital ‘Flubber’ scenarios—really badly engineered systems that are vulnerable to error or attack from outside,” Matheny said, citing easily spoofable image classifiers and data poisoning attacks.
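
Data poisoning, one of the attacks Matheny cites, is easy to demonstrate in miniature. The sketch below is a hypothetical illustration on synthetic data (scikit-learn assumed, not anything from IARPA): an attacker flips a fraction of training labels, and a nearest-neighbor classifier that faithfully memorizes its training set sees its accuracy collapse.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Synthetic two-class data: two well-separated clusters in the plane.
X_train = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])
y_train = np.array([0] * 200 + [1] * 200)
X_test = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y_test = np.array([0] * 100 + [1] * 100)

clean = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)

# Poisoning: an attacker silently flips 30% of the training labels.
y_poisoned = y_train.copy()
flipped = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned[flipped] = 1 - y_poisoned[flipped]
poisoned = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_poisoned)

# The poisoned model reproduces the corrupted labels wholesale, so its
# test accuracy drops sharply relative to the clean model.
print("clean accuracy:   ", clean.score(X_test, y_test))
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```

The same asymmetry favors the attacker in the spoofable-classifier case: small, deliberately chosen changes to inputs or training data can produce confident, wrong outputs.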

These are uncharted waters for the law enforcement and defense communities. If a program is built poorly and goes on a rogue cybercrime spree, or is built for nefarious purposes (such as ransomware or worms), who is at fault? Is it the program itself? Is it the program’s creator?

“We’re going to go after the person who wrote it,” said Trent Teyema, the FBI’s chief of cyber readiness and cyber chief operating officer. As systems grow smarter and stronger, it will become harder to ascribe fault to human creators whose algorithms act in unintended ways. And as for the AI itself? “It’s not about how we arrest it, but how do we stop it,” Teyema said.
