AI & Next Gen Big Data Architecture

The future of big data analysis will rely on open-source and artificial intelligence


Big data has been exploding for years. According to an IBM report, 90 percent of the world’s data has been produced since 2015, with no sign of deceleration. Panelists at the Defense One Tech Summit on July 13 discussed what this data boom means for the Intelligence Community, the future of data architecture, and artificial intelligence (AI).

The development of new GEOINT tools, including AI algorithms for tasks such as image and pattern recognition and labeling, depends on high volumes of testable data. According to Dr. Jason Matheny, director of the Intelligence Advanced Research Projects Activity (IARPA), open-source data must become the norm if industry wants to continue to break ground in these areas.

“It’s too hard for the research community to advance if it’s required to rely on the small number of public data sets. Making more data available is critical,” Matheny said.

Industry’s instinct is to secure high-quality data in proprietary data lakes, available only to the source vendor and its business partners.

It will be up to government to lead the culture shift and to convince industry to unlock these lakes, relinquishing the valuable “oil of the future,” said Defense One technology editor and panel moderator Patrick Tucker.

When IARPA funds research, for instance, the commercial vendor is contractually required to make all unclassified data from the project publicly accessible. As part of a new machine learning prize challenge, IARPA is releasing one of the world’s largest publicly available image data sets to date.

“It would be useful for all government sponsors to make that a requirement,” Matheny said. “The value of that data survives longer than the individual operation.”

As open-source data offerings grow exponentially in the years to come, AI will become necessary to store, scan, and label vast troves of imagery and information.

The current limitations of computing are not crippling, but creating architectures capable of analyzing the big data of the future is a major challenge for industry and government alike. With regard to high-performance computing systems, China and other competitors outpace the United States.

IARPA has invested heavily in a program called Machine Intelligence from Cortical Networks (MICrONS). The program seeks to leverage the immense computing capability of the human brain by reverse engineering the wiring of the brain’s neocortical columns.

Other ambitious researchers seek answers in quantum computing, but according to Matheny, this area is still far from operational relevance.

“To date, claims of the superiority of a quantum machine over a classical machine have been disproven,” he said.

Once proven, AI capabilities will change the role of the human analyst. The tedious cycle of image survey and inspection (which is prone to human error) will be taken over by intelligent algorithms. Analysts will focus on “the why”—interpreting the patterns identified by AI and predicting events.

According to the Defense Intelligence Agency’s Director of Science and Technology Ardisson Lyons, “It’s boring to stare at a monitor of pictures day after day. [A human] might not notice that a front left bumper is the same front left bumper of a truck you saw three weeks ago, but a machine is increasingly good at that. It’s a tremendous advantage.”
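
To make the kind of cross-time matching Lyons describes concrete, the short Python sketch below shows one very simple way a machine could flag that a new image crop, such as a bumper, resembles one archived weeks earlier. It is an illustrative assumption rather than anything the panelists described: the file names are hypothetical, and the average-hash comparison stands in for the learned feature matching a production system would use.

# Minimal illustrative sketch: flag when a new image crop closely resembles an archived one.
# Assumes the Pillow imaging library is installed; file names below are placeholders.
from PIL import Image

HASH_SIZE = 8  # an 8x8 grid yields a 64-bit perceptual fingerprint

def average_hash(path: str) -> int:
    """Reduce an image to a coarse 64-bit fingerprint of its brightness pattern."""
    img = Image.open(path).convert("L").resize((HASH_SIZE, HASH_SIZE))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for i, p in enumerate(pixels):
        if p > mean:
            bits |= 1 << i
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits; a small distance suggests visually similar patches."""
    return bin(a ^ b).count("1")

if __name__ == "__main__":
    d = hamming_distance(average_hash("new_crop.png"),
                         average_hash("archived_crop.png"))
    print("likely the same object" if d <= 10 else "probably different")

A fingerprint this coarse would miss the subtle cues Lyons mentions; real systems rely on learned feature embeddings, but the basic workflow of comparing each new detection against an archive is the same.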

As AI becomes integrated into defense and intelligence processes, the pace of operations will accelerate, putting pressure on human decision-makers to consider outsourcing decisions to AI in order to maintain speed.

When small elements of operational control are relinquished to AI systems, ethical questions arise. One such question is that of lethal privilege. Should intelligent machines possess the ability to execute lethal action without the direct command of a human operator?

According to Dale Ormond, principal director of research for the Office of the Assistant Secretary of Defense, the answer is a resounding ‘no.’

“Ultimately, the commander is responsible for releasing lethal force downrange,” Ormond said. “That’s a fundamental premise of how we run our military and I don’t see that changing.”

However, Ormond believes AI has an important place on the battlefield of tomorrow. This role lies in the analysis of adversary reactions. As the dynamics of war change on a day-to-day basis, AI programs could evaluate the potential responses of an enemy power based on past action, giving commanders a head start in operational decision-making.

Photo Credit: IARPA
