Augmented reality tracks one’s location in both space and time. Thus, its very foundation is geospatial.
A cosmopolitan young couple appears lost as they tour a busy city street in a foreign country. The man holds up his smartphone and a digital overlay appears on his camera view, dropping a pinpoint at a nearby building to identify a restaurant recommended by a friend. This is a common storyline in TV commercials touting the latest smartphone and its capabilities, but it’s also an illustration of augmented reality in its most basic form.
Augmented reality (AR) is simply defined as “a technology that superimposes a computer-generated image on a user’s view of the real world, thus providing a composite view.” Hence the name: the technology literally augments reality.
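In its most basic geo-registered form, placing an overlay such as a restaurant pinpoint comes down to comparing the bearing from the user’s position to the point of interest against the device’s compass heading, then mapping the difference into the camera’s field of view. The sketch below is purely illustrative; the function names, field-of-view value, and screen width are assumptions, not any vendor’s API:

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from the observer to a point of interest, in degrees [0, 360)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360

def overlay_x(observer, poi, heading_deg, fov_deg=60, screen_w=1080):
    """Horizontal pixel at which to draw the POI marker, or None if the POI is off-screen."""
    # Angle of the POI relative to where the camera is pointing, wrapped to [-180, 180)
    rel = (bearing_deg(*observer, *poi) - heading_deg + 180) % 360 - 180
    if abs(rel) > fov_deg / 2:
        return None  # outside the camera's field of view
    return round(screen_w / 2 + (rel / fov_deg) * screen_w)
```

For example, a point of interest due east of the observer lands at the center of the screen when the device faces east (heading 90°), and drops off-screen entirely when the device faces north.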
In the commercial world, AR has been around for some time in the form of smartphone travel apps, allowing curious users to lay digital points atop the real world, and quickly gather data on hotels, restaurants, landmarks, and more. More recently, the fervor over the development of Google Glass, an optical display worn like a pair of glasses for completely hands-free computing, has brought interest in the possibilities of AR to a whole new level.
“People tend to typically think of AR as ‘Terminator vision,’” said John Clark, chief innovation officer at Thermopylae Sciences and Technology, referencing the character Arnold Schwarzenegger played in the 1984 film. “But from a computer science perspective, that’s only one application of AR.”
Indeed, AR is much more than just mobile, wearable computing, but Google Glass has the potential to catapult the technology into everyday life. The capabilities and promise of AR also reach into law enforcement and security, intelligence, and even the military.
As both the hardware and software for AR are developed to greater sophistication, some experts predict it will become as pervasive in our lives as smartphones; others see its adoption as a slower process. Either way, AR is rapidly growing in importance—and because it depends on keeping track of one’s location in both space and time, the very foundation of all AR data is geospatial.
From Private Sector to Public Service
As futuristic as it may seem, AR is hardly a new technology. It was used, at least on a limited basis, by manufacturers such as Boeing and by the U.S. military as far back as the late 1980s. What is new is that computing technology has advanced exponentially in power, speed, and capacity while shrinking in size and cost, making AR more accessible and practical for broader audiences and generalized purposes.
“The interesting thing about that is we think of AR in the commercial marketplace as being pretty leading edge,” said Mark White, chief technology officer of Deloitte Consulting, which recently published a white paper on potential uses for AR in the federal government. “Particularly in military and defense, it’s not that new. Some of the very first systems that implemented AR were in the defense space.”
Take for instance the sophisticated capabilities in Lockheed Martin’s F-35 Lightning II helmet, which takes advantage of heads-up display technology to show flight conditions, targeting information, and warnings in real time. This eliminates the need for the pilot to refer to head-down displays, reducing workload and increasing responsiveness. The helmet also provides the pilot with night vision capabilities, using the helmet’s camera and Distributed Aperture System (DAS).
But more remarkably, the DAS generates a global view of the world around the F-35, allowing the pilot to “see through” the fuselage to view what is above, beside, and beneath the aircraft. Items of reference such as waypoints and targets are also captured and displayed on the pilot’s line of sight.
“The information provided is a conglomeration of data gathered from all the sensors on the F-35 to produce uniform situational awareness representation to the pilot,” said Casey Contini, Lockheed Martin’s F-35 engineering director for electro-optics and helmet.
While the F-35 team paves the way for pilots to employ augmented reality, the U.S. Army has taken a cue from Google Glass and is experimenting with the use of tactical glasses to develop solutions for soldiers on the move.
An early version of Google Glass is currently being beta tested through programs for early technology adopters such as Robert Scoble, startup liaison for open cloud computing company Rackspace and co-author of the book Age of Context: Mobile, Sensors, Data and the Future of Privacy.
“Using [Glass] for photography has been life-changing for me,” Scoble said. “I’m getting moments with my kids that I never got before because it’s the first camera that’s always ready. It takes less than one second to take a picture.”
That ease of access to an application has wide-reaching implications, particularly for the DoD and Intelligence Community.
“I’m taking two to five times more photos and videos, and I’m much more likely to take a video with this thing in the street,” Scoble said. “So when we have something like the Boston [Marathon] bombing, we’ll have even more video and photos available.”
Scoble’s predictions are inching closer to reality. In July, the Army tested the use of tactical glasses during Enterprise Challenge 2013 at Fort Huachuca, Ariz. The glasses were at the heart of the Distributed Common Ground System-Army (DCGS-A) exercise.
Full motion video from a Gray Eagle unmanned aircraft was exported to soldiers equipped with tactical glasses from Osterhout Design Group. Although the exercise deployed a Gray Eagle UAV and soldiers were tethered to the DCGS-A Tactical-Intelligence Ground Station vehicle, the prototype is designed to work with any airborne ISR platform and to eventually be untethered, according to Col. Edward Riehle, U.S. Army Training and Doctrine Command capabilities manager for sensors and sensor processing.
“The soldiers that used the glasses appreciated the ability to conduct operations on the move,” Riehle said. “The fact that you can put the glasses on, drive, provide reports, and not be encumbered by a computer was very helpful for them.”
In the case of Enterprise Challenge, the tactical glasses allowed an intelligence analyst to more easily wear his other hat of Track Commander while the convoy was on the move.
“Normally you’re looking at a computer to see the FMV display,” Riehle said. “When you’re moving, your job is to be the Track Commander, not so much an analyst. Putting the imagery on his head allows [the analyst] to do both jobs.”
Riehle added that tactical glasses eliminate the light signature put off by computers when a convoy is traveling at night. The Army will continue developing the project, with another tactical glasses exercise planned for Enterprise Challenge 2014. Riehle predicts the service will move toward head-worn computing devices for a number of capabilities in the near future.
“I don’t believe it’s just an intelligence analyst tool,” he said. “It shapes our situational understanding and awareness for soldiers at any echelon, so we need to move this capability to the soldier who’s on the edge.”
Riehle added that GEOINT is what really creates the bridge from head-worn computing into full-blown AR technology, recalling how, in the past, data was overlaid on paper maps using acetate.
“When we go digital, we don’t have to do that anymore,” he said. “We can augment that geospatial layer in a continuous process and get it all the way down to the soldier.”
Riehle added that potential Army AR uses could include overlaying IED hot spots, road conditions, and both friendly and enemy force disposition, to name a few.
The Defense Advanced Research Projects Agency (DARPA) is also evaluating a prototype AR system for soldiers in the field called Urban Leader Tactical Response, Awareness, and Visualization (ULTRA-VIS). By overlaying full-color graphics onto the real-world scene confronting the soldier, ULTRA-VIS reveals other forces, vehicles, and threats in the area that aren’t visible to the naked eye, such as a sniper lurking on the other side of a building. The system can also tie into remote data feeds to provide a variety of tactical information. Taking the idea a step further, DARPA is also working with a company called Innovega iOptics to develop AR-enabled contact lenses that project information in the field of vision near the eye, allowing the wearer to view data while still focusing on the more distant real-world scene.
The burgeoning development of commercial AR has both inspired and driven the technology’s adoption and expansion in government.
“It used to be that in order to afford a heads-up display for completing a complex task in a difficult situation, [you] had to have plenty of resources and it came from the labs, from the big guys down,” White said. “Now, with consumerization, democratization, and technical advancements, it’s coming from the little guys up.”
A 2013 study by Deloitte’s GovLab think tank, titled “Augmented Government: Transforming Government Services through Augmented Reality,” noted that despite the increasing use of AR in the civilian sector, “Its strategic application to government service delivery is still nascent.”
The report details three hypothetical scenarios in which AR could play a vital role in government-based functions: one involving the scenario-based training of new border patrol agents; another depicting the detection and capture of a suspicious traveler trying to sneak explosives onto an airliner; and finally, FEMA personnel using AR to locate and rescue people trapped by a hurricane.
“The three examples that are in the paper are not real, but they are realizable,” White said. “None of them are beyond the reality of the technology.”
So, if augmented reality has such vast potential to enhance and improve government missions and tasks, why isn’t everyone using it already? Security, for starters.
“Security is critical—being certified to run this data over our networks,” Riehle said. “We have to get that right and it’s too important not to. I think that [is] one of our biggest challenges.”
There are other technical considerations, such as improving the battery life and general reliability of visual displays, as well as more ambiguous issues to consider, such as privacy, access to information, distraction, and the risk of confusing virtual layers with the real world.
What will be the consequences of the technology when someone with AR-enabled contact lenses can glance at a stranger on the street and instantly access their entire personal profile, including address, employment and criminal records, family information, social media pages, and even real-time medical data such as blood pressure and heart rate—especially if that stranger is an undercover police officer or federal agent? What happens if AR systems are hacked or spoofed? And is the technology dependable enough for situations that allow for zero margin of error, such as the battlefield or a natural disaster area?
Given such unanswered questions, it’s easy to see why some federal agencies are more hesitant to embrace the technology.
“We have some things that we do now that I think could be improved with the use of AR,” said Mark Borkowski, assistant commissioner for the Office of Technology Innovation and Acquisition with U.S. Customs and Border Protection (CBP). But, he added, “There’s a lot of work we all believe has to happen to make AR ready for that kind of an application.”
Inspecting people and vehicles passing through ports of entry is one job for which AR could be valuable, Borkowski said. But it’s not clear yet just how valuable.
“There’s a real tendency to embrace technology because it’s the latest and greatest thing, but one of the things we have to think about is, what really will we get out of it?” Borkowski asked. “Does it really make a difference if I have AR at a port of entry compared to looking at a monitor from a normal computer screen at my booth?”
Another possible application might be noting subtle changes in the physical landscape that could indicate illegal entry and activity.
“This is again somewhat in the future, but we’re very interested in being able to detect changes in areas,” Borkowski said. “Because if there’s a change in an area, there’s something that caused it, and that’s good information to us. So to the degree that AR would help us overlay what was in a place yesterday compared to what’s in a place today, that might be a very useful tool downstream for [CBP].”
Training is another area where AR could prove valuable. Police officers, federal agents, and soldiers who face tough decisions about the proper use of deadly force in dangerous situations often use virtual reality—large video game-like screens and simulated weapons—to train without the risk of live fire. But AR-based scenarios could potentially take the realism to a whole new level.
“Those kind of training scenarios would benefit from something like AR where we could do these things in the actual operational environment, overlaid and augmented with AR,” Borkowski said.
But Borkowski hasn’t completely jumped on the AR bandwagon.
“It’s the algorithmic development and then access to data that’s more likely to be a challenge,” he said. “To some degree, this could become a big data problem. How do you get access to data, synthesize that data to information, and then find a way to depict what you’ve concluded from all that data in a way that’s useful to whoever’s using the AR?”
Borkowski doesn’t entirely dismiss the technology’s promise, either.
“We haven’t quite gotten our arms around what we might do with it,” he concluded. “But [AR] intrigues us and we will want to follow it as we go forward.”
The Foundational Layer
Handling the enormous amount of data required to make AR successful is where geospatial data management comes in. In the future, each time a pilot, soldier, or border patrol agent swivels his or her head to follow an ever-changing situation, their AR device must swiftly scan, locate, tag, and return information on a tsunami of data points, all in real time.
“We want to be able to ubiquitously share our geospatial AR library with all of these devices,” said A.J. Clark, president of Thermopylae Sciences and Technology. “That means having to manage potentially billions of objects. If you’re walking down the street, AR could be everything from giving you the name and address of a building, to walking up close to it and wanting to see where the bricks came from that are on it, or what kind of parts you might need to replace the doors or the hinges. It gets kind of complex. So for us, we just want to be able to deal with broad AR down to detailed objects, which requires storing this geo-data in new ways.”
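Serving “broad AR down to detailed objects” at this scale typically means partitioning geo-data so a device can fetch only the objects near its current position rather than scanning billions of records. The toy sketch below illustrates one common pattern, bucketing objects into grid tiles keyed by quantized coordinates; the tile size, class, and method names are all illustrative assumptions, not a description of any particular system:

```python
from collections import defaultdict

TILE = 0.001  # tile size in degrees (~100 m at the equator); a tunable assumption

def tile_key(lat, lon):
    """Quantize a coordinate into a grid-tile key for constant-time bucket lookup."""
    return (int(lat // TILE), int(lon // TILE))

class GeoIndex:
    """Toy tile index: bucket objects by grid cell, query the 3x3 neighborhood."""
    def __init__(self):
        self.tiles = defaultdict(list)

    def add(self, lat, lon, obj):
        self.tiles[tile_key(lat, lon)].append((lat, lon, obj))

    def near(self, lat, lon):
        """Return objects in the query tile and its eight neighbors."""
        r, c = tile_key(lat, lon)
        return [o for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                for (_, _, o) in self.tiles.get((r + dr, c + dc), [])]
```

Production systems use more sophisticated variants of the same idea (geohashes, quadtrees, R-trees) plus level-of-detail tiers, so that a city-block query returns building names while a close-range query can return finer detail.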
John Clark added, “I like to say all data is geospatial, even if it’s just your device’s location or how much information you requested when you’re in that area. Because it’s not just the location of the building where you are, it’s what you do when you’re near it, how many people go in and out of that building, how many Google searches about that building there are. It’s relating all that other content in time and space that we’re focused on. And then how to make it relevant for businesses, government, and users.”
Riehle is enthusiastic about AR’s ability to enhance and bring “flat maps” to life.
“AR allows you to analyze and annotate the changes and get them forward to the soldier, so not only are they seeing what the sensor is looking at, but also can be informed by what the analyst is looking at, and that data can be moved to them over the geospatial plane.”
Organizations may embrace it fully for the missions and tasks at hand, or adopt it sparingly and cautiously, but it’s undeniable: augmented reality will soon be a part of our reality.