Expanding the Utility and Interoperability of Rapidly Generated Terrain Datasets

Reality modeling advancements now allow for the generation of high-resolution 3D models from a variety of sources to rapidly meet modeling and simulation needs


By Jason Knowles, Ph.D., GISP, GeoAcuity; Col. [ret.] Steven D. Fleming, Ph.D., Spatial Sciences Institute, University of Southern California; and Ryan McAlinden, Institute for Creative Technologies

Reality modeling is a staple of military modeling and simulation (M&S) communities. Whether working from overhead or terrestrial imagery, photogrammetry-based collection and processing advancements now allow for the generation of high-resolution 3D models from a variety of sources to rapidly meet M&S needs. The University of Southern California’s Institute for Creative Technologies (ICT) Modeling and Simulation Terrain Team creates these immersive and informative 3D datasets, which help warfighters and supporting elements improve performance, through its participation in the U.S. Army’s One World Terrain (OWT) effort. These high-resolution 3D models deliver enhanced real-world views and provide a detailed understanding of the physical environment, allowing for in-depth simulated mission rehearsal and planning. Traditionally, these models were designed to serve the needs of the M&S communities only, with no geospatial reference tying them to a real place in space (i.e., single-scene viewing); the models were limited to training and theoretical, scenario-based mission planning. ICT, through its partnership with the University of Southern California’s Spatial Sciences Institute (SSI) and GeoAcuity, a veteran-owned small business, has been working to transition these datasets from M&S-specific needs to a more “operationalized,” geospatially enabled, and interoperable form designed to better integrate with the geospatial intelligence (GEOINT) Community writ large. A team with roots in both disciplines conducted an in-depth study of the knowledge and technology gaps between the M&S and GEOINT Communities and identified the best ways to close them. By addressing differing data formats and standards, and by determining how best to leverage geo-coordinates, data projections, and accuracy requirements, this effort has made these valuable, high-resolution datasets interoperable with M&S software, commodity GEOINT tools, and operational mission command and mission planning systems.

Reality Modeling: Traditional Data Uses and OWT’s Role

Reality modeling provides a detailed understanding of the physical environment, and geo-specific 3D terrain representation is revolutionizing the M&S world. In tandem with the increased availability and use of unmanned aerial systems (UAS) and small satellites, reality modeling advancements now allow practitioners to generate high-resolution (~cm-level) 3D models to meet M&S needs across all terrain environments. Scalable, mesh-based models deliver enhanced, real-world visualization for a variety of purposes including training, simulation, rehearsal, intelligence, and operations. Many simulators and mission command systems today still rely on precise 2D geospatial data, though there is an increasing requirement to incorporate true 3D data into the workflow, which enhances realism and better represents the complexity of the operational environment. This requirement ranges from ground-based, dismounted operations through large-scale combined arms mission sets in which an accurate, geo-specific 3D representation of the surface is important.

OWT is part of the Army’s Synthetic Training Environment (STE) program, which is charged with creating the Army’s next-generation M&S platform across a wide spectrum of mission sets and operations. The goal of OWT is to create a geo-specific, high-resolution 3D representation of Earth’s surface that can be fed into the STE platform and delivered to the point of need. OWT relies on a range of traditional and non-traditional sources to produce this 3D representation, from open-source to commercial to national technical means. More specifically, OWT advances the feasibility of turning collected terrain data into simulation-usable terrain features that can be employed in near real time (and eventually in real time) by simulation platforms. This work demonstrates how rapid terrain generation may be incorporated in near real or real time into a virtual or constructive environment for geo-visualization and simulation applications.

Research has been conducted on the challenges presented by 3D terrain data for several decades, harkening back to the days of the Army’s Topographic Engineering Center (TEC). In the Department of Defense (DoD), tremendous efforts have focused on building the Army’s suite of next-generation interactive simulation and training platforms. Years ago, terrain was often considered the “Achilles’ heel” of simulators. Its generation is time-consuming, expensive, manpower-intensive, and fraught with vagaries that result in unrealistic, unsatisfying, and often uncompelling synthetic experiences.

Simulation environments are often created with entities “floating above the terrain” because of correlation issues, or virtual characters “passing through walls” because the models were not attributed correctly. Until recently, creating the virtual terrain in applications was purely a manual activity, with artists, modelers, and programmers spending significant time and money to create one-off terrain datasets that were rarely able to be repurposed in other rendering environments. Limitations in processing and machine learning (ML), and poor-quality source data compounded the problem for decades, stalling attempts to fundamentally change the way terrain is created for virtual applications.

However, in the past five to seven years, the introduction of cloud computing, better and cheaper CPUs and GPUs, and new sources of high-resolution terrain data (unmanned systems, airborne and terrestrial LiDAR, small satellites, and commercial mapping resources such as Bing or Google Maps) have provided new procedures for terrain generation.

The opportunity has arisen to reduce the time and cost of creating digital terrain by automating what were previously manual efforts. Automated functions include the procedural generation of textures and polygons, the correlation and linking of datasets, and the addition of application-specific attributions to models that allow the simulation to reason about colliders, navigation meshes, and other entities. Adding semantic labels and metadata to the underlying data is critical so the engine can differentiate how the data is to be used at runtime (e.g., whether something can be driven on, shot through, moved through, hidden behind, etc.). Leveraging these advancements and combining them with automation routines has allowed the M&S community to exponentially grow its capabilities and output.
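To make this concrete, the minimal Python sketch below (the class names, labels, and behavior flags are hypothetical, not OWT’s actual schema) illustrates how a semantic label on a terrain feature can carry simulation-specific attribution that an engine might query at runtime:

    # Hypothetical sketch: semantic labels carrying simulation attribution.
    from dataclasses import dataclass, field
    from typing import Dict

    @dataclass
    class TerrainFeature:
        feature_id: str
        semantic_label: str                      # e.g., "road", "brick_wall", "vegetation"
        attributes: Dict[str, bool] = field(default_factory=dict)

    # Illustrative mapping from a semantic class to engine behaviors.
    CLASS_BEHAVIORS = {
        "road":       {"drivable": True,  "shoot_through": False, "cover": False},
        "brick_wall": {"drivable": False, "shoot_through": False, "cover": True},
        "vegetation": {"drivable": False, "shoot_through": True,  "cover": True},
    }

    def attribute_feature(feature: TerrainFeature) -> TerrainFeature:
        """Copy class-level behaviors onto the feature so the runtime engine
        can consult them when building colliders or navigation meshes."""
        feature.attributes.update(CLASS_BEHAVIORS.get(feature.semantic_label, {}))
        return feature

    wall = attribute_feature(TerrainFeature("bldg_012_wall_3", "brick_wall"))
    print(wall.attributes)  # {'drivable': False, 'shoot_through': False, 'cover': True}

In practice, the labels themselves would come from the classification and segmentation routines described above; the value of the attribution step is that the behaviors travel with the data rather than being hand-authored for each scene.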

Expanding the Use of Reality Modeling Data

The field of GEOINT has rapidly evolved from paper maps with acetate overlays, to the digital 2D maps of the 1990s and 2000s, to the 3D/4D representations we see today. This data continues to grow in abundance and requires a new breed of cross-disciplinary collaboration and research to ensure its utility is maximized. Identifying and developing ways for users to exploit and better understand the 3D world is becoming increasingly relevant. The M&S community, through virtual and augmented reality (VR/AR), continues to dominate the generation and representation of realistic 3D mesh data; however, the integration and utilization of such data within the operational community has been very slow. Traditionally, M&S practitioners had no need for coordinate geometry, as typical VR/AR applications are self-contained or single-scene environments and do not need to sync with a real-world place in space, thereby limiting their utility when fused with other spatially aware operational datasets and GEOINT.

Building on the gains made in automation and the increased productivity of M&S 3D datasets, the ICT team was asked to investigate the feasibility of “operationalizing” these datasets to support not only STE but also the operational and geospatial needs of the warfighter.

Once the 3D data has been geospatially enabled, it becomes much more than simulation data; it becomes usable GEOINT. This data can then be leveraged in deployed environments by ground commanders, military planners, engineers, and practitioners for mission planning and rehearsal, terrain generation, route mapping and clearance, base layout and design, infrastructure planning, IED modeling and post-blast assessment, cover and concealment, and more. For post-attack recovery efforts, practitioners can quickly send UAS to capture existing conditions, then model the damage and map unexploded ordnance to assess the situation and develop a recovery plan while minimizing exposure to deployed troops. Operational units such as infantry and special operators can produce models to map the battlespace, enhance defensive preparation efforts, or model assault objectives.

It was in that vein that ICT reached out to both GeoAcuity and SSI for their geospatial acumen and their DoD and Intelligence Community support experience. The collaboration yielded quick and beneficial results. The team was rapidly able to leverage the geo-coordinates embedded in OWT’s source imagery and, with a few manipulations to the photogrammetric processing routines, output spatially aware 3D datasets that were geo-referenced and aligned to the correct place in space. These datasets became usable across the spatially aware software employed by both the operational and GEOINT Communities (ArcGIS, ArcGIS Pro, SOCET GXP, Google Earth, etc.). The team also identified ancillary 2D datasets produced during the 3D generation process (high-resolution orthographic imagery, digital surface models, point clouds, etc.) that were of no use to the M&S community and were going unused. Because these datasets are of tremendous value to the operational and GEOINT communities, the existing workflow was again modified to geospatially enable them and deliver them alongside the 3D products. In addition, the team determined the best data formats across both 2D and 3D to benefit the largest number of end users and make these high-resolution, valuable datasets as interoperable as possible.
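As a rough illustration of that geo-referencing step, the sketch below (using the open-source pyproj library; the anchor coordinates and helper function are hypothetical, not the team’s production workflow) projects a WGS84 coordinate embedded in the source imagery into UTM so that a mesh built in local meters can be placed at its correct real-world location:

    # Hypothetical sketch: placing a locally referenced mesh in real-world space.
    from pyproj import Transformer

    # Anchor point taken from the source imagery's embedded geo-coordinates (lon, lat).
    anchor_lon, anchor_lat = -116.685, 35.302   # illustrative location near Fort Irwin, CA

    # WGS84 geographic coordinates -> UTM zone 11N projected coordinates (meters).
    to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32611", always_xy=True)
    origin_e, origin_n = to_utm.transform(anchor_lon, anchor_lat)

    def georeference_vertex(x_local: float, y_local: float):
        """Shift a mesh vertex (meters from the anchor) into UTM easting/northing."""
        return origin_e + x_local, origin_n + y_local

    print(georeference_vertex(12.4, -3.7))

The same coordinate reference system information can then be written into the ancillary 2D products (orthomosaics, surface models, point clouds) so that every output lands in the same place in space.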

Operationalizing Reality Modeling Data: The Current State of Development

Following the team’s initial success in operationalizing and expanding the data offerings, they began working with operational units and the software packages those units use in order to gather feedback and conduct initial acceptance testing. Working closely with the Army Geospatial Center (AGC), the team successfully integrated the high-resolution, geo-rectified 2D datasets into both the Army’s Command Post Computing Environment (CPCE), which provides a software infrastructure framework and common interface for data and services, and Joint Capabilities Release (JCR), the Army’s next-generation friendly force tracking system currently being fielded to Afghanistan. This upgrade builds on the situational awareness tool Force XXI Battle Command Brigade and Below/Blue Force Tracking (FBCB2/BFT), which is integrated on more than 120,000 platforms and fielded to every brigade combat team in the Army.

The 3D datasets were provided to Army intelligence directorates at the National Training Center, where analysts were able to view, interact with, and perform analysis on the terrain using Esri software. The team also worked with U.S. Special Operations Command partners to integrate both the enhanced 2D and 3D datasets into the Android Tactical Assault Kit (ATAK), a handheld device soldiers use for real-time battlefield situational awareness. This user testing and feedback, specifically from the ATAK user community, highlighted deficiencies in the newly geospatially enabled datasets: end users noticed significant elevation offset errors in the vertical (z-value) representation of the data. It was determined that the commercial-grade GPS units on the UAS were introducing large spatial accuracy errors, especially in the vertical. Subsequent testing showed, on average, up to 2 meters of horizontal error and up to 60 meters of vertical error in the spatial accuracy of the datasets. While this level of spatial accuracy is sufficient for some visualization, situational awareness, modeling, and simulation applications (e.g., single-scene viewing), it is not sufficient for analytics or operational use due to the high error margins.
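The sketch below shows how such offsets are commonly quantified: model-derived coordinates are compared against independently surveyed check points, and horizontal and vertical root-mean-square error (RMSE) are reported separately. The check-point values are illustrative only, not the team’s measurements:

    # Hypothetical sketch: quantifying horizontal and vertical error against check points.
    import math

    # (model-derived, surveyed) pairs of (easting, northing, elevation) in meters.
    check_points = [
        ((500123.4, 3901880.2, 712.5), (500121.9, 3901879.1, 770.8)),
        ((500310.7, 3902045.6, 698.3), (500309.2, 3902044.8, 757.1)),
    ]

    def rmse(errors):
        return math.sqrt(sum(e * e for e in errors) / len(errors))

    horizontal = [math.hypot(m[0] - s[0], m[1] - s[1]) for m, s in check_points]
    vertical   = [m[2] - s[2] for m, s in check_points]

    print(f"Horizontal RMSE: {rmse(horizontal):.2f} m")
    print(f"Vertical RMSE:   {rmse(vertical):.2f} m")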

The team sought to rectify this issue through the use of a professional-grade global navigation satellite system (GNSS) base station and a real-time kinematic (RTK)-compatible UAS. RTK positioning is a satellite navigation technique used to enhance the precision of position data derived from GNSS constellations such as GPS, GLONASS, Galileo, and BeiDou. It uses measurements of the phase of the signal’s carrier wave in addition to the information content of the signal, and relies on a single reference station or an interpolated virtual station to provide real-time corrections, yielding up to centimeter-level accuracy.[1] This had a tremendous effect, improving the spatial accuracy of OWT’s 3D terrain models to the centimeter (~5 cm) level in both the horizontal and vertical.
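A back-of-envelope calculation, shown below purely for illustration, helps explain why carrier-phase techniques can reach this precision: the GPS L1 carrier wavelength is roughly 19 cm, so resolving the phase to a small fraction of a cycle, combined with differential corrections from the base station, supports centimeter-level positioning (atmospheric effects and multipath keep practical accuracy at a few centimeters rather than millimeters):

    # Illustrative arithmetic only; not part of the OWT processing chain.
    C = 299_792_458.0      # speed of light, m/s
    L1_HZ = 1_575.42e6     # GPS L1 carrier frequency, Hz

    wavelength = C / L1_HZ              # ~0.19 m per carrier cycle
    phase_fraction = 0.01               # assume ~1% of a cycle is resolvable

    print(f"L1 wavelength: {wavelength * 100:.1f} cm")
    print(f"Implied ranging precision: {wavelength * phase_fraction * 100:.2f} cm")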

Operationalizing Reality Modeling Data: Possibilities for the Future

This research need stretches across the workflow from collection to application. Early efforts have led to many outcomes, including the purchase of Tactical Decision Kits (TDK) for the U.S. Marine Corps that allow small units to organically manage their own geospatial holdings. Unit operators now regularly collect image data and provide it to others in the force as well as to researchers for additional classification and segmentation experiments.

Future advancements for OWT include provisioning the data for use not just by training and simulation systems, but for operations and other related functions such as intelligence, logistics, and planning. The OWT program seeks to continue migrating its collection and data-creation methodology so that the capability is organic to units and empowers the creation and control of geospatial data at the unit level as much as possible.

Stronger direct ties to the broader geospatial enterprise, including AGC and the National Geospatial-Intelligence Agency (NGA), are critical, especially as more and more 3D terrain is created by units themselves. AGC and NGA serve as content managers and validation authorities for geospatial data, so ensuring a coordinated effort between them and OWT is a paramount focus moving forward.

Additionally, as artificial intelligence improves in both capability and potential, OWT will develop and leverage various ML techniques that continue to improve classification and segmentation for the attribution of 3D datasets.

Finally, edge computing will serve as the glue for the vast array of content being produced and managed across the community. Existing network infrastructures are likely inadequate for sending/receiving gigabytes of data, so ensuring producers and consumers of geospatial data have the necessary computing locally that can be synchronized across the enterprise will be key. Ultimately, researchers hope to revolutionize the way the two communities collect, process, and serve 3D geospatial data with long-term goals being to obviate the need for human intervention, and to use automation to more quickly and cost-effectively deliver terrain data to the point of need.


  1. Lambert Wanninger, “Introduction to Network RTK,” IAG Working Group 4.5.1, www.wasoft.de, retrieved February 14, 2018.
