The Frontier of Multimodal Mapping

The future of secure, integrated data visualization


By Ashley M. Richter, AECOM; Rupal Mehta, Ph.D., the University of Nebraska-Lincoln; and Michael Hess, Ph.D., Alutiiq, LLC

What will data visualization and knowledge interaction look like in the future? We don’t need a crystal ball to guess. Between the exponential growth and confluence of extant and emerging technologies, and our species’ tendency to reverse engineer the systems we dream up in our science fiction, the hazy shapes of future mechanisms are already visible.

There are intersecting features across the sci-fi spectrum that provide clues: the fully immersive virtual reality of Ready Player One; the augmented reality of The Expanse; the more popular mixed reality of Avatar, Passengers, Prometheus, and The Hunger Games; and the incoming wave of speculation regarding brain-computer interfaces. These future, integrated, analytic data systems all share a need for multidimensional and multispectral 3D+ data capture as a base with layers of geospatial and activity-based intelligence at multiple scales—landscape, building, and human.

Likewise, the co-registration of this data yields interesting opportunities for a more robust computer vision and machine learning/artificial intelligence automation paradigm. Such a system also demands a much-needed step up in how we secure our digital data infrastructures and how we ethically access such an amalgamation of live-streaming and historical data.
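
As a concrete (if simplified) illustration of what co-registration involves, the sketch below maps points from a scanner-local frame into a shared project frame using a known rigid-body transform. In practice that transform would be estimated from surveyed control points or an algorithm such as ICP; the rotation, translation, and point values here are purely illustrative.

```python
import numpy as np

def co_register(points_local, rotation, translation):
    """Map Nx3 points from a scanner-local frame into a shared
    project frame using a known rigid-body transform."""
    return points_local @ rotation.T + translation

# Illustrative values only: a scan rotated 90 degrees about the
# vertical axis and offset 100 m east / 50 m north of the origin.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([100.0, 50.0, 0.0])

scan = np.array([[1.0, 0.0, 0.0],
                 [0.0, 2.0, 0.5]])
print(co_register(scan, R, t))
```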

Where previous years were spent lamenting a lack of processing power, a shortage of expertise, weak multimodal data co-registration mechanisms, or the “black box” nature of machine decision processes, recent progress has highlighted new challenges for a digitally twinned world. Cybersecurity, data privacy, and control issues, along with the need for cross-silo, interdisciplinary collaboration and improved business practices for data management, are now at the forefront.

As more industry and academic groups build out the base levels of a global digital twin, it is essential that we consider not just what the future 3D-mapped, ubiquitous sensor-driven, annotatable, and tracked multimodal “Internet of Everything” will look like in its assorted mixed reality visualization hardware, but also how and why such an integrated data schema needs to be constructed, accessed, and securely maintained.

A ubiquitous sensing paradigm and the inevitable data economy posited by the smart cities of the future rely on this same digital scaffold at their base. The spaceships and space colonies of the future will depend on real-time decisions made from ever-expanding, integrated, and authenticated intelligence platforms. To develop these types of automated operations, we must be able to map space, time, assets, life cycle, collaboration, financials, and more (7D+) on top of the 3D surroundings of our cities, buildings, and selves; otherwise, we won't be able to reproduce any of it as useful off-planet construction and maintenance at scale. But long before we get to that point, we must ensure that when (not if) these systems reach mainstream use, the right balance of international powers is involved in their development and security from the start. A living, 3D+ blueprint of the world and the movement of humans and objects through it is both a security asset and a threat.
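
To make the "7D+" idea a little more tangible, here is a minimal sketch of how one record in such a system might bundle space, time, asset identity, life-cycle state, collaboration, and financials. All field names and values are hypothetical placeholders, not a proposed standard.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class TwinObservation:
    """One hypothetical record in a multimodal digital twin."""
    latitude: float               # 3D space...
    longitude: float
    elevation_m: float
    timestamp: datetime           # ...plus time
    asset_id: str                 # which asset the record describes
    lifecycle_state: str          # e.g., "design", "operation", "retrofit"
    cost_usd: Optional[float] = None                        # financials
    contributors: List[str] = field(default_factory=list)   # collaboration
    source_sensor: str = "unknown"                           # provenance

obs = TwinObservation(
    latitude=43.7731, longitude=11.2560, elevation_m=55.0,
    timestamp=datetime(2019, 3, 1, 12, 0),
    asset_id="baptistery-door-north",
    lifecycle_state="operation",
    contributors=["survey-team-a"],
    source_sensor="terrestrial-lidar",
)
print(obs.asset_id, obs.lifecycle_state)
```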

  • This article is part of USGIF’s 2019 State & Future of GEOINT Report.

Where will these systems come from?

Just as so many other “emerging” technologies have actually been around for quite a while, the pieces of such a sci-fi-integrated visualization schema have lurked in the background for some time.

Ultimately, any arena that is considering how to map time and space is at the edges of the proposed unified theory of cyber-physical spatialization—exploration and survey geospatial intelligence (GEOINT) in the government; the architecture, engineering, and construction industries’ expanded use of building information modeling; the digital heritage community’s cultural heritage diagnostic efforts to digitize and annotate historical monuments and landscapes; the self-driving car industry’s labors to map and monitor roadways and vehicle context; and the augmented, virtual, and mixed reality industries and GIS communities that are increasingly a part of public awareness, education programs, and mainstream career paths. Even progress in the gaming industry to map digital realms or Hollywood special effects efforts to use real-world data captured via sensor instead of drafted digitally should all be considered relevant efforts to build a digital scaffold upon which to drape and access all of our other data streams in a grand system of systems.

Match those spatial visualizations with intensified science communication efforts toward analytic annotation layers, and, voilà, the pieces begin to take shape. As different arenas across industry collide and conspire toward applied use cases, more and more aggregate data will become visually and analytically entangled.

Everything, from the increased miniaturization and decreased cost of terrestrial LiDAR and multispectral data capture tools, public awareness of GIS, the rise of gaming engines capable of uniting interactive datasets, and increased interest in establishing automation policies for the future of work, to quantum computing simulation, indoor mapping, Wi-Fi mapping, and medical imaging and training devices, relates to the evolution of a unified digital twin. This singularity of sorts will be a constantly evolving digital representation of time and space that allows us to spatially record our lives on the landscape we inhabit and subsequently derive further analytics from the accumulated data of those lives lived.

Why put all the pieces together?

Technology should not be developed for technology’s sake.

Historically, technology has evolved and become ubiquitous practice because it served a need—even when that need was not immediately apparent to anyone but early adopters and creators. When the search engine was first introduced, the need to query a digital encyclopedia was infamously questioned. The trajectory of GPS from government to public use followed a similar quixotic pathway—and yet few among us would dare to head somewhere new today without the use of a mapping application. We are lost, both literally and figuratively, without our smartphones.  

An integrated, visual, spatiotemporal system of analytic, multimodal data yields even greater opportunities to chart and share the world around us for present and future use: when a field engineer in a disaster zone can receive automatic alerts to maintain an asset, be guided to the damaged area in augmented reality, and collaborate in virtual reality with additional experts; when the warfighter can automatically track changes projected directly onto the landscape for situational awareness; when a construction team can move through digitally annotated blueprints actively layered over their real world; when a teacher can access the relevant, authenticated strata of scientific and crowdsourced anecdotes layered onto each painting to answer the questions of curious school kids; when a real estate agent or engineer can query the building itself for its maintenance records; when your medical history is visually tied to your body; when a future descendant can tour the world and be prompted to take a photo at a certain spot because their great-great-grandparent stood in that exact spot decades ago; when these are all the same, ubiquitous system—then we will have the beginnings of a mechanism to record and assess our species over the “longue durée” of our assorted civilizations and derive even greater analysis from our aggregate.
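
Every one of these scenarios reduces to the same basic operation: retrieving authenticated annotations anchored near a position, filtered by the layer the user is entitled to see. The toy sketch below shows that operation with made-up coordinates and an equirectangular distance approximation; a real system would use a spatial index and proper geodesic math.

```python
import math

# Hypothetical annotations anchored to coordinates: (lat, lon, text, layer).
annotations = [
    (43.7731, 11.2560, "Maintenance record: bronze door restored 2018", "facility"),
    (43.7696, 11.2558, "Crowdsourced note: best photo angle at sunset", "public"),
]

def nearby(lat, lon, radius_m, layers):
    """Return annotations within radius_m of (lat, lon) in the given layers."""
    hits = []
    for a_lat, a_lon, text, layer in annotations:
        if layer not in layers:
            continue
        # Equirectangular approximation, adequate over short distances.
        x = math.radians(a_lon - lon) * math.cos(math.radians(lat))
        y = math.radians(a_lat - lat)
        dist = 6_371_000 * math.hypot(x, y)
        if dist <= radius_m:
            hits.append((round(dist), text))
    return hits

print(nearby(43.7730, 11.2561, radius_m=100, layers={"facility"}))
```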

In the long term, a unified ecosystem of multimodal data is a living, collaborative, multidimensional atlas of humanity: accessible online in 2D via our assorted smart devices, and viewable in 3D+ (hopefully holographic) form through the ubiquitous mixed reality systems on the horizon.

A spatial representation of everything can be used not just to preserve our brief existence and connect us constantly to the past as the ultimate historic archive, but also as training data for future levels of automation and optimization across society.

In the short term, this living, global, multidimensional digital twin is a tool for providing context to our activities, be they the maintenance and operations of a smart facility, the navigation of a smart city's labyrinth, travel on the automated highways to come, or fieldwork for research, reconnaissance, or disaster relief, on planet or off.

How will a global digital twin come into being?

Integrated, spatially visualized systems are an inevitable confluence, but they also represent a new challenge, one that will require mass collaboration and a significant reworking of how government, industry, and academia share data and build systems together. Still, enough puzzle pieces are on the proverbial table to get started if an applied use case can be chosen to focus concerted efforts.

Previous work by some of the authors focused on the use of cultural heritage monuments as test beds for the development of multimodal data visualization platforms, most notably the Florentine Baptistery of San Giovanni and the Duomo under the care of the Opera di Santa Maria del Fiore. Subsequent efforts have focused on critical infrastructure and secure facility operations and maintenance for the U.S. government as sandboxes for establishing best security practices for these future platforms. It is important that the construction of a working, ubiquitous digital twin of this nature be guided by the security concerns present at monuments and secure government assets, so that data security is part of the recipe from the start. But whether such a sandbox is best handled by government or industry is up for debate, given that both arenas can lay conflicting claim to cybersecurity supremacy.

As more and more elements are mapped together, it will be necessary for some part of the world's governments to take responsibility, not just for the future end system, but for the growing layers of building, street, and subterranean 3D mapping already in play. Aerial LiDAR at the landscape level has set a precedent for data collection and sharing mechanisms. But as more and more annotated 3D blueprints at the building level make their way into the public domain, a security mind-set is essential. A 3D archive of critical infrastructure, world monuments, or local housing is part education resource, part commercial driver, and part terrorist planning guide. The same 3D annotated version of a home that helps a real estate agent sell it could also enable a well-planned home invasion. Digitized highways and self-driving cars mean hackable training data at multiple levels. A digital twin of a secure facility can be optimized by spatially mapping its asset management system, but it can also be compromised far more severely if it is breached.
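
One way to keep that optimization/exposure trade-off manageable is to gate each stratum of the twin behind an explicit clearance level, so that a public viewer sees terrain and facades while only vetted roles reach interior blueprints or live asset systems. The sketch below is a deliberately simplified illustration; the layer names, roles, and clearance values are hypothetical.

```python
# Hypothetical clearance levels for digital-twin layers, least to most sensitive.
LAYER_CLEARANCE = {
    "terrain": 0,            # landscape-level data, openly shared
    "building_exterior": 1,  # facades already visible from the street
    "building_interior": 2,  # annotated floor plans
    "asset_management": 3,   # live maintenance and security systems
}

ROLE_CLEARANCE = {
    "public": 0,
    "real_estate_agent": 2,
    "facility_engineer": 3,
}

def can_view(role: str, layer: str) -> bool:
    """Allow access only when the role's clearance meets the layer's."""
    return ROLE_CLEARANCE.get(role, -1) >= LAYER_CLEARANCE[layer]

assert can_view("facility_engineer", "asset_management")
assert not can_view("public", "building_interior")
```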

What does this mean for the future of data privacy and security?

In a constantly replicated virtual version of the world, our physical movements would ideally be digitally live-mapped for best-case data extrapolation. But while it's one thing for our standing buildings to be represented and improved upon by their digital copies, what does an activity-based intelligence layer that tracks individuals and populations over time and cross-references their actions mean? Philosophers and statesmen have pondered such a surveillance state for millennia. But as we find ourselves not just on the cusp of, but already wading into, a technocratic variant of a temporally and spatially tracked society, what are we doing, and what can we do, to ensure citizens maintain individual rights and their data cannot be compromised and used for nefarious purposes? Given how often recent waves of technological progress have failed to address this before being implemented, it's important that such dialogue take place up front rather than ad hoc.

The return on investment from spatially layering data exceeds the security risk, but that risk cannot get lost in the shuffle or go unmonitored by government agencies, even when the data is publicly or privately collected. This raises the questions: Who ought to control the assorted levels of data and their interaction? Which arena will set their governance? Who will monitor compliance? How will the best version of a model, or a user contributing data, be authenticated? How will an individual's data in the system be controlled: by third parties as it is today, by the individual's aggregate self-sovereign identity, by a new regime of data bankers to come, or by government? The National Geospatial-Intelligence Agency is most likely to kick off handling this quagmire of data, but industry is not far behind and may produce something quicker and more accessible in an effort to create and control the data economy at all levels. Society is still struggling to answer these questions for 2D data; how will 3D+ and multispectral data confuse and exacerbate them?
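
On the narrower question of authenticating contributed models, one plausible building block is a detached digital signature: the contributor signs a hash of the model, and the platform verifies it against the contributor's public key before serving the data. The sketch below assumes the third-party Python cryptography package and a placeholder payload; it is an illustration of the idea, not a proposed scheme.

```python
# Assumes the third-party "cryptography" package (pip install cryptography).
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The contributor signs the hash of the model they upload.
contributor_key = Ed25519PrivateKey.generate()
model_bytes = b"...serialized 3D model..."        # placeholder payload
digest = hashlib.sha256(model_bytes).digest()
signature = contributor_key.sign(digest)

# The platform stores the contributor's public key and later checks that
# the model it serves is exactly what that contributor signed.
public_key = contributor_key.public_key()
try:
    public_key.verify(signature, digest)
    print("model authenticated: contents match the contributor's signature")
except InvalidSignature:
    print("rejected: model was altered or signed by someone else")
```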

Yet 3D data on our bridges and houses, the tracking of our movements via GPS, Wi-Fi, or health-monitoring devices, the thermal assessment of our bodies in public spaces, and the nature of our very genes are all being aggregated in one database or another as individual puzzle pieces and trends. We don't actually know how much is out there, already mapped, or how it's being used. But likely only 1 percent of what has been collected thus far has been connected. We need to agree on a place to start. We need to determine a baseline of what already exists and establish systems for aggregation, access, and security for the inevitable sync of the world's data before it's too late and someone nefarious does so first.

Though we are still struggling to make all of the base systems actively work (to turn machine learning into something more complex and yet understandable, to sync asset management systems into 3D building models, or to easily layer high-resolution building models into landscape-level imagery), we cannot and should not ignore the opportunity to get ahead of the incoming systems and ensure American and allied control of the inescapable, data-driven, user-centric future.
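
To ground one of those hurdles, layering a high-resolution building model into landscape-level imagery usually comes down to reprojecting the model's coordinates into the imagery's reference system. The sketch below assumes the third-party pyproj package; the EPSG codes and the Florence-area coordinates are illustrative.

```python
# Assumes the third-party "pyproj" package (pip install pyproj).
from pyproj import Transformer

# Illustrative setup: the building model is georeferenced in WGS84 lat/lon,
# while the landscape imagery is served in a UTM projection (zone 32N here).
to_imagery = Transformer.from_crs("EPSG:4326", "EPSG:32632", always_xy=True)

# One corner of the building model: (longitude, latitude, height in meters).
lon, lat, height = 11.2560, 43.7731, 55.0
easting, northing = to_imagery.transform(lon, lat)

print(f"model corner lands at E {easting:.1f} m, N {northing:.1f} m, "
      f"H {height:.1f} m in the imagery's projected frame")
```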

Too often, progress, innovation, and security are stunted by our inability to flexibly implement, act on, or build policy and better business practices around new technologies. We need to push the current paradigm of disruptive technology even further, to encompass how we handle global data strategy. For all that we may be able to estimate the shape, approximate nature, and base data layers of future data visualization and knowledge management systems, we cannot predict them or their repercussions in full. We must be ready for anything.
