Innovation – Trajectory Magazine
http://trajectorymagazine.com

The Significance of Falcon Heavy
http://trajectorymagazine.com/significance-falcon-heavy/ | Fri, 09 Feb 2018
SpaceX demonstrates deep space capabilities with world’s most powerful rocket

SpaceX’s Falcon Heavy rocket successfully launched from Kennedy Space Center in Cape Canaveral, Fla., Tuesday afternoon, careening into space with a cherry red convertible in tow. The historic test flight establishes Falcon Heavy as the most powerful operational rocket in the world and demonstrates SpaceX’s new ability to send heavy payloads to deep space.

A concert of 27 Merlin engines provides the rocket’s first stage with five million pounds of thrust at liftoff (twice that of its closest competitor) and the ability to carry 140,000-pound payloads to low Earth orbit. The rocket’s first payload, though, wasn’t a space-age machinery module or a cutting-edge satellite: it was Elon Musk’s personal Tesla Roadster, an entertaining (and perhaps frivolous) way to demonstrate Falcon Heavy’s capabilities while extending humanity’s imprint on the universe.

After shedding its boosters, the rocket’s upper stage carried the Tesla on an impressive six-hour “coast” without firing the engines as an experimental capability demonstration for the U.S. Air Force. The maneuver was followed by a third and final burn meant to send the car into a heliocentric orbit stretching out to Mars. While the burn was successful, it was more powerful than anticipated and instead pushed the upper stage closer to the dwarf planet Ceres, near the asteroid belt. There, the Tesla will float through space until it’s destroyed by debris or radiation—or picked up by an extraterrestrial life form.

The reusability of Falcon Heavy’s parts also contributes to the launch’s significance. Three minutes after launch, the rocket’s two outer boosters, which had already flown on previous Falcon 9 missions, detached mid-flight, falling back to Earth and simultaneously touching down on concrete landing zones.

The remaining center core booster was programmed to return to Earth for retrieval by an unmanned drone ship in the Atlantic Ocean. Unfortunately, only one of the core booster’s three engines fired for the landing burn. The rocket was unable to slow its descent and crashed in the water about 100 meters away, damaging the drone ship. Future missions will focus on retrieval of all three boosters. By recycling these components, SpaceX aims to accelerate rocket turnaround and vastly reduce launch costs. Falcon Heavy carries a price tag of $90 million, a bargain compared to the $422 million currently charged by United Launch Alliance.

Falcon Heavy’s first mission was one of the most anticipated rocket launches of the last decade, and one that suffered years of delays throughout its development. SpaceX first announced plans for the vehicle at a National Press Club conference in 2011, initially targeting a 2013 or 2014 launch. Engineering changes and failures with the partially reusable Falcon 9 boosters forced the company to postpone. Later, launch pad hardware changes caused further delays.

Now that Falcon Heavy’s maiden voyage has confirmed its mission readiness, commercial customers can feel confident leasing spots on the vehicle for flights as early as this year. The Verge reports the rocket has already been booked to launch communications satellites in 2018 for companies such as Inmarsat, Viasat, and Arabsat. This summer, SpaceX will fly a test payload on Falcon Heavy for the U.S. Air Force so the service can certify the vehicle for national security missions.

Photo Credit: SpaceX

GEOINT at Platform Scale
http://trajectorymagazine.com/geoint-platform-scale/ | Thu, 01 Feb 2018
Transforming GEOINT organizations from pipes to platforms

Today’s networked platforms are able to achieve massive success by simply connecting producers and consumers. Uber doesn’t own cars, but runs the world’s largest transportation business. Facebook is the largest content company, but doesn’t create content. Airbnb has more rooms available to its users than any hotel company, but doesn’t even own any property.

In his book, “Platform Scale: How an Emerging Business Model Helps Startups Build Large Empires with Minimum Investment,” Sangeet Paul Choudary describes how these companies have built two-sided markets that enable them to have an outsized impact on the world. He contrasts the traditional “pipe” model of production, within which internal labor and resources are organized around controlled processes, against the “platform” model, within which action is coordinated among a vast ecosystem of players. Pipe organizations focus on delivery to the consumer, optimizing every step in the process to create a single “product,” using hierarchy and gatekeepers to ensure quality control. A platform allows for alignment of incentives of producers and consumers, vastly increasing the products created and then allowing quality control through curation and reputation management. In this model, people still play the major role in creating content and completing tasks, but the traditional roles between producer and consumer become blurred and self-reinforcing.

A Platform Approach for Geospatial Intelligence

So, where does the geospatial world fit into this “platform” framework? Geospatial intelligence, also known as GEOINT, means the exploitation and analysis of imagery and geospatial information to describe, assess, and visually depict physical features and geographically referenced activities on Earth. In most countries, there is either a full government agency or at least large, dedicated groups who are the primary owners of the GEOINT process and results. Most of the results they create are still produced in a “pipe” model. The final product of most GEOINT work is a report that encapsulates all the insight into an easy-to-digest image with annotation. The whole production process is oriented toward the creation of these reports, with an impressive array of technology behind it, optimized to continually transform raw data into true insight. There is the sourcing, production, and operation of assets used to gather raw geospatial signal, and the prioritization and timely delivery of those assets. Then, there are the systems to store raw data and make it available to users, and the teams of analysts and the myriad tools they use to process raw data and extract intelligence. This whole pipe of intelligence production has evolved to provide reliable GEOINT, with a growing array of incredible inputs.

These new inputs, however, start to show the limits of the pipe model, as new sources of raw geospatial information no longer come only from inside the GEOINT Community, but from all over the world. The rate at which new sources appear puts stress on the traditional model of incorporating new data sources. Establishing authoritative trust in an open input such as OpenStreetMap is difficult since anyone in the world can edit the map. The sheer volume of information from new systems such as constellations of small satellites also strains the pipe production method. Combining these prolific data volumes with other potential sources of intelligence—geo-tagged photos on social media, raw telemetry from cell phones—while coordinating resources to continually find the best raw geospatial information and turn it into valuable GEOINT becomes overwhelming for analysts working in traditional ways.

The key to breaking away from a traditional pipe model in favor of adopting platform thinking is to stop trying to organize resources and labor around controlled processes and instead organize ecosystem resources and labor through a centralized platform that facilitates interactions among all users. This means letting go of the binary between those who create GEOINT products and those who consume them. Every operator in the field, policy analyst, and decision-maker has the potential to add value to the GEOINT production process as they interact with GEOINT data and products—sharing, providing feedback, combining with other sources, or augmenting with their individual context and insight.

Transforming GEOINT Organizations from Pipes to Platforms

The GEOINT organizations of the world are well positioned to shift their orientation from the pipe production of polished reports to providing much larger value to the greater community of users and collaborators by becoming the platform for all GEOINT interaction. Reimagining our primary GEOINT organizations as platforms means framing them as connectors rather than producers. Geospatial information naturally has many different uses for many people, so producing finished end products has the potential side effect of narrowing that use. In a traditional pipe model, the process and results become shaped toward the audiences consuming them and the questions they care about, limiting the realized value of costly assets.

Becoming the central node providing a platform that embraces and enhances the avalanche of information will be critical to ensuring a competitive and tactical advantage in a world where myriad GEOINT sources and reports are openly available. The platform will enable analysts to access and exploit data ahead of our competitors, and enable operators and end users to contribute unique insights instead of being passive consumers. The rest of this article explores in depth what an organization’s shift from pipe production toward a platform would actually look like.

Rethinking GEOINT Repositories

A GEOINT platform must allow all users in the community to discover, use, contribute, synthesize, amend, and share GEOINT data, products, and services. This platform should connect consumers of GEOINT data products and services to other consumers, consumers to producers, producers to other producers, and everyone to the larger ecosystem of raw data, services, and computational processes (e.g., artificial intelligence, machine learning, etc.). The platform envisioned provides the filtering and curation functionality by leveraging the interactions of all users instead of trying to first coordinate and then certify everything that goes through the pipe.

Trust is created through reputation and curation. Airbnb creates enough trust for people to let strangers into their homes because reputation is well established by linking to social media profiles and conducting additional verification of driver’s licenses to confirm identity, and then having both sides rate each interaction. Trust is also created through the continuous automated validation, verification, and overall “scrubbing” of the data, searching for inconsistencies that may have been inserted by humans or machines. Credit card companies do this on a continuous, real-time basis in order to combat the massive onslaught of fraudsters and transnational organized crime groups seeking to siphon funds. Trust is also generated by automated deep learning processes that have been broadly trained by expert users who create data and suggest answers in a transparent, auditable, and retrainable fashion. This is perhaps the least mature, though most promising, future opportunity for generating trust. In such a future GEOINT platform, all three of these trust mechanisms (reputation and curation; automated validation, verification, and scrubbing; and expert-trained deep learning) should be harnessed together in a self-reinforcing manner.
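
To make the idea of harnessing these mechanisms together concrete, here is a minimal sketch of how a platform might blend the three trust signals into a single score. Everything in it—the field names, the 0-to-1 scaling, and the weights—is an illustrative assumption, not a description of an existing system.

```python
from dataclasses import dataclass

@dataclass
class TrustSignals:
    """Hypothetical per-item trust inputs, each normalized to the range 0..1."""
    reputation: float        # curation signal: ratings weighted by curator reputation
    validation: float        # automated validation/verification/scrubbing pass rate
    model_confidence: float  # confidence from expert-trained deep learning models

def combined_trust(signals: TrustSignals,
                   weights=(0.4, 0.35, 0.25)) -> float:
    """Blend the three trust mechanisms into a single 0..1 score.

    The weights are illustrative; a real platform would tune or learn them
    so the mechanisms reinforce one another as described above.
    """
    w_rep, w_val, w_model = weights
    score = (w_rep * signals.reputation
             + w_val * signals.validation
             + w_model * signals.model_confidence)
    return max(0.0, min(1.0, score))

# Example: a crowd-sourced feature with strong automated checks but few curators yet.
print(combined_trust(TrustSignals(reputation=0.3, validation=0.9, model_confidence=0.7)))
```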

Most repositories of the raw data that contributes to GEOINT products attempt to establish trust and authority before data comes into the repository, governed by individuals deeply researching each source. The platform approach embraces as much input data as possible and shifts trust and authority to a fluid process established by users and producers on the platform, creating governance through metrics of usage and reputation. These repositories are the places on which we should focus platform thinking. Instead of treating each repository as just the “source” of data, repositories should become the key coordination mechanism. People searching for data that is not in the repository should trigger a signal to gather the missing information. And the usage metrics of information stored in the repository should similarly drive actions. Users of the platform, like operators in the field, should be able to pull raw information, easily produce their own GEOINT data and insights, and then contribute those back to the same repository used by analysts. A rethinking of repositories should include how they can coordinate action to create both the raw information and refined GEOINT products that users and other producers desire.

  • This article is part of USGIF’s 2018 State & Future of GEOINT Report. Download the PDF to view the report in its entirety and to read this article with citations. 

Core Value Units

How would we design a platform built to create better GEOINT products? In “Platform Scale,” Choudary points out that one of the best ways to design a platform is to start with the “Core Value Unit,” and then figure out the key interactions that increase the production of that unit. For YouTube, videos are the core value unit; for Uber, it’s ride services; for Facebook, it’s posts and shares; and so on.

For GEOINT, we posit the core value unit is not simply a polished intelligence report, but any piece of raw imagery, processed imagery, geospatial data, information, or insight—including that finished report. For the purposes of this article, we’ll refer to this as the “Core Value Unit of GEOINT (CVU-GEOINT).” It includes any annotation that a user makes, any comment on an image or an object in an image, any object or trend identified by a human or algorithm, and any source data from inside the community or the larger outside world. It is important to represent every piece of information in the platform, even those pieces that come from outside with questionable provenance. Trusted actors with reputations on the platform will be able to “certify” the CVU-GEOINT within the platform. Or they may decide it is not trustworthy, but will still use it in its appropriate context along with other trusted sources. Many CVU-GEOINTs may be remixes or reprocessings of other CVUs, but the key is to track all actions and data on the platform so a user may follow a new CVU-GEOINT back to its primary sources.
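
A minimal sketch of how a CVU-GEOINT record and its provenance links might be represented follows, so that any product can be walked back to its primary sources. The field names, the in-memory store, and the assumption that provenance forms a directed acyclic graph are illustrative choices, not features of any real platform.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class CVUGeoint:
    """One Core Value Unit of GEOINT: raw imagery, an annotation, a report, etc."""
    cvu_id: str
    kind: str                      # e.g., "raw_image", "annotation", "report"
    creator: str                   # analyst, operator, or algorithm that produced it
    parents: List[str] = field(default_factory=list)       # CVUs this one was derived from
    certified_by: List[str] = field(default_factory=list)  # trusted actors who vouch for it

def trace_sources(store: Dict[str, CVUGeoint], cvu_id: str) -> List[str]:
    """Walk the provenance links back to primary sources (CVUs with no parents)."""
    cvu = store[cvu_id]
    if not cvu.parents:
        return [cvu_id]
    sources: List[str] = []
    for parent_id in cvu.parents:
        sources.extend(trace_sources(store, parent_id))
    return sources

# Example: a report remixing an annotation that was drawn on a raw image.
store = {
    "img-1": CVUGeoint("img-1", "raw_image", "smallsat-feed"),
    "ann-7": CVUGeoint("ann-7", "annotation", "analyst-a", parents=["img-1"]),
    "rpt-3": CVUGeoint("rpt-3", "report", "analyst-b", parents=["ann-7"]),
}
print(trace_sources(store, "rpt-3"))  # -> ['img-1']
```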

Maximizing Core Value Units of GEOINT

It is essential that as much raw data as possible be available within the platform, both trusted and untrusted. The platform must be designed to handle the tsunami of information, enabling immediate filtering after content is posted to the platform, not before. Sources should be marked as trusted or untrusted, but it should be up to users to decide if they want to pull some “untrusted” information, and then, for example, certify as trusted the resulting CVU-GEOINT because they cross-referenced four other untrusted sources and two trusted sources that didn’t have the full picture. Open data sources such as OpenStreetMap, imagery from consumer drones, cell phone photos, and more should be available on the platform. The platform would not necessarily replicate all the data, but it would reference it and enable exploitation. These open data sources should be available to the full community of users, as the more people that use the platform, the more signal the platform gets on the utility and usefulness of its information, and, subsequently, more experts can easily analyze the data and certify it as trusted or untrusted.

It should be simple to create additional information and insight on the platform, where the new annotation, comment, or traced vector on top of some raw data becomes itself a CVU-GEOINT that another user can similarly leverage. An essential ingredient to enable this is to increase the “channels” of the platform, enabling users and developers in diverse environments to easily consume information and also contribute back. This includes standards-based application programming interfaces (APIs) that applications can be built upon and simple web graphical user interface (GUI) tools that are accessible to anyone, not just experts. It would also be important to prioritize integration with the workflows and tool sets that are currently the most popular among analysts. The “contribution back” would include users actively making new processed data, quick annotations, and insights. But passive contribution is equally important—every user contributes as they use the data, since the use of data is a signal of it being useful, and including it as a source in a trusted report is also an indication of trust. The platform must work with all the security protocols in place, so signal of use in secure systems doesn’t leak out to everyone, but the security constraints do not mean the core interactions should be designed differently.

Filtering Data for Meaning

Putting all the raw information on the platform does risk overwhelming users, which is why there must be complementary investment in filters. Platforms such as YouTube, Facebook, and Instagram work because users get information filtered and prioritized in a sensible way. Users don’t have to conduct extensive searches to find relevant content—they access the platform and get a filtered view of a reasonable set of information. And then they can perform their own searches to find more information. A similar GEOINT platform needs to provide each user with the information relevant to them and be able to determine that relevance with minimal user input. It can start with the most used data in the user’s organization or team, or the most recent in areas of interest, but then should learn based on what a user interacts with and uses. Recommendation engines that perform deep mining of usage and profile data will help enhance the experience so that all the different users of the platform—operators in the field, mission planning specialists, expert analysts, decision-makers, and more—will have different views that are relevant to them. Users should not have to know what to search for; they should simply receive recommendations based on their identity, team, and usage patterns as they find value in the platform.
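
As one hypothetical way to express that filtering logic, the sketch below scores an item from team usage, freshness, areas of interest, and the user’s own history. The specific fields, decay constant, and weights are assumptions picked for readability.

```python
import math
import time

def relevance_score(item, user, now=None):
    """Score one CVU-GEOINT for one user; higher means more relevant.

    `item` and `user` are plain dicts with illustrative fields:
      item: {"id": str, "team_uses": int, "created": epoch secs, "region": str}
      user: {"regions_of_interest": set, "recently_used_ids": set}
    """
    now = now or time.time()
    age_days = (now - item["created"]) / 86400.0

    usage = math.log1p(item["team_uses"])      # diminishing returns on raw usage counts
    freshness = math.exp(-age_days / 30.0)     # newer items decay on a ~30-day scale
    in_aoi = 1.0 if item["region"] in user["regions_of_interest"] else 0.0
    familiar = 0.5 if item["id"] in user["recently_used_ids"] else 0.0

    return usage + 2.0 * freshness + 3.0 * in_aoi + familiar

# Example: a fresh item in the user's area of interest ranks ahead of an old, popular one.
user = {"regions_of_interest": {"port-x"}, "recently_used_ids": set()}
new_item = {"id": "a", "team_uses": 2, "created": time.time() - 86400, "region": "port-x"}
old_item = {"id": "b", "team_uses": 200, "created": time.time() - 90 * 86400, "region": "port-y"}
print(relevance_score(new_item, user) > relevance_score(old_item, user))  # True
```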

The other key to great filtering is tracking the provenance of every CVU-GEOINT on the platform so any derived information or insight also contains links to the information it was derived from. Any end product should link back to every bit of source information that went into it, and any user should be able to quickly survey a product’s full data pedigree. Provenance tracking could employ new blockchain technologies, but decentralized tracking is likely not needed initially when all information is at least represented on a centralized platform.

Building readily available source information into the platform will enable more granular degrees of trust; the most trusted GEOINT should come from the certified data sources, with multiple trusted individuals blessing it in their usage. And having the lineage visible will also make usage metrics much more meaningful—only a handful of analysts may access raw data, but if their work is widely used, then the source asset should rise to the top of most filters because the information extracted from it is of great value. If this mechanism is designed properly, the exquisite data would naturally rise to the surface, above the vast sea of data that still remains accessible to anyone on the platform.
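
The idea that widely used derived products should lift their source assets in the rankings can be sketched as a simple propagation of usage credit back through the provenance graph. The graph encoding and the decay factor below are illustrative assumptions.

```python
from collections import defaultdict
from typing import Dict, List

def propagate_usage(parents: Dict[str, List[str]],
                    direct_uses: Dict[str, int],
                    decay: float = 0.8) -> Dict[str, float]:
    """Credit each CVU with its own uses plus decayed credit from everything derived from it.

    `parents` maps a CVU id to the ids it was derived from (assumed to form a DAG).
    """
    score: Dict[str, float] = defaultdict(float)

    def credit(cvu_id: str, amount: float) -> None:
        score[cvu_id] += amount
        for parent_id in parents.get(cvu_id, []):
            credit(parent_id, amount * decay)   # ancestors earn a share of downstream value

    for cvu_id, uses in direct_uses.items():
        credit(cvu_id, float(uses))
    return dict(score)

# A raw image accessed only 3 times scores far above its direct usage
# once the widely used report derived from it is taken into account.
parents = {"rpt-3": ["ann-7"], "ann-7": ["img-1"]}
uses = {"img-1": 3, "ann-7": 5, "rpt-3": 40}
print(propagate_usage(parents, uses))
```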

It is important to note that such a platform strategy would also pay dividends when it is the divergent minority opinion or insight that holds the truth, or happens to anticipate important events. The same trust mechanisms that rigorously account for lineage will help the heretical analyst make his or her case when competing for the attention of analytical, operational, and policy-making leadership.

The Role of Analysts in a Platform World

To bootstrap the filtration system, the most important thing is to leverage the expert analysts who are already part of the system. This platform would not be a replacement for analysts; on the contrary, the platform only works if the analysts are expert users and the key producers of CVU-GEOINT. Any attempt to transform from the pipe model of production to a platform must start with analysts as the first focus, enabling their workflows to exist fully within a platform. Once their output seamlessly becomes part of the platform, then any user could easily “subscribe” to an analyst or a team of analysts focused on an area. The existing consumers of polished GEOINT products would no longer need to receive a finished report in their inbox that is geared exactly to their intelligence problem. Instead, they will be able to subscribe to filtered, trusted, polished CVU-GEOINT as it is, configuring notifications to alert them of new content and interacting with the system to prioritize the gathering and refinement of additional geospatial intelligence.
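
A subscription to an analyst or team might be matched against newly published CVU-GEOINT with logic along these lines; the subscription fields and trust threshold are hypothetical.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Subscription:
    """A hypothetical subscription: follow analysts or teams within regions of interest."""
    subscriber: str
    analysts: List[str]          # analysts or teams being followed
    regions: List[str]           # areas of interest
    min_trust: float = 0.7       # only notify on sufficiently trusted CVU-GEOINT

def notify(subscriptions, cvu):
    """Return subscribers to alert when a new CVU-GEOINT is published.

    `cvu` is a dict with illustrative fields: creator, region, trust (0..1).
    """
    return [s.subscriber for s in subscriptions
            if cvu["creator"] in s.analysts
            and cvu["region"] in s.regions
            and cvu["trust"] >= s.min_trust]

subs = [Subscription("operator-17", ["analyst-a", "team-pacific"], ["port-x"], 0.6)]
print(notify(subs, {"creator": "analyst-a", "region": "port-x", "trust": 0.82}))
```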

The consumption of GEOINT data, products, and services should be self-service, because all produced intelligence, along with the source information that went into it, can be found on the platform. Operators would not need to wait for the finished report; they could just pull the raw information from the platform and filter for available analyst GEOINT reports. Thus analysts shift to the position of the “curators” of information instead of having exclusive access to key information. But this would not diminish their role—analysts would still be the ones to endow data with trust. Trust would be a fluid property of the system, but could only be given by those with the expert analyst background. This shift should help analysts and operators be better equipped to handle the growing tsunami of data by letting each focus on the area they are expert in and allowing them to leverage a network of trusted analysts.

The other substantial benefit of a platform approach is the ability to integrate new data products and services using machine learning and artificial intelligence-based models. These new models and algorithms hold the promise of better handling the vast amounts of data being generated today, but also risk overwhelming the community with too much information. In the platform model, the algorithms would both consume and output CVU-GEOINT, tracking provenance and trust in the same environment as the analysts. Tracking all algorithmic output as CVU-GEOINT would enable analysts to properly filter the algorithms for high-quality inputs. And the analyst-produced CVU-GEOINT would in turn be input for other automated deep learning models. But deep learning results are only as good as their input, so the trusted production and curation of expert analysts becomes even more important in the platform-enabled, artificial intelligence-enhanced world that is fast approaching. The resulting analytics would never replace an analyst, as they wouldn’t have full context or decision-making abilities, but their output could help analysts prioritize and point their attention in the right direction.

Recommendations for GEOINT Organizations

Reimagining GEOINT organizations as platforms means thinking of their roles as “trusted matchmakers” rather than producers. This does not mean such agencies should abdicate their responsibilities as procurers of source data. But, as a platform, they should connect those with data and intelligence needs to those who produce data. And this matchmaking should be data-driven, with automated filters created from usage and needs. Indeed, the matchmaking should extend all the way to prioritizing collections, but in a fully automated way driven by the data needs extracted from the system.

A GEOINT organization looking to embrace platform thinking should bring as much raw data as possible into the system, and then measure usage to prioritize future acquisitions. It should enable the connection of its users with the sources of information, facilitating that connection even when the utility to the users inside the agency is not clear.

  • Be the platform for GEOINT, not the largest producer of GEOINT, and enable the interaction of diverse producers and consumers inside the agency with the larger intelligence and defense communities and with the world.
  • Supply raw data to everyone. Finished products should let anyone get to the source.
  • Govern by automated metrics and reputation management, bring all data into the platform, and enable governance as a property of the system rather than acting as the gatekeeper.
  • Create curation and reputation systems that put analysts and operators at the center, generating the most valuable GEOINT on a platform where all can create content. Enable filters that surface the best information from top analysts and data sources by remaking expert analysts as curators for the ecosystem rather than producers for an information factory.

The vast amounts of openly available geospatial data and the accelerating availability of advanced analytics threaten to overwhelm traditional GEOINT organizations that have fully optimized their “pipe” model of production. Indeed, there is real risk of top agencies losing their traditional competitive advantage when so much new data can be mined with deep learning by anybody in the world. Only by embracing platform thinking will organizations be able to remain relevant and stay ahead of adversaries, and not end up like the taxi industry in the age of Uber. There is a huge opportunity to better serve the wider national security community by connecting the world of producers and consumers instead of focusing on polished reports for a small audience. The GEOINT organization as a platform would flexibly serve far more users across a wider variety of organizations, making geospatial insight a part of everyday life for everyone.

Weekly GEOINT Community News
http://trajectorymagazine.com/weekly-geoint-community-news-40/ | Mon, 22 Jan 2018
Several USGIF Members Make Reuters Global Tech Leaders List; USAF Launches New SBIRS Satellite; Northern Sky Research Publishes Updated UAS Report; 2017 A Success for Commercial Space Investment

Several USGIF Members Make Reuters Global Tech Leaders List

Reuters released its Top 100 Global Tech Leaders list, identifying industry leaders positioned to drive future innovation in the technology sector. Several USGIF Member Organizations made the list, including: Accenture Federal Services, Adobe, Amazon Web Services, CA Technologies, Cisco, DXC Technology, Hewlett Packard Enterprise, HP, IBM, Leidos, ManTech International, Micron Technology, NVIDIA, Oracle, and SAP National Security Solutions.

USAF Launches New SBIRS Satellite

The U.S. Air Force launched a Space Based Infrared System (SBIRS) Geosynchronous Earth Orbit Satellite to act as part of its global missile warning constellation. The satellite is equipped with powerful scanning infrared sensors and is supported by a sophisticated SBIRS ground control system to turn captured data into actionable information. Over the weekend, the satellite, named SBIRS GEO Flight-4 and built by Lockheed Martin, began transitioning to its final orbiting location 22,000 miles above Earth, where it will deploy its antennae, light shade, and solar arrays for further testing.

Northern Sky Research Publishes Updated UAS Report

Northern Sky Research this month published the fourth edition of its Unmanned Aircraft Systems Satcom and Imaging Markets report. This study offers a comprehensive analysis of the satcom market for satellite operators and geospatial imaging companies. New in this edition is coverage of Chinese, Middle Eastern, and Russian systems, assessment of commercial UAS strategies, and forecasts for high and medium altitude long endurance systems.

2017 A Success for Commercial Space Investment 

A report from Space Angels investment firm revealed commercial space companies received $3.9 billion from private investors in 2017. More than 120 firms made investments in space, marking a record year for the industry. The private space sector’s growth can be attributed to the rising reliability and cost-efficiency of commercial rockets. The report investigates the price of expanding mid-development government and commercial fleets to 500,000 kilograms of launch capacity. Results show that by 2025 it would cost $6.6 billion for government rockets and $4.2 billion for commercial rockets.

Peer Intel

Peraton hired former DHS and DoD official Reggie Brothers as executive vice president and chief technology officer. Brothers was most recently a principal at The Chertoff Group, and will oversee tech solutions, business development, and mergers and acquisitions at Peraton.

KeyW Corp. hired former ManTech International executive Dave Wallen as senior vice president of advanced cyber. Wallen built ManTech’s cyber operations, a business unit that recorded $175 million in sales in 2017. Wallen will focus on strategic growth and cyber teams at KeyW.

Photo Credit: NASA

The Tipping Point for 3D
http://trajectorymagazine.com/tipping-point-3d/ | Fri, 12 Jan 2018
The ability to fully harness 3D data is rooted in acquisition and scalability

The application of location intelligence and the incorporation of 2D maps and positioning have become ubiquitous since the advent of smartphones. Now, we are entering a new era in which we can harness the power of 3D data to improve consumer experiences, as well as applications for enterprise, public safety, homeland security, and urban planning. 3D will play a more significant role in these experiences as we overcome the technical barriers that have made it previously difficult and cost-prohibitive to acquire, visualize, simulate, and apply to real-world applications.

Outdoor Data: Our World Is 3D

In a geo-enabled community in which we strive for more precise answers to complex spatial intelligence questions, traditional 2D correlation is a limiting factor. When you think about 3D data and maps, modeling buildings in an urban environment seems obvious. However, 3D is incredibly important when trying to understand the exact height of sea level or the uniformity of roads and runways. For example, one can imagine the vast difference between 2D and 3D data when modeling storm surge and flooding during the 2017 hurricane season. By including the Z-dimension in analysis, we can achieve true, precise geospatial context for all datasets and enable the next generation of data analytics and applications.

Access to 3D data for most geospatial analysts has been limited. Legacy 3D data from Light Detection and Ranging (LiDAR) and synthetic aperture radar sensors has traditionally required specialized exploitation software, and point-by-point stereo-extraction techniques for generating 3D data are time-consuming, often making legacy 3D data cost-prohibitive. Both products cost hundreds to thousands of dollars per square kilometer and involve weeks of production time. Fortunately, new solutions provide a scalable and affordable 3D environment that can be accessed online as a web service or offline for disconnected users. Users can stream, visualize, and exploit 3D information from any desktop and many mobile devices. Models of Earth’s terrain—digital elevation models (DEMs)—are increasingly used to improve the accuracy of satellite imagery. Although viewed on a 2D monitor, DEMs deliver the magic through a true 3D likeness for bare-earth terrain, objects like buildings and trees, contours, or floor models, and unlimited contextual information can be applied to each measurement. This provides a true 3D capability, replacing current “2.5D” applications that aim to create 3D models out of 2D information, at a cost point closer to $10 to $20 per square kilometer and only hours of production time.
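
As a small, hedged illustration of putting the Z-dimension to work, the sketch below samples terrain heights from a DEM raster at a few coordinates. It assumes the open-source rasterio library, a hypothetical file name, and query points expressed in the DEM’s own coordinate reference system.

```python
import rasterio  # open-source raster I/O library (assumed available)

def elevations(dem_path, coords):
    """Return the DEM elevation under each (x, y) point.

    `coords` is a list of (x, y) pairs expressed in the DEM's own CRS,
    e.g. (longitude, latitude) for a geographic DEM.
    """
    with rasterio.open(dem_path) as dem:
        # dataset.sample() yields one array of band values per input point
        return [float(vals[0]) for vals in dem.sample(coords)]

# Hypothetical usage: terrain height under two points of interest.
print(elevations("srtm_tile.tif", [(-90.199, 38.627), (-90.205, 38.635)]))
```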

Indoor Data: Scalable 3D Acquisition

As 3D becomes more common in outdoor applications, its use for indoor location is being explored. Unsurprisingly, similar challenges need to be overcome. Until now, real-time indoor location intelligence has been difficult to achieve. This is largely due to the absence of, or difficulty in obtaining, real-time maps and positioning data to form the foundation for the insights businesses can derive about their spaces.

Image courtesy of InnerSpace.

To create a 3D model of a room, businesses stitch together blueprints, photos, laser scans, measurements, and hand-drawn diagrams. Once the maps and models are in hand, operators must create location and positioning infrastructures that accurately track people and things within the space. Operators need to position these sensors—typically beacons—precisely according to the building map, but the beacons have no knowledge of the map and cannot self-regulate to reflect any changes to the environment. If the beacon is moved, its accuracy is degraded and its correlation to the map breaks down. Overall, this process is lengthy, cost-prohibitive, and fraught with error.

Using current methodology, professional services teams work with a wide variety of tools—LiDAR trolleys, beacons, 3D cameras, and existing architectural drawings—to compose an accurate representation of a space. Additionally, the resulting system becomes difficult to maintain. Physical spaces are dynamic, and changes quickly render maps obsolete. Changes to technology used to create the models or track assets require ongoing management, and the process rapidly becomes overly complex. This complexity is stalling innovation across myriad industries, including training and simulation, public safety and security, and many consumer applications. While organizations and consumers can easily find data for outside locations using GPS, no equivalent data source exists for indoor location data.

Emerging location intelligence platforms leverage interconnected sensors that take advantage of the decreasing cost of off-the-shelf LiDAR components and the ubiquity of smartphones, wearables, and wireless infrastructure (WiFi, Bluetooth, ultra-wideband) to track people and assets. LiDAR remains an ideal technology for these sensors because its high fidelity is maintained regardless of lighting conditions and, unlike video, maintains citizen privacy. The result is a platform that is autonomous and scalable, and operates in real time to deliver 3D models and 2D maps while incorporating location and positioning within a single solution. These sensors are small and can be deployed on top of a building’s existing infrastructure—not dissimilar to a WiFi router.

The advantage of this approach is that the platform is able to capture the data needed to create 3D models of the indoor space while also understanding the sensors’ own positions relative to each other. Each sensor captures point clouds of the same space from different perspectives to create a composite point cloud that is automatically converted into a single structured model. This solves the two critical roadblocks that industry has faced when trying to acquire indoor data: Maps no longer need to be created/updated by professional services teams, and the maps and positioning data are always integrated and updated in real time.
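
The composite-point-cloud step might look roughly like the sketch below: each sensor’s local points are transformed into a shared building frame using that sensor’s estimated pose, then stacked into one cloud. The poses and NumPy representation are illustrative assumptions; a real system would also register, filter, and deduplicate the clouds.

```python
import numpy as np

def to_building_frame(points, rotation, translation):
    """Transform an (N, 3) array of sensor-local points into the shared building frame."""
    return points @ rotation.T + translation

def composite_cloud(sensor_clouds):
    """Merge per-sensor clouds given as (points, rotation, translation) tuples."""
    return np.vstack([to_building_frame(p, r, t) for p, r, t in sensor_clouds])

# Two sensors viewing the same room from different corners (poses are made up).
identity = np.eye(3)
cloud_a = (np.random.rand(100, 3), identity, np.array([0.0, 0.0, 2.5]))
cloud_b = (np.random.rand(100, 3), identity, np.array([4.0, 3.0, 2.5]))
merged = composite_cloud([cloud_a, cloud_b])
print(merged.shape)  # (200, 3)
```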

The sensors track electromagnetic signals from people and assets within the space. This approach respects citizen privacy by capturing a unique identifier rather than personal information. Algorithms eliminate redundant data (such as signals from computers or WiFi routers) when identifying humans within a space and model the traffic patterns and behaviors over time. The data includes the person’s or asset’s longitude and latitude coordinates, along with altitude—which, as more people live and work in high-rise buildings, is becoming increasingly necessary for emerging enhanced 911 requirements in the United States. The scalability and real-time nature of a platform-based approach results in a stream of data that can be used to drive a variety of applications, including wayfinding, evacuation planning, training and simulation scenarios, airport security, and more.
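
One possible (assumed) way to separate fixed infrastructure from people and assets, while respecting privacy, is to hash the hardware identifier and drop emitters whose observed positions barely change over time:

```python
import hashlib
import statistics

def anonymize(mac_address: str) -> str:
    """Replace a hardware identifier with a one-way hash so no personal data is stored."""
    return hashlib.sha256(mac_address.encode()).hexdigest()[:16]

def is_static_emitter(positions, movement_threshold_m: float = 1.0) -> bool:
    """Flag devices (routers, desktops) whose positions barely vary over time.

    `positions` is a list of (x, y, z) fixes in meters within the building frame.
    """
    if len(positions) < 2:
        return False
    spread = [statistics.pstdev(axis) for axis in zip(*positions)]
    return max(spread) < movement_threshold_m

# Keep only moving emitters (people and assets) for traffic-pattern modeling.
observations = {
    "aa:bb:cc:dd:ee:01": [(1.0, 2.0, 1.2), (4.5, 2.2, 1.2), (7.9, 3.0, 1.2)],
    "aa:bb:cc:dd:ee:02": [(0.5, 0.5, 2.4), (0.5, 0.6, 2.4), (0.6, 0.5, 2.4)],
}
tracked = {anonymize(mac): pos for mac, pos in observations.items()
           if not is_static_emitter(pos)}
print(list(tracked.keys()))  # only the moving device remains, under a hashed identifier
```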

Integrating Indoor and Outdoor 3D

Accurate correlation in four dimensions will drive the framework for future information transfer and corroboration. Fixed objects at a point in time must be properly located for all of their properties to be associated. This work is more challenging than it might appear. Many objects look very similar, and various sensors have differing levels of resolution and accuracy—bleed-over of attribution and misattribution of properties is possible. The better the 3D base layer, or foundation, the more likely all scene elements will be properly defined. Once objects move within the scene, the correlation of observables, initial position, and the changes to it often allow inference of intent or purpose.

Connecting data from outside to inside to deliver a seamless experience has yet to be solved, although there is progress. By capturing indoor 3D quickly and in real time, the opportunity to integrate it with outdoor 3D models is now possible. We expect the integration of 3D and its related positioning data will soon be ubiquitous regardless of where a person is located. In areas where data providers can work together, the same approach used by Google to track traffic could allow for the establishment of routes from outdoor to indoor and vice versa to evolve rapidly. Companies creating the 3D data are defining the standards, and, as more data becomes available, accessing information can be as easy as “get.location” for software developers creating outdoor navigation apps. A centralized database with established formats, standards, and access protocols is recommended to ensure that analysts and developers work with the same datasets and that decisions and insights are derived from the same foundation, no matter where stakeholders are located.
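
If indoor and outdoor providers were ever exposed behind a single interface, a developer-facing call might look something like the sketch below. Every provider, field, and fallback rule here is hypothetical; the point is only that one call could return latitude, longitude, and altitude plus indoor context when available.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Position:
    latitude: float
    longitude: float
    altitude_m: float                 # supports floor-level use cases such as enhanced 911
    venue_id: Optional[str] = None    # populated only when an indoor fix is available
    floor: Optional[int] = None
    source: str = "outdoor"           # "outdoor" (GNSS-like) or "indoor" (sensor platform)

def get_location(outdoor_provider, indoor_provider, device_id: str) -> Position:
    """Prefer an indoor fix when the device is inside a mapped venue; otherwise fall back."""
    indoor = indoor_provider.locate(device_id)     # hypothetical provider interface
    if indoor is not None:
        return Position(indoor["lat"], indoor["lon"], indoor["alt"],
                        venue_id=indoor["venue"], floor=indoor["floor"], source="indoor")
    outdoor = outdoor_provider.locate(device_id)
    return Position(outdoor["lat"], outdoor["lon"], outdoor["alt"])
```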

3D Accessibility for Success

As it becomes easier to quickly and cost-effectively create and integrate indoor and outdoor 3D data, managing how that information is stored and accessed will be the next opportunity for the geospatial community. In order for 3D to be truly valuable, it must be easily—if not immediately—accessible for today’s devices. Ensuring 3D can be captured in real time will drive the need to deliver it quickly and across a wider variety of applications. A smart compression and standardization strategy is critical to the portability of the information. As the use of 3D by consumers increases, there will be more natural demand for ready access from user devices, which will help streamline and optimize applications (as it has for 2D mapping over the last decade).

Applying 3D to the real world, in real time, provides:

  • Improved situational awareness to users from their own devices.
  • Seamless wayfinding from outdoors to indoors.
  • Exceptionally detailed and portable data for military/emergency planners and operators.
  • Readily available data and web access for first responders and non-governmental organizations.
  • Global GPS-denied navigation capability for mission critical systems (e.g., commercial flight avionics).
  • A globally accurate positioning grid immediately available for analysis.

Moving Forward

3D is ready to play a bigger role in how we experience the world. The manipulation of location in 3D should be as natural as controlling a video game console. As long as the GEOINT Community keeps in mind what has been learned from both 2D mobile mappers and gaming aficionados, the move into the Z-dimension should prove as easy as it is worthwhile. Moving forward, 3D data developers and users have an important role to play—to provide feedback on what “feels” natural, and what doesn’t. After all, that’s what reinserting the third dimension is all about.

Headline image: The Great Pyramid of Giza, from 3D compositing of DigitalGlobe imagery. Courtesy of Vricon.

Digitizing the Mind
http://trajectorymagazine.com/digitizing-the-mind/ | Fri, 08 Dec 2017
Merging biological and artificial intelligence

The human brain is a frustrating paradox for scientists and researchers. Solving the mysteries of the brain could help quantify abstract concepts such as language, memory, and imagination. Its neural activity could be modified to make people learn faster and think quicker. Better understanding the brain could also help develop treatments for neurodegenerative diseases and unlock the secrets of consciousness. But the organ’s complexity is baffling. Humans have barely begun to discover the potential that exists within the brain, and it may take decades before significant, applicable breakthroughs are made.

A forward-thinking population of the tech community is working toward those breakthroughs using neuroprosthetic devices and brain-sensor interfaces, essentially digitizing the electric signals that bounce among neurons as the brain operates.

A recent Wired article tells the story of entrepreneur Bryan Johnson, who is using neural wiring to capture the brain activity that triggers memories. Johnson owns an algorithm capable of recording and translating neural signals into code that can be enhanced or altered and sent back to the brain—like Photoshop for a person’s memory.

Though his focus on memory is unique, Johnson isn’t alone in his pursuit. Mark Zuckerberg and a team from Facebook are developing a non-invasive, speech-to-text interface using optical imaging to type words as the user thinks them. The federal government has taken an interest in this trend as well, and is on the front lines of research and development. The Defense Advanced Research Projects Agency’s Biological Technologies Office has created a direct neural interface for movement and sensation that is capable of restoring function for people who’ve lost the ability to feel or move a limb, for example.

The artificial intelligence (AI) community is now leveraging this type of research with the goal of harnessing the human brain’s propensity for logical and critical thinking, emotion, and creativity. SpaceX founder Elon Musk has invested heavily in Neuralink, a brain implant company hoping to improve brain function by using “neural lace” to merge human consciousness with software and AI. Google’s DeepMind initiative has created an artificial neural network that uses rational reasoning to make decisions and solve puzzles.

While the common trope pits man against machine, the future landscape of intelligence seems better poised for cooperation than competition.

Photo Credit: Shutterstock

The AI Arms Race
http://trajectorymagazine.com/ai-arms-race/ | Wed, 06 Dec 2017
NGA’s Dr. Anthony Vinci speaks at USGIF GEOINTeraction Tuesday

The United States, China, and Russia are “in an arms race for artificial intelligence” (AI), according to Dr. Anthony Vinci, director of plans and programs for the National Geospatial-Intelligence Agency (NGA).

Vinci discussed the importance of preparing for the future in front of a crowd of more than 100 people Nov. 14 at USGIF’s GEOINTeraction Tuesday event, hosted by OGSystems in Chantilly, Va.

Vinci pointed to recent reports in which Russian President Vladimir Putin claimed the leader in AI would be “the ruler of the world.” Meanwhile, he said, China is planning to turn AI into a $150 billion industry by 2030.

“There are these real threats, and they’re not necessarily just the ones we’ve grown used to since 9/11,” Vinci said, emphasizing the importance of dialogue about the future among NGA, the broader Intelligence Community (IC), industry, academia, and organizations such as USGIF.

“There’s a real possibility the U.S. could become second best—that we could lose some of these arms races,” he said. “We have all grown up in a world in which, by far, the U.S. was the dominant GEOINT capability, even before it was called GEOINT. We can’t even imagine a world in which we aren’t, but it’s a possibility, and we need to confront that possibility and ensure it doesn’t happen. We need to remain dominant.”

A graphic visualization of Dr. Vinci’s speech, produced by graphic recorders from OGSystems’ Visioneering team. Click to view full size. Credit: OGSystems

Vinci outlined emerging technologies with the potential to help the U.S. maintain intelligence dominance:

Commercial space: Vinci described the commercial space boom as a “complete game changer” as it opens up the use of new sensors. He also noted the rate at which the technology is advancing, citing a U.K. company that touts real-time, full-motion video from space.

The Internet of Things (IoT): The sheer numbers represent the significance of this technology, Vinci said. According to Gartner, there were 8.4 billion connected things last year, and that number is expected to reach 20.4 billion by 2020. By 2025, the IoT is expected to generate two trillion gigabytes of data per year. “Everything changes for us when we start to talk about that,” Vinci said.

Autonomous Vehicles: “What was once science fiction is now just a reality in our lives and within a few years will be everywhere—and not just in the U.S.,” Vinci said. He added that of particular interest to the GEOINT Community is the fact that all autonomous vehicles are sensor platforms equipped with cameras and LiDAR—and of course the maps that will be needed for the vehicles to navigate. “Commercial industry will go out and map urban areas and well-developed areas,” he said. “But they’re probably not going to map dirt roads in Helmand Province any time soon, and so we need to start thinking about those things.”

Vinci said it’s important the IC confront not just how it will take advantage of these new technologies, but also how adversaries might leverage them.

“We have to prepare for a world where a country like China might try to dominate AI, where terrorists have UAVs and other autonomous vehicles they can use for attacks, where Russia might use IoT devices or other things for spying in our country, and where lots of countries and even non-state actors have access to space,” Vinci said.

In some cases, he added, these things are already happening.

Vinci concluded with a quote from writer William Gibson: “The future is already here—it’s just not evenly distributed.”

Rochester’s Remote Sensing History
http://trajectorymagazine.com/rochesters-remote-sensing-history/ | Fri, 10 Nov 2017
Kodak’s long-secret intelligence legacy carries on at Harris Corp.

Not many people—particularly younger generations of geospatial intelligence professionals—are familiar with remote sensing’s roots in Rochester, N.Y.

Dr. Bob Fiete, chief technologist and fellow with Harris Corporation’s Space and Intelligence Systems business, recently gave a presentation on the city’s remote sensing history to students at Rochester Institute of Technology (RIT), and shared his research with trajectory.

One of the first aerial night photographs, captured by George W. Goddard over Rochester, N.Y., in 1925. Courtesy of Defense Visual Information Center.

Fewer still realize the legacy of Harris’ space business can be traced to Kodak—a longtime Rochester institution. Fiete just celebrated his 30th anniversary at Harris, but began his career with Kodak’s Remote Sensing Systems Group, which today is part of Harris Corporation.

Fiete began gathering information on the city’s remote sensing past shortly after the Harris acquisition, with the goal to introduce new colleagues who were experts in communications systems to the world and history of imaging systems.

Kodak first revolutionized reconnaissance with its invention of roll film in 1884. Eliminating the need for heavy glass plates allowed for the development of smaller, lighter cameras. This in turn allowed military operators to transition from taking cameras up in hot air balloons to sending them up on kites—with explosives to trigger the shutter. The first cameras were fastened to rockets as early as 1897, and in 1904 the gyroscopically stabilized camera, which is still used today in some satellites, was invented to reduce motion blurring.

Kodak’s K-1 camera, designed for use in WWI. Courtesy of Defense Visual Information Center.

1909 marked the first documented aerial photograph from an airplane, taken by Wilbur Wright, but the first use of airborne surveillance and reconnaissance occurred during World War I, when Kodak K-1 cameras were attached to the side of planes to map trench networks. During World War II, Kodak miniaturized the F24, built by Fairchild, to create the K-24 camera.

In 1957, when the Soviet Union launched Sputnik during the Cold War, the U.S. Intelligence Community (IC) began to ask, “If we put a camera on a satellite, how are we going to get the film back to Earth?” The CIA took on this challenge in 1960 with CORONA, the nation’s first photo reconnaissance satellite. CORONA was equipped with a film bucket that would return the film to Earth for capture via plane. Later CORONA satellites had two film buckets, with the first bucket typically recovered within a week of launch and the second recovered the following week, before the satellite was decommissioned.

Imagery analysts gave first assessments of CORONA’s film as it was developed at Kodak’s Hawkeye facility—known as “Bridgehead” in the IC for its location adjacent to a bridge over the Genesee River—which became the first imaging ground station for the National Reconnaissance Office (NRO). It wasn’t until Bridgehead and several NRO programs were declassified in 2011 that many Kodak employees could finally tell their family and friends they’d actually been working on intelligence programs all those years.

In the early 1960s, Kodak developed a satellite camera system for the SAMOS program that would develop the film onboard, then scan the images for electronic transmission back to Earth in order to get them in front of analysts’ eyes sooner. The program was short-lived because the system proved inadequate for reconnaissance, but the technology turned out to be just what NASA sought for its Lunar Orbiter camera in order to image the lunar landing sites for the Apollo program.

When NRO launched the GAMBIT 1 and 3 satellites in 1963 and 1966, respectively, Kodak for the first time supplied not just the film, but also the satellite’s cameras. While GAMBIT provided focused imagery at a higher resolution, the HEXAGON satellite was launched in 1971—using Kodak film but not a Kodak camera—to cover wider areas at lower resolution. Together, the satellites “became America’s eyes in space,” according to the NRO.

From 1965 to 1969, the U.S. pursued development of the DORIAN Manned Orbiting Laboratory—which aimed to place a manned surveillance satellite in orbit. Though the program was canceled in 1969 before a manned vehicle was ever launched, its spending totaled $1.56 billion. Having been tasked to build the DORIAN camera, the program resulted in a large optics manufacturing capability for Kodak, which would later enable the company to develop commercial satellite cameras as Harris still does today.

After President Clinton approved the sale of 1-meter resolution commercial satellite imagery in 1994, Lockheed Martin contracted Kodak to build the camera for the IKONOS satellite. In 1999, IKONOS produced the first commercial, high-resolution, color digital images from space. Since then, the commercial remote sensing industry has grown exponentially, with companies now permitted to sell imagery at 0.25-meter resolution. Harris imaging systems, building upon the Kodak legacy, now fly on most large commercial satellites, including all of DigitalGlobe’s WorldView systems and the first GOES-R advanced weather satellite launched by NASA and NOAA in 2016. The company is moving into the small satellite arena as well.

According to Fiete, Rochester is experiencing an imaging science renaissance, and Harris is fortunate to partner with the city’s many universities, including the Center for Imaging Science at RIT and the Institute of Optics as well as the Goergen Institute for Data Sciences at the University of Rochester.

Harris is currently pursuing research and development efforts such as how to build better space optics—or how to make camera systems lighter while maintaining high image quality. The company is also interested in accelerating development, as it typically takes three years from start to launch to construct an imaging system.

Another key initiative for the company is image chain modeling that allows a computer to simulate the process that creates the image—in other words, allows Harris to see the images a camera would take and understand their quality before the camera is even built.

Looking forward, Harris aims to build data analytics directly into its platforms, according to Fiete.

“The key today is people don’t just want a camera that provides an image,” he concluded. “They want a camera that gives them the necessary data for making informed decisions.”

Featured image: Kodak’s Hawkeye facility, which was known as “Bridgehead” in classified circles. Photo Credit: R.D. Sherwood and J. Sherwood

Geospatial Growth in St. Louis
http://trajectorymagazine.com/geospatial-growth-st-louis/ | Tue, 07 Nov 2017
USGIF convenes government, industry, academia, and community leaders for two days of discussion, technology demos, and networking in St. Louis

The United States Geospatial Intelligence Foundation (USGIF) hosted a series of events in St. Louis, Mo., Oct. 16-18, bringing together representatives from government, industry, and academia to discuss the many opportunities presented by the city’s growing GEOINT industry and the National Geospatial-Intelligence Agency’s (NGA) planned new campus in North St. Louis.

On Oct. 16, USGIF’s Young Professionals Group (YPG) hosted a Tech Talk titled “The Road to Sub-Meter Digital Elevation Modeled Surfaces” at T-REX—a nonprofit technology incubator in downtown St. Louis. Tom Creel, Ph.D., NGA’s SFN Technical Executive, and Scott J. Spaunhorst, chief of NGA’s geosciences division, discussed the future of high-resolution 3D terrain within foundation GEOINT. The talk was followed by a networking reception.

Data Centricity

The following day at T-REX, the Foundation hosted an unclassified Technology Cluster Forum, at which more than 230 attendees viewed several panel discussions, industry flash talks, and a keynote by Justin Poole, NGA’s new deputy director.

Poole said there would be many “promising opportunities for partnerships” with NGA in the coming months as the agency shifts toward a data-centric business model and continues to explore new analytic tradecraft such as high-performance computing and deep learning.

“We’re changing the way we bring in data, and we’re changing the way we use data,” Poole said. “Our traditional way of getting data—collecting pixels across a largely disconnected government system—will no longer meet the needs of our customer moving forward.”

He elaborated that the agency must move from pixels to services, and beyond tasking and collection to “brokering”—finding and acquiring content that agency customers need.

Poole also said the agency will continue to implement structured observation management and object-based production, while embracing high-performance computing and deep learning—all methods to ensure data is sharable, discoverable, and ingestible by analytic models. He also emphasized the need for “GEOINT assurance” to safeguard the integrity of pixels, data, algorithms, and the resulting artificial intelligence.
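
As a rough illustration of what “object-based” data can look like in practice, the hypothetical record below (not NGA’s actual schema; all field names are made up) packages an observed object with its geometry, time, source, and confidence so downstream analytic models can ingest it without re-parsing imagery.

    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class ObservationObject:
        """One observed object, carrying geometry, time, source, and provenance
        together so it is shareable and machine-ingestible."""
        object_id: str
        object_type: str              # e.g., "vehicle", "structure"
        lon: float
        lat: float
        observed_at: datetime
        source: str                   # sensor or collection identifier
        confidence: float             # analyst or model confidence, 0-1
        attributes: dict = field(default_factory=dict)

    obs = ObservationObject(
        object_id="obj-0001",
        object_type="vehicle",
        lon=-90.20,
        lat=38.63,
        observed_at=datetime(2017, 10, 17, 14, 30),
        source="commercial-eo",
        confidence=0.85,
    )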

NGA aims to automate 75 percent of its processes to free up analysts to conduct deeper analysis, for which desktop-ready capabilities will be needed to help rapidly visualize and integrate diverse data types.

Regional Innovation

The Technology Cluster Forum featured afternoon panels on St. Louis-based innovation as well as the city’s GEOINT career pipeline.

Patricia Hagen, Ph.D., president and executive director of T-REX, moderated the panel on St. Louis-based innovation. Hagen noted that today’s startup rate for new businesses is roughly half what it was in the 1980s, despite current tech startup trends.

“New job creation comes from new companies, so [entrepreneurship] is really important for cities. … St. Louis is succeeding around entrepreneurship and has been recognized nationally for these efforts,” she said.

Jeff Mazur, executive director of LaunchCode—a nonprofit founded in St. Louis that provides technology jobseekers with accessible education, training, and paid apprenticeship job placement—said the organization has recently moved into the geospatial marketplace and gained new partners such as NGA and Boundless.

LaunchCode is developing a new curriculum around geospatial skills and aims to place 170 people as developers at NGA in the next few years, Mazur said.

The intent is to provide “new people a new pathway to tech jobs” that are going to be available at the Next NGA West (N2W) facility.

Kenneth Olliff, vice president for research at Saint Louis University (SLU), said he was attracted to the university because of the potential that exists in the St. Louis region. Olliff now aims to make SLU the No. 1 Jesuit research university in the country.

“Geospatial is a big thing we’re looking at right now—assets across the entire university such as disaster management, human behavior, robot interaction,” Olliff said. “We want to pull the entire university together to be a one-stop shop for NGA and other industry partners working with geospatial; to mobilize the human capital we have and through partnerships make a real contribution to the region.”

Boundless CEO Andy Dearing, co-chair of USGIF’s St. Louis Area Working Group (SLAWG), moderated the discussion of the city’s career pipeline. Dearing encouraged attendees, especially those who live and work in the St. Louis area, to join the group, which brings together professionals from government, military, industry, academia, and the community to create lasting educational and community pathways to geospatial degrees, certifications, and careers in the St. Louis region.

The Next NGA West

Pollmann and Gum. Credit: NGA

The unclassified day concluded with an update from NGA’s N2W Program Director Sue Pollmann and Scott Gum, the agency’s assistant program manager for information technology.

Pollmann said site clearing in North St. Louis is going well and that her team’s current focus is compiling design-build performance requirements into a request for proposals (RFP). The agency has already down-selected to three design-build teams, who will receive the RFP in January with a final contract award targeted for fall 2018. Pollmann added she hopes “shovels will be in the ground” by spring 2019.

Pollmann and Gum both said NGA aspires for the new facility to have plenty of unclassified space to welcome partners and academia, flexible workspaces that can be quickly converted from unclassified to SCIF, and mobile, wireless capabilities throughout.

“We plan to have a bring-your-own-device capability so you can bring your [mobile] device, have it there, and use it,” Pollmann said. “That’s another pretty big sea change for us. The other piece of flexibility is in how we work. … We know it is important in terms of recruitment and retention. We want to give employees options to go to different areas of the building.”

Gum said his goal is to keep up with changing requirements to ensure employee and user capability needs are met when the new facility opens.

“Whatever IT call is made today isn’t going to stand the test of time in five to seven years,” he said.

The Technology Forum concluded with a networking reception at T-REX.

On Oct. 18, USGIF hosted NGA Tech Showcase West at NGA’s current Second Street facility. The classified day was attended by more than 130 people and featured remarks by Poole, followed by a series of technology demos that provided attendees a first-hand look at the work of analysts in St. Louis. The series of events closed with a reception at the Anheuser-Busch Beer Museum.

To learn more about USGIF’s St. Louis initiatives or to get involved, email the SLAWG at slawg@usgif.org.

Featured image: NGA Deputy Director Justin Poole gives a keynote address. Credit: NGA

The Genesis of Google Earth
http://trajectorymagazine.com/genesis-google-earth/ | Wed, 01 Nov 2017
The history and future of the software that made GEOINT mainstream and changed the way we view the world

In August 2005, Hurricane Katrina ravaged the Gulf Coast of the United States, bursting levees throughout Louisiana and Mississippi and submerging the streets of south Florida. According to the National Hurricane Center, it was the deadliest hurricane since 1928, claiming at least 1,800 lives and causing more than $108 billion in damages.

The U.S. Navy, Coast Guard, and other federal relief groups deployed helicopter teams to rescue people stranded in New Orleans without the resources to escape or survive in their homes. Hurricane victims dialed 911 for urgent help at specific street addresses, but it was impossible for first responders to find them without precise GPS coordinates—street signs and house numbers were invisible beneath the deluge. In the absence of traditional situational awareness, responders were operating blind.

In California, a team from the recently minted Google Earth program launched into action, creating real-time imagery overlays of heavily affected areas on top of its existing 3D globe platform. Fly-by aerial photos from the National Oceanic and Atmospheric Administration (NOAA) and satellite imagery from DigitalGlobe—one of Google Earth’s primary providers—revealed the scope of the hurricane’s destruction. Google Earth made this data publicly available and responders had eyes again.

Now, they could input a caller’s location into Google Earth paired with case-specific details—for example, a target trapped in a two-story house with a clay roof next to an oak tree. Equipped with up-to-date imagery from Google Earth, relief teams saved thousands of people from Katrina’s aftermath.

Years later, the Louisiana Governor’s Office of Homeland Security and Emergency Preparedness would pair internal data with Google Earth Enterprise (GEE)—the desktop software suite for private or offline use of Google Earth—to create 3D globes for emergency response and infrastructural planning.

Today, Google Earth is among the most popular geospatial software in the world, boasting upward of one billion downloads. With it, students take virtual tours of the world’s wonders from their classrooms, house hunters evaluate prospective properties without leaving home, and much more. The U.S. military employs GEE for secure mission planning and intelligence professionals use it to visualize points of interest and detect change. Google’s spinning globe truly represents the democratization of geospatial intelligence.

In the case of GEE, government and military organizations became so dependent on the software’s private storage and visualization capabilities that not even a deprecation announcement from Google two years ago stopped them from using the platform.

As a result of the community’s reliance on GEE, earlier this year Google decided to make the software’s code open source and available for public download on GitHub.

With its future in the hands of its users, GEE is poised to remain at the center of mission planning and situational awareness efforts for the defense and intelligence communities—at least until a supported platform of equal utility arises.

A Giant’s Infancy

At the time Hurricane Katrina made landfall, Google Earth software had been available to the public for only three months. But the story of Google Earth began to take shape 10 years earlier at a computer hardware company called Silicon Graphics (SGI).

Michael T. Jones, then a member of SGI’s computer engineering team, had developed an invention that would revolutionize the firm’s 3D graphics offering, which at the time was used primarily for flight simulation.

“It was called clip mapping. That’s the fundamental hardware feature SGI had that let it do this amazing, smooth flight around the world,” said Jones, now a managing partner at Seraphim Capital.

Jones’ technique displayed a small region of graphics—the region under examination—in high resolution while the peripheral regions were displayed in low resolution. Jones, along with SGI engineers Chris Tanner, Chris Migdal, and James Foran, patented the method in 1998. Clip mapping required powerful supercomputers to run, but enabled a high-fidelity texture map that became the centerpiece of SGI’s final graphics system, Infinite Reality, which at the time boasted the fastest 3D graphics in the world.
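
A simplified way to picture clip mapping: each mipmap level keeps only a fixed-size window of texels resident, centered on the region under examination, and lookups fall back to coarser levels as they move away from that center. The sketch below is a hedged illustration of that level-selection logic, not SGI’s patented implementation.

    def clipmap_level(texel_x, texel_y, center_x, center_y, clip_size, num_levels):
        """Return the finest mip level whose resident window covers the texel.

        Level 0 is full resolution; each coarser level halves resolution, so its
        fixed-size window spans twice the ground extent of the level above it.
        """
        for level in range(num_levels):
            half_extent = (clip_size * 2 ** level) / 2  # window half-size in base-level texels
            if abs(texel_x - center_x) <= half_extent and abs(texel_y - center_y) <= half_extent:
                return level
        return num_levels - 1  # outside every window: use the coarsest resident level

    # Texels near the region of interest resolve at full detail; distant ones coarsen.
    print(clipmap_level(10_050, 10_020, 10_000, 10_000, clip_size=512, num_levels=8))  # -> 0
    print(clipmap_level(60_000, 10_000, 10_000, 10_000, clip_size=512, num_levels=8))  # -> 7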

Federal agencies such as the National Geospatial-Intelligence Agency (NGA) and the National Reconnaissance Office (NRO) would later follow suit, Jones said, using clip mapping to build data visualization platforms of their own.

To demonstrate the vastness of Infinite Reality’s capabilities, SGI created a demo called “Space to Your Face.” It began with a wide view of Earth from space, slowly zooming into Europe. When Lake Geneva became visible, the program would focus on the Matterhorn in the Swiss Alps. It would continue to zoom until reaching a 3D model of a Nintendo 64 console on the mountainside. Then it would zoom in even more, settling on the console’s SGI-designed MIPS microprocessor before snapping smoothly back to space.

The demo was well received. Educators were excited to see an interactive, classroom-friendly global map tool, and video game developers had never seen such fluid graphics.

Seeking a new home for their brainchild, Jones, Tanner, and former SGI engineers Remi Arnaud and Brian McClendon founded a company of their own. Called Intrinsic Graphics, it focused on developing high-quality 3D graphics for personal computers and video games.

In October 1999, Tanner took the concept further when he designed a software version of the clip mapping feature that allowed a user to “fly” within a 3D visualization of Earth.

“People were blown away,” Jones said. “They were looking at Google Earth.”

Though the software platform wasn’t Intrinsic’s primary product—the graphics themselves were—Jones was intrigued and continued refining the spinning globe.

Yet running the software required expensive and highly specialized computing hardware not available to most of the private tech industry, let alone the commercial user.

“That machine cost $250,000. We wanted to be able to offer this without the specialized hardware,” said McClendon, now a research professor at the University of Kansas. “To be able to get that performance out of a PC meant we could share it with the world. The moment you realize you can transmit this data over the internet, you begin to realize the impact. A group of us at Intrinsic thought, ‘We need to build a company around this.’”

And before long, yet another company was founded. In 2000, Jones, McClendon, and a few others spun out the software from Intrinsic Graphics to launch Keyhole. In early 2001, Keyhole raised first round funding from NVIDIA and Sony Digital Media Ventures, making official its existence as a standalone company. Keyhole’s first product, EarthViewer 1.0, was the true precursor to Google Earth.

Using public data gathered from NASA’s Landsat constellation, IKONOS imagery, and aerial photos of major U.S. cities, Keyhole built a complete digital Earth. Though pixels were beginning to proliferate, high-resolution imagery was mostly limited to U.S. metropolitan areas.

Under the direction of newly appointed Keyhole CEO John Hanke, the company marketed EarthViewer to the commercial real estate and travel industries. Civil engineers also purchased it for the ability to sketch out location information when planning construction projects. 

“Intelligence agencies wanted this capability as well, but they wanted to use their own data,” McClendon said.

The Intelligence Community (IC) was intrigued, but wanted to use classified geospatial data gathered through National Technical Means rather than the data on Keyhole’s public servers. To accommodate such buyers, Keyhole began offering an enterprise version of its software, allowing large-scale users to stand up private network servers and host their own data on a replica of EarthViewer’s 3D globe.

NIMA Backing

The National Imagery and Mapping Agency (NIMA) was the first agency to take note of this unprecedented capability. Under the leadership of then-director James Clapper and deputy director Joanne Isham in 2001, NIMA launched a research and development directorate known as InnoVision. The new directorate sought to leverage state-of-the-art technologies from industry to help the IC adapt to the changing face of conflict in the aftermath of 9/11.

Isham, a former CIA employee, was well versed in In-Q-Tel, the CIA’s nonprofit venture capital initiative. She approached Robert Zitz, InnoVision’s first director, about collaborating with In-Q-Tel to find partner companies.

“We sat down together with In-Q-Tel and went over what our most urgent requirements were,” said Zitz, now senior vice president and chief strategy officer of SSL MDA Government Systems. “In-Q-Tel started trying to locate companies and [in 2002] discovered Keyhole.”

In-Q-Tel was impressed by the low barrier to entry and EarthViewer’s ease of use.


“With [EarthViewer], you just click on the icon and all of a sudden you’re flying around the globe,” said Chris Tucker, In-Q-Tel’s founding chief strategic officer and now the principal of Yale House Ventures. “There had been some way earlier-era, very expensive defense contract iterations [of a 3D digital Earth], but none at a consumer level that a regular analyst could make sense of without being a missile defense expert or some other technical user.”

In 2003, In-Q-Tel invested in Keyhole using NIMA funding. It was the first time an intelligence agency other than the CIA had employed In-Q-Tel. NIMA experienced an immediate return on its investment. Within two weeks, the U.S. military launched Operation Iraqi Freedom, which Keyhole supported in its first mission as a government contractor.

“We wanted a capability that would help military planners visualize and seamlessly move through datasets pertaining to particular target areas,” Zitz said. “We also wanted the ability to rapidly conduct battle damage assessments. NIMA was supporting joint staff in the Pentagon, and to sense how effective a strike was after-the-fact was very labor and imagery intensive. With Keyhole, we were able to streamline that process.”

EarthViewer quickly gained public exposure through TV news coverage using its battlefield imagery.

One of McClendon’s junior high school classmates, Gordon Castle, was CNN’s vice president of technologies. McClendon approached Castle with his EarthViewer demos. Castle was wowed, and CNN became one of Keyhole’s first media customers. The network routinely used EarthViewer to preview story locations during broadcasts. When the U.S. invaded Iraq, CNN used the software heavily—sometimes several times an hour—to show troop movement or combat locations.

The Big Break

Realizing its technology could improve people’s understanding of the planet, widespread commercialization became Keyhole’s mission. But Keyhole was a small company, and scaling up its computing infrastructure to handle more traffic was expensive. An annual EarthViewer Pro subscription still cost $599—a price justified by the company’s high operating costs. Keyhole’s bottom line stood in the way of its goal.

“[We wanted] everybody that opened the app to be able to find their house,” McClendon said. “It’s the first thing everybody searches for. If that experience isn’t good, the user thinks the product isn’t good.”

That first step required high-quality coverage of the entire land surface of Earth—a seemingly unattainable achievement for Keyhole’s 29 employees, even with In-Q-Tel backing. And the startup’s network bandwidth wasn’t strong enough to offer a high-resolution 3D globe to millions of consumers worldwide. McClendon recalled making regular trips to Fry’s electronics store to purchase hard drives, struggling to keep up with demand.

“To provide high-resolution data for the whole world was an epic undertaking … that would’ve taken us probably a decade to build up on our own,” he said.

For its vision to materialize, Keyhole needed more capital to scale up imagery procurement and to build powerful data infrastructure to store high volumes of imagery. In 2004, as if on cue, along came Google—one of the few companies powerful enough to manifest Keyhole’s mission. And they wanted to buy.

“It seemed like a tough road. Everybody was impressed with what we had done, but there was going to be competition and we needed to move quickly,” Jones said. “So we sold to Google because our dream would happen.”

As part of the acquisition, the Keyhole team maintained control of the program as it evolved. Most personnel, including McClendon and Jones (Tanner had since departed Keyhole), became executives at Google, developing their software unrestricted by the need to keep a startup afloat.

Once at Google, the program began to operate on an entirely different scale. Instead of acquiring licensing deals for small portions of a vendor’s imagery at a time, Google bought out all the imagery a vendor had available at once. Google also provided access to a rapidly growing user base already hooked on its web search platform.

Before debuting a Google-branded product, the former Keyhole team had to rewrite EarthViewer’s service code to run within Google’s infrastructure. Additionally, pre-release engineering refinements focused on adding data around the globe, making the program accessible to non-English speaking users, and simplifying features. Finally, Google Earth launched in June 2005.

The software exploded in the commercial marketplace. Where Keyhole’s consumer version of EarthViewer was too expensive for most casual civilian users, Google Earth was downloadable for free.

“We had millions of users in the first few days and tens of millions in the first year,” McClendon said.

Keyhole brought to Google a new form of interactive information that mimicked the real world and helped people understand their place in it. A GEOINT tool had finally made it to the mainstream.

In 2006, Google released Google Earth Enterprise for organizations seeking the capabilities of Google Earth but with private data in a secure, offline environment. The GEE suite included three software components: Fusion, the processing engine that merged imagery and user data into one 3D globe; the Earth server that hosted the private globes built by Fusion; and Client, the JavaScript API used to view these globes.

Whether to disseminate that data after creating proprietary globes in GEE was, and still is, up to the user. This was the final evolution of the EarthViewer enterprise suite used by the Pentagon at the outset of the Iraq war.

GEE in Action

In the years following its launch, government agencies, businesses, and state municipalities began to deploy GEE at internal data centers to produce 3D globes using sensitive or classified data.

The city of Washington, D.C., for example, has used GEE to model and visualize public safety data including crime, vehicle and fire hydrant locations, and evacuation routes.

Arguably the largest user of GEE is the U.S. Department of Defense (DoD). When Google Earth was first released, military customers had an explicit need for this capability to function in a highly secure private network.

For example, the Army Test and Evaluation Command (ATEC) uses private data on enterprise servers such as Google’s to evaluate a wide range of weapon systems as well as ground and air operations.

At ATEC’s Yuma Proving Ground (YPG) in Arizona, proprietary terrain data, imagery, and operations maps are overlaid on Google Earth and used to plan and schedule launches.

“Knowing where everyone is and moving in a secure range and air space is important to our operations,” said Ruben Hernandez, an Army civilian in the YPG’s engineering support branch. “Much of this data is also armed for range awareness display.”

For example, prior to an indirect fire artillery test, personnel use YPG data within GEE to assess the safest positions on base to conduct the test—when to fire, where to fire from, and what to fire at. That information is disseminated throughout YPG for awareness.

“Many of these munitions have extensive footprints. We want to find out how much air and land space [the blast] is going to consume. Safety is a big component of how these overlays are planned,” Hernandez said.
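
A toy version of that kind of footprint check might look like the following. It is purely illustrative, with made-up coordinates and a simple circular footprint standing in for YPG’s real safety models, but it captures the basic question: does a given position on the range fall inside the hazard area?

    import math

    def inside_hazard_footprint(firing_lon, firing_lat, radius_m, point_lon, point_lat):
        """Rough check of whether a point falls inside a circular hazard footprint
        around a firing position (small-area, flat-Earth approximation)."""
        m_per_deg_lat = 111_320.0
        m_per_deg_lon = m_per_deg_lat * math.cos(math.radians(firing_lat))
        dx = (point_lon - firing_lon) * m_per_deg_lon
        dy = (point_lat - firing_lat) * m_per_deg_lat
        return math.hypot(dx, dy) <= radius_m

    # Example: an observation post about 3.3 km from the firing point lies inside
    # a 4 km footprint, so this prints True (i.e., the position is not clear).
    print(inside_hazard_footprint(-114.30, 32.90, 4_000, -114.27, 32.915))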

NGA is another major GEE stakeholder. In 2008, the agency’s new GEOINT Visualization Services (GVS) program invested in the enterprise server. GVS has since produced a proprietary version of Google Earth for warfighters featuring classified NGA data.

According to GVS program manager Air Force Lt. Col. Mike Russell, “GVS was built around providing a version of Google Earth in the secret and top secret domains so users could visualize classified information geospatially and temporally in a common operating picture.”

Now, NGA’s private Google Earth globes are mission critical for more than 30,000 customers daily, including DoD Combatant Commands, the FBI, CIA, NRO, National Security Agency, and Federal Emergency Management Agency. NGA’s current release is the second largest Google Earth globe in the world and is used across the DoD and IC for common situational awareness, tracking vehicles and personnel, delivering intelligence briefings, and more.

Russell praised Google’s efficient rendering of data files in the Keyhole Markup Language (KML) format. KML was created for file building in Keyhole’s EarthViewer platform and has since become an industry standard for visualizing geospatial data.

“[Users] will create data files like the location of an IED or a live dynamic track of an aircraft. They can build these files rapidly and not to spec, put them in Google Earth, and they’ll run somehow. [Competitors] can only render smaller KMLs or those built to spec. That’s really the reason why no other applications have been able to enter this space as dominantly as Google Earth,” Russell said.
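
Part of the reason such files can be thrown together quickly in the field is that KML is plain XML. As a hedged illustration (hypothetical coordinates and file name), a minimal point Placemark can be generated in a few lines of Python and loaded straight into Google Earth or a GEE globe:

    def placemark_kml(name, lon, lat, alt=0.0):
        """Return a minimal KML document containing a single point Placemark.

        Note that KML orders coordinates as lon,lat[,alt] in WGS84 — a common
        gotcha for users accustomed to lat/lon ordering.
        """
        return f"""<?xml version="1.0" encoding="UTF-8"?>
    <kml xmlns="http://www.opengis.net/kml/2.2">
      <Document>
        <Placemark>
          <name>{name}</name>
          <Point>
            <coordinates>{lon},{lat},{alt}</coordinates>
          </Point>
        </Placemark>
      </Document>
    </kml>"""

    # A quickly built field report like this one opens directly in Google Earth.
    with open("report.kml", "w") as f:
        f.write(placemark_kml("Reported obstacle", -90.199, 38.627))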

The Unbundling

GEE served a far more specific client and purpose than the commercial Google Earth services, but its rate of adoption was noticeably low compared to most Google products.

According to McClendon, “Continuing to innovate on a hosted service exclusively for the enterprise community was not financially viable.”

In March 2015, Google announced the deprecation of GEE. After a two-year transitional maintenance period, the company stopped supporting GEE software in March 2017. Though it was being phased out of Google’s product line, GEE remained in use by invested customers relying on it to meet mission demands and house their data.

Hernandez recalled pushback from teams at Yuma who were not keen to change their data storage and visualization system. According to Russell, GVS feared losing its primary product and stranding customers without an application to replace it.

To accommodate the ongoing need, Google announced in January it would publish all 470,000 lines of GEE’s code on GitHub, allowing customers to continue using the software they’d grown loyal to and to improve the product independently.

For customers who prefer transitioning to a supported enterprise software, Google has coordinated with Esri to offer free software and training for GEE customers who migrate to Esri’s ArcGIS platform. 

The open-source GEE (GEE-OS) suite includes the Earth server, Fusion, and a portable server allowing users to run GEE on a mobile device or desktop computer not connected to a centralized server. The GEE Client software, which is required to connect to the Earth server and view 3D globes, was not carried forward into the open-source environment. Instead, it will continue to be maintained and provided by commercial Google Earth.

Thermopylae Sciences and Technology (TST), NT Concepts, and Navigis—three longtime Google partners—supported GEE’s transition to open source. In the spring, each of the three companies sent a developer to Google in Mountain View, Calif., to spend several weeks learning the code from Google developers who had been maintaining the software baseline. 

TST began a partnership with Google in 2007 through a series of federal government customer engagements supporting Thermopylae’s own Google Earth-based tracking console. When the open-source announcement was made, TST’s Earth Engineering team was reassigned to the company’s Open Source Development Office to create the GEE GitHub site and migrate the source code.

On Sept. 14, TST’s open source team released GEE-OS version 5.2.0, which matches the last proprietary release and fixes bugs that emerged during the two-year deprecation period.

“When we pulled the code out from [Google’s] proprietary side, there were a lot of things that needed to be built back up or replaced with open-source components,” said Thermopylae CEO AJ Clark. “Really these first few months are just about providing feature parity with where the code was at its last state inside Google.”

TST’s team aims to release GEE-OS 5.2.1 by the end of 2017.

Now that parity is achieved and the program’s performance is stabilized, developers will begin submitting expanded code contributions. According to Clark, the first value-add propositions will most likely begin to flow in early 2018. Meanwhile, DoD and IC users are eager to discover how they can further adapt the software for their specific missions.

Chris Powell, CTO of NT Concepts, said the company is working with its defense and intelligence community customers to support GEE and their transition to the GEE-OS baseline. 

“We’re also actively looking for opportunities to contribute back to the open source baseline for feature improvements and capabilities,” Powell said, adding some possibilities are scaling the GEE processing power to a larger compute platform and examining how the software can be optimized for the cloud.

Hernandez said the planning crew at Yuma is looking forward to new software capabilities that could be built out at the request of the test community. Among these features, he said, is the ability to “grab geospatial objects and collaborate on them between multiple users; to grab, extend, and change the shape of a [weapon] footprint in 2D or 3D; and to provide a simulation of an object’s line trajectory.”

According to Jon Estridge, director of NGA’s Expeditionary GEOINT Office, the agency has committed to providing enhancements and ongoing sustainment to open-source GEE on GitHub through at least 2022.

“A few specific examples would be multi-threading the fusion process to support massive terrain and imagery updates, enhanced 3D mesh management, and inclusion of ground-based GEOINT content like Street View,” Estridge said. 

Open source means more customizability for users with niche wants and needs. No two proprietary Google Earth globes look the same, and teams will have more command over the unique data they store, visualize, and analyze within the program.

“It’s very positive,” Russell said. “[Open source is] an opportunity for NGA to partner with Thermopylae to tie the proprietary and non-proprietary pieces together, and it allows us to sustain Google Earth for our user community for a longer period of time.” 

The decision to make GEE code open source only improves the program’s accessibility and potential use cases, and will bolster the software’s longevity. Code sharing is a growing trend in the IC, and Google has provided government, military, and industry unlimited access and control of one of the most useful enterprise GEOINT tools on the market. 

Record Breaking Data Storage
http://trajectorymagazine.com/record-breaking-data-storage/ | Thu, 03 Aug 2017
IBM once again breaks record for uncompressed data storage

While most modern data storage efforts (such as Google’s Nearline and Coldline) are focused on the cloud, IBM Research and Sony recently announced a major breakthrough in one of the field’s oldest technologies: sputtered magnetic tape.

Using a cartridge prototype that can fit in the palm of a hand, IBM can now store up to 330 terabytes—more than 330 million books’ worth—of uncompressed data. According to Techspot, that’s six times the size of the world’s largest hard drive.

At 201 gigabits of data per square inch, this is the highest areal recording density ever achieved and about 20 times the density of mainstream commercial tape drives. This new capacity surpasses IBM’s existing world record set in 2015 of 123 gigabits per square inch. To store 330TB, IBM’s newest cartridge holds more than 1 kilometer of tape.
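
A back-of-the-envelope check shows how those figures hang together. The sketch below assumes standard half-inch-wide tape and exactly 1 kilometer of it — both assumptions, since the article says only “more than 1 kilometer.”

    # Upper bound on capacity if every square inch of the tape carried data.
    areal_density_bits = 201e9          # bits per square inch
    tape_length_in = 1_000 * 39.3701    # 1 km expressed in inches
    tape_width_in = 0.5                 # nominal half-inch tape (assumption)
    upper_bound_bytes = areal_density_bits * tape_length_in * tape_width_in / 8
    print(f"{upper_bound_bytes / 1e12:.0f} TB if the full width were recordable")
    # Prints roughly 495 TB, so the quoted 330 TB implies about two-thirds of the
    # width carries data — plausible once servo tracks and edge margins are set aside.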

 

According to a Sony press release, the high performance tape was developed “by bringing together Sony’s new magnetic tape technology employing lubricant with IBM Research – Zurich’s newly developed write/read heads, advanced servo control technologies, and innovative signal-processing algorithms.”

Magnetic tape remains a strong storage medium thanks to its ability to hold high volumes of data over long periods of time, its low consumption of computing power, and its low cost per terabyte.

Techspot reports the milestone—IBM’s fifth storage density record since 2006—has cemented the tech firm’s plans to scale up tape storage research and development for the next decade as it seeks solutions to one of the greatest big data challenges.

Photo Credit: IBM Research
