Civil – Trajectory Magazine
We are the official publication of the United States Geospatial Intelligence Foundation (USGIF) – the nonprofit, educational organization supporting the geospatial intelligence tradecraft.

The Past, Present, and Future of Geospatial Data Use
Thu, 01 Feb 2018 16:31:38 +0000
Exploring the uses of geospatial data in retail, health care, financial services, and transportation/logistics


Over the past quarter century, information in the form of digital data has become the foundation on which governments, industries, and organizations base many of their decisions. In our modern world, there exists a deluge of data that grows exponentially each day. Companies and institutions have come to realize that not only must they have access to the right data at the right time, but they must also have access to analysis of that raw data in order to make sound decisions. The proper collection, analysis, and usability of timely and relevant data can mean the difference between success and failure.

“As organizational decisions increasingly become more data driven, businesses need to assure decisions are made with the most accurate data. That explains why so many organizations have made data collection and analysis a strategic and organizational priority and recognize data as a mission-critical asset to manage.”

– Harvard Business Review

Hence the constant search for new data sources, tools, solutions, and experts. Hence the persistent quest for new ways to use data, find relationships in data, and discover patterns in data. One of the most significant growth areas in the broader world of data is data visualization. Whether rendering information in two or three dimensions, geospatial data is the key to visualizing data, which is why it has become one of the most sought-after forms of data. Geospatial data was traditionally confined to use by the military, intelligence agencies, and maritime or aeronautical organizations. Today, the use of geospatial information has expanded into almost every market and institution around the globe, with the discovery that it can provide new levels of insight and information. Geospatial data has become an integral element in how companies and organizations conduct business throughout the world. Looking at how geospatial data has been used in the past and is being used in the present leads us to ask how its uses will change in the future.

Geospatial Data in Retail

Unbeknownst to most consumers, data drives the world of retail. Google, Amazon, and Walmart have realized the value of geospatial data in achieving growth and digital transformation, and now others are following suit. To tailor products, services, and goods, retailers need to know the socioeconomic characteristics of their customers. Specifically, geospatial data can provide retailers with data on income, housing/rent prices, surrounding business performance, population, and age. These details determine which brands and products a store carries. For example, a store like Macy’s or JCPenney in an urban location will carry different brands than it would in a suburban or rural community.

Another way retail uses geospatial data is in combination with weather pattern predictions. In areas prone to hurricanes, tornadoes, or extreme winter weather, retailers adjust which items they overstock or keep on hand. In times of catastrophe, such as Hurricanes Harvey, Irma, and Maria, stores like Home Depot typically carry a surplus of generators. In the restaurant industry, Waffle House prepares to serve a limited menu during inclement weather. Ahead of predicted storms, especially in the South, Waffle House orders the food necessary to operate on a limited menu so it can provide customers with breakfast in times of need. Weather pattern visualizations also let grocery stores know when to stock up on non-perishable items. Although individual storms are not predictable, seasonal timing and trends year after year are, and the ability to forecast at least a few weeks ahead can increase profits and better serve customers in times of need.

In a more traditional brick-and-mortar industry, such as banking or fast food, companies like Subway and Wells Fargo can select future optimal sites and assess the past performance of existing locations. Socioeconomic data as well as information like traffic patterns, foot traffic, and the number of residences in the area can be helpful when choosing a location. Geospatial data can also provide information on competitors in the area and forecast upcoming trends or construction projects that may affect business. For example, it’s important to know if a major, long-term road construction project is planned that may impact traffic patterns and accessibility of the business location.

The use of geospatial data in retail is not a new development. People began using customer data for retail sales forecasting in the post-World War II era; however, it wasn’t until the 1990s that technology improved enough to allow companies to perform “data mining” on their customers and retail stores. Since then, data mining has progressed from raw statistics to incorporating other technologies, such as artificial intelligence, to help log and track activities in certain locations.

The creation of GIS software in particular provided companies with a multitude of information. Through thematic map coloring, companies are able to visualize geographic patterns that may not otherwise be apparent in the raw data. Entering raw data into tables and then instructing the GIS software to render that data as a layer on the map (such as pins marking where a company’s best customers live) creates a visual that allows retail companies to recognize patterns in the population. Layers can also be added or removed to show more or less information with a click of the mouse. For example, if a company was viewing a map showing where its best customers lived but wanted to focus instead on the average annual income of the population living near a store, it could uncheck the best-customers layer and select the average-annual-income layer. These abilities make GIS an invaluable asset for any retail business.
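The layer-toggling workflow described above can be sketched in a few lines of Python. The `Layer` and `MapView` classes here are hypothetical stand-ins, not part of any real GIS package:

```python
# A minimal sketch of toggleable thematic map layers. Real GIS software
# adds projections, styling, and rendering; this shows only the concept.
from dataclasses import dataclass, field

@dataclass
class Layer:
    name: str
    points: list          # (lat, lon, value) rows from the raw data tables
    visible: bool = True

@dataclass
class MapView:
    layers: dict = field(default_factory=dict)

    def add_layer(self, layer: Layer) -> None:
        self.layers[layer.name] = layer

    def toggle(self, name: str, visible: bool) -> None:
        # The "click of the mouse": flip a layer on or off
        # without touching the underlying data tables.
        self.layers[name].visible = visible

    def render(self) -> list:
        # Only visible layers contribute pins to the rendered map.
        return [(lyr.name, pt) for lyr in self.layers.values()
                if lyr.visible for pt in lyr.points]

view = MapView()
view.add_layer(Layer("best_customers", [(38.90, -77.03, "A")]))
view.add_layer(Layer("avg_income", [(38.91, -77.04, 72000)]))
view.toggle("best_customers", False)   # swap layers, as in the example above
print([name for name, _ in view.render()])
```

The key design point is that toggling only changes what is drawn; the underlying data tables stay loaded, so layers can be swapped instantly.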

  • This article is part of USGIF’s 2018 State & Future of GEOINT Report. Download the PDF to view the report in its entirety and to read this article with citations. 

Geospatial Data in Health Care

Health geography and the application of geospatial data and techniques continue to expand their influence, supporting more accurate and timely decision-making in the healthcare market. Research continues into the application of social geography and into redefining health care from a model of treatment to a model of prevention and wellness.

Geospatial data is essential for both the study of epidemiology and the geography of health care. When we “know the earth,” when we discover patterns and influencing factors, when we understand how a population is influenced by social and cultural norms, only then can we begin to understand the effects on humans and their health needs.

Many diseases are being researched today using geographic techniques. The locations of water sources, IV drug users, and environmental hazards, as well as the nomadic patterns of people, can all provide clues to where the greatest healthcare need may exist in the future. One such example follows.

Geospatial research teams are using commercial data to develop simultaneous sky and ground truth for detecting and tracking nomadic pastoralists in rural areas of Africa. Using algorithms originally developed for defense intelligence, industry has prototyped solutions that can detect and geo-locate new dwellings in the Lake Chad region. Analysis helps develop patterns of life, including health-related information, based on data availability. This information can be provided to workers on the ground in order to provide efficient vaccine and medical care to nomadic populations.

Industry is also applying advanced algorithms to epidemiology to refine the scope and improve the cost-effectiveness of imagery tasking for more sensitive and specific results. For example, geospatial technology is being used to detect and geo-locate waste tire piles in Africa, which are a significant breeding ground for disease-carrying mosquitos.

Through discovery of these disease breeding grounds, healthcare teams can determine disease vectors and ultimately provide much-needed vaccinations for diseases such as polio, West Nile virus, and malaria. Currently, nomadic tribes and camps are difficult to track, requiring locals and untrained health workers to deliver vaccinations in remote areas based on the seasonal migration of the tribes. Because of the difficulty in pinpointing tribe locations and the practice of employing sometimes-corrupt locals for delivery, inoculations may end up on the black market and many people may remain unvaccinated. One in five children worldwide is not fully protected with even the most basic vaccines. As a result, an estimated 1.5 million children die each year—one every 20 seconds—from vaccine-preventable diseases. This application of technology provides better tools to track human migration and to produce trends and reports that can make vaccination delivery to humans in need more precise and timely. The same technology can be applied to find other structures and bio-forms that function as breeding sites—information that can be provided to survey teams for validation and action.

The use of geospatial data and analysis benefits the healthcare industry daily. Geospatial tools can visualize and inform service providers about changes in patterns, environmental impacts, and the identification of and changes within high-risk areas, as well as where resources should be deployed to provide the greatest benefit.

Geospatial Data in Financial Services

The financial services industry, which traditionally consumes data in the form of dollars, cents, credits, and debits contained within spreadsheets, balance sheets, or financial statements, has discovered value in geospatial data. Consider the world of investment banking, an industry whose success is built on betting on ventures that offer the best return on investment and avoiding ventures that have a high risk of failure. This industry has made a science of making the right investments based on analysis of all available data. Certainly this includes accounting data, balance sheets, and financial forecasts. However, today many financial services providers are also including geospatial data and analysis in their decision processes. By using geospatial data and employing experts in geospatial analysis, companies can access new elements of knowledge, including but not limited to:

  • Visualizing real estate or land holdings tied to a particular investment.
  • Tracking changes to corporate, industry, or regional construction or development over time.
  • Visualizing geographic and demographic data of investments and the regions of the globe they occupy.
  • Analyzing services and infrastructure in a geographic area that may have a positive or negative impact on an investment.
  • Using geospatial data as one more source to avoid inaccurate or false financial information.
  • Analyzing imagery data of current or prospective investments half a world away without the need for travel.

In all of these examples, the benefit is the same: geospatial data provides new information that, at a minimum, promotes a more informed decision process and, in many cases, a more profitable decision. Additionally, investment risk can be reduced in ways that were unheard of a decade ago.

Geospatial Data in Logistics/Transportation

Historically, geospatial data has been most commonly associated with transportation through the use of maps for navigation and transit. However, an abundance of new applications built on digital maps is changing the way we understand our world. Who would have thought you could know exactly how much time it would take to get from Point A to Point B using the fastest route, or that you could warn other drivers about a disabled car?

But beyond common applications like Google Maps and Waze, companies and industries are leveraging geospatial data to deliver better transportation solutions.

Today’s economy is focused on how to achieve results cheaper and faster while still maintaining high-quality products. Geospatial data has been a key influence in logistics and routing via roads/highways, railways, ports/maritime, and airports/aviation. Companies have been able to expand their businesses with this data by reducing the complexity of navigating large geographic areas. These operations can include:

  • Using global positioning systems (GPS) for vehicle tracking and dispatch to expedite schedules.
  • Conducting route analysis for better efficiency when transporting goods.
  • Mapping operation/warehouse locations for the proper inventory of goods for transport.
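The route-analysis item above can be illustrated with a toy greedy heuristic over great-circle distances. Real routing engines use road networks and live traffic; the coordinates and depot below are purely illustrative:

```python
# Order delivery stops with a nearest-neighbor heuristic over
# haversine (great-circle) distances. A sketch of the idea only.
import math

def haversine_km(a, b):
    """Great-circle distance in km between (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def nearest_neighbor_route(depot, stops):
    """Visit each stop, always moving to the closest unvisited one."""
    route, current, remaining = [depot], depot, list(stops)
    while remaining:
        nxt = min(remaining, key=lambda s: haversine_km(current, s))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    return route

depot = (38.9072, -77.0369)            # Washington, D.C. (illustrative)
stops = [(39.2904, -76.6122),          # Baltimore
         (38.8048, -77.0469),          # Alexandria
         (39.0458, -76.6413)]          # near Annapolis
print(nearest_neighbor_route(depot, stops))
```

Nearest-neighbor is a crude first pass; production dispatch systems solve much richer vehicle-routing problems, but the payoff is the same: fewer miles driven per delivery.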

By implementing geospatial data into business decisions, companies can see favorable results. Recent studies show that by using geospatial data, companies can help improve efficiencies and customer satisfaction as well as drive business strategy:

“Research carried out by Vanson Bourne on behalf of Google, shows that mapping technology has had a dramatic impact on the transport and logistics organizations that have embraced it. 67 percent are experiencing better customer engagement, 46 percent have improved productivity and efficiency, and 46 percent have seen reduced costs as a result. Over half (54 percent) of those surveyed say that it has led them to reconsider their organization and/or product strategy.”

Internationally, geospatial data for transportation is in great demand. Data consisting of population densities, land uses, and travel behavior are valuable at the federal, state, and local levels to aid in transportation policy and planning. These data improve decisions made for highway management to ensure better use of limited funding.

During natural disasters, geospatial data plays an important role in risk management of transportation routes. Geospatial data informs strategic planners of potential routes that could be impacted by the risks inherent to geography. These data also help identify evacuation routes. Emergency management organizations can identify road closures to help them navigate to people in need as quickly as possible.

Lastly, public transportation, fitness, and sport-based applications used for transportation should not be overlooked. An abundance of these applications is available to the everyday user, providing the information necessary to make timely decisions that improve schedules and results.

The Future of Geospatial Data

Innovation and cutting-edge research and development (R&D) in the field of geospatial data, geospatial science, and analytics continue to yield new ways to incorporate geospatial data into new arenas and offer solutions to today’s most challenging problems. Companies and academic institutions across the country are investing in developing geospatial technologies that will further extend the use of this valuable data outside traditional markets.

The fields of remote sensing and mobile drone platforms/sensors are expanding rapidly, providing consumer markets new levels of persistent and targeted geospatial data previously available only to the military and intelligence agencies. Geospatial data is critical to the operation of drones and small autonomous spacecraft, which depend on it for precise positioning. Numerous R&D activities are finding new ways to provide more accurate data to these platforms, thus enhancing their overall performance.

GIS research has also become a critical element in developing artificial intelligence (AI) and machine learning (ML) technologies, providing an important data element to the content libraries and algorithms of these systems. AI innovations offer groundbreaking ways to perform topological data analysis, spatial analysis, change detection, and feature selection.

Geospatial data is also one of the foundational elements of virtual reality (VR) development. There is an increase in the use of geospatial data to inform policy-making. Spatial data related to urban sociology, demography, and statistics are becoming an essential element of many local, state, and federal government decision processes. The aforementioned is merely a sampling of the future of GIS. Other R&D activities that will further broaden the use of geospatial information include but are not limited to:

  • Biosecurity and health informatics.
  • Biostatistics and health risk appraisals.
  • Geospatial patterns of health behaviors and outcomes.
  • Geospatial patterns of disease treatment and outcomes.
  • Urban health, education, crime, and economic development.
  • Computational spatial statistics and social-environmental synthesis.
  • Geospatial urban planning and development.
  • Geospatial civil engineering.
  • GIS for traffic analysis and engineering applications.
  • Environmental and food security on both a regional and global scale.
  • Transport of contaminants in soil and water.
  • Geospatial trends in air pollution.
  • Food and water security.
  • Regional climate response and agricultural forecasting.

Geospatial data use has expanded beyond traditional consumers and is adding value to the retail, transportation, healthcare, and financial markets, to name a few. This expansion indicates that adding geospatial data to any data collection or analysis effort is beneficial. Furthermore, it speaks to the ever-present need to ensure geospatial data and related tradecrafts are properly governed to provide consistency in quality, accuracy, and security.


The Vanguard of Commercial GEOINT
Wed, 31 Jan 2018 17:20:46 +0000
From self-driving cars and “drones as a service” to crowdsourcing exercise routes, the commercial world continues to leverage GEOINT in new and creative ways


First we told our devices how to locate themselves, then we gave our computers the power to parse the profusion of data those devices generate. Now, those devices are returning the favor by providing useful information about the world around us. But because these changes crept up on us from multiple directions, many of us have only begun to realize all the possibilities they have opened.

“We sort of slouched into it,” said Dr. Todd S. Bacastow, a professor of practice at USGIF-accredited Pennsylvania State University. “It’s certainly been within the last five to 10 years that we’ve begun to see this massive amount of data and all the opportunity within it.”

Around 25 years ago, only approximately 15 percent of the information collected in the world was geo-tagged, observed Dr. Steven D. Fleming, a professor of spatial sciences with the University of Southern California’s Spatial Sciences Institute, which is also accredited by USGIF to grant academic GEOINT Certificates.

Now? “Most of the world’s data is geo-tagged—I think it’s 85 to 90 percent,” Fleming said. “We know where a banking transaction starts and where it ends. We can track digits. We can certainly track where people are.”

That’s the story of how geospatial intelligence (GEOINT) has generated new perspectives on the natural and built environment. But the next chapter—how companies take these possibilities and turn them into new products and services—includes many plot twists.

These four companies illustrate only a few of many ways the commercial world is leveraging the power of GEOINT.

Teaching Cadillacs to Drive Themselves

The form of GEOINT many people know best is the digital map—for example, the latest Geography 2050 conference in New York City focused entirely on mobility. The ability of a phone to locate itself and then offer directions customized to traffic conditions was the stuff of science fiction 30 years ago. But as impressive as the digital cartography of Google and others can be, it’s not precise enough to feed directly to a self-driving vehicle.

So Cadillac decided to commission its own maps before it could include its highway-only Super Cruise self-driving option in the 2018 CT6. The carmaker turned to a Livonia, Mich., firm named Ushr to take navigational mapping to the next level.

“The difference about an autonomous driving map versus a navigation map, we’re concerned about the lane delineators, the slope of the road,” said Chris Thibodeau, senior vice president of Ushr. “In a navigation map, none of that information is needed.”

Plus, an autonomous driving map needs accuracy beyond what GPS can deliver—down to 10 centimeters. Ushr sent cars packed with LiDAR sensors on a tour of America’s highways—220,000 miles driven since 2013.

“It took us about a year and a half to collect and process all that data,” Thibodeau said.

Layered over original LiDAR imagery, Ushr roadway data includes details like cross-slope, lane width, lane markings, and more, all globally geo-referenced to sub-10 centimeter accuracy. Data is available every 0.5 meters along the road. (Image credit: Ushr)

In September, I had the opportunity to take a CT6 on loan from Cadillac for a test drive from Washington, D.C., to Cleveland, and the results were a kind of magic: Once the CT6 recognized it was on a highway in its database, a steering-wheel icon lit up on the dashboard to advise me that Super Cruise was available. I’d press a button to activate this mode, and the top of the steering wheel illuminated in green to show the car had taken over.

Informed by its database, the CT6 stuck to a lane as if it were a rail, slowing and accelerating as needed to compensate for traffic around me. All I had to do was keep my eyes focused on the road ahead—something the car itself watched for, using an inward-facing camera to ensure I was still paying attention.

Ushr is now looking to drive down the costs of its mapping solution, in part by applying machine learning techniques to recognize road features such as stop signs and crosswalks.

“We’re also spending a good amount of engineering resource today on basically automating those feature identification and feature extraction algorithms,” said Brian Radloff, Ushr’s vice president of business development. He added this would allow Ushr to begin mapping secondary roads.

Cadillac might not need that data—company president Johan de Nysschen told me in 2016 that bulky LiDAR sensors needed to detect pedestrians would not fit with a Cadillac’s style—but Ushr has other customers in mind.

“Some municipalities are looking at potentially using this data [in place of conducting their own surveys] if it’s accurate enough,” Radloff said. “When Amazon’s talking about things like drone delivery having a very precise HD map those drones can follow [it could] be another kind of further-out-there application of this technology.”

Bringing Eyes to the Skies—For Rent

Unmanned aerial vehicles, more commonly known as drones, are one of the most public symbols of GEOINT’s new era. But many companies that could benefit from the ability of drones to extend human senses to places that are difficult or dangerous for humans to reach lack the budget and expertise to buy their own systems.

That’s where D.C.-based Measure comes in, offering drones and analytical tools clients can hire for particular jobs. This business model—what it calls “Drones as a Service”—has given the firm extensive insight into what drones can and cannot do.

“Agricultural is probably one of the most overhyped applications for drones,” cautioned Abigail Lacy, Measure’s vice president of sales. “Anybody you talk to who’s been in the drone space for more than two years would probably tell you that.”

A drone’s different perspective can, however, make a difference at the margins by gathering data points about exactly where in the field a crop is flourishing or struggling.

“A lot of them really derive from just having the eye in the sky—not just the RGB, but the NDVI,” Lacy said, referring to the Red-Green-Blue of traditional imagery and the Normalized Difference Vegetation Index that a near-infrared camera can produce to indicate the presence of live vegetation.

That, in turn, can allow for a more precise, cheaper application of fertilizer.
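The NDVI Lacy mentions is a standard band ratio: near-infrared minus red, divided by near-infrared plus red. A minimal sketch, with illustrative reflectance values (the specific numbers are not from the article):

```python
# NDVI: live vegetation reflects near-infrared strongly and absorbs red,
# so (NIR - Red) / (NIR + Red) runs from -1 to 1, with healthy crops high.

def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index for one pixel."""
    if nir + red == 0:
        return 0.0          # avoid division by zero over no-signal pixels
    return (nir - red) / (nir + red)

healthy_crop = ndvi(nir=0.50, red=0.08)   # strong NIR reflectance
bare_soil    = ndvi(nir=0.25, red=0.20)   # NIR and red nearly equal
print(round(healthy_crop, 2), round(bare_soil, 2))   # 0.72 0.11
```

In practice the ratio is computed per pixel across a whole near-infrared image, producing the field-level vigor maps that guide precision fertilizer application.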

But many farmers remain skeptical. “They just tend to be slow adopters when it comes to technology,” she continued. “They’re really hesitant to drop money on all of this different equipment.”

Measure is more bullish about the potential for drones to provide insight for industries such as construction and energy. Lacy cited solar farms as one example, touting the ability of drones to answer questions before construction, such as: “How productive will the solar farm be?” and “Am I going to have water runoff issues on the site?” Once the site is in operation, drones can help identify malfunctioning panels.

Measure doesn’t disclose its rates, but Lacy cited internal research that the company’s service can yield $7,200 in annual savings on a 10-megawatt solar facility compared to traditional inspections.

She noted drones don’t just operate at a lower cost than manned aircraft, they can also get lower to the ground. The firm relies mostly on visual and thermal cameras.

“We are keeping a close eye on how LiDAR is evolving,” she said, but added that so far costs are too high and quality is too low.

The firm also often has to deal with a lesser GEOINT hindrance—every company seems to have its own proprietary software. “You’ll get 15 different software providers that all have a unique system,” Lacy said.

Measure hopes automated data processing will cut down on its own overhead, but the real “game changer” would be automation of a drone’s flight—which, in turn, will require a loosening of regulations that today ban drone flights beyond a human operator’s visual line of sight.

Fusing Maps and Live Data

Now that so many mobile devices come equipped with GPS receivers—meaning the apps on those devices can also geo-tag user activities—coping with the massive scale of the resulting data becomes a challenge.

“As the variety of channels and devices that connect customers, companies, and physical assets increases, so too do the ways to measure and analyze spatial information,” a 2016 Forrester report observed. “One of the great challenges for effectively making use of location data has been integrating it with other data sets and analysis to provide deeper context and insight.”

That’s a big theme in the work of MapD, a D.C.-based firm that’s made a specialty out of integrating live data with maps.

One of its most fascinating demos tracks the last several weeks’ worth of geo-tagged tweets around the world, placing them on the map and color-coding them by language. Users can search for keywords and hashtags or just float the cursor across countries to see what is trending. For example, the large rectangle hovering over Finland turns out to be @EveryFinnishNo, a bot that tweets out the Finnish word for a new number every minute.

Another MapD demo offers a similarly granular look at ship movements around the U.S. from 2009 to 2015, both offshore and in lakes and rivers. “Tug” is overwhelmingly the most popular type of vessel, with more than five billion records.

MapD’s New York City taxi ride data set currently totals approximately 1.2 billion records. (Image credit: MapD)

A third demo provides a look at nearly seven years’ worth of taxi rides across New York City, from 2009 to 2015. During that time, cash transactions outnumbered credit: more than 632.1 million cash rides versus more than 510.8 million credit rides, while more than 2.2 million rides were recorded as going uncharged.

The massive computational power provided by GPUs is critical to these efforts.

“GPU computing is really going to take data to the next level and analytics to the next level,” said Monica McEwen, MapD’s vice president for U.S. federal customers. She pointed to how this revolution in processing power allowed Verizon Wireless to accelerate its analysis of network problems.

“Historically, they had to do that in batch mode,” she said. “Today, they’re looking at that in real-time.”

Also important: Ensuring interfaces scale up to meet a density of data she predicted will mean “being able to display literally billions of records and have a response time in the milliseconds.”

“The pure volume of [data] makes it nearly impossible to present it in a fashion in which people can make meaningful sense of it,” McEwen said. As a result, MapD’s interfaces let users easily add or remove layers of data so they can focus on particular variables.

Crowdsourcing Exercise Intelligence

Strava, a workout-tracking app popular with many cyclists and runners, has a different challenge to address. Mashing up the location reports it gets from users can inform individual Strava athletes looking to find popular routes on its heatmap.

That trove of data soon caught the attention of urban planners, and that led to a complementary product: a database of cycling and pedestrian activity over time called Strava Metro.

“We started hearing from departments of transportation who said, ‘This is cool, but we can’t see the temporal details,’” said Brian Devaney, sales and marketing lead for Strava Metro. “We had to figure out a way to get all those GPS pulse points and aggregate them and anonymize them.”

Strava’s Global Heatmap of New York City reveals popular routes and activities. (Image credit: Strava)

Combining the heatmap with Metro required the firm to address privacy risks.

Part of its answer is privacy options. Users can choose to place a geo-fence around a home, office, or other location, hiding it and the last 500 meters of a route from the view of others. A more comprehensive enhanced privacy option suppresses even more data from the feeds of other Strava users, down to hiding your last name.
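The geo-fence option can be approximated as a simple radius filter over a route's trackpoints. This is a hypothetical sketch of the concept, not Strava's implementation:

```python
# Before a route is shared, drop every trackpoint within a privacy
# radius of a protected location. Illustration of the idea only.
import math

def haversine_m(a, b):
    """Great-circle distance in meters between (lat, lon) degree pairs."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000.0 * math.asin(math.sqrt(h))

def geofence(track, protected, radius_m=500.0):
    """Return the track with points near the protected location removed."""
    return [p for p in track if haversine_m(p, protected) >= radius_m]

home = (38.9000, -77.0300)              # hypothetical protected location
track = [(38.9001, -77.0301),           # ~15 m from home: hidden
         (38.9040, -77.0300),           # ~440 m: still inside the fence
         (38.9100, -77.0300)]           # ~1.1 km: visible to others
print(geofence(track, home))
```

Filtering by distance rather than simply truncating the track means a route that loops past home mid-ride is also scrubbed near the protected point.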

The company won’t say how many users have exercised either option, but many customers never touch the default settings. To keep their information safe as well, Strava aggregates individual GPS measurements without reference to where users started and ended their workouts. The result is an alternative map in which a major highway like Interstate 66 in northern Virginia vanishes from view, while the bike trail next to it glows yellow but leaves no hint of where along the path one person started or ended a ride or run. Strava’s underlying maps, developed by D.C.-based Mapbox on top of cartography from OpenStreetMap, automatically show bike- and pedestrian-hostile roads in gray. Even in small towns like rural Lincoln, Va., enough users walk, run, or bike to leave a dense web of trails on Strava’s heatmap. To use this data to get a sense of an individual’s whereabouts, you’d need to know where they live first.

Strava Metro, launched in 2014, offers customers not just the heatmap’s static view (updated once a quarter) of overall movements but also minute-by-minute data about how many people went in one direction on one street. Again, the company boils its data down to GPS points, this time showing direction and time, while removing everything else.

“We do buffer for privacy,” Devaney said of the company’s decision to obscure the start and end of a workout route.
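Strava hasn’t published its buffering logic, but the general technique—dropping GPS points within a fixed distance of a track’s endpoints before aggregating anything—can be sketched in a few lines of Python. The 200-meter default and the haversine helper below are illustrative assumptions, not Strava’s actual parameters:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(p, q):
    """Great-circle distance in meters between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def buffer_endpoints(track, buffer_m=200):
    """Drop every GPS point within buffer_m of the track's first and last
    points, so aggregated data carries no hint of where a workout began
    or finished."""
    start, end = track[0], track[-1]
    return [p for p in track
            if haversine_m(p, start) > buffer_m and haversine_m(p, end) > buffer_m]
```

Aggregating only the surviving points is what lets a trail glow on the heatmap while the first and last blocks of every ride stay dark.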

And users can opt out of having their data used in Metro at all, although only “a fraction of a percent” have done so.

Customers such as city and state transportation departments as well as cyclist advocacy organizations use this data to plan or push for improvements such as bike lanes and wider sidewalks, then audit how the new infrastructure performs.

“They can learn what corridors are most busy during peak commute times versus on weekends,” Devaney said. “A lot of groups are using the data to understand how behavior changes after they put in infrastructure.”

You can imagine that this data would also be enormously attractive to such businesses as athletic-wear manufacturers, but Strava has chosen to limit its sales of Metro data to organizations “working to influence policy and infrastructure.”

Eschewing commercial use of data gathered from workout-tracking apps happens to line up with one of the core privacy principles put forth last year by the Future of Privacy Forum.

What’s Next? Megacities, Drones, and Small Sats

In terms of its commercial evolution, GEOINT is barely old enough to run for office. What could it look like by the time this roughly 25-year-old discipline is old enough to run for president?

USC’s Fleming said a larger trend—humanity’s move to large cities, in which the height and volume of buildings make the traditional references of GPS unusable or suspect—will force a switch to more resilient location technologies that work better inside and next to large structures.

“We’re piling up people along the coastlines of the world, so we have to deal with megacities better.”

Many smartphone users have already seen this problem when a location-based app loses its GPS signal, falls back on a nearby WiFi router that happens to have been moved from one venue to another, and vaults the user to a spot miles away.

Fleming also expects drones to become even more on-demand, “where everyone expects them to be around and they’re providing things like public safety services.”

But a world in which the whine of quadcopter rotors is a normal part of the background din may take some persuasion by drone vendors.

Fleming’s colleague Andrew Marx pointed to a different form of GEOINT system: small sats.

“The advantage of a small sat is you can build up an activity of an object,” Marx said. “You can have so many repeat observations.”

But adding this temporal dimension to GEOINT will require further innovation to display it. The tasks MapD and Strava already face will only grow more arduous.

“It’s a struggle, because we’re trying to depict things in four dimensions,” Marx said.

Penn State’s Bacastow, meanwhile, warned about two trends people might not appreciate as much.

One is which countries are focusing their efforts in this area. “Many of our students in AI and deep learning are not from the U.S.,” he said, referring to a recent presentation by the University of Missouri’s Dr. Curt Davis. “Counting publications, you’d find that scholars from other countries, such as China, have a significantly larger number of publications than scholars from the U.S.”

Another is how different generations view the privacy implications of having their geo-location harvested by smartphone apps. Bacastow recounted a freshman seminar he taught two years ago about geospatial privacy.

“I thought students would be concerned and engaged in a seminar about their loss of privacy,” he said. “Quite honestly, they didn’t care. For them, while they understand the loss of their location privacy, as one student put it, ‘I want my pizza delivered to the right place with the push of a button.’”

The post The Vanguard of Commercial GEOINT appeared first on Trajectory Magazine.

The GeoCarb Mission Fri, 19 Jan 2018 19:33:58 +0000 NASA to map carbon gas output over the Americas

The post The GeoCarb Mission appeared first on Trajectory Magazine.

A new NASA remote sensing mission aims to revolutionize our understanding of the carbon cycle by measuring and mapping carbon gas output over the Americas. 

The Geostationary Carbon Observatory—or GeoCarb—is targeted for launch in the early 2020s and will monitor vegetation health and stress as a result of greenhouse gas concentrations in the atmosphere. To lower mission costs, GeoCarb will launch on a commercial SES-Government Solutions communications satellite. Satellites in geostationary orbit match the Earth’s rotation, allowing them to hover over and repeatedly monitor a specific region.
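That hover is a consequence of simple orbital mechanics: setting the orbital period equal to one sidereal day in Kepler’s third law fixes a single orbital radius, and hence a single altitude. A quick back-of-the-envelope check:

```python
from math import pi

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter GM, m^3/s^2
T_SIDEREAL = 86_164.1       # one sidereal day, seconds
R_EARTH = 6_378_000         # Earth's equatorial radius, m

# Kepler's third law, T^2 = 4*pi^2 * r^3 / GM, solved for the orbital radius
r = (MU_EARTH * T_SIDEREAL**2 / (4 * pi**2)) ** (1 / 3)
altitude_km = (r - R_EARTH) / 1000
print(f"Geostationary altitude: {altitude_km:,.0f} km")  # ≈ 35,786 km
```

Any satellite hosting GeoCarb’s payload must sit at this one altitude above the equator; the only free parameter is the longitude it parks over.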

GeoCarb’s advanced payload will build on that of NASA’s Orbiting Carbon Observatory-2 (OCO-2) mission, including identical detector technology, algorithms, and calibration techniques along with oxygen spectral bands and a grating spectrometer. However, GeoCarb will add a fourth spectral band to measure carbon monoxide and, for the first time in U.S. satellite history, methane. GeoCarb will also record solar-induced fluorescence (SIF), which indicates that plants are pulling carbon from the air and photosynthesis is occurring.

According to NASA, this payload will result in roughly 10 million daily recordings of carbon dioxide, carbon monoxide, methane, and SIF at a spatial resolution of three to six miles. The collected data will illuminate how carbon flows between land, oceans, and the atmosphere as well as how carbon-based gases are distributed by wind and weather patterns. Because of its geostationary placement, GeoCarb will fill information gaps left by polar orbiting satellites, resulting in a more complete picture of the carbon cycle.

In addition to informing climate science, this mission could also give a boost to the energy industry. Scientific American reports methane leaks cost the U.S. natural gas industry up to $10 billion each year. The article suggests GeoCarb’s collection of essential industry information (and its cost-efficient hosted payload method) is an effective way to please those in Congress who want to cut spending and maximize profits from the energy sector as well as those who want increased research toward improving environmental sustainability. If GeoCarb is successful, the hosted payload could serve as a model for future NASA partnerships with commercial satellite vendors and for international space programs to expand this research to other parts of the world.

Photo Credit: NASA/Lockheed Martin/University of Oklahoma


The DigitalGlobe Foundation Celebrates 10 Years Fri, 15 Dec 2017 17:50:43 +0000 A look at some of the globally meaningful work the foundation has made possible

The post The DigitalGlobe Foundation Celebrates 10 Years appeared first on Trajectory Magazine.

The DigitalGlobe Foundation (DGF), an educational nonprofit established by commercial satellite imagery provider DigitalGlobe, celebrates its tenth anniversary this year. To promote globally significant research and prepare the next generation of geospatial professionals, DGF awards grants to students and scientists in the form of free access to the company’s imagery, training, and other space-based technology.

DGF founder Mark Brender saw the need in 2007 to ramp up workforce development in preparation for the industry’s imminent growth.

“We needed a way to open our aperture, to bring new ideas and people into geospatial sciences and the commercial remote sensing imagery ecosystem,” Brender said. “The best way to do that was to establish a foundation that can put high-resolution imagery into the hands of students so they can experiment with it, understand it, and eventually become geospatial users.”

To date, DGF has awarded more than 3,000 imagery grants valued at more than $14 million to students and researchers around the world. Such fieldwork has explored changes in topography over time, human and wildlife population sustainability, and historic site identification.

Students at USGIF-accredited GEOINT programs are often the recipients of such grants. 

“Our partnership with DGF provides unique opportunities for USGIF’s 14 accredited college and university programs,” said USGIF CEO Keith Masback, who is also a member of DGF’s board of directors. “With this access they are able to expand their ability to conduct research and advance the GEOINT tradecraft.”

In addition to research support, DGF also offers scholarships to select partner schools, including $5,000 annual awards to students at George Mason University and the University of Colorado.

To encourage more global-scale problem-solving from promising geospatial scientists, DGF is gradually expanding its scope beyond awarding imagery grants for specific research projects. Since March, DGF President Kumar Navulur has led the foundation toward investments in three main areas:

  • Leveraging machine learning and spectral analysis to extract insights from data.
  • Promoting the study of foundational sciences where the current global capacity is sub-par, specifically photogrammetry and physics.
  • Creating a cooperative network of research-focused universities.

According to Navulur, DGF has also expanded its reach from just a few universities outside the U.S. to a wider distribution of 50 universities in 20 countries. Additionally, DGF has established a relationship with the African Association of Remote Sensing of the Environment, which consists of about 50 more universities.

The foundation hopes increased support will push young geospatial professionals to seek tangible solutions to major environmental problems.

“I would love for universities to look at how to use imagery to document the quantifiable progress of the United Nations’ Sustainable Development Goals,” Navulur said.

In years to come, DGF partners and grant recipients will benefit from new access to cloud-penetrating radar data from Maxar Technologies, DigitalGlobe’s new parent organization. Additionally, case-specific imagery grants will be supplemented with access to the company’s global base map, DigitalGlobe Cloud Services.

“We are ensuring students have the skills to develop location-based technologies like the Internet of Things and remote sensing,” Navulur said. “Not only will they get jobs, they’ll make a difference in the world.”

Following are case studies featuring seven DGF grant recipients who are already making a difference:

Egyptian Looting

DGF granted three high-resolution images to University of Alabama at Birmingham’s Dr. Sarah Parcak to help measure archaeological looting in Egypt. Illegal digging reports were growing in the Saqqara and Dashur regions south of Cairo. Up-to-date data was not immediately available, so official theft measurements for the area were highly inaccurate until Parcak received access to GeoEye imagery via DGF.


Surveying Nomadic Health

In one of its first grants, DGF released imagery to Stanford researcher Hannah Binzen Wild for her analysis of health in nomadic pastoral populations in Ethiopia. Wild used the data to locate mobile settlements quickly enough to develop and deliver hundreds of surveys to people living in the remote Nyangatom region of Ethiopia’s Lower Omo Valley. She’s now back at Stanford, working in collaboration with the Stanford Geospatial Center to refine the use of imagery for analysis by developing algorithms to determine average settlement size and other population characteristics. The team hopes these methods and pilot data can serve as a foundation to improve health care access for nomadic populations in other contexts.


Tracking Gold

Michael Armand Canilao, an archaeologist and University of Illinois at Chicago graduate student, received an imagery grant from DGF supporting his research on ancient gold trading routes in the Philippines. DGF released four pan-sharpened WorldView-2 multispectral images, each displaying 1,000-square-foot tiles in northwest Luzon. The imagery enabled a closer look at the trails and, according to Canilao, made clear “how small-scale gold miners were able to negotiate, and, in some cases dictate, the terms of their participation in Early Historical Period maritime gold trade.”


Mapping the Magan Peninsula

New York University doctoral candidate Eli Dollarhide sought to uncover the true historic landscape of Magan, an ancient peninsula in Oman with an uncertain political past. DGF granted Dollarhide access to WorldView-2 and -3 imagery of the land between Bronze Age settlements Bat and Amlah. This imagery helped Dollarhide’s team determine where to spend their limited time in the field and enabled the discovery of prehistoric tombs, petroglyphs, and roughly 450 other previously undocumented archaeological sites.


Satellites Over Seals

University of Minnesota researcher Michelle LaRue and her team used imagery provided by DGF to determine factors affecting the population variation and distribution of Weddell seals along the Antarctic coast. Both commercial fishing and the melting of ice caused by climate change have affected the ice-dependent species. The project aims to determine what environmental conditions the seals require to survive. “We literally couldn’t do this research without [this imagery],” LaRue said. She manually scoured the imagery to count seals, and compared her findings to modern, ground-validated counts as well as counts from the 1960s.


Erosion in the Yukon

It is theorized that slight increases in temperature caused the recent disappearance of the glacial Slims River in the Yukon. Dan Shugar, a researcher and professor at the University of Washington, Tacoma, was awarded WorldView-1, WorldView-2, and GeoEye-1 imagery by DGF to create 3D maps of the region. This enabled him to observe erosion processes in the Slims and Kaskawulsh rivers. Some imagery is being converted into a series of multi-temporal digital elevation models (DEMs) to visualize the hydrological system underground in search of changes that would affect glacial drainage. Shugar called these DEMs “a game changer.” DGF is continuing to work with Shugar on new tasking for stereo and multi-spectral images to detect changes in Kluane National Park.


Valley of the Khans

DGF helped researchers from the University of California San Diego, the Mongolian Academy of Science, and the National Geographic Society in their quest to locate the final resting place of Genghis Khan. In one of its first grants, DGF provided Albert Yu-Min Lin and his team with imagery of multiple areas over Mongolia. The researchers are leveraging the power of the crowd and enlisting the general public to help study the satellite imagery and identify features of interest. The aim is to find Khan’s tomb using non-invasive tools and enable protective conservation methods at the historic site.


Images courtesy of DigitalGlobe and the individual DGF grant recipients.


Weekly GEOINT Community News Mon, 20 Nov 2017 17:53:36 +0000 Radiant Solutions Announces Plan of Operations; General Atomics Acquires Surrey Satellite Technology U.S.; Planet Imagery Made Available in SpyMeSat App; USGS Publishes Global Crop Map; & More

The post Weekly GEOINT Community News appeared first on Trajectory Magazine.

Radiant Solutions Announces Plan of Operations

Maxar Technologies’ geospatial business unit Radiant Solutions will combine former service brands RadiantBlue, HumanGeo, MDA Information Systems, and DigitalGlobe Intelligence Solutions into one commercial provider. The business will be organized into three missions: sensor and ground modernization; data to insight; and agile intelligence. This convergence of data-gathering sensors, cloud computing, open source, big data, and machine learning will offer customers an end-to-end geospatial capability in support of national security missions.

General Atomics Acquires Surrey Satellite Technology U.S.

General Atomics acquired the majority of the assets of Surrey Satellite Technology U.S., a Colorado-based provider of small satellite technologies, systems, and services. The assets and workforce will be integrated into General Atomics’ Electromagnetic Systems Group to support the organization’s growth initiatives focused on the development and delivery of small satellite and advanced payload systems.

Planet Imagery Made Available in SpyMeSat App

Planet reached an agreement with Orbit Logic, allowing users of Orbit Logic’s SpyMeSat mobile app to access Planet’s daily satellite imagery. SpyMeSat provides on-demand access to recently archived imagery and the ability to request tasking over specific areas. Planet images available in the app cover 625 square kilometers at 3.7-meter resolution and cost less than $1.30 per square kilometer, while new tasking options begin at $375.

Esri Partners with Mobileye on Driver Assistance

Esri announced a collaboration with Intel’s Mobileye, a provider of driver-assistance software, to integrate Esri’s analysis and visualization capabilities with Mobileye’s Shield+ system. A network of sensors placed on the vehicle will record real-time data like pedestrian or cyclist detection in blind spots—that data will be uploaded into Esri’s ArcGIS platform and viewed on the Mobileye dashboard. Municipal buses and other public transport will be outfitted with this technology, making for safer commutes and communities.

USGS Publishes Global Crop Map

The United States Geological Survey released a new high-resolution map of croplands around the world. The map identifies 1.87 billion total hectares of farmland—India has the highest net cropland area, followed by the U.S., China, and Russia. The map was built using Landsat imagery at 30-meter resolution, the highest resolution of any global agricultural dataset.

Loft Orbital Raises Funding for Condo Constellation

Loft Orbital has raised $3.2 million in seed funding to create a constellation of satellites carrying multiple payloads from different customers. Spacecraft would weigh between 100 and 200kg to keep launch prices low enough to dissuade customers from purchasing and operating satellites of their own. Loft will manage satellite procurement, launch, operations, and data downlink, while customers will task their own payloads. Loft is targeting a first mission for the second half of 2019.

Boundless Rebrands GIS Software

Boundless announced the rebranding of its flagship GIS software from Boundless Suite to Boundless Server. The new enterprise package will feature enhanced styling and increased compatibility with Esri’s ArcGIS. The software’s flexible architecture allows users to manage and publish location data with ease.

Luciad Launches Data Management Software Updates

Luciad announced the new V2017.1 version of its software suite, particularly the LuciadFusion data management platform. Luciad bills LuciadFusion as a “one-minute data manager,” able to handle setup, publishing, visualization, discovery, and analytics tasks in about 60 seconds each.

Pitney Bowes Launches Collaborative Online GIS Community

Pitney Bowes launched its Li360 Community, a global online community of GIS professionals, clients, and customers collaborating on business tools and capabilities. The community serves as a way to promote innovation from the geospatial industry as companies realize the benefits of location intelligence and begin using it to drive sales.

ODNI Re-launches

The Office of the Director of National Intelligence announced the re-launch of its central website for the U.S. Intelligence Community (IC). The move is rooted in a community-wide effort to standardize transparency about the IC’s activities. Users can browse public data, documents, and products, and can link to other resources such as the websites of specific intelligence agencies.

Photo Credit: USGS


What are Your 3 Words? Fri, 17 Nov 2017 18:47:46 +0000 What3words assigns three-word identifiers to every location on Earth

The post What are Your 3 Words? appeared first on Trajectory Magazine.

The global address system is imperfect. Road names are often repeated or similar within municipalities, leading to botched deliveries, confusing navigation, and wasted time. Street addresses only cover developed areas with established infrastructure. Geographic coordinates are precise but too complicated for everyday use.

To fix these problems, London-based what3words is simplifying global addresses. The company has divided the entire surface of the world into a geocoding grid of 57 trillion 3-meter-by-3-meter squares, assigning each a unique three-word identifier. This allows more accurate location sharing and product delivery and provides addresses for the billions of people living in developing neighborhoods without defined street names.
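What3words’ word assignment is proprietary, but the first half of the job—snapping a latitude/longitude pair onto a fixed ~3-meter grid—is straightforward to sketch. The word list and the integer-folding step below are invented stand-ins, not the company’s patented scheme:

```python
# One degree of latitude is ~111,320 m, so a 3 m cell is about
# 3 / 111_320 degrees. (what3words actually varies cell width with
# latitude; a fixed step is enough to illustrate the geocoding.)
CELL_DEG = 3 / 111_320

# Toy vocabulary -- the real system draws on tens of thousands of words.
WORDS = ["apple", "river", "stone", "cloud", "maple", "lantern", "otter", "gale"]

def cell_index(lat, lon):
    """Snap a coordinate to the row/column of its ~3 m grid square."""
    return int((lat + 90) / CELL_DEG), int((lon + 180) / CELL_DEG)

def three_words(lat, lon):
    """Fold the cell into one integer, then into three words -- an invented
    stand-in for what3words' patented word assignment."""
    row, col = cell_index(lat, lon)
    n = row * 14_000_000 + col          # columns never reach 14 million
    a, rest = divmod(n, len(WORDS) ** 2)
    b, c = divmod(rest, len(WORDS))
    return WORDS[a % len(WORDS)], WORDS[b], WORDS[c]
```

Any two points inside the same 3-meter square map to the same three words, while a point one street over lands in a different cell and gets a different address.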

To encourage the use of their system around the world, what3words has translated the map grid into 14 languages such as French, Arabic, and Swahili, with more to come including Hindi and Zulu.

The system’s benefits are numerous. To date, the national post services of Nigeria, Djibouti, Côte D’Ivoire, and Mongolia have adopted the what3words system and begun delivering goods and mail to many residential locations for the first time. South African cities like Durban are using it to properly direct emergency responders. The United Nations is using it to geotag imagery as a common operating picture for disaster recovery efforts in remote locations. The system could even break into personal navigation: Mercedes announced it will incorporate what3words addresses into the voice-activated navigation of its next-generation vehicles.

For areas without thorough building numbering or street addresses, embracing what3words could improve city planning, enable efficient business, and help people define their homes.

Photo Credit: what3words


Weekly GEOINT Community News Mon, 13 Nov 2017 11:21:11 +0000 USGS Releases Landsat Analysis Ready Data; ArdentMC Supports California Fire Recovery Efforts; DARPA Bets on In-Space Robotics; More

The post Weekly GEOINT Community News appeared first on Trajectory Magazine.

USGS Releases Landsat Analysis Ready Data
The United States Geological Survey (USGS) released new Landsat Analysis Ready Data (ARD) of the continental U.S., Alaska, and Hawaii. The ARD has already been processed for direct use in monitoring and analyzing terrain change over time. This will circumvent the process of downloading and preparing Landsat scene data for high-volume analytic projects and will reduce data processing for scientists. ARD will serve as the primary dataset for the USGS Land Change Monitoring, Assessment, and Projection Initiative.

ArdentMC Supports California Fire Recovery Efforts
Ardent Management Consulting announced plans to aid fire recovery efforts in Northern California with a drone initiative to collect imagery in heavily affected areas. Drones will fly for 100 hours, gathering data including storm track visualizations, road closures, flood depth, property damage, and more to be used for damage assessments and efficient response.

DARPA Bets on In-Space Robotics
DARPA is pushing to launch a rule-making coalition for the use of high-orbit satellite repair robots. The agency maintains that the lack of clearly defined technical and safety rules is a major hindrance to the in-orbit robotics industry’s expansion, which would extend satellite life spans and save operators millions of dollars over time. The proposed Consortium for Execution of Rendezvous and Servicing Operations would bring together major players in the geosynchronous satellite industry to discuss standard operating protocol and regulations.

Enview Raises Millions for Utility Infrastructure Management
Startup Enview raised $6 million to support its deployment of analysis tools used by oil and gas companies to monitor and manage pipeline networks. The company’s software processes national public data to create detailed reports of major firms’ infrastructure assets and to determine when pipes and power lines are due for upgrades. Enview asserts that an improved understanding of transmission system breakdowns would help avoid human injury and financial damages when accidents occur.

Penn State Adds Homeland Security Concentrations 
Pennsylvania State University expanded its master’s degree program in Homeland Security to offer concentrations in counterterrorism and cyber threat analytics and prevention. Counterterrorism students will learn how to craft effective national policies to prevent and respond to acts of terror and analytic techniques to identify incoming attacks. Cyber enrollees will study data mining and web security. The program is offered exclusively online through Penn State World Campus and is currently accepting applications for the spring 2018 semester.

Boundless Celebrates GIS Day with Geography Education Events
Boundless is ramping up its support of geographic education with a number of upcoming events. In partnership with NGA, USGIF, the American Geographical Society (AGS), and Learning Plunge, Boundless is hosting the first GeoPlunge card game tournament for youth in St. Louis Nov. 15.

The same day, in honor of GIS Day, the company will run GIS workshops and present a lecture titled “Thinking Open: A Hybrid Approach to Implementing Open GIS” at universities around the country. Additionally, Boundless announced it will bring more than 50 geography teachers to Geography 2050, held Nov. 16-17 in New York City. Boundless executives will also host special panels on post-graduation career development and government transparency at the AGS Geography Teacher Fellows Workshop Nov. 18.

Peer Intel

Anthony Robbins joined NVIDIA as vice president of the company’s public sector practice. Robbins will lead NVIDIA’s federal and defense businesses in the U.S. and Canada, and oversee its higher education and research businesses. Prior to NVIDIA, Robbins served as vice president of global defense at AT&T and in various positions at Brocade, Oracle, Sun Microsystems, and Silicon Graphics.

Photo Credit: Space Systems Loral


The Genesis of Google Earth Wed, 01 Nov 2017 14:53:37 +0000 The history and future of the software that made GEOINT mainstream and changed the way we view the world

The post The Genesis of Google Earth appeared first on Trajectory Magazine.

In August 2005, Hurricane Katrina ravaged the Gulf Coast of the United States, bursting levees throughout Louisiana and Mississippi and submerging the streets of south Florida. According to the National Hurricane Center, it was the deadliest hurricane since 1928, claiming at least 1,800 lives and causing more than $108 billion in damages.

The U.S. Navy, Coast Guard, and other federal relief groups deployed helicopter teams to rescue people stranded in New Orleans without the resources to escape or survive in their homes. Hurricane victims dialed 911 for urgent help at specific street addresses, but it was impossible for first responders to find them without precise GPS coordinates—street signs and house numbers were invisible beneath the deluge. In the absence of traditional situational awareness, responders were operating blind.

In California, a team from the recently minted Google Earth program launched into action, creating real-time imagery overlays of heavily affected areas on top of its existing 3D globe platform. Fly-by aerial photos from the National Oceanic and Atmospheric Administration (NOAA) and satellite imagery from DigitalGlobe—one of Google Earth’s primary providers—revealed the scope of the hurricane’s destruction. Google Earth made this data publicly available and responders had eyes again.

Now, they could input a caller’s location into Google Earth paired with case-specific details—for example, a target trapped in a two-story house with a clay roof next to an oak tree. Equipped with up-to-date imagery from Google Earth, relief teams saved thousands of people from Katrina’s aftermath.

Years later, the Louisiana Governor’s Office of Homeland Security and Emergency Preparedness would pair internal data with Google Earth Enterprise (GEE)—the desktop software suite for private or offline use of Google Earth—to create 3D globes for emergency response and infrastructural planning.

Today, Google Earth is among the most popular geospatial software in the world, boasting upward of one billion downloads. With it, students take virtual tours of the world’s wonders from their classrooms, house hunters evaluate prospective properties without leaving home, and much more. The U.S. military employs GEE for secure mission planning and intelligence professionals use it to visualize points of interest and detect change. Google’s spinning globe truly represents the democratization of geospatial intelligence.

In the case of GEE, government and military organizations became so dependent on the software’s private storage and visualization capabilities that not even a deprecation announcement from Google two years ago stopped them from using the platform.

As a result of the community’s reliance on GEE, earlier this year Google decided to make the software’s code open source and available for public download on GitHub.

With its future in the hands of its users, GEE is poised to remain at the center of mission planning and situational awareness efforts for the defense and intelligence communities—at least until a supported platform of equal utility arises.

A Giant’s Infancy

At the time Hurricane Katrina made landfall, Google Earth software had been available to the public for only three months. But the story of Google Earth began to take shape 10 years earlier at a computer hardware company called Silicon Graphics (SGI).

Michael T. Jones, then a member of SGI’s computer engineering team, had developed an invention that would revolutionize the firm’s 3D graphics offering, which at the time was used primarily for flight simulation.

“It was called clip mapping. That’s the fundamental hardware feature SGI had that let it do this amazing, smooth flight around the world,” said Jones, now a managing partner at Seraphim Capital.

Jones’ technique displayed a small region of graphics—the region under examination—in high resolution while the peripheral regions were displayed in low resolution. Jones, along with SGI engineers Chris Tanner, Chris Migdal, and James Foran, patented the method in 1998. Clip mapping required powerful supercomputers to run, but enabled a high-fidelity texture map that became the centerpiece of SGI’s final graphics system, Infinite Reality, which at the time boasted the fastest 3D graphics in the world.
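SGI’s hardware implementation is long gone, but the essence of clip mapping—full-resolution texture only within a small “clip” window around the point of interest, with each doubling of distance falling back one mipmap level—can be sketched as a simple level-selection rule. The window size and level count here are illustrative, not SGI’s actual figures:

```python
def clip_map_level(texel_dist, clip_size=2048, num_levels=12):
    """Pick a mipmap level for a texel texel_dist texels from the clip
    center: full resolution (level 0) inside the clip window, then one
    level coarser each time the distance doubles."""
    level = 0
    half = clip_size // 2
    while texel_dist > half and level < num_levels - 1:
        texel_dist //= 2   # each halving corresponds to one coarser mip level
        level += 1
    return level
```

Because only the texels inside the level-0 window ever need to be resident at full resolution, a planet-sized texture could be flown over smoothly on 1990s-era memory budgets.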

Federal agencies such as the National Geospatial-Intelligence Agency (NGA) and the National Reconnaissance Office (NRO) would later follow suit, Jones said, using clip mapping to build data visualization platforms of their own.

To demonstrate the vastness of Infinite Reality’s capabilities, SGI created a demo called “Space to Your Face.” It began with a wide view of Earth from space, slowly zooming into Europe. When Lake Geneva became visible, the program would focus on the Matterhorn in the Swiss Alps. It would continue to zoom until reaching a 3D model of a Nintendo 64 console on the mountainside. Then it would zoom in even more, settling on the Nintendo’s MIPS R4000 chip—a microprocessor created by SGI—before snapping smoothly back to space.

The demo was well received. Educators were excited to see an interactive, classroom-friendly global map tool, and video game developers had never seen such fluid graphics.

Seeking a new home for their brainchild, Jones, Tanner, and former SGI engineers Remi Arnaud and Brian McClendon founded a company of their own. Called Intrinsic Graphics, it focused on developing high-quality 3D graphics for personal computers and video games.

In October 1999, Tanner took the concept further when he designed a software version of the clip mapping feature that allowed a user to “fly” within a 3D visualization of Earth.

“People were blown away,” Jones said. “They were looking at Google Earth.”

Though the software platform wasn’t Intrinsic’s primary product—the graphics themselves were—Jones was intrigued and continued refining the spinning globe.

Yet running the software required expensive and highly specialized computing hardware not available to most of the private tech industry, let alone the commercial user.

“That machine cost $250,000. We wanted to be able to offer this without the specialized hardware,” said McClendon, now a research professor at the University of Kansas. “To be able to get that performance out of a PC meant we could share it with the world. The moment you realize you can transmit this data over the internet, you begin to realize the impact. A group of us at Intrinsic thought, ‘We need to build a company around this.’”

And before long, yet another company was founded. In 2000, Jones, McClendon, and a few others spun out the software from Intrinsic Graphics to launch Keyhole. In early 2001, Keyhole raised first round funding from NVIDIA and Sony Digital Media Ventures, making official its existence as a standalone company. Keyhole’s first product, EarthViewer 1.0, was the true precursor to Google Earth.

Using public data gathered from NASA’s Landsat constellation, IKONOS imagery, and aerial photos of major U.S. cities, Keyhole built a complete digital Earth. Though pixels were beginning to proliferate, high-resolution imagery was mostly limited to U.S. metropolitan areas.

Under the direction of newly appointed Keyhole CEO John Hanke, the company marketed EarthViewer to the commercial real estate and travel industries. Civil engineers also purchased it for the ability to sketch out location information when planning construction projects. 

“Intelligence agencies wanted this capability as well, but they wanted to use their own data,” McClendon said.

The Intelligence Community (IC) was intrigued, but wanted to use classified geospatial data gathered through National Technical Means rather than the data on Keyhole’s public servers. To accommodate such buyers, Keyhole began offering an enterprise version of its software, allowing large-scale users to stand up private network servers and host their own data on a replica of EarthViewer’s 3D globe.

NIMA Backing

The National Imagery and Mapping Agency (NIMA) was the first agency to take note of this unprecedented capability. Under the leadership of then director James Clapper and deputy director Joanne Isham in 2001, NIMA launched a research and development directorate known as InnoVision. The new directorate sought to leverage state-of-the-art technologies from industry to help the IC adapt to the changing face of conflict in the aftermath of 9/11.

Isham, a former CIA employee, was well versed in In-Q-Tel, the CIA’s nonprofit venture capital initiative. She approached Robert Zitz, InnoVision’s first director, about collaborating with In-Q-Tel to find partner companies.

“We sat down together with In-Q-Tel and went over what our most urgent requirements were,” said Zitz, now senior vice president and chief strategy officer of SSL MDA Government Systems. “In-Q-Tel started trying to locate companies and [in 2002] discovered Keyhole.”

In-Q-Tel was impressed by the low barrier to entry and EarthViewer’s ease of use.

“With [EarthViewer], you just click on the icon and all of a sudden you’re flying around the globe,” said Chris Tucker, In-Q-Tel’s founding chief strategic officer and now the principal of Yale House Ventures. “There had been some way earlier-era, very expensive defense contract iterations [of a 3D digital Earth], but none at a consumer level that a regular analyst could make sense of without being a missile defense expert or some other technical user.”

In 2003, In-Q-Tel invested in Keyhole using NIMA funding. It was the first time an intelligence agency other than the CIA had employed In-Q-Tel. NIMA experienced an immediate return on its investment. Within two weeks, the U.S. military launched Operation Iraqi Freedom, which Keyhole supported in its first mission as a government contractor.

“We wanted a capability that would help military planners visualize and seamlessly move through datasets pertaining to particular target areas,” Zitz said. “We also wanted the ability to rapidly conduct battle damage assessments. NIMA was supporting joint staff in the Pentagon, and to sense how effective a strike was after-the-fact was very labor and imagery intensive. With Keyhole, we were able to streamline that process.”

EarthViewer quickly gained public exposure through TV news coverage using its battlefield imagery.

One of McClendon’s junior high school classmates, Gordon Castle, was CNN’s vice president of technologies. McClendon approached Castle with his EarthViewer demos. Castle was wowed, and CNN became one of Keyhole’s first media customers. The network routinely used EarthViewer to preview story locations during broadcasts. When the U.S. invaded Iraq, CNN used the software heavily—sometimes several times an hour—to show troop movement or combat locations.

The Big Break

Realizing its technology could improve people’s understanding of the planet, Keyhole made widespread commercialization its mission. But Keyhole was a small company, and scaling up its computing infrastructure to handle more traffic was expensive. An annual EarthViewer Pro subscription still cost $599—a price justified by the company’s high operating costs. Keyhole’s bottom line stood in the way of its goal.

“[We wanted] everybody that opened the app to be able to find their house,” McClendon said. “It’s the first thing everybody searches for. If that experience isn’t good, the user thinks the product isn’t good.”

That first step required high-quality coverage of the entire land surface of Earth—a seemingly unattainable achievement for Keyhole’s 29 employees, even with In-Q-Tel backing. And the startup’s network bandwidth wasn’t strong enough to offer a high-resolution 3D globe to millions of consumers worldwide. McClendon recalled making regular trips to Fry’s electronics store to purchase hard drives, struggling to keep up with demand.

“To provide high-resolution data for the whole world was an epic undertaking … that would’ve taken us probably a decade to build up on our own,” he said.

For its vision to materialize, Keyhole needed more capital to scale up imagery procurement and to build powerful data infrastructure to store high volumes of imagery. In 2004, as if on cue, along came Google—one of the few companies powerful enough to manifest Keyhole’s mission. And they wanted to buy.

“It seemed like a tough road. Everybody was impressed with what we had done, but there was going to be competition and we needed to move quickly,” Jones said. “So we sold to Google because our dream would happen.”

As part of the acquisition, the Keyhole team maintained control of the program as it evolved. Most personnel, including McClendon and Jones (Tanner had since departed Keyhole), became executives at Google, developing their software unrestricted by the need to keep a startup afloat.

Once at Google, the program began to operate on an entirely different scale. Instead of acquiring licensing deals for small portions of a vendor’s imagery at a time, Google bought out all the imagery a vendor had available at once. Google also provided access to a rapidly growing user base already hooked on its web search platform.

Before debuting a Google-branded product, the former Keyhole team had to rewrite EarthViewer’s service code to run within Google’s infrastructure. Additionally, pre-release engineering refinements focused on adding data around the globe, making the program accessible to non-English speaking users, and simplifying features. Finally, Google Earth launched in June 2005.

The software exploded in the commercial marketplace. Where Keyhole’s consumer version of EarthViewer was too expensive for most casual civilian users, Google Earth was downloadable for free.

“We had millions of users in the first few days and tens of millions in the first year,” McClendon said.

Keyhole brought to Google a new form of interactive information that mimicked the real world and helped people understand their place in it. A GEOINT tool had finally made it to the mainstream.

In 2006, Google released Google Earth Enterprise (GEE) for organizations seeking the capabilities of Google Earth but with private data in a secure, offline environment. The GEE suite included three software components: Fusion, the processing engine that merged imagery and user data into one 3D globe; the Earth server, which hosted the private globes built by Fusion; and Client, the JavaScript API used to view these globes.

Whether to disseminate that data after creating proprietary globes in GEE was, and still is, up to the user. This was the final evolution of the EarthViewer enterprise suite used by the Pentagon at the outset of the Iraq war.

GEE in Action

In the years following its launch, government agencies, businesses, and state municipalities began to deploy GEE at internal data centers to produce 3D globes using sensitive or classified data.

The city of Washington, D.C., for example, has used GEE to model and visualize public safety data including crime, vehicle and fire hydrant locations, and evacuation routes.

Arguably the largest user of GEE is the U.S. Department of Defense (DoD). When Google Earth was first released, military customers had an explicit need for this capability to function in a highly secure private network.

For example, the Army Test and Evaluation Command (ATEC) uses private data on enterprise servers such as Google’s to evaluate a wide range of weapon systems as well as ground and air operations.

At ATEC’s Yuma Proving Ground (YPG) in Arizona, proprietary terrain data, imagery, and operations maps are overlaid on Google Earth and used to plan and schedule launches.

“Knowing where everyone is and moving in a secure range and air space is important to our operations,” said Ruben Hernandez, an Army civilian in the YPG’s engineering support branch. “Much of this data is also armed for range awareness display.”

For example, prior to an indirect fire artillery test, personnel use YPG data within GEE to assess the safest positions on base to conduct the test—when to fire, where to fire from, and what to fire at. That information is disseminated throughout YPG for awareness.

“Many of these munitions have extensive footprints. We want to find out how much air and land space [the blast] is going to consume. Safety is a big component of how these overlays are planned,” Hernandez said.

NGA is another major GEE stakeholder. In 2008, the agency’s new GEOINT Visualization Services (GVS) program invested in the enterprise server. GVS has since produced a proprietary version of Google Earth for warfighters featuring classified NGA data.

According to GVS program manager Air Force Lt. Col. Mike Russell, “GVS was built around providing a version of Google Earth in the secret and top secret domains so users could visualize classified information geospatially and temporally in a common operating picture.”

Now, NGA’s private Google Earth globes are mission critical for more than 30,000 customers daily, including DoD Combatant Commands, the FBI, CIA, NRO, National Security Agency, and Federal Emergency Management Agency. NGA’s current release is the second largest Google Earth globe in the world and is used across the DoD and IC for common situational awareness, tracking vehicles and personnel, delivering intelligence briefings, and more.

Russell praised Google’s efficient rendering of data files in the Keyhole Markup Language (KML) format. KML was created for file building in Keyhole’s EarthViewer platform and has since become an industry standard for visualizing geospatial data.

“[Users] will create data files like the location of an IED or a live dynamic track of an aircraft. They can build these files rapidly and not to spec, put them in Google Earth, and they’ll run somehow. [Competitors] can only render smaller KMLs or those built to spec. That’s really the reason why no other applications have been able to enter this space as dominantly as Google Earth,” Russell said.
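Part of KML’s staying power is its simplicity: a placemark is just a few lines of XML, which is why analysts can dash off usable files by hand. The snippet below builds and re-parses a minimal, hypothetical placemark using only the Python standard library (the placemark name and coordinates are invented for illustration; operational files carry far more structure):

```python
# Build a minimal KML placemark with the standard library.
# The placemark name and coordinates here are invented for illustration.
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"

def make_placemark(name, lon, lat):
    """Return a minimal KML document containing one point placemark."""
    ET.register_namespace("", KML_NS)
    kml = ET.Element(f"{{{KML_NS}}}kml")
    doc = ET.SubElement(kml, f"{{{KML_NS}}}Document")
    pm = ET.SubElement(doc, f"{{{KML_NS}}}Placemark")
    ET.SubElement(pm, f"{{{KML_NS}}}name").text = name
    point = ET.SubElement(pm, f"{{{KML_NS}}}Point")
    # KML coordinate order is longitude,latitude[,altitude]
    ET.SubElement(point, f"{{{KML_NS}}}coordinates").text = f"{lon},{lat},0"
    return ET.tostring(kml, encoding="unicode")

kml_text = make_placemark("Observation point", -77.05, 38.87)

# Round-trip: parse the document back and recover the coordinates.
root = ET.fromstring(kml_text)
coords = root.find(f".//{{{KML_NS}}}coordinates").text
assert coords == "-77.05,38.87,0"
```

A renderer that tolerates missing or out-of-order elements in files like this one, as Russell describes, can accept input far sloppier than the OGC KML specification requires.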

The Unbundling

GEE served a far more specific client and purpose than the commercial Google Earth services, but its rate of adoption was noticeably low compared to most Google products.

According to McClendon, “Continuing to innovate on a hosted service exclusively for the enterprise community was not financially viable.”

In March 2015, Google announced the deprecation of GEE. After a two-year transitional maintenance period, the company stopped supporting GEE software in March 2017. Though it was being phased out of Google’s product line, GEE remained in use by invested customers relying on it to meet mission demands and house their data.

Hernandez recalled pushback from teams at Yuma who were not keen to change their data storage and visualization system. According to Russell, GVS feared losing its primary product and stranding customers without an application to replace it.

To accommodate the ongoing need, Google announced in January it would publish all 470,000 lines of GEE’s code on GitHub, allowing customers to continue using the software they’d grown loyal to and to improve the product independently.

For customers who prefer transitioning to supported enterprise software, Google has coordinated with Esri to offer free software and training for GEE customers who migrate to Esri’s ArcGIS platform.

The open-source GEE (GEE-OS) suite includes the Earth server, Fusion, and a portable server allowing users to run GEE on a mobile device or desktop computer not connected to a centralized server. The GEE Client software, which is required to connect to the Earth server and view 3D globes, was not carried forward into the open-source environment. Instead, it will continue to be maintained and provided by commercial Google Earth.

Thermopylae Sciences and Technology (TST), NT Concepts, and Navigis—three longtime Google partners—supported GEE’s transition to open source. In the spring, each of the three companies sent a developer to Google in Mountain View, Calif., to spend several weeks learning the code from Google developers who had been maintaining the software baseline. 

TST began a partnership with Google in 2007 through a series of federal government customer engagements supporting Thermopylae’s own Google Earth-based tracking console. When the open-source announcement was made, TST’s Earth Engineering team was reassigned to the company’s Open Source Development Office to create the GEE GitHub site and migrate the source code.

On Sept. 14, TST’s open source team released GEE-OS version 5.2.0, which matches the last proprietary release and fixes bugs that emerged during the two-year deprecation period.

“When we pulled the code out from [Google’s] proprietary side, there were a lot of things that needed to be built back up or replaced with open-source components,” said Thermopylae CEO AJ Clark. “Really these first few months are just about providing feature parity with where the code was at its last state inside Google.”

TST’s team aims to release GEE-OS 5.2.1 by the end of 2017.

Now that parity is achieved and the program’s performance is stabilized, developers will begin submitting expanded code contributions. According to Clark, the first value-add propositions will most likely begin to flow in early 2018. Meanwhile, DoD and IC users are eager to discover how they can further adapt the software for their specific missions.

Chris Powell, CTO of NT Concepts, said the company is working with its defense and intelligence community customers to support GEE and their transition to the GEE-OS baseline. 

“We’re also actively looking for opportunities to contribute back to the open source baseline for feature improvements and capabilities,” Powell said, adding some possibilities are scaling the GEE processing power to a larger compute platform and examining how the software can be optimized for the cloud.

Hernandez said the planning crew at Yuma is looking forward to new software capabilities that could be built out at the request of the test community. Among these features, he said, is the ability to “grab geospatial objects and collaborate on them between multiple users; to grab, extend, and change the shape of a [weapon] footprint in 2D or 3D; and to provide a simulation of an object’s line trajectory.”

According to Jon Estridge, director of NGA’s Expeditionary GEOINT Office, the agency has committed to provide enhancements and ongoing sustainment to open-source GEE on GitHub through at least 2022.

“A few specific examples would be multi-threading the fusion process to support massive terrain and imagery updates, enhanced 3D mesh management, and inclusion of ground-based GEOINT content like Street View,” Estridge said. 

Open source means more customizability for users with niche wants and needs. No two proprietary Google Earth globes look the same, and teams will have more command over the unique data they store, visualize, and analyze within the program.

“It’s very positive,” Russell said. “[Open source is] an opportunity for NGA to partner with Thermopylae to tie the proprietary and non-proprietary pieces together, and it allows us to sustain Google Earth for our user community for a longer period of time.” 

The decision to make GEE code open source only improves the program’s accessibility and potential use cases, and will bolster the software’s longevity. Code sharing is a growing trend in the IC, and Google has provided government, military, and industry unlimited access and control of one of the most useful enterprise GEOINT tools on the market. 

The post The Genesis of Google Earth appeared first on Trajectory Magazine.

Google Street View Upgrades Fleet

Fri, 08 Sep 2017: Google updates its Street View car cameras with an eye for AI

For the first time in eight years, Google’s Street View cars are hitting the road equipped with new hardware. The updated car-top rigs feature seven 20-megapixel cameras (down from 15), two HD cameras, and two LiDAR sensors—a setup that will result in clearer images, better color rendering, more vertical building imagery, and fewer stitching errors.

More importantly, the camera upgrade will supply swaths of high-quality data for Google to feed to its image recognition algorithms. This will provide Maps and Street View with more—and more accurate—mapping information, including place names, street signs, and even hours of operation or accepted payment methods for businesses advertising such information in their windows.

Wired reports this information will help Google’s mapping services answer difficult contextual questions posed by its users, such as, “What Thai place is currently open and delivers to my address?” or “What’s the name of that pink store next to the church on the corner?”

According to TechCrunch, this type of accurate, up-to-date business information will translate to better results for businesses that choose to advertise via Google’s platforms.

The new Street View equipment was rolled out last month in Santa Cruz, Calif., and will play a large role in the continuous mapping of developing areas such as India and Nigeria, where imaging updates are required to keep pace with infrastructural growth.

Subterranean Street Maps

Fri, 11 Aug 2017: Mapping New York City's underground infrastructure

Beneath the cities of the developed world lie hidden labyrinths of piping, wires, and tunnels that carry water, gas, telecommunications, and sewage to and from residing populations.

Private actors in New York City (with support from the Mayor’s office) are developing a comprehensive floor plan of the city’s subterranean labyrinth, leading the way in a new facet of GEOINT: underground infrastructure mapping.

Bloomberg News recounts the historical narrative of this practice, beginning with an electric power substation located in a severe flood zone on the banks of Manhattan’s East River. When Hurricane Sandy blew through New York in 2012, the overflowing river submerged the substation and destroyed its transformers. A three-day blackout followed. Base maps depicting the city’s water and sewer lines underneath topographical and built features existed at the time—but a comprehensive underground road map did not.

Access to centralized geospatial data outlining a city’s delicate layers of underground infrastructure would be instrumental in responding to disasters like the 2012 floods, and the availability of such data during city planning could help mitigate emergencies. Bloomberg reports that mistakes made during underground repairs or construction (like rupturing a gas line) cost New York City more than $300 million each year. With more precise maps, many of these mistakes could be avoided.

Now, the trend is catching on as more municipalities note the value of underground infrastructure mapping. In response to a fatal gas line accident, Flanders, Belgium, created a decentralized utilities map available upon request to contractors planning to dig underground. Chicago launched a similar project with local tech initiative City Digital. London and Singapore are working on pilot programs as well.

Photo Credit: Reconstruct and UI Labs
