LiDAR – Trajectory Magazine
Trajectory Magazine is the official publication of the United States Geospatial Intelligence Foundation (USGIF) – the nonprofit, educational organization supporting the geospatial intelligence tradecraft.

The Tipping Point for 3D
Fri, 12 Jan 2018 14:59:36 +0000
The ability to fully harness 3D data is rooted in acquisition and scalability

The post The Tipping Point for 3D appeared first on Trajectory Magazine.

The application of location intelligence and the incorporation of 2D maps and positioning have become ubiquitous since the advent of smartphones. Now, we are entering a new era in which we can harness the power of 3D data to improve consumer experiences, as well as applications for enterprise, public safety, homeland security, and urban planning. 3D will play a more significant role in these experiences as we overcome the technical barriers that have made it previously difficult and cost-prohibitive to acquire, visualize, simulate, and apply to real-world applications.

Outdoor Data: Our World Is 3D

In a geo-enabled community in which we strive for more precise answers to complex spatial intelligence questions, traditional 2D correlation is a limiting factor. When you think about 3D data and maps, modeling buildings in an urban environment seems obvious. However, 3D is incredibly important when trying to understand the exact height of sea level or the uniformity of roads and runways. For example, one can imagine the vast differences in 2D versus 3D data and its application during the 2017 hurricane season. By including the Z-dimension in analysis, we can achieve true, precise geospatial context for all datasets and enable the next generation of data analytics and applications.

Access to 3D data for most geospatial analysts has been limited. Legacy 3D data from Light Detection and Ranging (LiDAR) and synthetic aperture radar sensors has traditionally required specialized exploitation software, and point-by-point stereo-extraction techniques for generating 3D data are time-consuming, often making legacy 3D data cost-prohibitive. Both approaches cost hundreds to thousands of dollars per square kilometer and involve weeks of production time. Fortunately, new solutions provide a scalable and affordable 3D environment that can be accessed online as a web service or offline for disconnected users. Users can stream, visualize, and exploit 3D information from any desktop and many mobile devices. Models of Earth’s terrain—digital elevation models (DEMs)—are increasingly used to improve the accuracy of satellite imagery. Although viewed on a 2D monitor, DEMs deliver a true 3D likeness of bare-earth terrain, objects such as buildings and trees, contours, and floor models, and unlimited contextual information can be applied to each measurement. This provides a true 3D capability, replacing current “2.5D” applications that aim to create 3D models out of 2D information, at a cost closer to $10 to $20 per square kilometer and only hours of production time.
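As a simple illustration of why the Z-dimension matters analytically, the difference between a surface model and a bare-earth terrain model yields object heights directly. The grids below are hypothetical, not derived from any product mentioned above:

```python
import numpy as np

# Hypothetical 3x3 elevation grids (meters): a digital surface model (DSM)
# captures the tops of buildings and trees, while a digital terrain model
# (DTM) captures the bare earth beneath them.
dsm = np.array([[102.0, 105.5, 101.0],
                [103.0, 118.0, 101.5],
                [102.5, 117.5, 100.0]])
dtm = np.array([[101.5, 101.0, 100.5],
                [101.0, 100.5, 100.0],
                [100.5, 100.0,  99.5]])

# The normalized DSM gives height above ground -- e.g., building heights.
ndsm = dsm - dtm
print(ndsm.max())  # tallest object in the tile: 17.5 m
```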

Indoor Data: Scalable 3D Acquisition

As 3D becomes more common in outdoor applications, its use for indoor location is being explored. Unsurprisingly, similar challenges need to be overcome. Until now, real-time indoor location intelligence has been difficult to achieve. This is largely due to the absence of, or difficulty in obtaining, real-time maps and positioning data to form the foundation for the insights businesses can derive about their spaces.

Image courtesy of InnerSpace.

To create a 3D model of a room, businesses stitch together blueprints, photos, laser scans, measurements, and hand-drawn diagrams. Once the maps and models are in hand, operators must create location and positioning infrastructures that accurately track people and things within the space. Operators need to position these sensors—typically beacons—precisely according to the building map, but the beacons have no knowledge of the map and cannot self-regulate to reflect any changes to the environment. If the beacon is moved, its accuracy is degraded and its correlation to the map breaks down. Overall, this process is lengthy, cost-prohibitive, and fraught with error.

Using current methodology, professional services teams work with a wide variety of tools—LiDAR trolleys, beacons, 3D cameras, and existing architectural drawings—to compose an accurate representation of a space. Additionally, the resulting system becomes difficult to maintain. Physical spaces are dynamic, and changes quickly render maps obsolete. Changes to technology used to create the models or track assets require ongoing management, and the process rapidly becomes overly complex. This complexity is stalling innovation across myriad industries, including training and simulation, public safety and security, and many consumer applications. While organizations and consumers can easily find data for outside locations using GPS, no equivalent data source exists for indoor location data.

Emerging location intelligence platforms leverage interconnected sensors that take advantage of the decreasing cost of off-the-shelf LiDAR components and the ubiquity of smartphones, wearables, and wireless infrastructure (WiFi, Bluetooth, ultra-wideband) to track people and assets. LiDAR remains an ideal technology for these sensors because its high fidelity is maintained regardless of lighting conditions and, unlike video, maintains citizen privacy. The result is a platform that is autonomous and scalable, and operates in real time to deliver 3D models and 2D maps while incorporating location and positioning within a single solution. These sensors are small and can be deployed on top of a building’s existing infrastructure—not dissimilar to a WiFi router.

The advantage of this approach is that the platform is able to capture the data needed to create 3D models of the indoor space while also understanding the sensors’ own positions relative to each other. Each sensor captures point clouds of the same space from different perspectives to create a composite point cloud that is automatically converted into a single structured model. This solves the two critical roadblocks that industry has faced when trying to acquire indoor data: Maps no longer need to be created/updated by professional services teams, and the maps and positioning data are always integrated and updated in real time.
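As a rough illustration of that merge step (an assumption about the general approach, not InnerSpace's actual pipeline), each sensor's point cloud can be mapped into a shared building frame using the sensor's known pose, then concatenated into one composite cloud:

```python
import numpy as np

def to_common_frame(points, rotation, translation):
    """Map an (N, 3) point cloud from a sensor's local frame into the
    shared building frame using the sensor's known pose."""
    return points @ rotation.T + translation

# Two hypothetical ceiling-mounted sensors observing the same room.
cloud_a = np.array([[1.0, 0.0, -2.5], [2.0, 1.0, -2.5]])
cloud_b = np.array([[0.5, 0.0, -2.5]])

# Sensor A defines the building frame; sensor B sits 4 m along x and is
# rotated 180 degrees about the vertical axis, so its points must be
# transformed before the clouds overlap correctly.
rot_b = np.array([[-1.0,  0.0, 0.0],
                  [ 0.0, -1.0, 0.0],
                  [ 0.0,  0.0, 1.0]])
composite = np.vstack([
    cloud_a,
    to_common_frame(cloud_b, rot_b, np.array([4.0, 0.0, 0.0])),
])
print(composite.shape)  # (3, 3)
```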

The sensors track electromagnetic signals from people and assets within the space. This approach respects citizen privacy by capturing a unique identifier rather than personal information. Algorithms eliminate redundant data (such as signals from computers or WiFi routers) when identifying humans within a space and model the traffic patterns and behaviors over time. The data includes the person’s or asset’s longitude and latitude coordinates, along with altitude—which, as more people live and work in high-rise buildings, is becoming increasingly necessary for emerging enhanced 911 requirements in the United States. The scalability and real-time nature of a platform-based approach results in a stream of data that can be used to drive a variety of applications, including wayfinding, evacuation planning, training and simulation scenarios, airport security, and more.
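One common way to capture "a unique identifier rather than personal information" (a generic sketch; the article does not specify the vendor's method) is to keep only a salted one-way hash of each device's radio identifier:

```python
import hashlib

SALT = b"rotate-this-salt-daily"  # rotating the salt limits long-term tracking

def anonymize(mac_address: str) -> str:
    """One-way hash of a device identifier: stable enough to count devices
    and model traffic patterns, but not reversible to the raw MAC address."""
    digest = hashlib.sha256(SALT + mac_address.encode("ascii"))
    return digest.hexdigest()[:16]

token = anonymize("aa:bb:cc:dd:ee:01")
print(len(token))  # 16
```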

Integrating Indoor and Outdoor 3D

Accurate correlation in four dimensions will drive the framework for future information transfer and corroboration. Fixed objects at a point in time must be properly located for all of their properties to be associated. This work is more challenging than it might appear. Many objects look very similar, and various sensors have differing levels of resolution and accuracy—bleed-over of attribution and misattribution of properties is possible. The better the 3D base layer, or foundation, the more likely all scene elements will be properly defined. Once objects move within the scene, the correlation of observables, initial position, and the changes to it often allow inference of intent or purpose.

Connecting data from outside to inside to deliver a seamless experience has yet to be solved, although there is progress. By capturing indoor 3D quickly and in real time, the opportunity to integrate it with outdoor 3D models is now possible. We expect the integration of 3D and its related positioning data will soon be ubiquitous regardless of where a person is located. In areas where data providers can work together, the same approach used by Google to track traffic could allow for the establishment of routes from outdoor to indoor and vice versa to evolve rapidly. Companies creating the 3D data are defining the standards, and, as more data becomes available, accessing information can be as easy as “get.location” for software developers creating outdoor navigation apps. A centralized database with established formats, standards, and access protocols is recommended to ensure that analysts and developers work with the same datasets and that decisions and insights are derived from the same foundation, no matter where stakeholders are located.
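To make the "get.location" idea concrete, here is a minimal, hypothetical sketch (none of these field names come from an actual standard) of the kind of unified fix such an API might return, with the Z-dimension and an indoor floor number carried alongside latitude and longitude:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LocationFix:
    """Hypothetical unified fix a 'get.location'-style API might return,
    valid indoors and out: altitude and floor make it 3D-aware."""
    latitude: float
    longitude: float
    altitude_m: float
    indoors: bool
    floor: Optional[int] = None  # only meaningful when indoors

fix = LocationFix(38.8895, -77.0353, 52.0, indoors=True, floor=3)
print(fix.indoors, fix.floor)  # True 3
```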

3D Accessibility for Success

As it becomes easier to quickly and cost-effectively create and integrate indoor and outdoor 3D data, managing how that information is stored and accessed will be the next opportunity for the geospatial community. In order for 3D to be truly valuable, it must be easily—if not immediately—accessible for today’s devices. Ensuring 3D can be captured in real time will drive the need to deliver it quickly and across a wider variety of applications. A smart compression and standardization strategy is critical to the portability of the information. As the use of 3D by consumers increases, there will be more natural demand for ready access from user devices, which will help streamline and optimize applications (as it has for 2D mapping over the last decade).

Applying 3D to the real world, in real time, provides:

  • Improved situational awareness to users from their own devices.
  • Seamless wayfinding from outdoors to indoors.
  • Exceptionally detailed and portable data for military/emergency planners and operators.
  • Readily available data and web access for first responders and non-governmental organizations.
  • Global GPS-denied navigation capability for mission critical systems (e.g., commercial flight avionics).
  • A globally accurate positioning grid immediately available for analysis.

Moving Forward

3D is ready to play a bigger role in how we experience the world. The manipulation of location in 3D should be as natural as controlling a video game console. As long as the GEOINT Community keeps in mind what has been learned from both 2D mobile mappers and gaming aficionados, the move into the Z-dimension should prove as easy as it is worthwhile. Moving forward, 3D data developers and users have an important role to play—to provide feedback on what “feels” natural, and what doesn’t. After all, that’s what reinserting the third dimension is all about.

Headline image: The Great Pyramid of Giza, from 3D compositing of DigitalGlobe imagery. Courtesy of Vricon.


Weekly GEOINT Community News
Mon, 08 Jan 2018 15:18:08 +0000
Bomb Cyclone Hits East Coast; New Jersey Poised to Ban Drunk Droning; MIT Researchers Improve LiDAR Accuracy; More


Bomb Cyclone Hits East Coast

A severe winter storm known as a “bomb cyclone” swept through the Eastern United States last week, disrupting travel and commerce for days with white-out conditions and freezing temperatures. In support of public safety, Esri published an interactive weather map displaying rain, snow, ice, and mixed precipitation throughout the country in real time. The map employed public social media posts to visualize the storm’s effects on local communities. NOAA published detailed thermal imagery of the cyclone taken by its NOAA-20 and GOES-East satellites. Data sets include visible and infrared, colorized infrared, water vapor, and composite true color images. NASA tracked the storm as well, creating a 24-hour infrared loop of the cyclone’s evolution as well as temperature and moisture visualizations.

New Jersey Poised to Ban Drunk Droning

Lawmakers in New Jersey are expected to pass a bill banning the operation of drones while drunk or under the influence of drugs. Offenders could face up to a $1,000 fine or a maximum of six months in prison. Reuters reports New Jersey is the first of 38 states considering such restrictions on the operation of unmanned aircraft, though nine states (including Texas, Nevada, and Oregon) already prohibit flying drones over prison facilities.

MIT Researchers Improve LiDAR Accuracy

An MIT research team aims to solve the pervasive “curse of light speed” that hinders LiDAR sensors from clearly seeing far-away objects. Because of the speed at which light travels (a foot in just one nanosecond), it can be difficult to determine precisely how long light takes to bounce back to a LiDAR sensor. The MIT team’s solution is not a more powerful optic sensor, but a light filtration system that uses fiber optics to measure the light beam’s exact path before it reaches the detector. This method can be applied to self-driving technology, allowing cars to see into the distance during foggy conditions.
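Back-of-the-envelope, the ranging math the article alludes to is simple time-of-flight geometry (a generic illustration, not MIT's filtration method): distance is half the round-trip time multiplied by the speed of light, which is why a single nanosecond of timing error shifts a measurement by roughly 15 centimeters.

```python
# Time-of-flight ranging: a LiDAR return's distance is half the round-trip
# travel time multiplied by the speed of light.
C = 299_792_458.0  # speed of light in a vacuum, m/s

def range_from_tof(round_trip_seconds: float) -> float:
    return C * round_trip_seconds / 2.0

print(round(range_from_tof(400e-9), 2))  # pulse back in 400 ns -> 59.96 m
print(round(range_from_tof(1e-9), 3))    # 1 ns of timing error -> 0.15 m
```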

Peer Intel

Maxar Technologies appointed Mike Greenley group president of MDA, a Maxar Technologies company specializing in robotics, surveillance, and satellite systems. Greenley had served as sector president of Canadian imaging systems company L-3 Wescam since 2016. Effective Jan. 15, he’ll lead all MDA lines of business and personnel.

Altamira Technologies Corporation promoted Jonathan Moneymaker to president. Moneymaker joined Altamira in 2014 as executive vice president and CSO, and will now oversee strategy, sales, customer deliveries, and growth for the company.

Photo Credit: NOAA


Weekly GEOINT Community News
Mon, 25 Sep 2017 16:53:03 +0000
Community Continues to Support Natural Disaster Response Efforts; Discount for Live Online Augmented Reality Course; ManTech to Acquire InfoZen; LizardTech Awarded U.S. Patent for LiDAR Point Cloud Compression


Community Continues to Support Natural Disaster Response Efforts

The GEOINT Community continues to support response efforts following the series of natural disasters that have occurred in the past weeks. Esri released an interactive impact summary map of the 7.1 magnitude earthquake that struck Central Mexico last week. The analysis, built with the “Enrich Layer” tool in ArcGIS Online, also presents a ShakeMap of the earthquake.

DigitalGlobe released high-resolution satellite imagery showing the impact of the earthquake in Mexico City. Images include collapsed buildings near Parque Espana and Rancho del Arco and rescue teams at the Enrique Rebsamen School. DigitalGlobe separately released pre- and post-earthquake imagery of affected areas through its Open Data Program.

On Friday, DigitalGlobe also released high-resolution images showing the damage Hurricane Maria caused on the island of Dominica. The company is working to capture and make available imagery of Puerto Rico once cloud cover lifts over the region. DigitalGlobe’s Tomnod crowdsourcing platform has also kicked off a campaign in support of Hurricane Maria response.

Live Online Training Course: AR and the Future of Work

Tech authors and futurists Robert Scoble and Shel Israel will produce a live online training course called “AR and the Future of Work” based on their recent best-selling book The Fourth Transformation. The course will be held Monday, Oct. 2, and is intended for executives seeking to understand how and why AR will transform their businesses. The standard rate for the class is $247, with a discounted rate of $147 for those who register before Sept. 29. Register here and enter the code GSIF1 for a $20 discount.

ManTech to Acquire InfoZen

ManTech International will acquire IT company InfoZen in a $180 million deal expected to close next month. InfoZen’s portfolio will bolster ManTech’s already strong footprint in the homeland security and aerospace marketplaces as well as grow ManTech’s cloud migration and agile development capabilities.

LizardTech Awarded U.S. Patent for LiDAR Point Cloud Compression

Software solutions provider LizardTech was awarded a U.S. patent for the lossless compression of LiDAR point clouds. LizardTech’s flagship software, GeoExpress, uses wavelet transformation algorithms to compress LiDAR data to MrSID and LAZ formats with no content loss. These algorithms can be licensed via the LizardTech SDK for integration with third-party geospatial solutions.

SpaceX Files to Trademark “Starlink” Satellite Network

SpaceX filed to trademark the name “Starlink” for a satellite network that will provide low-cost broadband internet access worldwide. The trademark filing also included provisions for remote sensing and aerial photography services. Elon Musk first announced the satellite project in 2015, and GeekWire reports prototype satellites could be launched as early as this year.

Peer Intel

UK Chief of Defence Staff Air Chief Marshal Sir Stuart Peach has been appointed the new NATO Chairman of the Military Committee, the Alliance’s senior military office position. Peach led a long and successful military career after joining the Royal Air Force in 1977, commanding the UK’s intervention in Libya and becoming first Commander of UK Joint Forces in 2011. He will succeed Czech General Petr Pavel as Chairman in June 2018.

Photo Credit: DigitalGlobe


Continental Mapping: Small Business, Big Capabilities
Wed, 16 Aug 2017 20:12:38 +0000
Q&A with Andy Dougherty, chief executive officer


Q: How would you describe Continental Mapping’s role in the geospatial intelligence community?

We’ve been working with the National Geospatial-Intelligence Agency (NGA) since it was still known as the National Imagery and Mapping Agency. We’ve been an engineering, survey, and mapping firm for 20 years and have grown up in the GEOINT world. We started out bringing survey and planimetric mapping to the industry and then got into content management and feature extraction and attribution.

We have a significant workload in our defense intelligence sector, and we supplement that with work for the U.S. Air Force and Army as well as significant contract work with the U.S. Army Corps of Engineers for survey, mapping, and LiDAR. We aren’t so much in the sensor game except for our mobile LiDAR scanning and mapping services.

Q: What are your core offerings?

Feature extraction, content management, mapping, survey, GIS, and photogrammetry are the heart of what we do. We have a pretty sizeable group called GeoFoundry that is dedicated to creating tools to improve our mapping skills. In the last five years, we have developed more than 50 tools that are used by our quality control and quality assurance people as well as our planimetrics and photogrammetry programs. GeoFoundry has its own website where we sell those tools. We used to keep them internal, but in an increasingly open-source world, there’s no reason not to share these tools.

One area we’ve seen real promise in is information brokerage. Our tools automate the process of looking for errors—they capture statistically relevant mistakes by going after things you normally cannot automate. These capabilities minimize the amount of human labor necessary to implement a quality control effort. In the world of information brokerage, we see a role for those tools to rapidly map an area or quickly decide whether your data set is relevant. We create a score index for the data sets we review—what we refer to as “Data Fitness.”

Q: How has Continental Mapping grown since its founding in 1999?

We started as a small firm with two photogrammetrists who wanted to branch out on their own from a larger company. Since then, Continental has developed a robust defense and intelligence portfolio. In the last year, we expanded into transportation and infrastructure to answer the need for autonomous vehicle mapping and asset management for departments of transportation at the federal and state levels.

In photogrammetry and the GEOINT world, the autonomous vehicle has the potential to launch a whole sector unto itself for mapping infrastructure so these vehicles can operate safely on our roads. There is a ton of effort in that—it’s probably our next largest growth area.

Q: What differentiates Continental Mapping from similar organizations?

Large business primes want small business contractors who are forthright and honest in their dealings and who deliver quality products on time. We’ve built and maintained a reputation for delivering quality work the first time, on schedule, and for a good price. That reputation and our “above and beyond” GEOINT products have kept us in the marketplace where, if you can’t deliver, you’ll quickly be removed from the contract or you won’t see return work. We’ve kept large businesses happy with our performance while still growing and picking up side contracts where we can.

Q: What emerging GEOINT trends is your company currently responding to?

We’ve been monitoring the continuing use of new sensors and GPU processing, as well as taking advantage of some artificial intelligence that’s been coming out of Silicon Valley. Sensors have matured dramatically in the last 10 years to where they have multiple uses for gathering desired information. We’ve continued to update our software processing capability to keep it fast, keep it clean, deliver quality products, and deliver to the customer the sensors and data sets they really need.

Featured image: Continental Mapping’s dense, high-accuracy LiDAR point cloud mapping for infrastructure supports asset inventory and engineering design. Photo credit: Continental Mapping



Weekly GEOINT Community News
Mon, 13 Mar 2017 08:56:00 +0000
ORNL and Esri Sign Enterprise Agreement; CACI Wins Task Order to Support DoD IT Services; Dewberry to Collect LiDAR Data for USGS in Texas


ORNL and Esri Sign Enterprise Agreement

Esri will sign an enterprise agreement with the Department of Energy’s Oak Ridge National Laboratory (ORNL) to give ORNL access to the full set of spatial analysis tools in the ArcGIS platform. Under the agreement, ORNL can add licenses as needed to use data and imagery in support of the many federal research and development projects it undertakes—from public utility management to disaster response. 

CACI Wins Task Order to Support DoD IT Services

CACI International was awarded a $190 million task order to support the Department of Defense Joint Service Provider Information Technology Service Delivery Support Requirement program. Under the three-and-a-half-year task order, CACI will provide service management, including a range of consolidated IT for mission-essential functions, telecommunications, and enterprise print management for approximately 22,000 users.

Dewberry to Collect LiDAR Data for USGS in Texas

Dewberry has been tasked with facilitating, acquiring, and processing LiDAR data across Texas’ Red River and Neches Basin areas under the U.S. Geological Survey’s (USGS) Geospatial Products and Services Contract. The data will support USGS’s 3D Elevation Program and the Federal Emergency Management Agency’s (FEMA) Risk Mapping, Assessment, and Planning program. According to the press release, East Central Texas is known to be an at-risk area for severe flooding and this project will help FEMA provide residents with updated reference data.

Peer Intel

Jeff Bohling recently joined Vencore as senior vice president and general manager of the company’s Civilian and Defense Group. Bohling is responsible for the business operations and growth of the group, and expansion of Vencore’s civilian and defense programs and portfolio.

General Dynamics recently announced new appointments effective April 1. M. Amy Gilliland will become the deputy for operations for General Dynamics Information Technology. Kimberly A. Kuryea will become the company’s senior vice president, human resources and administration. The company’s board of directors elected William A. Moss as a vice president of General Dynamics.

Photo Credit: Dewberry


More Than Meets the Eye
Tue, 01 Nov 2016 22:13:56 +0000
Sophisticated sensors can see things humans can’t. GEOINT’s next challenge: turning spectral science into actionable insight


The human body is a marvelous machine. Its largest organ—the skin—contains approximately 5 million touch receptors capable of telling hot from cold, wet from dry, and hard from soft. It also has a tongue with up to 10,000 taste buds discerning sweet, sour, salty, bitter, and savory; an auditory system with more than 25,000 minuscule hairs translating tiny vibrations into noise, music, and conversation; and eyes, which comprise more than 2 million working parts that together can distinguish approximately 10 million colors. Still, the human body has limitations. For every sight its eyes can see, there are exponentially more that remain indiscernible, invisible, and otherwise imperceptible.

Remote sensing—taking images of Earth from land, sea, air, and space—is one way humans can transcend their five senses to learn more about the world. By augmenting senses with sensors, remote sensing supersedes biology in favor of physics to unlock distinguishing information about people, places, and things. The product is intelligence. The objective, however, is intervention.

For decades, the information returned by remote sensing platforms was restricted to literal images in black and white or color. The invention of synthetic aperture radar (SAR) in 1951, however, commenced a new era in sensor innovation. Along with electro-optical cameras and SAR—which can acquire imagery at night and penetrate clouds and fog—modern remote sensing platforms carry an array of sensors that exploit increasingly diverse phenomenology, capable of seeing and sensing things never before possible.

“Just taking pictures in black and white or panchromatic provides a really limited set of information,” explained Dr. Michael Egan, head of the Spectral Research Pod at the National Geospatial-Intelligence Agency (NGA). “What we really want to be able to do—and what the new and different types of sensors allow us to do—is determine what things are by seeing things in a way the naked eye can’t.”

Sensors’ Ascension

Before phenomenology, there was photography, according to remote sensing expert Daniel Ngoroi, a geospatial team leader at Woolpert. Ngoroi traces modern sensors back to NASA’s 1969 Apollo 9 space mission, during which astronauts took the first multispectral terrain photographs from space. By making the case for multispectral orbital imagery, Apollo 9 influenced the 1972 launch of Landsat 1, the world’s first civil Earth-observation satellite and—thanks to a sensor spanning four spectral bands—its first multispectral imaging satellite.

“[The first Landsat satellites] were designed from a multispectral point of view because people … realized there are vast expanses of the electromagnetic spectrum that we ought to be taking advantage of to see information we can’t see with our own eyes,” Ngoroi said.

The proliferation of diverse remote sensing phenomenology catalyzed by Landsat 1 was further stimulated by the 1992 passage of the Land Remote Sensing Policy Act and the dawn of the Information Age. The former accelerated sensor innovation through commercialization by spawning companies like DigitalGlobe, whose WorldView satellites embody the movement to develop new and more powerful sensors for commercial customers.

“DigitalGlobe began developing … sensors with spectral bands for really unique applications,” said Dr. Kumar Navulur, senior director of global strategy programs at DigitalGlobe, citing the development of DigitalGlobe’s WorldView-1, -2, -3, and -4 satellites. Launched in 2007, 2009, 2014, and planned for 2016, respectively, each was outfitted with progressively more sophisticated sensors for applications in industries such as agriculture, forestry, and mining.

Equally important was the digital revolution, which enabled sensor evolution through advances in data storage, processing, and communication. The revolution brought sensors out of the laboratory and into real life.

“Twenty years ago … these national assets were so important that governments would spend billions of dollars on them. Today, that same kind of power is available in the private sector, to civilian agencies, and even to the world’s poorest countries,” said Dr. Michael Hauck, executive director of the American Society for Photogrammetry & Remote Sensing (ASPRS). “It’s really remarkable.”

Spectral Solutions

History is one requisite subject for grasping diverse remote sensing phenomenology. Science is another.

“Remote sensing at its heart is really applied physics,” said USGIF Vice President of Professional Development Dr. Darryl Murdock. “You have to understand physics to understand remote sensing.”

Most modern sensors are designed to exploit the full range of the electromagnetic spectrum, the basic premise of which is this: Everything in the universe—the sun, the Earth, and even the human body—continuously emits energy in the form of electromagnetic radiation. This energy varies in frequency and wavelength, from radio waves with low frequencies and large wavelengths to gamma rays with high frequencies and small wavelengths. Because all objects emit, reflect, and absorb electromagnetic energy differently, capturing it allows analysts to glean information not revealed in literal images.

“Being able to observe how materials react or behave in different portions of the electromagnetic spectrum allows us to make determinations and inferences about what’s happening on the ground,” explained Dr. Frank Avila, a senior scientist in NGA’s Office of Sciences and Methodologies. “For example, WorldView-3 gives us data across 16 [spectral bands] that we can use to look at the same portion of the ground. Each one gives us a slightly different piece of information, which together may be able to give us a complete picture.”

The spectrum can be sliced into countless “bands,” the majority of which are invisible to the naked eye. Sensors that read approximately 10 or fewer visible and invisible bands are known as multispectral, those that read between 10 and 20 as superspectral, and those that read more than 20 as hyperspectral.
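That rough taxonomy can be captured in a few lines. The thresholds below follow this article's definitions, and the band counts are the ones the article quotes for Landsat 8, WorldView-3, and EO-1:

```python
def spectral_class(num_bands: int) -> str:
    """Classify a sensor by band count: ~10 or fewer bands is multispectral,
    between 10 and 20 superspectral, more than 20 hyperspectral."""
    if num_bands <= 10:
        return "multispectral"
    if num_bands <= 20:
        return "superspectral"
    return "hyperspectral"

for name, bands in [("Landsat 8", 11), ("WorldView-3", 16), ("EO-1", 220)]:
    print(name, spectral_class(bands))
```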

DigitalGlobe’s just-launched WorldView-4 will sense across five spectral bands, while Planet’s Dove satellites sense across four. UrtheCast’s Iris and Theia sensors—mounted on the International Space Station—cover three and four bands, respectively, while its Deimos-1 and Deimos-2 satellites cover three and five bands, respectively. UrtheCast’s future plans include UrtheDaily, a constellation of eight electro-optical satellites that will provide daily coverage across six bands.

Of all the bands multispectral sensors can capture, visible bands are the most common. Perhaps the most useful, however, are invisible bands, such as near-infrared (NIR) bands, according to Navulur, who said agriculture and forestry are two standout applications since vegetation—including trees, plants, and crops—has a particularly strong signature in NIR imagery.

“For example, when we developed our WorldView-2 satellite, one of eight bands that we derived was a band called the ‘red edge’ band, which allows us to identify whether vegetation is healthy or unhealthy,” explained Navulur, noting photosynthesis causes NIR energy to bounce off healthy vegetation but pass through unhealthy vegetation, making it easy to identify plants affected by drought or disease. That kind of information is valuable not only to farmers and forest managers, but also to governments and militaries.
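The red/NIR contrast Navulur describes is commonly captured by the Normalized Difference Vegetation Index (NDVI); a minimal sketch, with invented reflectance values for illustration:

```python
def ndvi(nir: float, red: float) -> float:
    """NDVI = (NIR - red) / (NIR + red); healthy vegetation trends toward +1."""
    return (nir - red) / (nir + red)

# Hypothetical reflectances: a healthy canopy bounces NIR back strongly,
# while stressed vegetation lets more NIR pass through.
healthy = ndvi(nir=0.50, red=0.08)   # ≈ 0.72
stressed = ndvi(nir=0.25, red=0.15)  # = 0.25
print(healthy > stressed)  # True
```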

“We’re using [multispectral data] to address food and water security issues by doing agricultural assessments and trend analysis to determine whether there’s agricultural expansion at a country level or agricultural failure that could be an indicator for potential civil unrest down the road,” Avila said.

Along with forests and farmland, the reflection of NIR energy—or lack thereof—makes it easy to distinguish manmade structures, bare earth, water, and shadows, all of which can help analysts answer questions about land composition and usage. Or in the case of Vricon, build digital terrain models for applications such as hydrology, geology, defense, construction, and disaster management.

“In order to generate bare-earth terrain models as accurately and precisely as we can, we use the multispectral bands out of imagery from DigitalGlobe’s satellites … to automatically classify and identify vegetation and buildings so we can remove them from the scene,” said Vricon Vice President Isaac Zaworski.

While multispectral sensors are appropriate for general inquiries, superspectral and hyperspectral sensors—like those on Landsat 8, WorldView-3, and NASA’s EO-1, whose sensors detect 11, 16, and 220 bands, respectively—are best for detailed inquiries.

“With superspectral and hyperspectral bands you get much finer information,” remarked Navulur, who said the most valuable bands on superspectral and hyperspectral sensors are those that measure shortwave infrared (SWIR) light, which sits above NIR light on the electromagnetic spectrum. “With shortwave infrared you can move from general—‘Is there agriculture?’—to be specific: ‘What type of agriculture is it? Is it coffee? Is it corn? Is it soybeans?’”

SWIR bands can also distinguish among types of trees, minerals, and building materials. They can penetrate smoke, smog, fog, and dust, as can another type of band common to superspectral and hyperspectral sensors: thermal infrared, which detects electromagnetic energy from heat instead of light. Both SWIR and thermal infrared sensors can be leveraged by firefighters to find hotspots during wildfires, and thermal infrared can be used by war-fighters to track the enemy.

“With thermal infrared you can tell whether a truck or tank engine is on, whether a building is occupied, or whether an aircraft on a runway has just landed,” said Robert Zitz, vice president and strategic account executive at Leidos. It also can be used to detect heat signatures for missile defense. For example, Leidos’ Commercially Hosted Infrared Payload sensor collected more than 300 terabytes of data on more than 200 thermal events during an Air Force-sponsored mission that concluded in December 2013.

Up and down the spectrum, the possibilities are at once overwhelming and exciting.

“Eventually, we’ll get to practical-use ultraspectral sensors … with millions of discrete bands,” continued Zitz, who said ultraspectral sensors will be able to distinguish seemingly identical objects manufactured at the same time by identifying their one-of-a-kind spectral fingerprint. “It is being proven in the labs right now.”

Let There Be LiDAR

Conversations about remote sensing phenomenology may begin with space, but that’s not where they end. Case in point: light detection and ranging, or LiDAR, whose chief advantages over spectral sensors are the ability to map 3D elevation and to penetrate tree cover.

Unlike passive sensors that measure electromagnetic energy emitted or reflected by external objects, LiDAR is an active sensor that emits and measures its own energy from an internal source: a laser—typically in the NIR band. Because of the power required to operate them, LiDAR sensors must be flown from aerial rather than space-based platforms. The sensors send laser pulses to the ground, where they bounce off buildings, vehicles, rocks, and earth before returning.

“What’s measured is the time it takes for the pulse to travel from the sensor to the object you were shooting, and then bounce back to the sensor,” explained Ngoroi, who said the resulting measurement is used in 3D terrain mapping to calculate elevation. “That time is what gives you elevation.”
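The round-trip timing Ngoroi describes reduces to range = speed of light × travel time ÷ 2, with elevation then following from the sensor's own altitude; a minimal sketch:

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_pulse(round_trip_s: float) -> float:
    """One-way distance derived from a LiDAR pulse's round-trip travel time."""
    return C * round_trip_s / 2.0

# A pulse returning after ~13.3 microseconds traveled to a target ~2 km away.
print(round(range_from_pulse(13.3e-6)))  # ~1994 m
```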

Each LiDAR pulse is recorded as a three-dimensional point on a map; collectively, millions of points in the same vicinity constitute a 3D point cloud that can be interpreted as an object.

According to Ngoroi, elevation data can be used for applications such as flood modeling and emergency response. For example, the State of Indiana commissioned Woolpert to conduct a statewide LiDAR survey of its buildings to improve its E911 system. Knowing a building’s elevation, the state theorizes, will help emergency responders save lives.

“If someone’s calling for help from a cellphone, you can’t tell if they’re on the ground floor of a building or the 12th floor,” Ngoroi said. “If you use LiDAR to map buildings and provide that data to emergency responders, they’ll know which fire truck with which kind of ladder to bring based on the height of the buildings in that area.”

The same information could help law enforcement and the military determine line of sight when planning operations, architects orient buildings for maximum solar exposure, and humanitarians target resources after a disaster.

“After the Haiti earthquake [in 2010] there was extensive LiDAR coverage to map in three dimensions the destruction and the growth of camps to help with disaster relief,” Egan said. “Using 3D data [from LiDAR] for disaster response is going to be a growth area for continued development by many, including NGA.”

By measuring the strength of laser pulses when they return to the sensor, LiDAR systems can also assist in material classification, as different materials—grass, for instance, versus asphalt—reflect infrared light with varying intensity.

However, if you ask Dr. David Maune, associate vice president at Dewberry, LiDAR’s most important attribute is its ability to penetrate tree cover, which makes it possible to detect and map terrain that would otherwise be concealed. This capability can help seismologists discover tree-covered fault lines, surveyors classify obscured terrain, and intelligence analysts detect hidden buildings, roads, or weapons.

LiDAR can penetrate trees because every LiDAR pulse is a beam of light with a given diameter; as that beam travels through a forest, it sends multiple “returns” back to the sensor as it encounters obstructions. “While part of the light beam hits a leaf on the top of the tree, the rest of it continues on,” Maune said. “It may hit other leaves and branches on the way down, but if there’s an opening its last return will be the ground.”
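One common use of these multiple returns is ground filtering: treating each pulse's last return as a bare-earth candidate. A toy sketch over invented points:

```python
# Each tuple: (pulse_id, return_number, total_returns, elevation_m) — toy data.
returns = [
    (1, 1, 3, 24.0),  # canopy top
    (1, 2, 3, 12.5),  # branch
    (1, 3, 3, 2.1),   # ground
    (2, 1, 1, 2.3),   # open ground, single return
]

# Last returns (return_number == total_returns) are the bare-earth candidates.
ground_candidates = [r for r in returns if r[1] == r[2]]
print([r[3] for r in ground_candidates])  # [2.1, 2.3]
```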

Although “single-pulse” or “linear-mode” LiDAR is the default, there are many specialized varieties of LiDAR optimized for different missions, including bathymetric LiDAR, which uses a water-piercing laser to measure water depth, and Raman LiDAR, which uses ground-based lasers to measure atmospheric water vapor. One of the most buzzed-about LiDAR varieties, however, is Geiger-mode LiDAR. Instead of measuring returning laser beams as a whole, it detects the individual photons that constitute those beams. This approach produces more data points per square meter, consumes less power, and requires lower-intensity returns, allowing sensors to cover more ground, at faster speeds, from higher altitudes.

“The Harris Geiger-mode LiDAR system was designed for wide-area mapping,” said Stuart Blundell, director of strategy and business development at Harris Geospatial Solutions. “Whereas a linear-mode system flies at a lower altitude—typically 2,000 feet in a single-engine airplane traveling around 90 miles per hour—we fly on a jet at 30,000 feet traveling at three times the speed of linear-mode systems. As a result, we’re flying up to 850 square miles per hour, compared to 50 square miles per hour with a linear-mode sensor.”

While a linear sensor collects just two points per square meter, Geiger-mode can collect more than 100 points per square meter.
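Blundell's figures can be sanity-checked with the identity area rate = ground speed × swath width; the swath widths below are back-of-envelope inferences from his numbers, not published specifications:

```python
def implied_swath_mi(area_rate_sq_mi_per_hr: float, speed_mph: float) -> float:
    """Swath width implied by an area coverage rate at a given ground speed."""
    return area_rate_sq_mi_per_hr / speed_mph

linear = implied_swath_mi(50, 90)    # ≈ 0.56 mi at 90 mph
geiger = implied_swath_mi(850, 270)  # ≈ 3.15 mi at 3 × 90 mph
print(f"{linear:.2f} mi vs {geiger:.2f} mi")
```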

Eventually, LiDAR sensors will behave like point-and-shoot cameras, according to Hauck, who sees technologies such as Geiger-mode LiDAR, flash LiDAR, multi-band LiDAR, and photon-counting LiDAR as the future.

“Most LiDAR units don’t take a complete image at one time the way a camera does—yet,” he said. “Soon, they’ll generate lots and lots of photons of different wavelengths (i.e., colors) all at once, and measure lots and lots of returns all at once … When that happens, we’ll be able to get the shape of things and the material properties of things all in one shot, which will be very, very powerful.”

Making Sense of Sensors

In a world growing ever more crowded with diverse remote sensing phenomenology, there’s an elephant in the room: Without the ability to leverage the data they collect, sensors are senseless.

“We’ve spent literally billions of dollars building sensors, but investment in downstream processing and analysis of data has not kept pace,” Murdock said. “If you simply build sensors, and assume someone else will figure out how to use data from them, that’s a broken paradigm.”

Turning spectral capabilities into strategic insights requires the GEOINT Community to solve several critical challenges, the first of which is data processing and exploitation.

“Even if we pressed pause for a while on sensor development, there is still a ton of work to be done on advancing exploitation,” said Michael Nelson, director of intelligence and defense solutions at Riverside Research.

Added Phil Downen, vice president of government programs at UrtheCast, “It’s widely recognized that the deluge of data from sensors is increasing exponentially … The tradecraft bottleneck, however, is no longer computing resources, storage resources, or downlink and backhaul. The real challenge now is the geo-analytics—the algorithms, equations, and heuristics that are brought to bear on an ever-increasing diversity of raw data to extract information from it.”

To that end, UrtheCast and other hosts of remote sensing data—including NGA—are devising in-house processing solutions with algorithms that can automatically extract features from imagery and notify analysts of temporal changes and trends, allowing them to supply customers with insights derived from pixels instead of the pixels themselves.

“We don’t have enough analysts to review all the imagery we’re going to be getting in the very near future, so one of the things we’re looking at now is how we can best use … machine learning and neural networks to make sense of all that data,” Avila said.

Vricon’s “The Globe in 3D” and Harris’ ENVI geospatial analytics software are solutions on the forefront of machine learning. To power its large-scale 3D mapping products, the former is building a fully automated data processing engine capable of continuously ingesting and correlating data from virtually any available sensor. Based on the principles of stereo photogrammetry, Vricon’s engine extracts relevant features from disparate images, then mixes and matches them to create accurate 3D models.

“As a byproduct of the fact that we’re trying to generate the most accurate 3D representation of the static scene in any given location, our algorithms are essentially identifying anything that is changing in that entire scene,” Zaworski said.

ENVI’s image processing software automates feature extraction and change detection in much the same way. Going forward, its goal is to refine its algorithms to perform on a larger scale and at a finer resolution, according to Rebecca Lasica, enterprise sales manager for Harris Geospatial Solutions, which acquired ENVI in 2015. “Instead of analyzing an image, for example, we’ll be analyzing a whole country,” she said. “Likewise, we’ll be able to look not just forensically back in time, but also at trends that help us predict [future change] accurately enough to take action.”

Better algorithms and sophisticated machine learning will go a long way toward helping users tame an overwhelming amount of remote sensing data. The magic bullet that will help them fully exploit sensors’ capabilities, however, is data fusion, or multi-source integration.

“Multi-source integration is a huge area of research and application development because each type of sensor has its own strengths or weaknesses,” Nelson said. “If I have to turn off four of my five senses I am greatly restricted, but if I can use them all I’m fully functional.”

Added Lasica, “Taking different modalities and putting them together can build a picture that’s greater than the sum of its parts. For example, a grower may have some [multispectral] imagery that reveals information about the health of their crops. But they might also be co-collecting LiDAR that gives them information about the height of those crops. Putting those data sets together allows you to cross-reference the health of a plant with the height of a plant, giving a three-dimensional picture about when the harvest might be ready.”
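The fusion Lasica describes is, at its simplest, a join of two per-plot datasets on location; the plot IDs and thresholds here are invented for illustration:

```python
# Hypothetical per-plot measurements keyed by plot ID:
# multispectral imagery yields crop health (NDVI), LiDAR yields crop height.
ndvi_by_plot = {"A1": 0.82, "A2": 0.41, "B1": 0.78}
height_m_by_plot = {"A1": 0.9, "A2": 0.95, "B1": 0.35}

# A plot is flagged harvest-ready only if it is both healthy and tall enough.
ready = [
    plot for plot in ndvi_by_plot
    if ndvi_by_plot[plot] > 0.7 and height_m_by_plot.get(plot, 0.0) > 0.8
]
print(ready)  # ['A1']
```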

It sounds easier than it is.

“Each phenomenology is different…at the data level; combining them in a way that’s meaningful takes time and effort,” continued Nelson, adding that complementary images from disparate platforms and sensors have not only different electromagnetic characteristics, but also different geographic and temporal parameters that make amalgamation difficult. “Even routine things like how to get [complementary] data sets into the same analyst’s bucket at the same time are challenging. You have to have awareness, for example, that there were four sensors that collected on a given target; then you have to get all four data sets together and make sure your analyst is appropriately trained to exploit each of those modalities.”

As sensors get smaller, more powerful, and more energy-efficient—shattering current size, weight, and power constraints—data fusion will be able to take place not only on the back end, à la Vricon and ENVI, but also on the front end. That will make multi-sensor integration easier, according to Blundell.

“The best way to register multi-sensor information is to collect it at the same time in a miniaturized fashion from multi-sensor pods,” he said.

Exactly when and how diverse remote sensing phenomenology will be fused is anyone’s guess. What’s clear, however, is that sensors will continue to mine new frontiers of physics that surpass the limits of human biology.

“This is not going to slow down,” Nelson concluded. “As platforms become easier and cheaper to launch, the prevalence of sensors in the commercial world is only going to accelerate. Commercial remote sensing is a growing global phenomenon.”

Featured image courtesy of Harris

The post More Than Meets the Eye appeared first on Trajectory Magazine.

Friday’s Food for Thought: Mapping Coral Reefs
Fri, 30 Sep 2016 11:05:48 +0000
A team of researchers from Australia, Bermuda, and the United States is using a NASA Gulfstream jet to modernize the way the world studies coral reefs

The post Friday’s Food for Thought: Mapping Coral Reefs appeared first on Trajectory Magazine.

Scientists are employing mapping technology to learn more about coral reefs and visualize the lasting effects caused by climate change.

A team of researchers from Australia, Bermuda, and the United States is using a NASA Gulfstream jet to modernize the way the world studies coral reefs, reports The New York Times. The team equipped the aircraft with a hyperspectral sensor to map the conditions of the Great Barrier Reef from 28,000 feet and produce a real-time picture of how much sand, coral, and algae make up large stretches of the reef. Scientists also want to know how rising sea temperatures, water acidity, pollution, sediment, and overfishing affect reefs over time. According to the article, collecting data from the sky will be more efficient than deploying scuba divers to study a reef that is more than 1,400 miles long and in some places stretches more than 180 miles out to sea. The scientists hope the flights will prove the sensor’s worth so it can later be deployed on a satellite.

LiDAR is also proving useful in helping scientists learn more about reef systems. According to a Forbes article, scientists from Queensland University of Technology used LiDAR data collected by the Australian navy to learn that a system of bioherms—mounds of ancient calcified algae scattered outside the Great Barrier Reef—is much larger than previously believed. Scientists who documented the bioherms in the ’80s using acoustic sound waves thought the bioherm network covered an area of about 800 square miles, but the new map showed a network covering more than 2,300 square miles. The team published their findings in the journal Coral Reefs, positing the bioherm network’s vast size and volume may “rival that of the northern Great Barrier Reef coral reefs.”

Photo Credit: Tchami/CreativeCommons


Friday’s Food for Thought: Discovering the Past
Fri, 03 Jun 2016 15:52:56 +0000
Mapping technology helps dig up history

The post Friday’s Food for Thought: Discovering the Past appeared first on Trajectory Magazine.

Satellite imagery has helped discover ancient Mayan ruins, according to a BBC article. Mayan ruins in San Bartolo, Guatemala, were discovered in 2001, but the dense jungle made exploring the ancient site on foot very difficult. With the help of NASA, many features including a lost Mayan pyramid were identified using satellite imagery and LiDAR. Because Mayan buildings were constructed using limestone, the chemical composition around the ruins was altered over time, making them visible in the imagery. Pinpointing archeological sites with the help of satellite imagery is becoming more common, as demonstrated by space archeologist and 2016 TED Prize winner Dr. Sarah Parcak.

LiDAR is also helping preserve history. Project Map by the University of Colorado Boulder is using the technology to help uncover details about a historic buffalo fur trading outpost built in 1835 at Fort Vasquez in Colorado. University of Colorado Boulder Assistant Professor Gerardo Gutierrez and his students used LiDAR at the location to create a fully interactive architectural rendering of the buildings, terrain, and other features of this historic Colorado monument. According to an article from the University of Colorado’s News Center, the team plans to use the 3D images to create an online database where the public will be able to explore historical sites virtually, as well as provide a permanent record of the site in the event of future deterioration or loss. The team’s next project will employ LiDAR-equipped UAVs to map the Chacoan site of Chimney Rock National Monument in Pagosa Springs, Colo.

Photo Credit: NASA


Driverless, but Not Directionless
Sat, 07 May 2016 01:53:32 +0000
Autonomous vehicles of tomorrow will require sensor synthesis for precision navigation

The post Driverless, but Not Directionless appeared first on Trajectory Magazine.

Market research suggests that by 2035, consumers around the world will purchase 85 million autonomous-capable vehicles a year. But before the driverless revolution can take off, the automotive industry must solve some significant technological challenges—not the least of which is precision navigation.

“A normal GPS receiver is accurate to about 15 feet,” said Tim Harris, co-founder and CEO of Swift Navigation, a San Francisco-based company developing a high-accuracy GPS receiver for use in autonomous vehicles, including driverless cars, unmanned aerial vehicles, and ground robots. “That’s good enough to find the restaurant, but it’s not good enough for your car to know what lane it’s in or for a drone to drop a package on your doorstep instead of in the neighbor’s pool.”

Companies like Swift are pioneering affordable precision receivers that promise to help autonomous vehicles navigate with centimeter-level accuracy either by receiving dual signals—signals of different frequencies or from separate global navigation satellite system (GNSS) constellations—or by calculating positioning using an approach known as Real Time Kinematic (RTK) satellite navigation, which leverages the carrier phase of GNSS radio waves instead of the code they transmit.

While the cost and performance of these receivers is constantly improving, the silver bullet will be an integrated network of diverse sensors, of which GNSS is only one.

“Do you rely on an external positioning system like GPS, or do you treat an autonomous car like a computer with sensors that do the driving like a human does, with no real knowledge of [absolute] location?” asked Brian Markwalter, senior vice president of research and standards at the Consumer Technology Association. “Those are two extremes, and what we’re seeing is an integration of the two—the idea of crowd-sourced integration, where cars will record the ground truth based on their sensors and integrate that with a mapping function in the cloud.”

This sensor network might include cameras, LiDAR, and radar, while the mapping function will likely belong to a solution like HD Live Map, unveiled in January by automotive mapping service HERE, whose cloud-based system supplies autonomous vehicles with a detailed and dynamic representation of the immediate road environment so they can pre-emptively anticipate obstacles in their path. In a virtuous circle, the maps are updated in real time based on road information collected by complementary sensors.

Concluded Markwalter, “Nobody’s counting on a single system for exact positioning. There’s going to be a lot of synthesis taking place within the car to help it understand where it is.”

Return to Feature Story: ‘Recalculating’ GPS


Weekly GEOINT Community News
Mon, 18 Apr 2016 06:14:17 +0000
American Space Renaissance Act Introduced; DigitalGlobe Completes First Phase of PSMA Australia Mapping Initiative; Dewberry Awarded USGS Contract

The post Weekly GEOINT Community News appeared first on Trajectory Magazine.

American Space Renaissance Act Introduced

On April 12, Rep. James Bridenstine introduced a new space policy bill intended to update a wide range of civil, commercial, and national security space issues to keep the United States competitive, reports Space News. Bridenstine formally introduced the bill during his keynote address at the 32nd Space Symposium last week. The bill, named the American Space Renaissance Act, includes separate sections covering military, civil, and commercial policy topics such as changes to responsibilities for space situational awareness. Bridenstine said the bill is intended to start discussions on space policy issues and identify sections that can be added to other bills.

DigitalGlobe Completes First Phase of PSMA Australia Mapping Initiative

DigitalGlobe completed the first phase of a continent-scale mapping initiative that will enable Geoscape, a new information product from PSMA Australia, to support Australia’s digital economy. Under the second phase, DigitalGlobe will leverage a range of geospatial technologies to map the locations and characterize the physical attributes of more than 15 million structures spread across the entire continent of Australia. PSMA Australia provides national geospatial data sets derived from authoritative sources for a range of public and private business uses.

Dewberry Awarded USGS Contract for Geospatial Products & Services

The U.S. Geological Survey (USGS) selected Dewberry for a five-year geospatial products and services contract in support of USGS’s National Geospatial Program, which provides leadership for USGS coordination, production, and service activities. Under the contract, USGS will assign task orders to Dewberry, primarily requiring airborne LiDAR and IfSAR acquisition and processing, high-resolution topographic product generation, orthoimagery acquisition and processing, photogrammetric mapping, and cadastral surveying.

Red Hen Mobile App Used for Bat Research in Uganda

Red Hen Systems’ MediaMapper Mobile App for Android is helping a Colorado State University (CSU) research project dedicated to the study of bats and pathogens in Uganda. The project is funded by the Defense Threat Reduction Agency (DTRA) in conjunction with CSU and the Centers for Disease Control and Prevention (CDC). The team will use the app to locate, geo-tag, and photograph bats. The research findings will be presented to CSU, DTRA, and CDC to provide biosurveillance training to Ugandan partners and support bat biology and ecology research in Uganda.

Oracle Contributes to Computer Science Education

Oracle announced it plans to invest $200 million in direct and in-kind support for computer science education in the U.S. over the next 18 months. The company aims to reach more than 232,000 students across 1,100 U.S. institutions through Oracle Academy, the company’s computer science educational program. Oracle Academy will provide free academic curriculum, professional development for teachers, software, certification resources, and more.

Remote Sensing Market Expected to Grow

The global market for remote sensing products reached $8.4 billion in 2015 and is expected to reach nearly $8.9 billion in 2016 and $13.8 billion by 2021, according to a report by BCC Research. This represents a compound annual growth rate of 9.3 percent over the five-year period from 2016 to 2021.
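The reported growth rate can be sanity-checked from the endpoints with CAGR = (end / start)^(1/years) − 1; rounding in the published dollar figures accounts for the small gap from the reported 9.3 percent:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two endpoint values."""
    return (end / start) ** (1 / years) - 1

# $8.9 billion (2016) to $13.8 billion (2021) over five years.
rate = cagr(8.9, 13.8, 5)
print(f"{rate:.1%}")  # ~9.2%, in line with the reported 9.3 percent
```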

Peer Intel

John D. Johns became vice president of business development and account executive for national agencies at KEYW. Johns has more than 25 years of experience in strategic business development in the international commercial, government, and intelligence markets.

The MITRE Corporation appointed Dr. Jay Schnitzer vice president and chief technology officer. Previously, he was director of biomedical sciences at MITRE, overseeing the organization’s internal health transformation research and development program.

Photo Credit: PSMA Australia 

